Google's robot army learns Spanish

La rebelión de las máquinas

If you want to learn another language, you need to spend time in the country, talk to people, get drunk and attempt to order complex drinks, and eventually read that country's great works of literature – unless you're Google, that is.

In a recent paper, three Googlers outlined a new approach to machine-based translation that uses the Chocolate Factory's weapons of choice: masses and masses of data, and neural networks.

The paper, "Exploiting Similarities among Languages for Machine Translation", shows how Google is able to use a small dictionary of pairs of words in two languages to train a network that can infer missing dictionary entries.

"Our method can translate missing word and phrase entries by learning language structures based on large monolingual data and mapping between languages from small bilingual data," they write. "This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs."

The system works by building vector representations of individual words, projecting a word's vector from the source language's space into the target language's, and then swapping in the target-language word whose vector sits closest to that projection.
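
In outline, that projection step boils down to multiplying a source-language word vector by a learned linear map and then hunting for the nearest vector in the target-language vocabulary. Here's a rough sketch of the idea in Python – the tiny hand-written embeddings and the identity matrix standing in for the learned map are purely illustrative assumptions, not anything lifted from Google's system:

```python
import numpy as np

# Hypothetical, hand-written word vectors for illustration only; real models
# learn vectors with hundreds of dimensions from huge monolingual corpora.
en_vectors = {"cat": np.array([0.9, 0.1, 0.3]),
              "dog": np.array([0.8, 0.2, 0.5])}
es_vectors = {"gato":  np.array([0.7, 0.2, 0.4]),
              "perro": np.array([0.6, 0.3, 0.6]),
              "casa":  np.array([0.1, 0.9, 0.2])}

# W is the linear map from English space to Spanish space, learned from a
# small seed dictionary (see the later sketch); the identity is a stand-in.
W = np.eye(3)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def translate(word, W, src, tgt):
    """Project a source-language vector and return the closest target word."""
    projected = src[word] @ W
    return max(tgt, key=lambda w: cosine(projected, tgt[w]))

print(translate("cat", W, en_vectors, es_vectors))  # expected: "gato"
```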

Feeling nervous yet, human?

It works because, as the researchers explain, "all common languages share concepts that are grounded in the real world (such as that cat is an animal smaller than a dog)", and so "there is often a strong similarity between the vector spaces" of different languages.

Google's technology relies on the Skip-gram or Continuous Bag-of-Words (CBOW) models proposed by Googlers in another, earlier paper, which found that word vectors could be used to infer other words. "For example, vector operations 'king' - 'man' + 'woman' results in a vector that is close to 'queen'."
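
You can get a feel for that arithmetic with a few made-up vectors. The numbers below are invented for illustration – real Skip-gram and CBOW vectors are learned from text, not typed in by hand – but the nearest-neighbour lookup mirrors the 'king' - 'man' + 'woman' trick:

```python
import numpy as np

# Invented three-dimensional vectors, loosely encoding (royalty, maleness,
# femaleness); real learned vectors have hundreds of dimensions.
vocab = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "man":    np.array([0.1, 0.9, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
    "prince": np.array([0.8, 0.7, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 'king' - 'man' + 'woman' should land nearest to 'queen' (query words excluded).
target = vocab["king"] - vocab["man"] + vocab["woman"]
candidates = {w: v for w, v in vocab.items() if w not in {"king", "man", "woman"}}
print(max(candidates, key=lambda w: cosine(target, candidates[w])))  # "queen"
```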

These models let Google create neural network models that learn high-quality word vectors from vast datasets, and do so in a less compute-intensive way than ever before. This lets the company scale up the model far beyond previous limits.

"Using the DistBelief distributed framework, it should be possible to train the CBOW and Skip-gram models even on corpora with one trillion words, for basically unlimited size of the vocabulary," they wrote at the time. "That is several orders of magnitude larger than the best previously published results for similar models."

Now the team has put these models to work, training them to figure out the relationships between different words and to infer the vector representation of a word's counterpart in another language.

"Thus, if we know the translation of one and four from English to Spanish, we can learn the transformation matrix that can help us to translate even the other numbers to Spanish," they write.

The technique also works, with a high degree of accuracy, for language pairs far more distant from one another, such as English and Czech or English and Vietnamese.

"In particular, our work can be used to enrich and improve existing dictionaries and phrase tables, which would in turn lead to improvement of the current state-of-the-art machine translation systems," they write. "Clearly, there is still much to be explored."

In other words, get tweaking the CV, translators, because Google's algo-army is coming for you. Comprender? ®
