How Google's Smart Compose for Gmail works – and did it fake its robo-caller demo?

Plus: Classifying frogs can be hopping mad


Roundup Hello, here's our weekly AI roundup. We have more information on how Google's sentence prediction in Smart Compose for Gmail works, as well as some questions about its Duplex robo-caller system. Also, decision trees to classify the mating calls of frogs and toads to study climate change.

Too lazy? Let AI write your emails Google unveiled Smart Compose, a new tool in Gmail that completes sentences as a user types with the help of machine learning, at its I/O developer conference last week. But how does it work?

A new blog post reveals it's a hybrid of a bag-of-words model and a recurrent neural network language model (RNN-LM). To try to create sentences that are relevant to the email, the model takes the subject line and any previous messages in the thread into account. These fields are encoded as averaged word embedding vectors.

“In this hybrid approach, we encode the subject and previous email by averaging the word embeddings in each field. We then join those averaged embeddings, and feed them to the target sequence RNN-LM at every decoding step,” according to the blog post.


A rough diagram of Google's Smart Compose RNN language model system. Image credit: Google AI

This allows the model to predict the next word given the previous ones in the same sentence, while taking the context of the email into account. The model was trained on billions of, presumably mundane, emails to nail the prediction process. A whole TPU2 pod containing 64 TPU2 chips was used to train the model in less than a day.
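The encoding step described in the blog post - average the embeddings in each field, join them, and feed the result to the decoder at every step - can be sketched in a few lines. This is a toy illustration, not Google's code: the vocabulary, embedding table, and dimension are invented, and a real model would learn the embeddings jointly and pass the joined vector into an RNN decoder rather than stopping here.

```python
import random

DIM = 4  # toy embedding size; the real model's is much larger
random.seed(0)

# Stand-in embedding table (assumption: real embeddings are learned)
VOCAB = ["dinner", "friday", "see", "you", "then", "at", "my", "place"]
EMB = {w: [random.uniform(-1, 1) for _ in range(DIM)] for w in VOCAB}

def encode_context(words):
    """Average the word embeddings of one field (subject or previous email)."""
    vecs = [EMB[w] for w in words if w in EMB]
    if not vecs:
        return [0.0] * DIM
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def decoder_input(subject, prev_email, current_word):
    """Join the averaged field embeddings with the current token embedding,
    as fed to the RNN-LM at every decoding step."""
    return encode_context(subject) + encode_context(prev_email) + EMB[current_word]

# One decoding step: subject + previous email context + the token just typed
x = decoder_input(["dinner", "friday"], ["see", "you", "then"], "at")
```

The joined vector `x` has `3 * DIM` entries - subject average, previous-email average, and the current token - so the decoder sees the email's context at every step, not just the sentence so far.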

Google said that balancing model complexity and inference speed was a critical issue. In order for Smart Compose to be useful, it has to offer predictions as the user is typing - ideally within 100 milliseconds.

So it’ll probably cope okay with emails that have a generic structure, such as the one in the demo - emailing friends to organize dinner at your house - but it won’t be as good at more obscure, free-form chit-chat.

Did Google fake its Duplex demo? No answers here. More I/O-related news. Questions have been raised about the authenticity of Google’s Duplex demo, in which CEO Sundar Pichai introduced its AI-assisted robo-caller on stage last week.

Axios was suspicious that the businesses did not identify themselves during the calls, and did not ask Duplex for its name or number. Google, however, may have snipped out this part of the conversation for privacy reasons.

There was also no ambient noise in either of the calls. No sounds of chattering in the restaurants or hairdressers.

When it quizzed Google PR about this, even promising not to publish the names of the businesses, there was no answer beyond a flimsy promise to get back to the journalist.

The story was picked up by a few other publications including Vanity Fair and TechSpot. It’s a bit of a reach, but Google doesn’t help itself by ignoring questions either.

The Register also reached out to Google for clarification. But, surprise, surprise, we got radio silence too. ¯\_(ツ)_/¯

Brute force compute OpenAI has analyzed the amount of compute used to train some of the largest and most popular models in AI since 2012 to find out how much it has risen over the years.

“Since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5 month-doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase),” it wrote in a blog post. OpenAI reckons the amount of compute has increased by about a factor of 10 each year since 2012.
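The two figures OpenAI quotes are consistent with each other, as a quick back-of-the-envelope check shows (assuming a measurement window of roughly five and a half years):

```python
import math

# Number of doublings needed to reach a 300,000x increase
doublings = math.log2(300_000)         # about 18.2 doublings

# At one doubling every 3.5 months, that takes...
years = doublings * 3.5 / 12           # about 5.3 years

# An 18-month doubling time (Moore's Law) over the same window yields only:
moore_factor = 2 ** (years * 12 / 18)  # about 12x, matching the quote
```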


Logarithmic plot of compute spent on some of the largest training runs since 2012. Image credit: OpenAI

AlphaGo Zero tops the list, guzzling more than 1,000 petaflop/s-days. AlphaZero is second. Google’s Neural Machine Translation is third, followed closely by its Neural Architecture Search. In 2012, it was AlexNet, consuming less than 0.1 petaflop/s-days.

A petaflop/s-day is equivalent to performing about 10¹⁵ neural net operations per second, sustained for an entire day. So, it’s a lot of brute force number crunching to advance progress in AI. Decent results, however, don’t necessarily depend on who used the most chips, as seen in the latest DAWNBench results.
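In raw operation counts, the unit works out as follows - a sketch of the arithmetic using the figures quoted above:

```python
# One petaflop/s-day: 10^15 operations per second, sustained for a day
PFS_DAY = 1e15 * 60 * 60 * 24      # roughly 8.64e19 operations

# AlphaGo Zero at more than 1,000 petaflop/s-days:
alphago_zero_ops = 1000 * PFS_DAY  # over 8.6e22 operations

# AlexNet in 2012 at less than 0.1 petaflop/s-days:
alexnet_ops = 0.1 * PFS_DAY        # under 8.7e18 operations
```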

“Deep learning models are fine, but researchers mainly ignore the simple tricks that make them much faster to train,” Jeremy Howard, founder of fast.ai, a popular online deep learning course, and a researcher at the University of San Francisco, previously told The Register.

Nevertheless, it’s still interesting to track how much compute has increased thanks to the rise of custom hardware such as GPUs, ASICs, FPGAs, and TPUs.

Ribbit, ribbit A team of researchers has used AI to automatically classify the calls of frogs and toads in order to study climate change.

The sound of anuran mating calls is affected by temperature. If it gets too high, some of the physiological processes that produce the sound are impaired, and some calls are suppressed altogether. So classifying and counting mating calls gives scientists a way to study climate change.

“We've segmented the sound into temporary windows or audio frames and have classified them by means of decision trees, an automatic learning technique that is used in computing,” Amalia Luque Sendra, co-author of the work and a researcher at the Universidad de Sevilla, explained.
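As a rough illustration of the pipeline Luque Sendra describes - segment a recording into fixed-length windows, extract a feature per window, and classify each window with a decision rule - here is a toy sketch in plain Python. Everything here is invented for illustration: the synthetic sine-wave "calls", the zero-crossing-rate feature, and the single-split "tree" stand in for the real acoustic features and learned decision trees in the paper.

```python
import math

SAMPLE_RATE = 8000  # samples per second (assumption for this toy)
FRAME = 256         # samples per analysis window

def tone(freq_hz, seconds=0.5):
    """Synthetic stand-in for a recorded call: a pure sine wave."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def frames(signal, size=FRAME):
    """Segment a signal into fixed-length, non-overlapping windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, size)]

def zero_crossing_rate(frame):
    """Crude pitch proxy: fraction of adjacent samples that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

def classify(frame, threshold=0.05):
    """A one-split 'decision tree' (a stump) on a single feature."""
    return "species_a" if zero_crossing_rate(frame) < threshold else "species_b"
```

A real system would learn the tree's splits from labelled recordings (for example with scikit-learn's DecisionTreeClassifier) rather than hard-coding a threshold, and would use richer acoustic features than a zero-crossing rate.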

The researchers said they achieved a success rate close to 90 per cent when classifying the sounds. The method also helps them estimate the number of individuals of a species across a geographical region - another indicator of environmental change.

"A temperature increase affects the calling patterns," she said, "but since these in most cases have a sexual calling nature, they also affect the number of individuals. With our method, we still can't directly determine the exact number of specimens in an area, but it is possible to get a first approximation."

The paper has been published in the journal Expert Systems with Applications.




Biting the hand that feeds IT © 1998–2018