TensorFlow lightens up to land on smartmobes, then embed everywhere

Thanks for coming, TensorFlow Mobile, TensorFlow Lite is what the cool kids will code with now

By Richard Chirgwin


Google's released an Android/iOS version of TensorFlow.

The Chocolate Factory announced the developer preview of TensorFlow Lite in a blog post on Tuesday. The post stated the release will initially target smartmobes, with later versions to target embedded devices.

Google first revealed its desire for machine learning everywhere at its I/O conference in May.

Pushing machine learning out to the devices makes sense, since it reduces latency for those running inference, and Google's not the only company to spot that. Qualcomm, for example, first announced its mobile-specific silicon, Zeroth, in 2013.

Google explained that TensorFlow Lite's architecture assumes that the grunt work of model training will happen upstream, with the trained model then converted to the TensorFlow Lite format for deployment on the device.

Google listed the tool's components thus:

Out on the target smartphone, a C++ API (native on iOS; wrapped in a Java API on Android) loads the TensorFlow Lite model and calls the interpreter.

A fully-loaded interpreter is 300 KB, including all machine learning operators (on its own, the interpreter is just 70 KB). Google notes that the current TensorFlow Mobile is 1.5 MB.

Androids can also offload processing to hardware accelerators if they're available, using the Android Neural Networks API.

Models available to TensorFlow Lite include the MobileNet and Inception v3 vision models, and the Smart Reply conversational model.
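The load-the-model-and-call-the-interpreter flow described above can be sketched with the TensorFlow Lite C++ API. This is an illustration, not code from Google's announcement: the header paths follow later TensorFlow Lite releases (in the 2017 developer preview they sat under tensorflow/contrib/lite/), and the model filename is hypothetical.

```cpp
// Sketch: load a converted .tflite model and run inference on-device.
// Requires linking against the TensorFlow Lite library; paths and the
// "mobilenet.tflite" filename are illustrative assumptions.
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model file produced upstream by the converter.
  auto model = tflite::FlatBufferModel::BuildFromFile("mobilenet.tflite");
  if (!model) return 1;

  // Resolve the built-in operators and construct the interpreter.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // Fill the input tensor (e.g. image pixels), run inference,
  // then read back the output tensor (e.g. class scores).
  float* input = interpreter->typed_input_tensor<float>(0);
  // ... populate input ...
  interpreter->Invoke();
  float* output = interpreter->typed_output_tensor<float>(0);
  (void)input; (void)output;
  return 0;
}
```

On Android, the same model would normally be driven through the Java wrapper around this C++ core, with the interpreter free to hand work to the Android Neural Networks API where accelerators exist.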

For now, TensorFlow Mobile stays on Google's books. Google's announcement stated that it viewed TensorFlow Mobile as the system to support production applications. However: “Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices”. ®
