
Google and Intel cook AI chips, neural network exchanges – and more

A quick catch-up on what's been going on in machine-learning world

Roundup Welcome to our roundup of major AI news from the past two weeks. Machine learning is hyped enough as it is, and it doesn't help when companies such as Intel and Nvidia announce new chips, reveal next to nothing about the specs, and yet make lofty claims of increased speed and precision.

It's also difficult to keep track of all the different software frameworks and hardware options. Outfits like ARM, AMD, Amazon and Facebook are aware of this, and are trying to make it easier to transfer models written in one framework to another, and to optimize those models across various chips.

Google's ‘surprise’ Pixel 2 chip It's the first smartphone chip Google has ever designed, and it wasn't announced during the launch of the Pixel 2, which features the new silicon, because, er, it isn't yet enabled, nor supported by applications.

The coprocessor, known as the Pixel Visual Core, has eight Image Processing Units (IPUs). There aren’t many details on the specs, but it’s been designed to run image-processing machine-learning software within the Pixel 2, mainly working on photos taken by the phone's camera. Each IPU core is packed with 512 arithmetic logic units capable of running 3 trillion operations per second, we're told.

It will power HDR+, a feature in the Pixel 2’s camera app that touches up and improves contrast for photos taken in dim lighting. What’s most interesting, however, is that Google has admitted the IPU is difficult to program.
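HDR+ works, broadly speaking, by capturing a burst of dim frames, merging them to cut noise, and then tone-mapping the result to lift the shadows. As a rough illustration of that merge-and-tone-map idea – a toy NumPy sketch, not Google's actual pipeline, which among other things also aligns frames to cope with motion:

```python
import numpy as np

def merge_and_tonemap(frames, gamma=2.2):
    """Toy HDR+-style merge: average a burst of aligned, dim frames to
    reduce noise, then brighten the result with a gamma curve.
    `frames` is a list of HxWx3 float arrays with values in [0, 1]."""
    # Averaging N frames cuts random sensor noise by roughly sqrt(N)
    merged = np.mean(np.stack(frames), axis=0)
    # Crude global tone map: lift shadows without clipping highlights
    return np.clip(merged ** (1.0 / gamma), 0.0, 1.0)

# Usage: simulate a burst of four noisy, dim captures of the same scene
scene = np.random.rand(480, 640, 3) * 0.3  # a dim ground-truth image
burst = [np.clip(scene + np.random.normal(0, 0.05, scene.shape), 0, 1)
         for _ in range(4)]
photo = merge_and_tonemap(burst)
```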

“A key ingredient to the IPU’s efficiency is the tight coupling of hardware and software — our software controls many more details of the hardware than in a typical processor,” said Ofer Shacham and Masumi Reynders of Google's Pixel team. "Handing more control to the software makes the hardware simpler and more efficient, but it also makes the IPU challenging to program using traditional programming languages."

Google has had to use a mixture of Halide, a programming language for image processing, and TensorFlow to control the tiny accelerator. It’s also had to create a custom compiler to optimize the software for its IPU. The Pixel Visual Core is expected to be activated in a future software update for Pixel 2 users, once developers have been able to write apps for it.

Finally, the chip was fabbed by Intel's custom foundry wing: the SR3HX part number on the package, and the Intel-shaped logo, give it away.

Intel's Nervana Neural Network Processor During the D.Live tech conference, Intel announced a new chip designed for training and deploying deep-learning models. No real specs were revealed, but Intel cheekily insists it'll be faster. Faster than what, exactly, and by how much? We're not quite sure. Intel has been asked to clarify.

The ASIC, formerly known as “Lake Crest”, has been three-and-a-half years in the making, and it’s designed to cope with the intense loads of matrix multiplication, convolutions and other operations needed to run neural networks.

Interestingly, it uses Flexpoint, a lower-precision number format, so computations are cheaper and the available memory bandwidth goes further. “Flexpoint allows scalar computations to be implemented as fixed-point multiplications and additions while allowing for large dynamic range using a shared exponent,” the Nervana team wrote in a blog post.
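In other words, every value in a tensor is stored as a small fixed-point integer, with a single exponent shared by the whole tensor. Here's a hand-rolled sketch of that shared-exponent idea – not Intel's actual Flexpoint format, just the principle described in the blog post:

```python
import numpy as np

def flexpoint_encode(x, mantissa_bits=16):
    """Toy shared-exponent encoding: integer mantissas per element,
    one exponent for the entire tensor."""
    # Pick the smallest exponent that keeps the largest value in range
    max_mag = np.max(np.abs(x))
    exp = int(np.ceil(np.log2(max_mag))) if max_mag > 0 else 0
    scale = 2.0 ** (exp - (mantissa_bits - 1))
    mantissas = np.round(x / scale).astype(np.int32)
    return mantissas, exp

def flexpoint_decode(mantissas, exp, mantissa_bits=16):
    scale = 2.0 ** (exp - (mantissa_bits - 1))
    return mantissas.astype(np.float64) * scale

t = np.array([0.001, 0.5, 100.0, -3.75])
m, e = flexpoint_encode(t)
print(flexpoint_decode(m, e))
# -> [0.0, 0.5, 100.0, -3.75]: the tiniest value underflows to zero,
#    the trade-off of sharing one exponent across the whole tensor
```

The win is that the multiply-accumulate grind inside the chip can happen in cheap integer hardware, with the exponent handled once per tensor rather than once per value.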

Intel has been working closely with Facebook, so the social media platform will get first dibs on the chip. It will be shipping to the social network by the end of the year, and perhaps others next year, and is based on technology Intel acquired when it bought Nervana last year.

Ex-Obama advisor to lead the Partnership on AI A megagroup of machine-learning powerhouses will be headed up by Terah Lyons, a former advisor to US President Barack Obama.

The Partnership on AI, which includes Amazon, Google, Facebook, Microsoft, DeepMind, Apple, and several other companies, focuses on guiding AI "to benefit people and society".

Lyons will be the founding executive director, arriving after a brief stint as a technology policy fellow at the Mozilla Foundation and, before that, a role as a policy advisor to the US Chief Technology Officer in the White House during Obama's presidency.

The Partnership on AI hasn't done much yet, but the board of directors and representatives from member organizations will meet in Berlin next week to start working on its goals: developing best practices for AI, and advancing the public’s understanding of the technology.

Comma AI's push for 'augmented driving' Comma AI, the self-driving-car upstart headed by PlayStation 3 and iPhone hacker George Hotz, has released a dashcam and real-time display for avid car tinkerers.

EON is designed to record videos of your ride and upload them to the cloud, where they can be played back by Comma AI’s chffr app.

The software also uses two relatively simple deep-learning models to analyze the live dashcam video feed and overlay information on it in real time, displaying the combined result on a smartphone fitted to your dashboard or under the rear-view mirror. One of the models, DepthNet, highlights moving objects in pink: the brighter the color, the nearer you are to another vehicle. The other, SegNet – short for segmentation network – recognizes the different components of a typical driving scene, such as the sky, signs, traffic lights, and so on, although at the moment the system only highlights lane markings in green.
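As a rough idea of how such overlays are composited onto each frame – a generic toy sketch, not Comma AI's code, with the masks standing in for the two networks' outputs:

```python
import numpy as np

def overlay_drive_view(frame, depth_mask, lane_mask):
    """Blend model outputs onto a dashcam frame. `frame` is HxWx3 uint8;
    `depth_mask` holds per-pixel nearness scores in [0, 1] (a stand-in
    for DepthNet output); `lane_mask` flags lane-line pixels (a stand-in
    for SegNet output)."""
    out = frame.astype(np.float32)
    pink = np.array([255.0, 0.0, 255.0])   # nearer vehicles -> brighter pink
    green = np.array([0.0, 255.0, 0.0])    # lane markings -> solid green
    # Alpha-blend the proximity layer, intensity scaled by nearness
    alpha = depth_mask[..., None] * 0.6
    out = out * (1.0 - alpha) + pink * alpha
    out[lane_mask] = green                  # paint lane-line pixels green
    return out.astype(np.uint8)

# Usage with dummy masks in place of real network predictions
frame = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.zeros((480, 640)); depth[200:300, 250:400] = 0.9  # a close car
lanes = np.zeros((480, 640), dtype=bool); lanes[400:, 310:330] = True
hud = overlay_drive_view(frame, depth, lanes)
```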

This isn't so much a self-driving car app – it doesn't control the vehicle at all – instead it's a driving aid. It's hackable and open-source so, essentially, it's a toolkit for tinkerers and developers to work on a live video heads-up display of their journey.

Hotz said it will help motorists know when they are drifting out of lane positions, or if another car is dangerously close. But that information should already be pretty obvious if drivers have their eyes on the road, so we aren’t completely sure what the dashcam is really good for.

It’s been integrated with Waze, a navigation app that allows drivers to avoid traffic jams, and Spotify, so there’s that.

Level 5 autonomous car chip?! Here is more vague chip news. Nvidia announced what it claims will be the "world’s first" computing platform geared towards developing completely autonomous taxis.

Pegasus will apparently propel autonomous cars to "level five," a class of cars where steering wheels and mirrors are optional as the vehicle will require no human intervention to drive.

There are scant details on Pegasus beyond its claimed ability to perform "320 trillion operations per second." It will definitely take more than a powerful chip to build a completely autonomous car, so we aren't quite sure what qualifies Nvidia's latest silicon as level five.

Previous platforms in Nvidia's Drive PX series use SoCs for semi-autonomous cars at levels one, two and three. At those lower levels, the system can assist with actions such as steering, acceleration and deceleration, but the driver does the majority of the work.

Democratising AI Hardware ARM, AMD, Huawei, IBM, Qualcomm and Intel are supporting ONNX, aka the Open Neural Network Exchange format, an open project that makes it easier to transfer models between different AI frameworks.

Developers have favorites when it comes to software, and it can be tricky trying to use a deep-learning program in a framework different to the one it was written in. ONNX, a partnership between Facebook and Microsoft, aims to make this task easier: neural networks can be trained in one framework, then transferred to another for the inference stage.
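In practice, that hand-off looks something like the following – a minimal sketch using PyTorch's ONNX exporter, with a made-up toy model and file name:

```python
import torch
import torch.nn as nn

# A small example network, trained (hypothetically) in PyTorch
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# The exporter traces the model using a dummy input of the right shape
dummy_input = torch.randn(1, 784)

# Serialize the network to ONNX; another framework, such as Caffe2 or
# Cognitive Toolkit, can then load "model.onnx" to run inference
torch.onnx.export(model, dummy_input, "model.onnx")
```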

It’ll also try to optimize the models across different hardware platforms. So it’s mutually beneficial for companies – like Facebook – that lack their own custom AI chips, and for businesses that have their own chips but do not specialize in software – such as ARM, IBM, Huawei and Qualcomm. ONNX currently only works between Caffe2, PyTorch and Cognitive Toolkit.

...and Amazon too Amazon also has a similar goal in mind with Gluon, a new programming interface. "The first result of this collaboration is the new Gluon interface, an open source library in Apache MXNet that allows developers of all skill levels to prototype, build, and train deep learning models," Amazon wrote in a blog post.
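Here's roughly what that interface looks like – a minimal sketch built from Gluon's documented pieces, with arbitrary layer sizes:

```python
from mxnet import gluon

# Define a network imperatively from Gluon's predefined layers
net = gluon.nn.Sequential()
with net.name_scope():
    net.add(gluon.nn.Dense(128, activation='relu'))
    net.add(gluon.nn.Dense(10))

# Gluon also bundles the initializers and optimizers mentioned below
net.initialize()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
```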

Gluon makes it easier to build neural networks, as it ships with predefined layers, optimizers, and initializers, and it promises to let developers prototype and debug models flexibly without sacrificing training speed. ®
