
'Self-learning' Intel chips glimpsed, Nvidia emits blueprints, AMD and Tesla rumors, and more

A quick guide to this week's reveals

By Katyanna Quach


AI roundup Here's your weekly dose of key announcements in the world of artificial intelligence. The flurry of hardware-related news shows how machine learning software is reinvigorating chip design.

Chipzilla's "self-learning chip" Intel is the latest company to hype up a chip specialized for AI. But it's not a GPU, a CPU, or an FPGA... it's a neuromorphic chip codenamed Loihi. Anything to distract from its 10nm problems, the cynics among us are thinking.



Neuromorphic computing loosely mimics the activity of a brain. The circuits are programmed to fire like activated neurons, passing information to the next group of neurons, and so on. It's an area that Intel has been interested in for a while. Loihi uses Intel's 14nm fabrication process, and has a total of 130,000 neurons and 130 million synapses, we're told. When it'll arrive commercially is anyone's guess, though boffins will be offered the silicon for research in the first half of 2018, it's claimed.
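For the curious, the fire-and-pass-along behavior described above can be sketched with a classic leaky integrate-and-fire model. This is a minimal illustrative simulation, not Intel's actual Loihi design, and all names and parameter values here are our own assumptions: a neuron accumulates weighted input spikes, leaks potential over time, and fires when a threshold is crossed.

```python
def lif_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    input_spikes: list of 0/1 values, one per time step.
    Returns a list of 0/1 output spikes (1 = neuron fired).
    """
    potential = 0.0
    output = []
    for spike in input_spikes:
        # Leak some stored potential, then integrate the weighted input
        potential = potential * leak + weight * spike
        if potential >= threshold:
            output.append(1)
            potential = 0.0  # reset after firing
        else:
            output.append(0)
    return output

if __name__ == "__main__":
    train = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
    print(lif_neuron(train))
```

Chaining such neurons into layers, with spikes from one layer feeding the next, gives the broad flavor of what neuromorphic hardware computes in silicon rather than software.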

Google Cloud gets GPU boost Cloud platforms are pretty useful for researchers who need to book some runtime on beefy GPU-backed clusters for machine learning, provided there's enough capacity to spare. Better graphics accelerators in the cloud mean customers can train and run inference models faster. Google Cloud Platform announced it has deployed Nvidia P100 GPUs in beta, alongside Nvidia K80 GPU accelerators, for its Google Compute Engine, improving its lineup of on-demand hardware acceleration for AI.

Nvidia pushing for faster inference Nvidia's CEO Jensen Huang paraded on stage in his signature black leather jacket for his outfit's GTC event in China. A few announcements included Nvidia Deep Learning Accelerator, an open-source set of Verilog blueprints for creating your own inference hardware accelerators for deep learning.

It also released TensorRT 3, software that performs "3.5x faster inference" on its latest Tesla V100 chips compared to its older P100 family. It supports optimization for models trained in Google's TensorFlow and the Caffe framework.

Apple reads the writing on the wall Apple has updated its machine-learning blog with a paper describing the "real-time recognition of handwritten Chinese characters spanning a large inventory of 30,000 characters." The team behind the tech concluded: "Building a high-accuracy handwriting recognition system which covers a large set of 30,000 Chinese characters is practical even on embedded devices."

Are Tesla and AMD really working on a chip together? Last week, it was claimed Elon Musk's Tesla was working with chip fabricator Global Foundries to manufacture an AI processor for its self-driving cars. The rumor was circulated by CNBC, but the US telly news channel later reworded its report to say Global Foundries is not working with Tesla. An unnamed source in the story alleged Tesla is working with AMD – presumably using one of AMD's semi-custom or embedded designs – for the robo-rides' neural-network accelerator.

It's not news that Musk is interested in developing custom chips for his fleet of autonomous vehicles – he hired processor guru Jim Keller and a bunch of other silicon engineers, after all – but a collaboration with AMD would be interesting, as it shows chip companies are pairing off with self-driving outfits to drive competition. Intel is working with Waymo, and it's assumed Nvidia is working with everyone else.

We asked Tesla and AMD for clarification. A Tesla spokesperson said: "Tesla's policy is to always decline to comment on speculation." AMD declined to comment. Global Foundries, which was spun out of AMD's manufacturing operations, confirmed it wasn't collaborating with Tesla.

Meanwhile, it appears Tesla has tapped up Intel for its in-vehicle entertainment. So it all seems rather up in the air, for now. ®


