
'Self-learning' Intel chips glimpsed, Nvidia emits blueprints, AMD and Tesla rumors, and more

A quick guide to this week's reveals

By Katyanna Quach


AI roundup Here's your weekly dose of key announcements in the world of artificial intelligence. The flurry of hardware-related news shows how machine learning software is reinvigorating chip design.

Chipzilla's "self-learning chip" Intel is the latest company to hype up a chip specialized for AI. But it's not a GPU, a CPU, or an FPGA... it's a neuromorphic chip codenamed Loihi. Anything to distract from its 10nm problems, the cynics among us are thinking.


Neuromorphic computing loosely mimics the activity of a brain: the circuits are programmed to fire like activated neurons, passing information on to the next group of neurons, and so on. It's an area Intel has been interested in for a while. Loihi is built on Intel's 14nm fabrication process, and packs a total of 130,000 neurons and 130 million synapses, we're told. When it'll show up in anything you can buy is anyone's guess; boffins will be offered the silicon to play with in the first half of 2018, it's claimed.
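
For the curious, here's a toy sketch of the spiking idea in plain Python: a single leaky integrate-and-fire neuron that builds up charge from its inputs and fires once it crosses a threshold. It's an illustration of the general concept only, not a model of Loihi's circuitry, and the neuron model and constants are our own assumptions.

# Toy leaky integrate-and-fire neuron: an illustration of the spiking idea,
# not a model of Intel's Loihi hardware. All constants here are arbitrary.
def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input each timestep; emit a spike and reset on crossing the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # charge leaks away a little, then input is added
        if potential >= threshold:
            spikes.append(1)                     # the neuron fires...
            potential = reset                    # ...and its potential resets
        else:
            spikes.append(0)
    return spikes

# A steady drip of input pushes the neuron over its threshold every few steps.
print(simulate_lif([0.3] * 20))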

Google Cloud gets GPU boost Cloud platforms are pretty useful for researchers who need to book some runtime on beefy GPU-backed clusters for machine learning, provided there's enough capacity to spare. Better graphics accelerators in the cloud mean customers can train models and run inference faster. Google Cloud Platform announced it has deployed Nvidia P100 GPUs in beta, alongside Nvidia K80 GPU accelerators, for its Google Compute Engine, improving its lineup of on-demand hardware acceleration for AI.

Nvidia pushing for faster inference Nvidia's CEO Jensen Huang paraded on stage in his signature black leather jacket at his outfit's GTC event in China. Among the announcements was the Nvidia Deep Learning Accelerator, an open-source set of Verilog blueprints for building your own deep-learning inference accelerators.

It also released TensorRT 3, software that performs "3.5x faster inference" on its latest Tesla V100 chips compared to the older P100 family. It supports optimization of models trained in Google's TensorFlow and in Caffe.
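
Here's roughly what feeding a model through TensorRT looks like in Python, as a hedged sketch rather than gospel: it uses a present-day TensorRT API and the ONNX import route rather than the Caffe and UFF parsers of the TensorRT 3 era, and the file names are placeholders.

import tensorrt as trt  # assumes a recent TensorRT install; the 2017-era API differed

# Build an optimized inference engine from an exported ONNX model and save it to disk.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:        # placeholder: a model exported from your framework
    if not parser.parse(f.read()):
        raise RuntimeError("failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)      # half precision plays to the V100's tensor cores
engine = builder.build_serialized_network(network, config)

with open("model.plan", "wb") as f:        # the serialized engine the TensorRT runtime loads
    f.write(engine)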

Apple reads the writing on the wall Apple has updated its machine-learning blog with a paper describing the "real-time recognition of handwritten Chinese characters spanning a large inventory of 30,000 characters." The team behind the tech concluded: "Building a high-accuracy handwriting recognition system which covers a large set of 30,000 Chinese characters is practical even on embedded devices."

Are Tesla and AMD really working on a chip together? Last week, it was claimed Elon Musk's Tesla was working with chip fabricator Global Foundries to manufacture an AI processor for its self-driving cars. The rumor was circulated by CNBC, but the US telly news channel later reworded its report to say Global Foundries is not working with Tesla. An unnamed source in the story alleged Tesla is working with AMD – presumably using one of AMD's semi-custom or embedded designs – for the robo-rides' neural-network accelerator.

It's not news that Musk is interested in developing custom chips for his fleet of autonomous vehicles – he hired processor guru Jim Keller and a bunch of other silicon engineers, after all – but a collaboration with AMD would be interesting, as it would show chip companies pairing off with self-driving outfits to drive competition. Intel is working with Waymo, and it's assumed Nvidia is working with everyone else.

We asked Tesla and AMD for clarification. A Tesla spokesperson said: "Tesla's policy is to always decline to comment on speculation." AMD declined to comment. Global Foundries, which was spun out of the x86 designer, confirmed it wasn't collaborating with Tesla.

Meanwhile, it appears Tesla has tapped up Intel for its in-vehicle entertainment. So it all seems rather up in the air, for now. ®


