What sort of silicon brain do you need for artificial intelligence?

Using CPUs, GPUs, FPGAs and ASICs to make sense of AI

FPGAs and more

When it isn’t drowning squirrels, Microsoft is rolling out field programmable gate arrays (FPGAs) in its own data centre revamp. These are similar to ASICs but reprogrammable, so that their algorithms can be updated. They handle networking tasks within Azure, but Microsoft has also unleashed them on AI workloads such as machine translation. Intel wants a part of the AI industry wherever it happens to be running, and that includes the cloud. To date, its Xeon Phi high-performance CPUs have tackled general purpose machine learning, and the latest version, codenamed Knights Mill, ships this year.

The company also has a trio of accelerators for more specific AI tasks, though. For training deep learning neural networks, Intel is pinning its hopes on Lake Crest, which comes from its Nervana acquisition. This is a co‑processor that the firm says overcomes data transfer performance ceilings using a type of memory called HBM2, which is around 12 times faster than DDR4.
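The exact speed-up depends on which parts you compare; a rough back-of-envelope calculation with published per-part figures (one HBM2 stack against one DDR4-2400 channel, illustrative numbers rather than Intel's own comparison) lands in the same ballpark as the claim above:

```python
# Illustrative bandwidth figures (not Intel's benchmark):
# one DDR4-2400 channel vs one HBM2 stack.
ddr4_channel_gbps = 19.2   # 64-bit channel at 2400 MT/s: 2400e6 * 8 bytes
hbm2_stack_gbps = 256.0    # 1024-bit bus at ~2 Gb/s per pin

ratio = hbm2_stack_gbps / ddr4_channel_gbps
print(f"HBM2 stack ~{ratio:.0f}x one DDR4 channel")
```

Multi-channel DDR4 configurations narrow the gap, which is why quoted multipliers vary.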

While these big players jockey for position with systems built around GPUs, FPGAs and ASICs, others are attempting to rewrite AI architectures from the ground up.

KnuEdge is reportedly prepping 256-core chips designed for cloud-based operations but isn’t saying much.

UK-based Graphcore, due to release its technology in 2017, has said a little more. It wants its Intelligence Processing Unit (IPU) to use graph-based processing rather than the vectors used by GPUs or the scalar processing in CPUs. The company hopes this will let it fit both training and inference workloads onto a single processor. Notably, its graph-based approach is supposed to mitigate one of the biggest bottlenecks in AI processing: getting data from memory to the processing unit. Dell has been the firm’s perennial backer.
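To see what "graph-based" means in spirit, consider a computation expressed as a dependency graph rather than a sequence of instructions: any node whose inputs are ready can fire, exposing parallelism a scheduler can exploit. This toy evaluator is a sketch of the general idea, not Graphcore's actual IPU execution model:

```python
# A computation as a graph: each node is (function, list of input nodes).
graph = {
    "a":   (lambda: 2.0, []),                   # input node
    "b":   (lambda: 3.0, []),                   # input node
    "mul": (lambda x, y: x * y, ["a", "b"]),    # independent of "add"
    "add": (lambda x, y: x + y, ["a", "b"]),    # independent of "mul"
    "out": (lambda x, y: x - y, ["mul", "add"]),
}

def evaluate(graph, node, cache=None):
    """Evaluate a node by recursively evaluating its dependencies.
    Independent subgraphs ("mul" and "add" here) could run in parallel."""
    if cache is None:
        cache = {}
    if node not in cache:
        fn, deps = graph[node]
        cache[node] = fn(*(evaluate(graph, d, cache) for d in deps))
    return cache[node]

print(evaluate(graph, "out"))  # (2*3) - (2+3) = 1.0
```

Because the graph makes data dependencies explicit, a processor can keep operands close to the compute units that consume them, which is the memory-traffic problem described above.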

Wave Computing is also focusing on a different kind of processing, using what it calls its dataflow architecture. It has a training appliance designed for operation in the data centre that it says can hit 2.9 peta-operations per second.

Edge-side AI

Whereas cloud-based systems can handle both neural network training and inference, client-side devices from phones to drones focus mainly on the latter. Their key considerations are energy efficiency and low-latency computation.
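The training/inference split matters because inference is just a forward pass through weights that were fixed in the data centre. This minimal sketch (toy weights, not from any real model) shows the only computation an edge device needs; training would add the expensive backward pass that updates W1 and W2:

```python
import numpy as np

# Pre-trained weights for a tiny two-layer network (illustrative values).
W1 = np.array([[0.5, -0.2],
               [0.1,  0.4]])
W2 = np.array([0.3, 0.7])

def infer(x):
    """Forward pass only: this is all an edge device runs."""
    hidden = np.maximum(0.0, W1 @ x)  # ReLU activation
    return float(W2 @ hidden)

print(infer(np.array([1.0, 2.0])))  # a single prediction
```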

“You can’t rely on the cloud for your car to drive itself,” says Nvidia’s Buck. A vehicle can’t wait for a crummy connection when making a split-second decision about what to avoid, and long tunnels might also be a problem. So all of the computing has to happen in the vehicle. He touts the Nvidia P4 self-driving car platform for autonomous in-car smarts.

FPGAs are also making great strides on the device side. Intel has Arria, an FPGA co‑processor designed for low-energy inference tasks, while over at startup KRTKL, CEO Ryan Cousens and his team have bolted a low-energy dual-core ARM CPU to an FPGA that handles neural networking tasks. It is crowdfunding its platform, called Snickerdoodle, for makers and researchers who want wireless I/O and computer vision capabilities. “You could run that on the ARM core and only send to the FPGA high-intensity mathematical operations,” he says.
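The partitioning Cousens describes can be sketched in a few lines: control flow, I/O and light preprocessing stay on the CPU, and only the heavy matrix maths crosses over to programmable logic. Here `offload_to_fpga` is a hypothetical stand-in; on real hardware it would be a driver call into the FPGA fabric:

```python
import numpy as np

def offload_to_fpga(a, b):
    # Stand-in for the FPGA's matrix engine (simulated on the CPU here).
    return a @ b

def process_frame(frame, weights):
    frame = frame / 255.0                        # light preprocessing: CPU
    features = offload_to_fpga(weights, frame)   # heavy maths: "FPGA"
    return int(features.argmax())                # decision logic: CPU

weights = np.eye(3)                        # toy 3x3 weight matrix
frame = np.array([10.0, 200.0, 30.0])      # toy 3-pixel "frame"
print(process_frame(frame, weights))       # index of the strongest feature
```

The point of the split is that the ARM core handles irregular, branchy work it is good at, while the FPGA chews through the regular, parallel arithmetic.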

AI is squeezing into even smaller devices like the phone in your pocket. Some processor vendors are making general purpose improvements to their architectures that also serve AI well. For example, ARM is shipping CPUs with increasingly capable GPU areas on the die that should be able to better handle machine learning tasks.

Qualcomm’s Snapdragon processors now feature a neural processing engine that decides which piece of tailored logic a machine learning or neural inference task should run on (voice detection on a digital signal processor and image detection on a built‑in GPU, say). It supports the convolutional neural networks used in image recognition, too. Apple is reportedly planning its own neural processor, continuing its tradition of offloading phone processes onto dedicated silicon.
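The convolution these engines accelerate is simple at heart: slide a small filter over an image and record how strongly each patch matches it. This minimal sketch (a hand-written vertical-edge filter, illustrative rather than from any shipping engine) shows the operation that dominates image-recognition workloads:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A dark-to-bright vertical edge between columns 2 and 3.
image = np.array([[0, 0, 0, 1, 1]] * 4, dtype=float)
edge = np.array([[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]], dtype=float)
print(conv2d(image, edge))  # zero away from the edge, strong where it sits
```

Real networks stack many such filters, learned rather than hand-written, which is why dedicated silicon for the sliding multiply-accumulate pays off.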


Biting the hand that feeds IT © 1998–2017