What sort of silicon brain do you need for artificial intelligence?

Using CPUs, GPUs, FPGAs and ASICs to make sense of AI

Smarter smartphones

This all makes sense to ABI's Orr, who says that while most of the recent activity has been in cloud-based AI processors, this will shift over the next few years as device capabilities catch up. Beyond areas like AR, this may show up in artificial assistants that seem more intelligent. Orr believes they could get much better at understanding what we mean.

“They can’t take action based on a really large dictionary of what possibly can be said,” he says. “Natural language processing can become more personalised and train the system rather than training the user.”

This can only happen with silicon that can bring more processing to bear at any given moment to infer context and intent. "By being able to unload and switch through these different dictionaries that allow for tuning and personalization for all the things that a specific individual might say," he adds.
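To make that switchable-dictionary idea concrete, here is a hypothetical sketch: a small base vocabulary stays resident while a per-user phrase set is paged in to bias recognition toward what a particular person actually says. The names, structure and weightings are invented for illustration, not taken from any shipping assistant.

```python
# Hypothetical sketch of the switchable-dictionary idea Orr describes.
# All names and the scoring weights are invented for illustration.

BASE_VOCAB = {"call", "play", "navigate", "home"}

class PersonalizedRecognizer:
    def __init__(self, base_vocab):
        self.base_vocab = set(base_vocab)
        self.user_vocab = set()  # swapped in per user or per session

    def load_user_dictionary(self, phrases):
        """Swap in this user's learned phrases (contact names,
        favourite places) without retraining the base model."""
        self.user_vocab = set(phrases)

    def score(self, candidate):
        # Bias candidates toward words the device has seen this user use.
        if candidate in self.user_vocab:
            return 2.0   # strong personal prior (assumed weighting)
        if candidate in self.base_vocab:
            return 1.0
        return 0.1

recognizer = PersonalizedRecognizer(BASE_VOCAB)
recognizer.load_user_dictionary({"mum", "the allotment", "five-a-side"})
print(recognizer.score("the allotment"))  # 2.0: personalised hit
```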

Research will continue in this space as teams focus on driving new efficiencies into inference architectures. Vivienne Sze, a professor at MIT who leads its Energy-Efficient Multimedia Systems Group, says that in deep neural network inference, it isn't the computing that slurps most of the power. "The dominant source of energy consumption is the act of moving the input data from the memory to the MAC [multiply and accumulate] hardware and then moving the data from the MAC hardware back to memory," she says.
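To put rough numbers on that, here is a back-of-the-envelope sketch. The energy figures are the oft-quoted order-of-magnitude values for 45nm CMOS (after Horowitz's ISSCC 2014 talk), not measurements from Sze's group, and the layer size is invented:

```python
# Back-of-the-envelope only: energy figures are widely cited
# order-of-magnitude numbers for 45nm CMOS, not Eyeriss measurements.
E_MAC_PJ = 3.0      # ~pJ for a 32-bit multiply-accumulate (assumed)
E_DRAM_PJ = 640.0   # ~pJ for a 32-bit DRAM access (assumed)

macs = 1_000_000    # a hypothetical million-MAC layer

# Worst case: every MAC pulls both operands from DRAM and writes
# its partial sum back out again (three accesses per MAC).
compute_uj = macs * E_MAC_PJ / 1e6
movement_uj = macs * 3 * E_DRAM_PJ / 1e6

print(f"compute:       {compute_uj:.0f} uJ")   # 3 uJ
print(f"data movement: {movement_uj:.0f} uJ")  # 1920 uJ, dwarfing compute
```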

Prof Sze works on a project called Eyeriss that aims to solve that problem. "In Eyeriss, we developed an optimized data flow (called row stationary), which reduces the amount of data movement, particularly from large memories," she continues.
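As a toy illustration of the principle (simple bookkeeping, not Eyeriss's actual row-stationary implementation), compare the large-memory traffic of a naive convolution, where every MAC fetches its operands afresh, with a maximally reusing one that fetches each weight and input pixel from large memory only once:

```python
# Toy accounting of large-memory reads for a small 2D convolution.
# This illustrates the reuse principle behind dataflows like row
# stationary; it is not Eyeriss's actual architecture.
H, W = 32, 32                  # input feature map (hypothetical size)
R, S = 3, 3                    # filter dimensions
OH, OW = H - R + 1, W - S + 1  # output dimensions (no padding)

macs = OH * OW * R * S

# Naive: each MAC fetches one weight and one input from large memory.
naive_reads = 2 * macs

# Reuse: fetch each weight and each input pixel once, hold them in
# small local register files, and reuse them for every MAC that needs
# them, accumulating partial sums on-chip.
reuse_reads = R * S + H * W

print(f"MACs:              {macs}")         # 8,100
print(f"naive large reads: {naive_reads}")  # 16,200
print(f"reuse large reads: {reuse_reads}")  # 1,033
print(f"traffic reduction: {naive_reads / reuse_reads:.0f}x")
```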

There are many more research projects and startups developing processor architectures for AI. While we don’t deny that marketing types like to sprinkle a little AI dust where it isn’t always warranted, there’s clearly enough of a belief in the technology that people are piling dollars into silicon.

As cloud-based hardware continues to evolve, expect silicon that supports AI locally in drones, phones, and automobiles to follow.

In the meantime, Microsoft's researchers are apparently hoping to squeeze their squirrel-hunting code still further, this time onto the 0.007mm² Cortex M0 chip. That will call for a machine learning model 1/10,000th the size of the one they put on the Pi. They must be nuts. ®

We'll be covering machine learning, AI and analytics – and specialist hardware – at MCubed London in October. Full details, including early bird tickets, right here.
