The biggest barrier to AI? It may be the AI companies themselves

We talk to specialist chipmakers about binning the bus and secrecy

It seems the answer to every problem these days is artificial intelligence. Want to reduce traffic congestion? You need AI. Cut down on fake news? AI. Understand your business better? AI. Create next-generation sliced bread? AI.

But as has become repeatedly and painfully clear – most recently with live video of people being murdered in New Zealand – AI is often not up to the task we imagine it is, despite what some companies continue to insist.

A big part of that disconnect is that, in theory, AI should be able to do the tasks everyone claims it can: modern technology, particularly software, should be able to identify and make sense of pretty much anything we humans do, and learn with extraordinary speed.

Whenever these systems fail – and they do so persistently – the excuse is always the same: we didn't have a big enough dataset or the system wasn't sufficiently trained.

YouTube and Facebook both said they weren't able to block all the Christchurch shooting videos because their systems hadn't seen this sort of content before and because people had played around with copies – cropping them, splicing clips together, adding watermarks, and so on – before re-uploading.

"Like any piece of machine learning software, our matching technology continues to get better, but frankly, it's a work in progress," said YouTube's chief product officer Neal Mohan. Facebook VP Chris Sonderby complained about "variants" that made such videos "more difficult to detect" – even though to human beings such videos would have been instantly recognizable.

Is that right?

Technologists are not entirely convinced that explanation holds up. Far more likely, experts say, is that Facebook, Google, et al simply don't have the infrastructure in place to scale up fast enough, both in terms of human moderators and servers doing AI work.

We spoke to one company that hopes to fix the computer side of things: Untether AI. The company is developing a new specialist chip optimized for artificial-intelligence work, and Intel just invested $13m in it.


It has long been accepted that current chip architectures are not great for AI work, because artificial intelligence is all about accessing existing memory to identify what a new input resembles.

With existing chip designs, as the demand for AI processing goes up, a bottleneck appears: more and more data is demanded from memory, yet it all has to cross the processor bus first. This design is great if a computer is expected to do lots of different tasks, but it is sub-optimal if it is being asked to do effectively the same memory-intensive task over and over again.
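To put rough numbers on that bottleneck, here's a back-of-the-envelope Python sketch. Every figure is an illustrative assumption rather than a measurement of any particular chip, but the shape of the result holds: for memory-heavy inference work, the processor spends almost all of its time waiting on the bus rather than computing:

```python
# A rough roofline-style estimate of why inference work is bus-bound.
# All numbers below are illustrative assumptions, not real measurements.

weights = 50e6            # parameters in a modest neural network
bytes_per_weight = 1      # 8-bit weights
flops_per_weight = 2      # one multiply plus one add per weight, per inference

bus_bandwidth = 25e9      # bytes/s across a conventional processor bus (assumed)
compute_rate = 10e12      # ops/s the arithmetic units could sustain (assumed)

time_moving_data = (weights * bytes_per_weight) / bus_bandwidth
time_computing = (weights * flops_per_weight) / compute_rate

print(f"moving the weights: {time_moving_data * 1e3:.2f} ms")   # ~2.00 ms
print(f"doing the math:     {time_computing * 1e3:.3f} ms")     # ~0.010 ms
# Under these assumptions the chip waits on the bus ~200x longer than it
# computes - exactly the imbalance near-memory designs attack.
```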

In developing a more effective AI chip, a number of companies have either redesigned the chip from scratch or gone back to processor designs from the 1990s and rejigged them.

The biggest change is that the processor bus is basically thrown out and replaced with what Untether AI's CEO Martin Snelgrove calls "near-memory computing." He warned that the term has radically different meanings to different groups of people but in his usage, the memory is literally placed next to the processors, resulting in much faster data transfer at much lower energy consumption.

With each step – from the CPU to the GPU (graphics) to the TPU (tensor) to this near-memory design – there is an approximately tenfold increase in efficiency, making the design 1,000 times faster than a traditional CPU design and providing what Untether AI claims is 2.5 petabits per second of data to the processors. In other words, a vast efficiency improvement for AI tasks.
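Those claims are easy to sanity-check. The sketch below simply works through the article's own figures – three roughly tenfold steps, and the claimed 2.5 petabits per second – against an assumed bandwidth for a conventional memory bus:

```python
# The efficiency claim, worked through. The step sizes are the article's
# "approximately ten times" figures, not independent measurements.

steps = ["CPU -> GPU", "GPU -> TPU", "TPU -> near-memory"]
gain = 1
for step in steps:
    gain *= 10
    print(f"{step}: cumulative gain ~{gain}x")
# Three tenfold steps compound to ~1000x over a traditional CPU design.

claimed_bytes = 2.5e15 / 8   # 2.5 petabits/s expressed in bytes/s
typical_bus = 25e9           # assumed bytes/s for a conventional bus
print(f"~{claimed_bytes / typical_bus:,.0f}x a conventional bus")  # ~12,500x
```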

Communication

But in conversation with El Reg, Snelgrove revealed that one of the biggest problems he has in developing a chip specifically for the AI market is that none of his customers will tell him what they want to use it for.

"They just won't tell us," he explains with a mixture of disbelief, frustration and amusement. Such is the degree of competitiveness and secrecy in the market – where Google, Facebook and other big tech companies believe that whoever cracks AI first will be rewarded with riches beyond their wildest dreams – that they don't trust anyone enough to let them know what they are working on, what issues they are facing, or even how they even see the problem.

And so the people making the chips that could decide whether a video is identified and taken down before it appears online are, in some respects, operating in the dark.

Snelgrove gives one example of where this disconnect has held everyone back: Untether AI developed its chips with efficiency in mind, going back and forth repeatedly in its design to reduce the amount of wiring in order to eke out a slight efficiency boost.

But it was only when the company proudly told potential customers that its chips would be 30 per cent more efficient that it realized no one cared about the saving itself: 30 per cent more efficiency simply meant 30 per cent more potential processing power. "There is seemingly infinite demand," he explained. "It's a muscle power business."

Brute force

The current mindset of those building AI systems – from autonomous cars to digital assistants to image processing and so on – is that brute force wins.

This winner-takes-all belief extends to the specialist AI chip business, too. Snelgrove confesses he believes there will ultimately be only one winner: a standard will emerge, and everyone will build on top of that for the foreseeable future, in much the same way the x86 architecture has dominated the traditional chip market for 40 years.

The hassle of moving away just isn't worth the additional benefits. He calls it a "dog fight." And there are some powerful players: Google, Nvidia and Intel, among others, are all working on their own AI chips.

But in the meantime, the lack of communication across the industry is causing its own inefficiencies. Snelgrove tells us his chips are being manufactured on a 16 nanometer process, in large part because the company can't be sure whether the designs will need to change in response to feedback from customers.

It would be more efficient to use a cutting-edge node, such as 7nm, but the cost of doing so would be roughly four times greater, so the smart financial decision is to hold off. There are similar compromises across the entire manufacturing and design process.
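As a toy illustration of that financial logic – with made-up, normalized numbers, since real mask-set pricing isn't given here – consider the expected cost of going straight to the leading edge versus iterating at 16nm first, when a customer-driven redesign is likely:

```python
# Why prototype at 16nm first: a toy expected-cost model. Every number is
# a hypothetical placeholder, not real foundry pricing.

mask_16nm = 1.0        # normalized cost of a 16nm mask set
mask_leading = 4.0     # article: roughly four times greater at the leading edge
p_respin = 0.6         # assumed chance customer feedback forces a redesign

# Option A: go straight to the leading edge; a respin repeats the big spend.
straight = mask_leading * (1 + p_respin)

# Option B: iterate cheaply at 16nm, then port once the design is stable.
staged = mask_16nm * (1 + p_respin) + mask_leading

print(f"straight to leading edge: {straight:.1f} units")  # 6.4
print(f"prototype at 16nm first:  {staged:.1f} units")    # 5.6
```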

There's one other aspect worth mentioning when it comes to specialist AI chips: the differentiator in the market – who gets to become the new x86 – may not actually be the company that comes up with the best hardware, but the one that arrives with the right software/hardware mix.

Because of the chips' specialist nature, and the removal of so many aspects of a traditional processor, there has to be software that enables other systems to communicate effectively with the optimized hardware. To that end, Snelgrove says a big chunk of the new $13m investment from Intel will actually go into building up the company's software team.

Another breakthrough

When it comes to AI chips, we also spoke to a company at the other end of the scale: Syntiant is basically doing the same thing as Untether AI in that it is optimizing a chip for AI by stripping out the bus and focusing on memory, but with two critical differences.

First, Syntiant's system is analog, not digital, which is better in some respects but won't ever scale to do things like batch processing of video. And second, it is aimed at the low-end of the market: the smaller and less power hungry the better.

The chip the company is focusing on addresses a currently niche but potentially enormous market: voice recognition.


We spoke to Syntiant CEO Kurt Busch, who told us the chip can recognize 64 words, measures less than two millimeters across, and uses so little power that it can run in a small battery-powered device.
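For a sense of what such a chip computes, here is a deliberately tiny Python sketch of a 64-word recognizer: audio features in, one of 64 word scores out. The layer sizes, feature pipeline, and random weights are invented for illustration – Syntiant's actual network, and its analog implementation, are another matter entirely:

```python
import numpy as np

VOCAB = 64          # the chip recognizes 64 words
N_FEATURES = 40     # e.g. 40 mel-band energies from a short audio frame (assumed)
HIDDEN = 128        # small hidden layer (assumed)

# Random stand-in weights; on the real chip these would live in on-chip memory.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((N_FEATURES, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, VOCAB)) * 0.1

def recognize(features: np.ndarray) -> int:
    """Return the index (0-63) of the highest-scoring word for one audio frame."""
    hidden = np.maximum(features @ W1, 0.0)   # small ReLU layer
    scores = hidden @ W2                      # one score per vocabulary word
    return int(np.argmax(scores))

frame = rng.standard_normal(N_FEATURES)       # stand-in for real audio features
print(f"predicted word index: {recognize(frame)}")
```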

What that means is that you can add speech recognition to everything – something like Amazon's Alexa system could be added to pretty much anything. While that may sound like a nightmare to some, it does have some very good possible uses.

For one, we could finally – finally! – get rid of the need to say a specific wake word like "Alexa" or, worse, "OK Google" and allow users to select their own. We could also set things up so digital assistants are trained on your and your family's voices, providing additional security and privacy.

And for those who hate the whole idea of Amazon, Google, or Apple-owned bugs in the house, the tech could be used for application-specific voice recognition: someone with a hearing aid, for example, would be able to raise or lower its sensitivity simply by saying the right command out loud, and your wireless headphones could be set up to respond to a range of commands.

Plus it will only be a matter of time before someone figures out how to build a functional Star Trek communicator – and don't pretend you're not excited by that idea. ®
