‘Artificial Intelligence’ was 2016's fake news

Putting the 'AI' into FAIL

“Fake news” vexed the media classes greatly in 2016, but the tech world perfected the art long ago. With “the internet” no longer a credible vehicle for Silicon Valley’s wild fantasies and intellectual bullying of other industries – the internet clearly isn’t working for people – “AI” has taken its place.

Almost everything you read about AI is fake news. The AI coverage comes from a media willing itself into the mind of a three-year-old child in order to be impressed.

For example, how many human jobs did AI replace in 2016? If you gave professional pundits a multiple-choice question listing these three answers: 3 million, 300,000 and none, I suspect very few would choose the correct answer, which is, of course, “none”.

Similarly, if you asked tech experts which recent theoretical or technical breakthrough could account for the rise in coverage of AI, even fewer would be able to answer correctly that “there hasn’t been one”.

As with the most cynical (or deranged) internet hypesters, the current “AI” hype has a grain of truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually tweak their models. Speech recognition is a good example: it has been quietly improving for three decades. But the gains come nowhere near matching the hype: they’re specialised and very limited in use. So not entirely useless, just vastly overhyped. As such, it more closely resembles “IoT”, where boring things happen quietly for years, rather than “Digital Transformation”, which means nothing at all.

The more honest researchers acknowledge as much to me, at least off the record.

"What we have seen lately, is that while systems can learn things they are not explicitly told, this is mostly in virtue of having more data, not more subtlety about the data. So, what seems to be AI, is really vast knowledge, combined with a sophisticated UX," one veteran told me.

But who can blame them for keeping quiet when money is suddenly pouring into their backwater, which has been unfashionable for over two decades, ever since the last AI hype collapsed like a soufflé? What’s happened this time is that the definition of “AI” has been stretched so that it generously encompasses pretty much anything with an algorithm. Algorithms don’t sound as sexy, do they? They’re not artificial or intelligent.

The bubble hasn’t yet burst because the novelty examples of AI haven’t really been examined closely (we find they are hilariously inept when we do), and they’re not functioning services yet. For example, have a look at the amazing “neural karaoke” that researchers at the University of Toronto developed. Please do: it made the worst Christmas record ever.

It's very versatile: it can write the worst non-Christmas songs you've ever heard, too.

Neural karaoke. The worst song ever, guaranteed

Here I’ll offer three reasons why 2016’s AI hype will begin to unravel in 2017. That’s a conservative guess – much of what is touted as a breakthrough today will soon be the subject of viral derision, or the cause of big litigation. There are everyday reasons for this: once an AI application is out of the lab/PR environment, where it’s been nurtured and pampered like a spoiled infant, it finds the real world a lot more unforgiving. People don’t actually want it.

3. Liability: So you're Too Smart To Fail?

Nine years ago, the biggest financial catastrophe since the 1930s hit the world, and precisely zero bankers went to jail for it. Many kept their perks and pensions. People aren’t so happy about this.

So how do you think an all-purpose “cat ate my homework” excuse is going to go down with the public, or shareholders? A successfully functioning AI – one that did what it said on the tin – would pose serious challenges to criminal liability frameworks. When something goes wrong, such as a car crash or a bank failure, who do you put in jail? The Board, the CEO or the programmer – or all three? "None of the above" is not going to be an option this time.

I believe that this factor alone will keep “AI” out of critical decision-making where lives and large amounts of other people’s money are at stake. For sure, some people will try to deploy algorithms in important cases. But ultimately there are victims – the public and shareholders – and their appetite for hearing yet another excuse is wearing very thin. Let's check in on how the Minority Report-style precog detection is going. Actually, let's not.

After “Too Big To Fail”, nobody is going to buy “Too Smart to Fail”.
