
Artificial Intelligence: You know it isn't real, yeah?

It's not big and it's not clever. Well, not clever, anyway

Something for the Weekend, Sir? "Where's the intelligence?" cried a voice from the back.

It's not quite the question one expected during the Q&A session at the end of the 2019 BCS Turing Talk on Artificial Intelligence. The event was held earlier this week at the swanky IET building in London’s Savoy Place and the audience comprised academics, developers and tech professionals.

You'd think such an interjection was akin to someone grabbing the microphone in the main auditorium during a cryptocurrency conference and blurting "So… there aren't any actual coins?" Surely this was a cue for the auditorium to resound with an unpleasant cacophony of forehead-slapping and eye-rolling. And let me tell you, the ugly wet sound of hundreds of people rolling their eyes at the same time is the stuff of Japanese body horror nightmares.

Yet nobody laughed, no one tutted. Instead, there was merely the gentle rustle of uncomfortable shifting bums. Almost everyone else in the room had already seen the wiring under the board and knew full well that Artificial Intelligence is as dumb as fuck.

No, that's unfair. It's just that AI is not what the general public thinks it is. Fuelled by sci-fi, childishly innocent news media and outrageously misleading futurists, Joe Average is under the impression that AI is an amoral, self-thinking machine powered by ghosts.

The reality is mundane: AI is just a bunch of algorithms acting on data fed to it by its human programmers. One thing it isn't is amoral. If anything, the opposite is true, since every input and process is steeped in bias and clouded by interpretation, whether out of goodwill or ill intent.

This was the core theme of the Turing Talk's keynote speaker, Dr Krishna Gummadi, head of the networked systems research group at the Max Planck Institute for Software Systems (MPI-SWS) and a professor at Saarland University in Germany. He used the word "algorithms" a lot. It's a spooky word because everybody pronounces it differently. Dr Gummadi must have made it sound a bit like "Algarve" because when I checked my phone later, I found I had absent-mindedly jotted a reminder in Todoist to book my summer holiday.

(I am not mocking an accent. Dr Gummadi is a smart, articulate and engaging academic. He is also the only man in the world who can articulate the word "recidivism" mid-sentence without a few practice runs or pausing for a swig of Monster Energy between syllables.)

Dr Gummadi's theme that evening was how easily bias creeps into AI projects, followed by suggestions for ways of eradicating this in the future through ethical training techniques.

The examples he gave of the former included details from Equivant's (then Northpointe) notorious COMPAS project, developed to help judges in the US determine appropriate prison sentencing and the award of parole based on AI-churned data. Referred to as "predictive policing" – I'm sure you can conjure your own mandatory Minority Report references at this point – COMPAS was supposed to apply a bit of clever data profiling, through interpretation of anonymised justice records, to predict the likelihood of the accused standing in the dock being rehabilitated or going on to re-offend.

What the AI actually did was urge judges to be more lenient with white people than black people.

Aghast, no doubt, the COMPAS team tried to eradicate this bias by removing certain fields from the datasets, most notably those containing racial detail. But despite deliberately being rendered colourblind, the AI algorithms continued to suggest giving black suspects a harder time in court than those with white skin. What was going wrong with the Recidivism Risk Protection Tool?

[…takes a slug of Relentless…]

The cause seems to be well-meaning but fundamentally dim AI failing to recognise doubtful data when it encounters it. If the computers ever overthrow their human overlords to establish an evil Robot Algocracy, they'll achieve it through being thick.


Fair objectives are not enough if the data itself is biased. For example, if the courts have been over-sentencing black people for generations – as a result of ignorance, the social conditions of certain racial groups or plain malice in the justice system, who knows – letting an AI rip on the unbalanced data simply trains it to be similarly biased. Hiding a field labelled "skin color" compensates for nothing: other fields, such as postcode or arrest history, correlate with race closely enough that the algorithms charge ahead and reconstruct the same patterns of biased social profiling anyway.
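To see why dropping the field doesn't help, here's a minimal sketch – entirely synthetic, and nothing to do with COMPAS's actual implementation – in which a model is trained on "historical" decisions that were already biased against one group. The protected attribute is removed before training, but a made-up proxy feature ("postcode_band") that correlates with it is left in, and the model cheerfully reconstructs the bias:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model) and a proxy that tracks it.
group = rng.integers(0, 2, n)
postcode_band = (group + rng.binomial(1, 0.1, n)) % 2  # ~90% identical to group
prior_offences = rng.poisson(1.5, n)                   # legitimate-looking feature

# Historical "high risk" labels are biased: group 1 was flagged more often
# for the same number of prior offences.
p_flagged = 1 / (1 + np.exp(-(0.5 * prior_offences + 1.2 * group - 2.0)))
flagged = rng.binomial(1, p_flagged)

# Train the "colourblind" model: no group column, just offences and postcode.
X = np.column_stack([prior_offences, postcode_band])
model = LogisticRegression().fit(X, flagged)

# The bias survives, smuggled in via the postcode proxy.
scores = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", scores[group == 0].mean())
print("mean predicted risk, group 1:", scores[group == 1].mean())

Run it and the mean predicted risk for group 1 comes out noticeably higher than for group 0, even though the model was never told which group anyone belongs to.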

Another example is the inherent sexism of AI-powered translation software when shuttling words between languages that handle grammatical gender differently. Turkish doesn't gender its pronouns, so the software has to guess when translating into English: the doctor comes out as "he", the nurse as "she". Try a sentence in Google Translate such as "I spoke to the nurse today" and convert it to French: you will always get the feminine "infirmière", never the masculine "infirmier", nor will you be offered the option to choose between them.
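The mechanics are no more sinister than frequency. Here's a toy sketch – made-up counts, and emphatically not how Google Translate is built – of a translator that simply picks whichever target form it has seen most often in its training corpus; the majority gender wins every single time:

# Toy model: pick the most frequent translation seen in a (made-up) corpus.
corpus_counts = {
    "nurse":  {"infirmière": 9200, "infirmier": 800},
    "doctor": {"médecin": 9500, "docteure": 500},
}

def translate(word: str) -> str:
    """Return whichever candidate translation the corpus contains most often."""
    candidates = corpus_counts[word]
    return max(candidates, key=candidates.get)

print(translate("nurse"))   # always "infirmière", never "infirmier"
print(translate("doctor"))  # always the majority form

A real neural system is vastly more sophisticated, but the gist is the same: it reproduces whatever imbalance sat in the text it learned from.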

Well, you can hardly lay the peculiarities of language and cultural norms at the feet of AI. The algorithms can only process the racist, sexist bullshit that humans feed them, in accordance with training processes that are similarly tainted. If the AI were intelligent, it would work this out for itself. It's not, so it doesn't.

The final nail in the coffin of Joe Public's common concept of AI was driven in when another member of the Turing Talk audience asked how it might be possible to imbue AI with "emotional intelligence" so that it could determine for itself what is right and what is wrong. Dr Gummadi raised his hammer for the blow: "No, AI can't have emotional intelligence. It can be taught ethics, though."

Bang.

What if all the data you want to use is biased? "We need to reconsider the learning models with an eye on the future, allowing for change."

Bang.

So where's the intelligence? "It's in the process of prediction based on the data it is given."

Bang.

But this data's wrong, isn't it? "Yes."

Bang.


Alistair Dabbs
Alistair Dabbs is a freelance technology tart, juggling tech journalism, training and digital publishing. He was recently faced with an ethical dilemma of his own while driving an out-of-control car. Given the choice of crashing into an old woman or a baby in a pushchair, he managed to skirt between them, mounting the kerb and driving directly onto the pavement itself, so that he would only collide with commuters on scooters, hipsters on skateboards, trundling delivery drones and yelling cyclists. @alidabbs
