AI bots will kill us all! Or at least may seriously inconvenience humans

Elon Musk again demands govt intervention to halt crazed computers

Hampering progress

Pascal Kaufmann, founder of Starmind, which makes an AI-flavored corporate question-answering system, voiced skepticism in an email to The Register about conflating human intelligence with AI-oriented computation. "Such claims just foster this misconception and actually hamper progress in AI," he said.

Kaufmann sees regulating AI as problematic because we lack a common definition of artificial intelligence – or, for that matter, of human intelligence.

"Before AI becomes a risk to our civilization, the brute-force automatization of processes and the loss of countless jobs worldwide pose a much larger conflict potential than sentient machines taking over: The simple efficiency increase of our labor challenges our society already," he said.

Jeffrey Bigham, associate professor at Carnegie Mellon's Language Technologies Institute, in a phone interview with The Register, said: "I think there's a reasonable concern to be thinking about our ability to understand what AI systems are doing. It's pretty worrying now that we're making a lot of decisions in part and in full with AI systems we don't fully understand, and that's only likely to increase."

Similarly, AI guru Andrew Ng once said worrying about killer artificial intelligence today is like worrying about overpopulation on Mars: sure, the latter may be a valid concern at some point, but we haven't even set foot on the Red Planet yet.

The problem of black-box code – software that operates in a way that can't be easily audited, understood, or anticipated – is a longstanding issue among those familiar with AI-oriented disciplines such as deep learning, machine learning, reinforcement learning, and neural networks.

But it applies even to purely deterministic code, the sort not normally characterized as "intelligent." The May 6, 2010, Flash Crash saw major stock-market losses in a matter of minutes, the result of a poorly designed trading algorithm. The problem wasn't AI so much as insufficient application of human intelligence – a failure to anticipate how the trading code would perform under unusual market conditions.

Transparency about how code operates, said Bigham, is a matter of legitimate concern. While human decision-making can also be a black box of sorts, Bigham contends human biases can often be intuited. "Machine bias we don't have a way to query for or understand, especially with a black box model," he said.
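For illustration – and this is our own toy sketch, not anything from Bigham's group – here's roughly what "querying" a black-box model looks like from the outside. You can ask a trained network for a verdict, and you can perturb its inputs one at a time to see how the verdict shifts, but there's no way to ask it why it decided what it did. The scikit-learn classifier and the made-up two-feature dataset below are ours, purely for demonstration:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Synthetic data: two input features, but only feature 0 actually
    # determines the label – feature 1 is pure noise.
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] > 0).astype(int)

    # A small neural network: the archetypal black box.
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X, y)

    sample = np.array([[0.3, -1.2]])
    print("decision:", model.predict(sample)[0])  # a verdict, no rationale attached

    # The generic outside-the-box probe: nudge each input and watch the
    # output probability move (crude perturbation-based sensitivity analysis).
    base = model.predict_proba(sample)[0, 1]
    for i in range(X.shape[1]):
        nudged = sample.copy()
        nudged[0, i] += 0.5
        delta = model.predict_proba(nudged)[0, 1] - base
        print(f"feature {i} sensitivity: {delta:+.3f}")

Run it and the probe correctly fingers feature 0 as the one driving the decision – but that's inference from behavior, not an explanation from the model, which is precisely the gap Bigham is pointing at.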

Perhaps what Musk is asking for is simply the regulation of any computer code that can produce a substantive effect in the real world. To judge by the social harm done by unchecked Twitter propaganda bots, ticket-scalping bots, and spam bots, regulation may be closer than it appears. ®

PS: We don't fear a robot revolution just yet, though. Here's a photo of a security patrol robot that fell into a water feature in a Washington DC office complex on Monday. Maybe this Dalek-like machine wasn't taking the news of the 13th Dr Who too well. OpenAI, meanwhile, today revealed some cool research into fooling neural-network classifiers.
