Original URL: http://www.theregister.co.uk/2008/04/02/killer_robot_ban_call/

Landmine charity: Ban the killer robots before it's too late!

They're already here, chum - look around

By Lewis Page

Posted in Government, 2nd April 2008 07:02 GMT

Comment A London-based anti-landmine and cluster bomb charity has now widened its remit and is calling for a moratorium on the use of killer robots.

"Our concern is that humans, not sensors, should make targeting decisions," said Richard Moyes of Landmine Action, quoted by New Scientist. "We don't want to move towards robots that make decisions about combatants and noncombatants."

It seems that Moyes and Landmine Action came up with the idea of a killer-robot ban following recent high-profile comments made by famed robopocalypse media-Cassandra Professor Noel Sharkey.

Prof Sharkey, perhaps best known for his role as a judge in TV's Robot Wars, is nonetheless firmly against any kind of actual robot wars. He has repeatedly warned of the coming danger to humanity posed by unfettered, soulless machine warriors in "a robot arms race that will be difficult to stop ... I can imagine a little girl being zapped because she points her ice cream at a robot to share".

On other occasions Sharkey has seemed to offer a different perspective on military death-droids, saying "it would be great if all the military were robots and they could fight each other". Overall, however, he is seen as the man to go to if you need an automatamageddon quote or soundbite.

Meanwhile, Moyes' group has decided to join Sharkey and campaign against any kind of military machines which would make their own targeting decisions. Moyes draws a parallel with current tank-buster munitions, deployed above a target area by artillery shell or airdrop. These days, this sort of weapon will often detonate itself harmlessly in midair if it can't find anything it thinks is a tank; but Landmine Action believes this is bad, as the decision is not made by a human. Their plan would be a ban on that whole class of weapons, and a ban - as they see it, a pre-emptive ban - on killer robots.

Sharkey, Moyes et al are a bit behind the fair on this one. Killer robots within their definition - automatic systems which make combatant/noncombatant targeting decisions for themselves - have been around for some time.

Anti-shipping missiles have existed for decades which can be sent off over the horizon and look around for a target based on various criteria. Robot gun and missile anti-air installations are often designed for only basic human input - eg, turning a key to activate them - from which point they will sweep the skies of anything which looks dangerous to them.

Landmines both manufactured and improvised, far from being uniformly indiscriminate, will often be set up to attack specifically combatant targets. This is most commonly achieved by the charge being set to go off only if the road is subjected to the enormous weight of armoured vehicles, rather than pedestrians or mere civilian wheels.
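The weight-discrimination rule described above amounts to a simple threshold test. A minimal sketch, in which the threshold figure and function name are our own illustrative assumptions rather than details of any real munition:

```python
# Hypothetical sketch of a pressure-fuse rule: fire only under loads
# characteristic of armoured vehicles, not pedestrians or civilian cars.
# The threshold is an assumed, illustrative figure.

ARMOUR_THRESHOLD_KG = 5000  # assumption: tanks weigh tens of tonnes

def should_detonate(load_kg: float) -> bool:
    """Return True only for loads far beyond people or ordinary road traffic."""
    return load_kg >= ARMOUR_THRESHOLD_KG

assert not should_detonate(80)      # pedestrian
assert not should_detonate(1500)    # family car
assert should_detonate(60000)       # main battle tank
```

The point isn't sophistication - it's that even a one-line rule like this is, by the campaigners' definition, a machine making a combatant/noncombatant targeting decision.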

Indeed, this debate is at least a century old. Sea mines have been required since the Hague Convention of 1907 to autonomously discriminate between legitimate and non-legitimate targets. Under the convention it is considered OK to use moored contact sea mines in publicised, warned minefields - provided they disarm themselves if their mooring cables come loose*. In effect, a "killer robot" mine decides whether or not a ship bumping into it is a legitimate target (that is, one acceptably near to the location of its own mooring sinker) - without human involvement. More modern magnetic/acoustic/pressure mines are often set up specifically to look for the signatures of specific types or sizes of vessel - generally seeking to pick out certain classes of enemy warship.
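The 1907 rule reduces to two checks: disarm when the mooring cable parts, and otherwise treat a contact near the declared mooring point as legitimate. A minimal sketch of that logic - the drift radius, function name, and string outcomes are our own illustrative assumptions:

```python
# Hypothetical sketch of the Hague-style moored-mine rule described above:
# render safe if the mooring is lost; otherwise a contact near the
# declared sinker position counts as legitimate.
import math

MAX_DRIFT_M = 50.0  # assumed radius around the mooring sinker

def mine_decision(moored: bool, contact_pos, sinker_pos) -> str:
    """Decide a contact's fate without any human in the loop."""
    if not moored:
        return "disarmed"  # cable parted: mine must make itself safe
    drift = math.dist(contact_pos, sinker_pos)
    return "detonate" if drift <= MAX_DRIFT_M else "disarmed"

assert mine_decision(False, (0, 0), (0, 0)) == "disarmed"
assert mine_decision(True, (10, 0), (0, 0)) == "detonate"
```

A century-old "killer robot", expressible in half a dozen lines.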

The killer robots, in this sense, are already widespread.

In any case, the idea that having a human in the loop will necessarily make for a weapon less prone to accidentally slaughter innocents seems hard to support. This is particularly the case when the human's own personal life may depend on a threat being eliminated quickly and certainly. A robot can be programmed to calmly accept the risk that failure to open fire may mean its own destruction; humans often tend to prioritise their own survival over absolute moral righteousness, no matter what instructions they may have been given.

Thus, the difference between the quick and the dead when up against an armed and dangerous enemy often leads soldiers and policemen to shoot or blow up people they shouldn't. A robot could well, in fact, be better at applying rules of engagement which may feel - may actually be - suicidal if followed strictly by human operators.

Even where the human isn't personally at risk, he or she is no panacea regarding the right call getting made. Rage, hatred, over-excitement, unwillingness to allow enemies to escape, simple stress or tiredness - all these can lead the controller of a remote system to be far more bloodthirsty than a correctly programmed autonomous one. It's also been noted many times that those who kill remotely are significantly less prone to freeze up on the trigger than those who must watch their victims die in front of them; so safe remote humans aren't necessarily better than frightened sweaty ones. There are no simple answers here.

And even the most righteous, unworried human won't make a better shoot/don't-shoot call unless the decision rests on information that people excel at interpreting - for instance, video or still images of someone who may or may not be armed. Humans do well with certain kinds of imagery; that proficiency doesn't mean they bring anything to the party when all you have is other kinds of data. Nor does it follow that the absence of a human in the loop automatically makes a killing unacceptable.

We here on the Reg killer-robot desk maintain a guardedly open mind regarding autonomous lethal systems. We don't especially welcome our new killer robot overlords, but we don't propose to panickily (and probably ineffectually) legislate them out of existence either. The more so as - in this sense - they're actually already here, and have been for at least a hundred years.®

Bootnote

*Readers may not be surprised to note that this piece of international law hasn't been universally followed during recent sea mining campaigns in the Persian/Arabian Gulf. Not that the miscreants in question had ratified the Hague Convention anyway.