
Brain-plug weapons could provide war crime immunity

Lawyer spots future brainhat-slaughter atrocity loophole

Comment An American law student has published an analysis of international law regarding war crimes that might be committed using future brain-interface-controlled weapon systems.

Stephen White, studying at Cornell Law School, had his paper Brave New World: Neurowarfare and the Limits of International Humanitarian Law published (pdf) in the current issue of the Cornell International Law Journal. The paper has been picked up here and there on the tech net. In it, White makes particular reference to the various "Brain-Machine Interface" ploys being pursued by DARPA, the Pentagon mad-science outfit that loves a long shot.

An example of such kit is the famous mind-probe hat, intended to monitor a soldier's brainwaves as he eyes the situation around him and throw up a threat marker on his visor before his conscious mind has even realised there is danger there.

DARPA reckon this could sometimes provide a useful speed advantage, as White notes:

One of the justifications for employing a brain-machine interface is that the human brain can perform image calculations in parallel and can thus recognize items, such as targets, and classify them in 200 milliseconds, a rate orders of magnitude faster than computers can perform such operations. In fact, the image processing occurs faster than the subject can become conscious of what he or she sees...

White looks forward to a day where such a system, rather than simply flagging up a subconsciously-spotted danger and perhaps training a weapon on it, actually opens fire without further ado. That would be fine if the soldier's brain had correctly spotted a legitimate target, but obviously less so if instead a noncombatant got smoked. That would be a war crime. But would the soldier be guilty? All he or she did was think a bad thought, White argues, and:

Anglo-American criminal law has refused to criminalize someone solely for his or her bad thoughts... This fairness concern reflects the awareness... that punishing bad thoughts might have perverse social consequences... criminal law has refused to stigmatize those who contemplate bad deeds but do not actually perform them.

A layman might argue that certain areas of bad thought are in fact quite often punished by the criminal law, but we'll skip over that. White says that to bust someone for a war crime you need to show that he or she consciously chose to commit it, and presumably he knows what he's on about.

In summary, a brain-interface guided weapon could circumvent the pilot’s normal volitional processing signals and rely solely on the recognition activity, thereby making it impossible for courts to determine whether a volitional act occurred before weapon targeting... a prosecutor could never definitively prove anything more than the most attenuated guilt for misdirected attacks on protected persons.

According to bonce-boffins cited by White, the conscious mind - especially in situations such as combat, where a lot of subconscious instincts are in play - tends to operate mainly by vetoing actions, rather than by thinking of them itself. The bloodthirsty subconscious tends to work on the "kill 'em all and let God sort them out" principle, but the more civilised part of the mind can suppress these impulses if it wants to. Under this theory, a human being doesn't so much exercise free will as "free won't".

So what's to be done? It would be silly to try and prohibit brain-shortout weapons altogether, says White. He reckons that if people had prohibited the smartbomb and the target-seeking weapon, for instance, we'd still be stuck with horrible messy cluster bombs.

Such a prohibition... might create the unintended consequence of hindering research into weapon systems that may prove more accurate than existing weapons...

International humanitarian law, therefore... should create incentives to produce maximally predictable and accurate weapons and to clarify the lines of authority in wartime in order to make criminal accountability easier to determine.

But then White suddenly executes a neck-snapping volte-face and starts arguing for wholesale technology suppression.

[This] would likely require prosecution of high-level civilian developers... Many high-level weapon designers have escaped prosecution for design of indiscriminate or disproportionate military systems... For instance, after World War II, the Nazi engineer Wernher von Braun evaded war crimes charges because the United States sought his expertise in designing rockets that were critical for military dominance in the Cold War.

Putting engineers on notice of their potential liability may create incentives for them to create less indiscriminate and disproportionate weapons. A view of command responsibility would also create de facto liability for those most responsible for sanctioning the use of such weapons.

Frankly, White seems to have gone off the rails altogether here. So the US should have executed von Braun because his weapons were used to randomly bombard London? They'd have been morally bound to round up and shoot every boffin on the Manhattan Project, too - the A-bombs produced by Oppenheimer and his crowd were vastly more indiscriminate and deadly than the V-2s.

In any case, the successors to von Braun's V-2s did allow the West not to lose the Cold War - which many would say was a good thing in itself, well worth amnestying him for. Prosecutors can let people off in exchange for testimony, after all - why not for crucial help in preserving the very legal system they represent?

Furthermore, in the end the ballistic rockets turned into ICBMs accurate enough to take out individual hardened silos, in the process spawning various technologies including integrated circuits and then GPS. GPS is a major part of the precision weaponry that White approves of.

Getting back to brain-machine interfaces, there aren't that many military applications where milliseconds count so much that you might short-circuit the human brain - even if you really could. Mostly it just wouldn't make sense.

For instance, quicksilver brain-directed weapons could conceivably be handy for close-quarters gunfighting one day. But if the system is going from subconscious assessment to shooting without further ado, it must be aiming the gun itself as well as firing it - at which point the thing is essentially an automated weapons turret on a robot. Why not put the operator and his wired-up brain off the battlefield via remote link, then? Once he's out of harm's way, the need for split-second speed to keep him safe has vanished, so you may as well just give him a normal, consciously operated firing switch.

Etc, etc.

The brain-machine legal point is mildly interesting, but realistically brain jumpwire systems are probably never going to be a big deal; not ones with firing authority, anyway. The fact that DARPA is looking at an idea doesn't mean that it's likely to come into service - quite the reverse, actually. It could be that the international war-crimes judiciary has rather more serious issues to worry about.

As for "Putting engineers on notice of their potential liability" - oh dear. So, we'll hang rocket engineers in case they make ICBMs? Why not lock up Sir Frank Whittle, inventor of the jet engine, while we're at it. Jets have allowed far more indiscriminate ordnance to be dropped since 1945 than ever was before. Of course, that all means no space programme, no airliners, no silicon chips, no computers. No fertilisers; you might invent nerve gas by accident, so just try to eat less. Actually, moving on back in history, no technology at all - inventing the stone axe would be a crime under potential-liability rules of this sort.

Alternatively, how about putting lawyers on notice of their countervailing liability in cases of stifling progress and so condemning the human race to prolonged and unnecessary death and suffering? ®
