Human Rights Watch proposes new laws of robotics
Wants autonomous 'bots banned before Predators become Terminators
Human Rights Watch (HRW) has issued a document titled Losing Humanity: The Case against Killer Robots that argues development of autonomous weapons must be stopped because it represents a threat to human rights.
The document defines three types of autonomous weapons, namely:
- Human-in-the-Loop Weapons: Robots that can select targets and deliver force only with a human command;
- Human-on-the-Loop Weapons: Robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions; and
- Human-out-of-the-Loop Weapons: Robots that are capable of selecting targets and delivering force without any human input or interaction.
Only the third type does not yet exist, the document says, but it adds that such weapons are in development and that “Many countries employ weapons defense systems that are programmed to respond automatically to threats from incoming munitions.”
HRW sees three big problems with autonomous weapons, arguing that “By eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine other, non-legal protections for civilians.”
The group worries that “robots would not be restrained by human emotions and the capacity for compassion, which can provide an important check on the killing of civilians.” That, it argues, would make them ideal tools for “repressive dictators seeking to crack down on their own people without fear their troops would turn on them.”
Killer bots would also make armed conflict more likely, the group argues: by minimising human casualties among aggressors, “it would also make it easier for political leaders to resort to force” and “The likelihood of armed conflict could thus increase, while the burden of war would shift from combatants to civilians caught in the crossfire.”
The third concern is accountability, as it's hard to apply humanitarian law to a robot or its programmer. Existing laws and remedies would therefore struggle to deliver “meaningful retributive justice”.
Another issue is whether autonomous weapons would always act within the bounds of the laws of war, which insist that combatants distinguish between the civilian population and other combatants, and direct force only at military targets. “International humanitarian law also prohibits disproportionate attacks, in which civilian harm outweighs military benefits,” the document notes, later expressing doubts that artificial intelligence technologies will be able to make these kinds of judgements effectively, or at all.
The document proposes “an international legally binding instrument” to ban the development, manufacture and use of autonomous weapons, plus national laws to the same effect.
HRW even goes so far as to say that the global community should “Commence reviews of technologies and components that could lead to fully autonomous weapons” and nip them in the bud. Scientists working on related technologies, the document suggests, should be bound by “… a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.” ®