Engineers, coders – it's down to you to prevent AI being weaponised

Grunts already refer to drone kills as 'bugsplats' – machine learning cares less

Comment Debate has raged for months over various internet giants' forays into providing next-generation technology for war.

For example, in March, dissenters at Google went to the press about the web goliath's contract with the US military's Project Maven, which aims to fit drones with object-detecting AI among other things.

This US Department of Defense project apparently began a year earlier, in April 2017. Google has always maintained that its computer vision TensorFlow APIs are for "non-offensive" purposes only, and has repeatedly denied that its technology would be put to malicious use under its Maven contract with the Pentagon.

Over 3,000 staff signed an open letter to Google CEO Sundar Pichai, stating that "Google should not be in the business of war".

Academics across the world also urged Google to stop working on the project.

Though the extent of its involvement has never been clear, Google caved last month, and said it won't renew its contract with the Algorithmic Warfare Cross-Functional Team (AWCFT) – aka Maven. It also published a list of "AI principles" which suggest Google won't build weapons or "technologies whose principal purpose... is to cause or directly facilitate injury to people".

Some saw the move as a positive start. Others noticed that it leaves key questions unanswered.

The debate about drone warfare has some history – history Google should have considered before it made its bid. I've investigated the drone wars in Yemen and Pakistan since 2011, and I've represented civilians who lost innocent relatives in drone strikes. The public needs a healthy dose of realism about how America has used and will use these technologies, and how the war on terror looks on the ground where it is waged.

Of course it's about targeting

I was in Yemen the first time I saw how a black-box algorithm could go wrong. In 2013 I visited the capital Sana'a to investigate drone attacks, gathering evidence and interviewing survivors. At one point, an elderly man approached me. He was Faisal bin Ali Jaber, an environmental engineer. He had filled a hard drive with data about a US drone strike in 2012 on two of his relatives. His nephew, Waleed, was the village policeman; his brother-in-law, Salem, was a prominent imam. In a sermon days before the attack, Salem had denounced al-Qaeda. Both were killed in the strike.

Faisal wanted answers. Why was his family attacked? Together we travelled 7,000 miles to Washington DC, seeking an explanation. White House officials met Faisal, and later passed the family a cash "condolence" payment, but they never said why Salem and Waleed were caught in the crosshairs.

The problem was clear even then. This was likely a so-called "signature strike": the targets' identities were unknown, but surveillance data about them threw up red flags in a targeting algorithm. A human fired the missiles, but did so, in part, on the software's recommendation.

Google defended its contribution to Project Maven, saying: "The technology flags images for human review, and is for non-offensive uses only." It's good to hope so, but naive. As anyone who has seen the "Collateral Murder" video knows, distinguishing objects – say, a camera from a gun – is the heart of targeting.

Target acquisition was always the point of Project Maven. The memorandum setting up Maven says its "first task" is "to augment or automate Processing, Exploitation, and Dissemination for tactical Unmanned Aerial System and Mid-Altitude Full-Motion Video in support of the Defeat-ISIS campaign". That's soldier-speak for "AI will scan drone feeds to find targets for attack."
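To make the memo's jargon concrete, here is a minimal, purely illustrative sketch of the kind of pipeline it describes: an off-the-shelf image classifier scanning drone video frames and queueing anything it thinks it recognises for a human analyst. This is not Maven's code – the model (a stock ImageNet classifier), the labels, the threshold and the file name are all stand-ins.

```python
# Illustrative sketch only: a stock ImageNet classifier standing in for a
# bespoke military detector, flagging video frames for human review.
import cv2                      # pip install opencv-python
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
preprocess = tf.keras.applications.mobilenet_v2.preprocess_input
decode = tf.keras.applications.mobilenet_v2.decode_predictions

def flag_frames(video_path, labels_of_interest, threshold=0.6):
    """Yield (frame_index, label, score) for frames a human should review."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Convert BGR->RGB, resize to the classifier's input, run inference.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = cv2.resize(rgb, (224, 224)).astype(np.float32)[np.newaxis, ...]
        preds = decode(model.predict(preprocess(img), verbose=0), top=3)[0]
        for _, label, score in preds:
            if label in labels_of_interest and score >= threshold:
                yield idx, label, float(score)
        idx += 1
    cap.release()

# Hypothetical usage: flag anything the model thinks is a rifle or a pickup
# truck (ordinary ImageNet classes, chosen purely for illustration).
for hit in flag_frames("drone_feed.mp4", {"rifle", "pickup"}):
    print("review frame", hit)
```

Even in this toy form, the point is obvious: the "non-offensive" step of flagging objects is exactly the step that decides what a human analyst looks at next.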

The head of Project Maven made his ambition explicit: "People and computers will work symbiotically to increase the ability of weapon systems to detect objects," so "one analyst will be able to do twice as much work, potentially three times as much, as they're doing now."

None of this is a surprise. The US has used machine learning to hunt targets for years. Had the Googlers Googled, they would have found an NSA program called SKYNET – yes, after the killer computer in Terminator – which purported to use machine learning to find terror suspects in Pakistan. An NSA slide deck about SKYNET from 2012, part of the Edward Snowden-sourced revelations, tells you precisely where the US hopes to steer AI.

SKYNET didn't scan camera footage. It was light years behind what Google's brains could build today. But its purpose was clear: the slides describe using machine learning to spot couriers and, from the couriers, targets.

This raises questions. How many targets did SKYNET pick? What percentage were attacked? How were mistakes found and fixed? The slides don't say. Journalists and experts interpreted them differently. But we do know the algorithm zeroed in on a journalist as Target One – al Jazeera's Ahmed Zaidan. An NSA slide tags him as a "member of al-Qaeda". Zaidan interviewed militants all the time; it was his beat. But there's no evidence to suggest he was a terrorist.

This is what modern warfare looks like: culling the haystack of signals data to the needle of the attack target. But most people killed in the drone wars died in "signature strikes" – the US never knew who they were. The US refuses to explain what "patterns of life" paint the bullseye on your back, but in societies where most men are armed, and insurgents are interwoven and married into civilian populations, network analysis will always make mistakes.
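One way to see why mistakes are baked in is a back-of-the-envelope base-rate calculation. The numbers below are invented for illustration – nothing here comes from the SKYNET slides – but the arithmetic holds for any classifier hunting a rare target in a huge population: even a very accurate model mostly flags the innocent.

```python
# Back-of-the-envelope base-rate arithmetic with invented, illustrative numbers.
population = 50_000_000      # people whose metadata is scanned (hypothetical)
true_targets = 2_000         # actual militants among them (hypothetical)
true_positive_rate = 0.99    # the classifier catches 99% of real targets
false_positive_rate = 0.005  # and wrongly flags 0.5% of everyone else

flagged_real = true_targets * true_positive_rate
flagged_innocent = (population - true_targets) * false_positive_rate
precision = flagged_real / (flagged_real + flagged_innocent)

print(f"people flagged: {flagged_real + flagged_innocent:,.0f}")
print(f"of whom innocent: {flagged_innocent:,.0f} ({1 - precision:.1%})")
# With these made-up numbers, roughly 99% of the people the algorithm
# flags are not targets at all: the haystack swamps the needle.
```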

Proponents of AI-powered warfare suggest this is just an old problem: the fog of war. But the scale and speed of battle that AI will make possible is different. We know this from the drone wars already: a risk-free, push-button attack option makes deliberation harder. Errors get missed until it's too late. By the end of his term even President Obama accepted that drone technology had run ahead of law and ethics:

Our military, our intelligence teams were seeing this as really effective, and they started just... going. Because the goal was: let's get al-Qaeda, let's get these leaders, there's a training camp here, there's a high value target there, let's move.

That drones aren't yet autonomous does little to reassure. Yes, a "human is in the loop", but which humans and to what ends? Who imagines that AI-powered weapons would be used the same way by Trump's America, Xi's China, or Assad's Syria?

We have only begun to grapple with what weaponised AI will mean. When AI serves someone propaganda, we worry. When it leads to wrongful arrest, we worry a little more. The stakes in Maven are higher still.

Ethics

Were Google's contracting leadership genuinely clueless about what they were getting into? The truth is complex.

Some of Google's people seemed less concerned with the morality of the contract than with keeping it out of public view altogether. Fei-Fei Li, a Google AI scientist who writes about the ethical development of AI, sent emails urging colleagues to make sure the public did not get wind of Maven's thrust. "Avoid at ALL COSTS any mention or implication of AI," she said. "Weaponized AI is probably one of the most sensitized topics of AI – if not THE most. This is red meat to the media to find all ways to damage Google." Li was right: the uproar is why Google has binned the contract.

Once the story broke, the defence changed: Google might make warfare better and safer. "The technology," Google said, "is intended to save lives and save people from having to do highly tedious work." Even Wired's editor-in-chief bought Google's "moral argument" for building drone AI: "It's good for the US government to have the best AI. Particularly drones that can cause collateral damage when they misidentify." Diane Greene, the Google Cloud CEO, echoed this defence: "Saving lives was the overarching intent."

Doubtless Google was sincere. But whose lives were to be spared and whose taken?

Common to these defences is a single error: the assumption that the US kills civilians because of technical failings, not out of indifference to civilian life. But the terms we've seen used in the drone war teach us better. "Bugsplats". "The Sky Raper". "Fun-sized terrorists". This isn't just the loose talk of military kids – it reaches the top. "Why did you wait?" That's President Trump, reportedly castigating drone operators for sparing the lives of a target's wife and kids.

The numbers bear this out. The Obama administration's drone wars killed hundreds of civilians. Under President Trump, the targeting rules have been made even looser, with predictable results: over 6,000 civilian deaths last year in Iraq and Syria alone.

It's unclear how much Google has backed away from military contracting. The line between lethal and non-lethal assistance is not as clear-cut as its principles suggest.

And shortly before Google announced its AI principles, Google pressed its wares at a special forces convention in Florida. The pitch said Google could "streamline" special forces' analysis to "accelerate exploitation of valuable unclassified intelligence" from seized laptops and open-source materials. But as one reporter pointed out, all of that material would help special ops find targets. Google did not respond to a request for comment.

Google's effort to set forth some AI principles is welcome. It opened a crucial public conversation about whose interests AI will serve.

But the principles sit uncomfortably with Google's other statements, to other audiences, in which AI's potential as a weapon of war was made clear and even embraced.

Engineers have more power – and responsibility – than they realise

We all have a role to play in the debate about where AI should be used. But the most important audience is AI developers and engineers. The tale of Maven offers several lessons:

First, these aren't just technical problems. They are political. And once a tool is sold, the maker may have little say in its use. Even with "humans in the loop", not every human will deploy your technology ethically.

Second, the risks of AI are not the same for everyone. Since Trump, it's become popular to say that the crisis we failed to see coming wasn't Orwell's 1984; it was Huxley's Brave New World. AI is driving an "attention crisis", some say, distracting us from meaningful debate.

This is true mainly for the populations of wealthy nations. While you and I bicker on Twitter, buy crap on impulse, or do any of the things that figure in these TED-talk dystopias, Orwell is out there: for the poor, the remote, the non-white.

Mass data-sifting already plays a massive role in our lives, and that role will only grow. That's why some say engineering and computer science should be regulated like the old professions: medicine and law. That only takes us so far. Could unethical uses of AI land developers in hot water? Sure. The Volkswagen case gives a taste of that. But even developers who don't risk jail time should know their reputations, and people's lives, hang in the balance.

Just 3,100 engineers signed the letter protesting against Maven. About four per cent of Google's staff stood up and shut down the Maven contract.

That's what could move the AI ethics debate forward – for those with the gift to code to think about what they are building.

AI developers have immense power. They are a mobile, coveted population. Weapons and surveillance kit can only be built with their talent and assent. If they chose to wield their power for good, who knows what they could do? ®

Cori is a specialist researching and writing about the ethics of mass data-sifting. She ran the unit investigating human rights in counterterrorism for Reprieve.
