Regulate, says Musk – OK, but who writes the New Robot Rules?

Cause, accountability, responsibility

When the Knightscope K5 surveillance bot fell into the pond at an office complex in Washington, DC, last month, it wasn’t the first time the company’s Future of Security machines had come a cropper.

In April, a K5 got on the wrong side of a drunken punch but still managed to call it in, reinforcing its maker’s belief that the mobile security unit resembling Star Wars’ R2D2 has got, err, legs. However, while a robot rolling the wrong way into a pool of water may not exactly be life-threatening, increased automation, robots and AI-enabled machinery will touch lives, from autonomous vehicles through to shelf-stackers in supermarkets and even home care assistants.

So, what happens when robots and automation go wrong and who is responsible? If a machine kills a person, how far back does culpability go and what can be done about it?

“Current product liability and safety laws are already quite clear on putting the onus on the manufacturers of the product or automated systems, as well as on the distributors and businesses that supply services for product safety,” says Matthew Cockerill of London-based product design firm Seymourpowell.

He’s right, of course. Product liability and safety laws already exist – the UK government is unequivocal on the matter – but we are talking here about technology that can learn and adapt, and that is taking automation outside the usual realms of business. Surely this throws up a different set of circumstances and a different set of liabilities?

“I’d expect, certainly in the short term, the major difficulties to be around determining the liability from a specific accident or determining if an automated system has really failed or performed well,” adds Cockerill. “If an autonomous system acts to avoid a group of school children but then kills a single adult, did the system fail or perform well?”

Good question, although if a machine takes any life it is surely a fail. In this scenario, who would be to blame? Would developers, for example, be liable?

Urs Arbter, a partner at consultancy firm Roland Berger, suggests that in some cases this may happen. “AI is reshaping the insurance industry,” he says, and although he believes risk will decline with increased automation, especially with autonomous vehicles, “there could be some issues against developers.” Insurers, he says, are watching it all closely, and although regional requirements will vary with local laws, there is room for further regulation.

Elon Musk would agree. A recent tweet by the Tesla founder claimed that AI is now riskier than North Korea. He followed it up with another tweet: “Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too.”

Easier said than done, but according to Chi Onwurah, UK Labour MP for Newcastle Central and Shadow Minister for Industrial Strategy, Science and Innovation, it’s not only Musk who has suggested that regulators and legislators need to consider AI. She points to Murray Shanahan (professor of cognitive robotics at Imperial College London), Chetan Dube (founder of IPsoft), Cathy O’Neil (author and mathematician) and many others, herself included, as believing that AI must be taken into account when deciding how our regulatory and legislative framework should evolve.

“This is not ‘regulating against a potential threat,’ but protecting consumers, citizens, workers now and in the future, which is the job of government,” Onwurah told us. “Good regulation is always forward looking otherwise it is quickly obsolete, and the current regulation around data and surveillance is a prime example of that.”

She suggests there is a precedent too, referring to when communications regulator Ofcom regulated for the convergence of telecoms, audiovisual and radio before it happened.

“There was a long period of debate and discussion with a green paper and a white paper before the 2003 Communications Act was passed, with the aim of looking forward ten years and anticipating some of the threats as well as the opportunities,” says Onwurah.

“This government unfortunately has neither the will nor the intellectual capacity to look forward ten weeks, and as a consequence any AI regulation is likely to be driven by the European Union or knee-jerk reactions to bad tabloid headlines.”

Knee-jerk reactions are something we are used to – we’ve seen plenty of them recently in response to growing cyber security threats – but still, should we be going unilateral on this? Regulation seems a little pointless in the wider AI scheme of things if it’s not multilateral, and we are a long way off that being discussed, let alone becoming a reality.
