Why, Robot? Understanding AI ethics

Maybe we're headed for a robo-pocalypse, but let's deal with these other problems first, eh?


Not many people know that Isaac Asimov didn’t originally write his three laws of robotics for I, Robot. They actually first appeared in "Runaround", the 1942 short story*. Robots mustn’t do harm, he said, or allow others to come to harm through inaction. They must obey orders given by humans unless they violate the first law. And the robot must protect itself, so long as it doesn’t contravene laws one and two.

75 years on, we’re still mulling that future. Asimov’s rules seem more focused on “strong AI” – the kind of AI you’d find in HAL, but not in an Amazon Echo. Strong AI mimics the human brain, much like an evolving child, until it becomes sentient and can handle any problem you throw at it, as a human would. That’s still a long way off, if it ever comes to pass.

Instead, today we’re dealing with narrow AI, in which algorithms cope with constrained tasks. It recognises faces, understands that you just asked what the weather will be like tomorrow, or tries to predict whether you should give someone a loan or not.

Making rules for this kind of AI is quite hard enough to be getting on with for now, though, says Jonathan M. Smith, a member of the Association for Computing Machinery and a professor of computer science at the University of Pennsylvania. There’s still plenty of ethics to unpack at this level, he says.

“The shorter-term issues are very important because they’re at the boundary of technology and policy,” he says. “You don’t want the fact that someone has an AI making decisions to escape, avoid or divert past decisions that we made in the social or political space about how we run our society.”

There are some thorny problems already emerging, whether real or imagined. One of them is a variation on the trolley problem, a kind of Sophie’s Choice scenario in which a train is bearing down on two sets of people. If you do nothing, it kills five people. If you actively pull a lever, the signals switch and it kills one person. You’d have to choose.

Critics of AI often adapt this to self-driving cars. A child runs into the road and there’s no time to stop, but the software could choose to swerve and hit an elderly person, say. What should the car do, and who gets to make that decision? There are many variations on this theme, and MIT even collected some of them into an online game.

There are classic counter-arguments: the self-driving car wouldn’t be speeding in a school zone, so the scenario is less likely to arise in the first place. Utilitarians might argue that road deaths worldwide would shrink overall once distracted, drunk or tired drivers were taken out of the equation, which means society wins, even if one person loses.

You might point out that a human would have killed one of the people in the scenario too, so why are we even having this conversation? Yasemin J. Erden, a senior lecturer in philosophy at St Mary’s University, has an answer for that. She spends a lot of time considering ethics and computing on the committee of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.

Decisions made in advance suggest ethical intent and incur others’ judgement, whereas acting on the spot doesn’t in the same way, she points out.

“The programming of a car with ethical intentions knowing what the risk could be means that the public could be less willing to view things as accidents,” she says. Or in other words: as long as you were driving responsibly, it’s considered OK for you to say “that person just jumped out at me” and be excused for whomever you hit, but AI algorithms don’t have that luxury.

If computers are supposed to be faster and more intentional than us in some situations, then how they’re programmed matters. Experts are calling for accountability.

Do algorithms dream of electric ethics?

“I’d need to cross-examine my algorithm, or at least know how to find out what was happening at the time of the accident,” says Kay Firth-Butterfield. She is a lawyer specialising in AI issues and executive director of AI Austin, a non-profit AI thinktank launched this March, which evolved from the Ethics Advisory Panel, an ethics board set up by AI firm Lucid.

We need a way to understand what AI algorithms are “thinking” when they do things, she says. “How can you say to a patient’s family, if they died because of an intervention, ‘we don’t know how this happened’? So accountability and transparency are important.”

Puzzling over why your car swerved around the dog but backed over the cat isn’t the only AI problem that calls for transparency. Biased AI algorithms can cause all kinds of problems. Facial recognition systems may ignore people of colour because their training data didn’t have enough faces fitting that description, for example.
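It doesn’t take much code to see how that kind of skew shows up. The sketch below is purely illustrative, with made-up group labels and predictions rather than anything the researchers quoted here actually use: a developer could count how each group is represented in the training set and compare per-group accuracy on a test set.

```python
from collections import Counter

def audit_by_group(train_groups, test_groups, test_truth, test_preds):
    """Rough fairness audit: training-set representation and per-group test accuracy."""
    print("Training-set representation:")
    for group, count in Counter(train_groups).items():
        print(f"  {group}: {count} examples ({count / len(train_groups):.1%})")

    print("Per-group accuracy on the test set:")
    per_group = {}
    for group, truth, pred in zip(test_groups, test_truth, test_preds):
        hits, total = per_group.get(group, (0, 0))
        per_group[group] = (hits + int(truth == pred), total + 1)
    for group, (hits, total) in per_group.items():
        print(f"  {group}: {hits / total:.1%} accuracy over {total} examples")

# Entirely made-up example data
audit_by_group(
    train_groups=["group_a"] * 900 + ["group_b"] * 100,
    test_groups=["group_a", "group_a", "group_b", "group_b"],
    test_truth=[1, 0, 1, 0],
    test_preds=[1, 0, 0, 1],
)
```

A lopsided first list and a gap between the per-group accuracies are exactly the sort of thing transparency advocates want surfaced before a system ships.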

Or maybe AI is self-reinforcing to the detriment of society. If social media AI learns that you like to see material supporting one kind of politics and only ever shows you that, then over time we could lose the capacity for critical debate.

“J.S. Mill made the argument that if ideas aren’t challenged then they are at risk of becoming dogma,” Erden recalls, nicely summarising what she calls the ‘filter bubble’ problem. (Mill was a 19th-century utilitarian philosopher and a strong proponent of logic and reasoning based on empirical evidence, so he probably wouldn’t have enjoyed arguing with people on Facebook much.)
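The feedback loop itself is easy to caricature. The toy Python below is ours, not a description of how any real platform’s recommender works: a feed that only ever serves the topic a user has clicked most collapses onto a single viewpoint unless it is forced to explore.

```python
import random
from collections import Counter

CATALOGUE = ["politics_a", "politics_b", "sport", "science"]  # hypothetical topics

def recommend(click_history, explore_rate=0.0):
    """Toy feed: with explore_rate=0 it only serves whichever topic has been
    clicked most, so the user's diet narrows with every interaction."""
    if not click_history or random.random() < explore_rate:
        return random.choice(CATALOGUE)  # occasional exploration breaks the loop
    return Counter(click_history).most_common(1)[0][0]

# A user who clicks whatever they are shown, starting from one political story
history = ["politics_a"]
for _ in range(50):
    history.append(recommend(history, explore_rate=0.0))

print(Counter(history))  # the feed collapses onto "politics_a"
```

Raise explore_rate above zero and the counts spread back out, which is, in miniature, the design choice the filter-bubble debate is about.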

So if AI creates billions of people unwilling or even unable to recognise and civilly debate each other’s ideas, isn’t that an ethical issue that needs addressing?

Another issue concerns the forming of emotional relationships with robots. Firth-Butterfield is interested in two ends of the spectrum – children and the elderly. Kids love to suspend disbelief, which makes robotic companions, with their AI conversational capabilities, all the easier to embrace. She frets about AI robots that may train children to be ideal customers for their products.

Similarly, at the other end of the spectrum, she muses about AI robots used to provide care and companionship to the elderly.

“Is it against their human rights not to interact with human beings but just to be looked after by robots? I think that’s going to be one of the biggest decisions of our time,” she says.

That highlights a distinction in AI ethics, between how an algorithm does something and what we’re trying to achieve with it. Alex London, professor of philosophy and director at Carnegie Mellon University’s Center for Ethics and Policy, says that the driving question is what the machine is trying to do.

“The ethics of that is probably one of the most fundamental questions. If the machine is out to serve a goal that’s problematic, then ethical programming – the question of how it can more ethically advance that goal – sounds misguided,” he warns.

That’s tricky, because much comes down to intent. A robot could be great if it improves the quality of life for an elderly person as a supplement for frequent visits and calls with family. Using the same robot as an excuse to neglect elderly relatives would be the inverse. Like any enabling technology from the kitchen knife to nuclear fusion, the tool itself isn’t good or bad – it’s the intent of the person using it. Even then, points out Erden, what if someone thinks they’re doing good with a tool but someone else doesn’t?

Let's leave government out of this...

Lawmakers have already floated ideas along these lines: robot makers might have to make good any damage their machines cause, and “electronic personality” could even be applied to cases where robots act autonomously. How much should governments regulate these issues? Not much, says UPenn’s Smith.

“I think we should start with self-regulation, and the reason is that technology is evolving so rapidly,” he says. “The political process tends to be reactive and lag the technology process.” Governments should step in if the private sector makes a hash of it, he argues.

This is the direction it’s already taking. Google’s DeepMind has an AI ethics panel, but it has drawn flak for being opaque. Elon Musk co-founded OpenAI to research “friendly AI”, while Google, Facebook, Amazon, IBM and Microsoft formed the Partnership on AI to benefit people and society.

Others are also working on this. Firth-Butterfield is vice-chair of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. The IEEE is currently working on several standards including P7001, which outlines transparency in autonomous systems. Its ethical initiative has also produced a guidance document on ethically-aligned AI design that prioritises human wellbeing.

There is no shortage of guidelines and ethics research efforts to choose from. The high-profile Future of Life Institute, which sports Stephen Hawking, Elon Musk and others among its supporters, has published the Asilomar AI principles, while the British Standards Institution created BS 8611, Robots and robotic devices – Guide to the ethical design and application of robots and robotic systems.

Many of these proposed regulations and research efforts explore the ethical implications of AI both as it exists now and as it is imagined. One idea involves creating a “kill switch” that could head off the singularity – the runaway recursive development of AI that just keeps bettering itself until it loops us out of existence.

That’s a concept that some refuse to address, including the authors of this whopping Stanford report on AI in 2030. They plan to update their report every five years for the next century, though, so it may surface later. Others, like Torrance, are keeping an eye on it. “I regard it as something that's important to be aware of as a danger in the mid to long term,” he says.

Along the road to the singularity would be strong AI. If that becomes a thing, some of the ethical discussions become more complex because AI would be dealing with more nuanced issues, just as we do.

Erden is sceptical that this will happen, but as a philosopher, she questions the idea of concrete ethical guidelines that don’t allow room for manoeuvre. She raises squishy ethical questions like whether it’s OK to lie.

No, it isn’t. Oh, really? What about in this situation, where you’re lying to save someone’s feelings? What about to save a life? What does lying mean, anyway? Can you lie by staying silent?

These are the kinds of Socratic conversations an enlightened parent might have with their kids as they teach them that things aren’t always as binary as they might think. And they’re things that make lists of rigid ethical guidelines difficult.

Some of the ethical concepts that may make their way into AI debates have been with us in one form or another since the Sophists, and we still haven’t perfected ourselves. We’re filled with our own biases. We discriminate against each other all the time, knowingly and unknowingly. We’d be less capable than an automated car of making the right decision in an accident – and there may not even be any firm rules on what the right decision was anyway.

Given that we can barely set and meet our own standards, should we worry that much about imposing them on the digital selves that may one day come after us?

Erden thinks so. “Ethics happens in the middle ground, where we accept that we’re not going to give up, but we’re not going to establish something clearly and finally and completely,” she says.

“So we have to manage the mess as best we can. The mess is beautiful, in lots of ways.” ®


*Yes, the one that was later included in the 1950 short story collection...
