Not another Linux desktop! Robots cross the Uncanny Valley

Machine Learning meets human response

Britons are getting old: ageing baby boomers combined with a low 1970s birth rate saw the proportion of the UK population aged 65 or older rise from 14.1 per cent in 1975 to 17.8 per cent in 2015. The Office for National Statistics expects that figure to hit a quarter of the population by 2045.

Japan, of course, has already beaten us. The country holds the dubious accolade of being the oldest nation in the world, with 20 per cent of its population aged 65 or older.

That may earn it a listing in the Guinness Book of Records, but it poses serious problems when it comes to social care – Merrill Lynch projects a shortfall of one million caregivers by 2025.

To address the issue, Japanese companies are leading the development of care bots: robots designed to assist elderly people with tasks like fetching food or turning off lights.

They range from unmistakably robot-looking robots – your basic ASIMO-style mechanoid, like Toyota’s Kirobo, and variations on that theme – through to cuddly therapeutic bots like Paro, the robotic seal.

No wonder tech vendors are piling in: the global personal robot market, which includes care bots, could reach $17.4bn by 2020, according to Merrill Lynch.

But while the Japanese elderly might be perfectly OK with the idea of cuddling a robot seal, would Brits feel the same way? And what happens when, as is inevitable, the state of the art moves on and care bots start to look more realistically human – nose, eyes, pinchable skin?

Hiroshi Ishiguro is director of the Intelligent Robotics Laboratory, part of the Department of Systems Innovation in the Graduate School of Engineering Science at Osaka University, and founder of Hiroshi Ishiguro Laboratories. He builds lifelike robots, one of which – Erica – he claims is the most advanced android “in the world”. Naturally, he has built his own bot self.

[YouTube video]

Not care bots, but a vision of the future and of where things could go. The Henn na (“Weird”) Hotel in Sasebo, Japan, for example, excited journalists a few years back for being run by robots, complete with a human-looking receptionist.

Too human... the uncanny valley

As in life, so in androids – sometimes you can go too far. What if your bot is just “not right” and has the opposite of its desired effect, actually putting people off? How, in that case, does your supposedly caring or public-facing bot cross what’s been called the “uncanny valley”?

The expression “uncanny valley” was coined in 1970 by the Japanese roboticist Masahiro Mori, who observed that as robots became more human-like, people would find them more acceptable and appealing than their mechanical counterparts – but only up to a point. When they were close to, but not quite, human, people developed a sense of unease and discomfort.

If human likeness increased beyond this point, and they became very human-like, the emotional response returned to being positive. It is this distinctive dip in the relationship between human-likeness and emotional response that is referred to as the uncanny valley.
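Mori drew his curve qualitatively rather than deriving it from data, but its shape is easy to sketch in code. The toy Python function below is a minimal illustration of that shape, not Mori’s actual model – the position and depth of the dip are invented values chosen purely for the sketch:

```python
import math

def affinity(likeness: float) -> float:
    # Emotional response rises steadily with human-likeness,
    # but a Gaussian dip (centred at 0.85 likeness, depth 1.5,
    # both invented values) drags it sharply negative just
    # short of fully human, before recovering at the top end.
    dip = 1.5 * math.exp(-((likeness - 0.85) ** 2) / 0.005)
    return likeness - dip

for h in (0.0, 0.25, 0.5, 0.75, 0.85, 0.95, 1.0):
    print(f"human-likeness {h:.2f} -> affinity {affinity(h):+.2f}")
```

Run it and affinity climbs to about +0.55 at 0.75 likeness, plunges to roughly -0.65 at 0.85 – the valley – then recovers to nearly +1.0 at full human likeness.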

Anything with a highly human-like appearance can be subject to the uncanny valley effect, but not all near-human robots provoke this negative response and, as the Japanese examples above illustrate, the perception of eeriness varies from person to person. So are there any rules of thumb about what works to engender the most positive response to the robots we develop?

The question is of growing importance as the use of robots extends beyond technical environments and into our social sphere, where the people who use and interact with them are increasingly unlikely to be technically trained experts, warn University of California researchers Maya Mathur and David Reichling.

They conducted a study of 80 real-world robots and found a clear valley effect in how much people liked and were willing to trust them.

One theory is that the uncanny valley might occur at the boundary where something moves from one category to another, in this case, between non-human and human.

Mind over matter

A study by Harvard social psychologist Christine Looser and Dartmouth College social psychologist and neuroscientist Thalia Wheatley, in which mannequin faces were morphed into human faces, found a valley at the point where the inanimate face started to look alive.

“Additionally, the impression of life was gleaned from the eyes more than from other facial features. These results suggest that human beings are highly attuned to specific facial cues, carried largely in the eyes, that gate the categorical perception of life,” they explain.

Another suggestion is that we’re only freaked out if we believe that near-human entities possess a mind. A study by psychologists Kurt Gray and Daniel Wegner, authors of The Mind Club, found that robots only unnerved people when they seemed able to sense and experience things; robots that did not appear to possess a mind were not frightening.

Stephanie Lay, a research student in psychology at the Open University, is fascinated by the sense of unease people feel when encountering something almost, but not quite, human.

Lay’s most recent research asks what qualities make “uncanny” faces different from human and non-human faces of all kinds. She looked at responses to faces with different emotional expressions shown in the eyes and in the rest of the face, and found that the eeriest combinations were those where happy faces were paired with fearful or angry eyes.

Dr Nathan Lepora, an expert in robotics and computational neuroscience at the University of Bristol, says acceptance of human-like robots may be something that we simply have to work through.

“We are moving more towards the humanisation of robots. Technology is improving, but robots are also moving out of factories and into our homes and hospitals, so to work in these environments they need to be more human-like. The psychologists are interested in it but I don’t think the industry is that concerned,” Lepora said.

Richard Mitchell, professor of cybernetics at the University of Reading, says it is important for the behaviour of a robot or avatar to reflect how human it looks. “Thanks to advances in computer graphics, we can produce something that looks quite realistic. It needs to then behave in a way that is consistent. We’re getting there on screen but we can’t do that with real robots. Physically it’s still a problem, particularly with getting them to move like humans,” Mitchell says.

But why replicate humans, Mitchell asks. “In the area of using robots for companionship, I understand there’s a desire to have things that are more realistic – but personally I’d prefer a human.”

This idea that our unease is caused by a disconnect between a robot’s appearance and behaviour is picked up by research from Angela Tinwell, a games and creative technologies researcher at the University of Bolton. This suggests that a mismatch between aspects of the robot’s appearance and/or behaviour – including speech synchronisation, speech speed and facial expressions – may be responsible for giving us the heebie-jeebies.

In 2013, Tinwell found that virtual characters that weren’t startled by a scream were regarded as the most uncanny. The study also suggested that this may have reminded people of the kind of behaviour exhibited by humans with psychopathic traits.

The trick, therefore, is for engineers, designers and psychologists to work hand in hand – to build a complete package rather than work on, say, the machine model and the AI and only then turn to the appearance. Sound familiar? In many ways it echoes the problem of building enterprise software and bridging the gap between those who code and those who work on the UX layer. Leave it all to the engineers, and you might get something that works well but isn’t necessarily user friendly. Linux desktop, anybody? Leave it to the UX side, and, well – it’s all buttons, sliders and concepts.

Communication between all parties involved is, according to Dr David Golightly, a human factors specialist at the University of Nottingham with a PhD in cognitive psychology and a member of the Chartered Institute of Ergonomics and Human Factors, “pivotal”.

“Don’t try to generalise, make sure you understand the functional nature of what you’re trying to achieve and think about the level of control that people need to have,” Golightly said.

“As automation increases, you need to represent the system in new and more interesting ways. Understand the psychology of what you’re trying to do with a robot system and don’t be frightened to throw out some of the old representations.”

Ultimately, however, we may never completely bridge the uncanny valley. As technology improves, we become ever more discerning about what we want and expect, and ever more critical of what’s delivered; the industry attempting to reach a successful end state of anthropomorphised robots may find itself perpetually frustrated. Tinwell, for one, says we may simply become more sensitive, and so will always be able to tell that something is not quite right.

The uncanny valley, it seems, is here to stay – no matter how much the AI and the UX types collaborate. ®
