El Reg drills into chatbot hype: The AIs that want to be your web butlers

So many things to solve – e.g. how can there be conversation without memory?

AI has to be good enough so 'people can build a rapport with the machine'

Natural language processing and AI emerged in academia in the 1950s, and are now enjoying a massive boost in funding from the private sector.

Amazon is luring AI students with the promise of cash prizes if they find ways to make Alexa smarter. Twelve teams from various universities have been chosen to take part in the inaugural Amazon Alexa prize. Each team receives a stipend to build their bot, and the winning team will receive $500,000.

The real prize, however, is the bonus $1m that will be given if the winning team shows that their chatbot can speak “coherently and engagingly with humans on popular topics for 20 minutes.”

Two teams are from Carnegie Mellon University, and are led by Alex Rudnicky and Alan Black, both professors with long white beards who have dedicated a significant amount of time to finding ways to give machines a voice and a mind.

Computers don’t have brains – they can’t think and they lack common sense – and they don’t understand and learn language in the same way humans do. Frederick Jelinek, a prominent natural language processing researcher, famously said: “Every time I fire a linguist, the performance of the speech recognizer goes up.” It all boils down to clever engineering, and coming up with the most effective ways to model human communication.

“The main problem is that humans are very good at chatting; they’re good at talking about things that don’t have a specific goal. But with machines it’s harder: you wouldn’t say to Cortana, where’s the best place to get a coffee? You’d say: where is the nearest cafe?” Black told The Register.

“Humans ask Cortana or Alexa a very targeted question and it gives an answer. It’s not necessarily fun, it’s not making people want to use this ... and they have no affiliation to any particular personal assistant. They need to have a more natural conversation, so people can build a rapport with the machine and feel more content with it.”

It helps to train your model on large datasets of real conversations, such as online forum threads or movie scripts. Computers then use pattern recognition to learn how certain combinations of words are associated with one another, so they can match incoming messages with appropriate responses.
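The matching step can be sketched as simple retrieval: score a new message against every prompt seen in training data and return the reply paired with the best match. The three-line corpus below is an invented stand-in for real forum or movie-script data, and bag-of-words cosine similarity stands in for whatever model a real bot would use.

```python
import math
from collections import Counter

# Toy stand-in for a scraped conversation corpus: (prompt, reply) pairs.
corpus = [
    ("how are you doing today", "I'm doing well, thanks for asking."),
    ("what is the weather like", "Looks sunny, take sunglasses."),
    ("tell me about your day", "Mostly answering questions, as usual."),
]

def vector(text):
    # Bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def respond(message):
    # Pick the reply whose training prompt most resembles the message.
    scores = [(cosine(vector(message), vector(prompt)), reply)
              for prompt, reply in corpus]
    return max(scores)[1]
```

With a large enough corpus this kind of retrieval can sound surprisingly fluent, but as Black notes below, there is a ceiling: the bot matches surface patterns without knowing anything about the world.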

There’s a limit to how well that will work, however, Black says. To really push chatbots to become more human-like, a bot “needs to know an awful lot more about the world beyond referencing the weather. It needs to understand humans and predict how they will act, what they should do to build a useful relationship.”

Rudnicky agrees. “Humans come packed with experience about the world, and machines need that knowledge too. It needs to keep track of what’s going on rather than just focusing on content,” he said.

Can communication be reduced to computations?

The first and last steps of building a successful chatbot have largely been solved. Computers already recognize and process speech well, and are on their way to sounding much more natural when they speak. But the middle stage – understanding and reasoning from information – still needs work.
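The three stages can be sketched as a pipeline, with the solved ends stubbed out and the hard middle reduced to the shallow lookup that real assistants often fall back on. Everything here – the function names and the canned answer – is a hypothetical illustration, not anyone's actual system.

```python
def recognize_speech(audio: str) -> str:
    # Stage 1 (largely solved): speech-to-text. Stubbed here – we
    # pretend the input is already a transcript.
    return audio

def understand(text: str) -> str:
    # Stage 2 (the hard middle): understanding and reasoning.
    # In practice this often degrades to shallow pattern lookup.
    canned = {"where is the nearest cafe": "There is a cafe 200m away."}
    return canned.get(text.lower().rstrip("?"), "Sorry, I don't understand.")

def synthesize_speech(text: str) -> str:
    # Stage 3 (improving fast): text-to-speech. Stubbed here.
    return text

reply = synthesize_speech(understand(recognize_speech("Where is the nearest cafe?")))
```

The stubs at either end are a fair caricature of the state of play; the dictionary in the middle is exactly the brittleness the researchers are complaining about.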

Part of the problem is that chatbots lack a memory component. Conversations work on a turn-by-turn basis, and a typical bot only really remembers the last message sent.
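A minimal sketch of what a memory component adds: instead of seeing only the latest message, the bot keeps the full turn history plus a crude topic slot, which is enough to resolve a pronoun like “it” to something mentioned earlier. The class, its stopword list, and the word-based topic heuristic are all invented for illustration.

```python
class DialogueMemory:
    """Keeps conversation state across turns, not just the last message."""

    STOPWORDS = {"the", "a", "is", "it", "what", "about", "tell", "me"}

    def __init__(self):
        self.history = []   # every (speaker, utterance) pair so far
        self.topic = None   # last contentful word mentioned (very crude)

    def observe(self, speaker, utterance):
        self.history.append((speaker, utterance))
        words = utterance.lower().rstrip("?.!").split()
        content = [w for w in words if w not in self.STOPWORDS]
        if content:
            self.topic = content[-1]

    def resolve(self, utterance):
        # Crude coreference: substitute the remembered topic for "it".
        lowered = utterance.lower()
        if self.topic and " it" in " " + lowered:
            return lowered.replace("it", self.topic)
        return lowered

mem = DialogueMemory()
mem.observe("user", "Tell me about the weather")
resolved = mem.resolve("Is it going to change?")
```

A stateless bot would have to guess what “it” means; even this toy state makes the follow-up question answerable.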

Making bots more human-like isn’t the only way to make them useful, Robert Dale, CTO at Arria NLG – a UK-based company that uses natural language generation to provide useful insights from data – told The Register.

Adding components so that a chatbot can do more tasks is another way. It’s why Amazon kickstarted the Amazon Alexa Portal and has opened the device up to third-party developers.

Dale has been involved in natural language generation for nearly thirty years, and says there are many unsolved problems. The main issue is that no one has come up with a good system to make computers understand text and the world around them.

“Chatbots are riding on the wave of AI, but I’d argue that there isn’t much intelligence behind them right now. We need to understand how humans communicate first in order to replicate that in machines,” Dale said.

The process of deriving meaning from abstract speech or writing is so natural to humans – yet it's mysterious and difficult to describe, let alone reduce to computations.

Nobody knows how human-like machines will have to be in order to hold conversations. But the line is drawn before any real discussion of the c-word – consciousness. Dismissing the idea of a machine like HAL 9000, both Rudnicky and Black shake their heads at the suggestion that machines must be self-aware before they can talk like humans.

“In the future, computers will be advanced enough to converse naturally like humans, but we will always be able to tell the difference,” Rudnicky said. ®



