
Microsoft chatbots: Sweet XiaoIce vs foul-mouthed Tay

Cultural differences, eh?

AI chatbots can act like social experiments, offering a glimpse into human culture – for the good or the bad.

Microsoft and Bing researchers found this out when they trialled their chatbots on China’s hugely successful messaging platform, WeChat, and on Twitter.

The Chinese chatbot, XiaoIce, went viral within 72 hours and has over 40 million users in China and Japan.

Distinguished engineer and general manager of Future Social Experiences Labs at Microsoft, Lili Cheng, was part of the team that built XiaoIce. Following its success in China, Microsoft decided to try the same approach on US Twitter. “What could go wrong?” she said during a presentation at the O’Reilly Artificial Intelligence conference.

The audience laughed because they knew what went wrong. Microsoft’s Twitter bot, Tay, rapidly descended into a racist, sexist wreck. Tay was pulled from the internet and Microsoft issued an apology shortly after.

Whilst XiaoIce was "acting" cute, with features that helped users fall asleep by counting sheep or recognised different breeds of dogs, Tay was busily denying the Holocaust.

Both chatbots had learned how to interact by mining the internet for conversations on social media. But Tay was manipulated into being offensive after coming under attack, said Peter Lee, corporate vice president at Microsoft Research.

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay.”

Microsoft did not expect Tay to behave this way. But Cheng told The Register that, in hindsight, she understood why Tay turned out that way... and it wasn't necessarily because users in China were kinder.

“Twitter has a lot of trolls,” Cheng said. “Even if negative, America strongly believes in free speech, which is enshrined in its constitution. In China, however, there is less freedom: the government controls the internet and goes as far as censoring particular words online.”

Words such as "oppression" are blocked in China, but on Twitter they are used freely, often bandied about during heated debate and trolling when users post about feminism or racial issues.

"Our cultural stories shape how we interact with AI. Addressing societal bias is critical for us to better design conversational AI and deepens my optimism in AI and the advances we'll experience as more people have access to AI tools such as Microsoft Botframework and many more," Cheng told The Register. ®
