Facebook scales back AI flagship after chatbots hit 70% f-AI-lure rate

'The limitations of automation'

So it begins.

Facebook has scaled back its ambitions and refocused its application of "artificial intelligence" after its AI bots hit a 70 per cent failure rate. Facebook unveiled a bot API for its Messenger IM service at its developer conference last April. Facebook CEO Mark Zuckerberg had high hopes.

Tencent's WeChat was the model. Although WeChat began life as an instant-messaging client, it rapidly evolved into a major platform for e-commerce and transactions in China. But it largely keeps any AI guesswork away from real users.

With Facebook's bot API, Zuckerberg had joined a "chatbot arms race" with Microsoft CEO Satya Nadella. For Nadella, chatbots were "Conversations as a Platform," or even the "third run-time" – as important to humanity as the operating system or the web browser.

Some experts fretted that if China opened up a lead in AI, the West would be doomed to lose World War 3. Others suggested that whichever superpower lost the AI arms race would relapse into a state of primitive technology feudalism.

However, as we reminded you recently, the reality of "artificial intelligence" is far from impressive once it's made to perform outside carefully stage-managed and narrow demos. In its first stage, Facebook's AI would parse the conversation and insert relevant external links into Messenger conversations. So how has the experiment fared?

In tests, Silicon Valley blog The Information reports, the technology "could fulfil only about 30 per cent of requests without human agents." And that wasn't the only problem. "The bots built by outside developers had issues: the technology to understand human requests wasn't developed enough. Usage was disappointing," we're told. Now the technology is simply trying to make sense of the conversation.

There's even a phrase you won't have seen in many of the mainstream thinkpieces that predict a near future of clever algorithms taking middle-class jobs. Brace yourselves, dear readers. Facebook engineers will now focus on "training [Messenger] based on a narrower set of cases so users aren't disappointed by the limitations of automation."

Ah.

"Their discussions are much more grounded in reality now compared to last year," said another person close to the Messenger developers. "The team in there now is finding ways to activate commercial intent inside Messenger. It's much less about, 'We'll dominate the world with AI.'"

Analyst Richard Windsor describes Facebook as "the laggard in AI," failing to match the results of Google. "The problems that it has had with fake news, idiotic bots and Facebook M, all support my view that when Facebook tries to automate its systems, things always go wrong. The problem is not that Facebook does not have the right people but simply that it has not been working on artificial intelligence for nearly long enough," he wrote recently.

In its exclusive, The Information also notes that Facebook has been grappling with what we call the "Clippy The Paperclip problem": the user views the contribution by the agent, or bot, as intrusive.

"Clippy didn't fail for lack of good intentions, or contextual unawareness, but because the interruption was inappropriate," we noted.

Facebook is expected to unveil its revised plans for its chat AI stuff at this year's F8 developer conference in April. ®
