People built AI bots to improve Wikipedia. Then they started squabbling in petty edit wars, sigh

Most fought-over articles include Pervez Musharraf, Niels Bohr and Schwarzenegger

Analysis An investigation into Wikipedia bots has confirmed that the automated editing software can be just as pedantic and petty as humans – often engaging in online spats that drag on for years.

What's interesting is that bots behave differently depending on which language version of Wikipedia they work on: some become more argumentative than others, based on the culture and subjects they end up tackling.

Bots have been roaming the worldwide web for more than 20 years. Some can be quite useful. Others are less so, spewing racist abuse and spam, or posing as beautiful flirty women on dating sites.

A paper published today in PLOS ONE by researchers from the University of Oxford and The Alan Turing Institute in the UK splits internet bots into two categories: benevolent and malevolent.

Wikipedia droids are classed as benevolent since they help edit articles, identify and clean up vandalism, check spelling, import information automatically, and flag copyright violations. In 2014, for example, as many as 15 per cent of all edits were made by bots, and they often disagreed with each other, we're told.

Dr Milena Tsvetkova, lead author of the paper and a researcher at the Oxford Internet Institute, said: “We find that bots behave differently in different cultural environments, and their conflicts are also very different to the ones between human editors.”

By looking at reverts – when a human or bot overrules another editor’s contribution by restoring an earlier version of the article – Tsvetkova and her team found areas of conflict between the automated word-wrangling software. “Compared to humans, a smaller proportion of bots’ edits are reverts and a smaller proportion get reverted,” the paper noted. (We've previously briefly covered Tsvetkova and co's work here.)
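
For the curious, the revert-detection idea is simple enough to sketch in a few lines of Python. What follows is a minimal illustration, not the study's actual pipeline: it assumes a page history is available as (editor, text) pairs, and flags an edit as a revert when it restores the article to a byte-for-byte copy of an earlier revision, which can be spotted by hashing each revision's text. The find_reverts helper and its data format are hypothetical.

```python
import hashlib

def find_reverts(revisions):
    """Flag edits that restore an article to an earlier state.

    revisions: list of (editor, text) tuples in chronological order,
    a simplified stand-in for a real Wikipedia page history.
    Returns (reverting_editor, reverted_editor) pairs.
    """
    seen = {}  # content hash -> index of first revision with that text
    reverts = []
    for i, (editor, text) in enumerate(revisions):
        digest = hashlib.sha1(text.encode("utf-8")).hexdigest()
        if digest in seen and seen[digest] < i - 1:
            # The edit restored an earlier version, overruling whatever
            # came in between; for simplicity, count the revert against
            # the most recent intervening editor.
            reverts.append((editor, revisions[i - 1][0]))
        else:
            seen.setdefault(digest, i)
    return reverts

# Toy history: BotB puts back BotA's wording, reverting BotC's change.
history = [
    ("BotA", "Niels Bohr was a Danish physicist."),
    ("BotC", "Niels Bohr was a physicist."),
    ("BotB", "Niels Bohr was a Danish physicist."),
]
print(find_reverts(history))  # [('BotB', 'BotC')]
```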

Over a ten-year period, each bot on the English Wikipedia undid another bot’s edits 105 times on average – far more than the average of three reverts for human editors. It was worse on the Portuguese version, where each bot reverted another bot 185 times on average. On the German version, the droids were much more pleasant to one another, each changing another's work only 24 times over the same period.
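
Those per-language figures come down to straightforward aggregation over a revert log. Here is a hedged sketch of that arithmetic, assuming revert events arrive as (language, reverting_bot) pairs – an illustrative schema, not the paper's actual dataset format.

```python
from collections import Counter, defaultdict

def average_reverts_per_bot(revert_events):
    """Mean number of bot-on-bot reverts carried out per bot,
    broken down by Wikipedia language edition."""
    counts = defaultdict(Counter)  # language -> bot -> revert count
    for language, bot in revert_events:
        counts[language][bot] += 1
    return {
        lang: sum(per_bot.values()) / len(per_bot)
        for lang, per_bot in counts.items()
    }

# Toy data echoing the reported ordering: pt > en > de.
events = (
    [("pt", "BotX")] * 185
    + [("en", "BotY")] * 105
    + [("de", "BotZ")] * 24
)
print(average_reverts_per_bot(events))
# {'pt': 185.0, 'en': 105.0, 'de': 24.0}
```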

The difference between the Wikipedia language editions can be partly explained by activity: Portuguese bots are simply busier and edit more articles than German ones, for example. Interestingly, though, cultural diversity is also a factor, said Professor Taha Yasseri, coauthor of the paper and a researcher at the Oxford Internet Institute. Bots designed to work in the same way, or even running the same code, sometimes end up disagreeing with each other over cultural and language points. And some cultures provoke more arguments and edit wars than others.

“The findings show that even the same technology leads to different outcomes, depending on the cultural environment,” Prof Yasseri told The Register.

"An automated vehicle will drive differently on a German autobahn to how it will through the Tuscan hills of Italy. Similarly, the local online infrastructure that bots inhabit will have some bearing on how they behave and their performance. Bots are designed by humans from different countries so when they encounter one another, this can lead to online clashes.

"Some of the articles most contested by bots are about Pervez Musharraf, Uzbekistan, Estonia, Belarus, the Arabic language, Niels Bohr, and Arnold Schwarzenegger."

That's not to say all bots are programmed the same way – ultimately, the edit wars are the result of a complex and heady mix of software and subjects. Prof Yasseri added: "We see differences in the technology used in the different Wikipedia language editions, and the different cultures of the communities of Wikipedia editors involved create complicated interactions. This complexity is a fundamental feature that needs to be considered in any conversation related to automation and artificial intelligence."

Wiki bots aren’t the only semi-intelligent agents affected by cultural differences. Microsoft’s Chinese chatbot, XiaoIce, was an angel compared to its foul-mouthed American counterpart, Tay, which was hijacked by 4chan trolls who found a command phrase that let them teach Tay filthy words. The contrast in behavior can be partly explained by the differing internet rules in the two countries.

The world of AI isn’t particularly friendly, it seems. A recent experiment by DeepMind showed AI bots were driven by competition and had no qualms about resorting to violence. ®
