“Benevolent bots” designed to improve articles on Wikipedia can, just like humans, get into online “fights” over content that continue for years, researchers have found.
Commonly known as software robots, editing bots on Wikipedia undo vandalism, enforce bans, check spelling, create links and import content automatically.
Other bots mine data, identify data or detect copyright infringements.
The team observed how bots interacted across 13 different language editions of Wikipedia over ten years (from 2001) and found that the bots interacted with one another, whether or not this was by design, with unpredictable consequences.
The research paper, published in PLOS ONE, said bots appear to behave differently in culturally distinct online environments but “are more like humans than you might expect”.
The findings are a warning to those who use artificial intelligence to build autonomous vehicles or cybersecurity systems, or to manage social media.
Although bots are automated agents with no capacity for emotion, bot-to-bot interactions are unpredictable, and the bots act in distinctive ways.
“We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors,” said lead author Milena Tsvetkova from the Oxford Internet Institute.
Her colleague Taha Yasseri said: “Bots are designed by humans from different countries. So when they encounter one another, this can lead to online clashes.
“We see differences in the technology used in the different Wikipedia language editions that create complicated interactions. This complexity is a fundamental feature that needs to be considered in any conversation related to automation and artificial intelligence,” Yasseri added.