How AI fights gamer abuse in League of Legends
Machine learning cracks down on racist, sexist and homophobic slurs
Artificial Intelligence (AI) is being used to put a stop to harassment in online gaming, cracking down on abuse in League of Legends.
The MOBA video game is the most popular in the world, with over 67 million players and a multi-million dollar eSports industry built around it.
However, as with many online games, it can often be a breeding ground for harassment, bullying and vitriol.
Women gamers have been among the most victimised groups, with the GamerGate movement leading to widespread trolling of women on Twitter after a female game critic suffered abuse on the social network.
Developer Riot Games has been taking steps to combat such instances of "toxicity", and has introduced machine learning in order to speed up the process.
Jeffrey "Lyte" Lin, lead game designer of social systems at Riot, detailed the changes in an article on Re/Code.
One of Riot's previous methods of dealing with harassment was the Tribunal, a system whereby a player who has been reported for a significant number of in-game offences is judged by other players.
After this system had seen 100 million votes, Riot used the accumulated information to build a wide dataset dealing with how players thought about various online behaviours.
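Riot has not published how it aggregated those votes, but the general approach of collapsing many crowd judgments on the same chat log into a single training label can be sketched roughly like this (the vote labels, messages and agreement measure below are all hypothetical):

```python
from collections import Counter

def label_from_votes(votes):
    """Collapse crowd votes ('punish'/'pardon') into one label by majority,
    returning the label plus the fraction of voters who agreed with it."""
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label, n / len(votes)

# Hypothetical chat lines paired with Tribunal-style votes
raw_cases = [
    ("gg wp everyone", ["pardon", "pardon", "pardon"]),
    ("uninstall the game, you are useless", ["punish", "punish", "pardon"]),
]

# Build a labelled dataset suitable for training a text classifier
dataset = [(text, *label_from_votes(votes)) for text, votes in raw_cases]
```

Keeping the agreement ratio alongside the label lets low-consensus cases be down-weighted or discarded before training.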
The results were a little surprising, with hate speech of all kinds being reviled, and homophobic slurs judged by North Americans to be "the most rejected phrases in the English language".
Armed with this data, Riot can now use AI to automatically identify and classify negative and positive speech in up to 15 different languages.
This extends not just to simple curse words and racial slurs, but also covers "advanced linguistics such as whether something [is] sarcastic or passive-aggressive".
As soon as negative behaviour is reported, the machine learning system is alerted, and will deliver an appropriate punishment. Constructive players are also encouraged with rewards.
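The flow Lin describes, report comes in, the system classifies the chat, and a penalty is chosen based on the player's history, can be sketched as below. This is purely illustrative: Riot's real models, term lists, thresholds and penalty ladder are not public, so the keyword sets and penalty names here are stand-in assumptions.

```python
# Stand-in for a learned classifier; the real system uses machine learning,
# not keyword lists, and handles sarcasm and passive-aggression.
NEGATIVE_TERMS = {"idiot", "trash", "uninstall"}
POSITIVE_TERMS = {"gg", "wp", "nice"}

def classify(message):
    """Label a chat message as negative, positive or neutral."""
    words = set(message.lower().split())
    if words & NEGATIVE_TERMS:
        return "negative"
    if words & POSITIVE_TERMS:
        return "positive"
    return "neutral"

# Hypothetical escalating penalty ladder, indexed by prior offences
PENALTIES = ["warning", "chat restriction", "temporary ban"]

def handle_report(message, prior_offences):
    """On a player report, classify the chat and pick a penalty;
    non-negative chat results in no action."""
    if classify(message) != "negative":
        return None
    step = min(prior_offences, len(PENALTIES) - 1)
    return PENALTIES[step]
```

Escalating penalties with offence history mirrors the reported outcome that most players never reoffend after a single penalty: a first report draws the lightest sanction.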
After implementing this system, Riot has seen a drastic drop in incidents of abuse, as well as repeat offences.
"Verbal abuse has dropped by more than 40 per cent," according to Lyte, "and 91.6 per cent of negative players change their act and never commit another offense after just one reported penalty."
Overall, homophobic, sexist and racist incidents in League of Legends have fallen to a combined two per cent of all games, he claimed.
The system is continually driven by player votes, so as more cases are seen by the Tribunal, the machine's attitude to punishments and rewards will evolve.
This marks another instance of bad behaviour being driven from the internet's most popular pastimes, with Reddit last week announcing a new content policy to ban harassment and death threats.
Image credit: League of Legends, developed by Riot Games
Adam Shepherd has been a technology journalist since 2015, covering everything from cloud storage and security, to smartphones and servers. Over the course of his career, he’s seen the spread of 5G, the growing ubiquity of wireless devices, and the start of the connected revolution. He’s also been to more trade shows and technology conferences than he cares to count.