Most League of Legends players will have run into abusive or foul-mouthed teammates; some of the younger players in particular hold nothing back, and their behavior often ruins the game experience for everyone else. For the developer, Riot Games, fighting malicious speech is a huge challenge: with such a large player base spread around the world, players in each country and region swear in different ways, and telling apart the words and deeds of so many players is very difficult.
A player being called a "primary schooler", a common in-game insult
Over the years, Riot Games has tried a variety of technologies, including AI, to monitor and guide players' words and deeds, and has achieved fairly good results.
Player abuse is a common sight in the game
In a recent interview with foreign media, Jeffrey Lin, lead designer of social systems at Riot Games, said the new system has become far more efficient since adopting AI. It has already identified millions of instances of malicious speech across the 15 official languages supported by League of Legends, and 92% of players flagged for abusive speech do not repeat the offense. Lin argues that Riot's system could be used not only in online games but in other kinds of online communities as well. In the past, a player would receive feedback within a week of malicious speech being confirmed; that time has now been cut to five minutes.
A few years ago, the company launched a moderation system called the Tribunal. A player's malicious remarks, once confirmed, are compiled into a "case file". Other players review these case files and vote on whether the behavior is acceptable. In general, the system is very accurate: Jeffrey Lin says 98% of the player community's verdicts match Riot's internal decisions.
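The article does not describe how Tribunal verdicts were actually tallied, but as a rough illustration of where a figure like the 98% agreement rate comes from, here is a minimal sketch in Python. The case structure, the vote labels, and the simple majority rule are all assumptions invented for this example, not Riot's real implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Case:
    """A hypothetical 'case file': chat excerpts plus community votes and an internal ruling."""
    case_id: str
    chat_excerpts: List[str]
    player_votes: List[str]    # each vote is "punish" or "pardon" (assumed labels)
    internal_decision: str     # the company's own ruling, used only for comparison

def community_verdict(case: Case) -> str:
    """Majority vote of the player community on a single case; ties fall to 'pardon'."""
    punish_votes = sum(1 for v in case.player_votes if v == "punish")
    return "punish" if punish_votes > len(case.player_votes) / 2 else "pardon"

def agreement_rate(cases: List[Case]) -> float:
    """Fraction of cases where the community verdict matches the internal decision."""
    if not cases:
        return 0.0
    matches = sum(1 for c in cases if community_verdict(c) == c.internal_decision)
    return matches / len(cases)
```

Under this toy model, an agreement rate of 0.98 over a batch of cases would correspond to the 98% figure Lin cites.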
The company's new system has also greatly improved the player "correction rate". A player who has been punished, and who is then not punished for the same offense again within a certain period of time, is considered "corrected". "When we added better feedback to punishments and attached evidence from the chat logs, the correction rate rose from 50% to 65%," Jeffrey Lin said. "With the machine learning system providing faster feedback, together with evidence, the correction rate has risen to an unprecedented 92%."
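To make the "correction rate" metric concrete, here is a small hypothetical sketch: a punished player counts as corrected if no further punishment lands within a fixed window after the first one. The 90-day window and the data layout are assumptions for illustration only; the article does not say what period Riot actually uses or how the records are stored.

```python
from datetime import datetime, timedelta
from typing import Dict, List

def correction_rate(punishments: Dict[str, List[datetime]],
                    window: timedelta = timedelta(days=90)) -> float:
    """Share of punished players with no repeat punishment within `window`
    of their first punishment. Both the window and the input shape
    (player id -> list of punishment timestamps) are illustrative assumptions."""
    corrected = 0
    total = 0
    for player, times in punishments.items():
        if not times:
            continue  # never punished, so not counted in the metric
        total += 1
        first = min(times)
        repeated = any(t > first and t - first <= window for t in times)
        if not repeated:
            corrected += 1
    return corrected / total if total else 0.0
```

In this toy model, raising the result from 0.50 toward 0.92 is what the quoted improvement in the correction rate would look like.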
Jeffrey Lin believes their experience can be applied far more broadly, and Justin Reich, a researcher at Harvard University's Berkman Center, agrees. "From Riot Games' experience we can conclude that abusive behavior does not necessarily come from bad people; much of it comes from ordinary people in a bad mood," he said. In fighting malicious speech, then, we cannot target only the deliberate trolls; we also have to accept that, under the anonymity of the Internet, people will show their worst side. Even so, malicious speech is not an incurable chronic disease. It is a problem that can be tackled through technology, experimentation, and community involvement.
"The challenges we face in the League of Legends can be seen on any game, platform, community and forum." "Therefore, we are happy to share the data and experience with more people in the industry," Lin said. We hope that other companies can look at these conclusions and realize that malicious online speech is not an unsolved problem. ”
Still, this editor has to wonder: the Chinese language is vast and profound. Never mind cursing in classical Chinese rather than the vernacular, there are countless ways to swear in the various dialects. Can Riot Games' AI really tell them all apart?