By analyzing in-game chat messages alone, the chat-moderation AI jointly launched by Google and FACEIT has banned 20,000 toxic players in its first six weeks.
The AI, named Minerva, was developed in collaboration between FACEIT and Google. Trained via machine learning, it analyzes in-game chats, issues warnings for offensive messages, and flags possible spam. It sends notifications of bans or warnings as soon as the game ends, and the penalty for repeat offenses grows harsher.
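The escalating-penalty flow described above can be sketched in a few lines. This is a hypothetical illustration only, not FACEIT's implementation; the class, method names, and thresholds are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ChatModerator:
    """Toy sketch of escalating post-match penalties for flagged players."""
    offenses: dict = field(default_factory=dict)

    def flag_toxic(self, player: str) -> str:
        # Count this offense and escalate based on the player's history.
        count = self.offenses.get(player, 0) + 1
        self.offenses[player] = count
        if count == 1:
            return "warning"        # first offense: warning after the match
        if count <= 3:
            return "temporary ban"  # repeated offense: harsher penalty
        return "permanent ban"      # persistent abuse: strongest penalty

mod = ChatModerator()
mod.flag_toxic("player1")  # "warning"
mod.flag_toxic("player1")  # "temporary ban"
```

The key design point mirrored here is that penalties are tied to per-player offense history rather than issued uniformly, so repeat offenders face progressively harsher outcomes.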
The AI was first introduced in August, and since then it has issued 90,000 warnings, marked 7,000,000 messages as toxic, and banned 20,000 players. While the AI was active from August to September, the number of toxic messages dropped by 20%, and the number of unique players sending toxic messages dropped by 8%.
In a recent blog post, FACEIT said:
“In-game chat detection is only the first and most simplistic of the applications of Minerva and more of a case study that serves as a first step toward our vision for this AI.
We’re really excited about this foundation as it represents a strong base that will allow us to improve Minerva until we finally detect and address all kinds of abusive behaviors in real-time.
In the coming weeks we will announce new systems that will support Minerva in her training.”