Riot and Ubisoft Partner on Research Aimed at Tackling Online Toxicity

The two gaming companies have launched a collaborative research initiative that will use AI to detect and mitigate toxic behavior in games.

League of Legends and VALORANT developer Riot Games announced that it has teamed up with Assassin’s Creed and Rainbow Six Siege studio Ubisoft on a research project aimed at curbing toxicity in games and fostering more positive gaming communities.

The research project, called "Zero Harm in Comms," aims to build a database of in-game chat data that will be used to train AI-based "preemptive moderation tools." These tools are meant to detect and mitigate abusive behavior in-game, filtering out toxic content before it is even shared with other players.
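To make the idea of preemptive moderation concrete, here is a minimal Python sketch of how such a filter could work: a classifier trained on labeled chat lines scores each new message before it is delivered, and flagged messages are withheld. The dataset, model, and threshold below are illustrative placeholders, not details of the announced project.

```python
# A minimal sketch of the "preemptive moderation" concept: score each chat
# message before it is broadcast, and withhold messages classified as toxic.
# The tiny dataset, model choice, and threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled chat lines: 1 = toxic, 0 = acceptable.
messages = [
    "gg well played", "nice shot!", "good luck have fun",
    "uninstall the game you idiot", "you are trash, leave",
    "worst player ever, go away",
]
labels = [0, 0, 0, 1, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

def moderate(message: str, threshold: float = 0.5) -> bool:
    """Return True if the message may be delivered to other players."""
    toxic_prob = model.predict_proba([message])[0][1]
    return toxic_prob < threshold

for msg in ["gg everyone", "you are trash"]:
    verdict = "delivered" if moderate(msg) else "withheld"
    print(f"{msg!r} -> {verdict}")
```

A production system would of course rely on far larger datasets and more capable models, which is precisely the kind of training data the shared database is meant to provide.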

Riot said that the project is just the first step in the companies' cross-industry efforts to "benefit all people who play video games."

"Riot and Ubisoft are aligned in their mission to create gaming structures that foster more rewarding social experiences and avoid harmful interactions," Riot wrote in the announcement. "As members of the Fair Play Alliance, both companies believe that improving the social dynamics of online games will only come through communication, collaboration, and joint efforts across the gaming industry."

The companies said they will share the initial study findings with the gaming industry next year.

You can learn more by reading the full announcement.
