Marvel Rivals Now Uses AI to Monitor Everything You Say in Voice Chat

Using "inappropriate language" will get you "penalized," though what qualifies as "inappropriate" and the nature of the punishment remain unclear.

The mass removal of games from Steam and Itch.io, the UK's oppressive Online Safety Act, talk of similar laws being proposed in other countries, the growing requirement to scan your face or hand over private information just to access everyday websites like Spotify, Discord, YouTube, and Reddit: the sheer coordination of these moves to restrict the internet and curb digital freedoms is enough to convince even the most skeptical that this simultaneous surge in censorship across industries and continents is anything but a coincidence.

Now joining this list of concerning developments is, of all things, NetEase's third-person hero shooter Marvel Rivals, which, as it turns out, recently introduced an AI surveillance system that monitors everything you say in voice chat.

As revealed by Rivals' Creative Director Guangguang in the latest Dev Vision Vol. 08 update breakdown, the system was implemented on July 24 – a week ago – which is strange, considering he describes it as an upcoming feature in a video released on July 30.

Regardless, the system is tracking everything you say in voice chat and, with human oversight, will "penalize" players for using "inappropriate language." Naturally, as is usually the case in such situations, the wording was – whether intentionally or not – somewhat vague, leaving it unclear what exactly qualifies as "inappropriate" and what the punishment entails.

According to Guangguang, this system is meant to help the devs "catch toxic behavior," bringing up the age-old question of whether it's really worth allowing a massive corporation to listen in on everything you say just to stop 14-year-olds from trash-talking you through a $5 mic.
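For illustration only, here is a rough Python sketch of how an "AI flags, human reviews" moderation loop of this kind generally works: a speech-to-text transcript is scored by some toxicity model, and anything above a threshold lands in a queue for a human moderator to decide on a penalty. Everything in it (the VoiceClip structure, the keyword-based scorer, the threshold) is an invented stand-in, not anything NetEase has described about its own system.

```python
# Illustrative sketch only: a generic "AI flags, human reviews" moderation loop.
# The transcriber output, blocked-term list, and scoring logic are placeholders
# for whatever speech-to-text and toxicity models a real system would use.
from dataclasses import dataclass
from typing import List


@dataclass
class VoiceClip:
    player_id: str
    transcript: str          # assumed output of a speech-to-text step
    toxicity_score: float = 0.0
    flagged: bool = False


BLOCKED_TERMS = {"slur_a", "slur_b"}  # placeholder terms, not a real filter list


def score_transcript(transcript: str) -> float:
    """Toy classifier: fraction of words that match the blocked list."""
    words = transcript.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in BLOCKED_TERMS)
    return hits / len(words)


def triage(clips: List[VoiceClip], threshold: float = 0.1) -> List[VoiceClip]:
    """AI pass: flag clips above the threshold and hand them to human reviewers."""
    review_queue = []
    for clip in clips:
        clip.toxicity_score = score_transcript(clip.transcript)
        if clip.toxicity_score >= threshold:
            clip.flagged = True
            review_queue.append(clip)  # a human moderator decides the penalty
    return review_queue
```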

Image credit: NetEase

And if you thought, "Heck, at least the text chat will be left alone," there's a surprise for you too. Going forward, players will be able to mute specific words, meaning any message containing those words won't be shown to them.

While this approach is arguably the best and most reasonable of the bunch, promoting personal responsibility, letting players tailor their own experiences, and allowing those who don't mind foul language in a video game to enjoy it as they please, its advantages may not last: Guangguang noted that commonly muted words could eventually be added to the official filter list, once again imposing a blanket restriction on everyone's experience.
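To make the mute-word mechanic concrete, here is a minimal Python sketch of the idea as described: each player keeps a personal mute list, messages containing a muted word are hidden for that player, and words muted by enough players can be promoted to a global filter. The class name, the promotion threshold, and the word-matching logic are assumptions for illustration, not NetEase's actual implementation.

```python
# Hedged sketch of the per-player mute-word idea: players mute words for
# themselves, and widely muted words can be promoted to a global filter.
# Names and thresholds are invented for illustration.
from collections import Counter
from typing import Dict, Set


class ChatFilter:
    def __init__(self) -> None:
        self.player_mutes: Dict[str, Set[str]] = {}   # player_id -> muted words
        self.global_filter: Set[str] = set()          # words hidden for everyone
        self._mute_counts: Counter = Counter()        # how many players mute each word

    def mute_word(self, player_id: str, word: str) -> None:
        """Add a word to one player's personal mute list."""
        word = word.lower()
        if word not in self.player_mutes.setdefault(player_id, set()):
            self.player_mutes[player_id].add(word)
            self._mute_counts[word] += 1

    def promote_common_mutes(self, min_players: int = 1000) -> None:
        """Move words muted by many players into the global filter list."""
        for word, count in self._mute_counts.items():
            if count >= min_players:
                self.global_filter.add(word)

    def should_show(self, player_id: str, message: str) -> bool:
        """Hide a message if it contains a globally filtered or personally muted word."""
        words = set(message.lower().split())
        if words & self.global_filter:
            return False
        return not (words & self.player_mutes.get(player_id, set()))
```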

As questionable as NetEase's approach is, it is by no means new. A similar system was introduced at launch in Activision's Call of Duty: Black Ops 6 to moderate derogatory language in voice chat and analyze text chat traffic in near real-time.

In the lead-up to release, Activision faced criticism over its priorities: many argued the studio should focus on fighting cheaters rather than policing player speech, condemned the moderation system, noted that players could easily bypass it with personalized insults rather than recognizable slurs, and objected to the broader concept of having their voice and text chat monitored at all.

So, what's your take on all this? How do you feel about the end of the "Wild West" era of in-game voice chats? Share your thoughts in the comments!

Don't forget to join our 80 Level Talent platform and our new Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
