Riot Games has shared a key update on combating voice and chat toxicity in Valorant, introducing tougher and quicker penalties for offenders alongside enhancements to its moderation systems.
In a blog post last year, Riot outlined priorities for curbing poor player behavior, such as repeated AFK offenses and toxic communications. The latest progress report details upcoming changes aimed at fostering a healthier gaming environment.
Riot currently relies on player reports combined with automated text moderation to tackle offensive behavior, including slurs, threats, taunts, and hostile language.
In January alone, these measures produced roughly 400,000 voice and text mutes, plus 40,000 bans for repeated toxic communications, with ban lengths ranging from a few days to permanent.

Despite this, Riot acknowledges that harassment rates haven't dropped significantly, describing past efforts as "at best, neutral" with much more work needed in 2022 and beyond.
To address this, Riot is piloting a program in Turkey to train player support specialists who will review high-priority behavior reports and enforce rules consistently. If the pilot succeeds, the program will expand to other regions.

In the short term, with growing confidence in its automated detection systems, Riot plans to ramp up penalty severity and speed, enabling faster removal of toxic players.
It will also update text moderation so that zero-tolerance terms trigger an immediate mute during the match, rather than a penalty applied only after the match ends.

"Tackling and deterring toxic voice chat is a team effort that puts feedback at the center," Riot states. "We've made strides to improve the experience for all. Please continue reporting toxicity in-game, check the muted words summary, share your feedback, and help us build a safer Valorant."