Twitch is Rolling Out a New Safety Feature to Combat Toxic Users
In September, Streamlabs released a “Safe Mode” feature to help streamers protect themselves from hate raids, a practice in which streamers, particularly members of marginalised communities, are flooded with racially or sexually discriminatory language from a large number of users or bot accounts. But Streamlabs is a third-party tool, not part of Twitch itself, and to many users the platform still wasn’t doing enough to officially protect them. Other incidents and patterns of behaviour have also prompted users to call on Twitch for help, and today the company announced a new feature called “Suspicious User Detection” as a step towards making the community safer.
Suspicious User Detection is a new safety feature that will give streamers and channel mods more power against users who are trying to evade channel-level bans.
“When you ban someone from your channel, they should be banned from your community for good,” Twitch said in an announcement. “Unfortunately, bad actors often choose to create new accounts, jump back into Chat, and continue their abusive behaviour. Suspicious User Detection, powered by machine learning, is here to help you identify those users based on a number of account signals. By detecting and analysing these signals, this tool will flag suspicious accounts as either ‘likely’ or ‘possible’ channel-ban evaders, so you can take action as needed.”
Accounts that are flagged as “likely” will have their messages blocked from Chat. Those messages will, however, still be visible to streamers and mods, who can then choose to leave the restriction as is, monitor the user for inappropriate behaviour, or ban them from the channel.
Messages from “possible” ban evaders will appear in Chat normally, but the account will be flagged to the streamer and their mods, who can monitor the user and restrict them from Chat if they think it’s necessary.
Streamers and mods will also have finer control over how the feature works: they can choose to restrict messages from both likely and possible ban evaders if and when they want to be extra cautious.
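Twitch hasn’t published how the tool works under the hood, but the two-tier behaviour described above can be pictured with a short sketch. Everything here is hypothetical and purely illustrative (the labels, the `ChannelSettings` option, and the `handle_message` helper are made-up names, not Twitch’s actual API): “likely” evaders have their messages held back from public Chat but kept visible to the streamer and mods, while “possible” evaders are shown normally and simply flagged, unless the channel opts into restricting them too.

```python
from dataclasses import dataclass
from enum import Enum


class EvasionLabel(Enum):
    LIKELY = "likely"      # flagged as a likely channel-ban evader
    POSSIBLE = "possible"  # flagged as a possible channel-ban evader
    NONE = "none"          # no ban-evasion signals detected


@dataclass
class ChannelSettings:
    # Streamers/mods can opt to also restrict "possible" evaders
    # when they want to be extra cautious.
    restrict_possible_evaders: bool = False


def handle_message(label: EvasionLabel, settings: ChannelSettings) -> str:
    """Decide what happens to a chat message based on the sender's flag."""
    if label is EvasionLabel.LIKELY:
        # Held back from public Chat, but still visible to the streamer
        # and mods, who can keep monitoring or ban the account outright.
        return "hold: visible to streamer/mods only"
    if label is EvasionLabel.POSSIBLE:
        if settings.restrict_possible_evaders:
            # Channel has chosen to treat "possible" evaders like "likely" ones.
            return "hold: visible to streamer/mods only"
        # Shown in Chat as normal, but flagged so mods can watch the
        # account and restrict it later if needed.
        return "deliver: flagged for monitoring"
    return "deliver"


# Example: a cautious channel that also restricts "possible" evaders.
settings = ChannelSettings(restrict_possible_evaders=True)
print(handle_message(EvasionLabel.POSSIBLE, settings))
# -> hold: visible to streamer/mods only
```

The key design point the announcement stresses is that the system never auto-bans anyone: even the strictest setting only holds messages for review, leaving the final call with the streamer and their mods.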
Twitch has warned that the feature won’t be 100% accurate, especially at launch, because it uses machine learning. The tool will learn over time, though, and is set up to give streamers and mods final say over who can participate in Chat. It’s a relatively small step from Twitch, but one that should shield streamers from users exhibiting toxic or inappropriate behaviour.