Twitter is testing a new ‘Safety Mode’ feature that temporarily blocks accounts for seven days for using insults or hateful remarks, a move aimed at curbing harmful language on the microblogging platform.
The new safety feature has been rolled out to a small feedback group on iOS, Android, and Twitter.com, beginning with accounts that have English-language settings enabled, Twitter said in a blog post on Wednesday.
“We’ve rolled out features and settings that may help you to feel more comfortable and in control of your experience, and we want to do more to reduce the burden on people dealing with unwelcome interactions.
“Unwelcome Tweets and noise can get in the way of conversations on Twitter, so we’re introducing Safety Mode, a new feature that aims to reduce disruptive interactions.”
The Safety Mode feature temporarily blocks accounts for seven days for using potentially harmful language, such as insults or hateful remarks, or for sending repetitive and uninvited replies or mentions.
When the feature is turned on in Settings, Twitter’s systems will assess the likelihood of a negative engagement by considering both the tweet’s content and the relationship between the tweet author and replier, Twitter said.
“Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be autoblocked,” it added.
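Twitter has not published implementation details, but the description above suggests a two-part check: a content signal for potentially harmful language and a relationship signal that exempts accounts the recipient follows or frequently interacts with. The Python sketch below is purely illustrative; the function names, keyword list, weights, and threshold are assumptions for the sake of example, not Twitter's actual system.

```python
# Illustrative sketch only: a hypothetical scoring step that combines a
# content signal with a relationship signal, loosely following the behaviour
# described in Twitter's blog post. All names and thresholds are assumptions.

from dataclasses import dataclass


@dataclass
class Reply:
    text: str
    author_id: str


def harmful_language_score(text: str) -> float:
    """Placeholder for a text classifier returning a 0-1 harm score."""
    insults = {"idiot", "stupid", "trash"}  # toy keyword list, not a real model
    words = {w.strip(".,!?").lower() for w in text.split()}
    return 1.0 if words & insults else 0.0


def should_autoblock(reply: Reply,
                     recipient_follows: set[str],
                     frequent_contacts: set[str],
                     threshold: float = 0.8) -> bool:
    """Skip accounts the recipient follows or often interacts with,
    then flag for autoblock only if the content score exceeds a threshold."""
    if reply.author_id in recipient_follows or reply.author_id in frequent_contacts:
        return False  # existing relationships are never autoblocked
    return harmful_language_score(reply.text) >= threshold


# Example: an insulting reply from a stranger would be flagged,
# while the same text from a followed account would not.
print(should_autoblock(Reply("you idiot", "stranger42"), {"friend1"}, {"colleague9"}))
```

In the real feature, a flagged account is then autoblocked for seven days; the blog post does not describe how the content or relationship signals are actually computed.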
For those in the feedback group, Safety Mode can be enabled through the ‘Privacy and safety’ option in Settings.
“Authors of Tweets found by our technology to be harmful or uninvited will be autoblocked, meaning they’ll temporarily be unable to follow your account, see your Tweets, or send you Direct Messages,” it said.
Twitter said that, throughout the product development process, it conducted several listening and feedback sessions with trusted partners with expertise in online safety, mental health, and human rights, including members of its Trust and Safety Council.
Their feedback influenced adjustments to make Safety Mode easier to use and helped the company think through ways to address the potential manipulation of its technology, it added.
“These trusted partners also played an important role in nominating Twitter account owners to join the feedback group, prioritising people from marginalized communities and female journalists…We want you to enjoy healthy conversations, so this test is one way we’re limiting overwhelming and unwelcome interactions that can interrupt those conversations,” it said.
Twitter added that its goal is to better protect the individual on the receiving end of tweets by reducing the prevalence and visibility of harmful remarks.
“We’ll observe how Safety Mode is working and incorporate improvements and adjustments before bringing it to everyone on Twitter. Stay tuned for more updates as we continue to build on our work to empower people with the tools they need to feel more comfortable participating in the public conversation,” it added.