Twitter is stepping up efforts to give users who experience harassment tools to protect themselves.
The social media platform announced that it will be testing a new feature called “Safety Mode,” intended to shield users from being overwhelmed by harmful tweets or unwanted replies and mentions. The feature will temporarily block accounts that send harmful language or repeated, uninvited replies and mentions from interacting with the targeted user.
Over the years, Twitter has often borne criticism for the recurring presence of hateful or abusive content posted to its platform by users. In rare and unfortunate cases, that harmful content has spilled into the real world, most often when its target is a marginalized group.
Twitter hasn’t announced any major protective features since 2017, when it released, among other tools, its “safe search” function and the ability for users to block potentially abusive or “low-quality” tweets from appearing in their conversations.