Twitter confirmed it's testing a new feature that flags users' profiles as potentially including sensitive content. When you click through to one of these profiles from a link on Twitter, or if you visit the profile's web page directly, you won't immediately be shown the user's tweets. Instead, a warning message displays, reading "Caution: This profile may include sensitive content."
When you click a link to the profile on Twitter, the message appears in a pop-up window. And if you visit the profile directly, the warning message is all that displays until you agree to view the content by clicking the "Yes, view profile" button.
A reporter at Mashable first spotted the feature when trying to view the profile of technology analyst Justin Warren, but could not determine how the content was flagged.
Image credit: Mashable
Twitter tells us the new feature works similarly to how other sensitive content on Twitter gets flagged, based on users' settings.
Currently, the company permits content that contains violence or nudity, but it draws the line at pornography or excessive violence in live video, or in your profile image or header image, according to its page on sensitive media. It doesn't mention profanity, racism, bigotry and other types of offenses, however. But sensitive content is not limited to violence or nudity, we're told.
Users can choose to mark themselves as someone who tweets sensitive content through their Privacy and Safety settings.
In addition, other Twitter users can report tweets to the Twitter team for review. In this case, if the tweet is determined to be potentially sensitive, Twitter will label the content appropriately or remove it, if it's a live video. It may also adjust your account setting for you, so your future tweets are marked accordingly, if it deems it necessary.
"For repeat violations, Twitter may permanently adjust that setting on your behalf," it says.
The process for marking entire profiles as sensitive follows a similar set of guidelines and processes, including the fact that Twitter can take an active role in identifying these accounts, based on the content of the accounts' tweets.
The feature is still in testing and has not been widely rolled out at this time.
A Twitter spokesperson confirmed the new feature, saying "this is something we're testing as part of our broader efforts to make Twitter safer."
In recent days, Twitter has taken a number of steps to address the issues of safety and abuse on its network. It has rolled out new filters for hiding harassing content, safer search results, a "time out" feature for bullies, user interface tweaks to hide low-quality and abusive tweets, a better Mute option, more transparency around abuse reporting and smarter algorithms for identifying and handling abusive content, as well as measures that prevent banned abusers from returning under new accounts.
Warning users about select individuals is not necessarily another change aimed at quelling abuse, but rather at making the network feel more friendly. It's not exactly a novel idea, of course. Plenty of networks flag content that's not appropriate for all to see, like YouTube's warnings on age-restricted content or Facebook's warnings about graphic content, for example.
As with other new anti-abuse features, some people seem genuinely baffled as to why they were flagged, seemingly unable to connect the dots between their tweets and the consequences.
Case in point (um, sensitive content warning):
Oh, such as?
(I mean, perhaps someone didn't like the tweets amid the posts of art and literature and Italy and reported them? Idk, wild guess!!)
Twitter did not say when the new feature would be more broadly available.