Twitter Inc will begin overhauling how users report harmful tweets in an effort to make it easier for people to describe what is wrong with the content they are seeing, the social networking site said on Tuesday.
The move, which will begin with a small test of Twitter Web users in the United States, comes amid widespread criticism that tech companies including Twitter, Meta Platforms Inc and Alphabet Inc's YouTube are doing too little to shield users from harmful or abusive content online.
Instead of requiring users to specify how a tweet violates Twitter's rules, which presumes knowledge of the company's policies, the new process will ask users whether they feel they have been attacked with hate, harassed, intimidated with threats of violence or shown content related to self-harm, among other concerns.
Users will also be allowed to describe in their own words why they are flagging the content, Twitter said. The process is akin to a doctor asking a patient about their symptoms and "where does it hurt?" rather than "is your leg broken?" the company said.
"In moments of urgency, people need to be heard and feel supported," Brian Waismeyer, a data scientist on Twitter's health user experience team, said in a statement.
Twitter added that the new process will allow it to gather more granular information on tweets that do not explicitly violate its rules, but that users might nonetheless find problematic or upsetting, which will help the company update its policies in the future.
It is the latest in a series of changes Twitter has made to improve user safety. Last month, the company said it would begin prohibiting the sharing of "personal media," such as photos and videos, without the consent of the person depicted.
(Reporting by Sheila Dang in Dallas; Editing by Matthew Lewis for Reuters)