On 8 July, Instagram announced new features aimed at curbing online bullying on its platform, including a warning to people preparing to post abusive remarks. “It’s our responsibility to create a safe environment on Instagram,” said a statement from Adam Mosseri, head of the visually focused social platform owned by Facebook.
“This has been an important priority for us for some time, and we are continuing to invest in better understanding and tackling this problem.” One new tool being rolled out is a warning generated by artificial intelligence (AI) to notify users that their comment may be considered offensive before it is posted.
“This intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification,” Mosseri said. “From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect.”
Another new tool is aimed at limiting the spread of abusive comments on a user’s feed. “We have heard from young people in our community that they are reluctant to block, unfollow, or report their bully because it could escalate the situation, especially if they interact with their bully in real life,” Mosseri commented.
A new feature called “restrict”, which is being tested, will make comments from an offending person visible only to that person. “You can choose to make a restricted person’s comments visible to others by approving their comments,” Mosseri added.
“Restricted people won’t be able to see when you are active on Instagram or when you have read their direct messages.” The move by Instagram is the latest in a series of steps by social networks to deal with cyberbullying, hate speech, and abusive conduct, which can be especially harmful to young users.