Facebook publishes content removal policies for the first time

Facebook's 27-page document governs the behaviour of more than 2 billion users, giving the social network's definitions of hate speech, violent threats, sexual exploitation and more

Sarah Frier
Updated 24 Apr 2018, 07:12 PM IST
The release of Facebook’s content policies comes just days after CEO Mark Zuckerberg testified to Congress. Photo: Reuters

San Francisco: For the first time, Facebook Inc. is letting people know its specific rules for taking down content once it’s reported to the social network’s moderators.

The 27-page document governs the behaviour of more than 2 billion users, giving Facebook’s definitions of hate speech, violent threats, sexual exploitation and more. It’s the closest the world has come to seeing an international code of conduct that was previously enforced behind closed doors. The release of the document follows frequent criticism and confusion about the company’s policies.

The community standards read like the result of years of trial and error and are meant to give workers enough specificity to make quick and consistent judgments. Fully nude close-ups of buttocks aren’t allowed, for example, unless the image is “photoshopped on a public figure.” Content from a hacked source isn’t acceptable “except in limited cases of newsworthiness.”

Facebook published the policies to help people understand where the company draws the line on nuanced issues, Monika Bickert, vice-president of global policy management, said in a blog post. The company will for the first time give people a right to appeal its decisions.

The release of the content policies comes just days after chief executive officer Mark Zuckerberg testified to Congress, where he faced frequent questions about the company’s practices. Lawmakers asked whether Facebook unfairly takes down more conservative content than liberal content, and why bad content, such as fake profiles and posts selling opioid drugs, stays up even after it has been reported.

“Our policies are only as good as the strength and accuracy of our enforcement—and our enforcement isn’t perfect,” Bickert said. “In some cases, we make mistakes because our policies are not sufficiently clear to our content reviewers. More often than not, however, we make mistakes because our processes involve people, and people are fallible.”

The company has 7,500 content reviewers, up 40% from a year earlier, working in 40 languages. Facebook also has said it’s working to increase the number of workers who speak the languages that require more attention. But with 2.2 billion users around the world, each reviewer is responsible, on average, for a user base equivalent to the population of Pittsburgh, Pennsylvania.

In his testimony to Congress, Zuckerberg promised that the application of standards would improve with artificial intelligence. Those systems, however, are trained by humans and are likely to also be fallible.

While Facebook aspires to apply its policies “consistently and fairly to a community that transcends regions, cultures and languages,” it frequently fails to do so. Recent reports in the New York Times have detailed how the company has been slow to respond to posts that incite ethnic violence in countries including Myanmar and Sri Lanka. In many cases, people have reported content that Facebook has allowed to remain up, even though it violates standards.

The policies are detailed when it comes to specific problems the US has experienced, especially gun violence. Profiles of mass murderers, defined as people who have killed four people at once, are removed if the perpetrator was convicted or identified by law enforcement with images from the crime, or took their own life or was killed at the scene or in the aftermath.

One of Facebook’s definitions of harassment involves “claims that a victim of a violent tragedy is lying about being a victim, acting/pretending to be a victim of a verified event, or otherwise is paid or employed to mislead people about their role in the event.” That definition applies only when a message is sent directly to a survivor or immediate family member, as occurred with victims of the Sandy Hook and Parkland shootings.

Now that Facebook has published the policies, it’s asking for feedback to edit them. The company will host a series of events around the world to solicit advice, starting in May.

“Our efforts to improve and refine our Community Standards depend on participation and input from people around the world,” Bickert said. Bloomberg
