Meta has relaxed its rules on hate speech and abuse, following the lead of Elon Musk-owned X, particularly with respect to sexual orientation, gender identity and immigration status, and has shut down its fact-checking programme on its social media platforms, AP reported.
On Tuesday, Meta CEO Mark Zuckerberg said that the company will “remove restrictions on topics like immigration and gender that are out of touch with mainstream discourse,” citing reasons such as “recent elections”.
Meta has also added new language to the community standards that its users are required to follow.
“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”
Simply put, Meta now permits users to call gay people mentally ill on its social media platforms, such as Facebook, Threads and Instagram, AP said. At the same time, Meta says “harmful stereotypes historically linked to intimidation”, such as Blackface and Holocaust denial, remain prohibited.
Menlo Park, California-based Meta has also removed a line from its “policy rationale” explaining why it restricts certain hateful conduct. The deleted sentence said that hate speech “creates an environment of intimidation and exclusion, and in some cases may promote offline violence,” the report said.
“The policy change is a tactic to earn favour with the incoming administration while also reducing business costs related to content moderation,” Ben Leiner, a lecturer at the University of Virginia’s Darden School of Business, who studies political and technology trends, told AP.
“This decision will lead to real-world harm, not only in the United States, where there has been an uptick in hate speech and disinformation on social media platforms, but also abroad, where disinformation on Facebook has accelerated ethnic conflict in places like Myanmar,” Leiner added.
In 2018, Meta acknowledged that it had failed to prevent its platform from being used to incite offline violence in Myanmar and to promote hatred and violence against the Rohingya Muslim minority.
While most of the attention has gone to the company's fact-checking announcement on Tuesday, the changes to Meta's harmful content policies are worrying, Arturo Béjar, a former engineering director at Meta known for his expertise in tackling online harassment, told AP.
According to him, the new policy is a concern because Meta will now rely more on users reporting problems before it takes action, rather than proactively enforcing its rules against issues such as self-harm, bullying and harassment.
Meta said it plans to focus its automated systems on “tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams.”
“Meta knows that by the time a report is submitted and reviewed the content will have done most of its harm,” Béjar said.
“I shudder to think what these changes will mean for our youth, Meta is abdicating their responsibility to safety, and we won’t know the impact of these changes because Meta refuses to be transparent about the harms teenagers experience, and they go to extraordinary lengths to dilute or stop legislation that could help,” he added.