Elon Musk’s Community Notes feature on X is helping fight misinformation

The feature relies on volunteers to flag misleading posts and add corrective commentary with links to scientific papers or media sources. (REUTERS)

Summary

  • Twitter’s old system relied on fact-checkers whose identity and scientific credentials were unknown. Several studies have pitted crowdsourcing against professional fact-checkers and found it works just as well for checking news accuracy. Credit the wisdom of crowds.

After Elon Musk bought Twitter (now X) in 2022, the social media company got rid of many of its moderators, slashed the system whereby users could flag tweets for review, and ramped up a different system to fight misinformation: a form of crowdsourcing called Community Notes. A wave of outrage followed these changes. But the Community Notes feature has the benefit of transparency, and a new academic review suggests it is working, at least on scientific and medical issues.


Researchers who study social media worry about rampant hate speech and incitements to violence; in 2023, it became prohibitively expensive for researchers to get the data needed to study these problems. But the lead author of the new study, behavioural scientist John Ayers of the University of California, San Diego, said Community Notes data was easy to obtain.

And for hashing out factual issues in fields like science and health, social scientists recommend a crowdsourcing approach, citing studies that show the power of collective intelligence. Several studies have pitted crowdsourcing against professional fact-checkers and found the former works just as well for checking the accuracy of news stories.

Ayers and other researchers looked at the accuracy of X’s Community Notes, using the contentious issue of covid vaccines as a test case. The results, published in the Journal of the American Medical Association, showed these X notes were almost always accurate and usually cited high-quality sources. The X feature relies on volunteers to flag misleading posts and add corrective commentary with links to scientific papers or media sources. Other users can vote on the notes’ value.
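The JAMA study evaluated the notes’ content; deciding which notes get shown is algorithmic. X has open-sourced its scoring code (github.com/twitter/communitynotes), which it describes as ‘bridging-based’: a matrix-factorization model separates a note’s overall helpfulness from raters’ viewpoint alignment, so a note is displayed only if it is rated helpful by people who usually disagree. The sketch below is a toy illustration of that idea, not the production algorithm; the tiny dataset, hyperparameters and cutoff are illustrative assumptions.

```python
# Toy sketch of "bridging-based" note scoring, loosely following the approach
# X has publicly documented for Community Notes (github.com/twitter/communitynotes).
# All data and parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# ratings[u][n] = 1.0 if rater u found note n helpful, 0.0 if not, NaN if unrated.
ratings = np.array([
    [1.0, 0.0, np.nan],
    [1.0, np.nan, 0.0],
    [np.nan, 1.0, 1.0],
    [1.0, 0.0, 1.0],
])
n_users, n_notes = ratings.shape

# Model each rating as: mu + user_bias + note_bias + user_factor * note_factor.
# The 1-D factor term absorbs "viewpoint" agreement, so a note's intercept
# (note_bias) reflects helpfulness that bridges raters who usually disagree.
mu = 0.0
user_bias = np.zeros(n_users)
note_bias = np.zeros(n_notes)
user_fac = rng.normal(scale=0.1, size=n_users)
note_fac = rng.normal(scale=0.1, size=n_notes)

lr, reg = 0.05, 0.03  # learning rate and regularization (illustrative values)
observed = [(u, n) for u in range(n_users) for n in range(n_notes)
            if not np.isnan(ratings[u, n])]

for _ in range(2000):  # plain stochastic gradient descent over observed ratings
    for u, n in observed:
        pred = mu + user_bias[u] + note_bias[n] + user_fac[u] * note_fac[n]
        err = ratings[u, n] - pred
        mu += lr * err
        user_bias[u] += lr * (err - reg * user_bias[u])
        note_bias[n] += lr * (err - reg * note_bias[n])
        u_f, n_f = user_fac[u], note_fac[n]
        user_fac[u] += lr * (err * n_f - reg * u_f)
        note_fac[n] += lr * (err * u_f - reg * n_f)

# A note is shown as "Helpful" only if its intercept clears a threshold,
# i.e. it was rated helpful even after viewpoint effects are factored out.
HELPFUL_THRESHOLD = 0.4  # illustrative; the open-source code uses a cutoff in this range
for n in range(n_notes):
    status = "Helpful" if note_bias[n] >= HELPFUL_THRESHOLD else "Needs more ratings"
    print(f"note {n}: intercept={note_bias[n]:.2f} -> {status}")
```

The design choice this illustrates is that raw vote counts are not enough: a note praised only by one ideological camp gets its approval absorbed into the factor term rather than the intercept, so it never crosses the display threshold.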

The old system relied on fact-checkers whose identities and scientific credentials were unknown. They could take down posts they deemed misinformation, ban users or use ‘shadow bans,’ by which posts are hidden without the user’s knowledge.


Content moderators employed by social media companies have also been attacked for moving too slowly and failing to take down hateful or violent content. It may be impossible for any social media company to keep up, which is why it’s important to explore other approaches.

The new system isn’t perfect, but it does appear pretty accurate. In the JAMA study, researchers looked at a sample of 205 Community Notes on covid vaccines. They agreed the user-generated information was accurate 96% of the time and that the cited sources were of high quality 87% of the time. While only a small fraction of misleading posts were flagged, those that did get notes attached were among the most viral, Ayers said.

Psychologist Sacha Altay says people tend to underestimate the power of collective intelligence, which has proven surprisingly good for forecasting and assessing information, as long as enough people participate. The public perception of misinformation is often distorted by political biases, outrage and self-delusion. 


Last year, Oxford researchers prompted some reflection with a study titled ‘People Believe Misinformation Is a Threat Because They Assume Others Are Gullible.’ In other words, the people most outraged about fake news aren’t worried they’ll be fooled; they’re worried others will be. But we tend to overestimate our own discernment.

During the pandemic, content moderators labelled lots of subjective statements as misinformation, especially those judging various activities to be ‘safe.’ But there’s no scientific definition of safe, which is why people could talk past each other for months about whether it was safe to let kids back into school or gather without masks. Much of this ‘misinformation’ was just minority opinion.

Twitter’s old post-censorship system assumed that people skip vaccines or otherwise make bad choices because they are exposed to misinformation. But another possibility is that lack of trust is the real problem—people lose trust in health authorities or can’t find the information they want, and that causes them to seek out fringe sources. If that’s the case, censorship could create more distrust by stifling open discussion about important topics.

Of course, people don’t usually portray themselves as pro-censorship, even if that’s what’s happening. Conservatives are more likely to accept censorship of material they deem indecent, while liberals are more likely to tolerate censorship of information they deem harmful.

But both sides should approve of any system that discourages blind assumptions and snap judgements and encourages open discussion, reflection and the deployment of collective minds. Musk is a divisive figure and there’s plenty to dislike about recent changes at X, but Community Notes is an upgrade. ©Bloomberg
