To combat fake news, Facebook needs to ask the public for help identifying false reporting.
A recent New York Times investigation described how Facebook has bungled its response to the misinformation that has proliferated on its platform. Chief Executive Mark Zuckerberg acknowledged in an interview that the problems his company is grappling with “are not issues that any one company can address." He’s right: The problem of fake news has become too big for any social network to solve on its own. Instead, the company should call on its users for help through crowdsourcing.
Misinformation is rife on Facebook and other social networks: Russia attempted to interfere in the US midterm elections, the Saudis employ hundreds of trolls to attack critics, fake activists in Bangladesh have been promoting non-existent US women’s marches to sell merchandise, there was a huge disinformation campaign during last month’s general election in Brazil, and fake news has triggered episodes of violence in countries including India, Myanmar and Germany.
Facebook has created a War Room where staffers try to identify misinformation, but they’re clearly outnumbered and unable to keep up with the fake news on the platform. Part of the problem is that the team relies on artificial intelligence, but, as experts recently explained in the Times, keywords often can’t effectively identify misinformation. Human intelligence is needed. To combat fake news, Facebook needs to ask the public for help identifying false reporting.
The best way to handle a project too large for any one organization is to ask lots of volunteers to help. That’s how the Oxford English Dictionary was created: The editors asked members of the public to search the books they owned for definitions of particular words and mail in their findings. Thousands participated. As James Surowiecki argued in “The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations," large groups tend to accurately answer questions, even if most of the individuals in the group aren’t very rational or well-informed.
In this case, Facebook should add buttons that appear prominently below any purported news story posted on its site, asking members of the public to weigh in on whether the article is true or false. Of course, some people would report news as fake simply because they disagree with it, while others might be genuinely duped by false reports. But Facebook has reportedly already assigned its users internal reputation scores, which would help the company discount flags from bad-faith or gullible users. And the number of flags on a truly false story could be expected to rise above the typical number of complaints that merely polarizing posts engender. Facebook staff would then monitor and investigate in real time any posts being disproportionately flagged.
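The mechanism described above can be sketched in a few lines of code. This is a hypothetical illustration, not Facebook’s actual system: the function names, the reputation weights, and the threshold multiplier are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of reputation-weighted crowdsourced flagging.
# Assumptions (not from any real system): each user has a reputation
# weight between 0.0 and 1.0, and a post is escalated for human review
# when its weighted flag score exceeds a multiple of the typical
# complaint level for merely polarizing posts.

def weighted_flag_score(flaggers, reputation):
    """Sum the reputation weights of everyone who flagged the post.

    Users without a known reputation get a neutral weight of 0.5,
    so flags from low-reputation accounts count for less.
    """
    return sum(reputation.get(user, 0.5) for user in flaggers)


def needs_review(flaggers, reputation, baseline, multiplier=3.0):
    """Escalate the post when its weighted score is well above the
    baseline complaint rate that ordinary polarizing posts attract."""
    return weighted_flag_score(flaggers, reputation) > multiplier * baseline
```

A story flagged by three trusted users would clear a modest baseline and be routed to staff, while a single flag from a low-reputation account would not. The point of the design is that no individual flag is decisive; only the aggregate, weighted signal triggers human investigation.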
Currently, Facebook users have the option of clicking on the top right of posts and giving feedback. They can point out a range of problems, including nudity, false news, hate speech and incorrect voting information. A huge crowdsourcing project would work differently, because Facebook would actively call on its community to help with the gargantuan effort and make it a more central part of its push to identify posts and accounts that should be taken down. The reporting feature would also need to be made more prominent on the platform; currently, users have to search through a dropdown menu to flag information, so many may not even be aware that the option exists. And users would be asked to mark whether stories are true or false, which would give Facebook much more data than merely offering the option of flagging false reports.
Independent fact-checking is also part of the European Union’s solution to fake news. In April, the European Commission created a network to fact-check as part of a new initiative to fight fake news. The commission also called on social networks to do more, noting that they had “failed to act proportionately, falling short of the challenge posed by disinformation and the manipulative use of platforms’ infrastructures." But neither this network nor platforms themselves are equipped to identify every fake report. There is simply too much content on social media and too many people with motives to deceive. That’s why a larger, concerted public effort is needed.
Two years after it became clear that fake news improperly influenced the 2016 U.S. presidential election, Facebook and other social networks are still being excoriated for not effectively combating the problem. While it’s true that Facebook has mismanaged its response, continuing to berate the company ignores a fundamental truth: Fake news has become too pervasive for any one organization to obliterate on its own. It’s time for the public to help.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
Kara Alaimo is an assistant professor of public relations at Hofstra University and author of “Pitch, Tweet, or Engage on the Street: How to Practice Global Public Relations and Strategic Communication." She previously served in the Obama administration.