The Right Way to Fight Fake News

Social media companies have been under tremendous pressure since the 2016 presidential election to do something — anything — about the proliferation of misinformation on their platforms.

Companies like Facebook and YouTube have responded by applying anti-fake-news strategies that seem as if they would be effective. As a public-relations move, this is smart: The companies demonstrate that they are willing to take action, and the policies sound reasonable to the public.

But just because a strategy sounds reasonable doesn’t mean it works. Although the platforms are making some progress in their fight against misinformation, recent research by us and other scholars suggests that many of their tactics may be ineffective — and can even make matters worse, leading to confusion, not clarity, about the truth. Social media companies need to empirically investigate whether the concerns raised in these experiments are relevant to how their users are processing information on their platforms.

One strategy that platforms have used is to provide more information about the sources of news content. YouTube has “information panels” that tell users when content was produced by government-funded organizations, and Facebook has a “context” option that provides background information for the sources of articles in its News Feed. This sort of tactic makes intuitive sense because well-established mainstream news sources, though far from perfect, have higher editing and reporting standards than, say, obscure websites that produce fabricated content with no author attribution.

But recent research of ours raises questions about the effectiveness of this approach. We conducted a series of experiments with nearly 7,000 Americans and found that emphasizing sources had virtually no impact on whether people believed news headlines or considered sharing them.

People in these experiments were shown a series of headlines that had circulated widely on social media — some of which came from mainstream outlets such as NPR and some from disreputable fringe outlets like the now-defunct newsbreakshere.com. Some participants were provided no information about the publishers, others were shown the domain of the publisher’s website, and still others were shown a large banner with the publisher’s logo. Perhaps surprisingly, providing the additional information did not make people much less likely to believe misinformation.

Subsequent experiments showed why. Most viral headlines from distrusted publishers were obviously false even without knowing the source (for example, “WikiLeaks confirms Hillary Sold Weapons to ISIS”). Adding publisher information typically conveyed little beyond what readers could already infer from the headline itself.

Consider another anti-misinformation tactic used by social media platforms: enlisting professional fact checkers to identify false content. An early Facebook strategy for combating fake news, for example, was to flag false headlines with a “disputed by third-party fact checkers” warning, and a recently leaked memo suggests that Twitter is considering a similar approach.

Unfortunately, this is also an example of an intuitive approach that research suggests may not work as expected. We and our colleagues conducted experiments finding that, though people were less likely to believe and share headlines that had been labeled false — common sense was right about that — people also sometimes interpreted the absence of a warning label to mean that a false headline had been verified by fact checkers. This is a problem because only a small percentage of false headlines ever get checked and marked: Fact-checking is a painstaking, time-consuming process, whereas troll farms and internet bots can create and distribute misinformation with alarming speed.

In other words, a system of sparsely supplied warnings could be less helpful than a system of no warnings, since the former can seem to imply that anything without a warning is true.

Another seemingly promising strategy is to provide general warnings about the existence of fake news and to offer tips about spotting misinformation. In 2017, Facebook began a public-relations campaign along those lines that included billboards and subway signage informing people that “Fake news is not your friend.” Here, too, research suggests that such tactics can be counterproductive, since they often reduce confidence in all news, regardless of veracity (which, as it happens, is the goal of many disinformation campaigns).

Of course, sometimes ideas that make intuitive sense do work. Getting people to slow down and engage in more critical thinking, for example, has been shown to reduce belief in fake news and to reduce the sharing of it.

Likewise, sometimes ideas that seem terrible turn out to be effective. In 2018, Facebook proposed surveying its users about how much they trusted various news sources and then using that information to selectively promote content from sources rated as trustworthy. That proposal was greeted with widespread condemnation and ridicule, but empirical tests that we conducted indicated that this crowdsourcing approach was highly effective at identifying sources of misinformation.

The obvious conclusion to draw from all this evidence is that social media platforms should rigorously test their ideas for combating fake news and not just rely on common sense or intuition about what will work. We realize that a more scientific and evidence-based approach takes time. But if these companies show that they are seriously committed to that research — being transparent about any evaluations that they conduct internally and collaborating more with outside independent researchers who will publish publicly accessible reports — the public, for its part, should be prepared to be patient and not demand instant results.

Proper oversight of these companies requires not just a timely response but also an effective one.

Gordon Pennycook is an assistant professor at the Hill and Levene Schools of Business at the University of Regina, in Saskatchewan. David Rand is a professor at the Sloan School of Management and in the department of brain and cognitive sciences at M.I.T.
