COMBATING MISINFORMATION RUNS DEEPER THAN SWATTING AWAY ‘FAKE NEWS’
BY: JENNIFER ALLEN & DAVID RAND
ORIGINAL SITE: SCIENTIFIC AMERICAN
“Fake news”-style misinformation is only a fraction of what deceives voters. Fighting misinformation will require holding political elites and mainstream media accountable.
Americans are increasingly concerned about online misinformation, especially in light of recent news that the Justice Department seized 32 domains linked to a Russian influence operation interfering in U.S. politics, including the 2024 presidential election. Policy makers, pundits and the public widely accept that social media users are awash in “fake news,” and that these false claims shape everything from voting to vaccinations.
In striking contrast, however, the academic research community is embroiled in a vigorous debate about the extent of the misinformation problem. A recent commentary in Nature argues, for example, that online misinformation is an even “bigger threat to democracy” than people think. Meanwhile, another paper published in the same issue synthesized evidence that misinformation exposure is “low” and “concentrated among a narrow fringe” of users. Others have gone further and claimed that concerns around misinformation constitute a moral panic or are even themselves misinformation.
So should everyone stop worrying about the spread of misleading information? Clearly not. Most researchers agree that a major problem does indeed exist; the disagreement is simply over what exactly that problem is, and therefore what to do about it.
The debate largely hinges on definitions. Many researchers, and much of the news coverage of the issue, operationalize “misinformation” as outright false news articles published by disreputable outlets with headlines like “Pope Endorses Donald Trump.” Despite a deluge of research examining why people believe and share such content, study after study shows that this kind of “fake news” is rare on social media and concentrated within a small minority of extreme users. And despite claims of fake news or Russian disinformation “swinging” the election, studies show little causal connection between exposure to this kind of content and political behavior or attitudes.
Yet evidence of public misperception abounds. A violent mob stormed the Capitol, claiming that the 2020 election was stolen. One in five Americans refused to take a COVID vaccine. If one defines misinformation as anything that leads people to be misinformed, then widespread endorsement of misconceptions suggests that misinformation is common and impactful.
How do we reconcile all of this? The key is that narrowly defined “fake news”-style misinformation is only a very small part of what causes misbelief. For example, in a recent paper published in Science, we found that misleading coverage of rare deaths following vaccination—much of it from reputable outlets including the Chicago Tribune—was nearly 50-fold more impactful on U.S. COVID vaccine hesitancy than content flagged as false by fact-checkers. And Donald Trump’s repeated claims of election interference found large audiences on both social and traditional media. With a broader definition that includes misleading headlines from mainstream outlets ranging from the dubious New York Post to the respectable Washington Post, and direct statements from political elites like Trump and Robert F. Kennedy, Jr., misinformation becomes much more prevalent and impactful—and much thornier to address.
Existing solutions focusing on falsehoods from fringe outlets will not suffice. After all, debunking every fake news link on Facebook wouldn’t have prevented Trump’s uninterrupted lying in televised debates with audiences of tens of millions of Americans. Expanding the definition of misinformation will necessitate policy shifts not just from social media companies but from academics and the media as well.
First, academics must look beyond narrow sets of previously debunked claims and study the roots of public misbelief more broadly. This presents a challenge: studying obviously false claims avoids critiques from reviewers but misses the lion’s share of the problem, whereas studying misleading but not necessarily false content with potential for widespread harm is much more susceptible to charges of bias. The risks are real, as exemplified by the effective shutdown of the Stanford Internet Observatory and by attacks on University of Washington researchers, both a consequence of conservatives crying “censorship!” Yet the reality is that there will almost never be universal agreement about what is and is not misinformation. Universities and policy makers must protect academic freedom to study controversial topics, and academics should develop approaches for formalizing what content counts as misleading—for example, by experimentally determining effects on relevant beliefs.
Second, while news outlets have spilled a great deal of ink reporting on “fake news,” little has been done to reflect on their own role in promoting misbelief. Journalists must internalize the fact that their own reach is far greater than that of the hoax outlets they frequently criticize—and thus their responsibility is much larger. Unintentional missteps from mainstream media—like misleading reporting about a Gaza hospital explosion and about weapons of mass destruction in Iraq—have vastly more impact than a torrent of largely unseen falsehoods from “fake news” outlets. Even though the pressure to chase clicks and ratings is intense, journalists must remain vigilant against misleading headlines and against reporting politicians’ lies without context.
Finally, social media companies such as Meta, YouTube and TikTok must do more. Their current approaches to combating misinformation, based on professional fact-checking, largely turn a blind eye to misinforming content that doesn’t fit the “fake news” mold—and thus miss most of the problem. Platforms often exempt politicians from fact-checking and deprioritize fact-checks on posts from mainstream sources. But this content is precisely what has huge reach and therefore the greatest potential for harm—and thus is more important to tackle than relatively low-exposure “fake news.” Interventions must shift to reflect this reality. For example, common media literacy approaches that combat misinformation by emphasizing source credibility may backfire when misleading content comes from trusted sources.
Platforms can also respond to misleading content that does not violate official policies using community-based moderation that adds context to misleading posts (like X’s Community Notes and YouTube’s new crowdsourced note program). Larger platform changes, such as ranking content based on quality rather than engagement, might strike at the root of the problem rather than being a Band-Aid fix.
Combating misbelief is much more complicated—and politically and ethically fraught—than reducing the spread of explicitly false content. But this challenge must be met if we want to solve the “misinformation” problem.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
Jennifer Allen is a postdoctoral fellow at the University of Pennsylvania. She will start as an assistant professor at New York University’s Stern School of Business in the technology, operations and statistics group in the fall of 2025.
David Rand is the Erwin H. Schell Professor and a professor of management science and brain and cognitive sciences at the Massachusetts Institute of Technology.