We have some recommendations …
Investigation by Hawkfish Research. Writing by Hawkfish Creative. Thanks to Smith & Diction for the graphics.
Online disinformation can feel like an intractable problem. False narratives spread far and wide, and are often amplified by political leaders and mass media so that it seems impossible to combat distortions and lies. Once a conspiracy theory becomes pervasive in public discourse, it’s unclear if media literacy or extensive fact-checking will make a difference, even when disinformation poses a threat to public health, social cohesion, and ultimately the stability of democracy.
This is a hard problem, and no one has all the answers, but our recent investigation highlights a few aspects of online disinformation that offer clues about what we can concretely do.
1. Early Detection
There is a pattern to disinformation distribution; it is not a random process. A media ecosystem of content creators, remixers, and amplifiers collectively forms a “supply chain” for the creation and normalization of disinformation. As we saw in the hydroxychloroquine case, this supply chain can operate rapidly, converting fringe ideas into “news” that spreads to and through established mainstream media in a matter of weeks.
This repeating pattern is useful to understand: it means the process of spread is somewhat predictable. Technologists should be building and improving systems that give the public and policymakers advance warning of disinformation that is headed their way. Like preparations for an oncoming tropical storm, these tools could be used to build better resistance to new disinformation as it emerges from the fringe; a minimal sketch of such an early-warning signal follows.
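To make the early-warning idea concrete, here is a minimal sketch in Python. It assumes we already have a daily mention count for a tracked narrative across a fixed set of fringe sources; the class name, thresholds, and sample counts are all hypothetical, and a real system would draw on far richer signals.

```python
from collections import deque

class NarrativeAlarm:
    """Flags a narrative whose fringe-source mention volume is accelerating.

    A crude early-warning heuristic: compare the most recent window of
    daily mention counts against the prior window and alert when the
    growth ratio crosses a threshold.
    """

    def __init__(self, window_days: int = 7, growth_threshold: float = 3.0):
        self.window_days = window_days
        self.growth_threshold = growth_threshold
        # Keep two windows' worth of daily counts.
        self.daily_counts = deque(maxlen=2 * window_days)

    def record_day(self, mention_count: int) -> bool:
        """Record one day's mention count; return True if an alert fires."""
        self.daily_counts.append(mention_count)
        if len(self.daily_counts) < 2 * self.window_days:
            return False  # Not enough history yet.
        counts = list(self.daily_counts)
        prior = sum(counts[: self.window_days])
        recent = sum(counts[self.window_days :])
        # Treat an empty prior window as 1 so a brand-new narrative
        # with any real volume still trips the alarm.
        return recent >= self.growth_threshold * max(prior, 1)


# Hypothetical usage: daily counts of a narrative's mentions scraped
# from a fixed set of fringe forums.
alarm = NarrativeAlarm(window_days=7, growth_threshold=3.0)
for count in [2, 1, 3, 2, 2, 4, 3, 5, 9, 14, 22, 31, 48, 70]:
    if alarm.record_day(count):
        print("advance warning: narrative is breaking out of the fringe")
```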
2. Community Resilience
Mainstream media is an accelerant. Once disinformation reaches the traditional media mainstream, it is almost impossible to contain. Our investigation shows that disinformation can be traced from its earliest mentions as it spreads from one online community to another (a minimal version of that tracing is sketched below). But as soon as a false narrative hits mainstream media, tracing breaks down. Reporting about hydroxychloroquine on national TV from a White House press conference is a “superspreader” event that distributes disinformation so widely that it can no longer be followed or contained. Even when credible outlets describe disinformation within their stories, important qualifiers can get lost in click-friendly headlines and brief TV news summaries.
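We don't detail our tracing pipeline here, but the core idea can be illustrated with a short sketch: given a corpus of timestamped posts tagged with their source community, ordering each community's first matching mention approximates the narrative's spread path. The data, community names, and simple substring matching below are stand-ins for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    community: str       # e.g. a forum, subreddit, or Facebook group
    timestamp: datetime  # when the post appeared
    text: str

def spread_path(posts: list[Post], phrase: str) -> list[tuple[str, datetime]]:
    """Order communities by their earliest post matching a narrative phrase.

    The resulting sequence approximates the path a narrative took from
    fringe communities toward larger ones. Once a story hits broadcast
    media this signal disappears, which is why tracing must happen early.
    """
    first_seen: dict[str, datetime] = {}
    for post in posts:
        if phrase.lower() in post.text.lower():
            seen = first_seen.get(post.community)
            if seen is None or post.timestamp < seen:
                first_seen[post.community] = post.timestamp
    return sorted(first_seen.items(), key=lambda item: item[1])

# Hypothetical data: the phrase surfaces on a fringe board days before
# it appears in a partisan outlet.
posts = [
    Post("fringe-board", datetime(2020, 3, 11), "miracle cure: hydroxychloroquine"),
    Post("partisan-outlet", datetime(2020, 3, 16), "doctors tout hydroxychloroquine"),
    Post("fringe-board", datetime(2020, 3, 12), "more on the hydroxychloroquine cure"),
]
for community, when in spread_path(posts, "hydroxychloroquine"):
    print(community, when.date())
```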
This phenomenon makes it critical to act early. Monitoring the supply chain of disinformation is one task; we also need more tools for targeted counter-messaging when we see false narratives trending (one way to choose where to intervene is sketched below). The research on how to push back against disinformation is voluminous, and increasingly we can apply it to break the path of disinformation from the hardcore fringe into mainstream public conversation.
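As a hedged illustration of what “acting early” might look like in code, the sketch below picks intervention points: communities where a narrative is trending but that sit upstream of the mainstream outlets past which containment fails. The velocity numbers, community names, and threshold are invented for the example.

```python
def pick_intervention_points(
    velocity: dict[str, float],   # mentions per day, per community
    mainstream: set[str],         # outlets past which containment fails
    trending_threshold: float = 10.0,
) -> list[str]:
    """Choose where to aim counter-messaging: communities where the
    narrative is trending but that sit upstream of mainstream media.

    Communities already in the mainstream set are excluded, since by
    that point tracing and containment have broken down.
    """
    return sorted(
        community
        for community, mentions_per_day in velocity.items()
        if mentions_per_day >= trending_threshold and community not in mainstream
    )

# Hypothetical numbers: the narrative is surging on two fringe boards
# and just starting to appear in a mid-size partisan outlet.
velocity = {"fringe-board": 42.0, "meme-group": 17.5,
            "partisan-outlet": 11.0, "national-tv": 0.0}
print(pick_intervention_points(velocity, mainstream={"national-tv"}))
# -> ['fringe-board', 'meme-group', 'partisan-outlet']
```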
3. Social Network Accountability
Online platforms play a major role in accelerating the distribution of disinformation. Platforms like Facebook, Twitter, and YouTube have accelerated the spread of good and bad information alike. This is not simply because they connect people online; it is because their content recommendation algorithms are designed to show you more of what’s popular as well as more of what you’ve liked. On the basis of what’s trending, those same algorithms can present stories to vast audiences who would otherwise never have been exposed to the disinformation (the toy ranking score below shows how). Our investigation also shows the degree to which platforms have incentivized disinformation amplifiers, which may well be fringe players or small outlets, through their ad options. This lets fringe players monetize eyeball-catching content through ads just as legitimate publishers do.
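Platforms do not publish their ranking functions, so the following is a deliberately simplified, hypothetical score with made-up weights. It only illustrates the structural point above: popularity terms that do not depend on a user’s history can push a trending falsehood above content the user actually cares about.

```python
def recommendation_score(
    engagement_rate: float,  # how often viewers interact with the item
    trending_boost: float,   # platform-wide popularity right now
    affinity: float,         # similarity to what this user has liked
    w_pop: float = 0.6,
    w_affinity: float = 0.4,
) -> float:
    """Toy ranking score: popularity plus personal affinity.

    Because the popularity terms are independent of the user's history,
    a story that is trending (true or false) can outrank everything in
    the feed of a user who has never engaged with the topic.
    """
    return w_pop * (engagement_rate * trending_boost) + w_affinity * affinity

# A viral false story with zero personal affinity...
viral_falsehood = recommendation_score(engagement_rate=0.9, trending_boost=1.0, affinity=0.0)
# ...can outrank a niche post the user actually cares about.
niche_interest = recommendation_score(engagement_rate=0.2, trending_boost=0.1, affinity=1.0)
print(viral_falsehood > niche_interest)  # True: 0.54 vs 0.412
```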
Platforms need to take more aggressive action in identifying, labeling, and/or removing the many types of disinformation users are apt to see. We’re glad that Twitter is becoming more active in adding warning labels and even suspending key disinformation spreaders. YouTube has pledged to remove “hacked information” or videos that serve to obstruct voting processes. Facebook’s recent announcement that it will stop running new political ads in the week before Election Day may be well-intentioned, but the company has long avoided fact-checking or labeling ads that may contain disinformation.
We believe that these platforms and services, among others, should extend their vigilance to the weeks after November 3, when disinformation will pose a particularly dangerous threat to our democracy. Regardless of who wins, bad actors will continue to play on fears that speak to current political and social polarization in the U.S. Even worse, the days and weeks after Election Day may be a period of national uncertainty as an unprecedented number of mail-in ballots are slowly counted and the outcome remains a toss-up in key battleground states. At least some of these bad actors aspire to strengthen the disinformation ecosystem further, keeping us uncertain about virtually everything we see and hear. The post-election period will be an ideal time for them to strike.
Hawkfish was founded on the idea that data-driven tools can improve our politics. Our initial round of investigations offers a blueprint for action, and we will continue this work to develop usable tools and analytics for the policymakers, journalists, and political campaigns who battle disinformation every day.
In this series, we also discuss Trump’s disinformation machine and the COVID-19 infodemic.