A Disinformation Research Agenda

June 03, 2021
IGCC

Despite growing concern about the threat misinformation and disinformation pose to democracy and public health, research on digital mis/disinformation is still in its infancy. On April 7, 2021, IGCC, together with partners at the Empirical Studies of Conflict Project, hosted a symposium to better understand the political economy of mis/disinformation, and to formulate a research agenda. Part of an IGCC initiative on Mis/Disinformation, the symposium brought together academics from universities and think tanks and experts from leading digital and social media platforms, including Google, Facebook, and Twitter.

One of the key themes of the symposium was the incentives of online actors, and their strategies both for promoting and limiting political disinformation. A panel discussion led by Alicia Wanless of the Carnegie Endowment for International Peace showed that the business models of platforms like Facebook, Google, and Twitter depend on selling targeted consumer attention to advertisers. But instead of simply flooding users with ads for shoes, these platforms often end up facilitating and amplifying mis/disinformation, mixed in with legitimate commercial content. Some of that mis/disinformation is directed by actors with strategic political or ideological goals in mind.

Defining mis/disinformation

Although there is no single accepted definition, misinformation can be thought of as false information that is spread, regardless of whether there is intent to mislead. Disinformation, on the other hand, refers to the spread of deliberately misleading information.

Damon McCoy of New York University described a market for fake social media accounts—or sockpuppets—at prices of $75-$900 per thousand. Sockpuppets can manipulate platform algorithms to direct traffic to fake political news ads, which are sometimes allowed by platform policies. Platforms are incentivized to reduce spam from fake accounts, but only if it diverts attention from paid ads.

Emotionally charged narratives are especially likely to gain attention on social media and are often favored by attention-maximizing algorithms. Danny Rogers of the Global Disinformation Index (GDI) reported on how this enables actors—like Cyrus Massoumi, who specializes in outraging liberals—to profit from disinformation. GDI estimates that disinformation generated $235 million in ad revenue in 2019. Separately, Rogers noted that 73 U.S. hate groups raise funds online, mostly in violation of existing platform policies.
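As a stylized illustration of that dynamic, the sketch below ranks posts purely by predicted engagement, so emotionally charged items rise to the top. The post fields, weights, and the outrage score are invented for the example; they do not reflect any platform's actual ranking model.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # hypothetical engagement predictions
    predicted_shares: float
    outrage_score: float      # 0-1, e.g. from an emotion classifier (assumed)

def engagement_rank(posts: list[Post]) -> list[Post]:
    """Rank posts by predicted engagement alone.

    Because outrage correlates with sharing, a purely attention-maximizing
    objective implicitly rewards emotionally charged content.
    """
    def score(p: Post) -> float:
        return p.predicted_clicks + 2.0 * p.predicted_shares * (1.0 + p.outrage_score)
    return sorted(posts, key=score, reverse=True)
```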

Platforms have a number of ways to limit mis/disinformation: fact-checking, reducing the rank of noncredible information, banning political misinformation in paid ads, limiting user-targeting options for political ads, banning all political ads, requiring transparency in the funding of political ads, or using the extreme measure of deplatforming accounts. These policies, however, are inconsistent across platforms, nontransparent, and can be gamed in various ways, such as through the “news source” exemption from fact-checking. Importantly, each platform has a strong disincentive to unilaterally limit mis/disinformation if the only effect is to divert traffic (and ad revenue) to a competing platform.
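To make one of these levers concrete, here is a minimal sketch of downranking, assuming a 0-1 credibility score is available for each source (for example, from fact-checking partners): posts from sources below a threshold are demoted rather than removed. The scores, threshold, and penalty are hypothetical, not any platform's actual policy.

```python
def downrank_noncredible(scored_posts, source_credibility, threshold=0.5, penalty=0.2):
    """Reduce the weight of posts from low-credibility sources before ranking.

    scored_posts: list of (post_dict, engagement_score) pairs, where each
    post_dict has a "source" key. source_credibility maps a source name to
    a 0-1 score. All parameter values here are illustrative assumptions.
    """
    reweighted = []
    for post, engagement in scored_posts:
        credibility = source_credibility.get(post["source"], threshold)
        weight = penalty if credibility < threshold else 1.0
        reweighted.append((post, engagement * weight))
    return sorted(reweighted, key=lambda pair: pair[1], reverse=True)
```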

In the Philippines, users have much less recourse. Jonathan Ong of the University of Massachusetts Amherst described online misinformation in Philippine elections. Disinformation is generated and disseminated by both state and commercial actors, including “public relations” firms hired by politicians to peddle false or misleading narratives online. Civil society actors lack the power to regulate platforms and have only weak tools at their disposal, such as celebrating firms, journalists, and influencers who behave responsibly while blacklisting those who do not.

Invoking nationalism amplifies misinformation, at least in China. Kaiping Chen of the University of Wisconsin discussed how nationalism is used to facilitate misinformation on social media about health and science. She finds that not all aspects of nationalism lead users to endorse misperceptions such as COVID-19 conspiracies, but rhetoric that denigrates out-groups does. Algorithms that downrank nationalist appeals, particularly those based on blaming out-groups, would reduce the amplification of conspiracy theories.

The role of research is to help platforms and policymakers develop a coordinated, strategic response. Platforms can also draw on the expertise of nongovernmental organizations (NGOs) that counter misinformation such as First Draft, and on industry-academic collaborations such as Social Science One, a partnership between Facebook and Harvard’s Institute for Quantitative Social Science. Platform responses such as redirecting hate and violence-related search terms towards resources, education, and outreach groups that can help (e.g., the ISIS-focused tool The Redirect Method), are also beginning to be evaluated by academic-practitioner partnerships.
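The sketch below is not the actual Redirect Method; it only illustrates the general idea of intercepting flagged search terms and pointing users toward counter-messaging resources. The keyword list and URLs are placeholders.

```python
# Illustrative only: keyword-triggered redirection, loosely modeled on the idea
# behind tools like The Redirect Method. Terms and URLs are placeholders.
REDIRECT_RESOURCES = {
    "join isis": "https://example.org/exit-programs",
    "white genocide": "https://example.org/counter-speech",
}

def maybe_redirect(query: str) -> str | None:
    """Return a counter-messaging resource if the query contains a flagged term."""
    normalized = query.lower().strip()
    for term, resource in REDIRECT_RESOURCES.items():
        if term in normalized:
            return resource
    return None

# Example: maybe_redirect("how to join ISIS") returns the exit-programs URL.
```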

Topics for future research include better understanding: variation across jurisdictions in rules and norms; the role of, and how best to work with, civil society actors in states that limit expression on platforms; the impacts of restricting payment channels; mis/disinformation in the Global South; and the “swag”-ification of disinformation. Cross-platform research opportunities include exploring spillover effects, integrating information on payment methods and off-platform behavior, and creating a consistent definition of mis/disinformation. Also valuable for research would be archiving data on removed content; data sharing on the targeting of ads; using a Census data center model to protect privacy; access to aggregated data for context; a data inventory so researchers know what’s possible; and transparency about keywords that trigger content removal.

Serious obstacles to research on digital mis/disinformation remain. Platforms often change their policies with lead times too short to allow evaluation; algorithms for detecting disinformation may violate platform terms of service; enforcing rules costs platforms attention and profit; and transparent rules are easier to game. But if industry-research-NGO collaboration can be ramped up, a rich agenda of policy-relevant research topics is waiting to be populated with projects.

Learn more about IGCC’s Disinformation Initiative.

Listen to the Talking Policy episode about The Disinformation Threat—and What to do About It.