The Disinformation Threat—and What to Do About It
Manipulation and deception have always been a part of politics. But misinformation and disinformation are flourishing in the digital age, with social media and new technologies like artificial intelligence making fake content easier to create and disseminate. In this interview, Talking Policy host Lindsay Morgan talks with IGCC expert Jacob Shapiro, a professor of politics and international affairs at Princeton University and co-director of IGCC’s new research initiative on disinformation, about what disinformation is, who’s doing it and why, and what can be done about it.
Subscribe to Talking Policy on Spotify, Apple Podcasts, SoundCloud, or wherever you get your podcasts.
Disinformation got much of the blame for Brexit and Trump’s election, and has been cited as an essential contributor to the January 6 insurrection in Washington, DC, and the conspiracy theory known as QAnon. What is disinformation?
One trait that people often associate with the phenomenon is the veracity—or not—of the factual claims being made. The second thing centers on whether the information is being conveyed in a manner that is honest about who you are. In my research group, the thing we worry about is not people saying things that aren’t true—we worry about people creating false impressions of who they are.
As a society, we put a lot of weight on giving people who have heterodox views the opportunity to have a voice. A big part of what is happening now is that people are realizing they can make money off heterodox views, and can gain political power off of them. And so they are spreading, to use Harry Frankfurt’s definition of the term: bullshit.
So disinformation is a combination of a lack of transparency about who the messenger is, and malicious intent, right? It’s not just a lie. It’s a lie with some kind of nefarious intention behind it?
The distinction between mis- and disinformation is really valuable. For a while, people understood misinformation as things that are factually wrong, and disinformation as things that are intentionally made up. So, misinformation would be me sharing incorrect information about the side effects of the COVID-19 vaccine, or incorrect information about how severe COVID is. Disinformation would be me intentionally promulgating a lie about those things.
You are a part of a research group led by Princeton University that has developed a way to track online foreign disinformation campaigns—specifically state-backed campaigns. What have you found?
Going back to 2013, there have been at least 80 different times when one country has tried to influence a political issue in another through covert action on social media. What’s fascinating about that is it’s almost all done by three countries—China, Russia, Iran—and they’re targeting a relatively small number of other countries, mainly western democracies.
That sounds really bad, but in one sense it isn’t. There are at least 30 countries that have employed this technology against their own citizens, and, with the exception of the ones I mentioned, none of them are projecting it outwards. In other words, the skills needed to engage in online disinformation are very widespread. The use is not.
Eighty campaigns is far fewer than I would have assumed, given how inexpensive and effective it is. Is it less effective than is generally assumed?
The people who argue that disinformation campaigns are effective fall into three categories: the marketers who sell them to politicians; the bureaucrats in dictatorships who stand to lose their fortunes, and potentially their lives, if they go to their boss and say, “Hey, you know that thing you had us do at great risk against America? It didn’t work”; and the people who work on this stuff for a living and are motivated to say that the social ill they study is really scary and we should be worried about it.
There are a lot of reasons to be cautious about how worried we are. One is that the volume of activity by foreign countries is quite low. If you look at American Twitter, the volume of content that has been credibly attributed to foreign actors is tiny—hundreds of thousands to millions of tweets in a sea of hundreds of millions a day. There’s a wonderful study that was in Science a few months back where they tracked the entire media consumption of a large sample of Americans. They weren’t looking at disinformation per se—they were looking at fake news channels, the common purveyors. Relative to people’s total media consumption, fake news was a fraction of 1 percent. So, if we believe that is really moving people, then they are the best damn political advertisers in the history of the world.
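To make that scale argument concrete, here is a minimal back-of-the-envelope sketch. The volumes below are assumed, purely illustrative numbers (not figures from the Science study or from Twitter itself); the point is only how the share calculation works out to a fraction of a percent.

```python
# Back-of-the-envelope illustration with assumed, illustrative numbers
# (not figures from the Science study or from Twitter itself).

foreign_attributed_tweets_per_day = 1_000_000   # assumed upper-end estimate of foreign-actor volume
total_tweets_per_day = 500_000_000              # assumed total daily tweet volume on the platform

# Share of daily activity that the foreign-attributed content would represent.
share = foreign_attributed_tweets_per_day / total_tweets_per_day
print(f"Foreign-attributed share of daily volume: {share:.3%}")  # prints 0.200%
```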
But don’t we know that disinformation can shape behavior—isn’t that something we should worry about?
It is very clear that exposure over multiple years to ideas purveyed through mass media can shape very consequential actions—things like fertility decisions, risk-taking during wartime, economic investments, where to live, and participation in genocide. That is very well documented.
If misinformation is shaping the larger media environment in consequential ways, then that might be a mechanism for real behavioral effects. Looking at the response to COVID-19, there’s definitely reason to be worried about that.
You wrote a piece last year for Political Violence At A Glance about the proliferation of misinformation about COVID-19, from its origins to people peddling false cures. What’s happening and what are some of the consequences?
There’s pretty good evidence that consumption of, and belief in, misinformation about COVID-19 is correlated with higher transmission rates, lower compliance with health measures, and lower vaccination rates. The more interesting, and I think more important, thing about COVID-related misinformation is that some unknown, but possibly massive, share of the medical misinformation is not about authentic belief and expression, but about grift and profit. It is people trying to drive traffic to websites, increase their follower counts, and get people to pay them for saying things that play into people’s interest in conspiracy theories.
How lucrative is it?
Estimates vary. Some of the best work has been done by NewsGuard and the Global Disinformation Index (GDI). NewsGuard estimates that ad revenue associated with fake news is somewhere around $185 to $200 million a year. GDI puts a similar value on ad sales by sites that promote misinformation.
That sounds really low, actually.
It does, depending on how many people you think are engaged in this. One way to think about what’s happened is that we have created this wonderful ecosystem on the Internet for monetizing all kinds of creative activity, and that’s given us some wonderful things, like a direct relationship between artists and their audiences. But it has also enabled people to monetize conspiracy theories and political and health misinformation. People have always found ways to make money off BS; the challenge today is that, because of the scalability of these online platforms, for-profit actors can ramp up the volume of their activity in ways that weren’t possible before.
Is there a way to rein in people who profit off misinformation?
We have very different standards in the United States and in many other countries for protecting political speech versus commercial speech. If misinformation around politics, health, and other issues is understood as political speech and authentic expression, then the ability to regulate it is roughly zero. If it is commercial expression, the ability to regulate it is massive, and the set of institutions that at least technically have the ability to do so is quite large.
A critical task for researchers is to understand how much of the misinformation that’s out there is authentic expression versus for-profit activity.
You direct a project called Empirical Studies of Conflict (ESOC), which is co-directed by IGCC’s Eli Berman, and you both, together with Molly Roberts, recently launched a project with IGCC on the political economy of disinformation. What will you be doing?
The purpose of the project is to better understand the economics behind misinformation. There’s a body of knowledge in economics and political economy, and to some extent in political science, that can be applied to understanding for-profit misinformation but isn’t currently being applied to it. People who work on misinformation mostly come from communications and computer science backgrounds. It struck us as a place where the kind of interdisciplinary work that both ESOC and IGCC do could have real value.
To what extent will you be collaborating with the major platforms like Twitter, Google, and Facebook?
We will certainly maintain ties with the people who are on the front lines dealing with the policy problem, which means people in the policy groups at Facebook and Twitter and Google. With disinformation, a large portion of the public policy problem is being managed by for-profit companies instead of by government. They’re thinking about policy, but unlike the U.S. government or the World Bank, they lack mature policy bureaucracies.
How concerned are people from Google and other companies about this problem?
There are absolutely people in these organizations whose mission is to maximize revenue. There have been moments in the last couple of years where people have looked at it and said: This is an existential threat to our business model. If we don’t get our heads around this problem and wrestle it under control, we’re going to be regulated out of business.
Then there are other people in these organizations who have a rational, well-argued position for why this is a really small problem in the scope of the things that they do, and commands way too much senior leadership attention. And there are people who view it as a fundamental threat to democracy and want to try and get the companies to do things to address it.
The way to think about the companies is as complex organizations—what are the coalitions in them that are pushing the organization in the direction of goodness, and how can we help those coalitions?
What kinds of approaches are being used right now to identify and reduce peoples’ exposure to online disinformation?
There are strong arguments for things like getting rid of CDA 230 protections so that the platforms are liable for the content they host; or for breaking up the platforms. But I’m not sure those arguments are right.
Right now, the policy bureaucracies, such as they exist inside the companies, are operating on a very thin evidentiary base. They have very little ability to analyze anything beyond their own platform, and so they’re not making decisions in the way that mature organizations that govern large commons should. We need to figure out ways to get them, and the government regulators who would like to regulate them, more reliable information on what’s going on in the information environment and what the likely impact of different kinds of policies would be.
The barriers to doing that are largely institutional. We don’t have the right organizations to handle the sharing of data between platforms in ways that protect anonymity and enable peer review and high-reliability science, but that don’t go all the way to becoming academic research. And we don’t have people who are incentivized to do the boring but necessary work of monitoring the space.
We need to find a way to support institutions that solve those two problems.
The music featured in the IGCC podcast is courtesy of Gato Loco de Bajo.