American Endorsement of Conspiracy Theories Is Smaller Than You Think
Concern is growing that disinformation, spreading at an unprecedented speed and scale, is causing grave damage in the U.S. and globally on everything from elections to vaccine uptake. But do as many people believe in conspiracies as is generally assumed? In the latest Talking Policy episode, host Lindsay Morgan talks with political scientists Molly Roberts and Seth Hill about why the way researchers measure public opinion may inflate the supposed prevalence of belief in conspiracy.
Headlines seem to suggest that disinformation and belief in conspiracies are big problems in the U.S. (see here, here, and here). In your new paper, you suggest that these conclusions might be overstated. Why?
(Molly) When Seth and I were working on this project, we noticed that most of those headlines are generated from surveys where researchers ask the American public something like: do you believe in this conspiracy? And the respondent answers yes or no. Or they might state a conspiracy and then ask if it is true or false. So, for example, True or False: Barack Obama was not born in the United States. Or, True or False: Donald Trump won the 2020 presidential election.
The problem with this setup is that in most surveys, answering “true” means agreeing with the conspiracy, whereas answering “false” means disagreeing with it. And a long-understood challenge in survey research is a phenomenon called acquiescence bias, where people who respond to surveys are more likely to answer true than false, to agree rather than disagree, no matter what the question is. This inflates the proportion of people who seem to believe in the conspiracy.
This doesn’t mean that disinformation and belief in conspiracies aren’t problems in the United States. Certainly, there are still quite a few people who believe in conspiracies. It just means that we’re not accurately measuring their prevalence.
Why are people more likely to answer in the affirmative than not?
(Molly) There are a lot of different theories about this. Some people think it’s just inattention to a survey. As you might imagine, it’s really hard to get people to respond to surveys. Right now, most are either answered over the phone or online. There’s not a lot of incentive for people to pay attention to the survey or spend a lot of time on it. So, people’s default response might be to click “agree.”
There’s also some evidence that if they feel uncomfortable with a particular question, “yes” or “agree” is their default answer. We don’t know exactly what the cause of acquiescence bias is, but it’s probably some combination of those things.
How did you attempt to overcome this problem in your research?
(Seth) The problem that Molly identified is that most of the questions we saw in surveys about conspiracies ask respondents whether they agree with a conspiracy. And there’s this general tendency to default to agreement. When people are uncertain, they’ll say they agree, or people will agree just to be agreeable to the person who’s on the other end of the phone.
The alternative we came up with is: what happens if we ask people if they agree with the opposite of the conspiracy, or if they disagree with the conspiracy? We asked some people a version of the question where agreeing meant endorsing the conspiracy, and others a version where agreeing meant rejecting the conspiracy, or saying it was false. An example is: do you agree that Barack Obama was born in the United States? In this case, agreeing is counter to the conspiracy. In contrast, we asked other people: do you agree that Barack Obama was not born in the United States? If acquiescence bias were absent, then we should get the same percentage of people, on average, endorsing the conspiracy in the two conditions.
We implemented this method across dozens of questions, which we collected from published research on political conspiracies and political facts, including one study on coronavirus. And for each question that had been fielded by other scholars, we constructed an alternate version of the question where it was the opposite of the original question. We fielded four surveys in the United States and three surveys in China.
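To illustrate the split-sample logic described above, here is a toy simulation (the numbers are invented for illustration and are not the authors’ data or their actual estimation method). A fixed acquiescence tendency inflates measured belief when “agree” means endorsing the conspiracy, and deflates it when “agree” means rejecting it, so the two keyings diverge:

```python
import random

random.seed(0)

# Hypothetical parameters (assumptions, not real survey results):
TRUE_BELIEF = 0.25    # share who genuinely endorse the conspiracy
ACQUIESCENCE = 0.10   # share who answer "agree" regardless of content
N = 10_000            # respondents per condition

def respond(agree_means_endorsing: bool) -> bool:
    """Return True if this respondent is measured as endorsing the conspiracy."""
    if random.random() < ACQUIESCENCE:
        # Acquiescers say "agree" no matter how the item is keyed,
        # so what they are counted as depends on the keying.
        return agree_means_endorsing
    return random.random() < TRUE_BELIEF

# Positively keyed: "agree" = endorse; negatively keyed: "agree" = reject.
pos_keyed = sum(respond(True) for _ in range(N)) / N
neg_keyed = sum(respond(False) for _ in range(N)) / N

print(f"positively keyed estimate: {pos_keyed:.2f}")
print(f"negatively keyed estimate: {neg_keyed:.2f}")
print(f"gap between keyings:       {pos_keyed - neg_keyed:.2f}")
```

In this toy setup the gap between the two keyings recovers the acquiescence rate in expectation, which is the intuition behind fielding both versions of each question.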
It sounds so simple: to just flip a question on its head and you get a different answer. What did you find?
(Molly) We found consistently, across all of these questions, that there were really large differences in our measure of belief between what we call the positively keyed version, where “agree” meant agreement with the conspiracy, and the negatively keyed version, where “agree” meant disagreement with the conspiracy. The actual magnitude of this bias varied a lot between questions, but estimated belief could be up to 50 percent larger under the positively keyed wording than under the negatively keyed wording. So, the wording of the question was very consequential in determining the proportion of people that we estimated to believe in the conspiracy. Findings were similar in both China and the U.S.
You found particularly large acquiescence bias on questions about democratic norms and the transition of power following the 2020 American presidential election. What do you make of that?
(Seth) Those questions were fielded in December of 2020, which was obviously a very salient time for those questions. My suspicion is that if we fielded them at a less salient time, it might not have been as large. On the other hand, it does suggest that some of the findings out there showing the extent of endorsement of violence to put the “correct” person in office might be overstated.
One of the things we found that we don’t fully understand is that ideologues exhibited more acquiescence bias than independents and the less political. This is surprising, because you might have thought that ideologues—a strong conservative or a strong liberal—would have the most consistent, strongly held beliefs. Whereas someone who doesn’t pay attention to politics might be more likely to say, I don’t know, I’ll just agree with this. We actually find the opposite.
You found not just that ideologues are more likely to have acquiescence bias, but that people who identify as very conservative exhibit larger acquiescence bias than those who identify as very liberal, or those with less ideological identification. What’s your hunch about what might be driving that?
(Seth) I think there are a couple of possibilities. There’s some research out there on what’s called “expressive responding,” which says that people don’t answer survey questions with the response they think is factually correct, but rather with the response that makes them feel good. If that’s the case, then it might be a reason to believe that ideologues would be more likely to agree with conspiracies that made them feel good.
Most of the questions we asked were about right-wing conspiracies. So, it could be the case that ideologues like to express their political views by agreeing with things that are kind of fun or make their side look good. That could be an explanation for why the very conservative seem to exhibit the most acquiescence bias, if in most of our questions agreeing with a conspiracy means supporting the conservative side of the issue.
It could also be that there is actual belief in these conspiracies, but when the negatively keyed version asks them to disagree with a non-conspiracy, it’s a little more confusing. Molly and I have talked about looking more carefully at the set of questions to see if the complexity of the negatively keyed version is somehow driving some of this.
Despite your results, the conventional wisdom remains that belief in conspiracy is a significant and widespread problem in the United States. What effect does this perception have on politics in the U.S., and on politicians’ behavior?
(Molly) It’s a really interesting question: what’s the impact of belief about beliefs on American politics? What happens when I have the impression that 50 percent of the American public believes in a conspiracy, when in fact, it’s closer to 25 percent? If I believe that conspiracy, then maybe that means I think that more people are on my side than actually are. If I don’t believe in that conspiracy, it might mean that I think that more people think differently than I do.
One of the things that is really important from a policy perspective is that if we want to target information (about the pandemic, for example, or about politics) to people who we think have the wrong information, we need to be able to target it really accurately. Our research shows that some people are more likely to display acquiescence bias, which can help policymakers better understand what types of people believe in conspiracies.
How do you think this perception about the magnitude of conspiracy belief affects the behavior of politicians in the U.S.?
(Seth) I’m not aware of any direct evidence on that question. But I think it’s really important and something we have to think carefully about. We tend to assume that politicians are very carefully attuned to what the public wants, and what the public thinks, and a lot of that information comes from polls like these. And so, to the extent that the political class believes that, say, 50 percent of Americans believe in a conspiracy rather than 20 percent, the consequences could be quite large.
I’m speculating here, but you could imagine that Democrats say: look, 50 percent of conservatives are totally unhinged, we have no reason to pay attention to them, there’s no point in talking to them. That could undermine political compromise and opportunities for building new coalitions. At the same time, there’s some conventional wisdom out there that many politicians are running scared from their primary election constituents, rather than their general election constituents. And to the extent that some of these survey artifacts lead them to misperceive what their primary electorates actually want, then the behaviors they exhibit might not reflect what the public actually wants.
You would hope that in the long run any serious errors in perceptions would lead to politicians who actually do perceive things accurately to be more likely to win elections. But that’s really a long-run thing. In the short run, to the extent the political class might be misled about some of these issues, I do think it could have serious consequences for compromise and rhetoric and political conflict in legislative bargaining.
Both of you have done significant work on political disinformation, political campaigns, voting, and elections. How are your views on these challenges evolving?
(Molly) My experience thinking about censorship and propaganda in China has really underscored to me how important truth and freedom of expression are for democracy. The main challenge of disinformation in a democracy is that those two things sort of come into conflict. So, we want to have a free and open environment. We want to have a free and open exchange of ideas. Even if some of those ideas are not great ideas, we want to make sure that people feel like they’re free to say what they think and share those ideas. At the same time, when those ideas undermine truth, that is damaging, particularly when our open information environment is weaponized by organized interests or, in some cases, foreign governments.
That’s one of the most interesting challenges: how do we maintain a free and open information environment and at the same time have an information environment where people can find good information and distinguish good versus bad information? That’s the question that I see myself tackling over the next few years.
(Seth) I’ve thought a lot about elections and electoral competition. One of the key theories of democracy is that electoral competition can generate representation and good government. And one of the mechanisms through which that happens is challenger candidates. Those who are not currently in office are supposed to be able to bring information to the voters about ways that the incumbents have failed to represent the voters.
Now that theory of representation and democracy obviously depends on voters’ willingness to listen to the challenger and respect that the information the challenger is bringing them is actually relevant. If a polity were to get to the point where most voters felt that they could not trust the information a challenging candidate was bringing to them, that would undermine that mechanism of democratic representation.
In my view, we’re not at that point in the United States or in other advanced democracies, but Molly’s work in China is totally consistent with this. She finds that when there’s threatening information, what China does is flood the world with lots of other information to distract its constituents. The basic idea is: if there’s so much information out there that individual citizens can’t distinguish truth from fiction, it breaks the link between a government’s actions and citizens’ evaluations. Again, I don’t think we’re there in the United States, but what we need to be mindful of is: are there enough voters who are open-minded and able to evaluate the information that’s brought to them by political entrepreneurs and political challengers? Is that electoral mechanism still strong enough to remove incumbents who aren’t serving the voters’ interests?
If there was one takeaway from research on conspiracies that you would like U.S. policymakers to know, what would it be? And what would you want ordinary Americans to know?
(Molly) I would want policymakers to think about the data they’re using about what the American public wants and look at those surveys and think about acquiescence bias through that lens. For ordinary Americans, one of the difficulties of communicating science and social science to the American public is that it’s really hard to communicate uncertainty about our estimates. We need to do better at communicating the challenges of measuring public opinion.
(Seth) For policymakers, I also would encourage remembering that measuring social phenomena is incredibly challenging, not just endorsement of conspiracies, but in general and particularly when asking people about things they don’t think about that often. It’s important to take it with a grain of salt and ideally engage in much more careful interrogation and measurement and looking at things from different angles.
That complements the suggestion I might have for the public, which is: let’s take a pause whenever we see a single news headline from a single poll. It could be an accurate reflection of what the public thinks or believes, but it also could be something that’s specific to today. If we asked the same question tomorrow, the answer could be different, or if we asked the question in a slightly different way, it could lead to a different conclusion. Don’t take any single piece of information as conclusive. Use your critical judgment and try to gather information from multiple sources.
The topics covered in the surveys did not differ widely between the U.S. and China, with one exception: in China the authors also asked about genetically modified organisms (GMOs), because of the prevalence of conspiracies about GMOs in China.
The music featured in the IGCC podcast is courtesy of Gato Loco de Bajo.