Countering Online Foreign Influence in 2020 Elections
Social media has proved an essential tool for catalyzing political activism and social change around the world. Yet the very features that make it so useful to those with good intentions (scalability, mobility, and low barriers to entry) also make it prone to manipulation by malign actors who use it to spread disinformation and divisive rhetoric. These bad actors, who range from fringe groups to well-funded, highly staffed government institutions, all seek to sway public opinion. With the US presidential election approaching, voters and policymakers are rightly concerned with what should be done to mitigate the flurry of fake news originating beyond US borders.
Foreign manipulation of elections is, of course, far from new; even the Founding Fathers warned against its dangers. And history shows that the United States has engaged in the practice too, with well-documented examples of the US using military and economic pressure to sway elections abroad. But the viral dynamics of social media expand the arsenal available to states that wish to interfere in the democratic process, making it cheaper and easier for them to shape one another's internal politics.
It has been clear for years that foreign influence efforts (FIEs) are infecting social media in democracies around the world as outside powers seek to shape results and thwart the will of the people. But with few exceptions, little systematic evidence on the phenomenon existed; most work consisted of deeply researched descriptions of specific campaigns or the approaches of specific actors. That collection of highly specific diagnoses made it hard to judge the tradeoffs among different policy options. While a wealth of anecdotal evidence pointed the finger at Russia as the leader in such campaigns, no one had pulled together the growing body of evidence on who was conducting the campaigns, how they were carried out, and which countries were being targeted.
Last summer, the Empirical Studies of Conflict Project released our first Trends in Online Foreign Influence report, which methodically recorded a wide range of information about all known FIEs. We define an FIE as an effort coordinated by one state to affect a specific aspect of politics in another state, using social media, and including content intended to appear as though it was produced in the target country. This last condition sets FIEs apart from traditional propaganda and from the broader category of disinformation. Our findings were published in a recent piece in the Journal of Information Warfare. Though others have done remarkable work reporting on efforts to influence elections around the world, we believe this paper is the first to codify and define a wide range of variables for each known FIE: the platforms used, the approach taken, the methods employed, and so on.
The analysis reveals common threads across FIEs, along with a few surprises. Attacks on the US constituted only 36 percent of all identified efforts, demonstrating that the problem is more widespread than previously thought. We found 53 distinct efforts across 24 countries in the last five years, many of which are ongoing. In terms of perpetrators, Russia leads the pack, conducting almost three-quarters of the campaigns, followed by Iran, China, and Saudi Arabia. Other countries, such as Mexico and Venezuela, have used similar tactics within their own borders, and we are currently updating our work to include such domestic influence efforts. As for tactics, 65 percent of FIEs used defamation of a public figure, while polarization, defined as inflaming both sides of a pre-existing political issue, appeared in about 15 percent of campaigns. Not surprisingly, most FIEs relied on trolls and bots.
Though Russia is the primary producer of FIEs, it is only a matter of time until other countries get in on the action. The required skillset, essentially online marketing, is widespread, and the consequences for states conducting FIEs are negligible. It is not surprising, then, that among the active FIEs identified in 2019, at least 19 were new.
Given the severe consequences that FIEs pose, the international community, including the social media platforms, should work to develop codes of conduct to govern influence campaigns. The current state of cybersecurity policy is characterized by domestic and regional actors proposing and drafting guidelines and best practices, which suggests that the space and opportunities for international collaboration exist. But support for international norms currently falls short of the critical "tipping point" at which countries are expected to act even in the face of countervailing domestic pressures. Building consensus around practical solutions will require a seat at the table for the social media giants, Facebook and Twitter among them, as well as for the G7. Currently, there is no such table: no forum with a consistent agenda and the right guest list to reach the critical mass of idea exchange needed for consensus.
We should not expect this to be a quick process. To put the challenge in perspective, the Anti-Personnel Mine Ban Convention took almost seven years to come into force after a group of six NGOs came together in late 1992 to form the International Campaign to Ban Landmines. And while the treaty isn't perfect, and not everyone is on board, it is considered a key success in the creation of international standards, one with quantifiable results.
While there are promising initiatives within civil society, such as Carnegie's recently announced Partnership for Countering Influence Operations, and governmental organizations are working to catalyze action, much more needs to be done. Russian influence operations sullied several European elections in 2018 and 2019, and signs already point to more of the same for the 2020 presidential election in the United States. At what point does the international community say, enough?
This blog post was originally published on Political Violence at a Glance. It has been reprinted with permission.