National Security Innovation Forum Explores Impact of AI on National Security
It has been described as “more profound than fire or electricity.” Experts predict it will add as much as $16 trillion to the global economy by 2030 and fundamentally change the workforce, transportation, medicine, energy, and more.
Artificial intelligence (AI)—which generally refers to the ability of machines to exhibit human-like intelligence, autonomy, and judgment—is a rapidly growing field of technology that is capturing the attention of the global community, from commercial investors to policymakers, entrepreneurs, and ministries of defense. As an inherently dual-use technology, it promises to provide benefits in both the civilian and military sectors. How might AI be used to strengthen national security, and are the United States and its allies well-positioned relative to competitors in developing and using AI?
On March 17, nearly 200 global experts gathered virtually at the National Security Innovation Forum to discuss the impact of AI on national security. A collaboration between the University of California’s Institute on Global Conflict and Cooperation (IGCC), the Silicon Valley Defense Group, the National Security Innovation Catalyst, and partners from the United Kingdom, Australia, and Canada, the Forum brought together specialists from government, academia, and industry (including the investment and start-up communities) to address key national security issues raised by AI and identify practical solutions.
“AI has become one of our key focus areas,” said Scott Tait, executive director of Catalyst, “because it appears that it will have disruptive impacts across all areas of human endeavor: information, politics, policy, economy, sociology, and national security. For the Forum, we brought together practitioners who are all wrestling with how this transformative technology is going to influence their worlds, and to explore how those influences are going to cascade across the traditional divisions of labor.”
Among its defense applications, AI can be used for decision-making, intelligence, logistics, cyber operations (both offensive and defensive), autonomous systems control and coordination, weapons systems, and a host of other areas.
“AI will be one of the most disruptive technologies of our generation, and will change how we do business today, especially in the security sector,” said one Forum presenter.
Contrary to concerns that AI will replace human decision-making, speakers stressed that human thinking, judgment, and leadership will continue to be critical—certainly in the near term—underscoring the need for leaders who are trained in how to use AI for strategy and decision-making. But there are growing questions about the ethical, technical, and policy challenges to adopting AI technologies. Key U.S. competitors, such as China and Russia, may be less constrained by legal and ethical considerations, and thus gain advantages that the United States is unwilling to consider and ill-prepared to counter.
One of the major themes of the Forum was the need for greater cooperation between the United States and like-minded countries in the adoption of AI. Given ethical questions, such as AI’s use in surveillance and its potential for exploitation and manipulation, participants said that a coordinated strategy was essential. With the “Five Eyes” (the long-standing alliance between Australia, Canada, New Zealand, the United Kingdom, and the United States) as a starting point, participants suggested that the United States should work with its allies and partners in developing and adopting AI according to democratic principles and standards, while also anticipating how others might employ it.
Unlike the era of the space race, today the vast majority of AI advances are made in the private sector and driven by consumer demand and the potential for profit. Several speakers and participants stressed the need for governments to provide greater incentives for developing AI for national security goals and to address the roadblocks to innovation within government systems. The U.S. defense acquisition process, some noted, may need to be adapted for emerging technologies like AI, where current processes designed to minimize risk could prevent the United States from advancing the technologies it needs to compete.
If stronger incentives are needed to spur innovation, so too are approaches to protect technologies against manipulation or theft without building new barriers to public sector acquisition.
Said Tait: “This is a two-fold challenge: the companies that develop effective AI algorithms must be allowed to protect them for commercial benefit despite government dual use, and the data sets used to train defense applications of AI must be protected in order to prevent competitors from reverse engineering the capabilities. Our current system of security clearance and information classification is insufficient to address these challenges, and a new system that protects commercial IP and provides government security is needed.”
While the challenges are many, AI, which is emerging alongside a range of other groundbreaking technologies—from 5G to quantum computing—promises to transform the national security and defense landscape for decades to come.
Learn more about IGCC’s Technology and International Security research, and about its partner organization, Catalyst, and the Silicon Valley Defense Group.