
The UN Has Passed a Resolution to Make AI Safer—It Should Do the Same for Quantum

September 12, 2024
Juljan Krause

IGCC Blog

In an unusual display of unity, the UN General Assembly in March adopted a landmark resolution to promote the safe, secure, and trustworthy development of artificial intelligence (AI) systems. Backed by more than 120 member states, the text calls on countries to refrain from developing AI that, in the wrong hands, has a high potential for human rights abuses.

AI solutions have begun to permeate all sectors of society, from industry to the public sector and the military. While companies can leverage world-class talent to develop bespoke systems that promise a critical edge over competitors, safeguards and ethical use may not always be at the forefront of developers’ agendas. Unfortunately, in an era of rapidly changing technology, regulators often play catch-up with the latest engineering advances, which adds significant uncertainty to the sector.

This has been the case with AI, where, for example, unconstrained algorithms, applied without transparency, have introduced biases and promoted online conspiracy theories, spreading misinformation that skews elections and may threaten the core values of democracy. Regulatory catch-up is also a significant problem for AI’s next evolution: quantum computing, which harnesses the principles of quantum mechanics to build far more powerful machines whose decisions will have major global impact. While not universally faster, quantum computing is likely to have a big impact on climate change modelling, chemical and pharmaceutical research, materials science and weapons design. Above all, researchers believe quantum computing can make AI faster and better by enabling even more complex neural networks.

That is why the General Assembly should extend the AI resolution to quantum computing and quantum communication technologies without delay. Given their potentially game-changing impact on a global scale, quantum technologies urgently need a governance model to avoid the mistakes made with AI.

When it comes to high-stakes technologies with huge commercial implications, concerns about privacy, governance and human rights are often addressed only after the fact. Systems are built, deployed and monetized. Policymakers turn to rights issues with some delay, only after threats or abuses become evident, and typically only after pushback from NGOs and concerned civil society actors. With quantum technologies, we shouldn’t repeat this pattern.

The innovation race between the U.S. and China over emerging technologies is intensifying, and it is bitterly fought. Quantum computing and, to a lesser degree, quantum communication are key strategic technologies in this competition between the two great powers. Quantum technologies will have a significant impact on national and international security, for instance by providing new means for cyber espionage and data-crunching capacity that may deliver critical advantages on the battlefield. The space domain is also likely to see new quantum capabilities, with China having launched a quantum communication satellite in 2016.

Given the system-critical status of these technologies, some will argue that they must not be over-regulated, lest we yield advantage to an adversary who is highly committed and perhaps less bothered about human rights. As “Q-Day” approaches, the inflection point at which quantum computers become genuinely better than classical machines at solving real-world problems, such voices will grow louder, arguing that worrying about quantum regulation now will compromise national security.

If quantum governance is sacrificed in the name of national security, it will become much harder to create good governance regimes for quantum technologies later. We must avoid pitting national security and quantum governance against each other; digital governance should not be painted as the flipside of national security. Rather, we must think proactively, not reactively, about how a quantum-enabled world should, and should not, look. The landmark resolution on AI shows that it can be done.

Juljan Krause is a Postdoctoral Fellow in Technology and International Security at the UC Institute on Global Conflict and Cooperation (IGCC). Juljan holds a Ph.D. in computer science and international relations from the University of Southampton in the United Kingdom, and certificates in quantum computing and quantum algorithms from the Massachusetts Institute of Technology.

This blog post was originally published by The Hertie School and has been reprinted with permission.

Thumbnail credit: Gavin Allanwood (Unsplash)

