Nobody’s Blueprint: AI and the Governance Vacuum

April 03, 2026
Eunji Emily Kim


In 1969, four university computers exchanged the first message over ARPANET, a Defense Department project that few imagined would one day connect the whole world. For the first two decades of its existence, that origin shaped everything about how the internet developed. The Defense Advanced Research Projects Agency (DARPA) set the research agenda. Federal agencies defined the protocols. Universities, operating within a publicly funded research ecosystem, built the technical foundations. Private firms were present, but they were working within a framework that the state had designed. When new capabilities emerged, the institutions responsible for setting standards, coordinating deployment, and managing access were part of the same ecosystem that had produced the technology. The state was not just a funder. It was structurally embedded in the process of technological development itself.

The commercialization of the internet in the early 1990s changed the surface of that arrangement without changing its foundations. Private firms flooded in, and the web became a commercial space almost overnight. But the underlying infrastructure, including the protocols and the basic architecture of how data moves, remained rooted in publicly designed frameworks. The state retained meaningful access to the levers that governed how the technology worked at its core. Regulation, standardization, and procurement remained viable tools because the foundational layer was still, in a meaningful sense, public.

That arrangement, with the state embedded in the technology's foundational layer and able to exercise public governance and oversight, has been broken by artificial intelligence (AI).

When a handful of private labs began pushing the frontier of large language models, they did not do so within a publicly designed framework. They built their own. The training data, the foundational models, and the products built on top of them were not outcomes of government labs or public research programs. They were built by private firms, with private capital, in support of commercially defined objectives. According to the Stanford University AI Index, nearly 90 percent of notable AI models released in 2024 originated from industry.

The development of AI is not simply a story about private firms being bigger or faster. It is a story about a fundamental shift in who owns the foundational layer of the technology, with implications for who can govern it.

AI’s sidestepping of the public sector did not happen by accident. When the internet was being developed, the state moved first while private capital held back, waiting to see whether the technology would prove commercially viable. With AI, that sequence never occurred. The commercial potential of machine learning was visible early on, and the assets required to pursue it, including data, software talent, and eventually compute, were already concentrated in the hands of a small number of technology firms. Public institutions had little opportunity to embed themselves in the foundational layer because that layer was built before the governance question was even on the table.

That exclusion has consequences. When governments are structurally absent from the foundational layer of a technology, they lose not just influence over how the technology is built but also visibility into where it is going. Governing a technology requires, at minimum, knowing what it is becoming. With AI, that knowledge does not exist, and not only because governments are on the outside looking in. The firms building these systems do not fully understand AI’s capabilities either. Frontier labs are scaling models, observing what emerges, and adjusting accordingly. The trajectory is being discovered, not designed.

This is what makes the governance gap structural rather than merely institutional. Previous technological frontiers, such as nuclear power and aerospace, were uncertain in many ways, but they had legible endpoints that governance institutions could orient themselves around. AI’s endpoint, however, is not just unknown. It is undetermined: possible directions exist, but which one prevails depends on choices that have not yet been made. The state is not failing to govern AI because it lacks the right tools. It is struggling because governance presupposes a subject, something with a direction that can be steered. When that subject is absent, the direction a technology takes gets determined not by public deliberation but by the momentum of whoever is already moving.

The history of genetically modified organism (GMO) regulation offers a model for how governments can set the terms of a technology’s development without owning or controlling it. When GMOs began entering agricultural systems in the 1990s, governments did not attempt to match the pace of biotechnology innovation or build competing research programs. What they did instead was more consequential: they defined the terms under which the technology could meet society. They established which organisms could enter food systems and under what conditions, how risks to ecosystems and public health would be assessed, and who bore the burden of demonstrating safety before deployment. That framework did not stop biotechnology from developing. But it meant that the boundaries of development were not set entirely by the logic of the laboratory or the market. The question of what the technology would ultimately become remained, at least in part, a public one, guided by the public good.

AI requires the same logic, applied under harder conditions. The entry points for regulating in the public interest are not obvious, but they exist. Governments can define the conditions under which AI systems are deployed in public services and critical infrastructure. They can set standards for transparency around how frontier models are trained and evaluated. They can establish the terms under which AI-generated decisions affect citizens, including in employment, credit, and public benefits. These are not questions about who builds the technology. They are questions about the terms on which that technology meets society, and whether it creates benefit or introduces harm. The goal is not to lead the frontier, but to ensure that as AI takes shape, it does so within boundaries that reflect public values, not just private ones. That has always been the core function of governance. In the age of AI, it remains the most important one.

Eunji Emily Kim is an IGCC postdoctoral fellow in technology and international security, based in Washington, D.C. She studies how society adapts to the rise of artificial intelligence, examining changes and responses across political, institutional, and public spheres.

Thumbnail credit: Wikimedia Commons
