Who Will Shape AI in the Public Interest?
Recent tensions between the U.S. Department of War and Anthropic illustrate a deeper governance problem. The dispute centers on whether the Pentagon can use AI tools for fully autonomous weapons or mass surveillance of Americans. Ironically, instead of the government demanding better behavior of a company, it is a tech company raising the alarm over the government’s potential use of its products.
Meanwhile, the specific contract is not public. The terms are not transparent. And the Pentagon is reportedly both walking away from its Anthropic contract and signaling that it may penalize all contractors who use the company’s products. The episode underscores a reality we too often ignore: the government is not just a regulator of AI. It is one of its largest customers. And when the government spends public money, it is exercising democratic power on behalf of taxpayers.
Public institutions have extraordinary power to shape how market players behave. In this case, the federal government appears to be misusing that power. But just as concerning is that governments at every level, including states and cities, are failing to use their purchasing power to shape AI in the public interest.
Public customers shape markets.
Public purchasing has helped spur innovation, supporting foundational technologies like the Internet and GPS, as well as advanced semiconductors, and has created entire markets through advanced market commitments for life-saving vaccines. When governments buy responsibly, they can accelerate innovation aligned with public goals and demand that those selling products purchased with taxpayer dollars “be best.”
The current contractual dispute is attracting attention for good reason: the public should be alarmed when any government actor reacts petulantly to basic guardrails, especially guardrails that mirror long-standing legal constraints and ethical norms. But the underlying issue is more mundane and far more important.
Across the U.S., AI is entering public institutions quickly and often quietly: through enterprise-wide licenses for chatbots, pilots layered onto existing cloud contracts, features embedded in productivity suites, and heavily discounted “government offers.” The problem is not adoption. The problem is adoption without demands.
Right now, too many agencies are acting like passive consumers: they accept opaque terms, closed architectures, and vague promises of “security,” while trading away future flexibility. Afraid to miss the boat on AI and fall behind, they are racing to spread access but then buying systems that cannot be moved; workflows that cannot be exported; agents whose “memory” and tooling are not portable; and platforms that make it cheaper today to become dependent tomorrow. When elected governments relinquish technical control to private platforms without clear conditions, they are weakening democratic accountability.
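To make “portable” concrete, here is a minimal sketch, in Python, of what an exportable agent workflow could look like as vendor-neutral data. Every field name is an illustrative assumption, not any real vendor’s export schema; the point is that prompts, tool configurations, and evaluation artifacts can live in a format the agency, not the platform, owns.

```python
# A minimal sketch of "portability" as a contract requirement: the agency's
# prompts, tool configurations, and evaluation artifacts serialized to a
# plain, vendor-neutral format. All field names are illustrative
# assumptions, not a real vendor's export schema.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class AgentWorkflow:
    name: str
    system_prompt: str
    # Capabilities required, not a vendor-specific model ID:
    model_requirements: dict = field(default_factory=dict)
    # Tool configurations as plain data the agency can re-implement:
    tools: list = field(default_factory=list)
    # Evaluation artifacts travel with the workflow, so the next vendor
    # can be tested against the same bar:
    evaluation_suite: list = field(default_factory=list)

workflow = AgentWorkflow(
    name="benefits-triage-assistant",
    system_prompt="You help caseworkers triage benefits applications.",
    model_requirements={"min_context_window": 32_000, "supports_tool_use": True},
    tools=[{"name": "case_lookup", "endpoint": "https://agency.example/api/cases"}],
    evaluation_suite=[{"input": "Applicant reapplied after a lapse.",
                       "must": "cite the governing rule"}],
)

# If the contract guarantees an export like this, switching vendors is a
# migration; if it does not, it is a rebuild.
print(json.dumps(asdict(workflow), indent=2))
```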
If AI is becoming a layer of public infrastructure, mediating access to benefits, permits, jobs, and rights, and becoming central to how government workers do their jobs, then procurement is not a back-office function. It is democratic governance by other means.
That is why I’m organizing a six-week conversation with an incredible group of global thinkers and doers to have the public conversation the Pentagon is not having. I want to ask: how can governments realistically, practically, and urgently shape the market for AI in ways that deepen democracy and strengthen our institutions of governance?
Every Wednesday through early April, we will talk about how governments can realistically use purchasing power to shape the market for AI, without romanticizing the idea of government “building everything” or indulging in performative slogans about sovereignty.
The goal is practical: to surface the concrete decisions public agencies are making right now, the structural risks those choices create, and the levers governments can pull immediately to preserve control, accountability, and long-term options.
The Questions We Need to Answer Now
1) How are governments actually buying AI today—and what does that imply?
In the first session, we start by mapping the real entry points: enterprise deals, extensions of existing contracts, hyperscaler marketplaces, pilots, point solutions, and “shadow AI” adopted outside formal procurement. The point is not to scold. It is to name the patterns that matter—loss-leader pricing, closed ecosystems, bundled services—and to ask: where do those patterns reduce risk, and where do they quietly eliminate choice?
Just as importantly, where does the government still have leverage? What contracting terms—around transparency, auditability, evaluation, safety, portability—are realistic? What should be standard, rather than negotiated ad hoc by the few agencies with the most capacity?
2) What do we mean by “Public AI”—and how is it different from sovereignty?
Europe has been having an increasingly sophisticated conversation about sovereignty. In the U.S., the conversation is often narrower: “Which vendor are we using?” The second session tries to bridge these worlds by clarifying what “Public AI” should mean in operational terms.
Public AI is not simply “built by government” or “hosted domestically.” It is a set of design and governance commitments: interoperability, transparency, accountability, human oversight, and the ability to move systems when the public interest demands it. The central question is agency: can a government change course without breaking its own services? A democracy that cannot change course cannot govern itself.
3) If we take Public AI seriously, what would a “public option” look like?
A public option is not a slogan. It is an institutional design question. Do we need shared compute? Publicly funded foundation models? Open reference architectures? Cooperative procurement? Shared data resources? And at which layer of the stack does intervention matter most?
This is where the conversation becomes concrete: a public option is not “one national platform.” It can be modular—designed to increase competition and reduce dependency while preserving public control.
4) What can states and cities do right now—given real constraints?
State and local government is where much of the most pragmatic innovation is happening—and where constraints are the tightest. This session focuses on what is within reach in the next 12–24 months: structuring RFPs to prevent lock-in, designing sandbox environments for experimentation, building modular tools that can swap models, and aligning procurement, IT, and program teams around shared governance processes.
It also asks a hard question: if enterprise-wide partnerships are becoming the default, what must governments demand before scaling them?
5) What can the U.S. learn from international efforts to retain public control?
International examples are useful not because we can copy them wholesale, but because they expose design choices. Spain’s ALIA initiative, Sweden’s shared AI capacity, and Switzerland’s federated governance models illuminate different approaches to public investment, open standards, multilingual inclusion, and long-term stewardship.
The question is not whether the U.S. should build a single sovereign stack. It’s whether states, regions, and public-academic partnerships can create shared capacity that keeps options open and governance public.
6) How do we govern—and fund—public AI sustainably?
The final session is about standards, oversight, and investment. If AI is infrastructure, then governance cannot be an afterthought. What should public agencies stop doing tomorrow? What should they start demanding from vendors? Where should they invest internal capacity?
This is where the “procurement as governance” idea becomes unavoidable. Governments should require, at a minimum:
- Portability: the ability to export prompts, agent workflows, tool configurations, and evaluation artifacts in usable formats.
- Interoperability: architecture that supports model-agnostic swapping and avoids proprietary dependencies where possible (see the sketch after this list).
- Transparency and documentation: model/system documentation, data-use terms, and clear disclosure about retention and training.
- Audit and evaluation rights: access for testing, red-teaming, and independent assessment, before and during deployment.
- Pricing clarity: no “surprise” usage spirals; measurable unit costs; clear throttles and reporting.
- Governance hooks: logging, access controls, incident reporting, and the ability to pause or roll back systems without vendor lock-in.
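To show that “model-agnostic swapping” is an ordinary engineering requirement rather than an exotic one, here is a minimal sketch; every name in it (CompletionProvider, VendorAAdapter, and so on) is a hypothetical stand-in, not any real SDK. Agency workflow code depends on one small interface, each vendor sits behind an adapter, and changing vendors becomes a configuration change rather than a rewrite.

```python
# A minimal sketch of model-agnostic architecture. Every class and method
# name here is hypothetical; a real adapter would wrap a vendor SDK and
# log requests for audit.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Completion:
    text: str
    # Reporting consumption per call supports the "pricing clarity"
    # requirement: measurable unit costs, no surprise spirals.
    input_tokens: int
    output_tokens: int

class CompletionProvider(ABC):
    """The only surface agency workflow code is allowed to touch."""
    @abstractmethod
    def complete(self, prompt: str) -> Completion: ...

class VendorAAdapter(CompletionProvider):
    """Stand-in for a wrapper around one vendor's SDK (stubbed here)."""
    def complete(self, prompt: str) -> Completion:
        reply = f"[vendor A reply to] {prompt}"
        return Completion(reply, input_tokens=len(prompt.split()),
                          output_tokens=len(reply.split()))

class VendorBAdapter(CompletionProvider):
    """A second vendor behind the same interface."""
    def complete(self, prompt: str) -> Completion:
        reply = f"[vendor B reply to] {prompt}"
        return Completion(reply, input_tokens=len(prompt.split()),
                          output_tokens=len(reply.split()))

def summarize_for_caseworker(provider: CompletionProvider, case_text: str) -> str:
    """Agency code never imports a vendor SDK directly."""
    return provider.complete(f"Summarize for a caseworker: {case_text}").text

if __name__ == "__main__":
    # Swapping vendors is one line, which is the point of the requirement.
    provider: CompletionProvider = VendorAAdapter()
    print(summarize_for_caseworker(provider, "Applicant reapplied after a benefits lapse."))
```

The seam is the thing being purchased: whether it exists is decided in the contract, long before any code is written.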
None of this is radical. It is what responsible stewardship looks like when the technology at issue will shape how people experience government.
The Pentagon episode is a reminder of what happens when procurement is opaque and power goes unexamined. The government will shape the AI market, whether it intends to or not. The question is whether it will do so deliberately—in ways that strengthen democratic accountability—or accidentally, by accepting defaults designed for private platforms rather than public service.
If we want AI on the public’s terms, we need to start where the leverage is: how we buy, how we build, and how we govern. Procurement is power, and it's time we used it wisely.