Watch the workshop: “Regulating Algorithms: What Governments Around the World Are Doing—and What Public Servants Should Know.”
Around the world, governments are being told two things at once: adopt AI quickly, and reduce dependence on foreign, often U.S., technology providers. Whether labeled “AI sovereignty,” “strategic autonomy,” or “domestic AI ecosystems,” the implication is that the public sector should build alternatives to dominant U.S. platforms.
While reducing vendor lock-in and avoiding dependence on unaccountable, privately managed platforms are legitimate goals, “sovereignty” alone does not guarantee that AI systems serve the public interest.
Open-source models can still be deployed without oversight. Transparency requirements can remain symbolic. And agencies that lack the expertise to evaluate what they are buying can still fail to use AI well.
The tension between AI sovereignty and what we might call public or democratic AI surfaced repeatedly during our InnovateUS workshop, “Regulating Algorithms: What Governments Around the World Are Doing—and What Public Servants Should Know.”
Joined by Mihir Kshirsagar from Princeton’s Center for Information Technology Policy, the workshop explored how lessons from public-interest AI labs about building, adapting, and using AI in the public interest apply to the practical needs of public agencies seeking rapid AI adoption to improve public services.
The Public AI Framework is emerging as an important, more democratic alternative for thinking about how governments buy, build, and govern AI.
ALIA: Public AI in practice
One theme that became clear is the need to create AI systems that can be audited, maintained, and governed in the public interest over time.
Luca opened by explaining that Public AI comprises a set of policies for orchestrating the development of AI systems under transparent governance, equitable access to core components, and a clear focus on public-purpose functions.
He shared the example of the ALIA project in Spain, a public-interest language model initiative designed to treat AI capabilities as public infrastructure. Developed with public investment and an emphasis on openness and transparency, the project reflects a broader European effort to ensure that core AI components — compute, models, and data — remain accessible to public institutions and smaller organizations, not only to large technology firms.
The ALIA example helped ground the idea that building public alternatives to commercial AI systems is only part of the challenge. The other part is governance: how institutions ensure that AI systems are used responsibly once they are deployed.
With that in mind, the workshop reviewed the EU AI Act and other emerging national legislation in Europe to understand how regulation seeks to translate principles such as transparency, accountability, and human oversight into institutional practice.
The EU AI Act and institutional capacity
The EU AI Act offers one of the most comprehensive attempts to regulate AI through a risk-based framework, distinguishing between prohibited uses, high-risk systems, and systems subject to transparency obligations. Rather than reviewing every provision, the workshop focused on what the Act means for public institutions in practice.
One important takeaway is that definitions embed policy choices. How the law defines an “AI system,” what counts as “decision-making,” and what qualifies as “meaningful human involvement” determines what falls within regulatory oversight. These distinctions shape how agencies design workflows and assign responsibility.
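To make the point that definitions drive oversight concrete, here is a deliberately simplified sketch of a risk-tier classification in the spirit of the Act’s framework. The flag names and the classification logic are invented for illustration; the real legal test turns on detailed statutory definitions, not simple booleans.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"          # banned practices
    HIGH_RISK = "high-risk"            # strict obligations before and after deployment
    TRANSPARENCY = "transparency"      # disclosure obligations (e.g. chatbots)
    MINIMAL = "minimal"                # largely unregulated

def classify(use_case: dict) -> RiskTier:
    """Toy classifier: hypothetical flags stand in for legal definitions.
    Note how each definitional line determines what gets regulated at all."""
    if use_case.get("social_scoring"):            # an example of a prohibited practice
        return RiskTier.PROHIBITED
    if use_case.get("affects_access_to_rights"):  # e.g. benefits eligibility decisions
        return RiskTier.HIGH_RISK
    if use_case.get("interacts_with_public"):     # e.g. a chatbot must disclose it is AI
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL
```

The sketch also shows why definitions embed policy: move a single line (say, what counts as “affecting access to rights”) and entire categories of systems fall in or out of oversight.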
Regulation cannot ensure responsible AI use without institutional capacity.
Another key insight concerns AI literacy, which the Act explicitly requires of organizations that develop or deploy AI systems. This reflects a broader reality: regulation cannot ensure responsible AI use without institutional capacity. Documentation, transparency obligations, and impact assessments only matter if agencies have trained staff with the authority and time to interpret them and intervene when necessary.
The discussion also touched on the evolving European regulatory environment, including proposals to simplify digital regulation through the “Digital Omnibus” process, which may affect how the AI Act is implemented over time.
National implementation: the Italian example
To illustrate how these principles are being translated into practice, Luca highlighted Italy’s national AI law (Law no. 132/2025). The legislation emphasizes human-centric protections alongside digital sovereignty goals, reinforcing principles such as transparency, proportionality, data protection, non-discrimination, and sustainability.
Two provisions were especially relevant to the workshop discussion.
First, regulated professionals such as lawyers and accountants must disclose when they use AI systems in client relationships to preserve fiduciary trust.
Second, the law reinforces a human-in-the-loop requirement for public administration, clarifying that AI systems should support, rather than replace, administrative decision-making and that public officials remain legally responsible for outcomes.
Responsibility for public decisions remains clearly located within governing institutions, even as AI systems become more capable.
These provisions reflect a broader concern raised during the workshop: ensuring that responsibility for public decisions remains clearly located within governing institutions, even as AI systems become more capable.
Three takeaways for public servants
Stepping back from the legal and technical details, three practical lessons emerged from the workshop.
First, reducing dependency on vendors is important, but governance matters more than ownership. Public AI initiatives such as ALIA show how governments can invest in shared infrastructure and open capabilities. But whether systems are proprietary or open source, public institutions still need procurement standards, monitoring practices, and clear accountability structures to ensure AI serves the public interest.
Second, regulation only works when institutions have the capacity to implement it. The EU AI Act’s explicit AI-literacy requirement reflects this: transparency requirements, documentation obligations, and impact assessments all depend on trained staff who can evaluate systems and intervene when necessary. National legislation, such as Italy’s AI law, reinforces the principle that AI systems may support decision-making but cannot replace human responsibility in public administration.
Third, oversight must match how AI systems actually operate. Governments often evaluate AI systems at procurement but rarely monitor them continuously after deployment. Yet models evolve, workflows change, and risks often surface only through use. Meaningful human oversight requires operational mechanisms, including monitoring, override authority, and error-detection processes, not just policy commitments.
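As an illustration only, the operational mechanisms named above (monitoring, override authority, error detection) can be sketched in a few lines of code. Everything here is hypothetical: the field names, the confidence threshold, and the escalation rule are placeholders, not any agency’s actual workflow.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    case_id: str
    outcome: str       # what the model suggests
    confidence: float  # model's self-reported confidence, 0..1

@dataclass
class OversightLog:
    """Monitoring: an audit trail recording the responsible human for each case."""
    entries: list = field(default_factory=list)

def decide(rec: Recommendation, reviewer: str, log: OversightLog,
           confidence_floor: float = 0.8) -> str:
    # Error detection: low-confidence outputs are flagged rather than applied.
    flagged = rec.confidence < confidence_floor
    log.entries.append({
        "case": rec.case_id,
        "reviewer": reviewer,        # the official who remains legally responsible
        "model_outcome": rec.outcome,
        "flagged": flagged,
    })
    # Override authority: flagged cases are escalated to manual review instead
    # of the model's suggestion being adopted automatically.
    return "escalate-for-manual-review" if flagged else rec.outcome
```

The point of the sketch is structural, not statistical: the model only ever recommends, a named official is logged on every decision, and there is a built-in path by which a human replaces the model’s output.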
What’s next: the Public AI series
The upcoming InnovateUS Public AI Series continues this conversation by focusing on how governments can buy, build, and govern AI in ways that align with public values.
The series will explore procurement practices, shared infrastructure, open innovation, and strategies to avoid vendor lock-in while ensuring that systems mediating access to rights and services remain transparent, portable, and accountable.
The workshop made clear that responsible AI adoption depends not on regulation or technology alone, but on institutions capable of governing both.