Reboot Weekly: Controlling the Stack, Governing Sovereignty, and Training the State

Published on February 26, 2026

Summary

This week on Reboot Democracy, Beth Simone Noveck explores how governments can provide safe, affordable access to AI at scale, arguing that the real question is not buy versus build, but who controls the infrastructure. In Research Radar, she examines academic warnings about uncritical adoption and reframes the challenge as one of power and oversight. Luca Cominassi and Beth Simone Noveck then argue that sovereignty and regulation mean little without institutional capacity. Beyond Reboot, Washington, DC ties AI training to procurement; Colorado debates data center safeguards; Stanford HAI researchers clarify what AI sovereignty requires; and new evidence shows little electoral penalty for AI-enabled deception. NPR examines bot swarms, and the U.S. Department of Labor releases a national AI literacy framework.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, the programs are free and open to all.

AI for Governance

How Institutions Can Provide AI Access Safely, Affordably, and at Scale

Beth Simone Noveck on February 23, 2026 in Reboot Democracy

As Massachusetts signs a multi-million-dollar enterprise AI contract, New Jersey and Boston offer a different path: building government-hosted AI tools inside existing cloud agreements, priced by usage rather than per seat. With costs as low as $1 per user per month in New Jersey and under $10,000 in Boston’s first year, both jurisdictions paired access with mandatory training, guardrails, and strong logging from day one. The choice is not simply buy versus build, but whether government controls its infrastructure or rents it.

Read article

AI for Governance

DC Becomes First Major US City to Require AI Training

Jonathan Andrews on February 19, 2026 in Cities Today

Washington, DC has become the first major US city to mandate responsible AI training for all government employees and contractors, embedding AI literacy into its workforce. In an interview with Cities Today, Stephen Miller, DC Chief Technology Officer, discusses this initiative. Building on Mayor’s Order 2024-028, the InnovateUS course reinforces six core AI values, including transparency, equity, privacy, and accountability. Completion is required and tied directly to tool access and procurement approval. Rather than treating training as a checkbox exercise, DC is integrating it into oversight, task force review, and real-world monitoring of enterprise AI use.

Read article

AI for Governance

The Call to Create the Congressional Capacity and Technology Office (C-TECH)

Aubrey Wilson on February 24, 2026 in POPVOX Foundation

Arguing that Congress is falling behind both industry and the executive branch in AI adoption, Aubrey Wilson proposes the creation of a Congressional Capacity and Technology Office (C-TECH). Modeled on institutions such as CRS, CBO, and GAO, but focused on change management, training, and strategic technology support, the proposal frames AI not as a tool problem but a capacity problem. Rather than another IT shop, C-TECH would embed technical literacy and oversight into legislative operations, addressing a growing asymmetry between Congress and the systems it is charged with governing.

Read article

Governing AI

Research Radar: Academics Are Sounding the Alarm on AI Adoption. Who’s Listening?

Beth Simone Noveck on February 24, 2026 in Reboot Democracy

Reviewing a 17-author paper urging universities to resist uncritical AI adoption, Beth Simone Noveck highlights a growing concern: vendor dependency, opacity, and hype are reshaping public institutions from within. Applying existing research integrity standards, the scholars argue that many commercial AI systems fail tests of transparency and independence. But for governments, refusal isn’t realistic. The governance challenge is not whether to adopt AI but who controls it, at what layer of the stack, and on whose terms.

Read article

Governing AI

Key Takeaways from “Regulating Algorithms: What Governments Around the World Are Doing—and What Public Servants Should Know”

Luca Cominassi and Beth Simone Noveck on February 25, 2026 in Reboot Democracy

Reducing reliance on Big Tech or investing in domestic AI systems does not automatically make AI work for the public. Drawing on examples from Spain’s ALIA project, the EU AI Act, and Italy’s new AI law, this workshop recap argues that sovereignty and regulation only go so far. What ultimately matters is institutional capacity: trained staff, procurement standards, monitoring mechanisms, and clear human accountability once systems are deployed. Democratic AI depends less on ownership alone and more on governance in practice.

Read article

AI Infrastructure

Buy versus Build an LLM: A Decision Framework for Governments

Jiahao Lu, Ziwei Xu, William Tjhi, Junnan Li, Antoine Bosselut, Pang Wei Koh, Mohan Kankanhalli on February 13, 2026 in arXiv

As governments race to deploy large language models, this paper offers a comprehensive public-sector framework for deciding whether to buy commercial AI services, build sovereign models, or adopt hybrid approaches. Drawing on case studies from Singapore’s SEA-LION and Switzerland’s Apertus, the authors evaluate trade-offs across sovereignty, cost, security, sustainability, talent, and long-term industrial strategy. Rather than framing the decision as binary, the paper argues for pluralistic, evolving strategies that treat LLMs as public infrastructure with structured re-evaluation over time.

Read article

AI Infrastructure

Will Colorado Give Data Centers a Warm Embrace—or a Cool Reception?

Sam Brasch, Taylor Dolven, and Lucas Brady Woods on February 20, 2026 in CPR News / The Colorado Sun / KUNC

As AI-driven data centers expand across Colorado, lawmakers face a defining choice: offer tax incentives to attract investment or impose stricter environmental and community safeguards. Competing bills would either subsidize new facilities or require renewable energy offsets and mitigation for rising utility costs and local impacts. Meanwhile, neighborhoods like Elyria in Denver are voicing concerns about air quality, electricity demand, and long-term community effects. The debate highlights a broader national tension over how states should govern the physical infrastructure powering AI.

Read article

AI and International Relations

AI Sovereignty’s Definitional Dilemma

Juan Pava, Caroline Meinhardt, Elena Cryst, and James Landay on February 17, 2026 in Stanford HAI

As governments race to secure “AI sovereignty,” Stanford HAI scholars argue the term remains dangerously underspecified. The concept spans incompatible meanings—from full-stack national self-sufficiency to softer notions of strategic autonomy—and shifts across layers of the AI stack, from compute and data to models and talent. The authors urge policymakers to move beyond vague calls for “control” and instead clarify why and where they seek greater agency. True sovereignty, they argue, is about managing interdependence, not pursuing costly isolation.

Read article

AI and Elections

Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications

Andreas Jungherr, Adrian Rauchfleisch, and Alexander Wuttke on February 19, 2026 in Political Communication Journal

Across three studies involving more than 7,600 Americans, researchers find that while the public strongly disapproves of AI-enabled deception in election campaigns, parties face no meaningful electoral penalty for using it. Instead, exposure to deceptive AI increases support for stricter regulation, including calls for an outright halt to AI development. The findings reveal a troubling misalignment: norm violations trigger regulatory backlash rather than political consequences, weakening incentives for parties to self-restrain in polarized environments.

Read article

AI and Elections

‘If You Can Keep It’: A.I. And Our Democracy

Jen White and Todd Zwillich on February 16, 2026 in 1A Podcast (NPR)

Experts warn that AI-powered bot swarms and synthetic media are accelerating the scale and sophistication of political disinformation. With estimates suggesting up to 20% of social media accounts may be automated, and far higher on controversial topics, the episode explores how AI lowers the cost of producing persuasive lies, exploits engagement-driven algorithms, and erodes public trust. Guests highlight weak platform guardrails, limited transparency, and the absence of meaningful accountability, raising urgent questions about how democratic institutions can respond before the next election cycle.

Read article

AI and Labor

U.S. Department of Labor Releases AI Literacy Framework

Staff on February 13, 2026 in U.S. Department of Labor

The U.S. Department of Labor’s Employment and Training Administration has published a national AI Literacy Framework outlining five foundational content areas and seven delivery principles to guide AI skill development across workforce and education systems. Designed to support flexible adoption across industries and roles, the framework aims to accelerate AI readiness using existing Workforce Innovation and Opportunity Act funding streams and the governor’s reserve funds. Positioned within the Administration’s broader AI Action Plan and America’s Talent Strategy, the guidance emphasizes scalable workforce preparation for an AI-driven economy, with ongoing stakeholder input shaping future iterations.

Read article

AI and Public Safety

Met Police Using AI Tools Supplied by Palantir to Flag Officer Misconduct

Robert Booth on February 22, 2026 in The Guardian

The Metropolitan Police has confirmed it is piloting AI tools from Palantir to analyze internal data, such as sickness levels, absences, and overtime patterns, to identify potential misconduct among officers. While officials describe the system as pattern detection followed by human review, police unions have condemned the approach as "automated suspicion," raising concerns about opaque profiling and employee rights. The pilot underscores a broader governance dilemma: as AI expands into public safety oversight, questions of transparency, labor protections, and vendor accountability increasingly accompany promises of cultural reform.

Read article