Reboot Weekly: Testimony, Training, and the Infrastructure of Democracy

Published on December 18, 2025

Summary

Beth Simone Noveck testifies before Congress on how AI can help lawmakers move beyond performative public engagement toward the meaningful use of constituent input at scale. Agueda Quiroga announces the launch of InnovateUS’s Spring semester, focused on helping public servants apply AI responsibly amid real constraints of trust, equity, and capacity. A Reboot repost of an OECD.AI blog post lays out a practical blueprint for building democratic, sustainable AI infrastructure. Also in the news: the Trump administration moves to override state AI laws; Congress advances the National Defense Authorization Act with uneven AI safeguards; Washington state releases AI policy recommendations for 2026; Elon Musk brings Grok to El Salvador’s public schools; and the U.K. deepens its AI strategy with Google DeepMind’s first automated research lab.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for the civic and public sectors, programs are free and open to all.

Deliberating with the Public: Democratic Engagement Series Wrap-up – December 18, 10:00 AM ET

Discover our spring workshop series, kicking off in January 2026. Register now.

AI for Governance

Learning Together to Improve Public Service

Agueda Quiroga on December 15, 2025 in Reboot Democracy Blog

Marking the launch of InnovateUS’s Spring 2026 workshop semester, this piece outlines how public-servant feedback and partner input shaped a practical, human-centered learning agenda. The season focuses on responsible AI use, leadership through technological change, and strengthening trust in government—offering hands-on, vendor-neutral training designed to help public professionals apply innovation without losing sight of public values or real-world constraints.

Read article

Governing AI

Trump Signs Executive Order to Neuter State A.I. Laws

Cecilia Kang on December 11, 2025 in The New York Times

President Trump signed an executive order aimed at overriding state-level AI regulations in favor of a single federal framework, granting the attorney general authority to sue states and allowing federal agencies to withhold funding from those that do not comply. Framed as a move to secure U.S. global AI dominance and reduce regulatory fragmentation, the order puts dozens of state AI safety, consumer protection, and transparency laws at risk. Critics warn the action pre-empts state protections without replacing them with robust national standards and is likely to face legal challenges over federal authority.

Read article

Governing AI

The Good, Bad, and Really Weird AI Provisions in the Annual U.S. Defense Policy Bill

Amos Toh on December 15, 2025 in Tech Policy Press

The 2025 National Defense Authorization Act introduces limited guardrails for military and intelligence AI, including performance tracking and modest oversight of autonomous weapons. But it also weakens procurement transparency and deepens reliance on private tech contractors, raising long-term risks for accountability, civil liberties, and cost control in AI-enabled defense systems.

Read article

Governing AI

Washington State AI Task Force Releases AI Policy Recommendations for 2026

Taylor Soper on December 1, 2025 in GeekWire

An interim report from Washington’s AI Task Force lays out a state-level blueprint for regulating AI across healthcare, education, policing, and workplaces amid stalled federal action. Recommendations include stronger transparency requirements for AI training data, disclosures for workplace and law enforcement AI use, clinician oversight of AI-assisted healthcare decisions, and a new grant program to support public-interest AI startups.

Read article

Governing AI

ACM TechBrief: Automated Speech Recognition

Allison Koenecke, Niranjan Sivakumar, Jingjin Li, and Shaomei Wu on December 11, 2025 in Association for Computing Machinery

An ACM TechBrief examines how automated speech recognition (ASR) is increasingly used in high-stakes settings, from healthcare and hiring to policing and courts, despite persistent accuracy gaps and bias. Drawing on recent audit studies, the authors show ASR systems perform significantly worse for many speakers, including Black, non-native, disabled, and Deaf users, and warn that generative AI–based systems introduce new risks such as transcription hallucinations. The brief argues for stronger auditing, transparency, and governance as ASR becomes embedded in critical public and private decision-making.

Read article

AI and Public Engagement

The Future of Constituent Engagement with Congress

Beth Simone Noveck on December 17, 2025 in Reboot Democracy Blog

In testimony before the House Subcommittee on Modernization and Innovation, Beth Simone Noveck argues that Congress’s core challenge is not public participation, but the institutional capacity to use it. Drawing on examples from the U.S., Brazil, and Germany, she shows how pairing disciplined engagement design with AI tools can help Congress synthesize constituent input, surface expertise, and strengthen lawmaking at scale—without turning participation into a performative exercise.

Read article

AI and Education

Elon Musk Teams With El Salvador to Bring Grok Chatbot to Public Schools

Dara Kerr on December 11, 2025 in The Guardian

Elon Musk’s AI company xAI is partnering with the government of El Salvador to deploy its Grok chatbot across more than 5,000 public schools, reaching over one million students as part of an “AI-powered” education initiative. The move has sparked alarm because Grok has repeatedly generated antisemitic content, conspiracy theories, and extremist rhetoric, raising serious concerns about safety, oversight, and political influence in classrooms. The partnership highlights the growing risks of governments adopting untested, privately controlled AI systems in public education without clear safeguards or accountability.

Read article

AI Infrastructure

Public AI: Policies for Democratic and Sustainable AI Infrastructures

Alek Tarkowski, Albert Cañigueral, Felix Sieker, and Luca Cominassi on December 16, 2025 in Reboot Democracy Blog / AI Wonk Blog of OECD.AI

This analysis offers a practical blueprint for “public AI,” mapping where power concentrates across the AI stack—compute, data, and models—and how governments can intervene without trying to outspend frontier labs. It introduces a gradient approach to public AI, paired with concrete policy pathways to reduce dependence on corporate infrastructure while building democratic control, public-interest functions, and open alternatives.

Read article

AI Infrastructure

Transparency in AI Is on the Decline

Rishi Bommasani, Kevin Klyman, Alexander Wan, and Percy Liang on December 9, 2025 in Stanford HAI

The 2025 Foundation Model Transparency Index finds that major AI companies are becoming less transparent over time, with average scores dropping sharply since last year. Despite AI’s growing economic and social influence, most firms disclose little about training data, compute, environmental impact, or downstream societal effects. A small number of leaders, such as IBM, demonstrate that high transparency is possible, but widespread opacity across the industry is strengthening calls for regulatory intervention.

Read article

AI and International Relations

Google DeepMind Announces Its First Automated Research Lab in the U.K.

Kai Nicol-Schwarz on December 11, 2025 in CNBC

Google DeepMind announced plans to open its first “automated research lab” in the U.K., using AI and robotics to run experiments focused on advanced materials like superconductors and semiconductors. The partnership gives British researchers priority access to cutting-edge AI tools and signals a deeper integration of DeepMind’s models into U.K. government, education, and scientific research, highlighting how national AI strategies are increasingly tied to public–private infrastructure deals.

Read article