Reboot Weekly: 100 Interns, 1,000 Policies, 1,500 Workers, 100,000 Learners—AI, Government, and Democracy by the Numbers

Published on April 23, 2026

Summary

Lots of numbers in this week's News That Caught Our Eye: Beth Simone Noveck interviews Matti Schneider of OpenFisca about why we may be over-investing in "100 interns on cocaine," a.k.a. GenAI, when what we need are more computational legal systems. Yan Zhu maps over 1,000 AI governance documents, revealing proliferating policies but major gaps. Summer Mothwood explains how California used AI to make sense of suggestions from 1,500 state workers about how to improve government. Beyond Reboot, Brookings finds federal AI use rising but uneven across agencies, while New York scales AI training to over 100,000 workers. Elsewhere, reporting on ICE’s geotracking system and proposed limits on license plate data underscores growing scrutiny of surveillance, while research on AI “personalities” and worker pushback in China points to emerging risks in decision-making and labor.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, programs are free and open to all.

Using Source-Grounded AI to Turn Sources into Written and Visual Communications; The Prompting Lab series – April 24, 2:00 PM ET

Simpler Services, Stronger Access: Designing Better Systems with AI; AI and Human Services series – April 27, 2:00 PM ET

Co-Designing with Communities: Building AI Tools with Residents and Frontline Staff; AI and Human Services series – April 30, 2:00 PM ET

UX Fundamentals and Why It Matters in Government; Building Better City Services series – April 30, 3:30 PM ET

AI for Governance

How New York Plans to Implement Artificial Intelligence Training for State Workers

Tom Eschen and Olivia Holloway on April 16, 2026 in CBS6 Albany

New York is scaling a statewide AI training initiative to more than 100,000 public employees, following a pilot across multiple agencies. Delivered in partnership with InnovateUS, the program combines self-paced courses and live workshops to build practical AI skills across roles. Officials emphasize privacy-conscious deployment and real-world application, reflecting a broader shift: governments are not just adopting AI tools, but investing in workforce capacity to use them responsibly and effectively at scale.

Read article

AI for Governance

AI Agents Running the State: What Could Possibly Go Wrong?

Simone Maria Parazzoli and Omer Bilgin on April 15, 2026 in AI Policy Perspectives

This essay reframes the “agentic state” as a shift from governments executing decisions to orchestrating systems of AI agents across services. Drawing on emerging pilots, the authors stress-test six core assumptions behind this vision, from reliability to public trust. Their key insight is that failure will not stem from isolated errors but from misaligned systems interacting at scale. Without redesigning processes, enforcing interoperability, and updating oversight, agentic AI risks compounding bureaucratic complexity rather than resolving it.

Read article

AI for Governance

From Use Cases to Institutional Choices

Beatriz Rey on April 21, 2026 in ModParl Substack

Drawing on discussions at the Inter-Parliamentary Union Assembly, this piece examines how parliaments are moving from isolated AI experiments to broader institutional transformation. While early adoption has focused on efficiency, such as transcription and search, the deeper challenge is organizational: aligning fragmented initiatives, embedding governance, and building capacity across legislative systems. Cases from Germany and the UK highlight a shift from testing tools to redesigning processes, suggesting that the real work of AI in government lies in restructuring how institutions operate around these tools.

Read article

Governing AI

What AI Governance Documents Actually Cover and What They Don’t

Yan Zhu on April 20, 2026 in Reboot Democracy

A new analysis of over 1,000 AI governance documents reveals a pattern of concentration: policies focus heavily on well-defined technical risks like safety and security, while giving far less attention to upstream design choices, socioeconomic impacts, and everyday sectors. Rather than a lack of governance, the issue is imbalance: what gets governed tends to follow what is most legible to policymakers, leaving gaps in areas where AI’s effects are harder to define but increasingly consequential.

Read article

Governing AI

Assessing the State of AI Adoption Across the Federal Government

Valerie Wirtschafter on April 15, 2026 in Brookings Institution

AI use across the federal government has grown rapidly, with more than 3,600 documented use cases in 2025. But adoption remains uneven and operationally constrained, concentrated in a handful of large agencies and slowed by talent shortages, procurement barriers, and institutional risk aversion. Drawing on federal data and interviews, this report highlights that scaling AI in government is less about new tools and more about building workforce capacity, aligning incentives, and overcoming structural barriers to implementation.

Read article

AI and Public Engagement

How We Used AI to Lift the Voices of California State Employees

Summer Mothwood on April 21, 2026 in Reboot Democracy

Analyzing over 2,400 employee comments, California’s Engaged California team shows that the real challenge of AI in public engagement is not scale, but interpretation. Rather than forcing rankings, the team used AI to surface themes while keeping humans in the loop to build taxonomies, audit outputs, and refine results. By publishing their full methodology—including code and prompts—they offer a rare, open, reproducible model for how governments can responsibly use AI to make sense of complex public input without oversimplifying it.

Read article

AI and Public Engagement

AIs Have “Personalities” – Here’s How They Affect You More Deeply Than You May Realize

Tamilla Triantoro on April 13, 2026 in The Conversation

This piece explores how AI systems exhibit interaction styles that users interpret as “personalities,” shaped by training choices and human feedback. Research shows that highly agreeable models can reinforce beliefs, increase trust even when wrong, and lead users to defer to AI judgment. The article points to a growing concern: as these systems scale, design choices like tone and responsiveness can influence decision-making and behavior, raising broader questions about their societal impact.

Read article

AI Infrastructure

A Dozen Interns on Cocaine: What One of the Longest-Running Civic Tech Projects Reveals About AI in Government

Beth Simone Noveck on April 22, 2026 in Reboot Democracy

Drawing on an interview with OpenFisca's Matti Schneider, this piece contrasts generative AI with computational legal systems that “compute” the law rather than approximate it. While large language models excel at synthesis, the article argues they are ill-suited for high-stakes public functions like benefits eligibility, where even small error rates can undermine trust. The core insight is about infrastructure: governments need auditable, deterministic systems for rights-based decisions and risk underinvesting in them as generative tools scale.

Read article

News That Caught Our Eye

Inside ICE’s $12 Million Plan To Map Immigrants’ “Patterns of Life”

Katya Schwenk on April 14, 2026 in The Lever

A new report reveals that U.S. Immigration and Customs Enforcement has awarded a $12.2 million contract for “Project SAFE HAVEN,” an AI-powered geotracking system designed to map individuals’ daily routines, movements, and associations. Using persistent data collection from mobile devices and Wi-Fi networks, the tool aims to build detailed “target profiles” and categorize individuals as potential threats based on behavioral patterns. The deployment raises significant concerns about the scope of surveillance, data linkage, and the use of AI to infer risk in immigration enforcement.

Read article

AI and Public Safety

Albany Mulls New Limits on Data from Digital License Plate Readers

Arun Venugopal on April 21, 2026 in Gothamist

New York lawmakers are considering legislation to restrict how data from automated license plate readers (ALPRs) can be stored and shared, particularly with federal agencies. The bill would require a warrant for access and limit data retention to 48 hours, a sharp reduction from current practices. The proposal reflects growing concern over how AI-enabled surveillance tools can track individuals’ movements across contexts, from immigration enforcement to sensitive locations, raising broader questions about privacy, oversight, and the expanding reach of data-driven policing.

Read article

AI and Labor

Chinese Tech Workers Are Starting to Train Their AI Doubles – and Pushing Back

Caiwei Chen on April 20, 2026 in MIT Technology Review

A viral GitHub project in China has exposed a growing workplace trend: employees being asked to document and “distill” their skills into AI agents that can replicate their work. While tools like Colleague Skill began as satire, they reflect real pressure from employers experimenting with automation through agent-based systems. Workers describe the process as reductive and unsettling, raising concerns about dignity, ownership, and replacement. The backlash, including tools designed to sabotage automation, highlights emerging resistance as AI adoption reshapes how labor is defined and valued.

Read article