Reboot Weekly: Governments Use AI to Simplify Rules and Strengthen Participation

Published on April 30, 2026

Summary

Dane Gambrell interviews Reeve Bull on how Virginia used AI to analyze hundreds of thousands of regulatory requirements, cutting 35.7% of them and saving an estimated $1.4 billion annually, but only after years of human groundwork. Wietse Van Ransbeeck shows how AI is making large-scale listening usable, helping governments process tens of thousands of public inputs by matching participation to the policy cycle. And Basque MP Xabier Barandiaran describes how the Basque Country is embedding participation directly into law, making collaboration traceable and enforceable, not optional. Elsewhere, a congressional staffer builds his own AI tools in the absence of institutional support, and the FDA pilots real-time monitoring of clinical trials. This and more in this week's News that Caught Our Eye.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, the programs are free and open to all.

Spotlight on the upcoming AI for Public Sector Legal Professionals series:

AI Basics for Public Sector Legal Professionals – May 6, 2:00 PM ET

AI and Legal Ethics – May 13, 2:00 PM ET

Crafting AI Use Policies – May 20, 2:00 PM ET

AI for Governance

Rethinking Regulation: How Virginia Used AI to Streamline Its Regulatory Code

Dane Gambrell on April 28, 2026 in Reboot Democracy

Virginia used AI to analyze hundreds of thousands of regulatory requirements, cutting 35.7% of them and saving an estimated $1.4 billion annually, but only after years of manual review, clear metrics, and agency coordination. The key insight is that AI acted as a force multiplier: accelerating analysis from years to months, surfacing duplication and contradictions, and making rules more accessible, while leaving final decisions to human experts.

Read article

AI for Governance

Congress has been slow on AI. This staffer tried his own thing

Nina Heller on April 28, 2026 in Roll Call

This piece highlights how AI adoption in government is often happening from the bottom up. A congressional staffer built an AI-powered tool to track House floor activity in real time, summarizing bills, surfacing arguments, and flagging procedural moves despite limited institutional support. While Congress debates formal AI policy and procurement, individual staff are already using off-the-shelf tools to improve legislative workflows. It points to a growing gap between grassroots experimentation and slow-moving institutional adoption.

Read article

AI for Governance

FDA to use AI to track clinical trials in real time

Peter Sullivan on April 29, 2026 in Axios

The U.S. Food and Drug Administration is piloting AI tools to monitor clinical trials in real time, aiming to reduce drug approval timelines while maintaining safety standards. The initiative targets inefficiencies in how trial data is collected, analyzed, and submitted, with early pilots involving AstraZeneca and Amgen. Officials estimate AI could cut overall trial time by up to 40%, addressing “dead time” in administrative processes. The agency is also seeking public input on broader applications, signaling a shift toward more continuous, data-driven oversight in biomedical research and regulation.

Read article

Governing AI

AI Regulation and Human Rights: A Global Trilemma

Mathias Risse on April 22, 2026 in Carr Center for Human Rights, Harvard Kennedy School

This commentary argues that effective, rights-respecting AI governance requires three conditions: governance reach, technological power, and genuine human rights commitment, but no major bloc currently achieves all three. China combines scale and control but subordinates rights to state authority; the United States leads in AI development but lacks cohesive governance; and the European Union prioritizes rights but lacks technological leverage. The result is a global “trilemma” that forces trade-offs for other countries and weakens collective oversight, underscoring the need for coordinated international standards and stronger domestic reforms.

Read article

Governing AI

AI Companies Can’t Regulate Themselves. They Should Regulate Each Other

Mark Thomas on April 29, 2026 in Lawfare

This piece focuses on how to structure coordination in the presence of competition. Its core insight is that AI safety failures are not just technical gaps but a collective-action problem in which firms cannot prioritize safety without losing ground. Drawing on financial regulation models, it proposes a supervised self-regulatory organization (SRO) in which companies co-create binding rules under government oversight, enabling fast, technically informed, and enforceable standards that keep pace with AI development.

Read article

Governing AI

Radical Optionality: Governing Transformative AI Under Uncertainty

Christoph Winter & Charlie Bullock on April 23, 2026 in Institute for Law & AI

This essay introduces a distinctive governance strategy: instead of choosing how to regulate AI now, governments should prioritize preserving their ability to make better decisions later. The concept of “radical optionality” shifts the focus from rules to readiness, arguing that investments in information, talent, evaluation, and flexible legal authority can improve oversight without constraining innovation. By identifying a class of “no-regret” policies that enhance safety at low cost, the piece reframes AI governance as a capacity problem under uncertainty, not just risk control.

Read article

AI Infrastructure

“The absolute edge of precedent”: Feds prepare to take on data centers

Francisco “A.J.” Camacho on April 20, 2026 in Politico

U.S. federal regulators are preparing a major intervention to manage the surging electricity demand driven by AI data centers. The Federal Energy Regulatory Commission (FERC), backed by the White House, is considering new rules to control how large energy users connect to the grid, potentially expanding federal authority at the expense of states. The move reflects mounting pressure to rapidly scale infrastructure for AI while raising questions about jurisdiction, grid stability, and who governs the physical backbone of the digital economy.

Read article

AI and Problem Solving

AI Is Changing Who Wins Research Grants

Yifan Qian, Zhe Wen, Alexander C. Furnas, Yue Bai, Erzhuo Shao, and Dashun Wang on April 25, 2026 in Northwestern Innovation Institute / arXiv

This study finds that AI-assisted proposal writing is reshaping how research funding is allocated. Proposals with stronger signs of large language model use were more likely to be funded by the NIH and produced more publications. At the same time, AI use was associated with lower novelty, with proposals clustering closer to previously funded ideas. The findings suggest AI is improving how ideas are packaged rather than advancing scientific discovery itself, raising concerns that funding systems may increasingly favor safer, more conventional research.

Read article

AI and Public Engagement

Governing with Others: The Basque Country Turns Collaboration into Rule of Law

Xabier Barandiaran on April 29, 2026 in Reboot Democracy

The Basque Country is turning participation into a legal obligation. As Member of Parliament Xabier Barandiaran explains, a new Transparency Law requires that public decisions leave a visible trail, showing how they were made, with whom, and how citizen input shaped the outcome. This goes beyond publishing information: contributions must be recorded, addressed, and traceable across the full policy process, backed by new structures to coordinate and evaluate collaboration. AI isn’t built into the system yet, but the model is designed for a future of required, large-scale sensemaking.

Read article

AI and Public Engagement

Before you engage, listen: a framework for citizen participation across the policy cycle

Wietse Van Ransbeeck on April 27, 2026 in Reboot Democracy

This piece argues that participation fails when governments confuse listening with engagement, instead of matching each to the right moment in the policy cycle. Van Ransbeeck outlines a three-stage model: open listening to set agendas, structured engagement to shape decisions, and closing the loop to build trust. Drawing on examples such as St. Louis, the article shows how sequencing participation yields actionable outcomes. AI plays a supporting role, helping governments cluster, summarize, and interpret large-scale public input.

Read article

AI and Labor

The AI Labor Debate: Three Views on the Future of Work

Teddy Tawil on April 23, 2026 in Carnegie Endowment for International Peace

This piece maps the AI labor debate into three camps: the “alarmed,” who expect rapid job displacement; the “patient,” who anticipate slower, friction-filled adoption; and the “excited,” who see AI driving new job creation. The disagreement centers on two uncertainties: how fast AI capabilities and adoption will advance, and whether new jobs will outpace those lost. Rather than resolving the debate, the paper argues for policy readiness across all scenarios: improving labor-market data, tracking AI’s real-world impacts, and piloting wage insurance and training programs to support workers through uncertain transitions.

Read article