In an article originally published in The Times (Scottish Edition), Beth Simone Noveck argues that AI’s most important democratic use is helping governments listen. Drawing on examples from Camden and Scotland, the piece explores how AI could help public institutions process large-scale public input, strengthen participation, and rebuild trust at a time when governments face rising demand and declining capacity.
Published on May 14, 2026 by Beth Simone Noveck
Research Radar
Zero-Click Government: Omakase or Loss of Agency?
In the afterword to Gustavo Maia’s forthcoming book Zero-Click Government, Beth Simone Noveck explores the democratic risks and possibilities of anticipatory governance. While supporting efforts to reduce the administrative burdens placed on citizens, she argues that traditional requests and applications also served as an important democratic feedback signal, one that anticipatory systems risk losing when governments act on inferred demand. Her response examines what kinds of participation, transparency, contestation, and institutional learning are needed if public action is increasingly shaped by data and AI.
Published on May 13, 2026 by Beth Simone Noveck
Global AI Watch
AI Doesn’t Understand Kichwa: Ecuador’s Case for Inclusive AI Governance in the Justice System
In a new piece for Reboot Democracy, Rodrigo Cetina-Presuel, Marco Tello, and Jose M. Martinez-Sierra examine how Ecuador’s judiciary responded to the rapid arrival of AI by building a participatory governance process rooted in the country’s institutional and cultural realities. Through consultations with judicial officials, the process surfaced a critical gap ignored by most international AI frameworks: current AI systems cannot reliably interpret Indigenous languages such as Kichwa, or the legal contexts in which they are used. The result was one of the region’s first judicial AI moratoriums, temporarily prohibiting the use of AI in Indigenous-language cases while Ecuador develops more legitimate and locally grounded governance mechanisms for the future.
Published on May 12, 2026 by Rodrigo Cetina-Presuel, Jose M. Martinez-Sierra and Marco Tello
AI for Governance
The Capitol Wire & Building Congressional Intelligence for Everyone
Congressional information has long been technically public but practically inaccessible, scattered across government sites and locked behind expensive subscription platforms. In response, Zach Florman, Communications Director for Rep. Laura Friedman, built the Capitol Wire. The tool shows how AI can close that gap by turning floor schedules, bill texts, and legislative updates into real-time alerts and searchable policy briefs that are fast, verifiable, and free. The result makes public information more legible for staffers, reporters, and citizens alike, and makes public engagement more likely.
Published on May 11, 2026 by Zachary Florman
Global AI Watch
Progress on Global AI Governance: The CAIDP AI Index and Implications for the Public Sector
The 2026 CAIDP AI Index, ranking AI policy commitments across 90 countries, shows that while most governments now agree on core governance principles, the real divide lies in implementation. Many are advancing laws, oversight, and public participation, but progress lags in turning commitments into practice. As the baseline shifts from whether to govern AI to how, the report underscores that outcomes depend less on frameworks and more on the capacity of public institutions—and the civil servants within them—to operationalize these principles in everyday decisions.
Published on May 6, 2026 by April Yoder and Grace Thomson
Research Radar
Research Radar: AI as a Multiplier for Evidence-Informed Policy
A new WHO discussion paper explores how AI can accelerate research synthesis and keep evidence continuously up to date. Elana Banin welcomes the push to use AI to strengthen the evidence-to-policy pipeline, but argues the more consequential question is whether AI will redefine what counts as evidence in the first place. The harder constraint will ultimately be institutional, as most health workers lack the training and infrastructure to adopt these tools. Government decision-makers must start building the processes to test AI outputs against frontline knowledge and the capacity to make that adoption defensible.
Published on May 5, 2026 by Elana Banin
AI for Governance
Who Gets to Define the AI Debate? A Youth Perspective
A high school journalist reflects on who is shaping the public conversation about AI. While headlines focus on risk and disruption, everyday uses of AI are already helping families access benefits, students learn, and cities deliver services. The gap, Amedeo Bettauer argues, lies between those whose everyday experiences should define the debate and those in power, who too rarely seek out such diverse experiences.
Published on May 4, 2026 by Amedeo Bettauer
Global AI Watch
Governing with Others: The Basque Country Turns Collaboration into Rule of Law
As the Basque Government moves to pass a new Transparency Law this May, it is redefining what transparency means. No longer just about access to information, the law embeds collaborative governance into its core, requiring that decision-making processes be open, traceable, and shaped with others. This piece explores what it looks like to turn participation from a principle into a legal obligation, and what it takes to make participation a structured, accountable part of how policy is made.
Published on Apr 29, 2026 by Xabier Barandiaran
Rethinking Regulation
Rethinking Regulation: How Virginia Used AI to Streamline Its Regulatory Code
A new entry in our Rethinking Regulation series, this in-depth case study by Dane Gambrell includes an interview with Reeve Bull, who led the state’s regulatory modernization effort. It traces how Virginia used AI to review decades of accumulated rules, cut regulatory requirements by over a third, and make them clearer and more accessible. It shows how governments can pair strong institutional processes with AI to modernize regulation and improve how it works for the public.
Published on Apr 28, 2026 by Dane Gambrell
AI for Governance
Before you engage, listen: a framework for citizen participation across the policy cycle
A mayor presents a plan, residents push back, and everyone leaves frustrated, not because people weren’t heard, but because listening and engagement happened at the wrong moment. This piece reframes participation as a cycle: listening to set the agenda, engagement to shape decisions, and follow-through to prove input mattered. The example of St. Louis shows how sequencing these stages turns public input into real outcomes, with AI enabling reflection on input at scale.
Published on Apr 27, 2026 by Wietse Van Ransbeeck
Global AI Watch
A Dozen Interns on Cocaine: What One of the Longest-Running Civic Tech Projects Reveals About AI in Government
What happens when governments rely on systems that sound right instead of being right? Drawing on OpenFisca’s spread from France to governments across Europe, Africa, and Oceania, Beth Simone Noveck’s interview with Matti Schneider makes the case for public infrastructure that computes the law, and warns of the risks of sidelining it as generative AI scales globally.
Published on Apr 22, 2026 by Beth Simone Noveck
Research Radar
How we used AI to lift the voices of California state employees
Using AI to analyze over 2,400 employee comments, California’s Engaged California team found that the challenge wasn’t the scale of the data, but making sense of complex, layered input without oversimplifying it. Their experience shows why human judgment remains essential, from building taxonomies to catching errors, as results can shift significantly depending on how AI is applied and what people choose to trust and prioritize.
Published on Apr 21, 2026 by Summer Mothwood
AI for Governance
What AI Governance Documents Actually Cover and What They Don’t
AI governance is expanding fast, but not evenly. A new analysis from MIT and Georgetown’s CSET maps over 1,000 governance documents to show that while policies are proliferating, they cluster around familiar risks and sectors, leaving key gaps across socioeconomic impacts, upstream design, and everyday domains. The result, as relayed by research team member Yan Zhu, is a more precise picture of what AI governance actually covers, what it still overlooks, and where policymakers should focus in the future.
Published on Apr 20, 2026 by Yan Zhu
Global AI Watch
But Grok Said So! How AI is Enabling Political Polarization
Across contexts like India, where author Anirudh Dinesh’s family lives, AI chatbots such as xAI’s Grok are increasingly used not to inform but to generate arguments that reinforce existing political views, creating “generative echo chambers.” Unlike passive social media exposure, users actively prompt AI to validate positions, often producing confident but inaccurate claims that go unchecked. While some research suggests AI can moderate views in neutral dialogue, real-world use skews toward advocacy, compounded by low verification and high trust in outputs. The result is that AI may not just reflect polarization, but actively deepen it, depending on how these systems are designed and used.
Published on Apr 15, 2026 by Anirudh Dinesh
AI for Impact
What Good AI In Government Actually Looks Like
More than $1 trillion in federal grants flows to communities each year, but complexity keeps much of it out of reach. This piece by Beth Simone Noveck, published by Fast Company, explores how AI can either deepen that gap or help close it. One solution is GrantWell, a community-centered tool designed with local governments to make funding accessible and public systems work as intended. Launched in Massachusetts and expanding to additional states, it shows how AI can help communities claim the resources already set aside for them.