Reboot Weekly: Testing Whether AI Can Deliver Public Value

Published on May 7, 2026

Summary

This week on Reboot Democracy, Elana Banin examines World Health Organization research on how AI could transform evidence-informed policymaking, while warning that the harder challenge is whether ministries can build the capacity to align with frontline realities. Amedeo Bettauer argues that the public conversation on AI is being shaped by a narrow “messenger class,” leaving students, workers, and families already using these tools largely absent from the debate. And the Center for AI and Digital Policy's AI Index finds that, across 90 countries, governments increasingly legislate AI governance principles such as fairness and transparency; the gaps lie in implementation, oversight, and public-sector capacity. Elsewhere, states experiment with AI in benefits systems and child welfare, the Labor Department prepares a workforce data hub to track AI’s economic effects, and South Africa withdraws its national AI policy after fabricated AI-generated citations expose the risks of weak institutional review. This and more in this week’s News That Caught Our Eye.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, the programs are free and open to all.

Building a Team – May 8, 12:00 PM ET

Prompting Lab Office Hours: Bring Your AI Questions – May 8, 2:00 PM ET

AI and Legal Ethics – May 13, 2:00 PM ET

Ten Things to Know About Data Centers – May 14, 2:00 PM ET

AI for Governance

Using AI to Improve Child Welfare

David R. Schwartz on April 30, 2026 in IBM Center for The Business of Government

This report examines how AI can support child welfare agencies facing rising caseloads and complex regulatory demands. Using practitioner insights, it highlights early use cases that reduce administrative burden, improve access to policy and case data, and strengthen professional judgment. The findings emphasize that AI’s value lies in augmenting, not automating, high-stakes decisions, with a focus on transparency, low-risk applications, and strong governance to ensure better outcomes for children and families.

Read article

AI for Governance

Open-Source AI Tools Aim to Support Caseworkers on the Frontline

Nava Public Benefit Corporation on May 4, 2026 in Nava Labs

The Caseworker Empowerment Toolkit offers a suite of open-source AI tools designed to reduce administrative burden in social service delivery. From real-time policy chatbots to document verification and automated form-filling, the tools help caseworkers navigate complex systems and connect clients to benefits more efficiently. Early evaluations show significant improvements in accuracy, particularly in complex cases, while the open-source model emphasizes transparency, local control, and adaptability, pointing to a practical, practitioner-centered approach to scaling AI in public services.

Read article

AI for Governance

U.S. States Experiment Widely with AI, but Struggle to Measure Impact

Staff on May 4, 2026 in Code for America

This assessment maps how U.S. states are progressing unevenly across four stages of AI adoption—readiness, piloting, implementation, and impact—highlighting AI as an ongoing institutional shift rather than a one-time deployment. While nearly all states are building governance frameworks and running pilots, most implementations remain focused on internal efficiency and low-risk use cases, with more transformative, citizen-facing applications advancing slowly. The report finds that leadership, data infrastructure, and workforce training drive progress, but with limited shared methods to evaluate outcomes.

Read article

Governing AI

South Africa Withdraws AI Policy Over Fabricated Sources

Staff on April 27, 2026 in Reuters

South Africa has withdrawn its draft national AI policy after discovering that parts of its reference list contained fabricated citations, likely generated by AI. The policy, which aimed to establish new governance bodies and position the country as a leader in AI, was pulled due to concerns over credibility and oversight failures. Officials emphasized that the issue was not technical but institutional, underscoring the risks of unverified AI use in policymaking and the need for stronger human review, accountability, and quality control in government processes.

Read article

Governing AI

Colorado AI Law Faces Enforcement Delay Amid Legal Challenge

Marianne Goodland on April 28, 2026 in Colorado Politics

A federal judge has paused enforcement of Colorado’s landmark AI law, which aims to prevent algorithmic discrimination in areas like hiring, housing, and healthcare. The delay follows a lawsuit led by xAI and joined by the U.S. Justice Department, highlighting tensions over the law’s scope and constitutional implications. While proponents see the legislation as a necessary first step in establishing guardrails, critics argue it is burdensome and unclear.

Read article

Governing AI

From Principles to Practice: What the CAIDP AI Index Reveals

April Yoder and Grace Thomson on May 6, 2026 in Reboot Democracy

The 2026 CAIDP AI Index shows that while most countries have adopted frameworks and established some oversight or participation mechanisms, fewer have built enforceable rights, completed readiness assessments, or developed the institutional capacity needed for implementation. The Index highlights a growing divide between symbolic commitments and operational governance, underscoring that real progress depends on public-sector capacity (training, oversight, and day-to-day decision-making) rather than policy adoption alone.

Read article

Governing AI

White House Weighs Pre-Release Oversight for AI Models

Tripp Mickle, Julian E. Barnes, Sheera Frenkel and Dustin Volz on May 4, 2026 in The New York Times

The Trump administration is considering a shift from its largely hands-off AI stance toward introducing pre-release oversight of advanced models. Discussions include creating a government–industry working group and potentially establishing a formal review process to assess safety before deployment, similar to emerging approaches in the U.K. The move follows concerns around increasingly powerful systems and signals growing recognition that voluntary safeguards may be insufficient, highlighting tensions between maintaining U.S. competitiveness and introducing structured risk governance.

Read article

AI and Labor

Labor Department Prepares AI Workforce Data Hub

Matt Bracken on April 28, 2026 in FedScoop

The U.S. Department of Labor is preparing to launch an AI workforce hub that will aggregate government and private-sector data to track how AI is reshaping jobs, skills, and productivity. Designed as a “central source of truth,” the platform will provide empirical insights to inform workforce and education policy, moving beyond speculation toward evidence-based decision-making. The initiative reflects a broader push to align AI adoption with worker support through data sharing, scenario planning, and collaboration across agencies, industry, and labor stakeholders.

Read article

AI and International Relations

Global Mayors Convene to Shape AI in City Governance

Tréa Lavery on April 30, 2026 in Mass Live

Boston Mayor Michelle Wu has joined a new international Mayors AI Forum, bringing together city leaders from across the globe to shape how AI is used and governed at the local level. The initiative reflects a growing role for cities as frontline actors in AI governance, with mayors collaborating on policy frameworks, economic impacts, and practical applications for residents. As national approaches lag, the forum highlights how municipal leaders are positioning themselves to influence both the implementation and oversight of AI in everyday governance.

Read article

AI and Public Engagement

A Blueprint for Using AI to Strengthen Democracy

Andrew Sorota and Josh Hendler on May 5, 2026 in MIT Technology Review

This piece argues that AI is reshaping democracy at three levels: how people form beliefs, how they act politically, and how collective decision-making unfolds. As AI becomes the primary interface for information and civic participation, risks include personalized “epistemic bubbles,” agent-driven advocacy, and distorted public deliberation at scale. Yet the authors also point to emerging opportunities, from AI-assisted fact-checking to deliberative platforms that help citizens find common ground.

Read article

AI and Public Engagement

Who Gets to Define the AI Debate? A Youth Perspective

Amedeo Bettauer on May 4, 2026 in Reboot Democracy

A high school journalist examines the gap between dominant AI narratives and lived experience, arguing that a narrow “messenger class” of media and policy voices shapes debate while overlooking how AI is already used in everyday life. Drawing on examples from education, public services, and work, the piece highlights how those most affected, such as students, families, and frontline workers, are largely absent from the conversation. Bettauer calls on policymakers and journalists to broaden the range of perspectives that inform AI governance before policy hardens around an incomplete view of impact.

Read article

AI and Problem Solving

AI as a Multiplier for Evidence-Informed Policy

Elana Banin on May 5, 2026 in Reboot Democracy

Drawing on a new WHO discussion paper, this piece explores how AI could transform the evidence-to-policy pipeline, enabling faster synthesis, continuous updates, and more responsive decision-making. But the deeper challenge is that AI may reshape what counts as evidence, privileging data-rich sources while sidelining lived experience and local knowledge. Banin argues that realizing AI’s value requires deliberate institutional design, using AI to augment human judgment and to build processes that test model outputs against frontline realities.

Read article