Reboot Weekly: Governing Agents, Expanding Evidence, and Making Hard Choices

Published on April 2, 2026

Summary

This week on Reboot Democracy, Sarosh Nagar and David Eaves examine what it will take for governments to govern AI agents already operating in open environments, from building trust infrastructure to managing multi-agent risks. Alister Martin and the Link Health team show how pairing AI with human navigators can help close the public benefits gap, underscoring the need for stronger evidence to guide investment. Beth Simone Noveck reflects on the challenges of teaching democratic engagement in the age of AI, highlighting the tradeoffs behind course design. Beyond Reboot, the White House announced new appointments to the President’s Council of Advisors on Science and Technology, as emerging work in AI ethics and political theory argues that AI systems need “normative competence” to function in democratic contexts. California is moving to limit AI workplace surveillance, New York City schools have introduced guidance, and the National Science Foundation has launched a nationwide effort to build AI readiness. Evidence of chatbot dependency is prompting calls for safeguards, a wrongful arrest tied to facial recognition underscores persistent challenges, and globally, export controls continue to struggle to contain AI proliferation.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, programs are free and open to all.

Tools for Innovation: Service Blueprinting 101, Ideas in Action series, April 2, 2:00 PM ET

Governing and Funding Public AI: Standards, Oversight, and Sustainable Investment, Practical Strategies for Buying and Building Public AI series, April 8, 2:00 PM ET

Prompting Lab Office Hours: Bring Your AI Questions, the Prompting Lab series, April 10, 2:00 PM ET

The Identity Challenge: Tackling User Disambiguation and Data Integration Across Programs, AI and Human Services series, April 13, 2:00 PM ET

AI for Governance

The AI Agents Are Here: A Technical Blueprint for Governments

Sarosh Nagar and David Eaves on March 30, 2026 in Reboot Democracy

As AI agents move from capability to autonomy, this piece argues that governments must shift from passive adoption to active governance. It outlines a three-part blueprint: building trust infrastructure (e.g., agent identity and standards), preparing for multi-agent risks (including coordination failures and adversarial behavior), and strengthening institutional capacity through education, liability frameworks, and public-sector experimentation. Agents are already operating in open environments, and without deliberate governance, states risk missing out on benefits and failing to manage emerging systemic risks.

Read article

AI for Governance

The Next Frontier: AI, Equity, and the Future of Public Benefits

Alister Martin, Ar’Sheill Monsanto, Timothy Scheinert, and Austin Tsai on March 31, 2026 in Reboot Democracy

This piece examines how AI can help close the persistent gap in public benefits enrollment, where billions go unclaimed due to fragmented systems and administrative complexity. Focusing on Link Health’s hybrid model, it shows how AI tools—like OCR-powered document processing, multilingual translation, and chatbot intake—work best when paired with trained human navigators. The next frontier is evidence. States need rigorous, comparative research to determine which combinations of AI tools, portal design, and human support actually improve outcomes and justify public investment.

Read article

AI and Public Engagement

Designing Democratic Engagement in the AI Era: Three Hard Choices

Beth Simone Noveck on April 1, 2026 in Reboot Democracy

Beth Noveck' explains three difficult decisions made in the process of designing the new InnovateUS course on AI and democratic engagement. Over the past week, our team drafted, debated, and cut more than twenty-five thousand words down to a working script, informed by over three hundred comments from fifty advisors across twenty-four countries and a room full of democratic theorists in Barcelona. Designing a one-hour course on public engagement means confronting genuinely hard questions—about representativeness, political framing, and audience—where thoughtful experts disagree and every choice involves a real tradeoff.

Read article

AI and Public Engagement

Building AI for the Democratic Matrix: A Technical Research Agenda for Normative Competence and Normative Institutions

Gillian K. Hadfield, Rakshit Trivedi, and Dylan Hadfield-Menell on March 3, 2026 in Knight First Amendment Institute at Columbia University

This paper argues that aligning AI with democracy requires more than encoding rules or aggregating preferences. Instead, AI systems must develop “normative competence,” the ability to interpret, adapt to, and participate in dynamic social norms that underpin democratic systems. The authors propose building digital “classification institutions” to guide AI behavior in real time, alongside agents capable of predicting and responding to social and legal expectations. Without these capabilities, autonomous AI agents risk destabilizing democratic processes embedded in everyday decisions across markets, governance, and civic life.

Read article

Governing AI

President Trump Announces Appointments to President’s Council of Advisors on Science and Technology

Staff on March 25, 2026 in The White House

The White House announced the first appointments to the President’s Council of Advisors on Science and Technology (PCAST), a body tasked with advising on U.S. leadership in emerging technologies, including AI. The council brings together prominent technology and industry leaders and will focus on workforce impacts and national competitiveness in what the administration calls a “Golden Age of Innovation.” Further appointments and policy direction are expected as the council begins its work.

Read article

AI and Labor

Your Boss’s Algorithm is Watching. California Wants to Make It Look Away

Jacob Ward on March 28, 2026 in Hard Reset

AI-powered workplace surveillance is rapidly expanding, tracking worker behavior, productivity, and even emotional states in real time. This piece examines proposed California legislation to limit these practices, including bans on fully automated discipline and restrictions on predictive behavioral profiling. The bills would require human oversight, transparency, and limits on the use of worker data. As AI systems increasingly shape workplace conditions, the debate highlights growing tensions between efficiency gains and worker rights, safety, and autonomy.

Read article

AI and Labor

Powering Workforce Resilience in the Age of AI: The Case for AmeriCorps

Erin Mote et al. on March 26, 2026 in EDSAFE AI Alliance

This white paper argues that AI is eroding the “first rung” of the career ladder by automating entry-level roles, as youth unemployment rises and job postings decline sharply. It proposes modernizing AmeriCorps into a scalable workforce solution focused on human-centric skills such as critical thinking, dialogue, and teamwork. The report outlines policy reforms to integrate national service into workforce systems, expand training pathways, and build resilient, AI-era entry points into careers, supported by strong ROI and nationwide infrastructure.

Read article

AI and Public Safety

Protecting the Public from Chatbot Harms: Aligning State Policy with Research

Serena Oduro, Briana Vecchione, Meryl Ye, and Livia Garofalo on March 25, 2026 in Data & Society

Research shows users often turn to chatbots during emotional vulnerability, forming dependent relationships despite understanding they are not human. Existing policies focus on disclosures and crisis detection but overlook broader design risks. The authors call for stronger safeguards, including limits on manipulative interactions, protections against dependency, stricter data governance, and independent audits. Effective regulation, they argue, must address how chatbots are designed and deployed.

Read article

AI and Public Safety

Tennessee Grandmother Jailed After AI Facial Recognition Error Links Her to Fraud

Marina Dunbar on March 12, 2026 in The Guardian

A Tennessee woman spent nearly six months in jail after facial recognition software wrongly identified her as a suspect in a bank fraud case. Despite having no connection to the crime, she was arrested, extradited, and detained until evidence proved she was over 1,200 miles away at the time. The case highlights ongoing risks of relying on AI in law enforcement, particularly around accuracy, due process, and accountability, and adds to growing evidence of wrongful arrests linked to flawed facial recognition systems.

Read article

AI and International Relations

Regulating Transfers of AI Algorithms, Training Data and Models: The Potential and Limitations of Export Controls

Kolja Brockmann on March 25, 2026 in Stockholm International Peace Research Institute

As AI becomes central to military and security systems, this analysis examines how export controls can govern the transfer of algorithms, training data, and models. While existing frameworks can apply to AI as dual-use technology, ambiguity around definitions, enforcement challenges, and the widespread availability of general-purpose models limit their effectiveness. The piece highlights gaps in international norms, particularly regarding high-risk uses such as autonomous weapons.

Read article

AI and Education

Guidance on Artificial Intelligence

Staff on March 27, 2026 in New York City Public Schools

New York City Public Schools released comprehensive AI guidance outlining how schools can adopt AI while protecting students, educators, and families. The framework introduces a strict approval process for AI tools and uses a “traffic light” system to define prohibited, conditional, and encouraged uses. It prohibits AI from making decisions about grading, discipline, or student placement, while allowing supervised use for tasks like lesson planning and translation. The guidance emphasizes transparency, data protection, and ongoing stakeholder input as the system evolves.

Read article

AI and Education

NSF Initiative Aims to Make Every American Worker, Business, and Community AI-Ready

Staff on March 25, 2026 in National Science Foundation

The National Science Foundation launched the “AI-Ready America” initiative to expand access to AI skills, tools, and infrastructure across the U.S. workforce and economy. In partnership with federal agencies, the program will fund up to 56 state- and territory-based Coordination Hubs to support AI literacy, small-business adoption, and local government capacity. The effort focuses on closing the gap between national AI capabilities and real-world use by investing in training, technical assistance, and hands-on learning pathways to help communities participate in and benefit from the AI economy.

Read article