Reboot Weekly: Building Resilient Systems, Designing Public-Interest AI, and Confronting Generative Polarization

Published on April 16, 2026

Summary

This week on Reboot Democracy, Beth Simone Noveck shows how tools like GrantWell can expand access to public funding and model public-interest AI. Lee Rainie and Janna Anderson warn that AI is becoming society’s operating system, requiring resilience as shared infrastructure. Anirudh Dinesh examines how generative AI fuels “echo chambers,” reinforcing existing beliefs. Beyond Reboot, the MIT AI Risk Initiative and CSET map gaps in governance frameworks, while the Center for AI and Digital Policy tracks global progress toward democratic AI. Partnership on AI examines how Pennsylvania and SEIU Local 668 negotiated protections for nearly 10,000 state employees, and the European Commission Joint Research Centre outlines a framework for scaling public-sector AI. Emerging tools from the New Jersey Innovation Authority’s NJ EASE to DARPA’s agent communication initiative show AI moving from experimentation to infrastructure, as Anthropic highlights rising cybersecurity risks.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for the civic and public sectors, programs are free and open to all.

Getting Started with Source-Grounded AI; The Prompting Lab series – April 17, 2:00 PM ET

Innovations in Action: Inspiring Projects from Around the Country; AI and Human Services series – April 20, 2:00 PM ET

Crisis Engineering for Public Systems: Designing Resilience Under Pressure; Ideas in Action series – April 21, 2:00 PM ET

Using Source-Grounded AI to Turn Sources into Written and Visual Communications; The Prompting Lab series – April 24, 2:00 PM ET

AI for Governance

What Good AI in Government Actually Looks Like

Beth Simone Noveck on April 14, 2026 in Fast Company and Reboot Democracy

This piece contrasts two paths for AI in government: one that uses simplistic prompts to justify cutting public programs, and another that strengthens access to them. It highlights GrantWell, a platform that uses AI to help communities navigate complex federal grant systems and access funding already allocated by Congress. The argument is clear: public-interest AI should be designed to expand access, reduce administrative burden, and align with democratic intent. Rather than replacing judgment, effective government AI supports communities and public servants in making systems more equitable, transparent, and usable.

Read article

AI for Governance

A Practical Guide for AI Use by Public Sector Leaders — and Why This Matters Now More Than Ever

Alan Shark on March 24, 2026 in IBM Center for The Business of Government

Drawing on four years of hands-on experience with public administrators, this piece offers a practical roadmap for how governments can adopt AI responsibly. It emphasizes that public-sector AI carries higher stakes than private-sector use: failures can affect rights, access to services, and public trust. The guide outlines key challenges, from procurement and workforce disruption to governance, accountability, and rising infrastructure costs, while stressing the need for AI literacy, clear oversight, and values-driven leadership. Ultimately, it argues that AI is already embedded in government operations and must be managed deliberately, not reactively.

Read article

AI for Governance

Advancing AI Adoption in EU Public Administrations: Future Directions and Opportunities under the Apply AI Strategy

Luca Tangi on April 9, 2026 in European Commission Joint Research Centre

This report outlines how European governments can accelerate AI adoption while maintaining public trust and accountability. Building on the EU’s Apply AI Strategy, it proposes a three-part framework: align AI use with policy and regulatory principles, strengthen institutional capacity, and deploy AI in high-impact public service areas based on real needs. It emphasizes continuous risk assessment, monitoring, and people-centered design, positioning public administrations as both implementers and stewards of responsible AI. The report highlights that effective adoption depends not just on technology, but on governance, skills, and alignment with public values.

Read article

AI for Governance

Talk Ain’t Cheap: DARPA Offers Grants for New AI-to-AI Communication Protocol

Brandon Vigliarolo on April 8, 2026 in The Register

DARPA has launched the MATHBAC program to develop a scientific foundation for how AI agents communicate and collaborate, aiming to accelerate discovery through coordinated “agent collectives.” The initiative will fund research into both the mathematics of agent interaction and the content of their exchanges, including whether AI systems can infer general scientific principles from data. With grants up to $2 million, DARPA is seeking breakthroughs that could enable AI systems to self-improve and even develop new “languages” for collaboration, potentially reshaping how scientific research and innovation are conducted.

Read article

Governing AI

Mapping the AI Governance Landscape: April 2026 Update

Simon Mylius et al. on April 9, 2026 in MIT AI Risk Initiative / CSET

Analyzing over 1,000 AI governance documents, this update reveals a growing mismatch between what AI policies address and where risks are emerging. Governance frameworks remain concentrated on model safety issues like security, privacy, and transparency, while socioeconomic risks, such as power concentration, labor impacts, and multi-agent systems, receive far less attention. The analysis also finds uneven coverage across sectors and lifecycle stages, with strong focus on deployment but weaker oversight of early data practices. These gaps suggest current governance efforts may miss the areas where AI’s real-world impacts are most significant.

Read article

Governing AI

Artificial Intelligence and Democratic Values Index 2026

April Yoder et al. on April 9, 2026 in Center for AI and Digital Policy

This global index evaluates how national AI policies align with democratic values, focusing on human rights, rule of law, and public accountability. Covering 90 countries, it finds steady global progress in AI governance, including new laws, oversight bodies, and growing support for international coordination such as a global AI treaty. The report introduces AI literacy as a key metric and highlights increased public participation in policymaking. However, it also notes a shift in global leadership, with the United States stepping back from international AI governance efforts even as domestic legislative activity accelerates.

Read article

AI and Labor

How the Pennsylvania government and a major union agreed to AI protections for state employees

Tamara Kneese, Eliza McCullough, Michael George, and Stephanie Bell on April 9, 2026 in Partnership on AI

As Pennsylvania positions itself as a major hub for AI data centers, this policy brief examines the growing tension between rapid infrastructure expansion and democratic governance. The authors argue that state-level efforts to fast-track AI and energy development, often aligned with federal priorities, are weakening municipal authority and limiting community input. They warn that bypassing local oversight risks undermining accountability and public trust. Instead, they propose a framework that balances statewide economic goals with meaningful local participation, ensuring communities have a voice in how AI infrastructure reshapes their environments.

Read article

AI Infrastructure

But Grok Said So! How AI is enabling political polarization

Anirudh Dinesh on April 15, 2026 in Reboot Democracy

This piece explores how generative AI is reshaping political polarization by enabling users to generate persuasive arguments that reinforce preexisting beliefs. Unlike social media algorithms that passively shape exposure, chatbots actively construct tailored justifications, contributing to what researchers call “generative echo chambers.” While some studies show AI can moderate views in structured dialogue, real-world use often involves prompting models to defend positions, producing authoritative-sounding but unverified claims. Combined with low verification rates and high user trust, this dynamic risks deepening polarization by making partisan arguments more fluent, credible, and harder to challenge.

Read article

AI and Public Engagement

The New Human Resilience Challenges Posed by AI

Lee Rainie and Janna Anderson on April 13, 2026 in Reboot Democracy

Drawing on insights from 386 global experts, this piece argues that the greatest AI risk is not a sudden catastrophe but a gradual erosion of human agency, shared reality, and accountability. As AI becomes embedded across societal systems, individual resilience is no longer sufficient. Instead, resilience must be built as shared infrastructure (legal, educational, and civic), enabling people to question, contest, and shape automated systems. The report calls for coordinated action across governments, developers, educators, and communities to preserve human autonomy in an AI-mediated world.

Read article

AI and Problem Solving

Our PBIF Spring 2026 Open Call Launches Today! Here’s What We’re Looking For

Cassandra Madison on April 14, 2026 in Center for Civic Futures

The Public Benefit Innovation Fund (PBIF) has launched its Spring 2026 open call to support projects that improve the delivery of programs like Medicaid and SNAP through technology and experimentation. Structured around two tracks, Early Concept and Pilot, the fund aims to help teams test ideas and scale proven solutions in real government settings. Priorities include reducing administrative burden for caseworkers, improving data-driven service access, and modernizing infrastructure. The open call is run in partnership with the Recoding America Fund.

Read article

AI and Problem Solving

Taking a Government AI Tool from Idea to Reality

Zarak Khan and Walker Gosrich on April 14, 2026 in New Jersey Innovation Authority

This case study shows how New Jersey moved an AI prototype into a production-ready government tool. NJ EASE, a document validation system for funding applications, reduces review time by identifying missing or incorrect information in seconds while keeping humans in control. Key lessons include designing around real user workflows, ensuring zero data retention for sensitive documents, embedding security requirements from the start, and iterating through continuous feedback. The result is a scalable, trustworthy model for deploying AI in government.

Read article

AI and Public Safety

Assessing Claude Mythos Preview’s Cybersecurity Capabilities

Nicholas Carlini et al. on April 7, 2026 in Anthropic

This technical report reveals a major leap in AI’s ability to identify and exploit software vulnerabilities. Claude Mythos Preview demonstrated the capacity to autonomously discover zero-day flaws across major operating systems, browsers, and critical infrastructure—and in some cases, generate full exploit chains without human intervention. The findings suggest that AI is rapidly lowering the barrier to advanced cyberattacks, while also offering powerful tools for defense. Anthropic frames this as a “watershed moment” for cybersecurity, calling for urgent industry coordination, faster patch cycles, and expanded use of AI for defensive security before these capabilities become widespread.

Read article