Reboot Weekly: Testing Before Scale, Families Writing the Rules, Cities Fixing Permits

Published on February 5, 2026

Summary

This week on Reboot Democracy, Cassandra Madison argues that governments cannot responsibly adopt new tools without safe, shared spaces to test them before procurement locks in risk. Dhruv Kamlesh Kumar shows how that principle worked in practice on the AIEP project, where families shaped a special education tool by defining what information matters, how privacy is protected, and how meaning is preserved across languages. Beth Simone Noveck reports from Boston, drawing on lessons from Spain to help launch a citizen-enabled permitting overhaul. These efforts stand in contrast to recent failures, from New York City shutting down a misleading chatbot to immigration surveillance systems and exposed children’s chat records that show how weak governance turns tools into harm.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, the programs are free and open to all.

AI for Governance

Experimentation as Public Infrastructure

Cassandra Madison on February 2, 2026 in Reboot Democracy

This essay argues that governments cannot responsibly adopt or govern AI without safe, structured spaces to learn before systems scale. Drawing on lessons from public-sector failures and recoveries, Madison argues that experimentation, in the form of small, bounded pilots designed to surface risks early, is not a luxury but a core component of public infrastructure. Highlighting work at the Center for Civic Futures and its Public Benefit Innovation Fund, the piece shows how shared experimentation environments help governments test AI in high-stakes areas like benefits delivery.

Read article

AI for Governance

Mamdani Targets ‘Unusable’ AI Chatbot for Termination

Colin Lecher and Katie Honan on January 30, 2026 in The City

New York City Mayor Zohran Mamdani says his administration plans to shut down a costly AI chatbot launched under the Adams administration after reporting showed it routinely delivered false and legally dangerous guidance to businesses. Built as part of the MyCity digital services overhaul and hosted on Microsoft’s cloud platform, the bot cost hundreds of thousands of dollars yet failed on basic questions about labor, housing, and consumer law. The move underscores growing scrutiny of poorly governed AI deployments in public services and the fiscal and legal risks of rolling out tools without rigorous testing or accountability.

Read article

AI for Governance

Wicked Decluttering

Beth Simone Noveck on February 4, 2026 in Reboot Democracy

Boston has launched the first phase of a redesigned permitting system that reorganizes rules around what residents are actually trying to do, such as opening a restaurant or remodeling a home, rather than around city departments. The approach grew out of an experiment at the Open Government Partnership Summit in Spain, where civic practitioners explored how regulation could be simplified without weakening protections. In Noveck's report from the Boston rollout, Mayor Michelle Wu and Chief Information Officer Santiago Garces explain how the city analyzed 25 years of permit data to identify common permitting experiences and rewrote guidance in plain language with staff and resident input. AI is used behind the scenes to group past permits, help staff translate regulatory requirements into clear public guidance, and improve search, not to automate decisions or remove safeguards.

Read article

Governing AI

Challenges in Applying the EU AI Act’s Research Exemptions to Contemporary AI Research

Janos Meszaros, Isabelle Huys and John P. A. Ioannidis on January 31, 2026 in Nature

As the EU AI Act comes into force, this paper examines whether its research exemptions are fit for today’s AI ecosystem. The authors argue that the Act’s distinctions between research and commercial activity and between lab development and real-world deployment no longer hold in practice, given blurred incentives, shared infrastructure, and live-testing environments. Through legal analysis and concrete scenarios, the article shows how vague definitions and limited guidance create regulatory uncertainty and potential loopholes. It calls for clearer safeguards and more realistic frameworks that better reflect how contemporary AI research is actually conducted.

Read article

AI and Public Safety

DHS AI Surveillance Arsenal Grows as Agency Defies Courts

Justin Hendrix on January 28, 2026 in Tech Policy Press

A new Department of Homeland Security inventory shows a sharp expansion of AI-driven surveillance tools at ICE, even as federal judges document widespread violations of court orders. Reporting details how facial recognition, social media monitoring, and automated tip-processing tools—many built by Palantir and hosted on major cloud platforms—are being deployed to target neighborhoods, migrants, and protest activity with limited oversight. The piece underscores a widening gap between judicial authority and executive enforcement, raising urgent questions about rule of law, civil rights, and accountability as surveillance systems scale inside U.S. cities.

Read article

AI and Public Safety

An AI Toy Exposed 50,000 Logs of Its Chats With Kids to Anyone With a Gmail Account

Andy Greenberg on January 29, 2026 in WIRED

Security researchers discovered that Bondu, a company selling AI-enabled chat toys for children, left its web console almost entirely unsecured, allowing anyone with a Gmail account to access more than 50,000 transcripts of children's private conversations. The exposed data included names, birthdates, family details, preferences, and intimate chat histories designed to personalize future interactions. Although Bondu quickly fixed the issue after the disclosure, the incident has raised concerns about children's privacy, data governance, and the broader risks of AI toys that collect sensitive information at scale. U.S. Senator Maggie Hassan has since demanded answers, underscoring growing regulatory scrutiny of AI products aimed at children.

Read article

AI and Problem Solving

Can AI Help Make Homeless Californians Healthier?

Marisa Kendall on January 27, 2026 in CalMatters

As street medicine teams struggle with chronic doctor shortages, a California health tech company is testing whether AI can help expand care for people experiencing homelessness. The piece examines Akido Labs’s use of an AI tool that guides outreach workers through patient interviews and suggests diagnoses and treatments for remote physician review. While early results point to faster access and higher caseload capacity, clinicians and advocates warn of serious risks, from bias and privacy concerns to the limits of AI in understanding the lived realities of homelessness.

Read article

AI Infrastructure

Shifting Landscapes: A Practical Guide to Pro-Democracy Tech

Alex Parsons, Julia Cushion, and Gemma Moulder on January 31, 2026 in mySociety

This guide maps how technology is being used both to entrench authoritarian power and to strengthen democratic capacity. Based on twenty years of civic tech practice, it distinguishes between defensive tools that protect openness and constructive approaches that build participation, deliberation, and shared democratic infrastructure, offering practical guidance for making technology choices that support democracy rather than undermine it.

Read article

AI and Education

New AI Rules for NYC Schools Coming This Month as Tech Upends Classrooms

Jessica Gould on February 1, 2026 in Gothamist / WNYC

New York City’s Department of Education says it will release long-awaited AI guidelines for public schools this month, responding to mounting concern from parents, educators, and oversight bodies about privacy, procurement, and classroom use. The forthcoming rules aim to set guardrails for AI tools amid rising fears over biometric data collection, student surveillance, and inconsistent decision-making, including contract approvals stalled over AI concerns. The debate highlights how large school systems balance innovation, vendor pressure, and parental consent while defining clear limits on AI use in learning environments.

Read article

AI and Education

Google Backs AI-Enabled Learning in India With ₹85 Cr Grant, AI University

Merin Susan John on January 28, 2026 in Analytics India Magazine

Google has announced a major push into AI-enabled education in India, committing ₹85 crore to support responsible, teacher-led AI learning through new partnerships, tools, and funding. Unveiled at the AI for Learning Forum in Delhi, the initiative includes funding for Wadhwani AI and a pilot to establish India’s first AI-enabled state university in collaboration with the Ministry of Skill Development. Framed around workforce readiness and public capacity-building, the effort emphasizes integrating AI into classrooms and vocational training while keeping educators at the center of learning.

Read article

AI and Education

“Unboxing the Prompt”: How Community Feedback (and AI) Helped Us Build Better AI Together

Dhruv Kamlesh Kumar on February 3, 2026 in Reboot Democracy

This piece traces how the AIEP project rethought AI design by making the system’s core logic visible and editable by the families it serves. Rather than producing generic summaries of dense IEP documents, the team invited parents to help shape the prompts themselves, turning feedback into concrete design rules about what information matters, how it’s presented, and in what order. The result is a model for how “unboxing the prompt” can turn users into co-designers and make AI systems more accountable, usable, and human-centered.

Read article

AI and Labor

NGA Launches Working Group on AI & the Future of Work

Staff on January 27, 2026 in National Governors Association

The National Governors Association has launched a bipartisan working group to help states respond to AI’s growing impact on jobs, skills, and public-sector work. Convened by the NGA Center for Best Practices in partnership with the Center for Civic Futures and McKinsey & Company, the group brings together governors’ advisors to develop a practical Roadmap for Governors on AI & the Future of Work. The effort will focus on workforce impacts, policy options, and how states can lead by example in building AI-enabled government workforce models ahead of the 2026 election cycle.

Read article