Reboot Weekly: Inside Italy’s AI Law, NJ’s Public Defender's Office, and the Dilemmas of Synthetic Publics

Published on December 11, 2025

Summary

This week’s highlights include three new Reboot Democracy originals: a piece from New Jersey’s Public Defender on using AI to build tools for under-resourced government lawyers; an essay from Luca Cominassi at the Barcelona Supercomputing Center and Beth Noveck on what makes Italy’s new AI law unique and where it falls short; and a piece from Elana Banin showcasing the Collective Intelligence Project’s new research on synthetic publics. Also in the news this week: fights over whether federal rules should override state AI laws are shaping the policy landscape; Utah and HHS unveil new AI strategies; and new evidence of AI-driven persuasion and misinformation rounds out the News That Caught Our Eye.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, programs are free and open to all.

Governing AI

Humanism Over Hegemony: Inside Italy’s New AI Law

Beth Simone Noveck & Luca Cominassi on December 10, 2025 in Reboot Democracy

Italy’s new AI law advances a distinctly human-centered model of governance. Instead of competing in the global AI arms race, the law prioritizes democratic accountability: keeping humans legally responsible in public administration and healthcare, protecting workers through a national AI labor observatory, restricting AI access for minors, and criminalizing harmful deepfakes and illicit data scraping. While underfunded and lacking an ambitious innovation strategy, Italy’s approach marks a notable departure from the U.S. accelerationist model—asserting digital humanism over technological hegemony.

Read article

Governing AI

State A.I. Laws Keep Us Safe. Trump’s Next Move Could Upend That

Amy Klobuchar on December 9, 2025 in The New York Times

Sen. Amy Klobuchar warns that an imminent Trump executive order could pre-empt state AI laws and “replace them with one rulebook” crafted without public input. She argues that in the absence of congressional action, states have become the country’s primary A.I. safety infrastructure. She urges Congress to advance comprehensive national safeguards but insists that states must retain authority to protect residents from escalating A.I. harms, citing cases involving child safety, scams, and deceptive political content.

Read article

Governing AI

How Washington Is Losing the AI Race No One Is Tracking

Javaid Iqbal Sofi on November 26, 2025 in War on the Rocks

Spain’s decision to award Huawei a contract for its national wiretap system reflects a widening regulatory credibility gap: U.S. firms are losing procurement bids not on technical grounds but because they cannot meet EU-style AI documentation and compliance requirements. Sofi argues this “documentation gap” is undermining U.S. exports, interoperability, and homeland security, as allies increasingly select vendors able to satisfy mandatory AI governance frameworks. The piece calls for a U.S. export strategy that aligns domestic standards with allied regulatory regimes through shared documentation templates, joint working groups, and an AI “regulatory passport” to keep American systems in the competition.

Read article

AI for Governance

Utah Will Push for ‘Pro-Human’ AI, Gov. Cox Announces, as Trump Backs Ban on State Regulations

Emily Anderson Stern on December 2, 2025 in The Salt Lake Tribune

Utah Gov. Spencer Cox unveiled a “pro-human AI” strategy that expands AI use across state government while investing $10 million in workforce curriculum to ensure residents are “AI-ready.” Cox argued states must retain authority to regulate AI harms, particularly around children and data privacy, as the Trump administration pushes a federal ban on state AI laws. Utah’s plan includes a new academic consortium, sector-focused training, and forthcoming legislation on chatbots, deepfakes, and health-care AI.

Read article

AI for Governance

HHS Releases AI Strategy, United by New “OneHHS” Approach

Alexandra Kelley on December 5, 2025 in Nextgov

HHS issued a department-wide AI strategy built around five pillars: governance and risk management; infrastructure that supports cross-agency data sharing; workforce development and burden reduction; accelerating science with validated AI tools; and modernizing public health delivery. A new “OneHHS” framework will link subagencies to share data and deploy AI solutions more rapidly, while updated policies will clarify data ownership and standardize sharing rules. The plan emphasizes secure, role-based AI adoption for employees, open-weight models for research, and clinical decision-support tools that augment providers. HHS will publish metrics to track progress across all pillars.

Read article

AI for Governance

AI Governance Checklist for Elected Officials: Advancing Responsible AI Adoption and Use in the Public Sector

Maddy Dwyer & Quinn Anex-Ries on December 4, 2025 in Center for Democracy & Technology

This brief offers a government-wide checklist to help elected officials and senior agency leaders manage AI implementation responsibly across state and local government. It outlines five core areas for action: building transparency through public AI inventories and community input; strengthening accuracy with testing standards, audits, and human oversight; improving governance via centralized strategies, AI officers, and cross-agency coordination; embedding privacy and cybersecurity protections into procurement and policy; and mitigating risks to safety, rights, and legal compliance. The authors emphasize adapting guardrails to the risk level of each AI use case and building the capacity needed to deliver trustworthy public-sector AI.

Read article

AI and International Relations

Human by Design: Reflections from the OECD Global Roundtable on Equal Access to Justice

Jennifer Sellitti on December 8, 2025 in Reboot Democracy Blog

New Jersey Public Defender Jennifer Sellitti reflects on the 2025 OECD Global Roundtable on Equal Access to Justice, arguing that AI can strengthen fairness when deployed with clear safeguards, secure infrastructure, and strong professional oversight. The piece outlines New Jersey’s three-part approach: state-built AI tools, agency-specific legal applications, and contributions to statewide standards on transparency, bias, and forensic reliability, while emphasizing that human judgment and client-centered practice must remain at the core of justice innovation.

Read article

AI and International Relations

Reflecting on India’s AI Governance Guidelines

Amlan Mohanty on December 5, 2025 in Techlawtopia

Mohanty reflects on India’s new national AI governance guidelines, describing them as a “third way” that emphasizes adoption, capacity building, and inclusive development rather than strict regulation. The framework is locally grounded and flexible, addressing risks tied to caste, child safety, and federal coordination. Critics argue it leans too heavily on voluntary measures and lacks operational detail; Mohanty responds that stronger transparency rules, binding standards, and a dedicated digital regulator will be needed as institutions like the AI Safety Institute mature.

Read article

AI and Public Engagement

Research Radar: Synthetic Data Is Redefining Representation

Elana Banin on December 9, 2025 in Reboot Democracy Blog

As governments test AI models to forecast public reactions to policies, this Research Radar examines the Collective Intelligence Project’s Digital Twin Evaluation Framework, a method for assessing whether models can accurately reflect real opinion patterns within demographic groups. The piece highlights unresolved democratic risks, including unclear standards for representativeness, the danger of synthetic inputs displacing genuine participation, and the legitimacy challenges of relying on “silicon samples” to stand in for communities.

Read article

AI and Elections

Persuading voters using human–artificial intelligence dialogues

Hause Lin, Gabriela Czarnek, Benjamin Lewis, Joshua P. White, Adam J. Berinsky, Thomas Costello, Gordon Pennycook and David G. Rand on December 4, 2025 in Nature

A new Nature study shows AI chatbots can meaningfully shift voter preferences across multiple countries, often more than traditional political ads. Short, tailored AI conversations increased the likelihood of voting and changed vote choice, particularly among moderates. Even “no-facts” models persuaded, while pro-candidate AIs sometimes generated inaccuracies. The findings highlight how AI-driven persuasion is now effective, scalable, and largely ungoverned in democratic processes.

Read article

AI and Public Safety

Google’s New AI Image Generator Is a Misinformation Superspreader

Ines Chomnalez & Sofia Rubinson on December 3, 2025 in NewsGuard

NewsGuard’s red-teaming audit found Google’s new Nano Banana Pro image generator advanced all 30 tested false claims across health, U.S. politics, global brands, conflicts, and Russian influence operations without rejecting a single prompt. The tool not only produced photorealistic misinformation but often added unprompted details that made hoaxes more credible, including fabricated news broadcasts and depictions of public figures. The findings highlight how generative image models, absent strong guardrails, can supercharge disinformation at scale.

Read article