News That Caught Our Eye #78

Published on October 2, 2025

Summary

California enacted the first U.S. AI safety law, setting a national precedent with new requirements for transparency, incident reporting, and whistleblower protections. States, cities, and researchers are testing new governance models: from public-sector AI implementation frameworks and civic procurement strategies to peace dialogue toolkits and legitimacy experiments. Families are co-designing AI tools for education, while expert commentary reflects on Spain’s landmark ruling establishing algorithmic transparency as a constitutional right. New warnings also surfaced about predatory “AI companions.” Across sectors, AI governance is shifting toward public trust, accountability, and inclusive design.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public-sector professionals, programs are free and open to all.

Coming up this week: 

  • Stakeholder Engagement in AI Implementation, October 6, 2025, 2:00 PM ET
  • Welcome to Amplify — Strategy, Skills, and The Stakes of Public Communication Today, October 7, 2025, 2:00 PM ET
  • A Model for State Government “Operators”: Colorado’s Governor’s Office of Operations, October 8, 2025, 2:00 PM ET
  • Hiring Reform with Humans at the Center: Lessons from St. Louis and San Francisco, October 9, 2025, 2:00 PM ET

Sign up here

AI for Governance

Cities Need a New Model for Incentivizing Responsible AI

Emily Royall on September 24, 2025 in Tech Policy Press

Drawing on seven years leading AI pilot programs in San Antonio, Emily Royall makes a compelling case for why U.S. cities must take collective action to reshape the AI marketplace. She outlines a new governance model centered on performance benchmarks—accuracy, security, and transparency—that can be embedded into public procurement and scaled through cooperative purchasing agreements. In the absence of federal regulation, Royall argues, cities must pool their influence to demand safe, explainable, and auditable AI tools, shifting the balance of power from vendors to public institutions and setting a new standard for AI accountability.

Read article

AI for Governance

How AI Is Rewiring Democracy, from Power Diffusion to Public AI

Scott Douglas Jacobsen on September 25, 2025 in The Good Men Project

In this wide-ranging interview, Bruce Schneier and Nathan E. Sanders preview their forthcoming book Rewiring Democracy, offering a pragmatic roadmap for how democratic institutions must adapt in the AI era. The duo explores how artificial intelligence amplifies power—concentrating it in some contexts, diffusing it in others—and lays out a four-part strategy for democratic renewal: resisting inappropriate uses of AI, deploying it responsibly in governance, reforming its ecosystem, and renovating democracy itself.

Read article

AI for Governance

Algorithmic Governance and Public Trust: Experimental Insights from Finland

Jaakko Hillo, Isak Vento, and Tero Erkkilä on September 30, 2025 in Public Administration

This study investigates how citizens and public administrators perceive the legitimacy of algorithmic decision-making in public services, using experimental vignettes from Finland’s child protection and loan guarantee sectors. The findings show that while both groups recognize the efficiency benefits of algorithmic systems, they express significant concern over fairness, accountability, and human oversight, especially when systems are portrayed as fully automated. The results suggest that legitimacy perceptions improve when human involvement is maintained, and that mass–elite opinion gaps are narrower than expected.

Read article

Governing AI

The Judicial Protection of Algorithmic Transparency

José Luis Martí on October 1, 2025 in Reboot Democracy Blog

Spain’s Supreme Court has issued a landmark ruling that enshrines algorithmic transparency as a constitutional right, requiring the government to release the source code of a public benefits algorithm. Legal scholar José Luis Martí explains how the decision repositions AI within democratic oversight frameworks, closes key gaps left by the EU AI Act, and signals a judicial shift toward public accountability in the age of automated governance.

Read article

Governing AI

California Passes Landmark AI Safety Law, Sets National Precedent

Chase DiFeliciantonio on September 29, 2025 in Politico

Governor Gavin Newsom has signed SB 53, the first U.S. law requiring major AI developers to disclose safety and security protocols publicly. Championed by Sen. Scott Wiener, the legislation mandates incident reporting and whistleblower protections, and lays the foundation for CalCompute, a state-run cloud infrastructure. Viewed as a model for future national and international AI governance, the law follows failed efforts last year and emerged after negotiations with leading labs including Anthropic, OpenAI, and Meta. The law distinguishes California’s cautious-but-assertive approach from federal efforts under the Trump administration to accelerate AI development in competition with China.

Read article

Governing AI

Public AI in Practice: What States Are Learning About Responsible AI Deployment

Sophie Luskin and Mihir Kshirsagar on September 30, 2025 in Center for Information Technology Policy

A new report from CITP synthesizes insights from the Shaping the Future of AI convening, which brought together over 120 state leaders, researchers, and philanthropy partners to chart pathways for responsible public-sector AI. The report outlines four core tensions: incremental vs. transformational investment, internal operations vs. citizen-facing services, rapid deployment vs. institutional development, and the challenge of evaluating public AI. It highlights implementation lessons from states like Arizona, Maryland, New Jersey, and Colorado, while drawing international comparisons with India and Sweden. The report calls for collaborative infrastructure, ethical procurement, and co-governance with communities.

Read article

AI and Public Engagement

“Silicon Sampling” Isn’t Ready for Prime Time, But It Can Strengthen Survey Design

John Wihbey and Samantha D’Alonzo on September 27, 2025 in SSRN

This working paper reviews over 30 studies on “silicon sampling”—the use of LLMs to simulate public opinion—and offers clear guidance for communicators and survey researchers. While the authors caution against using AI models as substitutes for human participants, they find promising applications in survey refinement, translation, and early-stage message testing. The report details when and how LLMs can add value, outlines technical caveats (from model bias to hallucinated consensus), and offers a practical decision tree for responsible use. The bottom line is that LLMs can augment research workflows, but human samples remain the gold standard.

Read article

AI and Problem Solving

Reimagining Parliaments: Can AI Make Legislative Information More Accessible?

Luís Kimaid (Bússola Tech), Vishwajit Singh (HawkAItrack), Fotios Fitsilis (Hellenic Parliament), Stephen Dwyer (U.S. House of Representatives), and Jurgens Pieterse (Parliament of South Africa) on September 29, 2025 in Bússola Tech

In a global discussion hosted by Bússola Tech, representatives from the U.S., South Africa, India, and Greece explored how AI could reshape citizen interaction with parliaments. Panelists emphasized that most legislatures are still in early stages, with AI tools being tested internally, focused on data structuring, summarization, and internal workflows. South Africa’s Parliament is prioritizing foundational architecture to ensure trustworthy interfaces; the U.S. Congress has begun piloting Microsoft Copilot for 6,000 House staffers; India is deploying searchable AI-powered video archives; and Greece is mapping global parliamentary AI use cases. Across contexts, speakers agreed that transparency, data integrity, and trust must anchor public-facing AI.

Read article

AI and International Relations

Leveraging AI in Peace Processes: A Framework for Digital Dialogues

Martin Wählisch and Felix Kufus on September 25, 2025 in Data & Policy

This research offers a first-of-its-kind framework for responsibly integrating AI into digital peace dialogues, drawing on UN- and NGO-led consultations in Yemen, Libya, and Sudan. Wählisch and Kufus argue that AI can scale participation, synthesize complex input, and support inclusive dialogue in fragile contexts, but only when paired with human facilitation and context-specific design. They identify ten key dilemmas, from algorithmic bias to cybersecurity, and propose a taxonomy to guide peacebuilders in choosing the right tools, safeguards, and timing.

Read article

AI and Education

When Communities Lead, Appropriate Tech and Change Follow

Sofía Bosch Gómez on September 30, 2025 in Reboot Democracy Blog

Co-designed with parents of children with diverse abilities, the AIEP tool is more than an assistive technology: it is a civic catalyst. Sofía Bosch Gómez documents how California families are using AI literacy and WhatsApp-based learning to navigate the IEP process, train peer leaders, and influence how schools respond. This story offers a powerful model for human-centered, community-built technology that delivers not only access but dignity.

Read article

AI and Public Safety

friend.com or foe.com?

Beth Simone Noveck and Amedeo Bettauer on September 29, 2025 in Reboot Democracy Blog

Marketed as a solution to youth loneliness, friend.com is a voice-recording AI pendant that monitors users and responds with pre-written messages. But as Noveck and Bettauer argue, it is a surveillance device in disguise: unregulated, opaque, and deeply unethical. This editorial calls on public institutions, parents, and policymakers to reject predatory tech masked as care, and to invest instead in real, accountable tools for social connection.

Read article