News That Caught Our Eye #85

Published on November 20, 2025

Summary

Princeton’s Mihir Kshirsagar breaks down why predictive-policing algorithms fail and how to distinguish meaningful diagnostic tools from misguided prediction products. Congress renews its push to block state AI laws, while Matt Prewitt and Matthew Victor highlight new institutional approaches to digital participation. The GovLab’s Stefaan Verhulst warns of an emerging “data winter,” and The Frisc profiles the Burnes Center’s growing community-centered AI work in California. Plus, a new Research Radar from Elana Banin examines why today’s large language models still can’t reason like economists.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, programs are free and open to all.

Governing AI

Research Radar: An Economy of AI Agents

Elana Banin on November 18, 2025 in Reboot Democracy

This piece examines Hadfield and Koh’s new study on AI agents and what happens when autonomous systems begin making economic and administrative decisions. The research shows that current agents don’t reason like humans, misinterpret preferences, can spontaneously coordinate like cartels, and erode core assumptions that markets and democratic oversight rely on. The policy question is whether democracies redesign rules now or let commercial incentives define the economic foundations of the AI era.

Read article

Governing AI

It’s Back: Congress Gears Up for Year-End Fight Over Moratorium on AI Laws

Cristiano Lima-Strong on November 18, 2025 in Tech Policy Press

Congress is preparing for another clash over a proposal to block states from enforcing their own AI laws, with House leaders considering attaching a moratorium to the must-pass National Defense Authorization Act. Earlier attempts failed after bipartisan pushback, and questions remain about how broad the new measure will be after previous versions ranged from a 10-year ban to a narrower 5-year restriction tied to broadband grants. Senate critics, including Sen. Brian Schatz, have already vowed to block it, setting the stage for a high-stakes end-of-year fight.

Read article

Governing AI

The Academic Edition: What the Latest Research Tells Us About AI in Parliaments

Beatriz Rey on November 13, 2025 in Modern Parliament by POPVOX Foundation

Legislatures are piloting AI for drafting, debates, and constituent outreach, but public trust is the constraint. UK and Japan surveys show voters accept AI assistance yet reject AI decision-making. Studies find transparent AI use improves satisfaction, while hidden use undermines it. Since 2022, AI references in parliamentary debates have spiked globally, shaped by national priorities such as U.S. security and EU ethics. Italy’s Chamber of Deputies, an early adopter of generative AI in legislative workflow, illustrates both efficiency gains and risks to neutrality and democratic legitimacy.

Read article

AI and Labor

Europe Is Regulating AI Hiring. Why Isn’t America?

Ryan Zhang on November 11, 2025 in OnLabor

As AI becomes embedded in nearly every stage of hiring, new lawsuits show how automated systems, from resume filters to video-interview scoring, can quietly replicate discrimination. While U.S. rules remain fragmented and limited to a few states and cities, Europe has moved ahead with a comprehensive approach: the EU AI Act designates hiring tools as “high-risk,” imposing strict testing, documentation, human oversight, audit logs, and significant penalties for violations. The piece examines this widening regulatory gap and what it means for workers and employers as AI-driven hiring accelerates.

Read article

AI for Governance

The Surprising Shifts in How the Public Sector Is Buying AI — and What Policymakers Can Do About It

Kathrin Frauscher and Kaye Sklar on November 10, 2025 in Open Contracting Partnership

Interviews with 50+ practitioners demonstrate shifts in how governments are acquiring AI. Public agencies are favoring off-the-shelf tools over custom builds, centralizing enterprise-wide contracts, and adopting “shadow AI” through pilots and built-in features that never go through procurement. These trends raise risks around oversight, vendor dependency, and uneven capacity. The authors argue that policymakers must treat procurement as a driver of innovation to ensure AI tools are deployed responsibly and effectively.

Read article

AI for Governance

Irish Department of Justice Chatbots Mislead People Seeking Information

Kris Shrishak on November 19, 2025 in Irish Council for Civil Liberties

In this opinion piece, Kris Shrishak says Ireland’s Department of Justice has used immigration chatbots that frequently give wrong answers while the agency denies accountability. FOI files show no contracts or assessments and only narrow testing. Independent checks revealed major mistakes on visas, travel rules, and residency. Shrishak contends that the department obscures costs and deepens vendor lock-in, while vulnerable users, especially asylum seekers, are nudged toward unreliable tools that can affect their legal status.

Read article

AI and Problem Solving

Foundations for the Digital Commons

Matt Prewitt and Matthew Victor on November 19, 2025 in Reboot Democracy

This recap of the Roux Institute’s Foundations for the Digital Commons convening highlights a two-day effort to chart practical pathways for rebuilding digital infrastructure that supports democratic life. Set in Maine—home to ranked-choice voting, public financing, and a pragmatic civic culture—the event brought technologists, policymakers, journalists, and civic innovators together to examine real-world models for information flows, large-scale deliberation, and data governance. With momentum behind new pilots such as a Maine-wide citizens’ assembly and data-governance initiatives, the convening underscored how state-level experimentation can drive the next generation of democratic digital systems.

Read article

AI Infrastructure

The Emergence of a Data Winter: The Growing Enclosure of Data at a Time of Rapid AI Advances

Stefaan Verhulst on November 10, 2025 in SSRN

This paper warns of a coming “data winter,” as public, scientific, and training data become harder to access due to regulation, institutional caution, corporate hoarding, and geopolitical fragmentation. As governments scale back open data and companies restrict access, evidence-based governance and equitable AI development are at risk. Verhulst proposes five interventions, including treating data as infrastructure and creating sustainable data commons, to prevent a future of privatized, siloed information.

Read article

AI and Education

This AI Software Translates Special Education Plans for SF Parents

Taylor Barton on November 14, 2025 in The Frisc

A new tool, AiEP, is helping San Francisco parents quickly translate and understand their children’s long, jargon-heavy special education plans—far faster than the district’s slow official process. Built by Innovate Public Schools and Northeastern’s Burnes Center for Social Change, the encrypted system translates IEPs into four languages, summarizes key services, and helps families spot errors. About 200 parents use the tool, but SFUSD has not adopted it, citing privacy concerns. Developers say all personal data is redacted and deleted, while families push for wider access amid ongoing staffing and compliance challenges.

Read article

AI and International Relations

Europe in the Age of AI: How Technology Leadership Can Boost Competitiveness and Security

Keegan McBride, Luukas Ilves, Olivia De Hennin, Kevin Luca Zandermann, Tone Langengen, Barbara-Chiara Ubaldi, and Jakob Mokander on November 17, 2025 in Tony Blair Institute

This report argues that Europe is losing ground in the global AI race as the U.S. and China surge ahead in compute, investment, and innovation. Fragmented markets, high energy costs, and slow regulatory coordination have left Europe outmatched in critical technologies that now underpin economic power and national security. The authors call for a continent-wide strategy to secure compute capacity, overhaul digital regulation, accelerate AI adoption, and mobilize Europe’s vast capital pools. They warn the region risks becoming a permanent technology consumer rather than a creator, undermining prosperity, democratic resilience, and geopolitical influence.

Read article

AI and Public Safety

Why “Good Guys” Shouldn’t Use AI like the “Bad Guys”: The Failure of Predictive Policing

Mihir Kshirsagar on November 17, 2025 in Reboot Democracy

This essay argues that predictive policing fails not because police lack data, but because the data reflects policing patterns—not crime—producing false positives, biased feedback loops, and lost public trust. Citing Plainfield and Chicago, Kshirsagar shows how algorithms replicate past enforcement rather than forecast future harm, while diagnostic approaches in Oakland and Richmond use data to guide outreach, reduce violence, and improve accountability without surveillance. The piece concludes that policing needs tools that illuminate systemic problems, not algorithms claiming to predict individual wrongdoing.

Read article