News That Caught Our Eye #81

Published on October 23, 2025

Summary

Hamburg uses AI to turn citizen feedback into urban policy, Boston showcases tools that improve public trust, and tribal nations are piloting AI for governance while protecting sovereignty. New York banned AI rent price-fixing after algorithmic collusion inflated costs, federal agencies quietly accessed mass surveillance camera networks, and unions sued over AI-enabled social media monitoring. As Andrew Sorota warns, the real threat isn't dramatic AI failures but our growing habit of letting opaque systems make civic choices without public oversight.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for the civic and public sector, the programs are free and open to all.

AI for Governance

How Hamburg is Turning Resident Comments into Actionable Insight

Beth Simone Noveck on October 22, 2025 in Reboot Democracy Blog

The City of Hamburg’s DIPAS platform integrates AI to help city officials summarize, categorize, and map thousands of public comments on urban development plans. Built on open-source models and geospatial tagging, the system structures citizen feedback into actionable insights, making participation more manageable and meaningful. As Beth Simone Noveck reports, nine German cities have now adopted the tool to support faster, more responsive governance.
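For readers curious about the mechanics, here is a minimal sketch of the general pattern: an open-source model categorizes a free-text comment and a simple place lookup attaches coordinates for mapping. It is an illustration only, not DIPAS itself; the library, model checkpoint, topic labels, and place data are all assumptions chosen for the example.

```python
# Illustrative sketch only, not the DIPAS codebase: one way an open-source
# model can turn a free-text resident comment into a categorized,
# geo-referenced record. Assumes the Hugging Face `transformers` library;
# the model checkpoint, topic labels, and place lookup are stand-ins.
from transformers import pipeline

# Zero-shot classifier on an open multilingual NLI checkpoint (an assumption;
# DIPAS's actual models and categories differ).
classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",
)

PLANNING_TOPICS = [
    "cycling infrastructure", "green space", "housing",
    "public transit", "noise", "accessibility",
]

# Hypothetical place lookup; a real system would use a geocoder or the
# plan's own geodata layer. Coordinates are approximate.
KNOWN_PLACES = {"Jungfernstieg": (53.55, 9.99)}

def structure_comment(text: str) -> dict:
    """Categorize a comment and attach coordinates if a known place is named."""
    result = classifier(text, candidate_labels=PLANNING_TOPICS)
    place = next((name for name in KNOWN_PLACES if name in text), None)
    return {
        "comment": text,
        "topic": result["labels"][0],         # best-scoring category
        "confidence": round(result["scores"][0], 2),
        "location": KNOWN_PLACES.get(place),  # None if no place matched
    }

print(structure_comment("Bitte mehr sichere Radwege rund um den Jungfernstieg."))
```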

Read article

AI for Governance

AI Governance Framework Offers Cities Actionable Guidance

Staff on October 16, 2025 in GovTech

A new report from New America’s RethinkAI coalition offers city leaders a results-oriented alternative to federal AI frameworks. Titled “Making AI Work for the Public: An ALT Perspective,” the report proposes a governance model built on three principles—Adapt, Listen, and Trust—and draws from pilots and interviews in Boston, NYC, and San Jose. It urges governments to prepare for increased demand and measure trust, not just efficiency. The ALT framework positions public institutions as enablers in a broader civic tech ecosystem, alongside philanthropy, universities, and community groups.

Read article

AI for Governance

For Tribal Governments, AI Holds Unique Promise and Risks

Lily Jamali on October 9, 2025 in Marketplace

From legal chatbots to 3D-printed turtle shells, tribal nations are piloting AI to strengthen governance, boost efficiency, and preserve cultural heritage. The Morongo Band of Mission Indians built an AI-powered legal database to make its “super democracy” more accessible, while the Cherokee Nation adopted a government-wide AI policy grounded in cultural values. Yet tribal leaders warn of new risks to data sovereignty and privacy as open-source models absorb sensitive historical and linguistic materials. For many, the goal is to harness AI without sacrificing self-determination.

Read article

Governing AI

La desinformación ya existía; la IA solo la hace más barata (“Disinformation already existed; AI just makes it cheaper”)

Francesc Bracero on October 20, 2025 in La Vanguardia

In a profile in Spain's La Vanguardia, Beth Simone Noveck warns that the real risk of AI is our passive willingness to cede public spaces and civic decision-making to unaccountable platforms. She argues that AI should be used to strengthen democracy by making information accessible, amplifying public voice, and embedding collective governance into the design of digital tools.

Read article

Governing AI

Mapping the AI Governance Landscape: Pilot Test and Update

Simon Mylius, Peter Slattery, Yan Zhu, Mina Narayanan, Adrian ThinnYun, Alexander Saeri, Jess Graham, Michael Noetel, and Neil Thompson on October 21, 2025 in MIT AI Risk Repository

A new pilot study from the MIT AI Risk Repository team maps over 950 global AI governance documents using Claude Sonnet 4.5 to classify risks and mitigations across multiple taxonomies, including the MIT AI Risk Taxonomy and NIST frameworks. The findings show LLMs can match or even exceed human reviewers in identifying policy coverage—especially on hard-to-spot risks like transparency failures or systemic governance breakdowns. Notably, areas like AI Welfare and Rights and Multi-agent Risks remain least covered, while governance failure, security vulnerabilities, and national security dominate. This marks a promising step toward building an open-access, searchable database of global AI policy content.
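To make the method concrete, the sketch below shows the general approach of prompting an LLM to tag a document against a short list of taxonomy domains. It is not the study's pipeline: the Anthropic SDK usage, the abbreviated and paraphrased domain list, and the prompt are assumptions, and the team's actual prompts, taxonomy encodings, and validation are far more elaborate.

```python
# Illustrative sketch, not the study's pipeline: prompt an LLM to tag a
# governance document against a few risk-taxonomy domains. Assumes the
# `anthropic` Python SDK and an ANTHROPIC_API_KEY in the environment; the
# domain list below is abbreviated and paraphrased for the example.
import anthropic

RISK_DOMAINS = [
    "Discrimination and toxicity",
    "Privacy and security",
    "Misinformation",
    "Malicious actors and misuse",
    "Human-computer interaction",
    "Socioeconomic and environmental harms",
    "AI system safety, failures, and limitations",
]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

def tag_document(doc_text: str) -> str:
    """Ask the model which risk domains a policy document addresses."""
    prompt = (
        "You are classifying an AI governance document.\n"
        f"Candidate risk domains: {'; '.join(RISK_DOMAINS)}.\n"
        "List only the domains the document explicitly addresses, one per line.\n\n"
        f"Document:\n{doc_text[:8000]}"  # truncate very long documents
    )
    message = client.messages.create(
        model="claude-sonnet-4-5",  # model named in the article; exact ID may vary
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

print(tag_document(
    "Agencies shall audit automated decision systems for bias and publish "
    "impact assessments before deployment."
))
```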

Read article

News that caught our eye

New York Bans AI-Enabled Rent Price Fixing

Elissa Welle on October 16, 2025 in The Verge

New York has become the first U.S. state to ban landlords from using AI-powered pricing software to set rents. Governor Kathy Hochul signed the bill into law on Thursday, citing “housing market distortion” caused by private algorithmic systems like RealPage, which optimize rents for profit at the expense of affordability. The law not only bans such tools but also classifies landlords who use them as engaging in illegal collusion, whether knowingly or not. The move follows similar city-level bans in Jersey City, Philadelphia, San Francisco, and Seattle, and responds to federal antitrust scrutiny. The law takes effect in 60 days.

Read article

AI and Public Engagement

Building Democracy’s Digital Future: Lessons from Boston’s Civic AI Experiments

David Fields on October 21, 2025 in Reboot Democracy Blog

From the launch of New America’s “ALT” AI framework to flash talks on tools like MAPLE and Digital Democracy, Boston became a civic tech lab last week. At the Civic AI Summit and Harvard’s Allen Lab showcase, public officials, students, and nonprofits showed how AI can improve contracting, participation, and public trust when paired with human-centered design and intergovernmental collaboration. The lesson: we don’t need perfect systems; we need coordinated experimentation rooted in public values.

Read article

AI and Public Engagement

Rescuing Democracy from the Quiet Rule of AI

Andrew Sorota on October 16, 2025 in Noema Magazine

While most AI discourse focuses on dramatic risks like job loss, deepfakes, or superintelligence, Andrew Sorota warns against the cultural and institutional habit of deferring judgment to algorithms. Sorota argues that democracy is eroded not by a single AI decision, but by our growing willingness to let opaque systems make civic choices for us. Drawing on political theory, global case studies, and emerging civic technologies, he calls for a new social contract that embeds contestability, friction, and collective judgment into the design of AI systems before deference becomes default.

Read article

AI and Problem Solving

How California Turned Wildfire Recovery into a Deliberative Democracy Experiment

Jeffrey Marino and Josh Kramer on October 19, 2025 in New_Public

In the wake of devastating wildfires around Los Angeles, California’s Office of Data and Innovation launched a digital public engagement platform called Engaged California, a deliberative democracy tool designed to make policymaking more participatory, inclusive, and responsive. In this behind-the-scenes interview, ODI Director Jeffrey Marino shares how the platform uses open-ended questions, structured deliberation, and civility pledges to help residents shape real policy proposals during the wildfire recovery process.

Read article

AI and Labor

Unions Sue Trump Administration Over AI-Powered Social Media Surveillance

Kevin Collier on October 16, 2025 in NBC News

Three major U.S. labor unions—AFT, CWA, and UAW—filed a lawsuit against the Trump administration for using automated tools and AI to scan visa holders’ social media activity for disfavored viewpoints. Filed by the Electronic Frontier Foundation, the lawsuit alleges that the administration’s interagency surveillance program violates First Amendment rights and suppresses political speech through chilling effects and mass monitoring.

Read article

AI and Public Safety

ICE, Secret Service, and Navy Tapped Flock’s AI Camera Network

Joseph Cox on October 16, 2025 in 404 Media

A new investigation by 404 Media reveals that federal agencies—including ICE, the Secret Service, and the Navy’s criminal investigation division—accessed Flock’s network of AI-powered license plate tracking cameras across the U.S., performing hundreds of searches with minimal transparency. The disclosure, prompted by a letter from Senator Ron Wyden, underscores growing concern over unregulated surveillance infrastructure and raises urgent questions about local government participation in mass data sharing without public oversight.

Read article