News That Caught Our Eye #83

Published on November 6, 2025

Summary

Brazil is pioneering AI-powered participatory governance at scale, processing input from 1.4 million citizens and turning public proposals into policy reports. From Maine’s 33 recommendations for responsible AI to Brookings’ call for shared data infrastructure to unlock housing supply in 50 cities, we showcase how leaders are solving problems in new ways with the help of AI. We spotlight new research cataloging 70+ LLM tools aimed at improving online discourse, examine the surge of deepfakes in European elections, and share lessons from 20+ InnovateUS workshops. Plus: two essays from Beth Simone Noveck on why augmentation beats automation and why AI agents can’t fix broken institutions.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public-sector professionals, the programs are free and open to all.

AI and Problem Solving

How Governments Are Using AI

Elana Banin on October 30, 2025 in Reboot Democracy

Governments around the world are learning that AI works best when built on a foundation of trust, participation, and system redesign. This feature draws on more than 20 InnovateUS workshops, showcasing public-sector wins like St. Louis cutting hiring times from 12 to 2 months, Hamburg using AI to surface public priorities in urban planning, and New Jersey reducing Spanish-language form completion from 4 hours to 25 minutes. The core takeaway? AI succeeds when governments start with people, redesign processes, and only then introduce tools.

Read article

AI for Governance

The AI Fish Counter: Teaching Ourselves to Use AI Before It Uses Us

Beth Simone Noveck on November 3, 2025 in Reboot Democracy Blog

Beth Simone Noveck draws an analogy between buying fish and choosing AI tools: both require informed judgment amid unclear labeling, regulatory gaps, and real risk. The danger, she argues, isn’t that AI will make us dumber; it’s that governments, companies, and schools won’t make us smarter with it. As policymakers stall and corporations automate, the burden of using AI wisely now falls on us. Like shoppers at the fish counter, we need to learn to read the labels—to know what’s safe, what’s risky, and how to choose well.

Read article

AI for Governance

The Emperor's New Agents: Why AI Won't Fix Broken Government

Beth Simone Noveck on November 4, 2025 in Reboot Democracy

Beth Noveck critiques the new report, The Agentic State, arguing that while AI agents offer a compelling vision for transforming public services, the real solutions to government dysfunction lie in organizational reform, not automation. She warns that without addressing foundational issues—such as siloed agencies, outdated workflows, and poor service design—AI risks exacerbating failure, rather than solving it. The article urges public officials to ask, “Can’t we do this already?” before resorting to speculative, unproven AI solutions.

Read article

AI for Governance

AI in State Government

Katherine Barrett and Richard Greene on October 31, 2025 in IBM Center for the Business of Government

This report explores how state governments are using generative AI to enhance public service delivery—from public health and education to environmental management. It highlights how AI is already saving time, streamlining operations, and supporting policy innovation, while warning of the need for workforce readiness. Pennsylvania partnered with InnovateUS to expand responsible AI training following a successful pilot of ChatGPT, while Ohio used InnovateUS’s seven-module curriculum to build broad digital literacy across its agencies. The authors call for continued investment in cross-agency learning networks, human-in-the-loop systems, and scalable upskilling to ensure AI strengthens public institutions.

Read article

AI and Public Engagement

Global AI Watch: Brazil's Experiment in AI-Powered Participation

Christiana Freitas and Ricardo Poppi on November 5, 2025 in Reboot Democracy

Brazil is pioneering a new model of “democratic intelligence” by integrating AI into large-scale participatory governance. Building on decades of civic engagement, the government used AI in its 2024 National Science & Technology Conference to process massive public input—and is now developing an open-source AI system to automatically analyze citizen proposals, generate policy-linked reports, and send personalized feedback. With 1.4 million participants and over 8,200 proposals in its 2023 PPA process, Brazil’s approach shows how AI can scale inclusive policymaking without losing nuance.

Read article

AI and Public Engagement

Mapping LLM Tools for Public Discourse, Pluralism, and Social Cohesion

Matt DeVerna, David J. Grüning, Jen Hickey, Adnan Jaber, Julia Kamin, Brendan A. Miller, Rehan Mirza, Jiaxin Pei, Victoria Stanski on October 29, 2025 in Prosocial Design Network

A new report maps 70+ LLM-based tools designed to improve online dialogue and democratic engagement—from comment moderators to deliberation bots. Most tools aim to promote healthy conversation and connection, especially during live user engagement, but few focus proactively on upstream interventions. Case studies include CLR:SKY (Bluesky) and Kenya’s zKE network. The researchers also flag risks around manipulation and bias, and open questions about oversight.

Read article

AI Infrastructure

Home Genome Project: AI and Housing Supply

Rosanne Haggerty, Ruby Bolaria Shifrin, Jacob Taylor, Kershlin Krishna, Sara Bronin, Nick Cain, Xiomara Cisneros, Adam Ruege, Henri Hammond-Paul, Jamie Rife, Josh Humphries, and Beth Noveck on October 27, 2025 in Brookings Institution

Brookings proposes a Home Genome Project—a national AI infrastructure to help cities tackle housing shortages. Modeled after the Human Genome Project, the initiative would help cities build open datasets and shared AI tools to identify underused land, reduce red tape, and boost housing supply. Pilots in Atlanta, Denver, Santa Fe, and London show how real-time data and cross-agency teams can unlock hidden housing potential. The goal: enable 50+ cities to scale housing AI by 2030.

Read article

AI and Elections

The week that AI deepfakes hit Europe’s elections

Pieter Haeck and Eva Hartog on October 31, 2025 in Politico

AI-generated disinformation is infiltrating European elections, with deepfake videos falsely showing Irish candidate Catherine Connolly dropping out and over 400 AI-generated posts identified in Dutch political discourse—25% linked to the far-right PVV party. The Dutch data authority warned voters not to rely on AI chatbots for political advice, citing biased and misleading outputs. While the EU’s Digital Services Act requires platforms to curb misinformation, enforcement gaps persist, and few parties are labeling AI-generated content. Binding election-specific AI rules may not arrive before 2026.

Read article

Governing AI

Task Force Releases Report on Artificial Intelligence in Maine

Staff on October 31, 2025 in Maine Governor's Office

Maine’s AI Task Force released its final report, offering 33 recommendations to guide the state’s responsible integration of AI across various sectors, including workforce, health, education, and local government. Convened by Governor Janet Mills, the Task Force emphasized AI literacy, guardrails for consumer and child protection, and the potential for AI to address rural gaps in care and services. It also highlighted municipal use cases, such as Auburn’s plans to modernize permitting and customer service, as evidence of local momentum. The report marks one of the most comprehensive state-led AI governance efforts to date.

Read article

Governing AI

What I Saw Around The Curve

Eli Pariser on October 29, 2025 in Second Thoughts Substack

At “The Curve,” a high-powered AI conference in Berkeley, 350 technologists, policymakers, and activists debated the accelerating trajectory of AI and its societal implications. Discussions explored the capabilities of AI, child safety risks associated with chatbots, U.S.-China dynamics, compute bottlenecks, and “attachment economies.” Anthropic co-founder Jack Clark concluded the event with a provocative call to “self-whistleblow” and regulate from the inside, even if it risks dismantling his own $100 billion company.

Read article

AI and Education

AI Literacy in an Unequal World: Pitfalls and Promises

Sonia Livingstone and Mariya Stoilova on October 27, 2025 in Media@LSE blog

New research from the RIGHTS.AI project reveals that while children across Brazil, India, Kenya, and Thailand are rapidly adopting generative AI, stark inequalities in access, education, and system design are limiting their ability to use it safely or critically. The authors identify three global challenges: AI is amplifying digital literacy gaps; its use exposes children to unique safety and bias risks; and its opaque, global-North–centric design limits children’s agency. They argue that AI literacy must be paired with inclusive design, safety standards, and systemic reform to ensure children’s rights are protected in the age of AI.

Read article