Why “Good Guys” Shouldn’t Use AI like the “Bad Guys”: The Failure of Predictive Policing
AI and Public Safety

This essay argues that predictive policing continues to fail not because police departments lack data, but because they are using the wrong kind of data, in the wrong way. Applying low-stakes commercial algorithms to high-stakes decisions can produce dangerous false positives, reinforce biased patterns, and erode public trust in policing. Using examples from Plainfield, NJ, and Chicago, the piece illustrates how predictive systems replicate past police behavior rather than accurately forecasting crime, thereby creating self-reinforcing feedback loops. It contrasts these failures with diagnostic approaches in Oakland and Richmond that utilize data to understand harm, guide outreach, and reduce violence without relying on algorithmic surveillance. The core argument is that policing needs better mirrors, not crystal balls.

Published on Nov 17, 2025 by Mihir Kshirsagar

Designing AI for Trust: Lessons from Tarjimly’s Translation Platform for Humanitarian Action
AI and Problem Solving

When refugees needed language support, Tarjimly turned everyday volunteers into lifelines. In this reflection, CEO Atif Javed traces how the platform evolved from a Facebook Messenger experiment into a global translation network, one now partially powered by AI. His key lesson is that designing for trust means using technology to amplify, not replace, human empathy in moments of crisis.

Published on Nov 12, 2025 by Atif Javed

Governing AI: The Air Force’s AI Land Rush
Governing AI

The Air Force is quietly auctioning off slices of its bases for private AI data centers. They call it innovation; it looks like privatization. Fifty-year leases, 3,000 acres of military land, and no public say. If this is how we build the future, who’s really in command?

Published on Nov 9, 2025 by Beth Simone Noveck

Global AI Watch: Brazil’s Experiment in AI-Powered Participation
AI for Governance

When Brazil's federal government launched its 2023 Participatory Pluriannual Plan, the response was massive: 1.4 million participants submitted over 8,200 proposals through the Brasil Participativo platform. But volume creates a challenge, as manually processing thousands of contributions is slow and resource-intensive, often causing valuable insights to slip through the cracks. Now, Brazil is pioneering an open-source AI system that automatically analyzes citizen feedback, generates comprehensive reports, and tracks which suggestions made it into final policies. The result is a new model for democratic intelligence, one that transforms the flood of public input into structured, actionable knowledge without losing the nuance of individual voices.

Published on Nov 5, 2025 by Christiana Freitas and Ricardo Poppi

Research Radar: The Emperor's New Agents - Why AI Won't Fix Broken Government
Research Radar

The Agentic State is an ambitious and inspiring blueprint for rebuilding government around AI agents that can act and decide autonomously. It powerfully diagnoses real failures in how the public sector designs, delivers, and manages services. While AI is giving us ways to accelerate change, the prescribed cure may be premature: most of what’s broken in government requires organizational reform, not automation.

Published on Nov 4, 2025 by Beth Simone Noveck

The AI Fish Counter: Teaching Ourselves to Use AI—Before It Uses Us
AI and Problem Solving

The danger isn’t that AI will make us dumber—it’s that governments, companies, and schools won’t make us smarter with it. As policymakers stall and corporations automate, the burden of using AI wisely now falls on us. Like shoppers at the fish counter, we need to learn to read the labels—to know what’s safe, what’s risky, and how to choose well.

Published on Nov 3, 2025 by Beth Simone Noveck

How Governments are Using AI
AI and Problem Solving

Public service professionals from across the globe have come together to learn from one another how to use AI to improve governance. From St. Louis cutting hiring times from 12 months to 2, to Hamburg analyzing 11,000 public comments in days, to New Jersey reducing Spanish-language form completion from 4 hours to 25 minutes, one lesson is clear: AI works for democracy when we build the foundation first.

Published on Oct 30, 2025 by Elana Banin

Governing the Undefined: Why the Debate Over Superintelligence Misses the Point
Governing AI

As headlines warn of “superintelligent AI” threatening human extinction, a new open letter reignites familiar fears. But beneath the apocalyptic rhetoric lies a deeper problem. The narrative around artificial superintelligence, long embraced by Big Tech, diverts attention from the real and immediate challenges of AI and how our democratic institutions can address them.

Published on Oct 29, 2025 by Dane Gambrell

Re-thinking AI: How a Group of Civic Technologists Discovered the Power of AI to Rebuild Trust in Government

After two years of research, the RethinkAI collaborative released Making AI Work for the Public, a comprehensive field review of how U.S. governments adopt AI. Since 2019, over 1,600 AI-related bills have been introduced, but most focus on guardrails, not proactive strategy. Meanwhile, cities are piloting translation tools, engagement platforms, and predictive systems, often led by Chief Information Officers taking on new strategic roles. The report challenges civic tech's efficiency-first legacy and proposes a new governance model, ALT: Adapt to anticipate needs, Listen to understand communities, and build Trust through two-way accountability.

Published on Oct 27, 2025 by Neil Kleiman, Mai-Ling Garcia and Eric Gordon

How Hamburg is Turning Resident Comments into Actionable Insight
AI and Public Engagement

Officials in Hamburg had long struggled with the fact that while citizens submitted thousands of comments on planning projects, only a fraction could realistically be read and processed. Making sense of feedback from a single engagement could once occupy five full-time employees for more than a week and chill any appetite for follow-up conversations. Learn how Hamburg built its own open-source artificial intelligence to make sense of citizen feedback at a scale and speed that were once unimaginable.

Published on Oct 22, 2025 by Beth Simone Noveck

Building Democracy’s Digital Future: Lessons from Boston’s Civic AI Experiments
AI and Lawmaking

Boston became a living laboratory for democratic innovation last week, as two major convenings—the Civic AI Summit at Northeastern and Harvard’s Digital Democracy showcase—brought together leaders reshaping how technology serves the public good. From new tools that open up lawmaking and procurement to partnerships that align city and state AI strategies, Boston’s approach offers a national model for how AI can strengthen democracy through human-centered design, transparency, and collaboration.

Published on Oct 20, 2025 by David Fields

New America CEO Anne-Marie Slaughter Reflects on the National Gathering for State AI Leaders
AI for Governance

As states take center stage in shaping how the U.S. adapts to artificial intelligence, their choices will determine not just whether America keeps pace, but whether it thrives. This summer, Princeton University’s Center for Information Technology Policy convened state AI officers, researchers, entrepreneurs, and technologists for “Shaping the Future of AI: A National Gathering for State AI Leaders.” The two-day working conference focused on building practical, responsible frameworks for public-sector AI implementation. New America CEO Anne-Marie Slaughter closed the convening with a wide-ranging keynote that called for public AI infrastructure, trust-based governance, and co-creation across sectors. What follows is a summary of her 10 core takeaways.

Published on Oct 15, 2025 by Anne-Marie Slaughter

Vibe Coding the City: How One Developer Used Open Data to Map Every Public Space in New York City
AI and Service Delivery

New York City has thousands of parks, plazas, and public courtyards, but no easy way to find them. Using “vibe coding,” open data, and generative AI, one civic technologist built a map of every public space in the five boroughs. This is the story of NYC Public Space, an app that stitches together fragmented government datasets, AI-generated descriptions, and community-sourced updates to make the city’s public realm more visible and usable. It’s also a case study in how AI can help public interest technologists move faster, build smarter, and turn open data into real public value.

Published on Oct 14, 2025 by Dane Gambrell

The Next UN: AI, Power, and What Global Governance Must Become
AI and Lawmaking

In late September, the UN adopted a global AI resolution backed by all 193 Member States, a diplomatic milestone, but one that risks repeating old patterns of top-down governance. The new Reboot Democracy Blog Editor, Elana Banin, argues that legitimacy doesn’t come from declarations, but from grounded, democratic practice. From California to Vietnam, she explores what real AI governance looks like and lays out three strategic tests the UN must pass to matter.

Published on Oct 8, 2025 by Elana Banin

Silicon Sampling: When Communications Practitioners Should (and Shouldn’t) Use AI in the Survey Pipeline
Research Radar

Large Language Models are becoming common tools in the communications toolkit, but not all uses are created equal. In this new post from the AIMES Lab at Northeastern University, John Wihbey and Samantha D’Alonzo offer research-backed guidance on when to use LLMs in the survey pipeline and when to steer clear. The research indicates that AI is a powerful assistant for refining survey questions and testing hypotheses, but a poor substitute for actual human respondents. Drawing on more than 30 academic studies, this piece lays out a practical, hybrid approach to “silicon sampling” that helps practitioners strengthen research integrity without falling for AI’s easy shortcuts.

Published on Oct 7, 2025 by John Wihbey and Samantha D'Alonzo