Built Against Its People: Iran’s AI Infrastructure of Control
Global AI Watch

Dr. Sara Bazoobandi examines how Iran’s doctrine of “knowledge jihad” shaped the development of its digital and AI infrastructure, transforming technology into an instrument of state control. The piece traces how this system, built for surveillance and centralized authority, has also created strategic fragility, offering a cautionary lesson for democracies designing the foundations of AI governance.

Published on Mar 18, 2026 by Sara Bazoobandi

The Case for Civic AI Compacts with Higher Education
Research Radar

Cities often treat nearby universities as occasional partners rather than strategic collaborators. But as artificial intelligence reshapes local economies and public services, that relationship may need to change. Drawing on a new policy brief, The AI Lab Next Door, Neil Kleiman argues that city–university “compacts” can transform transactional ties into intentional partnerships, helping communities harness the growing AI capacity already taking shape on college campuses.

Published on Mar 17, 2026 by Neil Kleiman

What We Learned from 50 Experts About Designing Democratic Engagement in the AI Era
Research Radar

More than 50 practitioners, researchers, and civic technologists from 24 countries reviewed the draft curriculum for Designing Democratic Engagement for the AI Era, providing over 300 comments and suggestions. The feedback highlighted the need for clearer guidance on institutional readiness, trust, inclusion, and the risks and limits of AI in public participation. This post summarizes the key themes that emerged, explains how AI tools were used to synthesize the feedback, and outlines the next steps in developing the course.

Published on Mar 17, 2026 by Dane Gambrell

Can AI help save us bureaucrats from our bureaucracy?
AI for Governance

InnovateUS and the Center for Civic Futures are launching a new series exploring how AI can help human services agencies reduce administrative burden and improve benefits delivery. Drawing on Robert Asaro-Angelo’s experience as Commissioner of New Jersey’s Department of Labor and Workforce Development, this post examines how agencies can use AI to help the public sector improve benefits delivery, reduce administrative burden, and better support both frontline staff and the people they serve.

Published on Mar 16, 2026 by Robert Asaro-Angelo

South Australia needs its own sovereign AI capability
Global AI Watch

In this commentary, originally published by InDaily South Australia, Matt Ryan argues that artificial intelligence can help governments deliver more effective, human-centered services, but only if it builds public trust and democratic legitimacy. Drawing on examples from Spain, San Francisco, and the UK, he outlines a path for South Australia to develop “sovereign AI capability.” His proposal focuses on three priorities: participatory AI governance, stronger public-sector AI skills, and reinvesting efficiency gains into public services, ensuring AI improves government while strengthening democracy.

Published on Mar 11, 2026 by Matt Ryan

Assembly Required: A Conversation with Lorelei Kelly on Deliberative Technology and Congressional Reform
Research Radar

In this conversation with Elana Banin, Lorelei Kelly argues that rebuilding democratic resilience requires redesigning the institutional infrastructure connecting citizens to Congress. Drawing on constitutional history and emerging technologies, she explores how deliberative technology and AI could help revive the First Amendment’s promises of assembly and petition for the digital age.

Published on Mar 10, 2026 by Elana Banin

Building an “Agentic Middleware” for the City Government: Boston’s Experiment with Model Context Protocol
AI for Governance

Boston is preparing for a future where AI agents increasingly interact with government systems. In this interview, Boston's Chief Information Officer Santiago Garces explains how the city is experimenting with the Model Context Protocol (MCP) as a governance layer between AI and public digital infrastructure. Starting with the open data portal, Boston is testing how MCP can make AI interactions more reliable, secure, and grounded in real government data. The effort provides a concrete example of how to safely support the emerging “agentic web” while improving access to public services.

Published on Mar 9, 2026 by Alberto Rodriguez Alvarez

Global AI Watch: AI, Food Security, and the Case for Institutional Reform
Global AI Watch

In this interview, B Cavello discusses findings from an Aspen Institute global survey and reflects on what it reveals about the intersection of AI and food systems. Practitioners consistently emphasized institutional capacity, governance, and distributional challenges as central constraints. The conversation explores how AI might support greater transparency, participation, and resilience, and what these insights mean for U.S. state and local policymakers working on food security, land governance, and public-sector capacity.

Published on Mar 4, 2026 by B Cavello and Elana Banin

Using AI to Help States Break Down Barriers to Skills-Based Hiring
AI and Labor

Artificial intelligence is often portrayed as a force that will destroy jobs. In this essay, Seth Harris argues that it can also expand opportunity. By using AI to identify and remove unnecessary college degree requirements in state hiring, governments can reduce barriers to public employment, address racial and class disparities, and fill critical roles more efficiently. Drawing on new research underway at the Burnes Center for Social Change, supported by the GitLab Foundation, he outlines how thoughtfully deployed AI can break down barriers and accelerate the shift to skills-based hiring.

Published on Mar 3, 2026 by Seth Harris

Who Will Shape AI in the Public Interest
Governing AI

The current controversy over the Pentagon’s AI contracts reveals a deeper issue: governments are shaping the AI market through procurement in the wrong ways or not at all, failing to demand that AI strengthen democracy and improve governance. As AI becomes core public infrastructure, public institutions must use their purchasing power deliberately, requiring portability, accountability, and interoperability, and prioritizing use in the public interest. This post examines the public conversation about public and democratic AI, and how governments can buy, build, and govern AI on the public’s terms.

Published on Mar 2, 2026 by Beth Simone Noveck

Key takeaways from “Regulating Algorithms: What Governments Around the World Are Doing and What Public Servants Should Know”
Global AI Watch

Building domestic AI systems or reducing reliance on big tech is not the same as making AI work for the public. Examples such as Spain’s ALIA project, the EU AI Act, and Italy’s new AI law show that ownership and regulation go only so far. What really matters is whether public institutions have the people, skills, and authority to oversee these systems once they’re in use.

Published on Feb 25, 2026 by Luca Cominassi and Beth Simone Noveck

Research Radar: Academics Are Sounding the Alarm on AI Adoption. Who's Listening?
Research Radar

A new paper urges universities to resist the uncritical adoption of AI, arguing that existing research integrity standards already prohibit much of what institutions are normalizing. Beth Simone Noveck's diagnosis is sharp: vendor dependency, opacity, and hype are reshaping public institutions from the inside. But for governments, refusal isn’t an option. The real question is not whether to adopt AI, but who controls it and on what terms.

Published on Feb 24, 2026 by Beth Simone Noveck

AI for Governance: How Institutions Can Provide AI Access Safely, Affordably, and at Scale
AI for Governance

As governments move from AI pilots to workforce-wide access, the hardest questions are practical: What does it cost at scale? Who gets access? How do you log usage and manage risk? How do you pair access with training so the technology is safe and actually useful? Drawing on lessons from New Jersey and Boston, this post outlines how to structure AI deployment using usage-based pricing, cloud infrastructure you control, and training designed to drive real impact.

Published on Feb 23, 2026 by Beth Simone Noveck

Government Strategy Needs Reimagining: An Experiment from Argentina
Global AI Watch

In Argentina, the Red de Innovación Local (RIL) experimented with its AI-powered platform, PortalRIL, to shift from fragmented work plans to an inquiry-driven planning process anchored in a “Question Tree.” By surfacing patterns, trade-offs, and synergies across teams, AI helped compress months of coordination into weeks of shared clarity. The result suggests that AI’s real promise is to expand strategic horizons and accelerate collective insight, freeing public servants to focus on judgment, identity, and long-term public value.

Published on Feb 18, 2026 by Giulio Quaggiotto

Evaluating AI Safety Through Local Policy: Findings from the UbuntuGuard benchmark
Research Radar

A new paper introducing the UbuntuGuard benchmark questions whether strong results on English-language safety tests consistently translate into responsible use. Built from policies developed by 155 African domain experts across ten languages and six countries, UbuntuGuard's framework assesses whether AI tools comply with the norms that shape services in non-Western contexts. The findings suggest that institutions, wherever they operate, need the capacity to define their own standards before using these tools to improve public-sector outcomes.

Published on Feb 17, 2026 by Elana Banin