Global AI Watch: AI, Food Security, and the Case for Institutional Reform
Global AI Watch

In this interview, B Cavello discusses findings from an Aspen Institute global survey and reflects on what it reveals about the intersection of AI and food systems. Practitioners consistently emphasized institutional capacity, governance, and distributional challenges as central constraints. The conversation explores how AI might support greater transparency, participation, and resilience, and what these insights mean for U.S. state and local policymakers working on food security, land governance, and public-sector capacity.

Published on Mar 4, 2026 by B Cavello and Elana Banin

Using AI to Help States Break Down Barriers to Skills-Based Hiring
AI and Labor

Artificial intelligence is often portrayed as a force that will destroy jobs. In this essay, Seth Harris argues that it can also expand opportunity. By using AI to identify and remove unnecessary college degree requirements in state hiring, governments can reduce barriers to public employment, address racial and class disparities, and fill critical roles more efficiently. Backed by new research underway from the Burnes Center for Social Change, supported by the GitLab Foundation, he outlines how thoughtfully deployed AI can break down barriers and accelerate the shift to skills-based hiring.

Published on Mar 3, 2026 by Seth Harris

Who Will Shape AI in the Public Interest
Governing AI

The current controversy over the Pentagon’s AI contracts reveals a deeper issue: governments are shaping the AI market through procurement in the wrong ways or not at all, failing to demand that AI strengthen democracy and improve governance. As AI becomes core public infrastructure, public institutions must use their purchasing power deliberately, requiring portability, accountability, and interoperability and prioritizing use in the public interest. This post examines the public conversation underway about public and democratic AI and how governments can buy, build, and govern AI on the public’s terms.

Published on Mar 2, 2026 by Beth Simone Noveck

Key takeaways from “Regulating Algorithms: What Governments Around the World Are Doing and What Public Servants Should Know”
Global AI Watch

Building domestic AI systems or reducing reliance on big tech is not the same as making AI work for the public. Examples such as Spain’s ALIA project, the EU AI Act, and Italy’s new AI law show that ownership and regulation go only so far. What really matters is whether public institutions have the people, skills, and authority to oversee these systems once they’re in use.

Published on Feb 25, 2026 by Luca Cominassi and Beth Simone Noveck

Research Radar: Academics Are Sounding the Alarm on AI Adoption. Who's Listening?
Research Radar

A new paper urges universities to resist the uncritical adoption of AI, arguing that existing research integrity standards already prohibit much of what institutions are normalizing. Beth Simone Noveck's diagnosis is sharp: vendor dependency, opacity, and hype are reshaping public institutions from the inside. But for governments, refusal isn’t an option. The real question is not whether to adopt AI, but who controls it and on what terms.

Published on Feb 24, 2026 by Beth Simone Noveck

AI for Governance: How Institutions Can Provide AI Access Safely, Affordably, and at Scale
AI for Governance

As governments move from AI pilots to workforce-wide access, the hardest questions are practical: What does it cost at scale? Who gets access? How do you log usage and manage risk? How do you pair access with training so it’s safe — and actually useful? Drawing on lessons from New Jersey and Boston, this post outlines how to structure AI deployment using usage-based pricing, cloud infrastructure you control, and training designed to drive real impact.

Published on Feb 23, 2026 by Beth Simone Noveck

Government Strategy Needs Reimagining: An Experiment from Argentina
Global AI Watch

In Argentina, the Red de Innovación Local (RIL) experimented with its AI-powered platform, PortalRIL, to shift from fragmented work plans to an inquiry-driven planning process anchored in a “Question Tree.” By surfacing patterns, trade-offs, and synergies across teams, AI helped compress months of coordination into weeks of shared clarity. The result suggests that AI’s real promise is to expand strategic horizons and accelerate collective insight, freeing public servants to focus on judgment, identity, and long-term public value.

Published on Feb 18, 2026 by Giulio Quaggiotto

Evaluating AI Safety Through Local Policy: Findings from the UbuntuGuard benchmark
Research Radar

A new paper introducing the UbuntuGuard benchmark questions whether strong results on English-language safety tests consistently translate into responsible use. Built from policies developed by 155 African domain experts across ten languages and six countries, UbuntuGuard's framework assesses whether AI tools comply with the norms that shape services in non-Western contexts. The findings suggest that institutions, wherever they operate, need the capacity to define their own standards before using these tools to improve public-sector outcomes.

Published on Feb 17, 2026 by Elana Banin

Minding the brand: Leveraging AI to build a culture of recognition in government
AI for Governance

Max Stier argues that rebuilding trust in government begins inside agencies by recognizing and elevating the everyday excellence of career civil servants. Drawing on the Partnership for Public Service and federal workforce data, he argues that internal culture, not just external messaging, shapes public confidence. The post explains how AI can cut paperwork, reduce bias, recognize good work in real time, and tailor praise, helping leaders boost morale, reward strong performance, and better serve the public.

Published on Feb 16, 2026 by Max Stier

Global AI Watch: Korean Public Funds for Global AI Advancements
Global AI Watch

As Korea aims to become a top-three global AI power, the government has staked its strategy on “sovereign AI” as both an economic and national security priority. In this reflection, originally published in the Herald Insight Collection, Merve Hickok examines Korea’s multibillion-dollar investment in a national foundation model amid U.S.–China competition and the rise of open-source AI, asking whether alternative investments — such as small language models, compute and energy efficiency, and AI governance and evaluation — might better secure Korea’s long-term autonomy and global leadership.

Published on Feb 11, 2026 by Merve Hickok

How We Co-Designed an AI-Powered Tool for IEPs with Families in San Francisco
Research Radar

As the AIEP project concludes its first pilot in San Francisco, it offers more than a new AI tool for navigating IEPs. It shows what becomes possible when families, educators, designers, and researchers co-design technology from the ground up. Through a free, open-source tool, a civic AI learning course, a community-centered playbook, and academic research, this work demonstrates a practical model for public-purpose AI rooted in lived experience, shared learning, and accountability. What began as support for parents has grown into a blueprint for building AI with communities, not just for them.

Published on Feb 10, 2026 by Sofía Bosch Gómez, Joanna French and Belén Farmer Martinez

Doing Democracy with AI: Designing Public Engagement for the AI Era
AI for Governance

Leaders increasingly believe public engagement matters, but lack the practical know-how to do it well. Beth Simone Noveck and Dane Gambrell examine how institutions use AI and collective intelligence to engage the public at scale. Across countries and levels of government, engagement can move from performative to consequential when institutions build the capacity to design it well. That work now comes together in a new free course, Designing Democratic Engagement for the AI Era, created by InnovateUS, The GovLab, and the Allen Lab for Democracy Renovation, to help public professionals turn these lessons into practice.

Published on Feb 9, 2026 by Beth Simone Noveck and Dane Gambrell

Wicked Decluttering

What if the problem with government wasn’t too many rules, but how they’re organized? Boston’s permitting overhaul with AI for Impact shows how AI and collective intelligence can simplify the user experience without eroding the safeguards that matter.

Published on Feb 4, 2026 by Beth Simone Noveck

Research Radar: “Unboxing the Prompt”: How Community Feedback (and AI) Helped Us Build Better AI Together
Research Radar

Families are expected to advocate for their children using IEP documents that are dense, technical, and often inaccessible. Instead of treating AI as a black box that produces generic summaries, this project takes a different approach: “unboxing the prompt” and inviting parents into the system’s core logic. This post traces how community feedback reshaped the tool at every stage — moving beyond one-size-fits-all summaries, extracting legally meaningful details, designing for privacy, preserving meaning across languages, and foregrounding student strengths.

Published on Feb 3, 2026 by Dhruv Kamlesh Kumar

Experimentation as Public Infrastructure
AI for Governance

Governments are adopting powerful new technologies faster than their systems are built to learn. This piece by Cassandra Madison at the Center for Civic Futures argues that responsible innovation requires more than access to tools. It requires safe, structured spaces for experimentation. By treating experimentation as public infrastructure, governments can learn early, surface risks before they scale, and make better decisions when the stakes are highest.

Published on Feb 2, 2026 by Cassandra Madison