
Join People Powered on September 16 for the Release of New Guidance on AI for Digital Democracy
Join People Powered on September 16 for the release of new guidance on AI for Digital Democracy, based on global case studies.
Insights on AI, Governance and Democracy
The Center for Public Sector AI has launched a new recognition initiative called The AI 50, which honors people and institutions that are playing important roles in implementing and developing artificial intelligence within government agencies.
Artificial intelligence can transform evidence-based policymaking by enabling policymakers to cast a wider net for evidence, synthesize evidence more rapidly, and engage more deeply with communities. However, this transformation also presents significant challenges, from bias and transparency concerns to the risk of over-reliance on algorithmic outputs. By understanding both the promise and the pitfalls of AI-enabled research tools, and by keeping human expertise at the center of the process, we can harness these powerful tools to serve the public interest while preserving the democratic values of transparency, accountability, and inclusive governance.
A new Code for America assessment looks at how states are adopting artificial intelligence to support the design, delivery, and evaluation of public services. While most states remain in early development stages, the three leading states distinguished themselves by building comprehensive governance frameworks, investing in workforce training, and establishing dedicated leadership structures to support the responsible and effective use of AI.
HEKA, the Highway Engineer Knowledge Agent chatbot created through the AI for Impact co-op program, is empowering design engineers in the MassDOT Highway Division to efficiently query department manuals and documentation, helping them design safer infrastructure projects more quickly for commuters in Massachusetts. It was recently highlighted by NBC 10 Boston.
New research from the UK Government shows how AI could make it easier for institutions to conduct public engagement. A new process called "Consult" combined AI with human oversight to analyze public consultation responses with 76% accuracy in seconds.
While Washington fights over who gets to say "no" to AI, it's missing the bigger question: how can we actually use these tools to fix our broken institutions? States like Ohio and New Jersey are already proving AI's transformative potential: cutting millions in bureaucratic waste, speeding up citizen services, and making government actually work for people. The real debate shouldn't be about regulation versus innovation, but about the AI we need to build, buy, and design to strengthen democracy.
This week’s Research Radar highlights The Agentic State, an ambitious whitepaper arguing that AI agents could reshape the core functions of government. It’s a timely vision for public sector transformation, worth reading, debating, and building on.
We are developing "Civic and Democratic AI," an 8-part WhatsApp course that teaches people how to use generative AI to navigate government processes, understand complex documents, and organize for community action. The course aims to provide practical AI skills for civic engagement. We are seeking feedback on the course content. Share your insights and expertise as we roll out this free program to help communities use AI to understand their government, access their rights, and organize for change.
A new white paper from The Institutional Architecture Lab argues that combating AI-generated deepfakes and synthetic content in elections requires purpose-built institutions. The authors propose Electoral Integrity Institutions that would coordinate across government, tech platforms, and civil society to scan, assess, and respond to synthetic content threats. But the paper also provokes a fundamental question: should we design institutions defensively to react to AI threats, or offensively to build better, more participatory and representative elections?
InnovateUS is excited to announce "Responsible AI for Public Sector Legal Professionals," two free courses that equip public sector lawyers and legal support staff to use AI tools safely and responsibly and to implement AI systems that improve the efficiency and effectiveness of their work while safeguarding sensitive information. Co-created with senior legal and technical leaders from state agencies, the curriculum is designed for government attorneys, legal support staff, policymakers, and compliance officers seeking to harness AI's potential while upholding professional and ethical responsibilities.
The Trump administration’s January 20 executive order rechristening the US Digital Service as the Department of Government Efficiency (DOGE) has effectively hijacked the civic tech movement. While the US Digital Service focused on life-saving and government improvement functions, DOGE has used AI and other advanced technologies to burrow deep into administrative datasets and monopolize control. It’s time to flip the script (again) and break the government’s stranglehold on information. Rather than centralize power, let’s use AI to distribute it.
In our AI revolution, we face a pivotal choice between using these unprecedented cognitive tools to amplify our worst tendencies or solve humanity's greatest challenges. As New Jersey's Chief AI Strategist, I've witnessed firsthand how AI can transform public services, but becoming true "public entrepreneurs" requires more than technology—it demands purpose, partnership, problem definition, and participation to create meaningful change in an increasingly fractured world. Read my effort to offer hope to honors graduates of Kean University facing the collapse of dignity, decency, and due process.
House Republicans have introduced a provision in the Budget Reconciliation bill that would prevent states from regulating artificial intelligence systems for a decade. This move represents a striking departure from traditional Republican advocacy for states' rights, as the party now seeks to impose federal preemption over state-level AI safety and accountability measures. Even if it doesn't survive markup, the intent is clear: technological accelerationism above all else.
ETH Zurich researchers introduce "Value-Sensitive Citizen Science," a systematic framework combining design principles with citizen science to foster meaningful public participation in AI development. The paper provides a structured approach to embedding community values directly into technical systems, which is critical as AI increasingly shapes societal outcomes.