Reboot Weekly: Governing the Agentic Web, Deliberative Technology for Congress, and South Australia’s Sovereign AI Strategy

Published on March 12, 2026

Summary

This week on Reboot Democracy, Alberto Rodriguez Alvarez explores with Santi Garces how the City of Boston is experimenting with the Model Context Protocol to safely connect AI agents to government systems. Elana Banin speaks with Lorelei Kelly about new research on how deliberative technology could revive First Amendment rights by rebuilding how civic input reaches Congress. Matt Ryan argues that developing “sovereign AI capability” in South Australia will require participatory governance, stronger public-sector skills, and reinvesting efficiency gains into public services. Beyond Reboot, Vermont’s new law requires disclosure of AI-generated campaign media. Gallup data show that 43% of public-sector employees now use AI at work. A White House meeting convened tech companies that pledged to fund power infrastructure for energy-hungry AI data centers. And in New York, court cases in which AI chatbots generated fake legal citations have led judges to question or dismiss filings.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, programs are free and open to all.

AI for Governance

Building an “Agentic Middleware” for City Government: Boston’s Experiment with Model Context Protocol

Alberto Rodriguez Alvarez on March 9, 2026 in Reboot Democracy and Fast Company

Boston is experimenting with the Model Context Protocol (MCP) as a governance layer that mediates how AI systems interact with government infrastructure. In an interview co-published with Fast Company, CIO Santiago Garces explains how the city is starting with its open data portal to ensure AI tools can query reliable, real-time government data rather than outdated web sources. By creating a controlled “middleware” layer between AI agents and public systems, Boston aims to improve security, reliability, and accessibility as automated agents increasingly interact with government services.
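The “middleware” idea described above can be illustrated with a minimal sketch: a registry of approved tools that an AI agent may invoke, where every request is checked against the registry and logged before it reaches a government system. This is a hypothetical illustration of the pattern, not Boston’s implementation; the tool name `open_data_search` and its stubbed response are invented for the example.

```python
# Hypothetical "agentic middleware": AI agents can only call tools that
# the city has explicitly registered, and every call is audited first.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Middleware:
    tools: dict[str, Callable[[dict], dict]] = field(default_factory=dict)
    audit_log: list[tuple[str, dict]] = field(default_factory=list)

    def register(self, name: str, fn: Callable[[dict], dict]) -> None:
        """Approve a tool for agent use."""
        self.tools[name] = fn

    def call(self, name: str, args: dict) -> dict:
        """Validate, audit, then execute an agent's tool request."""
        if name not in self.tools:
            raise PermissionError(f"tool not approved: {name}")
        self.audit_log.append((name, args))  # record before execution
        return self.tools[name](args)

mw = Middleware()
# An approved tool backed by the city's open data portal (stubbed here).
mw.register("open_data_search", lambda a: {"dataset": a["q"], "rows": []})

print(mw.call("open_data_search", {"q": "building-permits"}))
```

In a real deployment the registry would sit behind an MCP server, so agents query live open-data endpoints through a controlled interface rather than scraping stale web pages.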

Read article

AI for Governance

South Australia Needs Its Own Sovereign AI Capability

Matt Ryan on March 11, 2026 in Reboot Democracy Blog, republished from InDailySA

Originally published in InDaily South Australia and republished on the Reboot Democracy Blog, this commentary argues that governments can use artificial intelligence to improve public services only if deployments build public trust and democratic legitimacy. Drawing on examples from Spain, San Francisco, and the UK, Ryan proposes developing “sovereign AI capability” through participatory governance, stronger AI skills across the public sector, and reinvesting efficiency gains into people-focused services.

Read article

Governing AI

The Paradoxes of the European Union’s AI Regulation

Nicoletta Rangone on March 10, 2026 in The Regulatory Review

The European Union’s AI Act aims to balance innovation with strong protections for fundamental rights, positioning the EU as a global leader in “rights-driven” AI governance. Rangone argues, however, that the EU’s heavy reliance on regulation rather than on investment in compute, talent, and data infrastructure may undermine competitiveness. The Act’s complex, risk-based framework and lengthy legislative processes also risk regulatory lag as AI technologies evolve. Ultimately, the piece suggests that the success of EU AI governance will depend less on regulatory ambition and more on effective implementation across member states.

Read article

AI and Elections

Gov. Scott Signs Bill Regulating AI in Election Campaign Media

News Team on March 5, 2026 in WCAX

Vermont Gov. Phil Scott signed legislation requiring disclosure of AI-generated images, audio, or video used in campaign media within 90 days of an election. The law targets deceptive synthetic media and deepfakes that could mislead voters, mandating clear disclosures visible to viewers or listeners. Candidates whose likeness is misused can seek legal relief, and violations may carry fines of up to $15,000 for repeat offenses intended to cause harm. The measure reflects growing state-level efforts to curb AI-driven misinformation in elections.

Read article

AI Infrastructure

How AI is Quietly Becoming a Supply Chain Problem

Dr. Melina Beykou on March 4, 2026 in Royal United Services Institute

As AI systems become embedded in critical infrastructure and defense, their complex supply chains are emerging as a major security risk. Drawing on lessons from the 2025 “Shai-Hulud” software supply-chain attack, which affected up to 25,000 projects, Beykou argues that AI development relies on a fragile web of chips, cloud infrastructure, open-source tools, and shared models. With millions of models hosted on platforms like Hugging Face and growing experimentation with agentic systems, upstream vulnerabilities can propagate widely with little visibility. The piece calls for stronger transparency and governance to secure AI supply chains as adoption accelerates.

Read article

AI Infrastructure

Trump Announces A.I. Industry Pledge to Pay for Power

David McCabe and Brad Plumer on March 4, 2026 in The New York Times

At a White House meeting, major tech companies including Google, Microsoft, and OpenAI pledged to cover the cost of the electricity and infrastructure needed to power their rapidly expanding AI data centers. The “ratepayer protection pledge” commits companies to finance power plants, grid upgrades, and negotiated utility rates so that rising energy demand from AI does not raise consumer electricity prices. The move comes as data centers—sometimes consuming as much power as a small city—become central to U.S. efforts to lead the global AI race while addressing growing political concerns about energy costs and local impacts.

Read article

AI and Public Engagement

Assembly Required: A Conversation with Lorelei Kelly on Deliberative Technology and Congressional Reform

Elana Banin on March 10, 2026 in Reboot Democracy Blog

In this interview, Lorelei Kelly spotlights new research arguing that strengthening democratic resilience requires redesigning the institutional infrastructure connecting citizens to Congress. Drawing on constitutional history, she highlights how the First Amendment rights of assembly and petition once operated as structured workflows that fed civic voice directly into lawmaking. Kelly suggests that emerging technologies, including AI, could help institutions process large volumes of public input, turning participation into usable insight. Rebuilding these deliberative systems, she argues, is essential to restoring meaningful connections between communities and representative government.

Read article

AI and Public Engagement

Taking IT to the Streets: Announcing the Community Engagement Handbook for AI

Meg Young, Elizabeth Buehler, Leila Doty, and Ryan Kurtzman on February 25, 2026 in GovAI Coalition

As government agencies increasingly deploy AI for services such as benefits eligibility, fraud detection, and road safety, the GovAI Coalition has released a new handbook to help public officials engage residents in decisions about AI use. The guide offers practical tools, including stakeholder mapping, session agendas, and outreach templates, to help agencies run community engagement processes around AI adoption. The authors argue that even modest engagement efforts can surface local insights, shape system design before procurement, and strengthen public trust in government technology.

Read article

AI and Public Engagement

A People-Centered Justice Approach to Implementing AI Governance

Nate Edwards and Stacey Cram on March 9, 2026 in Center on International Cooperation, New York University

NYU’s Center on International Cooperation argues that AI governance debates focus on technical standards and catastrophic risks but overlook how rules will be enforced. Courts, regulators, legal aid providers, and community justice workers will ultimately handle disputes over AI decisions, yet they are rarely included in governance design. The report calls for “people-centered justice” in AI governance, including human review of high-stakes automated decisions, community-informed impact assessments before governments procure AI systems, independent oversight bodies able to pause harmful systems, and accessible complaint and appeal processes when algorithms cause harm.

Read article

AI and Labor

Senators Call on Agencies to Capture AI’s Workforce Impact

Alexandra Kelley on March 6, 2026 in Nextgov/FCW

A bipartisan group of nine U.S. senators is urging federal statistical agencies to update national surveys to better measure how artificial intelligence is affecting jobs and workplace culture. In a letter to the Department of Labor, the Bureau of Labor Statistics, and the Census Bureau, lawmakers proposed adding AI-related questions to major datasets such as the Current Population Survey and the Job Openings and Labor Turnover Survey. The effort aims to provide policymakers with clearer data on AI-driven job disruption and workforce changes as adoption accelerates.

Read article

News that caught our eye

AI Adoption Rapidly Growing in Public Sector

Christos Makridis on March 11, 2026 in Gallup

New data from Gallup shows that AI adoption in government workplaces is accelerating rapidly, with 43% of public-sector employees reporting some use of AI in 2025, up from 17% in 2023. Government workers now match or exceed businesses in overall exposure to AI tools. Employees in organizations where managers encourage experimentation are far more likely to use AI regularly, suggesting that leadership practices, rather than access to technology, will determine whether AI becomes embedded in everyday government work.

Read article

AI and Law

AI v. Nicki Minaj: How Chatbots Are Colliding With NY’s Court System

Joe Hong on March 10, 2026 in Gothamist

A growing number of New York court cases are revealing how AI chatbots are reshaping legal filings, sometimes with costly consequences. In one case, a housing activist’s lawsuit was dismissed after an AI tool generated fabricated legal citations. In another, a self-represented plaintiff suing Nicki Minaj relied on AI tools to help prepare filings, prompting judicial scrutiny when some quotations proved inaccurate. Judges and legal experts are now debating how courts should regulate AI-assisted filings while balancing risks, such as hallucinated case law, with AI’s potential to expand access to justice for people who cannot afford lawyers.

Read article