Reboot Weekly: AI for Human Services, Civic AI Compacts with Universities, and Lessons from Iran’s AI Infrastructure

Published on March 19, 2026

Summary

This week on Reboot Democracy, Robert Asaro-Angelo explores how AI could help human services agencies reduce administrative burdens and improve benefits delivery. Dane Gambrell shares lessons from 50 global experts shaping a course on democratic engagement in the AI era. Neil Kleiman argues for “civic AI compacts” between cities and universities, while Sara Bazoobandi examines how Iran’s AI infrastructure, built for control, created strategic fragility. Beyond Reboot, Lawfare warns that U.S. military AI policy is increasingly governed through contracts rather than public law. Vietnam issued a national AI ethics framework, and the Department of Energy’s Genesis Mission aims to accelerate scientific discovery with a new supercomputer expected to be operational by the end of 2026. Officials tied to the Department of Government Efficiency used ChatGPT to help terminate over $100 million in humanities grants, and the National Guard is deploying AI tools for disaster response and operational planning.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, programs are free and open to all.

AI for Governance

Can AI Help Save Bureaucrats from Their Own Bureaucracy?

Robert Asaro-Angelo on March 16, 2026 in Reboot Democracy

Drawing on his experience leading the New Jersey Department of Labor and Workforce Development, Asaro-Angelo argues that AI could help human services agencies reduce administrative burden while improving benefits delivery. New Jersey’s early experiments included training staff on responsible AI use, deploying tools to improve language access, and using AI to help draft unemployment appeal decisions and reduce backlogs. The piece also announces a new InnovateUS series on AI and Human Services in partnership with the Center for Civic Futures.

Read article

AI for Governance

Military AI Policy by Contract: The Limits of Procurement as Governance

Jessica Tillipman on March 10, 2026 in Lawfare

A new analysis argues that U.S. military AI governance is increasingly being shaped by procurement contracts with technology vendors. The piece examines tensions between the United States Department of Defense and companies such as Anthropic and OpenAI, highlighting how disputes over model restrictions, surveillance safeguards, and “any lawful use” provisions are being negotiated through bilateral agreements rather than public policy processes. Tillipman warns that the contracts lack the democratic accountability, transparency, and durability needed to govern high-stakes technologies like AI-enabled surveillance and autonomous weapons.

Read article

AI for Governance

Vietnam Issues National AI Ethics Framework to Guide Responsible Deployment

March 16, 2026

Vietnam has issued a new National Artificial Intelligence Ethics Framework requiring developers and operators to ensure AI systems are safe, transparent, and subject to human oversight. The framework mandates safeguards against bias, cybersecurity threats, and privacy violations, along with testing, validation, and mechanisms for feedback and correction before deployment. It also promotes energy-efficient AI development and responsible innovation aligned with social welfare and cultural values, with reviews scheduled every three years.

Read article

AI Infrastructure

Built Against Its People: Iran’s AI Infrastructure of Control

Sara Bazoobandi on March 18, 2026 in Reboot Democracy

Sara Bazoobandi examines how Iran’s doctrine of “knowledge jihad” shaped the country’s digital and AI infrastructure, framing technological development as both a religious duty and a tool of state control. Over decades, this ideology drove centralized systems such as the National Information Network and AI-enabled surveillance tools used to monitor dissent and enforce social norms. While these systems expanded state power, the piece argues that highly centralized architectures can create strategic fragility, offering a cautionary lesson for democracies designing AI infrastructure around resilience, accountability, and citizen access.

Read article

AI Infrastructure

What’s Next for the Genesis Mission

Maria Curi on March 13, 2026 in Axios

The U.S. Department of Energy and Dell Technologies are accelerating the federal Genesis Mission, an initiative to use AI and advanced computing to speed up scientific discovery. Leaders say the effort aims to double the productivity of public R&D spending by applying AI to fields such as physics, chemistry, biology, and engineering. The program also includes ambitious targets, from error-resistant quantum computing by 2028 to commercially viable fusion power in the 2030s, as well as a plan to train 100,000 scientists and engineers over the next decade. A new supercomputer expected by the end of 2026 could serve as the blueprint for AI-driven scientific research across government agencies.

Read article

AI Infrastructure

Inside the Dirty, Dystopian World of AI Data Centers

Matteo Wong on March 13, 2026 in The Atlantic

This investigation examines how the rapid expansion of generative AI is reshaping energy systems, local environments, and industrial policy. New data centers, such as xAI’s Colossus facility in Memphis, can consume electricity comparable to hundreds of thousands of homes and are sometimes powered by on-site natural-gas turbines built by tech companies. The article reports that AI demand could drive the largest surge in U.S. electricity consumption in decades, extend fossil-fuel use, and revive nuclear plants such as Three Mile Island through power deals with companies like Microsoft, while raising the risk that massive AI infrastructure investments could become stranded.

Read article

AI and Public Engagement

What We Learned from 50 Experts About Designing Democratic Engagement in the AI Era

Dane Gambrell on March 17, 2026 in Reboot Democracy

Feedback from more than 50 practitioners and researchers across 24 countries helped refine the draft curriculum for Designing Democratic Engagement for the AI Era. Reviewers contributed over 300 comments, emphasizing the need for clearer guidance on institutional readiness, trust, inclusion, privacy, and the risks of using AI in public participation. The post also describes how AI tools were used to organize and synthesize the feedback, while human review ensured accuracy, and outlines the next steps in developing the course for public servants.

Read article

AI and Education

The Case for Civic AI Compacts with Higher Education

Neil Kleiman on March 17, 2026 in Reboot Democracy

As artificial intelligence reshapes local economies and public services, Kleiman argues that cities should move beyond transactional relationships with nearby universities and build intentional “civic AI compacts.” Drawing on the policy brief "The AI Lab Next Door," the piece suggests that structured partnerships between local governments and higher education institutions could support workforce development, applied research, and experimentation with AI in public services. By aligning city needs with universities' growing technical capacity, these compacts could help communities shape how AI is deployed locally.

Read article

AI and Education

Governing Artificial Intelligence in the Higher Education Sector

Aída Ponce Del Castillo on February 28, 2026 in European Trade Union Institute

As universities across Europe rapidly adopt AI tools for teaching, assessment, research, and administration, this interdisciplinary volume argues that the key challenge is governance rather than technology. Contributors examine how procurement choices, vendor dependence, intellectual property rules, and regulatory frameworks, such as the EU AI Act, are reshaping institutional authority and accountability. The analysis situates AI in higher education within broader debates over public-sector digitalization, academic autonomy, and labor rights.

Read article

AI and Education

How DOGE Gutted the NEH in 22 Days

Sara Custer on March 11, 2026 in Inside Higher Ed

Internal documents released in a lawsuit reveal how officials tied to the Department of Government Efficiency (DOGE) used ChatGPT to help review grants at the National Endowment for the Humanities, contributing to the rapid termination of more than $100 million in funding. Staff prompted the chatbot to determine whether projects related to “DEI,” with the responses used to flag grants for cancellation. The disclosures point to a rushed decision process in which DOGE staff with little humanities expertise reviewed grants and drafted termination letters with limited agency oversight. Academic associations now argue in court that the cancellations were unconstitutional and politically motivated.

Read article

AI and Public Safety

National Guard’s Chief Data & AI Officer on AI, Disaster Response and the Future of Mission Readiness

Staff on March 12, 2026 in ExecutiveGov

In an interview with ExecutiveGov, Delester Brown, Chief Data and AI Officer of the National Guard Bureau, described how AI is moving from experimentation to operational capability in disaster response and defense readiness. The Guard is prioritizing tools such as object detection, predictive analytics, and risk modeling to improve situational awareness and resource coordination during emergencies. Brown emphasized that the goal is “amplified intelligence”: using AI to enhance human decision-making while also expanding access to AI tools across the Guard’s 54 states and territories through interoperable, no-code systems.

Read article

AI and Labor

AI Seen as a Driver of Inequality by U.S. Voters

AJ Dellinger on March 17, 2026 in Gizmodo

New polling suggests Americans increasingly view AI as a threat to economic stability and fairness, with majorities favoring worker protections over incentives for tech companies. Nearly 60% support aid for workers displaced by AI, while more than half believe companies should be financially accountable for job losses. Concern is rising faster than for other major issues, reflecting broader distrust in corporate power and skepticism that AI-driven growth will benefit the public. The findings signal a growing political salience of AI as a labor and inequality issue.

Read article