Reboot Weekly: From Data Governance to the Genesis Mission—AI’s Democratic Tests

Published on December 4, 2025

Summary

This week, our new Research Radar warns that the White House’s Genesis Mission to merge federal AI, data, and supercomputing power could sideline universities, communities, and public oversight. In a companion piece, Beth Noveck and Dane Gambrell argue that meaningful algorithmic accountability must go beyond individual rights to include collective oversight through public registers and system-level audits. A new analysis by Stefaan Verhulst and Friederike Schüür highlights why most AI governance still neglects the upstream data layer, where stewardship and protections are weakest. In Spain, Political Watch’s QHLD project shows how transforming parliamentary activity into usable data can expose political blind spots and reconnect citizens with their representatives. Beyond Reboot, we track Congress’s fight over state AI laws, the geopolitics of chips, emerging super PAC influence, Australia’s National AI Plan, and the broader struggle to govern AI as deployment accelerates.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, the programs are free and open to all.

AI for Governance

QHLD: Making Spain’s Parliament Understandable with AI

Celia Zafra & Pablo Martín on November 26, 2025 in Reboot Democracy blog

Political Watch’s QHLD platform turns thousands of Spanish parliamentary initiatives into structured, searchable data, revealing gaps between political attention and public need, such as 198 initiatives on squatting vs. 54 on social housing. The project shows how usable data can strengthen accountability, but it also exposes the barriers civic-tech teams face: limited funding, slow adoption, and the high cost of integrating responsible AI.

Read article

AI for Governance

Accountable Algorithms: Blending Rights & Collective Oversight

Beth Simone Noveck & Dane Gambrell on December 1, 2025 in Reboot Democracy blog

Predictive-policing failures in New Jersey and mass misflags in the UK’s welfare system show why AI oversight can’t fall solely on individuals. This piece argues for pairing notice-and-appeal rights with collective tools, such as public algorithm registers, organizational standing, and proactive audits. Global models like Spain’s BOSCO ruling and registries in Canada, Chile, and New York demonstrate how shared scrutiny enables governments, journalists, and civil society to keep automated systems fair, accurate, and accountable.

Read article

AI for Governance

Republicans Split as Trump Pushes to Block State AI Laws in the NDAA

Julia Shapero & Sudiksha Kochi on December 2, 2025 in The Hill

A fierce Republican fight over whether to ban state AI regulations is now threatening passage of the must-pass NDAA. The White House is pressuring GOP leaders to preempt state AI laws, pitting Trump and tech-aligned lawmakers against states’-rights conservatives and multiple GOP governors. More than 200 state legislators and key Senate Democrats also oppose the ban, warning it would give Big Tech a regulatory holiday. A second flashpoint—the GAIN AI Act, restricting AI chip sales to China—has further exposed deep divisions within the party.

Read article

AI for Governance

National AI Plan

Department of Industry, Science and Resources on December 2, 2025 in Australian Government

The Australian Government has published its 2025 National AI Plan, outlining a whole-of-economy strategy to grow a competitive domestic AI industry while ensuring that “every person benefits from this technological change.” The plan sets three national goals: capturing the opportunity by investing in smart infrastructure and sovereign AI capability; spreading the benefits through broad workforce training, regional inclusion, and improved public services; and keeping Australians safe with updated regulation, an AI Safety Institute, and protections for rights, privacy, and workplace fairness. The plan frames AI as central to the government’s Future Made in Australia agenda.

Read article

Governing AI

Research Radar: The White House Wants a Scientific Genesis. It May Trigger a Democratic Exodus

Beth Simone Noveck on December 2, 2025 in Reboot Democracy blog

The administration’s new Genesis Mission aims to fuse federal supercomputers, datasets, and AI systems into a single “closed-loop” platform for scientific discovery. But as Beth Noveck argues, the plan centralizes unprecedented research power while offering almost no role for universities, communities, or public oversight. With vague access rules, heavy corporate influence, and priorities tilted toward geopolitics over public need, Genesis raises sharp questions about transparency, accountability, and who will shape the nation’s scientific future.

Read article

Governing AI

Fears About A.I. Prompt Talks of New Super PACs to Rein In Industry Power

Theodore Schleifer on November 25, 2025 in New York Times

Growing fears about artificial intelligence are prompting donors and advocates to discuss forming new super PACs aimed at countering the AI industry’s political spending and pressing for stronger guardrails on the technology.

Read article

Governing AI

‘Fear and excitement’ are driving AI discourse in government

Colin Wood on December 3, 2025 in StateScoop’s Priorities Podcast

After two years of research on how U.S. states are adopting and governing AI, New America has released a sweeping report that its authors say offers “remarkable” findings. Neil Kleiman notes that across parties and regions, officials are either energized or anxious about AI—but unified in believing they must act. Lilian Coral says she was struck by the sophistication of state-level conversations, describing a landscape where fear, excitement, and pragmatic problem-solving are emerging in equal measure.

Read article

AI and Problem Solving

Solving Public Problems with Artificial Intelligence

Beth Simone Noveck on November 25, 2025 in Reboot Democracy

Beth Simone Noveck is redesigning her global Solving Public Problems course for the AI era, exploring how tools like rapid evidence review, pattern detection, and large-scale summarization can help learners define problems, test hypotheses, and listen to communities more effectively—without losing the human connection at the heart of civic problem-solving. She’s inviting public input as the course evolves.

Read article

AI and Labor

Big Tech Wants AI Without Rules. Workers Are Fighting Back

Dane Gambrell on November 24, 2025 in Reboot Democracy blog

As AI accelerates surveillance, algorithmic management, and union-busting tactics, workers and unions are pushing back. This Reboot analysis shows how collective bargaining is becoming the strongest guardrail in the absence of federal protections—setting limits on AI-driven monitoring, automation, and data use—while unions mobilize for worker-centered policy and experiment with AI tools to strengthen organizing.

Read article

AI Infrastructure

Toward AI Governance That Works: Examining the Building Blocks of AI and the Impacts

Dr. Stefaan Verhulst and Dr. Friederike Schüür on December 3, 2025 in Reboot Democracy

As governments rush to regulate AI, most frameworks still focus on model outputs, leaving the upstream data layer largely ungoverned. This piece argues that without strong data governance—covering how data is collected, shared, protected, and stewarded—AI systems cannot be safe, rights-respecting, or equitable. To deliver real public value and fair outcomes, countries must pair output rules with robust input governance, uniting both into a single, coherent framework for AI oversight.

Read article

AI and Public Engagement

AI in the Street

Dominique Barron, Noortje Marres, Rachel Coldicutt, Alex Taylor & Maya Indira Ganesh on November 28, 2025 in Careful Industries

This new report distills lessons from “AI in the Street,” a pilot that placed everyday observatories of AI in four UK cities and one site in Australia. Researchers find a persistent gap between government narratives about AI’s societal benefits and what local communities actually experience, with residents often feeling they are not the intended beneficiaries of AI innovation. The report outlines mismatches in purpose, beneficiaries, and need, warns that poor communication and limited engagement risk deepening distrust, and offers recommendations for participatory, place-based AI governance.

Read article

AI and International Relations

The Long Reach of Trump’s Nvidia AI Diplomacy

Anthony Halpin on November 20, 2025 in Bloomberg

The US has approved Nvidia chip exports for a new Armenian supercomputer project, tying the world’s most valuable AI company to one of its smallest economies and extending Washington’s tech-driven diplomacy. The move is part of the Trump administration’s broader strategy to leverage AI semiconductors to deepen geopolitical influence, with parallel approvals for advanced chip sales to the UAE and Saudi Arabia. The Armenia deal strengthens US interests in the strategically vital Caucasus, following Trump’s recent peace declaration and exclusive rights over a new transit corridor meant to bypass Russia and anchor US power in a region long shaped by Moscow.

Read article