Reboot Weekly: Measuring Workforce Capacity, Governing Prediction, and Mapping Public Systems

Published on January 29, 2026

Summary

InnovateUS launches the Observatory of Public Sector AI to examine the impact of AI training and use. Stephan Schmidt draws on a workshop with Princeton's Arvind Narayanan and State of Hawaii Chief Data Officer Rebecca Cai to dig into the pros and cons of predictive AI use in government. Anita McGahan draws on insights from Christian Bason to explore how AI can help public institutions move beyond rigid hierarchies toward more humane, trust-based ways of working; and Aziza Umarova shows how Uzbekistan and Bhutan are mapping schools with AI and crowdsourcing. States continue expanding public-sector AI laws despite looming federal preemption battles. South Korea advances a comprehensive AI regulatory framework, and a new report traces how data-center expansion is shifting environmental, labor, and community-level governance decisions.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, programs are free and open to all.

Systems for Success: Rewiring Agencies to Deliver at Speed and Scale – Stephanie Pollack, former Secretary and CEO of Massachusetts DOT, January 29, 2:00 PM ET

Prompting Lab Office Hours: Bring Your AI Question – Dayvd Smith, IT Director, Colorado Governor’s Office of Information Technology, January 30, 2:00 PM ET

From Expertise to Impact: A Practical Guide to Informing and Influencing Policy – Deborah Stine, Science and Technology Policy Academy, February 2, 2:00 PM ET

Explaining Public Service: Strategies for Clear, Credible Communication – Max Stier, Partnership for Public Service, with Jill Abramson, February 3, 2:00 PM ET

AI for Governance

Launching the Observatory of Public Sector AI: An Invitation to Build the Evidence Base Together

Beth Simone Noveck, Anirudh Dinesh, Gregory Porumbescu, Allison Wan, and Amanda Welsh on January 26, 2026 in Reboot Democracy

This piece announces the launch of the Observatory of Public Sector AI, a new research initiative drawing on anonymized data from more than 150,000 public servants nationwide. By tracking how public employees learn, use, and adapt AI at work before and after InnovateUS training, the Observatory aims to identify which investments in skills and organizational support actually strengthen government capacity and improve service delivery. Framed as shared research infrastructure, the project invites collaboration to build an empirical evidence base on what makes AI adoption effective, ethical, and durable in the public sector.

Read article

AI for Governance

Prediction Isn’t Intelligence: How Predictive Models Really Work in Government

Stephan Schmidt on January 27, 2026 in Reboot Democracy

Drawing on an InnovateUS workshop with Arvind Narayanan and Rebecca Cai, this piece explains why predictive AI poses distinct risks in government contexts. Schmidt shows how models that appear accurate can quietly encode shortcuts, feedback loops, and system artifacts, turning probabilities into decisions without accountability. Through examples from hiring, healthcare, and border control, the article argues that the central challenge isn’t model performance but governance: how predictive systems are evaluated, integrated into workflows, and overseen by humans. The takeaway is that prediction requires institutional guardrails, pilots, and domain expertise, not blind trust in benchmarks or vendors.

Read article

AI for Governance

Reimagining Public Institutions: Rethinking Leadership for Organizational Transformation

Anita McGahan on January 27, 2026 in Reboot Democracy

Drawing on insights from an InnovateUS workshop featuring Christian Bason, this piece argues that many public institutions are constrained by inherited hierarchies that no longer serve workers or the public. McGahan explores how, by reducing the cost of coordination and communication, AI can enable organizational models that empower frontline staff while maintaining accountability. The vision centers on redesigning public institutions to listen better, support human judgment, and deliver greater public value by aligning technology with compassion, mission, and organizational learning.

Read article

AI for Governance

Department of Energy Seeks Input on Advancing AI for Science and Engineering Workforce Development

Staff on January 16, 2026 in U.S. Department of Energy

The Department of Energy has issued a Request for Information tied to its new Genesis Mission, a government-led effort to use AI, high-performance computing, and quantum technologies to transform how the United States conducts science and engineering. Beyond technical challenges, DOE is explicitly seeking input on how to build state capacity, calling for the training of 100,000 scientists and engineers over the next decade with dual expertise in AI and domain science. The RFI positions workforce development, cross-sector partnerships, and national laboratories as core components of the governance infrastructure for AI-enabled science.

Read article

Governing AI

The Mirage of AI Deregulation

Alondra Nelson on January 15, 2026 in Science

Challenging claims that the Trump administration is “deregulating” AI, Nelson argues that U.S. AI governance has instead shifted toward a more concentrated and less transparent form of state power. Through executive action, industrial policy, equity stakes, immigration controls, research funding decisions, and federal preemption of state laws, the administration is reshaping AI development outside traditional rulemaking channels. The result is a form of hyper-regulation by other means, one that weakens democratic accountability while entrenching executive discretion.

Read article

Governing AI

States Expanded Laws Governing Public Sector AI Use During the 2025 Legislative Session

Quinn Anex-Ries on January 15, 2026 in Center for Democracy & Technology

During the 2025 legislative session, lawmakers in 20 states introduced 50 bills explicitly regulating how government agencies use AI, of which 15 were enacted into law. The total number of states with public sector AI statutes now stands at 19, up from 16 the year prior. States such as Kentucky, Texas, and Montana adopted comprehensive frameworks requiring agencies to disclose AI use, establish inventories, and assign centralized oversight. New York City passed a package of bills regulating AI use across city agencies, while states including Maine, New York, and Texas moved to formalize AI governance through dedicated offices or chief AI officers.

Read article

Governing AI

South Korea Launches Landmark Laws to Regulate Artificial Intelligence

Shim Kyu-seok on January 22, 2026 in Japan Times

South Korea has enacted what it calls the world’s first comprehensive AI regulatory framework, with the new AI Basic Act taking effect ahead of the EU’s phased rollout of its own rules. The law aims to strengthen trust, safety, and accountability in AI systems while positioning South Korea as a global regulatory leader. While officials frame the move as pro-innovation governance, startups warn that compliance costs could slow growth, highlighting widening global divergence between Europe’s rules-first approach, the U.S.’s lighter touch, and China’s more centralized model.

Read article

AI and Public Engagement

How the World Lives with AI: Findings from a Year of Global Dialogues

Staff on January 21, 2026 in Collective Intelligence Project

Based on seven rounds of deliberation with more than 6,000 people across 70 countries, the 2025 Global Dialogues Index Report offers one of the most detailed global snapshots to date of how people actually experience AI. The findings surface striking gaps between trust in AI tools and distrust in the companies that build them, widespread use of AI for emotional support, growing belief reinforcement through chatbot interactions, and deep anxiety about labor impacts, pointing to governance challenges that operate at relational and systemic levels.

Read article

AI and Problem Solving

UbuntuGuard: A Culturally-Grounded Policy Benchmark for Equitable AI Safety in African Languages

Tassallah Abdullahi, Macton Mgonzo, Mardiyyah Oduwole, Paul Okewunmi, Abraham Owodunni, Ritambhara Singh, and Carsten Eickhoff on January 19, 2026 in arXiv

This paper introduces UbuntuGuard, the first policy-based AI safety benchmark designed specifically for African languages and sociocultural contexts. Built from adversarial queries authored by 155 domain experts across sensitive fields, the benchmark exposes how English-centric safety evaluations systematically overestimate multilingual safety. Testing 13 general-purpose and guardian models, the authors find that cross-lingual transfer offers only partial protection and that even dynamic, policy-aware models struggle to localize African-language harms. The work argues that culturally grounded, multilingual benchmarks are needed to protect low-resource language communities.

Read article

AI and Labor

Manager Support Is the Missing Link in Workplace AI Adoption

Andy Kemp on January 28, 2026 in Gallup

New Gallup data show that while experimentation with AI is widespread, sustained use at work remains uneven. As of late 2025, 26% of U.S. employees use AI frequently, but manager support is the decisive factor. Employees whose managers actively encourage AI use are twice as likely to adopt it regularly and nearly nine times as likely to say it helps them do their best work. The findings point to a persistent gap between organizational AI investment and day-to-day value creation, underscoring that leadership, communication, and enablement drive meaningful adoption.

Read article

AI Infrastructure

AI Data Centers Are the New Environmental Burden Black Communities Didn’t Ask For

Danielle A. Davis on January 16, 2026 in Essence

As AI data centers expand into residential areas, Black communities across the U.S. are pushing back against rising energy and water demands, limited local economic benefits, and exclusion from land-use decisions. Davis situates today’s backlash within a longer history of environmental racism, arguing that AI must be understood as physical infrastructure. While Microsoft has pledged a “community-first” approach to data-center development, the piece questions whether voluntary commitments can deliver real accountability without enforceable standards, transparency, and genuine community power.

Read article

AI and Education

Mapping the School, Seeing the System: How Spatial Context Reshaped Public Decision-Making in Uzbekistan and Bhutan

Aziza Umarova on January 28, 2026 in Reboot Democracy

Mapping schools in Uzbekistan and Bhutan revealed that education data that looked complete on paper often missed what mattered most on the ground: distance, terrain, accessibility, and basic conditions that shape whether children can actually reach classrooms. By combining spatial data, participatory collection, and AI analysis through an open-source geoportal, governments reframed education planning around access, quality, and dignity. The work shows how AI-enabled, place-based data can improve policy, investment decisions, and accountability by aligning systems with how local infrastructure is experienced in real life.

Read article