Reboot Weekly: Building a Culture of Recognition, AI Safety by Local Rule, and Argentina’s Strategy Reset

Published on February 19, 2026

Summary

This week on Reboot Democracy, Max Stier explores how AI can strengthen internal government culture by building real-time recognition systems for civil servants. Elana Banin examines the UbuntuGuard benchmark, arguing that AI safety must be tested against locally defined public-sector rules rather than inferred from English-language standards. Giulio Quaggiotto reflects on Argentina’s AI-supported “Questions Tree” experiment to rethink how governments build strategy. Beyond Reboot, Colombia’s AI-presented candidate Gaitana enters an Indigenous election, the Pentagon pressures Anthropic to loosen military guardrails, global experts release the 2026 International AI Safety Report, governments expand AI-assisted legislative drafting, and India convenes the first major Global South AI summit.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public-sector professionals, the programs are free and open to all.

AI and Elections

This Is Gaitana, the AI-Created Candidate Who Will Compete in Colombia’s Indigenous Elections

Fernanda González on February 13, 2026 in WIRED en Español

Colombia’s March 8 Indigenous special-district elections will feature “Gaitana,” an AI-presented candidate advocating digital sovereignty and participatory democracy. Although Gaitana is marketed as an AI congresswoman who works 24/7 without a salary, election authorities clarified that a human candidate is formally registered and campaigns via an AI interface labeled “IA” on the ballot.

Read article

Governing AI

Exclusive: Pentagon threatens to cut off Anthropic in AI safeguards dispute

Dave Lawler and Maria Curi on February 15, 2026 in Axios

The Pentagon is reportedly considering severing ties with Anthropic after the company refused to authorize unrestricted military use of Claude, particularly for mass domestic surveillance or fully autonomous weapons. OpenAI, Google, and xAI have reportedly shown greater flexibility. The dispute underscores a structural tension: commercial AI labs’ usage policies versus defense agencies’ demand for “all lawful purposes” access.

Read article

International AI Safety Report 2026

Yoshua Bengio (Chair), Stephen Clare, Carina Prunkl, et al. on February 3, 2026 in Mila - Quebec Artificial Intelligence Institute and the AI Security Institute

Backed by 100+ experts from 30+ countries, the second International AI Safety Report assesses frontier AI capabilities, risks, and technical safeguards. It organizes risks into misuse, malfunctions, and systemic harms, while highlighting the “evidence dilemma” policymakers face as capabilities outpace empirical risk data. The report stops short of policy prescriptions, instead offering a shared scientific baseline for governments navigating AI governance.

Read article

Understanding Global AI Governance Through a Three-Layer Framework

Cedric Sabbah and Moshe Uziel on February 4, 2026 in Lawfare

Adapting the classic internet governance model, the authors map AI governance across infrastructure, logical (models), and social (applications) layers. The framework reveals fragmentation and duplication across institutions while highlighting how frontier firms are vertically integrating across all layers. It’s a useful taxonomy for understanding where governance gaps and power consolidation are emerging.

Read article

Evaluating AI Safety Through Local Policy: Findings from the UbuntuGuard benchmark

Elana Banin on February 17, 2026 in Reboot Democracy

UbuntuGuard tests whether AI systems comply with locally defined policies across ten African languages and six countries. Rather than relying on English-language benchmarks, the framework evaluates alignment with real institutional norms. The findings suggest that AI safety must be policy-explicit and context-specific and that governments need the capacity to test systems against their own rules before deployment.

Read article

AI for Governance

Governments Are Using AI To Draft Legislation. What Could Possibly Go Wrong?

Chris Stokel-Walker on February 10, 2026 in Tech Policy Press

From the UK to Brazil and New Zealand, governments are using AI tools to summarize consultations, cluster amendments, and draft legislative materials. While promising efficiency gains, researchers warn that AI-assisted rulemaking could expose regulations to legal challenge, reduce transparency, or weaken legitimacy if procedural safeguards aren’t preserved.

Read article

AI Resources for MPs and Parliament Staff

Staff on February 10, 2026 in POPVOX Foundation

POPVOX released practical AI toolkits for legislative offices, including use policies, change management guides, custom GPT instructions, and deep research workflows. The effort reflects a broader push to address the “pacing problem” by equipping parliaments with operational AI fluency rather than abstract literacy alone.

Read article

Minding the brand: Leveraging AI to build a culture of recognition in government

Max Stier on February 16, 2026 in Reboot Democracy

Drawing on federal workforce data, Max Stier argues that rebuilding trust in government begins internally. AI can help automate recognition programs, reduce bias in performance acknowledgment, and personalize praise, strengthening morale and outcomes. The piece reframes AI not just as efficiency infrastructure but as cultural infrastructure.

Read article

Government Strategy Needs Reimagining — An Experiment from Argentina

Giulio Quaggiotto on February 18, 2026 in Reboot Democracy

Argentina’s Red de Innovación Local piloted an AI-supported “Questions Tree” strategy process using PortalRIL. By surfacing patterns and trade-offs across teams, AI compressed months of coordination into weeks. The experiment suggests AI’s real promise lies not in automation but in expanding collective strategic insight.

Read article

AI Impact Summit 2026: World Leaders, Tech Giants Descend on New Delhi

Staff on February 15, 2026 in Asian News International (ANI)

India hosted the first major AI summit in the Global South, convening heads of state and tech executives under themes of People, Planet, and Progress. The summit positioned India as a bridge between frontier AI development and Global South implementation needs, while unveiling 12 indigenous foundation models.

Read article

AI and Public Safety

Modern NYC Subway Gates Tested by the MTA Use AI to Track Fare Evaders

Ramsey Khalifeh on February 4, 2026 in Gothamist

The MTA is piloting AI-enabled fare gates that detect and document fare evasion through camera-triggered clips and AI-generated descriptions. Vendors are competing for a $1.1B modernization contract. The rollout illustrates how AI-driven monitoring is embedding into civic infrastructure.

Read article

Cops Are Buying ‘GeoSpy,’ an AI That Geolocates Photos in Seconds

Joseph Cox on February 12, 2026 in 404 Media

Police departments are purchasing GeoSpy, an AI tool that infers where photos were taken using environmental visual cues. While pitched as lead-generation support, the reporting raises questions about validation, error rates, and mission creep in AI-assisted geolocation.

Read article