News That Caught Our Eye #67

Published on July 17, 2025

Summary

Virginia launches an AI-powered regulatory review system to simplify and update regulations while San Francisco partners with Microsoft to provide city employees access to Copilot. The Pentagon will provide Anthropic, Google, OpenAI, and xAI up to $200 million to design and implement AI tools that advance national security. The EU publishes a code of practice designed to reduce the administrative burden on companies that comply with the AI Act. After an 11-month strike, SAG-AFTRA approves a deal with video game companies that gives actors more power over how AI is used to replicate their voice, face, or movement. A University of Cambridge report argues that Britain’s environmental policies need to be updated for the nation to meet its AI infrastructure plans while also sticking to its climate goals. Dane Gambrell argues that AI-powered research tools can transform evidence-based policymaking – but only if implemented responsibly. Read more in this week's AI News That Caught Our Eye.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, all programs are free and open to all.

July 29, 2025, 2:00 PM ET: Making Homelessness Rare and Brief: Lessons from the Built for Zero Backbone Strategy, Melanie Lewis Dickerson, Deputy Chief Program Officer, Community Solutions

July 30, 2025, 2:00 PM ET: How to Ensure Successful AI Adoption: Making Vendors Accountable and Trustworthy, Thomas Gilbert, Founder and CEO, Hortus AI

August 4, 2025, 2:00 PM ET: Chatbots in Public Service: Responsible Design and Use, Vance Ricks, Teaching Professor, Northeastern University

AI for Law Enforcement: Beginning on September 4, 2025, this workshop series for law enforcement and public safety professionals builds foundational knowledge and best practices for responsible AI deployment in policing. 

Reboot Democracy: Designing Democratic Engagement for the AI Era: Starting on September 11, 2025, learn how to design effective and efficient AI-enhanced citizen engagement that translates public input into meaningful outcomes. This series is hosted and curated by Beth Simone Noveck, founder of InnovateUS and the GovLab, and Danielle Allen, Director of the Allen Lab for Democracy Renovation. 

Amplify: Mastering Public Communication in the AI Age: Beginning on October 7, 2025, this workshop series explores how AI tools—when used responsibly and transparently—can strengthen communication, broaden outreach, and counter disinformation. The series is hosted and curated by Jill Abramson and John Wihbey, who will also serve as part of the faculty, alongside Henry Griggs. 

For more information on workshops, visit https://innovate-us.org/workshops

Governing AI

European Union Unveils Rules for Powerful A.I. Systems

Adam Satariano on July 10, 2025 in The New York Times

“European Union officials unveiled new rules on Thursday to regulate artificial intelligence. Makers of the most powerful A.I. systems will have to improve transparency, limit copyright violations and protect public safety…The European Commission said the code of practice is meant to help companies comply with the A.I. Act. Companies that agreed to the voluntary code would benefit from a ‘reduced administrative burden and increased legal certainty.’ Officials said those that did not sign would still have to prove compliance with the A.I. Act through other means, which could potentially be more costly and time-consuming.”

Read article

AI and Public Engagement

AI and the Trust Revolution

Yasmin Green and Gillian Tett on July 7, 2025 in Foreign Affairs

“...[E]thnographic research conducted by Jigsaw—Google’s technology incubator—reveals a more complex and subtle reality: members of Gen Z, typically understood to be people born after 1997 and before 2012, have developed distinctly different strategies for evaluating information online, ones that would bewilder anyone over 30….If AI tools are designed carefully, they might potentially help—not harm—interpersonal interactions… The challenge for policymakers, citizens, and tech companies alike is to recognize how the nature of trust is evolving and then design AI tools and policies in response to this transformation. Younger generations will not act like their elders, and it is unwise to ignore the tremendous change they are ushering in.”

Read article

AI for Governance

Virginia to Use Agentic AI to Power Review of Regulations

News Staff on July 14, 2025 in GovTech

“Virginia Gov. Glenn Youngkin issued an executive order Friday that enables the state to use agentic AI for a regulatory reduction pilot program. Youngkin's move is part of an increase in efficiency-focused initiatives at the state level, a growing number of which largely rely on new technologies such as AI. In Virginia, Executive Order 51 (2025) establishes a new pilot aiming to enhance governmental efficiency and build on previous work in this space with agentic AI. This also is not Youngkin's first executive move to reduce regulations in the state, with previous efforts including Executive Directive 1 (2022) and Executive Order 19 (2022), which called for a 25 percent regulatory reduction. The state has exceeded that goal, with agencies streamlining 26.8 percent of regulatory requirements and eliminating 47.9 percent of the words in guidance documents.”

Read article

AI for Governance

S.F. government is embracing an OpenAI-powered chatbot to help with city services

Roland Li on July 14, 2025 in San Francisco Chronicle

“San Francisco’s city government is getting chatbot access as it continues to embrace artificial intelligence, Mayor Daniel Lurie said. Microsoft 365 Copilot, powered by OpenAI’s GPT-4o chatbot product, will be available starting Monday to nearly 30,000 city employees. Lurie said the city is the largest local government to use generative AI for tasks including writing reports, data analysis and document summaries…San Francisco city workers are being told to follow guidelines including keeping data secure, fact checking and disclosing AI use. The city is partnering with nonprofit InnovateUS to train staff.”

Read article

AI for Governance

US government announces $200 million Grok contract a week after ‘MechaHitler’ incident

Lauren Feiner on July 14, 2025 in The Verge

“A week after Elon Musk’s Grok dubbed itself “MechaHitler” and spewed antisemitic stereotypes, the US government has announced a new contract granting the chatbot’s creator, xAI, up to $200 million to modernize the Defense Department. xAI is one of several leading AI companies to receive the award, alongside Anthropic, Google, and OpenAI. But the timing of the announcement is striking given Grok’s recent high-profile spiral, which drew congressional ire and public pushback. The use of technology, and especially AI, in the defense space has long been a controversial topic even within the tech industry, and Musk’s prior involvement in slashing federal government contracts through his work at the Department of Government Efficiency (DOGE) still raises questions about potential conflicts…”

Read article

AI for Governance

New Jersey, Pennsylvania, and Utah Lead States in AI Readiness, Report Finds

Dane Gambrell on July 10, 2025 in Reboot Democracy

“A new Code for America assessment looks at how states are adopting artificial intelligence to support the design, delivery, and evaluation of public services. While most states remain in early development stages, the three leading states distinguished themselves by building comprehensive governance frameworks, investing in workforce training, and establishing dedicated leadership structures to support the responsible and effective use of AI.”

Read article

AI for Governance

AI Can Revolutionize Policy Research – But Only If Implemented Responsibly

Dane Gambrell on July 16, 2025 in Reboot Democracy

Artificial intelligence can transform evidence-based policymaking by enabling policymakers to cast a wider net for evidence, synthesize it more rapidly, and engage more deeply with communities. However, this transformation also presents significant challenges, from bias and transparency concerns to the risk of over-reliance on algorithmic outputs. By understanding both the promise and the pitfalls of AI-enabled research tools, while keeping human expertise at the center of the process, we can harness these powerful tools to serve the public interest while preserving the democratic values of transparency, accountability, and inclusive governance.

Read article

AI Infrastructure

Big Tech’s Climate Performance and Policy Implications for the UK

Bhargav Srinivasa Desikan and Gina Neff on July 10, 2025 in Minderoo Centre for Technology and Democracy

“The AI Opportunities Action Plan calls for the UK’s AI infrastructure to be increased and for the introduction of AI Growth Zones to accelerate the building of data centres. These plans may put the UK’s climate goals at risk. The UK is legally committed to reaching net zero emissions by 2050, and the current Government has a further ambition to decarbonise the power grid by 2030. But increasing investment in AI infrastructure will come with large costs in terms of carbon emissions, electricity use, and water consumption, and tech companies’ self-reported global emissions have been growing rapidly – even before the generative AI (GenAI) boom. This report addresses the gap that exists in trying to achieve two policy goals: decarbonising the UK and advancing AI infrastructure. It examines both the current climate impact of data centres and their potential future trajectories, driven by AI development and spearheaded by what Big Tech is asking of countries.”

Read article

AI Infrastructure

The UN Made AI-Generated Refugees

Matthew Gault on July 10, 2025 in 404 Media

“I am talking to Amina, an AI avatar that plays the role of a woman living in a refugee camp in Chad after she escaped violence in Sudan…Amina is an experiment, part of a pair of AI avatars created by the United Nations University Center for Policy Research (UNU-CPR), a research institution connected to the United Nations. It’s one that is sure to be controversial, considering that the UN itself says a major problem facing refugees is their dehumanization for political gain or convenience. The UNU-CPR project is using an inhuman technology in an attempt to help people learn more about what they are facing. The group also tested a soldier persona called ‘Abdalla,’ which ‘simulates the behavior and decision-making patterns of an actual combatant, offering negotiators and mediators a possible tool to train for future high-stakes negotiations.’”

Read article

AI and Labor

Meet the new gig work behind AI, same as the old gig work

Eli Rosenberg on July 10, 2025 in Hard Reset

“A few weeks ago, details came out about Meta’s plan to acquire a 49 percent stake in the company Scale AI for a sizeable $14 billion…Scale is one of the best-known brands of a certain niche in the AI industry: companies that help train and build AI models through the use of human annotators, who label and define pictures, videos, images and text to help AI systems learn and grow more advanced. There is big money to be made here, as vast sums of capital flow to AI: Scale is forecasting $2 billion in revenue this year reportedly, with an expected valuation of $25 billion. The data collection and annotation industry has been forecasted to reach $17.1 billion by 2030 by some estimates, and studies from 2021 and 2022 estimate that millions of people have engaged in data annotation for work, at least temporarily. The work appears to be another example of the tech industry creating the illusion of seamlessness and futuristic magic — in this case, artificial intelligence — while relying on a complicated and potentially illegal business model powered by a very old technology: humans.”

Read article

AI and Labor

Video Game Actors Contract Ratified: SAG-AFTRA Leaders Talk Gaming Execs’ Reckoning With Hollywood’s AI Standards, 11-Month Strike’s Turning Point

Jennifer Maas on July 9, 2025 in Variety

“Following an 11-month strike plagued by back-and-forth disputes over Generative AI, SAG-AFTRA has ratified its new contract with major video game companies including Activision, Disney Character Voices, Electronic Arts, Epic Games, Formosa Interactive, Insomniac Games, Take 2 Productions and WB Games Inc….The contract ‘also accomplishes performer safety guardrails and gains around A.I., including consent and disclosure requirements for A.I. digital replica use and the ability for performers to suspend consent for the generation of new material during a strike.’”

Read article