
News That Caught Our Eye #68
Published by Dane Gambrell and Angelique Casem on July 23, 2025
The White House unveils its “AI Action Plan” aimed at supporting the nation’s infrastructure, innovation, and global influence over the technology; critics say the plan prioritizes corporate interests over the public. California co-creates a recovery action plan with communities impacted by the Los Angeles wildfires, while Ohio lawmakers propose an AI regulation framework to attract business to the state. Swiss researchers announce a publicly developed large language model that supports more than 1,000 languages. A National Employment Law Project report outlines a policy agenda to rein in AI-powered workplace surveillance software. Delta uses AI to ramp up dynamic flight pricing – attracting the attention of lawmakers. Researchers at The GovLab propose a framework for empowering communities to shape how AI systems are designed and used. Read more in this week’s AI News That Caught Our Eye.
In the news this week
- Governing AI: Setting the rules for a fast-moving technology.
- AI for Governance: Smarter public institutions through machine intelligence.
- AI and Public Engagement: Bolstering participation
- AI and Problem Solving: Research, applications, technical breakthroughs
- AI Infrastructure: Computing resources, data systems and energy use
- AI and Labor: Worker rights, safety and opportunity
- AI and Education: Preparing people for an AI-driven world
Upcoming Events
July 29, 2025, 2:00 PM ET: Making Homelessness Rare and Brief: Lessons from the Built for Zero Backbone Strategy, with Melanie Lewis Dickerson, Deputy Chief Program Officer, Community Solutions
July 30, 2025, 2:00 PM ET: How to Ensure Successful AI Adoption: Making Vendors Accountable and Trustworthy, with Thomas Gilbert, Founder and CEO, Hortus AI
August 4, 2025, 2:00 PM ET: Chatbots in Public Service: Responsible Design and Use, with Vance Ricks, Teaching Professor, Northeastern University
August 20, 2025, 2:00 PM ET: From Bureaucracy to Vitality: Transforming Public Organizations, with Michele Zanini, Co-founder, Management Lab
August 26, 2025, 2:00 PM ET: Future-Ready Government: Building Resilience Across State and Public Agencies, with Dan Chenok, Executive Director, IBM Center for The Business of Government
AI for Law Enforcement: Beginning on September 4, 2025, this workshop series for law enforcement and public safety professionals builds foundational knowledge and best practices for responsible AI deployment in policing.
Reboot Democracy: Designing Democratic Engagement for the AI Era: Starting on September 11, 2025, learn how to design effective and efficient AI-enhanced citizen engagement that translates public input into meaningful outcomes. This series is hosted and curated by Beth Simone Noveck, founder of InnovateUS and The GovLab, and Danielle Allen, Director of the Allen Lab for Democracy Renovation.
Amplify: Mastering Public Communication in the AI Age: Beginning on October 7, 2025, this workshop series explores how AI tools—when used responsibly and transparently—can strengthen communication, broaden outreach, and counter disinformation. The series is hosted and curated by former New York Times Executive Editor Jill Abramson and John Wihbey, Director of the AI-Media Strategies Lab (AIMES Lab) at Northeastern University and an associate professor of media innovation and technology, alongside Henry Griggs.
Governing AI
Trump Administration Plans to Give A.I. Developers a Free Hand
“The Trump administration said on Wednesday that it planned to speed the development of artificial intelligence in the United States, opening the door for companies to develop the technology unfettered from oversight and safeguards, but added that the A.I. needed to be free of ‘ideological bias.’...President Trump’s A.I. Action Plan outlines measures to ‘remove red tape and onerous regulation,’ as well as make it easier for companies to build infrastructure to power A.I. The plan also calls for the government to give federal contracts to companies that ‘ensure that their systems are objective.’”
Read article
Ohio lawmakers introduce AI regulation bill
“A bill known as the Right to Compute Act, introduced at the Ohio Statehouse a few days ago, aims to create a framework for artificial intelligence (AI) systems and technology in the state. ‘I think there’s a lot of potential in this space, economic potential. We want Ohio to be welcoming,’ Rep. Tex Fischer (R-Boardman) said. ‘State AI regulations, it’s kind of the wild west. Every state has the ability to regulate it, but very few states actually have.’...Fischer is one of the Republicans behind House Bill 392. He said the goal of the bill is simply to create a framework to attract businesses to the state while protecting Ohioans… The bill adds a definition of ‘artificial intelligence systems’ to the Ohio Revised Code, creates guidelines and requirements for risk management, and looks to ensure the safety of Ohioans from the harmful uses of AI.”
Read article
The Africa Edition
“A spotlight on Africa is long overdue here at POPVOX Foundation. We’ve been closely following developments across the continent, where legislative innovation, institutional reform, and research are reshaping the way politics works. In this issue, you’ll find our coverage of the first Africa Regional Conference on Parliament and Legislation (AFRIPAL), new insights into how MPs serve their constituents, and examples of how African parliaments are embracing technology and collaboration. Africa is changing, and the world should be paying close attention.”
Read article
David Sacks and the blurred lines of government service
“When Vultron, a startup that creates AI tools specifically for federal contractors, announced its $22 million funding round earlier this week, it made sure to highlight a key investor: Craft Ventures, the firm ‘co-founded by White House AI adviser David Sacks.’ The announcement has raised questions about conflicts of interest in the Trump administration, where Sacks serves as both AI and crypto czar while maintaining his role at Craft Ventures — an arrangement that critics see as a new model of government service where the lines between public duty and private gain have become unclear.”
Read article
Delta’s plan to use AI in ticket pricing draws fire from U.S. lawmakers
“Three Democratic senators have pressed Delta Air Lines CEO Ed Bastian to answer questions about the airline’s planned use of artificial intelligence to set ticket prices, raising concerns about the impact on travelers…the airline plans to deploy AI-based revenue management technology across 20% of its domestic network by the end of 2025 in partnership with Fetcherr, an AI pricing company. They said a Delta executive had earlier told investors the technology is capable of setting fares based on a prediction of ‘the amount people are willing to pay for the premium products related to the base fares.’”
Read article
AI for Governance
Want Accountable AI in Government? Start with Procurement
“Through interviews with nineteen city employees based in seven anonymous US cities, we found that procurement practices vary widely across localities, shaping what's possible when it comes to governing AI in the public sector. Procurement plays a powerful role in shaping critical decisions about AI. In the absence of federal regulation of AI vendors, procurement remains one of the few levers governments have to push for public values, such as safety, non-discrimination, privacy, and accountability. But efforts to reform governments' procurement practices to address the novel risks of emerging AI technologies will fall short if they fail to acknowledge how purchasing decisions are actually made on the ground. The success of AI procurement reform interventions will hinge on reconciling responsible AI goals with legacy purchasing norms in the public sector.”
Read article
AI and Public Engagement
Advancing Agency: Digital Self-Determination as a Framework for AI Governance
“As Artificial Intelligence (AI) systems become increasingly embedded in societal decision-making, they have simultaneously deepened longstanding asymmetries of data, information and control. Central to this dynamic is what this paper terms agency asymmetry: the systematic lack of meaningful participation by individuals and communities in shaping the data and AI systems that inform decisions that impact their lives. This asymmetry is not merely a technical or procedural shortcoming; it is a structural feature of contemporary data and AI governance that underpins a range of interrelated harms–from algorithmic opacity and marginalization, to ecological degradation. This paper proposes Digital Self-Determination (DSD) as a normative and practical framework for addressing these challenges. Building on the principle of self-determination as both individual autonomy and collective agency, DSD offers tools for empowering communities and individuals to determine how data-based technologies are designed, implemented and used…”
Read article
Fire survivors are shaping the LA Fire Rebuilding Action Plan through Engaged California
“Engaged California is the state’s new deliberative democracy program that is designed to amplify the voices of Californians. It’s been touted by the governor as the ‘modern town hall.’ Starting in March, we opened up the first-in-the-nation pilot program, in response to the LA wildfires that burned Altadena and Pacific Palisades in January. Engaged California is designed to create a new conversation opportunity for those who survived and were impacted. A civil conversation and policy-informing space. A new community.”
Read article
AI and Problem Solving
The Cities and States That Are Getting It Right
“To transform the operations of the public sector, leaders will need both courage and creativity. Government unions and contractors alike will be uncomfortable. The question is whether to prioritize the needs of the existing system or the needs of the public it is supposed to serve. A few pioneers are choosing the public, responding to the coming crisis by ensuring that our public institutions have the right people doing the right work. Other states and cities should follow their lead. In Denver, which is facing a budget shortfall, the city recently moved to change its layoff rules. In many states and cities, layoffs must be based on seniority, and a more senior employee can bump a more effective junior one. Mayor Mike Johnston changed that, and Denver’s new rules instead instruct managers to weigh employees’ performance history, abilities and length of service.”
Read article
AI Infrastructure
A language model built for the public good
“In late summer 2025, a publicly developed large language model (LLM) will be released — co-created by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS). This LLM will be fully open…This openness is designed to support broad adoption and foster innovation across science, society, and industry. A defining feature of the model is its multilingual capability in over 1,000 languages.”
Read article
AI and Labor
When ‘Bossware’ Manages Workers: A Policy Agenda to Stop Digital Surveillance and Automated-Decision-System Abuses
“Employers are increasingly turning to digital technologies for help with various management functions, such as employee monitoring, pay-setting, staffing, performance evaluation, and discipline—but often without transparency, safeguards for accuracy, or recourse for workers who’ve been wronged. Digital surveillance and automated decision systems—or ‘bossware’ tools—have given employers and businesses a wide range of increased powers in relation to workers…Employers’ increasing use of bossware has intensified a range of existing job quality problems, including harmful disciplinary practices, job precarity, lack of autonomy, poor working conditions, [and more]. Policymakers seeking to respond to technological changes in the workplace should address job quality degradation in addition to privacy, discrimination, and potential job losses. This requires regulating both workplace digital surveillance and automated decision systems. State lawmakers can lead the way in designing effective public policy interventions to empower workers against bossware’s harms.”
Read article
In recent layoffs, AI’s role may be bigger than companies are letting on
“As rounds of layoffs continue within a historically strong stock market and resilient economy, it is still uncommon for companies to link job cuts directly to AI replacement technology. IBM was an outlier when its CEO told the Wall Street Journal in May that 200 HR employees were let go and replaced with AI chatbots, while also stating that the company’s overall headcount is up as it reinvests elsewhere…firms often limit their explanations to terms like reorganization, restructuring, and optimization, and that terminology could be AI in disguise. ‘What we’re likely seeing is AI-driven workforce reshaping, without the public acknowledgment,’ said Christine Inge, an instructor of professional and executive development at Harvard University. ‘Very few organizations are willing to say, “We’re replacing people with AI,” even when that’s effectively what’s happening.’”
Read article
AI and Education
Reading Together with AI: Tools For Parent-Led Learning
“Research on the value of family involvement in children’s literacy development extends beyond the introduction of ed tech tools…Following 2017, AI features began to be introduced, and the possibilities various tools could offer, such as enabling deeper personalization based on different children’s needs and skills, drastically increased. Take Google’s Read Along app, which launched in 2019. The app features a reading assistant for kids, called Diya, which uses advanced text-to-speech and voice recognition technologies to listen to children read and provide feedback. As these tools have advanced, they have become better able to provide precise guidance, both providing direct feedback to children and offering caregivers access to data on their children’s reading abilities. For example, the organization Springboard Collaborative is now working on a Parent-Facing Literacy Screener, which will use AI to assess children’s literacy progress at home.”
Read article
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.