
News That Caught Our Eye #66
Published by Dane Gambrell and Angelique Casem on July 9, 2025
President Trump’s budget funds AI-powered surveillance, raising concerns about expanded monitoring of immigrants and student protesters. The Massachusetts Department of Transportation uses AI to speed up infrastructure planning by streamlining manual searches. A New York Times investigation finds AI-generated disinformation is undermining elections worldwide. Code for America releases a new AI Landscape Assessment of state governments’ readiness for responsible AI adoption. Anthropic proposes a transparency framework for governing the largest AI models. Microsoft quietly lays off 9,000 employees as it replaces developers with AI agents. Tiago Peixoto urges building institutional capacity to harness GenAI, while Ethan Mollick debunks the myth of “AI brain damage.” Read more in this week’s AI News That Caught Our Eye.
In the news this week
- Governing AI: Setting the rules for a fast-moving technology
- AI and Elections: Free, fair and frequent
- AI and Problem Solving: Research, applications, technical breakthroughs
- AI for Governance: Smarter public institutions through machine intelligence
- AI and Education: Preparing people for an AI-driven world
- AI and Labor: Worker rights, safety and opportunity
Upcoming Events
July 10, 2025, 2:00 PM ET: Community Engagement for Public Professionals: Communicating Scientific and Technical Information to Policymakers and the Public, with Deborah Stine, Founder and Chief Instructor, Science and Technology Policy Academy
July 17, 2025, 2:00 PM ET: Designing AI with Humans in Mind: Insights on Inclusion, Productivity, and Strategy, with Jamie Kimes, Founder, The Idea Garden, and Josh Martin, Former Chief Data Officer, State of Indiana
July 29, 2025, 2:00 PM ET: Making Homelessness Rare and Brief: Lessons from the Built for Zero Backbone Strategy, with Melanie Lewis Dickerson, Deputy Chief Program Officer, Community Solutions
July 30, 2025, 2:00 PM ET: How to Ensure Successful AI Adoption: Making Vendors Accountable and Trustworthy, with Thomas Gilbert, Founder and CEO, Hortus AI
August 4, 2025, 2:00 PM ET: Chatbots in Public Service: Responsible Design and Use, with Vance Ricks, Teaching Professor, Northeastern University
Amplify: Mastering Public Communication in the AI Age: Beginning on October 7, 2025, this workshop series explores how AI tools—when used responsibly and transparently—can strengthen communication, broaden outreach, and counter disinformation. The series is hosted and curated by Jill Abramson and John Wihbey, who will also serve as part of the faculty, alongside Henry Griggs.
For more information on workshops, visit https://innovate-us.org/workshops
Governing AI
How Trump’s Budget Bill Sells Out The Future to Big Tech
“President Donald Trump’s so-called ‘Big Beautiful Bill’ is a massive handout to the tech industry that comes at the expense of the people. A close reading of the budget bill…reveals that the government’s interests and actions are now squarely aligned with Silicon Valley.” As a result of the legislation, which was signed by President Trump on July 4th, “the government will fund the acquisition of more Silicon Valley AI to expand surveillance and bolster the wealth and power of the tech oligarchy.”
Read article
How the US is turning into a mass techno-surveillance state
“In the last four months, Trump and his former star advisor, the tech tycoon Elon Musk, have, along with the private sector, accelerated the deployment of a massive techno-surveillance state. And for the first time in history, Washington is boasting about it rather than denying its existence…’These measures have disproportionately affected immigrants, refugees, students, and marginalized and low-income communities. Although the scale and intensity of surveillance are increasing, the problem is not new…’”
Read article
Exclusive: Google's AI Overviews hit by EU antitrust complaint from independent publishers
“Alphabet's Google has been hit by an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters. Google's AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. It began adding advertisements to AI Overviews last May. The company is making its biggest bet by integrating AI into search, but the move has sparked concerns from some content providers such as publishers. The Independent Publishers Alliance document, dated June 30, sets out a complaint to the European Commission and alleges that Google abuses its market power in online search.”
Read article
5 Ways Cooperatives Can Shape the Future of AI
“AI development is dominated by a handful of powerful firms, raising concerns about equity, accountability, and social harm. AI cooperatives—democratically governed and community-owned—offer a promising alternative through five key interventions: 1) Democratizing data governance by giving individuals control over how their data is used; 2) Bridging research and civil society by grounding AI debates in public needs, not elite institutions; 3) Advancing education to equip members with the knowledge to influence AI systems; 4) Building alternative ownership models to keep AI value creation in stakeholder hands; and 5) Adapting AI for cooperative ends, ensuring systems support solidarity and worker power. Though cooperatives face barriers in scale and resources, these strategies point to a viable, inclusive path for AI aligned with public interest.”
Read article
The Need for Transparency in Frontier AI
“Frontier AI development needs greater transparency to ensure public safety and accountability for the companies developing this powerful technology. AI is advancing rapidly. While industry, governments, academia, and others work to develop agreed-upon safety standards and comprehensive evaluation methods—a process that could take months to years—we need interim steps to ensure that very powerful AI is developed securely, responsibly, and transparently. We are therefore proposing a targeted transparency framework, one that could be applied at the federal, state, or international level, and which applies only to the largest AI systems and developers while establishing clear disclosure requirements for safety practices.”
Read article
AI and Elections
A.I. Is Starting to Wear Down Democracy
“Since the explosion of generative artificial intelligence over the last two years, the technology has demeaned or defamed opponents and, for the first time, officials and experts said, begun to have an impact on election results. Free and easy to use, A.I. tools have generated a flood of fake photos and videos of candidates or supporters saying things they did not or appearing in places they were not — all spread with the relative impunity of anonymity online. The technology has amplified social and partisan divisions and bolstered antigovernment sentiment, especially on the far right, which has surged in recent elections in Germany, Poland and Portugal…As the technology improves, officials and experts warn, it is undermining faith in electoral integrity and eroding the political consensus necessary for democratic societies to function.”
Read article
AI and Problem Solving
MassDOT using AI to build faster
This report highlights the Highway Engineer Knowledge Agent (HEKA) chatbot used by design engineers in the MassDOT Highway Division to speed up infrastructure planning by streamlining manual searches. The AI tool was developed through Northeastern University’s AI for Impact co-op program.
Read article
AI for Good Global Summit 2025 Day 1 highlights
The three-day AI for Good Global Summit, organized by the United Nations’ International Telecommunication Union (ITU), kicked off on July 8 in Geneva, Switzerland. The summit’s aims include “identifying innovative AI applications, building skills and standards, and advancing partnerships to solve global challenges.”
Read article
AI for Governance
From Copilots to Complexity: When Generative AI Meets the Public Sector
“Generative AI (GenAI) promises to transform governments. Yet despite surging interest, compelling examples of end-to-end automation in government workflows remain scarce…This isn't about technological limitations, but institutional ones: examining where GenAI delivers real value, where it doesn't, and what's needed to move from isolated efforts to systemic transformation. Two recent UK studies highlight this challenge. The Alan Turing Institute estimates that 41% of public sector work time is potentially exposed to GenAI, based on granular task-level analysis of ONS time-use data. Meanwhile, the UK Government Digital Service gave Microsoft's M365 Copilot to 20,000 civil servants over three months. Result: 26 minutes saved per day – roughly 5% of a workday, or 13 days annually per person. One study models potential. The other measures reality. The gap between 41% theoretical exposure and 5% realized gains is not failure – it's instruction. It reveals there are conditions necessary for GenAI to fulfill its public sector promise.”
Read article
Government AI Landscape Assessment
"The use of AI in the public sector brings immense opportunities—but also immense risks. That’s why we’ve created this Government AI Landscape Assessment to evaluate the readiness of U.S. state governments in responsibly adopting AI. Code for America is dedicated to advancing the use of human-centered AI in government. We hope this Landscape Assessment provides the civic-tech community with a clear, actionable picture of how AI is transforming public service delivery. The rapid evolution of AI means that states are at varying stages of AI readiness. The Landscape provides a comprehensive snapshot across key dimensions such as Leadership & Governance, AI Capacity Building, and Technical Infrastructure & Capabilities. Most states are navigating early or developing phases in these categories, building foundational capabilities while defining governance structures and strategic direction. But a few have emerged as national leaders—setting up dedicated AI offices, launching sophisticated pilot programs, training their staff, and building out infrastructure."
Read article
AI and Education
Against "Brain Damage"
“I increasingly find people asking me ‘does AI damage your brain?’ It's a revealing question. Not because AI causes literal brain damage (it doesn't) but because the question itself shows how deeply we fear what AI might do to our ability to think….Part of this is due to misinterpretation of a much-publicized paper out of the MIT Media Lab (with authors from other institutions as well), titled “Your Brain on ChatGPT” … It involved a small group of college students who were assigned to write essays alone, with Google, or with ChatGPT (and no other tools). The students who used ChatGPT were less engaged and remembered less about their essays than the group without AI…There was, of course, no brain damage. Yet the more dramatic interpretation has captured our imagination because we have always feared that new technologies would ruin our ability to think… Given that AI is such a general purpose intellectual technology, we can outsource a lot of our thinking to it. So how do we use AI to help, rather than hurt us?”
Read article
AI and Labor
Microsoft Is Quietly Replacing Developers With AI—And the Layoffs Are Just Beginning
“On July 2, Microsoft cut roughly 9,000 jobs globally, amounting to about 4% of its workforce. The official reason? A standard bit of corporate jargon: ‘organizational and workforce changes.’ But inside the company…employees tell a much more specific story: Microsoft is betting big on AI, and it’s already replacing people with it. Among those hit were at least five employees at Halo Studios (formerly 343 Industries), including developers working on the next mainline Halo installment…Behind the scenes, many believe this round of layoffs is about more than streamlining. ‘They’re trying their damndest to replace as many jobs as they can with AI agents,’ one Halo developer said.”
Read article
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.