
News That Caught Our Eye #60
Published by Dane Gambrell and Angelique Casem on May 29, 2025
In the news this week: A new whitepaper on Public AI outlines how open approaches to AI can be developed and institutionalized in the public interest. The EFF argues that new federal legislation intended to combat sexual deepfakes could open the door to censorship of political speech, while Sir Geoff Mulgan and researchers at UCL explain how well-designed institutions could effectively combat deepfakes in elections. An ILO study warns that automation may threaten women’s jobs more than men’s, while research from the Federal Reserve shows that demand for AI skills is greatest in highly educated professions. Gideon Lichfield argues for a narrower version of Republicans' proposed ban on state AI laws, while an investigation finds that local police are providing ICE access to an AI-powered surveillance network. Read more in AI News That Caught Our Eye.
In the news this week
- AI and Elections: Free, fair and frequent
- AI for Governance: Smarter public institutions through machine intelligence
- AI and Labor: Worker rights, safety and opportunity
- AI and Problem Solving: Research, applications, technical breakthroughs
- Governing AI: Setting the rules for a fast-moving technology
Upcoming Events
- June 5, 2025, 2:00 PM ET: Community Engagement for Public Professionals: Overview, with Deborah Stine, Founder and Chief Instructor, Science and Technology Policy Academy
- June 10, 2025, 2:00 PM ET: Generative AI for Public Sector Communicators: Tools, Ethics, and Best Practices, with John Wihbey, Director of the AI-Media Strategies Lab (AIMES Lab) & Associate Professor, Northeastern University
- June 12, 2025, 2:00 PM ET: Community Engagement for Public Professionals: Interviews, with Deborah Stine, Founder and Chief Instructor, Science and Technology Policy Academy
- June 18, 2025, 2:00 PM ET: Leading Through Reform: Strategies to Engage Resistant Teams, with Malena Brookshire, Chief Financial Officer
For more information on events, visit https://innovate-us.org/workshops
Special Announcements
Global Dialogues Challenge: The Collective Intelligence Project is opening its global dataset so participants can use the data to create stories, insights, or tools that help guide the future of AI. Submissions are due July 11. A $10,000 prize fund will be distributed across the top submissions. Learn more: https://www.cip.org/challenge
Data Commons Event: The New Commons Challenge is an initiative seeking innovative data commons projects that enhance disaster response and local decision-making; applications are due by 11:59:59 PM on June 2, 2025. Two winners will each receive $100,000 in funding. Learn more: https://newcommons.ai/
AI and Elections
Research Radar: Dreaming Better Elections Into Reality
A new white paper from Sir Geoff Mulgan and The Institutional Architecture Lab argues that combating AI-generated deepfakes and synthetic content in elections requires purpose-built institutions. The authors propose Electoral Integrity Institutions that would coordinate across government, tech platforms, and civil society to scan, assess, and respond to synthetic content threats. But, says Beth Noveck, the paper also provokes a fundamental question: should we design institutions defensively to react to AI threats, or offensively to build better, more participatory and representative elections?
Read article
AI for Governance
Why Generative AI Isn’t Transforming Government (Yet) — and What We Can Do About It
An essay exploring which public sector GenAI use cases are most promising: “The answer is nuanced. While transformative end-to-end automation remains largely aspirational, strategic augmentation and delegated autonomy offer immediate benefits if properly implemented and governed. GenAI won’t transform governments overnight. But with targeted use, adaptive governance, and practical realism, [GenAI] can help deliver public services that are not only faster and more efficient but also fairer and more inclusive.”
Read article
AI for Governance
Two Paths for A.I.
“Last spring, Daniel Kokotajlo, an A.I.-safety researcher working at OpenAI, quit his job in protest...He’d concluded that a point of no return, when A.I. might become better than people at almost all important tasks, and be trusted with great power and authority, could arrive in 2027 or sooner…Around the same time… two computer scientists at Princeton, Sayash Kapoor and Arvind Narayanan, were preparing for the publication of their book, ‘AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.’ … [which] advanced views that were diametrically opposed to Kokotajlo’s…Recently, all three researchers have sharpened their views, releasing reports that take their analyses further…Reading these reports back-to-back…and speaking to their authors in succession, in the course of a single afternoon, I became positively deranged. ‘AI 2027’ and ‘AI as Normal Technology’ aim to describe the same reality, and have been written by deeply knowledgeable experts, but arrive at absurdly divergent conclusions. Discussing the future of A.I. with Kapoor, Narayanan, and Kokotajlo, I felt like I was having a conversation about spirituality with Richard Dawkins and the Pope.”
Read article
AI and Labor
AI advances may threaten women’s jobs more than men’s
“Women may be at a heightened risk for being edged out of their job (or having their duties change) due to AI. According to a new study…from the United Nations’ International Labour Organization (ILO) and Poland’s National Research Institute (NASK), jobs disproportionately done by women, especially in higher income countries, are more steadily becoming automated. Researchers also found a significant contrast between how at-risk women’s jobs were versus men’s – 9.6% of female employment compared to 3.5% of jobs typically held by men. The researchers noted that rather than AI taking over employees’ jobs completely, human roles will, more commonly, evolve with the technology.”
Read article
AI and Labor
By Degree(s): Measuring Employer Demand for AI Skills by Educational Requirements
“Key Takeaways: ‘The percentage of all job postings that require at least one AI skill increased from about 0.5 percent in 2010 to 1.7 percent in 2024. The demand for AI skills varies by education: job postings that require at least a bachelor's degree are more likely to require an AI skill than postings that require an associate degree or high school diploma. AI skill demand is increasing for occupations that require at least an associate degree, growing from 0.4 percent in 2010 to 1.4 percent in 2024. This demand growth is mostly concentrated in Computer and Mathematical occupations.’”
Read article
AI and Problem Solving
Reimagining AI for Environmental Justice and Creativity
“This collection of essays from a 2024 University of Virginia workshop explores reimagining AI for environmental justice and creativity. AI has become ubiquitous across sectors like health, education, and finance, often operating invisibly in apps and services without user awareness. While techno-optimists promote AI's transformative potential, critical researchers highlight documented harms from overly optimistic adoption. Central concerns focus on creativity and environmental sustainability, particularly how large language models consume massive resources while potentially disrupting creative industries. The workshop addressed fundamental questions about AI's current state and future possibilities, offering resources for educators, researchers, policymakers, and activists to thoughtfully approach challenges in building, using, and evaluating AI systems across diverse contexts.”
Read article
AI and Problem Solving
We did the math on AI’s energy footprint. Here’s the story you haven’t heard.
“...New analysis by MIT Technology Review provides an unprecedented and comprehensive look at how much energy the AI industry uses—down to a single query—to trace where its carbon footprint stands now, and where it’s headed, as AI barrels towards billions of daily users. We spoke to two dozen experts measuring AI’s energy demands, evaluated different AI models and prompts, pored over hundreds of pages of projections and reports, and questioned top AI model makers about their plans. Ultimately, we found that the common understanding of AI’s energy consumption is full of holes...”
Read article
AI and Problem Solving
Public AI White Paper – A Public Alternative to Private AI Dominance
“Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges. The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it.”
Read article
Governing AI
The US could become a grand experiment in AI law—in theory, anyway
“House Republicans’ proposed 10-year moratorium on enforcing any state-level or local AI regulations has caused the predictable uproar. They argue that the AI laws now passing in dozens of states will create a patchwork of conflicting and often poorly drafted regulations that will be a nightmare for companies to comply with, and will hold back American AI innovation. The countervailing view, in an open letter signed by more than 140 organizations, from universities to labor unions, is that it will give AI companies license to build systems that cause untold social harm without facing any consequences. Both are right—just not entirely; both are wrong—just not completely. There’s an argument for a moratorium—but a much narrower one than what Republicans propose.”
Read article
Governing AI
ICE Taps into Nationwide AI-Enabled Camera Network, Data Shows
“Data from a license plate-scanning tool that is primarily marketed as a surveillance solution for small towns to combat crimes like carjackings or finding missing people is being used by ICE… Local police around the country are performing lookups in Flock’s AI-powered automatic license plate reader (ALPR) system for ‘immigration’ related searches and as part of other ICE investigations, giving federal law enforcement side-door access to a tool that it currently does not have a formal contract for…The fact that police almost never get a warrant to perform a Flock search means that there is not as much oversight into its use, which leads to local police either formally or informally helping the feds by doing lookups.”
Read article
Governing AI
Congress Passes TAKE IT DOWN Act Despite Major Flaws
“...The U.S. House of Representatives passed the TAKE IT DOWN Act, giving the powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don't like. President Trump himself said that he would use the law to censor his critics. The takedown provision in TAKE IT DOWN applies to a much broader category of content—potentially any images involving intimate or sexual content… The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Services will rely on automated filters, which are infamously blunt tools. They frequently flag legal content, from fair-use commentary to news reporting. The law’s tight time frame requires that apps and websites remove speech within 48 hours, rarely enough time to verify whether the speech is actually illegal. As a result, online service providers, particularly smaller ones, will likely choose to avoid the onerous legal risk by simply depublishing the speech rather than even attempting to verify it.”
Read article
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.