
News That Caught Our Eye #65
Published by Dane Gambrell and Angelique Casem on July 2, 2025
In the news this week: The Senate removes the controversial 10-year moratorium on state AI laws from the budget proposal, while a new bill would ban all AI systems controlled by foreign adversaries from use by federal agencies. A federal judge rules that Anthropic's use of books to train its Claude AI model is legal fair use – but the company's storage of millions of pirated books violates copyright law. Denmark considers a change to copyright law to give residents legal rights over their own body and voice to prevent unauthorized digital imitations, while ICE deploys new software that can recognize individuals based on fingerprints or facial features to assist in mass deportations. A report finds that UK entry-level job postings have dropped 32% since ChatGPT's launch. Political scientist Henry Farrell argues that AI should be viewed not just as a technology to be governed, but as a technology that transforms how we govern. Read more in this week's AI News That Caught Our Eye.
In the news this week
- Governing AI: Setting the rules for a fast-moving technology
- AI for Governance: Smarter public institutions through machine intelligence
- AI and Education: Preparing people for an AI-driven world
- AI and Public Safety: Law enforcement, disaster prevention and preparedness
- AI and Labor: Worker rights, safety and opportunity
Upcoming Events
July 8, 2025, 2:00 PM ET: AI Regulation Across Borders: Who’s Setting the Rules—and Why It Matters, with Vance Ricks, Teaching Professor, Northeastern University
July 9, 2025, 2:00 PM ET: Making Digital Services Accessible: Why Inclusive Design Matters for Everyone, with Joe Oakhart, Principal Software Engineer, Nava
July 10, 2025, 2:00 PM ET: Community Engagement for Public Professionals: Communicating Scientific and Technical Information to Policymakers and the Public, with Deborah Stine, Founder and Chief Instructor, Science and Technology Policy Academy
July 17, 2025, 2:00 PM ET: Designing AI with Humans in Mind: Insights on Inclusion, Productivity, and Strategy, with Jamie Kimes, Founder, The Idea Garden, and Josh Martin, Former Chief Data Officer, State of Indiana
July 29, 2025, 2:00 PM ET: Making Homelessness Rare and Brief: Lessons from the Built for Zero Backbone Strategy, with Melanie Lewis Dickerson, Director, Large-Scale Change, Community Solutions
For more information on workshops, visit https://innovate-us.org/workshops
Governing AI
US Senate Drops Proposed Moratorium on State AI Laws in Budget Vote
“Early Tuesday morning, the United States Senate voted 99-1 to pass an amendment to the budget bill removing the proposed 10-year moratorium on the enforcement of state laws on artificial intelligence. The introduction of the amendment, put forward by Senators Marsha Blackburn (R-TN) and Maria Cantwell (D-WA), signaled the failure of a compromise between Blackburn and Sen. Ted Cruz (R-TX) that would have reduced the duration of the moratorium and adjusted its language.”
Read article
Anthropic wins key US ruling on AI training in authors' copyright lawsuit
“A federal judge in San Francisco ruled late on Monday that Anthropic's use of books without permission to train its artificial intelligence system was legal under U.S. copyright law. Siding with tech companies on a pivotal question for the AI industry, U.S. District Judge William Alsup said Anthropic made ‘fair use’ of books by writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson to train its Claude large language model. Alsup also said, however, that Anthropic's copying and storage of more than 7 million pirated books in a ‘central library’ infringed the authors' copyrights and was not fair use. The judge has ordered a trial in December to determine how much Anthropic owes for the infringement.”
Read article
Bipartisan bill aims to block Chinese AI from federal agencies
“Legislation introduced Wednesday in Congress would block Chinese artificial intelligence systems from federal agencies as a bipartisan group of lawmakers pledged to ensure that the United States would prevail against China in the global competition over AI. ‘The future balance of power may very well be determined by who leads in AI.’ About five months ago, a Chinese technology startup called DeepSeek introduced an AI model that rivaled platforms from OpenAI and Google in performance, but cost only a fraction to build. This raised concerns that China was catching up to the U.S. despite restrictions on chips and other key technologies used to develop AI.”
Read article
Denmark to tackle deepfakes by giving people copyright to their own features
“The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice…it would strengthen protection against digital imitations of people’s identities with what it believes to be the first law of its kind in Europe. It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.”
Read article
AI for Governance
AI-enhanced nudging in public policy: why to worry and how to respond
“This paper discusses how AI-enhanced personalization can help make nudges more means paternalistic and thus more respectful of people’s ends. We explore the potential added value of AI by analyzing to what extent it can (1) help identify individual preferences and (2) tailor different nudging techniques to different people based on variations in their susceptibility to those techniques. However, we also argue that the successes booked in this respect in the for-profit sector cannot simply be replicated in public policy. While AI can bring benefits to means paternalist public policy nudging, it also has predictable downsides (lower effectiveness compared to the private sector) and risks (graver consequences compared to the private sector). We discuss the practical implications of all this and propose novel strategies that both consumers and regulators can employ to respond to private AI use in nudging with the aim of safeguarding people’s autonomy and agency.”
Read article
AI as Governance
“Political scientists have had remarkably little to say about artificial intelligence (AI), perhaps because they are dissuaded by its technical complexity and by current debates about whether AI might emulate, outstrip, or replace individual human intelligence. They ought to consider AI in terms of its relationship with governance. Existing large-scale systems of governance such as markets, bureaucracy, and democracy make complex human relations tractable, albeit with some loss of information. AI's major political consequences can be considered under two headings. First, we may treat AI as a technology of governance, asking how AI's capacities to classify information at scale affect markets, bureaucracy, and democracy. Second, we might treat AI as an emerging form of governance in its own right, with its own particular mechanisms of representation and coordination. These two perspectives reveal new questions for political scientists, encouraging them to reconsider the boundaries of their discipline.”
Read article
AI and Education
Productive Struggle: How Artificial Intelligence Is Changing Learning, Effort, and Youth Development in Education
This new report explores AI’s impact on learning, diving into cognitive science, as well as more recent research to examine when AI might have benefits in scaling productive struggle and when AI might unintentionally exacerbate problematic practices. “AI is rapidly advancing… yet, its development does not override cognitive science and pedagogical research showing that students learn when they are challenged, supported, and given opportunities to reflect. This dynamic, often called ‘productive struggle,’ remains fundamental in learning… This report…moves beyond polarized debates of ‘is AI good or bad?’ and instead dwells in the murkier, more consequential space where nuance lives. By weaving together evidence from the science of learning, capabilities of emerging technology, and early empirical research, this report explores the blurry boundaries where AI can amplify effective teaching and learning, and where it risks undercutting them.”
Read article
AI and Public Safety
ICE Is Using a New Facial Recognition App to Identify People, Leaked Emails Show
“Immigration and Customs Enforcement (ICE) is using a new mobile phone app that can identify someone based on their fingerprints or face by simply pointing a smartphone camera at them... The underlying system used for the facial recognition component of the app is ordinarily used when people enter or exit the U.S. Now, that system is being used inside the U.S. by ICE to identify people in the field. The news highlights the Trump administration’s growing use of sophisticated technology for its mass deportation efforts and ICE’s enforcement of its arrest quotas…”
Read article
AI and Labor
AI Killed My Job: Tech workers
“‘What will AI mean for jobs?’ may be the single most-asked question about the technology category that dominates Silicon Valley, pop culture, and our politics… Meanwhile, tech executives are pouring fuel on the flames. Dario Amodei, the CEO of Anthropic, claims that AI products like his will soon eliminate half of entry level white collar jobs, and replace up to 20% of all jobs, period…There’s no doubt that lots of firms are investing heavily in AI and trying to use it to improve productivity and cut labor costs…I heard from workers who recounted how managers used AI to justify laying them off, to speed up their work, and to make them take over the workload of recently terminated peers… Today, we’ll begin by looking at how AI is killing jobs in the tech industry.”
Read article
Entry level jobs fall by nearly a third since ChatGPT launch
“The number of entry level jobs, comprised of junior positions, graduate roles and apprenticeships, has fallen by almost a third (31.9 per cent) since the arrival of ChatGPT, research shows. Job search site Adzuna found that vacancies looking for graduates had fallen to the lowest level since Covid, with entry level jobs now only accounting for a quarter of the total market, down from 28.9 per cent in 2022. While replacing entry-level roles with artificial intelligence taking on tasks is part of the picture, rising labour costs - including increased National Insurance contributions - are also a factor, with rising salaries outstripping inflation until recently.”
Read article
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.