
News That Caught Our Eye #64
Published by Beth Simone Noveck and Angelique Casem on June 25, 2025
In the news this week: Governor Murphy leads a panel with John Bailey, Afua Bruce and Martin Svensson of AI Sweden on “What Could Go Right with Our AI Future.” Research from Apple reveals that advanced AI models suffer “complete accuracy collapse.” India seeks public input for the 2026 AI Impact Summit. The Newsroom AI Lab funds smaller newsrooms to adopt AI tools, and more on federal efforts to ban states from regulating AI. Read more in this week’s AI News That Caught Our Eye.
In the news this week
- Governing AI: Setting the rules for a fast-moving technology
- AI and Public Engagement: Bolstering participation
- AI and Problem Solving: Research, applications and technical breakthroughs
- AI Infrastructure: Computing resources, data systems and energy use
Upcoming Events
July 8, 2025, 2:00 PM ET: AI Regulation Across Borders: Who’s Setting the Rules—and Why It Matters, with Vance Ricks, Teaching Professor, Northeastern University
July 9, 2025, 2:00 PM ET: Making Digital Services Accessible: Why Inclusive Design Matters for Everyone, with Joe Oakhart, Principal Software Engineer, Nava
July 10, 2025, 2:00 PM ET: Community Engagement for Public Professionals: Communicating Scientific and Technical Information to Policymakers and the Public, with Deborah Stine, Founder and Chief Instructor, Science and Technology Policy Academy
July 17, 2025, 2:00 PM ET: Designing AI with Humans in Mind: Insights on Inclusion, Productivity, and Strategy, with Jamie Kimes, Founder, The Idea Garden, and Josh Martin, Former Chief Data Officer, State of Indiana
For more information on workshops, visit https://innovate-us.org/workshops
Special Announcements
Event: IEEE Workshop on Trustworthy and Privacy-Preserving Human-AI Collaboration - TPHAC, November 2025
“This workshop explores the evolving relationship between humans and AI systems, with a focus on fostering trustworthy and privacy-preserving collaboration. We invite contributions that bridge the gap between machine intelligence and human understanding, particularly in shared decision-making scenarios. The workshop promotes the development of adaptive, hybrid, and emerging AI systems that respond to dynamic contexts while respecting human agency and enhancing human capabilities.”
Governing AI
WATCH: Governor Murphy’s “What Could Go Right with Our AI Future” Panel at State AI Leaders Conference
At the State AI Leaders Event, three dozen states, cities and tribes came together with researchers, technologists and designers to discuss the future of AI in the public interest. Governor Philip D. Murphy of New Jersey led a keynote discussion on global and local strategies for empowering public institutions in the AI era.
Read article
California Report on Frontier AI Policy
This report provides a “framework for policymaking on the frontier of AI development…it examines the best available research on foundation models and outlines policy principles grounded in this research that state officials could consider in crafting new laws and regulations that govern the development and deployment of frontier AI in California.”
Read article
Advanced AI suffers ‘complete accuracy collapse’ in face of complex problems, study finds
“Apple said in a paper published at the weekend that large reasoning models (LRMs) – an advanced form of AI – faced a ‘complete accuracy collapse’ when presented with highly complex problems. It found that standard AI models outperformed LRMs in low-complexity tasks, while both types of model suffered ‘complete collapse’ with high-complexity tasks. Large reasoning models attempt to solve complex queries by generating detailed thinking processes that break down the problem into smaller steps. The study, which tested the models’ ability to solve puzzles, added that as LRMs neared performance collapse they began ‘reducing their reasoning effort’. The Apple researchers said they found this ‘particularly concerning’.”
Read article
How Some of China’s Top AI Thinkers Built Their Own AI Safety Institute
“While China has been investing heavily in AI development and deployment, it has also begun to talk more concretely about catastrophic risks from frontier AI and the need for international coordination.” The Carnegie Endowment reports on the China AI Safety and Development Association: “The establishment of CnAISDA presents promise for global AI governance, elevating experts who appear genuinely concerned about catastrophic AI risks and are motivated to build common international standards to reduce them.”
Read article
AI and Public Engagement
Public Consultation - Help Shape the AI Impact Summit 2026
“India is proud to host the AI Impact Summit 2026… the Summit will mark a strategic shift, from action to measurable impact, in global AI cooperation…advancing inclusive growth, social development, and a healthier planet…To ensure the Summit is grounded in public priorities and stakeholder insight, the Government of India invites inputs from citizens, students, researchers, startups, civil society, and domain experts. [The] feedback will help shape the themes, focus areas, and key deliverables of the Summit.”
Read article
Intelligence in the Public Interest
“Aspen Digital is leveraging AI benchmarking as a new way to include community voice in AI development…Our approach is grounded in a simple idea: make it as easy as possible to do the right thing by translating public needs into the language that AI developers already understand. By defining new benchmarks that measure success in terms of impact on the United Nations Sustainable Development Goals (SDGs), we can realign incentives, encouraging researchers and engineers to build systems that tackle real-world problems and deliver tangible public value.”
Read article
AI and Problem Solving
Using digital technology for democratic resilience, transformation and impact
“Westminster Foundation for Democracy’s (WFD) Democratic Resilience in a Digital World Programme… aimed to build WFD’s evidence base on if, how and why digital approaches to democracy support can make a meaningful difference, and likewise, what to avoid. It sought to understand how digital technologies can enhance democratic processes and how to do so effectively while avoiding unintended consequences, as well as how WFD should respond to digital threats to democracy. Activities were grouped into three categories: pilot projects, real-time learning, and research.”
Read article
Why PeaceTech must be the next frontier of innovation and investment
“In the United States, the 2025 defense budget climbed to over $895 billion—one of the largest increases in peacetime history…nations are investing in weapons because others are. In this climate of fear, alliances are being redefined. [PeaceTech] is the use of technology to save human lives, prevent conflict, de-escalate violence, rebuild fractured communities, and secure fragile peace in post-conflict environments. PeaceTech and defense must work hand in hand to develop the most effective technologies—not just to prevent conflict, but to build stability and save lives.”
Read article
AI Infrastructure
Hacks/Hackers launches new lab to empower newsrooms to build AI tools
“Hacks/Hackers is launching a Newsroom AI Lab to support smaller newsrooms in evaluating, adopting and implementing large language models and other recent technologies, supported by a $300,000 grant from the Patrick J. McGovern Foundation. The Hacks/Hackers Newsroom AI Lab will build lasting technical capacity in participating newsrooms through hands-on collaboration, structured technical support and development of new AI tools and templates designed specifically for journalism that can be used by any newsroom.”
Read article
Special Topics
State AI Ban
The proposed 10-year federal moratorium on state AI laws has cleared a crucial procedural hurdle in the US Senate. Proponents argue that a patchwork of conflicting state regulations will stifle innovation, increase costs and burden smaller companies. Critics, including over 140 organizations opposing the measure, argue it would give AI companies unprecedented license to cause social harm without accountability. As journalist Karen Hao notes, the measure would "enshrine the impunity of Silicon Valley into law." The debate may be missing the fundamental point, focusing too narrowly on restrictions rather than democratic innovation.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.