News That Caught Our Eye #73

Published on August 28, 2025

Summary

Google reports the median Gemini text prompt consumes less energy than watching nine seconds of TV and uses only about five drops of water — though experts stress AI’s environmental footprint still needs oversight. In governance news, the Cherokee Nation passed an AI policy to safeguard its language and culture, NIST is seeking public input on AI security frameworks, and Colorado lawmakers voted to delay implementing the state’s AI law. Dozens of state attorneys general also warned tech companies they’ll be held accountable if AI harms children. Abroad, the UK announced plans for an AI-powered crime map to predict and prevent violent incidents by 2030, while the UAE is pitching its AI infrastructure as an alternative to the US and China. Read more in this week’s News That Caught Our Eye.

Upcoming InnovateUS Workshops

InnovateUS delivers no-cost, at-your-own-pace, and live learning on data, digital, innovation, and AI skills. Designed for civic and public sector professionals, the programs are free and open to all.

AI Fundamentals for Public Safety
September 4, 2025, 2:00 PM ET 
Mark Genatempo, Fellow, Rutgers University’s Miller Center for Community Protection and Resilience.
Sign up here.

Human in the Loop: Keeping People at the Center of AI
September 8, 2025, 2:00 PM ET
Sonita Singh, President and CEO, I-STARRT 
Sign up here.

AI in Action: Use Cases and Capabilities in Law Enforcement
September 10, 2025, 2:00 PM ET
Luis Tomlinson, Unit Head of the Communication Infrastructure Unit, New Jersey State Police; Ergin Orman, Detective Sergeant First Class, Internet Crimes Against Children Unit, New Jersey State Police
Sign up here.

Why Public Engagement? Why AI?
September 11, 2025, 2:00 PM ET
Danielle Allen, Director of the Allen Lab for Democracy Renovation, Harvard Kennedy School; Beth Simone Noveck, Founder of InnovateUS and Chief AI Strategist, New Jersey
Sign up here.

Other Events

PEOPLE POWERED: AI for Digital Democracy: New Guidance from Global Case Studies - Tuesday, September 16, 2025, 9:00 AM-10:30 AM
Everyone’s talking about AI, but what does it truly mean for public participation? Join us to go beyond buzzwords and explore how to apply AI in your participatory programs. Sign up here.

Technologists for the Public Good presents For The Public: North Carolina
The two-day event in Raleigh, November 14–15, 2025, will feature a thoughtful mix of presentations, talks, workshops, and connection-building moments. Admission is charged.
Sign up here.

New Workshop Series

Curated by experts and focused on specific themes, these workshops are free. Attend one or all.

AI for Law Enforcement
Aimed at law enforcement and public safety professionals, this series builds foundational knowledge and best practices for responsible AI deployment in policing. Hosted by the State of NJ and the Rutgers Miller Center on Policing and Community. Begins September 4.
Sign up here

Public Engagement for the AI Era
Learn how to design effective and efficient AI-enhanced citizen engagement that translates public input into meaningful outcomes. Hosted by Reboot Democracy and the Allen Lab for Democracy Renovation at Harvard. Begins September 11.
Sign up here

Amplify: Mastering Public Communication in the AI Age
Explore how AI tools—when used responsibly and transparently—can strengthen communication, broaden outreach, and counter disinformation. Hosted by former New York Times Executive Editor Jill Abramson and John Wihbey, Director of the AI-Media Strategies Lab (AIMES Lab) at Northeastern University. Begins October 7.
Sign up here.

Governing AI

Colorado lawmakers abandon special session effort to tweak AI law, will push back start date to June 2026

Jesse Paul and Taylor Dolven on August 25, 2025 in The Colorado Sun

“Colorado lawmakers Monday abandoned an effort to tweak Colorado’s first-in-the-nation artificial intelligence law after it became clear that five days of intense negotiations between Democrats, the tech industry, consumer advocacy groups and unions wouldn’t yield any results. Instead, Senate Majority Leader Robert Rodriguez, D-Denver, amended Senate Bill 4 — which would have rewritten the law — to push the start date of the policy back to June 30, 2026, from February. That will give the legislature a chance to make changes once it returns to the Capitol in January for its regular, 120-day lawmaking term. The decision came after a tentative deal reached Sunday among consumer advocates, some in the tech industry and others on how to move forward fell apart.”

Read article

Governing AI

Attorneys General warn tech giants not to harm kids with AI

Mark Huffman on August 26, 2025 in Consumer Affairs

“A coalition of 44 state attorneys general has issued a stern warning to major technology companies, pledging to use their full authority to hold them accountable if artificial intelligence products endanger children. In a strongly worded letter addressed to CEOs of leading AI and social media firms, the state officials expressed concern over recent revelations that Meta Platforms’ AI assistants were approved to flirt and roleplay romantically with children as young as eight. Internal documents revealed the company allowed bots to engage in behavior that the officials argue would be criminal if committed by a human. ‘We are uniformly revolted by this apparent disregard for children’s emotional well-being’ the letter stated, adding that such conduct ‘appears to be prohibited by our respective criminal laws.’”

Read article

Governing AI

Inclusive and Secure Artificial Intelligence: A Global Perspective on Policy and Technical Developments

Saiph Savage and Lili Savage on August 27, 2025 in Institut für Auslandsbeziehungen

“Artificial Intelligence (AI) is increasingly embedded in global infrastructures—from governance and education to healthcare and communication—raising urgent concerns about representation, equity, and inclusion. As AI systems are developed and deployed, they often reflect and reinforce dominant cultural norms, marginalizing non-Western languages, epistemologies, and communities. This report explores the systemic risks associated with algorithmic bias, digital colonialism, and cultural homogenization, while also highlighting promising interventions through inclusive design and policy. It presents practical tools—such as cultural impact assessments, fairness-aware auditing, and participatory AI development—to adapt existing systems for diverse contexts. The report calls for strong cross-regional collaboration to ensure AI governance supports cultural sustainability, digital sovereignty, and social justice, placing culture at the core of ethical and inclusive AI futures.”

Read article

AI for Governance

Google’s ‘Gemini for Government’ offers AI platform to federal agencies for 47 cents

Miranda Nazzaro and Rebecca Heilweil on August 21, 2025 in FedScoop

“Google will make its Gemini AI models and tools available to the federal government for less than 50 cents through a new General Services Administration deal, making the company the latest to offer its technology to agencies at just a marginal cost. Google, which announced the launch of ‘Gemini for Government’ on Thursday, said the tool is a ‘complete AI platform’ that will include high-profile Gemini models. The new government-focused product suite comes as other AI companies — including xAI, Anthropic, and OpenAI — begin to offer similar public sector versions of their enterprise AI products. Unlike those other companies, though, Google already has an extensive federal government cloud business.”

Read article

AI for Governance

Cherokee Nation Shows How AI Governance Can Be Sovereign

Ron Schmelzer on August 17, 2025 in Forbes

“While big tech and large governments experiment with guardrails, the Cherokee Nation is constructing a model rooted in centuries of tradition and sovereignty…The Cherokee.gov AI Agent, as part of a website refresh launching this year, includes an AI assistant that helps citizens navigate services and applications. The expected result is higher completion rates, 24/7 accessibility, and a reduction in friction for rural citizens. In a unique blending of cultural continuity and conservation, the Nation is using AI-driven scanning to replicate turtle shells traditionally used in ceremonies, without harming wildlife by using 3D printed shells. These examples underscore a central point. AI is not replacing people, it is extending tradition and sovereignty into the digital domain.”

Read article

AI and Public Engagement

How Californians Feel About AI – Findings From the 2025 AI Compass

News Staff on August 19, 2025 in TechEquity

“Earlier this spring, TechEquity commissioned a study… to gain a deeper understanding of how Californians are thinking and feeling about artificial intelligence…A supermajority of Californians have significant concerns about AI and want government to create guardrails on AI tools and the companies that build them. Clear majorities of respondents are concerned about AI-fueled job loss, wage stagnation, privacy violations, and discrimination…55% of Californians are more concerned about future AI advancements than excited, while only 33% are more excited than concerned—a 22% difference. Nearly half (48%) think AI is advancing too fast, while only 32% believe it is advancing at the right pace.”

Read article

AI and Public Engagement

NIST seeks input on control overlays for securing AI systems

David Jones on August 18, 2025 in Cybersecurity Dive

“The National Institute of Standards and Technology wants public feedback on a plan to develop guidance for how companies can implement various types of artificial intelligence systems in a secure manner. NIST on Thursday released a concept paper about creating control overlays for securing AI systems based on the agency’s widely used SP 800-53 framework. The overlays are designed to help ensure that companies implement AI in a way that maintains the integrity and confidentiality of the technology and the data it uses in a series of different test cases. The agency also created a Slack channel to collect community feedback on the development of the overlays.”

Read article
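
To make the idea of a “control overlay” concrete, here is a minimal, purely illustrative sketch: an overlay selects SP 800-53 controls and attaches supplemental guidance for AI systems. The control IDs below (SI-4, AC-3, SA-11) are real SP 800-53 controls, but the structure and guidance text are our own assumptions, not NIST’s draft.

# Illustrative sketch of a control overlay: a named profile that selects
# SP 800-53 controls and attaches AI-specific supplemental guidance.
# The structure and guidance text are assumptions, not NIST's draft overlays.
overlay = {
    "name": "Example overlay: generative AI assistant (hypothetical)",
    "baseline": "NIST SP 800-53 moderate baseline",
    "controls": [
        {"id": "SI-4",   # System Monitoring
         "guidance": "Log prompts and outputs needed to detect misuse while "
                     "preserving the confidentiality of user data."},
        {"id": "AC-3",   # Access Enforcement
         "guidance": "Restrict which roles may query the model and which data "
                     "sources it is allowed to retrieve from."},
        {"id": "SA-11",  # Developer Testing and Evaluation
         "guidance": "Require adversarial (red-team) testing of the model "
                     "before deployment."},
    ],
}

for control in overlay["controls"]:
    print(f'{control["id"]}: {control["guidance"]}')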

AI Infrastructure

Measuring the environmental impact of delivering AI at Google Scale

Cooper Elsworth et al. on August 21, 2025 in arXiv

“The transformative power of AI is undeniable - but as user adoption accelerates, so does the need to understand and mitigate the environmental impact of AI serving. However, no studies have measured AI serving environmental metrics in a production environment. This paper addresses this gap by proposing and executing a comprehensive methodology for measuring the energy usage, carbon emissions, and water consumption of AI inference workloads in a large-scale, AI production environment…Through detailed instrumentation of Google's AI infrastructure for serving the Gemini AI assistant, we find the median Gemini Apps text prompt consumes 0.24 Wh of energy - a figure substantially lower than many public estimates…While these impacts are low compared to other daily activities, reducing the environmental impact of AI serving continues to warrant important attention. Towards this objective, we propose that a comprehensive measurement of AI serving environmental metrics is critical for accurately comparing models, and to properly incentivize efficiency gains across the full AI serving stack.”

Read article
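
For readers who want to sanity-check the “nine seconds of TV” comparison in this week’s summary, here is a minimal back-of-envelope sketch. Only the 0.24 Wh per median prompt comes from the paper; the ~100 W television draw and ~0.05 mL per water drop are assumptions made here for illustration.

# Back-of-envelope check of the TV-time and water-drop comparisons.
# PROMPT_ENERGY_WH comes from the quoted paper; the other two figures are
# assumptions made here for illustration only.
PROMPT_ENERGY_WH = 0.24   # median Gemini Apps text prompt (per the paper)
TV_POWER_W = 100.0        # assumed power draw of a typical television
DROP_VOLUME_ML = 0.05     # assumed volume of a single drop of water

tv_seconds = PROMPT_ENERGY_WH / TV_POWER_W * 3600  # Wh -> seconds of viewing
print(f"Equivalent TV time: {tv_seconds:.1f} seconds")  # ~8.6 s, under nine seconds

implied_water_ml = 5 * DROP_VOLUME_ML  # the summary's "five drops"
print(f"Implied water per prompt: {implied_water_ml:.2f} mL")  # ~0.25 mL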

AI and International Relations

How the UAE Is Betting Big on AI to Expand Its Global Influence

Alainna Liloia on August 20, 2025 in Tech Policy Press

“As the US and China compete for AI dominance, the UAE is leveraging its investments in AI technologies to make itself indispensable – and untouchable – to both allies and adversaries. Across the Gulf, countries are prioritizing AI development to boost their economies and to position themselves favorably relative to the United States, China, and other global powers. The UAE and Saudi Arabia are leading the charge, investing in data centers and partnering with tech giants like OpenAI, Microsoft, Google, and Amazon Web Services. Presenting itself as a leading voice in artificial intelligence globally, the UAE is using AI as a means to amass soft power and bolster the country’s brand as a modern state at the forefront of technological innovation. Emirati leaders view AI technologies as a new resource to develop and deploy, ensuring that their offerings are irresistible to global partners.”

Read article

AI and Public Safety

AI to help police catch criminals before they strike

Department for Science, Innovation and Technology and The Rt Hon Peter Kyle MP on August 15, 2025 in GOV.UK

“Criminals hell bent on making others’ lives a misery face being stopped before they can strike through cutting edge mapping technology, supported by AI, to be rolled out by 2030, Technology Secretary Peter Kyle has announced today... Innovators have been tasked with developing a detailed real time and interactive crime map that spans England and Wales and can detect, track and predict where devastating knife crime is likely to occur or spot early warning signs of anti-social behaviour before it spirals out of control – giving police the intel they need to step in and keep the public safe. It will be rooted in advanced AI that will examine how to bring together data shared between police, councils and social services, including criminal records, previous incident locations and behavioural patterns of known offenders. The map will identify where crime is concentrating so law enforcement and partners can direct their resources as needed and help prevent further victims.”

Read article