News That Caught Our Eye #49

In the news this week: Tech groups warn that budget cuts to the National Institute of Standards and Technology could undermine US AI leadership, urging officials to preserve the agency's critical role in standards development. Meanwhile, the Brennan Center reports that while AI didn't massively disrupt 2024 elections, deepfakes and AI-generated disinformation are slowly undermining public trust, creating a landscape where truth becomes increasingly contested. Kenya's Special Envoy highlights how basic infrastructure gaps in internet and energy must be addressed before AI benefits can be equitably distributed worldwide. An Elon University study reveals AI is experiencing one of the fastest technology adoption rates in history, with Hispanic adults (66%) and Black adults (57%) more likely than White adults (47%) to use AI language tools - challenging typical tech adoption patterns. A CDT report shows workers strongly support greater transparency in workplace AI surveillance and limits on productivity monitoring that could harm mental health. Read more in this week's AI News That Caught Our Eye.

Angelique Casem

Dane Gambrell

AI and Elections

Gauging the AI Threat to Free and Fair Elections - Brennan Center for Justice, Shanze Hasan and Abdiaziz Ahmed, March 6, 2025

An article from the Brennan Center finds that while AI did not play a major role in disrupting the 2024 elections, deepfakes and AI-generated disinformation are steadily undermining public trust in elections. Notable incidents include robocalls impersonating President Biden, Russian-created deepfakes of Kamala Harris, and similar cases worldwide. The long-term danger is a landscape where truth becomes contested, enabling bad actors to dismiss real evidence as fake. Solutions require transparency through watermarking AI content, platform accountability, oversight of encrypted messaging apps, and ethical guidelines for AI developers modeled on those in the healthcare and finance industries. Without coordinated action, AI-fueled deception threatens to become normalized in politics.

Generative AI, Democracy and Human Rights - Centre for International Governance Innovation, David Evan Harris and Aaron Shull, February 2025

This policy brief discusses the growing threat that AI-generated disinformation campaigns pose to elections, democratic processes, and human rights. It notes that the rise of sophisticated AI tools capable of creating convincing fake content is occurring alongside a decline in trust and safety measures at major tech companies. This combination creates a dangerous environment where elections and individual freedoms, particularly freedom of thought, are at risk. To counter these threats, the brief argues that AI companies should be held accountable for harms caused by their products, and that policies such as watermarking AI-generated content and banning AI impersonation should be implemented. It highlights Europe's Digital Services Act and AI Act as examples of proactive policymaking and emphasizes the need for governments to protect the information ecosystem and promote transparency on social media platforms. Ultimately, it calls for a shift toward democratizing social media and ensuring that AI serves the needs and protects the rights of individuals.

Governing AI

NIST Cuts Would Put US Behind AI Eightball, Tech Groups Warn Commerce Secretary - FedScoop, Matt Bracken, March 10, 2025

“U.S. leadership in artificial intelligence would be compromised by cuts to the National Institute of Standards and Technology, top tech trade associations warned in a letter sent Monday to Commerce Secretary Howard Lutnick. In the letter, the tech groups praised NIST’s AI work that began during President Donald Trump’s first term, making the case to Lutnick that the agency ‘has proven to be a critical enabler for the U.S. government and U.S. industry to maintain AI leadership globally.’ As the Trump administration slashes jobs across the federal government, the letter writers want NIST to continue ‘to play a vital role in advancing American leadership in … AI innovation’ by pursuing ‘a strategy that leverages NIST’s leadership and expertise on standards development, voluntary frameworks, public-private sector collaboration, and international harmonization.’”

AI for Governance

Agency AI Use Case Inventories Must Stay, Groups Tell Trump Officials - FedScoop, Matt Bracken, March 7, 2025 

“A letter signed by 18 civil society organizations implores the Office of Management and Budget and the Office of Science and Technology Policy heads to make sure agencies continue to maintain and update their AI inventories. The letter noted that the AI use case initiative began during President Donald Trump’s first term in office. A 2020 AI executive order and subsequent OMB guidance ‘clearly recognized the opportunity to promote AI adoption through transparency and public trust,’ the letter stated, adding that use case inventories were ‘a pillar’ of the government’s AI policies in Trump’s first term. Inventorying AI use cases across agencies in the years after Trump’s order was largely inconsistent, resulting in a patchwork tracking system that lacked standardization. The Government Accountability Office found that most AI use case inventories were ‘not fully comprehensive and accurate.’”

The Dangers Of Automated Governance: Can AI Lead A Nation? - Forbes, Cornelia C. Walther, March 11, 2025

Wharton Fellow Cornelia Walther explores the double-edged potential of AI in governance, arguing that while AI can enhance decision-making efficiency, it lacks the human capacities for ethical judgment, adaptability during crises, and emotional intelligence. Rather than full automation, the author advocates for a hybrid approach where human leaders develop both "human literacy" and "algorithmic literacy." As she points out: "AI does not create new societal challenges; it scales the ones we already have... Ultimately we cannot expect the technology of tomorrow to live up to values that today's humans do not manifest. Garbage in, garbage out."

Aligning Urban AI and Global AI Governance: Insights from a Paris AI Action Summit Side Event - The GovLab Blog, Can Simsek, Stefaan Verhulst, Sara Marcucci, Roshni Singh, March 7, 2025

The GovLab team highlights cities as crucial pioneers in AI governance, noting that municipal policies often go on to shape national and global frameworks. They argue that Urban AI has emerged as essential for managing migration, supporting sustainability, and preventing centralized control, with cities leveraging their procurement power and infrastructure to influence how AI is implemented. While many cities are experimenting with participatory governance models that involve residents in AI decision-making, these efforts require stronger institutional support and better alignment between local initiatives and broader regulatory frameworks like the EU AI Act.

USPTO Withdraws Its Former Artificial Intelligence Strategy Document - JD Supra, Mark Baghdassarian, Aaron Frankel, Zehra Jafri, March 12, 2025

Kramer Levin reports that the United States Patent and Trademark Office has withdrawn its recently published AI strategy document, which emphasized “safe, secure, and trustworthy” AI development. The Trump administration is shifting the emphasis to America’s global AI dominance in order to “promote human flourishing, economic competitiveness, and national security.”

AI and Public Engagement

Announcing the Youth Engagement Toolkit for Responsible Data Reuse: An Innovative Methodology for the Future of Data-Driven Services - The Data Tank Medium, Elena Murray, Moiz Shaikh, Dr. Stefaan G. Verhulst, February 27, 2025

“Young people seeking essential services often are asked to share their data without having a say in how it is used or for what purpose… a lack of trust in data collection and usage may result in young people choosing not to seek services at all or withholding critical information out of fear of misuse. This risks deepening existing inequalities rather than addressing them. Based on a methodology developed and piloted during the NextGenData project, the Youth Engagement Toolkit is designed to provide a step-by-step guide on how to implement an innovative methodology for responsible data reuse in improving service, engage young people in decision-making by amplifying their voice, agency and preferences in how their data is used, and foster collaboration by bringing youth, service providers, and policymakers together to co-design solutions.”

Close Encounters of the AI Kind: The Increasingly Human-Like Way People are Engaging with Language Models - Elon University, Lee Rainie, March 2025

“Half of Americans now use artificial intelligence (AI) large language models like ChatGPT, Gemini, Claude, and Copilot. Since the launch of ChatGPT on Nov. 30, 2022, the spread of LLM usage in this country represents one of the fastest, if not the fastest, adoption rates of a major technology in history. The growth and spread of these AI systems in the U.S. population are especially striking for their diversity. Younger, well-educated, relatively wealthy, and employed adults are somewhat more likely than others to be using LLMs now. Yet, it is also the case that half of those living in households earning less than $50,000 (53%) use the tools. Moreover, Hispanic adults (66%) and Black adults (57%) are more likely than White adults (47%) to be LLM users.”

Large AI Models Are Cultural and Social Technologies - Science, Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans, March 13, 2025

“Debates about artificial intelligence (AI) tend to revolve around whether large models are intelligent, autonomous agents. Some AI researchers and commentators speculate that we are on the cusp of creating agents with artificial general intelligence (AGI), a prospect anticipated with both elation and anxiety. There have also been extensive conversations about cultural and social consequences of large models, orbiting around two foci: immediate effects of these systems as they are currently used, and hypothetical futures when these systems turn into AGI agents—perhaps even superintelligent AGI agents. But this discourse about large models as intelligent agents is fundamentally misconceived. Combining ideas from social and behavioral sciences with computer science can help us to understand AI systems more accurately. Large models should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated.”

AI and Problem Solving

AI Could Supercharge Human Collective Intelligence in Everything from Disaster Relief to Medical Research - The Conversation, Hao Cui and Taha Yasseri, March 3, 2025 

Artificial Intelligence is enhancing human collective intelligence across multiple domains. In disaster scenarios, AI-controlled drones and robots survey damage, process data, and deliver supplies to inaccessible areas, helping emergency teams prioritize efforts. The article explores how AI augmentation works by processing vast datasets quickly, automating physical tasks, improving information exchange, and facilitating collaboration. It touches on real-world applications that already exist in disaster response, healthcare, media, public policy, and environmental protection, noting that AI should be viewed as a collaborator rather than a competitor to humans.

AI Infrastructure

Who Gets to Build AI? Tackling the Gaps in Infrastructure, Data, and Governance - Apolitical, Ula Rutkowska, Christina Obolenskaya, and Phillip Thigo, March 11, 2025

An interview with Kenya's Special Envoy on Technology highlights the significant challenges nations in the Global South face in equitable AI development. These hurdles include deficits in fundamental infrastructure like internet connectivity and energy, shortages of digital and AI-specific skills, the need for adaptable governance frameworks, and difficulties in securing appropriate financing. The Envoy, Phillip Thigo, stresses the necessity of investing in these foundational elements to ensure that the benefits of AI are not limited to a few well-resourced countries. He advocates for inclusive, trustworthy, and cooperative approaches to AI's future, warning that without these, existing inequalities could worsen. Thigo also shares examples of AI projects in Kenya focused on climate action and improving agricultural outcomes, illustrating the technology's potential when foundational needs are addressed.

AI Search Has A Citation Problem - Columbia Journalism Review, Klaudia Jaźwińska and Aisvarya Chandrasekar, March 6, 2025

“The Tow Center for Digital Journalism conducted tests on eight generative search tools with live search features to assess their abilities to accurately retrieve and cite news content, as well as how they behave when they cannot. We found that chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead; premium chatbots provided more confidently incorrect answers than their free counterparts; generative search tools fabricated links and cited syndicated and copied versions of articles; content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses; and more. Our observations are not just a ChatGPT problem, but rather recur across all the prominent generative search tools that we tested.”

AI and Education

Parents Need to Pay Attention to Artificial Intelligence, Too - Word in Black, Aziah Siid, March 11, 2025

As AI rapidly enters K-12 classrooms, concerns about racial bias and equity are growing, with experts like Columbia professor Ezekiel Dixon-Román emphasizing that parents must take an active role in oversight. "A lot of parents don't realize that we do have power. We all have the right of refusal... We have the right to refuse to be subject to these technologies," he states, highlighting both individual and collective parental influence. While companies pledge to create fair tools, the article suggests parents shouldn't leave AI implementation solely to schools, as existing racial disparities in AI systems could widen educational inequities without proper vigilance.

AI and Labor

What Do Workers Want? A CDT/Coworker Deliberative Poll on Workplace Surveillance and Datafication - Center for Democracy and Technology, Matt Scherer, March 6, 2025 

“The Center for Democracy & Technology (CDT) and Coworker.org collaborated on a project that explored workers’ perspectives on workplace surveillance through a unique Deliberative Polling approach…. The deliberations consisted of three sessions focusing on four topics: monitoring work-from-home employees, location tracking, productivity monitoring, and data rights. In the final post-deliberation survey, respondents showed strong support for proposals that would grant workers a right to greater transparency regarding employers’ surveillance and data collection practices, prohibit off-clock surveillance, limit location tracking, and bar employers from engaging in productivity monitoring that would harm workers’ mental or physical health. Moving forward, researchers should explore deliberation-centered methodologies further, both to determine workers’ organic views on key workplace policy issues…. and policymakers should recognize the urgent need for a regulatory framework addressing the datafication of workers.”

Free AI, Governance, and Democracy Learning Opportunities with InnovateUS

  • March 18, 2025, 2pm - Reimagining Public Institutions: Rethinking Leadership for Organizational Transformation, Christian Bason, Co-founder, Transition Collective and former Chief Executive, Danish Design Center

  • March 20, 2025, 2pm - Opportunities for Practical Federalism, Charles Keckler, Professor, Northeastern University

  • March 25, 2025, 4pm - Reading Challenges Revealed: AI Innovation in Dyslexia Assessment, Lizzie Jones, Program Director, The Learning Agency

  • March 26, 2025, 11am - Del Big Data a la IA: Liderazgo para una transición digital responsable en el sector público/From Big Data to AI: Leadership for a Responsible Digital Transition in the Public Sector, Santiago Garces, Chief Information Officer, City of Boston

To register and see more workshops, visit https://innovate-us.org/workshops

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.