
News That Caught Our Eye #49
Published by Angelique Casem
In the news this week: Tech groups warn that budget cuts to the National Institute of Standards and Technology could undermine US AI leadership, urging officials to preserve the agency's critical role in standards development. Meanwhile, the Brennan Center reports that while AI didn't massively disrupt 2024 elections, deepfakes and AI-generated disinformation are slowly undermining public trust, creating a landscape where truth becomes increasingly contested. Kenya's Special Envoy highlights how basic infrastructure gaps in internet and energy must be addressed before AI benefits can be equitably distributed worldwide. An Elon University study reveals AI is experiencing one of the fastest technology adoption rates in history, with Hispanic adults (66%) and Black adults (57%) more likely than White adults (47%) to use AI language tools, challenging typical tech adoption patterns. A CDT report shows workers strongly support greater transparency in workplace AI surveillance and limits on productivity monitoring that could harm mental health. Read more in this week's AI News That Caught Our Eye.
In the news this week
- AI and Elections: Free, fair and frequent
- Governing AI: Setting the rules for a fast-moving technology
- AI for Governance: Smarter public institutions through machine intelligence
- AI and Public Engagement: Bolstering participation
- AI and Problem Solving: Research, applications, technical breakthroughs
- AI Infrastructure: Computing resources, data systems and energy use
- AI and Education: Preparing people for an AI-driven world
- AI and Labor: Worker rights, safety and opportunity
AI and Elections
Gauging the AI Threat to Free and Fair Elections
An article from the Brennan Center finds that while AI did not play a major role in disrupting the 2024 elections, deepfakes and AI-generated disinformation are steadily undermining public trust in elections. Notable incidents include robocalls impersonating President Biden, Russian-made deepfakes of Kamala Harris, and similar cases worldwide. The long-term danger is a landscape where truth becomes contested, enabling bad actors to dismiss real evidence as fake. Solutions require transparency through watermarking AI content, platform accountability, oversight of encrypted messaging apps, and ethical guidelines for AI developers modeled on those in the healthcare and finance industries. Without coordinated action, AI-fueled deception threatens to become normalized in politics.
Read article
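Several pieces in this issue recommend watermarking or provenance labeling for AI-generated content. As a deliberately simplified illustration of what that can mean in practice, the sketch below signs a provenance record for a piece of generated text so a platform could later verify that the text is unmodified and came from the claimed model. Everything here is hypothetical: real schemes such as C2PA content credentials use asymmetric keys and embed the record in the media file itself, rather than a shared secret and a sidecar dictionary as shown.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider; a real scheme would use
# an asymmetric key pair so anyone can verify without being able to forge tags.
PROVIDER_KEY = b"demo-key-not-for-production"

def tag_content(text: str, model: str) -> dict:
    """Attach a signed provenance record to a piece of AI-generated text."""
    record = {"model": model, "sha256": hashlib.sha256(text.encode()).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_tag(text: str, record: dict) -> bool:
    """Check that the record is authentic and the text is unmodified."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and claimed["sha256"] == hashlib.sha256(text.encode()).hexdigest())

generated = "Example AI-generated statement."
tag = tag_content(generated, model="example-model-v1")
print(verify_tag(generated, tag))              # True: untampered
print(verify_tag(generated + " edited", tag))  # False: content was altered
```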
Generative AI, Democracy and Human Rights, February 2025
This policy brief discusses the increasing threat that AI-generated disinformation campaigns pose to elections, democratic processes and human rights. It notes that the rise of sophisticated AI tools capable of creating convincing fake content is occurring alongside a decline in trust and safety measures at major tech companies. This combination creates a dangerous environment in which elections and individual freedoms, particularly freedom of thought, are at risk. To counter these threats, the brief argues that AI companies should be held accountable for harms caused by their products, and that policies such as watermarking AI-generated content and banning AI impersonation should be implemented. It highlights Europe's Digital Services Act and AI Act as examples of proactive policymaking and emphasizes the need for governments to protect the information ecosystem and promote transparency on social media platforms. Ultimately, it calls for a shift toward democratizing social media and ensuring that AI serves the needs and protects the rights of individuals.
Read article
Governing AI
NIST Cuts Would Put US Behind AI Eightball, Tech Groups Warn Commerce Secretary
“U.S. leadership in artificial intelligence would be compromised by cuts to the National Institute of Standards and Technology, top tech trade associations warned in a letter sent Monday to Commerce Secretary Howard Lutnick. In the letter, the tech groups praised NIST’s AI work that began during President Donald Trump’s first term, making the case to Lutnick that the agency ‘has proven to be a critical enabler for the U.S. government and U.S. industry to maintain AI leadership globally.’ As the Trump administration slashes jobs across the federal government, the letter writers want NIST to continue ‘to play a vital role in advancing American leadership in … AI innovation’ by pursuing ‘a strategy that leverages NIST’s leadership and expertise on standards development, voluntary frameworks, public-private sector collaboration, and international harmonization.’”
Read article
AI for Governance
Agency AI Use Case Inventories Must Stay, Groups Tell Trump Officials
“A letter signed by 18 civil society organizations implores the Office of Management and Budget and the Office of Science and Technology Policy heads to make sure agencies continue to maintain and update their AI inventories. The letter noted that the AI use case initiative began during President Donald Trump’s first term in office. A 2020 AI executive order and subsequent OMB guidance ‘clearly recognized the opportunity to promote AI adoption through transparency and public trust,’ the letter stated, adding that use case inventories were ‘a pillar’ of the government’s AI policies in Trump’s first term. Inventorying AI use cases across agencies in the years after Trump’s order was largely inconsistent, resulting in a patchwork tracking system that lacked standardization. The Government Accountability Office found that most AI use case inventories were ‘not fully comprehensive and accurate.’”
Read article
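GAO's finding that inventories were "not fully comprehensive and accurate" points to a concrete remedy: a shared schema plus automated validation before entries are published. The sketch below is a hypothetical illustration, not the actual OMB schema; the field names and allowed values are ours.

```python
# Illustrative fields only; the real OMB inventory schema differs.
REQUIRED_FIELDS = {"agency", "use_case_name", "purpose", "stage", "contains_pii"}
ALLOWED_STAGES = {"planned", "piloting", "deployed", "retired"}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems with one inventory entry; empty means valid."""
    problems = [f"missing field: {field}" for field in REQUIRED_FIELDS - entry.keys()]
    if "stage" in entry and entry["stage"] not in ALLOWED_STAGES:
        problems.append(f"unknown stage: {entry['stage']!r}")
    return problems

entry = {
    "agency": "Department of Examples",
    "use_case_name": "Benefits triage assistant",
    "purpose": "Route incoming claims to the right reviewer",
    "stage": "piloting",
    "contains_pii": True,
}
print(validate_entry(entry) or "entry is valid")
```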
The Dangers Of Automated Governance: Can AI Lead A Nation?
Wharton Fellow Cornelia Walther explores the double-edged potential of AI in governance, arguing that while AI can enhance decision-making efficiency, it lacks the human capacities for ethical judgment, adaptability during crises, and emotional intelligence. Rather than full automation, the author advocates for a hybrid approach where human leaders develop both "human literacy" and "algorithmic literacy." As she points out: "AI does not create new societal challenges; it scales the ones we already have... Ultimately we cannot expect the technology of tomorrow to live up to values that today's humans do not manifest. Garbage in, garbage out."
Read article
Aligning Urban AI and Global AI Governance: Insights from a Paris AI Action Summit Side Event
The GovLab team highlights cities as crucial pioneers in AI governance. Cities create policies that later shape national and global frameworks. They argue that Urban AI has emerged as essential for managing migration, supporting sustainability, and preventing centralized control, with cities leveraging their procurement power and infrastructure to influence implementation. While many cities are experimenting with participatory governance models that involve residents in AI decision-making, these efforts require stronger institutional support and better alignment between local initiatives and broader regulatory frameworks like the EU AI Act.
Read article
USPTO Withdraws Its Former Artificial Intelligence Strategy Document
Kramer Levin reports that the United States Patent and Trademark Office has withdrawn its recently published AI strategy document, which emphasized “safe, secure, and trustworthy” AI development. The Trump administration is shifting the emphasis to America’s global AI dominance in order to “promote human flourishing, economic competitiveness, and national security.”
Read article
AI and Public Engagement
Announcing the Youth Engagement Toolkit for Responsible Data Reuse: An Innovative Methodology for the Future of Data-Driven Services
“Young people seeking essential services often are asked to share their data without having a say in how it is used or for what purpose… a lack of trust in data collection and usage may result in young people choosing not to seek services at all or withholding critical information out of fear of misuse. This risks deepening existing inequalities rather than addressing them. Based on a methodology developed and piloted during the NextGenData project, the Youth Engagement Toolkit is designed to provide a step-by-step guide on how to implement an innovative methodology for responsible data reuse in improving service, engage young people in decision-making by amplifying their voice, agency and preferences in how their data is used, and foster collaboration by bringing youth, service providers, and policymakers together to co-design solutions.”
Read article
Close Encounters of the AI Kind: The Increasingly Human-Like Way People are Engaging with Language Models
“Half of Americans now use artificial intelligence (AI) large language models like ChatGPT, Gemini, Claude, and Copilot. Since the launch of ChatGPT on Nov. 30, 2022, the spread of LLM usage in this country represents one of the fastest, if not the fastest, adoption rates of a major technology in history. The growth and spread of these AI systems in the U.S. population are especially striking for their diversity. Younger, well-educated, relatively wealthy, and employed adults are somewhat more likely than others to be using LLMs now. Yet, it is also the case that half of those living in households earning less than $50,000 (53%) use the tools. Moreover, Hispanic adults (66%) and Black adults (57%) are more likely than White adults (47%) to be LLM users.”
Read article
Large AI Models Are Cultural and Social Technologies
“Debates about artificial intelligence (AI) tend to revolve around whether large models are intelligent, autonomous agents. Some AI researchers and commentators speculate that we are on the cusp of creating agents with artificial general intelligence (AGI), a prospect anticipated with both elation and anxiety. There have also been extensive conversations about cultural and social consequences of large models, orbiting around two foci: immediate effects of these systems as they are currently used, and hypothetical futures when these systems turn into AGI agents—perhaps even superintelligent AGI agents. But this discourse about large models as intelligent agents is fundamentally misconceived. Combining ideas from social and behavioral sciences with computer science can help us to understand AI systems more accurately. Large models should not be viewed primarily as intelligent agents but as a new kind of cultural and social technology, allowing humans to take advantage of information other humans have accumulated.”
Read article
AI and Problem Solving
AI Could Supercharge Human Collective Intelligence in Everything from Disaster Relief to Medical Research
Artificial Intelligence is enhancing human collective intelligence across multiple domains. In disaster scenarios, AI-controlled drones and robots survey damage, process data, and deliver supplies to inaccessible areas, helping emergency teams prioritize efforts. The article explores how AI augmentation works by processing vast datasets quickly, automating physical tasks, improving information exchange, and facilitating collaboration. It touches on real-world applications that already exist in disaster response, healthcare, media, public policy, and environmental protection, noting that AI should be viewed as a collaborator rather than a competitor to humans.
Read article
AI Infrastructure
Who Gets to Build AI? Tackling the Gaps in Infrastructure, Data, and Governance
An interview with Kenya's Special Envoy on Technology highlights the significant challenges nations in the Global South face in equitable AI development. These hurdles include deficits in fundamental infrastructure like internet connectivity and energy, shortages of digital and AI-specific skills, the need for adaptable governance frameworks, and difficulties in securing appropriate financing. The Envoy, Phillip Thigo, stresses the necessity of investing in these foundational elements to ensure that the benefits of AI are not limited to a few well-resourced countries. He advocates for inclusive, trustworthy, and cooperative approaches to AI's future, warning that without these, existing inequalities could worsen. Thigo also shares examples of AI projects in Kenya focused on climate action and improving agricultural outcomes, illustrating the technology's potential when foundational needs are addressed.
Read article
AI Search Has A Citation Problem
“The Tow Center for Digital Journalism conducted tests on eight generative search tools with live search features to assess their abilities to accurately retrieve and cite news content, as well as how they behave when they cannot. We found that chatbots were generally bad at declining to answer questions they couldn’t answer accurately, offering incorrect or speculative answers instead, premium chatbots provided more confidently incorrect answers than their free counterparts, generative search tools fabricated links and cited syndicated and copied versions of articles, content licensing deals with news sources provided no guarantee of accurate citation in chatbot responses, and more. Our observations are not just a ChatGPT problem, but rather recur across all the prominent generative search tools that we tested.”
Read article
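As a rough illustration of the kind of test the Tow Center describes (though not their actual methodology), the sketch below takes a URL a chatbot cited and checks whether it resolves and contains text from the article it is supposed to support. The URL and phrase in the usage line are placeholders.

```python
import urllib.error
import urllib.request

def check_citation(cited_url: str, expected_phrase: str, timeout: int = 10) -> str:
    """Fetch a chatbot-cited URL and report whether it resolves and
    contains a phrase from the article it supposedly supports."""
    req = urllib.request.Request(cited_url, headers={"User-Agent": "citation-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError as err:
        return f"broken link (HTTP {err.code})"   # e.g., a fabricated URL returning 404
    except urllib.error.URLError as err:
        return f"unreachable ({err.reason})"
    if expected_phrase.lower() in body.lower():
        return "resolves and matches the expected article"
    return "resolves but does not contain the expected text (possible mis-citation)"

# Placeholder example: verify a citation a chatbot gave for a known headline.
print(check_citation("https://example.com/some-article", "expected headline text"))
```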
AI and Education
Parents Need to Pay Attention to Artificial Intelligence, Too
As AI rapidly enters K-12 classrooms, concerns about racial bias and equity are growing, with experts like Columbia professor Ezekiel Dixon-Román emphasizing that parents must take an active role in oversight. "A lot of parents don't realize that we do have power. We all have the right of refusal... We have the right to refuse to be subject to these technologies," he states, highlighting both individual and collective parental influence. While companies pledge to create fair tools, the article suggests parents shouldn't leave AI implementation solely to schools, as existing racial disparities in AI systems could widen educational inequities without proper vigilance.
Read article
AI and Labor
What Do Workers Want? A CDT/Coworker Deliberative Poll on Workplace Surveillance and Datafication
“The Center for Democracy & Technology (CDT) and Coworker.org collaborated on a project that explored workers’ perspectives on workplace surveillance through a unique Deliberative Polling approach…. The deliberations consisted of three sessions focusing on four topics: monitoring work-from-home employees, location tracking, productivity monitoring, and data rights. In the final post-deliberation survey, respondents showed strong support for proposals that would grant workers a right to greater transparency regarding employers’ surveillance and data collection practices, prohibit off-clock surveillance, limit location tracking, and bar employers from engaging in productivity monitoring that would harm workers’ mental or physical health. Moving forward, researchers should explore deliberation-centered methodologies further, both to determine workers’ organic views on key workplace policy issues…. and policymakers should recognize the urgent need for a regulatory framework addressing the datafication of workers.”
Read article
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.