
News That Caught Our Eye #50
Published by Angelique Casem & Dane Gambrell
In the news this week: A report from the Center for Democracy & Technology finds that civic tech organizations in Mexico and Taiwan effectively used AI tools to combat AI-driven disinformation in last year’s elections, while Cornell’s Frank Pasquale argues that AI poses a threat to democratic processes. Reversing course from the Biden administration, the Trump administration instructs AI scientists to remove terms like "AI safety" and "fairness" in favor of "reducing ideological bias" and developing tools that “expand America’s global AI position.” Ivy Leaguers are flooding Chinese AI company DeepSeek with applications. An NYU study finds that TikTok's algorithm skewed toward promoting Republican content during the 2024 election, and our partners at InnovateUS launch Spanish-language AI workshops. Read more in this week's AI News That Caught Our Eye.
In the news this week
- Governing AI: Setting the rules for a fast-moving technology.
- AI for Governance: Smarter public institutions through machine intelligence.
- AI and Public Engagement: Bolstering participation
- AI and Problem Solving: Research, applications, technical breakthroughs
- AI and Elections: Free, fair and frequent
- AI and Labor: Worker rights, safety and opportunity
Governing AI
Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models
“The National Institute of Standards and Technology (NIST) has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of ‘AI safety’, ‘responsible AI’, and ‘AI fairness’ in the skills it expects of members and introduces a request to prioritize ‘reducing ideological bias, to enable human flourishing and economic competitiveness.’ The new agreement removes mention of developing tools ‘for authenticating content and tracking its provenance’ as well as ‘labeling synthetic content,’ signaling less interest in tracking misinformation and deep fakes… and an emphasis on putting America first… a researcher discusses how ‘The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,’... ignoring these issues could harm regular users by possibly allowing algorithms that discriminate based on income or other demographics to go unchecked. ‘Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly.’”
Read article
AI Industry’s Wish List for Trump
“A variety of artificial intelligence firms and industry groups are hoping to shape the Trump administration’s forthcoming policy on emerging technology and keep the U.S. a leader in the space. While the recommendations come from a variety of industry players, the proposals largely overlap and offer a glimpse into how the industry envisions its future under President Trump. Some main takeaways include the need for a federal AI framework, but not overdoing regulation, strengthening export controls amid foreign competition (specifically on semiconductors, tools, and exports to China), government adoption of AI in the context of AI streamlining purposes and upkeep with foreign governments, and more money for AI infrastructure.”
Read article
The Civil Rights Act in the Age of Generative AI, an Interim Report
“This paper explores the intersection of Generative AI and civil rights law, arguing that the current legal framework, though foundational, requires significant adaptation to address the unique challenges posed by automated decision-making systems.”
Read article
An Overview of South Korea’s Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (Basic AI Act)
“South Korea’s first major AI legislation – the Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (AI Basic Act) was signed into law on January 21, 2025. It aims to balance AI advancement with the protection of individuals’ rights and dignity in South Korea. The AI Basic Act promotes the development of AI technologies that enhance the quality of life while ensuring safety, reliability, and transparency in decision-making, making South Korea the second country in the world to establish a comprehensive legislative framework governing AI after the European Union. It requires clear explanations of key criteria behind AI’s final results while mandating the government to ensure AI safety, support innovation, and develop policies to help residents adapt to AI-driven societal changes. Thus, it focuses on fostering public trust in AI by setting clear ethical standards and regulatory frameworks to guide its responsible use.”
Read article
AI for Governance
Singapore Public Officers to Use Meta LLM to Build AI Applications
“The Singapore government, in collaboration with tech company Meta, has launched the Llama Incubator Programme, which is designed to build capabilities and drive innovation on open-source artificial intelligence (AI) among local start-ups, small and medium-sized enterprises (SMEs) and public sector agencies.”
Read article
AI Poised to Reshape State Transportation Departments, Staff
“To position itself for a new AI age, the California Department of Transportation (Caltrans) is exploring pilot projects, use cases and policy direction for AI. The department is close to finalizing an AI strategy — and is considering establishing a ‘chief data and AI officer’ role, a step that could be concluded in the next several months, said Dara Wheeler, Caltrans division chief of research, innovation and system information. Establishing use cases for AI, understanding the technology and managing the data are all part of how transportation systems and departments will reshape themselves for the future. If civil engineers were the backbone of 20th-century transportation planning and development, watch for layers of tech expertise filling the halls of state DOTs, experts said.”
Read article
Govt. Considerations for Adding AI Into Human Services
“Governments working in the health and human services sector are increasingly looking to artificial intelligence to improve service delivery — a key part of which is motivational interviewing, an evidence-based counseling technique with an emphasis on expressing empathy toward clients. AI can help health and human services in several ways, from surfacing information insights at scale to providing information on public health issues proactively to improving data management.”
Read article
(VIDEO) Usage of Artificial Intelligence in Legislative Drafting: Dos and Don’ts
A Bussola Tech (Brazil) panel provided an overview of how AI can support and enhance the drafting process while addressing potential risks and challenges faced by lawmakers. Donalene Roberts of the U.S. House of Representatives, Grant Vergottini of Xcential Legislative Technologies, and Miguel Landeros Perkic of Chile’s Cámara de Diputadas y Diputados discussed the role of AI in improving legislative drafting systems, challenges in existing processes, and best practices for leveraging AI-driven solutions. Panelists shared perspectives on potential risks to the drafting profession, the impact of AI on institutional workflows, and practical dos and don’ts for integrating AI into legislative drafting.
Read article
AI Will Completely Transform Local Government in the Next 10 Years — If We Embrace It Effectively
Artificial intelligence has the potential to significantly enhance the efficiency, responsiveness, and citizen engagement of local governments. This blog post identifies three main areas where AI can drive transformation: automating routine administrative tasks, enabling better data analysis for informed decision-making, and improving public engagement through AI-driven platforms. The piece emphasizes the need for investment in AI training and cultural change within local institutions to fully realize these benefits. If implemented effectively, AI could make local governance more proactive, efficient, and citizen-focused.
Read article
AI and Public Engagement
Bringing Citizens into the Courtroom: How Digital Technologies Can Democratize Constitutional Justice
“Constitutional courts could use AI to incorporate citizen participation through three key technological innovations: information platforms that make constitutional law accessible to all citizens; deliberative digital forums that enable diverse public input on constitutional questions; and collaborative interpretation mechanisms that allow citizens and experts to contribute directly to judicial decision-making. These tools could enhance transparency, foster democratic engagement, and improve the legitimacy of constitutional rulings. By leveraging AI, courts can manage large volumes of public input efficiently, ensure inclusivity, and bridge the gap between legal experts and the general public, ultimately strengthening the relationship between citizens and constitutional governance.”
Read article
Artificial Intelligence for Digital Citizen Participation: Design Principles For a Collective Intelligence Architecture
“The challenges posed by digital citizen participation and the amount of data generated by Digital Participation Platforms (DPPs) create an ideal context for the implementation of Artificial Intelligence (AI) solutions. However, current AI solutions in DPPs focus mainly on technical challenges, often neglecting their social impact and not fully exploiting AI's potential to empower citizens. The goal of this paper is thus to investigate how to design digital participation platforms that integrate technical AI solutions while considering the social context in which they are implemented.”
Read article
(VIDEO) The Peacemaking Machine: Can AI Improve Democratic Deliberation?
A recent Reboot Democracy in the Age of AI workshop discussed the concept of a "Habermas Machine" that facilitates better deliberation and decision-making. Michiel A. Bakker and Michael Henry Tessler of MIT and Google DeepMind explored technologies that enable productive democratic conversations while minimizing bias and polarization. By designing systems that help citizens understand complex issues and make informed decisions, this initiative aims to scale democratic participation through technology. As polarized debates dominated 20th-century democratic discourse, AI-supported deliberation tools seek to create more rational and inclusive dialogue in the future, while maintaining essential human involvement.
Read article
AI and Problem Solving
Cutting Through the Noise: Early Insights from the Frontier of Nonprofit AI Use
This blog post discusses how nonprofits are scaling AI-powered services, arguing that AI-driven personalization is expected to boost engagement and impact across sectors like healthcare, agriculture, and education. Policymakers worry about AI inconsistencies, but developers are implementing techniques like fine-tuning and real-time monitoring to manage risks. Early evidence supports AI's potential to enhance the scale and cost-effectiveness of development efforts, though ongoing evaluation is needed.
Read article
AI and Elections
Adaptation and Innovation: The Civic Space Response to AI-Infused Elections
“This report looks at their contributions to a resilient information environment during the 2024 electoral periods through three case studies: (I) fact-checking collectives in Mexico, (II) decentralization and coordination among civil society in Taiwan, and (III) AI incident tracking projects by media, academics, and civil society organizations…Though the case studies span different political contexts and types of interventions, common themes emerged. Organizations benefited from complementary or collaborative work with peer groups. They also used AI to bolster their own work. Civic space actors contended with funding and capacity constraints, insufficient access to information from companies, difficulty detecting and verifying AI-generated content, and the politicization of media resilience work, including fact-checking. Finally, the case studies emphasize that the issue of AI in elections is not temporary. Civic space actors have been addressing the risks and exploring the opportunities AI presents for years — long before the media and policy attention of 2024. These groups will continue to be invaluable resources and partners for public and private actors in 2025 and beyond.”
Read article
AI and Electoral Manipulation: From Misinformation to Demoralization
“This paper examines how artificial intelligence, if improperly used, poses a significant threat to democratic processes by enabling misinformation and eroding citizen morale: “Artificial intelligence is a powerful tool, but it is easily exploited by extremists and frauds to deceive and demoralize the vulnerable. Two common types of AI — predictive and generative — are putting democracy at risk. Predictive analytics can divide publics into ever more isolated silos, eroding the type of common knowledge necessary for democratic deliberation. Generative AI is making it ever easier to fake images and events. Even when such fabrications are debunked, the air of unreality created by such dissimulation has allowed some politicians to deny real evidence of wrongdoing by averring that the evidence documenting it was AI-generated. This dangerous confluence of trends means reformers have to work on more than combating misinformation. They also need to resist demoralization, which occurs when citizens start tuning out politics altogether and become too cynical or distracted to engage. AI may also play a small part in addressing the problems of misinformation and demoralization — but real solutions entail a far broader conversation about the future of democracy.”
Read article
AI Political Archive
Submit an example to The AI Political Archive, which aims to document the use of generative AI in political campaign communications during the 2024 U.S. elections, with a focus on multimedia content and down-ballot races that receive less media attention. “Our goal is to provide a unique dataset for journalists, academics, policymakers, political consultants to better study, track, and assess the use of generative AI in political communication.”
Read article
TikTok's recommendations skewed towards Republican content during the 2024 U.S. presidential race
This study analyzing TikTok's recommendation algorithm in the lead-up to the 2024 US presidential election found a significant skew toward Republican content. The researchers created dummy accounts to observe how the algorithm serves content to both Republican-leaning and Democratic-leaning users. Republican-leaning accounts received more party-aligned recommendations, while Democratic-leaning accounts were shown more content from the opposite party. This asymmetry persisted across different states and engagement metrics. The research also found that Republican-aligned channels exhibited higher engagement, suggesting the algorithm amplified right-leaning political discourse during this critical period.
Read article
AI and Labor
Stanford, Harvard Grads Seek China AI Startup Jobs, Founder Says
Graduates from top US schools including Harvard University and Stanford University are flooding an up-and-coming Chinese AI startup with resumes, as DeepSeek’s debut earlier this year has helped elevate the profiles of fellow emerging technology builders in China. That’s a sea change from years past, when it was challenging to hire engineers even from Chinese universities, said Victor Huang, co-founder and chairman of Manycore Tech Inc. The startup's fortunes took a turn for the better after it was named one of the “Six Dragons” of the eastern Chinese city of Hangzhou alongside DeepSeek, whose lower-cost AI model stunned the global AI industry in January, and Manycore has found it easier to attract candidates since, according to Huang. “It’s totally changed” in the past two months, Huang told Bloomberg Television. “Many top talents from the likes of Tsinghua University, Zhejiang University, Beijing University, even from Stanford, Harvard, they send resumes to us, and some of them already joined us.”
Read article

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.