News That Caught Our Eye #50

In the news this week: A report from the Center for Democracy & Technology finds that civic tech organizations in Mexico and Taiwan effectively used AI tools to combat AI-driven disinformation in last year’s elections, while Cornell’s Frank Pasquale argues that AI poses a threat to democratic processes. Reversing course from the Biden administration, the Trump administration instructs AI scientists to remove terms like "AI safety" and "fairness" in favor of "reducing ideological bias" and developing tools that “expand America’s global AI position.” Ivy Leaguers are flooding Chinese AI startups with applications in the wake of DeepSeek’s debut. An NYU study finds that TikTok's algorithm skewed toward promoting Republican content during the 2024 election, and our partners at InnovateUS launch Spanish-language AI workshops. Read more in this week's AI News That Caught Our Eye.

Angelique Casem

Dane Gambrell


Governing AI

Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ From Powerful Models - Wired, Will Knight, March 14, 2025

“The National Institute of Standards and Technology (NIST) has issued new instructions to scientists that partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of ‘AI safety’, ‘responsible AI’, and ‘AI fairness’ in the skills it expects of members and introduces a request to prioritize ‘reducing ideological bias, to enable human flourishing and economic competitiveness.’ The new agreement removes mention of developing tools ‘for authenticating content and tracking its provenance’ as well as ‘labeling synthetic content,’ signaling less interest in tracking misinformation and deep fakes… and an emphasis on putting America first… a researcher discusses how ‘The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself,’... ignoring these issues could harm regular users by possibly allowing algorithms that discriminate based on income or other demographics to go unchecked. ‘Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly.’”

AI Industry’s Wish List for Trump - The Hill, Julia Shapero and Miranda Nazzaro, March 17, 2025

“A variety of artificial intelligence firms and industry groups are hoping to shape the Trump administration’s forthcoming policy on emerging technology and keep the U.S. a leader in the space. While the recommendations come from a variety of industry players, the proposals largely overlap and offer a glimpse into how the industry envisions its future under President Trump. Some main takeaways include the need for a federal AI framework, but not overdoing regulation, strengthening export controls amid foreign competition (specifically on semiconductors, tools, and exports to China), government adoption of AI in the context of AI streamlining purposes and upkeep with foreign governments, and more money for AI infrastructure.”

The Civil Rights Act in the Age of Generative AI, an Interim Report - North Carolina Central University School of Law via SSRN, Kevin P. Lee, March 14, 2025

“This paper explores the intersection of Generative AI and civil rights law, arguing that the current legal framework, though foundational, requires significant adaptation to address the unique challenges posed by automated decision-making systems.”

An Overview of South Korea’s Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (Basic AI Act) - Securiti, Anas Baig and Syeda Eimaan Gardezi, March 3, 2025

“South Korea’s first major AI legislation – the Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base (AI Basic Act) was signed into law on January 21, 2025. It aims to balance AI advancement with the protection of individuals’ rights and dignity in South Korea. The AI Basic Act promotes the development of AI technologies that enhance the quality of life while ensuring safety, reliability, and transparency in decision-making, making South Korea the second country in the world to establish a comprehensive legislative framework governing AI after the European Union. It requires clear explanations of key criteria behind AI’s final results while mandating the government to ensure AI safety, support innovation, and develop policies to help residents adapt to AI-driven societal changes. Thus, it focuses on fostering public trust in AI by setting clear ethical standards and regulatory frameworks to guide its responsible use.”

AI for Governance

Singapore Public Officers to Use Meta LLM to Build AI Applications - GovInsider, Amit Roy Choudhury, March 14, 2025

“The Singapore government, in collaboration with tech company Meta, has launched the Llama Incubator Programme, which is designed to build capabilities and drive innovation on open-source artificial intelligence (AI) among local start-ups, small and medium-sized enterprises (SMEs) and public sector agencies.”

AI Poised to Reshape State Transportation Departments, Staff - Government Technology, Skip Descant, March 14, 2025 

“To position itself for a new AI age, the California Department of Transportation (Caltrans) is exploring pilot projects and use cases for AI, and policy direction. The department is near finalizing an AI strategy — and is considering establishing a ‘chief data and AI officer role,’ a step that could be concluded in the next several months, Dara Wheeler, Caltrans division chief of research, innovation and system information said. Establishing use cases for AI, understanding the technology and managing the data are all part of how transportation systems and departments will reshape themselves for the future. If civil engineers were the backbone of 20th-century transportation planning and development, watch for layers of tech expertise filling the halls of state DOTs, experts said.”

Govt. Considerations for Adding AI Into Human Services - Government Technology, Julia Edinger, March 14, 2025

“Governments working in the health and human services sector are increasingly looking to artificial intelligence to improve service delivery — a key part of which is motivational interviewing, an evidence-based counseling technique with an emphasis on expressing empathy toward clients. AI can help health and human services in several ways, from surfacing information insights at scale to providing information on public health issues proactively to improving data management.”

(VIDEO) Usage of Artificial Intelligence in Legislative Drafting: Dos and Don’ts - Bussola Tech, Wade Ballou, Donalene Roberts, Grant Vergottini, Miguel Landeros Perkic, March 15, 2025

A Bussola Tech (Brazil) panel provided an overview of how AI can support and enhance the drafting process while addressing potential risks and challenges faced by lawmakers. Donalene Roberts, U.S. House of Representatives, Grant Vergottini, Xcential Legislative Technologies, and Miguel Landeros Perkic of Chile’s Cámara de Diputadas y Diputados discussed the role of AI in improving legislative drafting systems, challenges in existing processes, and best practices for leveraging AI-driven solutions. Panelists shared perspectives on potential risks to the drafting profession, the impact of AI on institutional workflows, and practical dos and don’ts for integrating AI into legislative drafting.

AI Will Completely Transform Local Government in the Next 10 Years— If We Embrace It Effectively - International City/County Management Association, Neil Kleiman, March 14, 2025

Artificial intelligence has the potential to significantly enhance the efficiency, responsiveness, and citizen engagement of local governments. This blog post identifies three main areas where AI can drive transformation: automating routine administrative tasks, enabling better data analysis for informed decision-making, and improving public engagement through AI-driven platforms. The piece emphasizes the need for investment in AI training and cultural change within local institutions to fully realize these benefits. If implemented effectively, AI could make local governance more proactive, efficient, and citizen-focused.

AI and Public Engagement 

Bringing Citizens into the Courtroom: How Digital Technologies Can Democratize Constitutional Justice - Reboot Democracy, Alejandro Cortés-Arbeláez, March 19, 2025 

“Constitutional courts could use AI to incorporate citizen participation through three key technological innovations: information platforms that make constitutional law accessible to all citizens; deliberative digital forums that enable diverse public input on constitutional questions; and collaborative interpretation mechanisms that allow citizens and experts to contribute directly to judicial decision-making. These tools could enhance transparency, foster democratic engagement, and improve the legitimacy of constitutional rulings. By leveraging AI, courts can manage large volumes of public input efficiently, ensure inclusivity, and bridge the gap between legal experts and the general public, ultimately strengthening the relationship between citizens and constitutional governance.” 

Artificial Intelligence for Digital Citizen Participation: Design Principles For a Collective Intelligence Architecture - Government Information Quarterly, Nicolas Bono Rossello, Anthony Simonofski, Annick Castiaux, March 7, 2025

“The challenges posed by digital citizen participation and the amount of data generated by Digital Participation Platforms (DPPs) create an ideal context for the implementation of Artificial Intelligence (AI) solutions. However, current AI solutions in DPPs focus mainly on technical challenges, often neglecting their social impact and not fully exploiting AI's potential to empower citizens. The goal of this paper is thus to investigate how to design digital participation platforms that integrate technical AI solutions while considering the social context in which they are implemented.”

(VIDEO) The Peacemaking Machine: Can AI Improve Democratic Deliberation? - Reboot Democracy, Giorgia Christiansen, March 18, 2025

A recent Reboot Democracy in the Age of AI workshop discussed the concept of a "Habermas Machine" that facilitates better deliberation and decision-making. MIT and Google DeepMind’s Michiel A. Bakker and Michael Henry Tessler explored technologies that enable productive democratic conversations while minimizing bias and polarization. By designing systems that help citizens understand complex issues and make informed decisions, this initiative aims to scale democratic participation through technology. Where polarized debates have long dominated democratic discourse, AI-supported deliberation tools seek to create more rational and inclusive dialogue while maintaining essential human involvement.

AI and Problem Solving

Cutting Through the Noise: Early Insights from the Frontier of Nonprofit AI Use - Center for Global Development, Han Sheng Chia, March 13, 2025

This blog post discusses how nonprofits are scaling AI-powered services, arguing that AI-driven personalization is expected to boost engagement and impact across sectors like healthcare, agriculture, and education. Policymakers worry about AI inconsistencies, but developers are implementing techniques like fine-tuning and real-time monitoring to manage risks. Early evidence supports the potential for AI to enhance the scale and cost-effectiveness of development efforts, though ongoing evaluation remains necessary.

AI and Elections

Adaptation and Innovation: The Civic Space Response to AI-Infused Elections - Center for Democracy & Technology, Isabel Linzer, March 13, 2025

“This report looks at their contributions to a resilient information environment during the 2024 electoral periods through three case studies: (I) fact-checking collectives in Mexico, (II) decentralization and coordination among civil society in Taiwan, and (III) AI incident tracking projects by media, academics, and civil society organizations…Though the case studies span different political contexts and types of interventions, common themes emerged. Organizations benefited from complementary or collaborative work with peer groups. They also used AI to bolster their own work. Civic space actors contended with funding and capacity constraints, insufficient access to information from companies, difficulty detecting and verifying AI-generated content, and the politicization of media resilience work, including fact-checking. Finally, the case studies emphasize that the issue of AI in elections is not temporary. Civic space actors have been addressing the risks and exploring the opportunities AI presents for years — long before the media and policy attention of 2024. These groups will continue to be invaluable resources and partners for public and private actors in 2025 and beyond.”

AI and Electoral Manipulation: From Misinformation to Demoralization - in Human Vulnerability in Interaction with AI in European Private Law (A. Diurni, ed.) (Springer, forthcoming, 2025). Via SSRN, Frank Pasquale, Cornell Law School, March 14, 2025

“This paper examines how artificial intelligence, if improperly used, poses a significant threat to democratic processes by enabling misinformation and eroding citizen morale: “Artificial intelligence is a powerful tool, but it is easily exploited by extremists and frauds to deceive and demoralize the vulnerable. Two common types of AI, predictive and generative, are putting democracy at risk. Predictive analytics can divide publics into ever more isolated silos, eroding the type of common knowledge necessary for democratic deliberation. Generative AI is making it ever easier to fake images and events. Even when such fabrications are debunked, the air of unreality created by such dissimulation has allowed some politicians to deny real evidence of wrongdoing by averring that the evidence documenting it was AI-generated. This dangerous confluence of trends means reformers have to work on more than combating misinformation. They also need to resist demoralization, which occurs when citizens start tuning out politics altogether and become too cynical or distracted to engage. AI may also play a small part in addressing the problems of misinformation and demoralization, but real solutions entail a far broader conversation about the future of democracy.”

AI Political Archive - Center on Technology Policy (CTP), the NYU Center for Social Media and Politics (CSMaP), and the American Association of Political Consultants (AAPC)

Submit an example to The AI Political Archive, which aims to document the use of generative AI in political campaign communications during the 2024 U.S. elections, with a focus on multimedia content and down-ballot races that receive less media attention. “Our goal is to provide a unique dataset for journalists, academics, policymakers, political consultants to better study, track, and assess the use of generative AI in political communication.”

TikTok's recommendations skewed towards Republican content during the 2024 U.S. presidential race - via Arxiv, Hazem Ibrahim (NYU) et al., January 29, 2025

This study analyzing TikTok's recommendation algorithm in the lead-up to the 2024 US presidential election found a significant skew towards Republican content. The researchers created dummy accounts to observe how the algorithm serves content to both Republican- and Democratic-leaning users. Republican-leaning accounts received more party-aligned recommendations, while Democratic-leaning accounts were shown more content from the opposite party. This asymmetry persisted across different states and engagement metrics. The research also found that Republican-aligned channels exhibited higher engagement, suggesting the algorithm amplified right-leaning political discourse during this critical period.

AI and Labor

Stanford, Harvard Grads Seek China AI Startup Jobs, Founder Says - Bloomberg, Saritha Rai, Annabelle Droulers, and Lauren Faith Lau, March 20, 2025

Graduates from top US schools including Harvard University and Stanford University are flooding an up-and-coming Chinese AI startup with resumes as DeepSeek’s debut earlier this year has helped elevate the profiles of fellow emerging technology builders in China. That’s a sea change from years past, when it was challenging to hire engineers even from Chinese universities, said Victor Huang, co-founder and chairman of Manycore Tech Inc. The fortune of the startup took a turn for the better after it was labeled as one of the “Six Dragons” of the eastern Chinese city of Hangzhou together with DeepSeek, whose lower cost AI model stunned the global AI industry in January, and Manycore has found it easier to attract candidates since, according to Huang. “It’s totally changed” in the past two months, Huang told Bloomberg Television. “Many top talents from the likes of Tsinghua University, Zhejiang University, Beijing University, even from Stanford, Harvard, they send resumes to us, and some of them already joined us.”

Events

InnovateUS:

  • March 25, 2025, 4pm - Reading Challenges Revealed: AI Innovation in Dyslexia Assessment, Lizzie Jones, Program Director, The Learning Agency

  • March 26, 2025, 11am - Del Big Data a la IA: Liderazgo para una transición digital responsable en el sector público/From Big Data to AI: Responsible Public Digital Transformation, Santiago Garces, Chief Information Officer, City of Boston

  • March 26, 2025, 2pm - Innovating in the Public Interest: Getting Started, Anita McGahan, Senior Research Scientist, The Burnes Center for Social Change

  • April 8, 2025, 2pm - Diseño participativo de servicios públicos con apoyo de inteligencia artificial/Co-Creating Public Services with AI Assistance, Sofia Bosch Gomez, Assistant Professor in the Department of Art + Design and Fellow at the Burnes Center for Social Change, Northeastern University

To register and see more workshops, visit https://innovate-us.org/workshops.

More Events:

  • April 2, 2025, 1pm - Hands-On AI Training: Boost Progressive Messaging Online, Higher Ground Institute: This hands-on training, presented in partnership with Vocal Media, will equip you with practical AI tools and prompt engineering techniques that can dramatically speed up your content creation process. Learn how to leverage powerful platforms like Descript and Opus Clip to produce more content in less time, helping progressive messages break through the noise and compete effectively in digital spaces. Learn more and register here. 

  • March 2, 2025, 2pm - The Implementation Gap: Turning Legislative Intent into Executive-Led Outcomes, Marci Harris, Cofounder and Executive Director, POPVOX Foundation: This discussion addresses the critical but often overlooked issues with implementation across the Executive branch. Join former Executive branch experts for a conversation during which they will share real-life examples and first-hand accounts to help you identify and avoid common policy design pitfalls that can derail your member's objectives, learn techniques for crafting legislation that translates effectively from paper to practice, reduce future constituent casework by anticipating implementation challenges, and gain insider perspectives on how the Executive branch interprets and implements legislation. Learn more and register here.

  • April 15, 2025 - Applications are officially open for the third cohort of Decoded Futures at the Tech:NYC Foundation — a no-cost, seven-week program designed for NYC-based education and workforce development nonprofits looking to explore how AI can support their work. Participants will receive hands-on support for custom AI tool development; learn about AI fundamentals, practical applications, and ethical considerations; and gain access to Decoded Futures’ network of NYC-based technologists. Learn more here.

 

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.