
News That Caught Our Eye #52

Published on April 3, 2025

Summary

In the news: New Jersey's AI Task Force has pioneered a new approach to developing policy recommendations with AI and public input, demonstrating how AI can enhance democratic participation in policymaking. An article from Spencer Overton examines how AI might advance racially inclusive democracy while cautioning that proper legal frameworks are essential to prevent it from exacerbating inequalities. A survey from the Imagining the Digital Future Center reveals Americans' complex relationship with AI: 52% of U.S. adults already use large language models despite deep concerns about privacy and societal impacts. Meanwhile, a Luminate survey highlights Indonesia's unique vulnerability to AI-driven manipulation, with 75% of citizens believing AI-generated content can influence political views. This edition of AI News That Caught Our Eye covers news from this week while also taking a look back at some of the key stories we've written about at Reboot Democracy so far this year.

AI and Public Safety


Ready for Wildfire: Using GenAI as a "Practice Partner" for Future-Ready Governments

Michael Baskin on January 29, 2025 in Reboot Democracy

AI can help cities prepare for crises, not just plan for them. “To be ready for emergent futures, organizations need to shift from having planned to being prepared. Practice closes the readiness gap. Organizations and leaders that re-imagine GenAI as a ‘practice partner’ can build adaptive, resilient organizations that are ready for what’s coming. As a ‘practice partner,’ GenAI can run live open-ended scenario exercises for city governments with low cost, low barrier to entry, and high effectiveness.”

Read article

AI and Elections


Understanding the Role of GenAI in Elections: A Crucial Endeavor for 2024

Fernanda Sauca on February 21, 2025 in TUM Think Tank

“In a new report, Dr Amélie Hennemann-Heldt, TUM Think Tank Fellow of Practice, sheds light on the nuanced role of generative AI (genAI) in elections. With over 70 national elections held in 2024, covering about half the global population, concerns around genAI’s potential to disrupt democratic integrity were at an all-time high – particularly in Germany. However, this report challenges the assumption that genAI’s impact is solely negative, revealing a more complex landscape where risks coexist with opportunities for democratic innovation. Real-world election experiences helped to temper fears that genAI would overwhelmingly disrupt electoral processes. While some malicious actors did use genAI to manipulate information, the report found that these cases were typically isolated incidents rather than coordinated campaigns. Most misleading AI-generated content appeared to stem from careless use rather than systematic manipulation.”

Read article

Governing AI


States Leading the Way: Why We're Convening State Leaders to Shape America's AI Future

Beth Simone Noveck on March 27, 2025 in Reboot Democracy

“While federal AI policy shifts toward deregulation, my fellow state AI leaders will gather in June with researchers, entrepreneurs, and technologists at Princeton University under the auspices of the National Governors Association, the Center for Public Sector AI, the NJ AI Hub, InnovateUS and the Center for Information Technology Policy, to develop practical frameworks for responsible AI implementation in government. The two-day working conference will focus on: expanding equitable access, building public trust, strengthening governance, unlocking data responsibly, and driving innovation aligned with democratic principles.” For more information, see https://stateaileaders.org/

Read article

AI Infrastructure


Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations

Apostol Vassilev et al. on March 24, 2025 in NIST Computer Security Resource Center

“This NIST Trustworthy and Responsible AI report provides a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is arranged in a conceptual hierarchy that includes key types of ML methods, life cycle stages of attack, and attacker goals, objectives, capabilities, and knowledge. This report also identifies current challenges in the life cycle of AI systems and describes corresponding methods for mitigating and managing the consequences of those attacks. The terminology used in this report is consistent with the literature on AML and is complemented by a glossary of key terms associated with the security of AI systems. Taken together, the taxonomy and terminology are meant to inform other standards and future practice guides for assessing and managing the security of AI systems by establishing a common language for the rapidly developing AML landscape.”

Read article

AI and Problem Solving


Nuclear fusion: Delivering on the promise of carbon‑free power with the help of AI

Chris Welsch on March 25, 2025 in Microsoft

“Set in a scrubby forest of pine, oak and aromatic brush in Provence, the world’s largest nuclear fusion plant is under construction. Here, 2,000 scientists, physicists and workers from more than 30 countries are building a power plant fueled by the same energy that makes the sun shine. Fusion has the potential to be a new source of affordable, carbon-free energy. ITER, as it’s called, is a mind-blowingly complex project, involving harnessing plasma that is even hotter than our sun…. A collaboration with Microsoft is one of the ways ITER is preparing for the moment when it’s time to turn the ignition switch. ITER is using a range of Microsoft tools – from Microsoft 365 Copilot to Azure OpenAI Service to Visual Studio to GitHub – to speed toward its goals. ‘We’re assembling a little bit more than 1 million parts, and the challenge is not only to manufacture these things, the challenge is also to assemble them and have it all work at once.’ Alain Becoulet, a physicist and deputy director general of ITER, says the need for precision with complex components makes building a fusion plant the challenge of a lifetime. ‘It’s like a nuclear Swiss watch,’ he says. The development of a chatbot in Azure OpenAI service has significantly improved how ITER staff searches its database of more than 1 million documents, ranging from research to inventory to regulation, Becoulet said. A collaboration with GitHub Copilot is helping make software development accessible even to non-developers, as well as enabling sophisticated simulations to improve safety and operations.”

Read article

AI and Public Engagement


Who Participates in Digital Democracy – and Who Really Benefits?

Fredrik M. Sjoberg and Tiago C. Peixoto on February 3, 2025 in Reboot Democracy

This blog post summarizes the findings from a recent study examining use cases from four countries where digital platforms were used to engage residents in lawmaking and policymaking. The authors conclude that “who participates in digital democracy doesn’t always determine who benefits – what matters is whether and how governments respond. If they don’t respond, or if their response isn’t inclusive, even the broadest participation means little. AI-driven engagement must go beyond scaling participation and deliberation to address this challenge.”

Read article


Bringing Citizens into the Courtroom: How Digital Technologies Can Democratize Constitutional Justice

Alejandro Cortés-Arbeláez on March 19, 2025 in Reboot Democracy

“Constitutional courts could use AI to incorporate citizen participation through three key technological innovations: information platforms that make constitutional law accessible to all citizens; deliberative digital forums that enable diverse public input on constitutional questions; and collaborative interpretation mechanisms that allow citizens and experts to contribute directly to judicial decision-making. These tools could enhance transparency, foster democratic engagement, and improve the legitimacy of constitutional rulings. By leveraging AI, courts can manage large volumes of public input efficiently, ensure inclusivity, and bridge the gap between legal experts and the general public, ultimately strengthening the relationship between citizens and constitutional governance.”

Read article


Our Love-Hate Relationship with Digital Technology

Lee Rainie on March 24, 2025 in Reboot Democracy

“At the Imagining the Digital Future Center, we have found that Americans are fearful in important ways about AI – particularly generative AI and large language models (LLMs) – and yet the user base is exploding. On the fear side, our surveys show that people are especially concerned about the way AI systems will erode their personal privacy, their opportunities for employment, how these systems might change their relationships with others, their potential impact on basic human rights, the way they will disrupt people’s physical and mental health. At the level of institutions and big systems, they also have great anxiety that AI will negatively impact politics and elections, further erode the level of civility in society, worsen economic inequality, and be harmful to both K-12 education and higher education. Those concerns are leavened to a degree by the public’s sense that AI will be helpful in health and science discovery. Still, overall and in broad terms these are grim expectations. And yet … the survey results we just reported show that 52% of U.S. adults already are LLM users, making them one of the fastest – if not the fastest – adopted consumer technology in history.”

Read article


Diseño participativo de servicios públicos con apoyo de inteligencia artificial (Co-Creating Public Services with AI Assistance)

Sofía Bosch Gómez on April 8, 2025 in InnovateUS

This talk will explore processes and tools that facilitate the integration of participatory design into problem definition and the development of innovative solutions to public problems with the support of artificial intelligence. Through this new approach, it will discuss new participation tools and key considerations for collaborating effectively and equitably with diverse communities.

Read article


Innovating in the Public Interest: Winning Early

Anita McGahan on April 9, 2025 in InnovateUS

Great projects begin with early wins. This session will discuss the importance of driving for early wins – and of using them to build an understanding of how each collaborator is evaluated and rewarded for success. Early successes can become the foundation for longer-term initiatives.

Read article


Starting with Curiosity: A Beginner’s AI Guide for Public Servants

Jamie Kimes and Caleb Williams on April 10, 2025 in InnovateUS

Artificial intelligence (AI) is everywhere—but what does it actually mean for your work in government? This beginner-friendly workshop is designed for public servants who are curious about AI but may also have questions, concerns, or hesitations. No technical background is required. Together, we’ll explore what AI is (and isn’t), how it works behind the scenes, and how public servants can begin to navigate this rapidly evolving space with confidence, ethics, and purpose.

Read article


Indonesia faces unique threat from AI manipulation, research shows

on March 25, 2025 in Luminate

“Indonesians are deeply aware of the power of AI-generated content to manipulate public opinion – but those who think they’re safe may be the most vulnerable, according to new research from Luminate conducted by Ipsos. Three in four (75%) believe AI-generated content has the potential to impact the general public’s political views, while 72% say it could affect their close friends and family. Even at a personal level, 63% acknowledge that AI-generated content could shape their own political views. This suggests a broad awareness that AI-driven narratives have real-world consequences, even as individuals may underestimate their own susceptibility compared to those around them. Those who think AI won’t affect them may struggle to detect it. Emergent research shows that AI-generated propaganda can be more effective than content produced by people. What’s less clear is whether it still works if people know it was created by AI. Regardless, our survey found that significant numbers of Indonesians aren’t confident in their ability to identify whether social media content is AI-generated. A quarter (26%) admit they are not very confident, or not confident at all; 70% claim to be at least fairly confident.”

Read article


Analyzing the Benefits of Artificial Intelligence to Racially Inclusive Democracy

Spencer A. Overton on March 27, 2025 in GW Law Scholarly Commons

“Over the past two decades—as the United States has grown more ethnically diverse—the U.S. Supreme Court has dismantled key voting rights protections, and state legislatures have erected a record number of voting restrictions. Largely oblivious to this growing gap in legal protections, several artificial intelligence (‘AI’) optimists have claimed that AI can help usher in a more inclusive, participatory, and unbiased democracy. Such an outcome, however, is far from guaranteed. This Article is the first to comprehensively examine the extent to which AI—and the legal frameworks that regulate it—can advance racially inclusive democracy. It fills a gap in the AI optimism literature by offering a clear-eyed assessment of relevant political, racial, and economic barriers to AI making democracy more racially inclusive. This analysis reveals that some of the AI optimists’ technological and legal proposals could, in fact, exacerbate racial disparities in political power and harm voters of color. The Article acknowledges, however, that certain AI tools, if applied appropriately, could help reduce turnout gaps and increase government responsiveness to communities of color. Although good AI law is no substitute for an updated Voting Rights Act and a Supreme Court committed to protecting voting rights, embedding values of racial inclusion into AI law at this formative stage could shape the trajectory of our democracy. For example, laws ensuring broad access to public AI infrastructure (particularly in historically marginalized communities) and robust AI accountability laws can foster conditions in which AI is more likely to be used to benefit racially inclusive democracy.”

Read article


How New Jersey's AI Task Force Used AI to Develop Evidence-Based Policy Solutions with New Jersey Residents

Dane Gambrell on April 3, 2025 in Reboot Democracy

“Building on New Jersey's longstanding experience using new technology to engage with the public in how we make policy, the State AI Task Force pioneered a novel approach using AI to help us develop more robust recommendations faster and with the benefit of large-scale community engagement. Rather than relying solely on traditional bench research or expert consultations, the Task Force's Workforce Training and Jobs of the Future Working Group developed a process that paired AI-powered research with direct input from thousands of New Jersey workers to address the pressing challenge of the impact of AI on work…The Working Group used Policy Synth, a free and open source AI-based toolkit developed by Citizens Foundation and The GovLab, to synthesize the findings from research and engagements with private and public sector workers in the state. The Working Group used this approach to develop evidence-based policies while also enhancing democratic participation. As a result of this process, which the Working Group undertook over eight weeks, New Jersey is implementing free AI skills training for all public servants and developing an AI-powered labor market monitoring system to help workers navigate career trends.”

Read article

AI and Labor


How Tech Oligarchs Are Using AI Hype to Push Mass Layoffs

Dane Gambrell on February 27, 2025 in Reboot Democracy

Earlier this month, software firm Workday announced that it would be laying off more than 1,700 workers – or about 8.5% of its workforce – to redirect investment towards artificial intelligence. Google and Meta are among the tech giants that have cited the need to invest resources in AI development as the reason for cutting jobs. Elon Musk’s so-called Department of Government Efficiency (DOGE) is now bringing the AI-fueled mass layoff strategy to the federal government. At the same time, workers are building power to confront AI-driven mass layoffs through collective action.

Read article

AI for Governance


America First, Science Last? Kratsios Hearing Signals Empty AI Strategy

Beth Simone Noveck on March 3, 2025 in Reboot Democracy

“Trump's pick to lead OSTP professed American AI leadership at his confirmation hearing while ignoring the dismantling of the very scientific institutions that sustain it. With no vision beyond deregulation and no defense of research funding, he failed to address—and Senators failed to ask—about the role of AI in modernizing government or the growing influence of Elon Musk in shaping federal AI policy.”

Read article

AI for Governance

AI Will Completely Transform Local Government in the Next 10 Years – If We Embrace It Effectively

Neil Kleiman on March 14, 2025 in International City/County Management Association

Artificial intelligence has the potential to significantly enhance the efficiency, responsiveness, and citizen engagement of local governments. This blog post identifies three main areas where AI can drive transformation: automating routine administrative tasks, enabling better data analysis for informed decision-making, and improving public engagement through AI-driven platforms. The piece emphasizes the need for investment in AI training and cultural change within local institutions to fully realize these benefits. If implemented effectively, AI could make local governance more proactive, efficient, and citizen-focused.

Read article