News That Caught Our Eye #51

In the news this week: The Reboot Blog on why state leaders are meeting at Princeton University this June to develop practical frameworks for responsible AI use. A new series with the head of citizen engagement for the Brazilian Senate explores how artificial intelligence could enhance participatory lawmaking. The struggle to shape American AI policy intensifies with the Federation of American Scientists urging the Trump administration to cement US leadership through research investments, while tech companies actively lobby against state AI regulation. Google's Jigsaw upgrades its moderation tools for online communities. Fresh research from Harvard and Wharton reveals AI excels as a team player, while Elon University’s Lee Rainie explores our complex love-hate relationship with emerging technologies. Learn more in this week's AI News That Caught Our Eye.

Angelique Casem

Dane Gambrell



Governing AI

States Leading the Way: Why We're Convening State Leaders to Shape America's AI Future - Reboot Democracy, Beth Simone Noveck, March 27, 2025

“While federal AI policy shifts toward deregulation, my fellow state AI leaders will gather in June with researchers, entrepreneurs, and technologists at Princeton University to develop practical frameworks for responsible AI implementation in government. The two-day working conference will focus on: expanding equitable access, building public trust, strengthening governance, unlocking data responsibly, and driving innovation aligned with democratic principles.” For more information, see https://stateaileaders.org/ 

Securing American AI Leadership: A Strategic Action Plan for Innovation, Adoption, and Trust - Federation of American Scientists, Oliver Stephenson, Clara Langevin, and Karinna Gerhardt, March 24, 2025

Response by the Federation of American Scientists (FAS) to the Request for Information (RFI) issued by the Office of Science and Technology Policy (OSTP) in February 2025 regarding the development of an Artificial Intelligence (AI) Action Plan: “To sustain America’s leadership in AI innovation, accelerate adoption across the economy, and guarantee that AI systems remain secure and trustworthy, we offer a set of actionable policy recommendations. Developed by FAS in partnership with prominent AI experts, industry leaders, and research institutions—including contributors to the recent FAS Day One 2025 Project and the 2024 AI Legislative Sprint—these proposals are structured around four strategic pillars: 1) unleashing AI innovation, 2) accelerating AI adoption, 3) ensuring secure and trustworthy AI, and 4) strengthening existing world-class U.S. government institutions and programs.”

Emboldened by Trump, A.I. Companies Lobby for Fewer Rules - The New York Times, Cecilia Kang, March 24, 2025

“In recent weeks, Meta, Google, OpenAI and others have asked the Trump administration to block state A.I. laws and to declare that it is legal for them to use copyrighted material to train their A.I. models. They are also lobbying to use federal data to develop the technology, as well as for easier access to energy sources for their computing demands. And they have asked for tax breaks, grants and other incentives. The shift has been enabled by Mr. Trump, who has declared that A.I. is the nation’s most valuable weapon to outpace China in advanced technologies. Many A.I. policy experts worry that such unbridled growth could be accompanied by, among other potential problems, the rapid spread of political and health disinformation; discrimination by automated financial, job and housing application screeners; and cyberattacks.”

Truth And Technology: Deepfakes in Law Enforcement Interrogations - University of Pennsylvania Journal of Constitutional Law, Hillary B. Farber and Anoo Vyas, March 21, 2025

This legal analysis examines the emerging issue of AI deepfakes and how they could enable law enforcement to fabricate evidence during custodial interrogations. It traces the history and legal precedents surrounding police deception and fabricated evidence ploys, arguing that the unprecedented capabilities of generative AI necessitate a reevaluation of these tactics. The authors contend that AI-generated fake evidence, due to its speed of creation, realistic appearance, and potential for manipulation, poses a significant threat to due process and the voluntariness of confessions. The authors advocate for special rules or a ban on the use of generative AI in interrogations to safeguard constitutional rights and the integrity of the justice system.

AI for Governance

Apolitical Develops AI Self-Assessment Tool for Public Officers in Diverse Roles - GovInsider, Si Ying Thian, March 23, 2025

“More than 1,500 public servants from over 50 countries have used the AI Readiness Check, a tool that allows public servants to self-assess their artificial intelligence (AI) readiness. The tool was launched by Apolitical, the UK-based social learning network for public servants globally, earlier this month. The AI Readiness Check measures the AI proficiency of public officers across four key areas. Taking just six minutes to complete, public officers can use the tool to assess their AI skills and learn how to effectively and responsibly use AI in the government. This tool allows public officers to self-identify themselves either as leaders, implementers, or users of AI. The assessment then measures their proficiency across four key areas: ethical AI use, AI innovation, operational and decision-making applications, and workforce readiness.”

GSA debuts new generative AI tool for workers - FedScoop, Rebecca Heilweil, March 20, 2025

“The General Services Administration on Thursday revealed a new generative AI tool designed to boost efficiency and help automate repetitive tasks. The platform, now available to GSA staff, comes amid anxiety that the Department of Government Efficiency might use artificial intelligence to surveil or replace federal workers, who are being laid off in large swaths across the government. The GSA chatbot can access a series of large language models, including technology from Anthropic and Meta. The system resembles other AI chatbots, and it’s designed to respond to user prompts and help staff with basic tasks, like writing. ‘This tool reflects our proactive approach to innovation and our commitment to providing secure and effective solutions tailored to the unique needs of government work,’ Stephen Ehikian, the acting administrator of GSA, said in a statement. ‘The opportunity to incorporate generative AI into Government work is akin to giving a personal computer to every worker. We are just at the start of our journey using this new tool, but the demand for this technology exists across GSA and the broader government.’”

The Algorithmic State Architecture (ASA): An Integrated Framework for AI-Enabled Government - Arxiv, Zeynep Engin, Jon Crowcroft, David Hand, and Philip Treleaven, March 2025

This study from the UK proposes a framework for how AI enables government transformation: “As artificial intelligence transforms public sector operations, governments struggle to integrate technological innovations into coherent systems for effective service delivery. This paper introduces the Algorithmic State Architecture (ASA), a novel four-layer framework conceptualising how Digital Public Infrastructure, Data-for-Policy, Algorithmic Government/Governance, and GovTech interact as an integrated system in AI-enabled states. Unlike approaches that treat these as parallel developments, ASA positions them as interdependent layers with specific enabling relationships and feedback mechanisms. Through comparative analysis of implementations in Estonia, Singapore, India, and the UK, we demonstrate how foundational digital infrastructure enables systematic data collection, which powers algorithmic decision-making processes, ultimately manifesting in user-facing services. Our analysis reveals that successful implementations require balanced development across all layers, with particular attention to integration mechanisms between them. The framework contributes to both theory and practice by bridging previously disconnected domains of digital government research, identifying critical dependencies that influence implementation success, and providing a structured approach for analysing the maturity and development pathways of AI-enabled government systems.”

Pennsylvania Shares Look at New Generative AI Pilot Program - Government Technology, Bill O’Boyle, March 24, 2025 

“Gov. Josh Shapiro this week joined leaders from OpenAI, Carnegie Mellon University, and Pennsylvania's labor community to unveil the results of the Commonwealth's first-in-the-nation Generative AI Pilot Program. The findings revealed that employees had a highly positive experience, reporting an average time savings of 95 minutes per day while using ChatGPT for writing, research, summarization and IT support. The pilot underscored the importance of human oversight, demonstrated AI's potential to streamline government operations and showed that Commonwealth employees across various roles, ages, and demographics benefited from the tool. Employees across multiple roles — including human resources, information technology, policy, and program management — benefited from the tool, helping them work more efficiently and focus on more complex, high-value tasks.”

The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise - Harvard Business School, Fabrizio Dell’Acqua et al., March 21, 2025

A study by researchers at Harvard, Wharton and Procter & Gamble finds that AI can be an effective team member on a real-world problem solving task: “We examine how artificial intelligence transforms the core pillars of collaboration— performance, expertise sharing, and social engagement—through a pre-registered field experiment with 776 professionals at Procter & Gamble, a global consumer packaged goods company. Working on real product innovation challenges, professionals were randomly assigned to work either with or without AI, and either individually or with another professional in new product development teams. Our findings reveal that AI significantly enhances performance: individuals with AI matched the performance of teams without AI, demonstrating that AI can effectively replicate certain benefits of human collaboration. Moreover, AI breaks down functional silos. Without AI, R&D professionals tended to suggest more technical solutions, while Commercial professionals leaned towards commercially-oriented proposals. Professionals using AI produced balanced solutions, regardless of their professional background. Finally, AI’s language-based interface prompted more positive self-reported emotional responses among participants, suggesting it can fulfill part of the social and motivational role traditionally offered by human teammates. Our results suggest that AI adoption at scale in knowledge work reshapes not only performance but also how expertise and social connectivity manifest within teams, compelling organizations to rethink the very structure of collaborative work.”

AI and IR

Why the World is Looking to Ditch US AI Models - MIT Technology Review, Eileen Guo, March 25, 2025

“A few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government. As I wrote in my dispatch, the Trump administration's shocking, rapid gutting of the US government (and its push into what some prominent political scientists call “competitive authoritarianism”) also affects the operations and policies of American tech companies—many of which, of course, have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies’ willingness to engage with and invest in communities that have smaller user bases—especially non-English-speaking ones. As a result, some policymakers and business leaders—in Europe, in particular—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI.”

AI and Public Engagement

From Citizen to Senator: Artificial Intelligence and the Reinvention of Citizen Lawmaking in Brazil - Reboot Democracy, Beth Simone Noveck, Luis Kimaid, Alisson Bruno Dias De Queiroz, and Dane Gambrell, March 26, 2025

“Brazil's Federal Senate has pioneered four innovative citizen participation mechanisms that transform ordinary Brazilians from occasional voters into active lawmakers, with over 120,000 legislative ideas submitted and 11 million votes cast. Based on interviews with the head of the Brazilian Senate's e-Citizenship office and a leading expert on legislative innovation in Brazil, this series of four posts explores Brazil's current democratic ecosystem and future aspirations for how artificial intelligence could make citizen participation even more impactful.”

Trump Administration Receives 8,755 Comments for AI Action Plan — AI: The Washington Report - Mintz, Alexander Hecht, Bruce Sokler, March 24, 2025

“The Trump administration’s Office of Science and Technology Policy received 8,755 comments in response to its Request for Information for the development of its AI Action Plan implementing the Trump AI Executive Order issued in January. The comment period ended on March 15. Although the comments have not been made publicly available by the government, some were released by the commenters, which provide insights into stakeholders’ priorities regarding the Trump administration AI policies, including the use of copyrighted information to train AI models and federal preemption of state AI laws. At this point, while the comments define the range of issues that might go into the AI Action Plan, it remains to be seen which policy proposals might end up in the Plan, which is scheduled to be announced by mid-July 2025.”

Our Love-Hate Relationship with Digital Technology - Reboot Democracy, Lee Rainie, March 24, 2025

“At the Imagining the Digital Future Center, we have found that Americans are fearful in important ways about AI – particularly generative AI and large language models (LLMs) – and yet the user base is exploding. On the fear side, our surveys show that people are especially concerned about the way AI systems will erode their personal privacy, their opportunities for employment, how these systems might change their relationships with others, their potential impact on basic human rights, the way they will disrupt people’s physical and mental health. At the level of institutions and big systems, they also have great anxiety that AI will negatively impact politics and elections, further erode the level of civility in society, worsen economic inequality, and be harmful to both K-12 education and higher education. Those concerns are leavened to a degree by the public’s sense that AI will be helpful in health and science discovery. Still, overall and in broad terms these are grim expectations. And yet … the survey results we just reported show that 52% of U.S. adults already are LLM users, making them one of the fastest – if not the fastest – adopted consumer technology in history.”

Using LLMs to Support Online Communities - Medium, Jigsaw, March 21, 2025

Google’s Jigsaw announced new features for its Perspective API – a machine learning-based tool that helps moderators reduce toxicity in online conversations: “We’re excited to announce beta access to customizable attributes for Perspective API. Customizable attributes enable users to provide their community’s guidelines in natural language and receive a score that reflects whether any given comment was consistent with them. This capability, built with the latest Gemini models, empowers any user, from mods to other community members, to highlight the comments they care about, which they can then label or analyze according to their own needs. We hope our customizable attributes will complement Perspective API’s existing pre-defined attributes to enable an approach that combines broad coverage with fine-grained management. Enabling these new attributes could, for example, allow mods for a community focused on local information to more easily separate out concerns of local residents from those of tourists, or an advice community experiencing rapid growth to maintain the engaging and supportive quality of its posts, without overwhelming the members working to manage the influx of new posts.”
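The customizable attributes described above build on Perspective API's existing pre-defined attributes. As a rough illustration, the sketch below constructs a request to the publicly documented comments:analyze endpoint using the pre-defined TOXICITY attribute; the request format for the new beta customizable attributes is not shown, and while the endpoint and field names follow Perspective's public documentation, this is an illustrative sketch rather than official client code.

```python
# Minimal sketch of scoring a comment with Perspective API's
# pre-defined TOXICITY attribute via the comments:analyze endpoint.
import json
import urllib.request

ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_analyze_request(comment_text: str) -> dict:
    """Build the JSON body for a comments:analyze call."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
        "languages": ["en"],
    }


def score_comment(comment_text: str, api_key: str) -> float:
    """POST the comment and return its summary TOXICITY score (0 to 1)."""
    body = json.dumps(build_analyze_request(comment_text)).encode("utf-8")
    req = urllib.request.Request(
        f"{ANALYZE_URL}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    # The response nests per-attribute scores under attributeScores.
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

A moderation pipeline would typically compare the returned score against a community-chosen threshold before flagging a comment for review.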

AI Infrastructure

Microsoft is exploring a way to credit contributors to AI training data - TechCrunch, Kyle Wiggers, March 21, 2025

“Microsoft is launching a research project to estimate the influence of specific training examples on the text, images, and other types of media that generative AI models create.… the project will attempt to demonstrate that models can be trained in such a way that the impact of particular data — e.g. photos and books — on their outputs can be ‘efficiently and usefully estimated.’ Many of the companies argue that fair use doctrine shields their data-scraping and training practices. But creatives — from artists to programmers to authors — largely disagree. Microsoft itself is facing at least two legal challenges from copyright holders: The New York Times accuses it of infringing on The Times’ copyright by deploying models trained on millions of its articles, and several software developers have filed suit claiming that the firm’s GitHub Copilot AI coding assistant was unlawfully trained using their protected works. Few large labs have established individual contributor payout programs outside of inking licensing agreements with publishers, platforms, and data brokers. That the company is investigating ways to trace training data is notable in light of other AI labs’ recently expressed stances on fair use.”

AI and Problem Solving

Can We Make AI Less Power-Hungry? These Researchers Are Working on It - Ars Technica, Jacek Krywko, March 24, 2025

The rising power consumption of AI data centers has become a critical concern, with US data centers' energy usage skyrocketing from 76 terawatt-hours in 2018 to 176 terawatt-hours in 2023. Driven by large language models like ChatGPT, rising energy costs have prompted researchers to develop computational techniques that reduce the amount of energy needed to power AI models and make them run more efficiently. But with the rise of proprietary models, some are calling for greater transparency from AI companies about their power consumption. While projections suggest data centers might consume up to 12% of US electricity by 2030, researchers remain optimistic that technological innovations like photonic chips and advanced semiconductor designs will help mitigate the energy challenge.

AI breakthrough is ‘revolution’ in weather forecasting - The Independent, Anthony Cuthbertson, March 20, 2025

“Cambridge scientists have made a major breakthrough in weather forecasting after developing a new AI prediction model that is tens of times better than current systems. The new model, called Aardvark Weather, replaces the supercomputers and human experts used by forecasting agencies with a single artificial intelligence model that can run on a standard desktop computer.

This turns a multi-stage process that takes hours to generate a forecast into a prediction model that takes just seconds. ‘Beyond weather, its applications extend to broader Earth system forecasting, including air quality, ocean dynamics, and sea ice prediction.’”

AI and Education

Illinois’ AI-in-Schools Bill Could Help the State Catch Up - StateScoop, Sophia Fox-Sowell, March 25, 2025

“Last week, Illinois state Rep. Laura Faver Dias introduced HB 2503, a bill that would create a task force to develop guidance on the use of AI tools by students and teachers. It would also require school districts to report their uses of AI to the Illinois State Board of Education. The bill would ask the state to put together some guidance and also training for both teachers and students to help them understand and implement that guidance so that AI could be used in ways that are helpful for instruction, protect students’ safety and avoid uses that might be harmful.”

Events

InnovateUS:

  • April 8, 2025, 2pm ET - Diseño participativo de servicios públicos con apoyo de inteligencia artificial/Co-Creating Public Services with AI Assistance, Sofia Bosch Gomez, Assistant Professor in the Department of Art + Design and Fellow at the Burnes Center for Social Change, Northeastern University

  • April 9, 2025, 2pm ET - Innovating in the Public Interest: Winning Early, Anita McGahan, Senior Research Scientist, The Burnes Center for Social Change

  • April 10, 2025, 2pm ET - Starting with Curiosity: A Beginner’s Guide for Public Servants, Jamie Kimes, OIT Contractor, State of Colorado and Caleb Williams, Founder & Principal, dataPIG

  • To register and see more workshops, visit https://innovate-us.org/workshops.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.