
News That Caught Our Eye #53
Published by Dane Gambrell & Angelique Casem on April 10, 2025
In the news this week: Trump administration guidelines for federal agencies’ use and procurement of AI tools call for “maximizing” US-made AI but otherwise don’t stray far from Biden-era guardrails. Controversy erupts over Musk’s DOGE team allegedly using AI to monitor federal workers for disloyalty. A New York State court reprimanded a plaintiff for using an AI avatar to make his oral argument. Stanford HAI releases its new AI Index Report showing that AI tools are becoming more efficient and accessible, while a study from Elon University’s Imagining the Digital Future Center finds that many experts are concerned about AI’s potential to diminish empathy, independent thinking, identity, and moral judgment in the near future. Read more about AI, governance and democracy.
In the news this week
- AI and Public Engagement: Bolstering participation
- AI Infrastructure: Computing resources, data systems and energy use
- AI and International Relations
- Governing AI: Setting the rules for a fast-moving technology
- AI for Governance: Smarter public institutions through machine intelligence
- AI and Labor: Worker rights, safety and opportunity
- AI and Education: Preparing people for an AI-driven world
- Events
AI and Public Engagement
Engaging Youth on Responsible Data Reuse: 5 Lessons Learnt from a Multi-Country Experiment
The NextGenData project explored how to meaningfully engage youth in responsible data reuse. Young people often must share personal data to access services but lack control over how it's used, leading to distrust and disengagement. This project involved over 70 youth across four countries in co-designing data practices, highlighting the need for data literacy, real-world examples, and locally grounded engagement. It emphasized flexibility, support, and inclusive methodologies tailored to local capacities. Key lessons included the importance of context-aware design, peer facilitation, and balancing participation incentives. The authors used the findings from the project to develop a Youth Engagement Toolkit for organizations engaging young people in decision-making.
Read article
Being Human in 2035: Experts predict significant change in the ways humans think, feel, act and relate to one another in the Age of AI
A recent report by Elon University’s Imagining the Digital Future Center warns of profound transformations in the human experience by 2035 due to AI. Based on input from 301 global experts, the study highlights concerns over diminishing empathy, independent thinking, identity, and moral judgment, while acknowledging potential gains in curiosity, decision-making, and creativity. Half of the experts predict a mix of positive and negative changes, while many fear AI’s seductive efficiencies may erode essential human traits. Yet, some hold hope that adaptable humans will find ways to thrive. The report ultimately urges careful, ethical integration of AI into society.
Read article
Branching Out: A Third Legislative Chamber for the AI Age
The proliferation and collection of large amounts of citizens’ data has led to the rise of "political machines" – AI systems used in government to make decisions around resource allocation. Political anthropologist Eduardo Albrecht argues that establishing a "Third House" – a new legislative body specifically designed to oversee the deployment and operation of political machines – could enable citizens to meaningfully engage with and oversee the AI tools used by their government.
Read article
Brazil’s Online Consultation System is Reimagining Democracy for the Digital Age
The Brazilian Senate's Public Consultation system has gathered over 30 million yes/no votes on legislation in the past decade from 15 million registered citizens. While these inputs provide lawmakers with basic public sentiment data, AI enhancements could transform the system by creating understandable bill summaries, simulating legislation impacts, synthesizing citizen feedback beyond yes/no responses, matching bills to citizens' interests, and enabling cross-border collaboration – evolving Brazil's engagement framework into a model for participatory democracy.
Read article
Digital Technologies and Participatory Governance in Local Settings: Comparing Digital Civic Engagement Initiatives During the COVID-19 Outbreak
A study on how digital technologies can enable better and more responsive governance during times of crisis: “Governance paradigms have undergone a deep transformation during the COVID-19 pandemic, necessitating agile, inclusive, and responsive mechanisms to address evolving challenges. Participatory governance has emerged as a guiding principle, emphasizing inclusive decision-making processes and collaboration among diverse stakeholders. In the outbreak context, digital technologies have played a crucial role in enabling participatory governance to flourish, democratizing participation, and facilitating the rapid dissemination of accurate information. These technologies have also empowered grassroots initiatives, such as civic hacking, to address societal challenges and mobilize communities for collective action. This study delves into the realm of bottom-up participatory initiatives at the local level, focusing on two emblematic cases of civic hacking experiences launched during the pandemic, the first in Wuhan, China, and the second in Italy. Through a comparative lens, drawing upon secondary sources, the aim is to analyze the dynamics, efficacy, and implications of these initiatives, shedding light on the evolving landscape of participatory governance in times of crisis. Findings underline the transformative potential of civic hacking and participatory governance in crisis response, highlighting the importance of collaboration, transparency, and inclusivity.”
Read article
AI Infrastructure
AI Index Report 2025
This AI Index Report highlights AI’s influence across society, the economy, and global governance. It includes in-depth analyses of the evolving landscape of AI hardware, novel estimates of inference costs, and new analyses of AI publication and patenting trends. The 2025 Index observes that AI models are becoming more powerful and capable of accomplishing increasingly complex tasks; AI is becoming more integrated in everyday life and widely used across industries; and AI is becoming more efficient, affordable and accessible. At the same time, there are disparities among countries and regions in their ability to access the gains brought by AI, and the systems governing responsible use remain incomplete.
Read article
Artificial Intelligence in the Fight Against Misinformation: A Conversation with Ed Bice
“In this discussion, moderated by Burnes Center Director and Professor Beth Simone Noveck, we explored the evolution of misinformation, the challenges of combating it in today’s media environment, and the potential for AI-driven solutions to restore trust in information. One of the key takeaways from the discussion was the failure of the original misinformation response model. For nearly a decade, many experts believed that if civil society organizations and journalists were well-networked, if social media platforms cooperated, and if fact-checking responses were shared across platforms, we could stop the spread and mitigate the impacts of misinformation. A key topic of discussion was the role of generative AI in the misinformation crisis…AI is accelerating misinformation by making deepfakes and manipulated content more accessible, but Bice pushed back against the idea that this is the primary cause of media distrust. Bice shared his long-term vision, emphasizing the need to move away from reliance on large social media platforms and instead build grassroots, community-owned AI models.”
Read article
AI and International Relations
Trump’s new tariff math looks a lot like ChatGPT’s
“Trump slapped a 10 percent baseline tariff on all imports into the US, including from uninhabited islands, plus absurdly high rates on specific countries, supposedly based on ‘tariffs charged to the USA’ — which didn’t match up to other estimates. Stock markets have plummeted and consumers are facing down sharp price hikes on potentially almost everything they buy. Where did these numbers come from? Apparently, an oversimplified calculation that several major AI chatbots happen to recommend. A number of X users have realized that if you ask ChatGPT, Gemini, Claude, or Grok for an ‘easy’ way to solve trade deficits and put the US on ‘an even playing field’, they’ll give you a version of this ‘deficit divided by exports’ formula with remarkable consistency.”
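To make the arithmetic in that quote concrete, here is a minimal sketch of the “deficit divided by exports” calculation, with the 10 percent baseline applied as a floor. The function name and all dollar figures below are hypothetical illustrations of the formula as described in the article, not numbers from the article or any official methodology.

    # Hypothetical sketch of the "deficit divided by exports" formula
    # described above; all figures are illustrative only.
    def naive_tariff_rate(us_imports: float, us_exports: float, floor: float = 0.10) -> float:
        """US trade deficit with a country, divided by that country's exports
        to the US (i.e., US imports from it), with the 10% baseline as a floor."""
        deficit = us_imports - us_exports
        if us_imports == 0:
            return floor
        return max(deficit / us_imports, floor)

    # Illustrative numbers only (not from the article):
    print(f"{naive_tariff_rate(us_imports=100e9, us_exports=40e9):.0%}")  # 60%
    print(f"{naive_tariff_rate(us_imports=50e9, us_exports=55e9):.0%}")   # 10% floor applies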
Read article
Governing AI
Trump White House releases guidance for AI use, acquisition in government
“The White House Office of Management and Budget released a pair of memos to provide agencies with guardrails for how they use and purchase artificial intelligence in the government, replacing Biden administration guidance but maintaining some of the same structures. The first new memo provides guardrails for use, and replaces Biden’s directive on the same topic. That document states agencies are to focus on three priorities when accelerating the federal use of AI — innovation, governance and public trust — which align with an executive order on the technology from the first Trump administration. But that directive also maintains things that were established under the Biden administration, like chief AI officers and their council and a special management process for potentially risky AI uses it now calls ‘high-impact.’ Similarly, the second memo on AI acquisition replaces guidance on government purchasing of the tech. That memo maintains its predecessors’ emphasis on the benefits of a competitive AI marketplace, tracking AI performance and managing risks, and cross-functional collaboration, but adds new language aimed at ‘maximizing’ use of AI that’s made in the United States.”
Read article
White House Releases New Policies on Federal Agency AI Use and Procurement
“The White House Office of Management and Budget (OMB) delivered on President Trump’s decisive Executive Order to remove barriers to American leadership in AI by releasing two revised policies on Federal Agency Use of AI and Federal Procurement. These memos were revised at the direction of the Executive Order and in coordination with the Assistant to the President on Science and Technology and the Office of Science and Technology Policy (OSTP). ‘President Trump recognizes that AI is a technology that will define the future. This administration is focused on encouraging and promoting American AI innovation and global leadership, which starts with utilizing these emerging technologies within the Federal Government. Today’s revised memos offer much needed guidance on AI adoption and procurement that will remove unnecessary bureaucratic restrictions, allow agencies to be more efficient and cost-effective, and support a competitive American AI marketplace,’ said Lynne Parker, Principal Deputy Director of the White House OSTP.”
Read article
Man Employs A.I. Avatar in Legal Appeal, and Judge Isn’t Amused
A 74-year-old plaintiff representing himself in a New York appeals court used an AI-generated avatar to deliver his pre-recorded argument without disclosing it wasn't real. After originally mistaking the avatar for the plaintiff’s attorney, Justice Sallie Manzanet-Daniels halted the presentation when she discovered the deception. The plaintiff later apologized, explaining he created the avatar because he gets nervous speaking in court. The incident highlights the need to establish clear guardrails about how AI should and should not be used in legal proceedings and rules governing its disclosure.
Read article
AI for Governance
Why US States Are the Best Labs for Public AI
What makes Public AI enticing is 1) sidestepping market incentives that tend to neglect the provision of public goods and services; and 2) the rapidly decreasing cost of developing fit-for-purpose AI systems. US states can and should lead the charge to develop and deploy AI in the public interest. States can be laboratories of twenty-first century democracy and examples for future US administrations and the world. The closer government gets to the people, the more it is trusted. States and localities provide the governance that matters most in our everyday lives, and their leaders are held accountable for delivering accordingly. This creates a much greater pragmatism, a more immediate and meaningful sense of accountability, and a better set of incentives for applying AI responsibly. State leadership in AI development allows public services to be locally optimized.
Read article
The Risks of Government by AI
This panel discussion, moderated by Kareem Crayton of the Brennan Center, surfaces concerns over the opaque and unregulated use of AI in public systems. Speakers include Vittoria Elliot, Platforms and Power Reporter of Wired magazine, Suresh Venkatasubramanian, Professor of Data Science and Computer Science at Brown University, and Faiza Patel, Senior Director of the Brennan Center. They highlight risks such as lack of transparency, misuse of government-held personal data, and AI’s potential to reinforce systemic biases—particularly in voting, where flawed AI could wrongly disqualify voters. The panel underscores the importance of data privacy laws, transparency, and public awareness. They advocate for skepticism toward AI's "magical thinking," public engagement, informed advocacy, and tech-literate leadership. Examples from other countries show ethical tech governance is possible, and that the public must demand AI that truly serves societal needs.
Read article
Public Governance and Emerging Technologies: Values, Trust, and Regulatory Compliance
“This open access book focuses on public governance’s increasing reliance on emerging digital technologies. ‘Disruptive’ or ‘emerging’ digital technologies, such as artificial intelligence and blockchain, are often portrayed as highly promising, with the potential to transform established societal, economic, or governmental practices. Unsurprisingly, public actors are therefore increasingly experimenting with the application of these emerging digital technologies in public governance….The book shows that the success of using emerging technologies in public governance depends to a large extent on the choices made by key stakeholders in public administration, legislative and regulatory bodies, and tech companies. When using and regulating emerging technologies in public governance, it is crucial to uphold public values and comply with legislation in a way that prioritizes the citizen’s perspective. To this end, the book offers an interdisciplinary approach based on qualitative and conceptual research.”
Read article
AI and Labor
Musk's DOGE using AI to snoop on U.S. federal workers, sources say
“Trump administration officials have told some U.S. government employees that Elon Musk's DOGE team of technologists is using artificial intelligence to surveil at least one federal agency’s communications for hostility to President Donald Trump and his agenda, said two people with knowledge of the matter… the surveillance would mark an extraordinary use of technology to identify expressions of perceived disloyalty in a workforce already upended by widespread firings and severe cost cutting.” The Guardian reports that workers at federal agencies including the Department of Veterans Affairs, Environmental Protection Agency, and State Department believe that DOGE “may be snooping on conversations, using software to track computer activity and, possibly, using artificial intelligence to scan for disloyalty or mentions of diversity, equity and inclusion (DEI) buzzwords. Many fear losing their jobs, as thousands already have.”
Read article
AI and Education
Open Call for Fellowship Applications, Academic Year 2025-2026
Harvard’s Berkman Klein Center for Internet and Society is opening applications for its 2025-2026 fellowships. “Fellows will work in Cambridge, MA to conduct independent work as part of one of the Center’s topical workstreams, in collaboration with BKC faculty, staff, students, and the broader BKC community. Those who are selected for a fellowship will collaborate with a team of Harvard faculty, staff, students, and others on projects, tools, events, and publications related to their workstream topic: AI Interpretability Ethics and Implications, AI Ethics with Allen Lab for Democracy Renovation, Agentic AI Protocols and Risk Mitigations, Artificial General Intelligence Futurecasting and Policy Development, and more.”
Read article
Events
New workshops from InnovateUS
Join us for a series of workshops on technology and innovation in government. On April 15 at 2PM ET, Waldo Jaquith and Kate Drummond from the U.S. Digital Response will present a tool to simplify tech procurement. Also on April 15, at 4PM ET, Cas Burns of The Learning Agency will explore how AI and speech recognition can enhance reading instruction. On April 16 at 2PM ET, Harrison McRae, Director of Emerging Technologies for the Commonwealth of Pennsylvania, will discuss Pennsylvania’s pilot using ChatGPT in government. Finally, on April 17 at 2PM ET, Kara Fitzpatrick, former Director of Experience Design on the Cancer Moonshot team, will share insights on human-centered design from that work.
Read article
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.