
News That Caught Our Eye #57
Published by Dane Gambrell and Angelique Casem on May 8, 2025
In the news this week: Engagements with public professionals on AI – one from Dubai, one from New Jersey – show why governments should use AI to do more public listening. The White House's AI directives lack essential training provisions. While DOGE controversially hires an undergraduate to rewrite HUD regulations with AI, the Institute for Progress uses AI to summarize over ten thousand public comments about AI in government. Palantir's "ImmigrationOS" database for ICE raises mass surveillance concerns. Researchers warn that replacing federal workers with hallucination-prone AI could be "catastrophic," while startups are building advanced language models on globally distributed hardware instead of energy-intensive data centers. AI errors are increasing with new "reasoning systems." Meanwhile, the UAE is making AI education mandatory across its K-12 public schools. Read more in this week's AI News That Caught Our Eye.
In the news this week
- AI for Governance: Smarter public institutions through machine intelligence
- AI and Public Engagement: Bolstering participation
- AI and Labor: Worker rights, safety and opportunity
- AI Infrastructure: Computing resources, data systems and energy use
- Governing AI: Setting the rules for a fast-moving technology
- AI and Education: Preparing people for an AI-driven world
Upcoming Events
May 7, 2025, 2:00 PM ET: Governing Through Uncertainty: Using Data, Digital Tools, and Generative AI to Strengthen Public Service, Neil Kleiman, Professor, Northeastern University & Faculty Director, InnovateUS; Lamar Gardere, Executive Director, The Data Center
May 8, 2025, 2:00 PM ET: AI Prompts Unleashed: Transforming the Effectiveness and Efficiency of Your Policy Analysis, Program Evaluation, and Community Engagement, Deborah Stine, Founder and Chief Instructor, Science and Technology Policy Academy
May 13, 2025, 3:30 PM ET: Cómo Redactar una Política de IA Generativa para tu Jurisdicción (How to Write a Generative AI Policy for Your Jurisdiction), Santiago Garces, Chief Information Officer, City of Boston
May 14, 2025, 2:00 PM ET: Leading with Confidence: Helping Your Team Navigate AI in Public Service, Neil Kleiman, Professor, Northeastern University & Faculty Director, InnovateUS
May 14, 2025, 4:00 PM ET: From Vision to Implementation: AI for Principals and School Boards, Tony Howard, Principal, Jacksonville North Pulaski School District, AR; Jordan Smith, Subject Coordinator for Educational Technology, Artificial Intelligence, and Innovation, Anglophone East School District in New Brunswick, Canada
May 7, 2025, 10:00 AM ET: Demystifying AI and Empowering Workers - UC Berkeley Labor Center. A three-part webinar series cutting through the AI hype to help workers better understand digital technologies in the workplace and how to respond through collective bargaining and public policy
For more information on events, visit https://innovate-us.org/workshops?tab=live
AI for Governance
Global AI Watch: Listening to Public Servants - What Dubai and New Jersey Teach Us About AI Readiness
“To effectively evaluate AI in government, we need to distinguish between beneficial applications that genuinely improve public services and problematic deployments designed primarily to eliminate jobs or cut costs without consideration for impacts. Making these judgments requires listening carefully to public professionals at the front lines. Two recent efforts—from New Jersey and Dubai—reveal starkly different approaches to gathering this crucial feedback, with telling results… Dubai's comprehensive 60-question AI survey yielded just 4% participation while New Jersey's streamlined, AI-assisted approach garnered 5,000 responses in three weeks—yet both revealed similar insights about public servants' AI readiness. This natural experiment demonstrates that effective government listening must evolve to be shorter, faster, and continuous, while measuring success beyond efficiency to include quality, transparency, and meaningful human augmentation.”
Read article
People Before Platforms: Why OMB’s AI Memos Won’t Work Without Training
“Last month, the White House's Office of Management and Budget released two memoranda that shift how our government will approach artificial intelligence. There's just one problem: the people tasked with implementing these ambitious directives haven't been prepared for this moment. To take the White House's vision for AI from idea to implementation, public servants need access to training that is tailored to their specific roles, grounded in real-world context, and aligned with the day-to-day realities of public service.”
Read article
DOGE Put a College Student in Charge of Using AI to Rewrite Regulations
“A young man with no government experience who has yet to even complete his undergraduate degree is working for Elon Musk’s so-called Department of Government Efficiency (DOGE) at the Department of Housing and Urban Development (HUD) and has been tasked with using artificial intelligence to rewrite the agency’s rules and regulations. Christopher Sweet, a University of Chicago student on leave, is leading the AI-driven deregulation effort, flagged as part of a broader push tied to the Project 2025 agenda. Despite internal skepticism and unclear qualifications, Sweet is overseeing AI reviews of HUD policy and proposing large-scale regulatory rollbacks across departments.”
Read article
Big Tech takes on immigration with new migrant tracking software for ICE
“Federal officials are building a sprawling new database system they're calling ‘ImmigrationOS’ to track and target millions of people living illegally in the United States. Trump has suggested that he wants to remove not just immigrants but U.S. citizens if they're deemed dangerous, and said he's ordered Attorney General Pam Bondi to investigate. ICE agents could use ImmigrationOS to figure out where a targeted person lives and works, when they're likely to be home, who they live with, the kind of car they drive and even what restaurants or shops they frequent. ‘What they have built is a really, really capable engine for analyzing big data, linking it together and picking out parts of it. That gives you the ability to collate this data on somebody and go looking for a reason to prosecute them,’ Quintin said. ‘Even if you think you're safe for now, you might not be safe for long.’”
Read article
AI and Public Engagement
Citizens' assemblies inch into the mainstream
A recent episode of the British podcast The Rest is Politics, hosted by Alastair Campbell and Rory Stewart, focused on the opportunities and challenges of recent experiments with citizens’ assemblies in the UK to tackle complex issues such as climate change. Citizens' assemblies involve randomly selected groups of citizens tasked with solving complex issues through informed deliberation. Campbell and Stewart argue that the traditional representative system feels outdated, with MPs often disconnected from their constituents, and suggest that citizens' assemblies offer a more direct, non-partisan approach to decision-making, fostering thoughtful debate. Their endorsement adds legitimacy to sortition and deliberative democracy, offering a compelling alternative to the current system, especially for complex moral and social issues.
Read article
What does America think the Trump Administration should do about AI?
In January 2025, President Trump tasked the Office of Science and Technology Policy (OSTP) with developing an AI Action Plan to promote American leadership in AI. OSTP requested input from the public and received 10,068 submissions. Last week, these submissions were made public. We used AI to extract recommendations from all of these and then created a searchable database. We hope the database will serve as a valuable tool for researchers and policymakers to discover and prioritize AI policy ideas. Here, we offer a high-level analysis of key themes and ideas within the recommendations. View the database at https://www.aiactionplan.org/
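The pipeline described here is straightforward to prototype. Below is a minimal sketch in Python of one way an extraction-and-search step like this could work. It is illustrative only: the extract_recommendations stub stands in for whatever language-model prompting the Institute for Progress actually used (not specified here), and the schema and names are assumptions, not their implementation.

```python
import sqlite3

def extract_recommendations(comment_text: str) -> list[str]:
    # Stand-in for the LLM call that pulls discrete policy
    # recommendations out of a free-text public comment; a real
    # pipeline would prompt a model here. This naive heuristic just
    # keeps sentences containing "should".
    return [s.strip() + "." for s in comment_text.split(".") if "should" in s]

# A full-text-searchable store using SQLite's built-in FTS5 index.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE recs USING fts5(submission_id, recommendation)")

# Toy stand-ins for the 10,068 OSTP submissions.
submissions = {
    "0001": "The government should fund AI literacy programs. We love AI.",
    "0002": "OSTP should require impact assessments for federal AI systems.",
}

for sid, text in submissions.items():
    for rec in extract_recommendations(text):
        conn.execute("INSERT INTO recs VALUES (?, ?)", (sid, rec))
conn.commit()

# Researchers can then search recommendations by keyword.
for sid, rec in conn.execute(
    "SELECT submission_id, recommendation FROM recs WHERE recs MATCH ?",
    ("literacy",),
):
    print(sid, rec)
```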
Read article
AI and Labor
Replacing Federal Workers with Chatbots Would Be a Dystopian Nightmare
“Hallucinations are one of the many issues that plague so-called generative artificial intelligence systems like OpenAI’s ChatGPT, xAI’s Grok, Anthropic’s Claude or Meta’s Llama. These are design flaws, problems in the architecture of these systems, that make them problematic. Yet these are the same types of generative AI tools that the DOGE and the Trump administration want to use to replace, in one official’s words, ‘the human workforce with machines.’ This is terrifying. There is no ‘one weird trick’ that removes experts and creates miracle machines that can do everything that humans can do, but better. The prospect of replacing federal workers who handle critical tasks—ones that could result in life-and-death scenarios for hundreds of millions of people—with automated systems that can’t even perform basic speech-to-text transcription without making up large swaths of text, is catastrophic. If these automated systems can’t even reliably parrot back the exact information that is given to them, then their outputs will be riddled with errors, leading to inappropriate and even dangerous actions. Automated systems cannot be trusted to make decisions the way that federal workers—actual people—can.”
Read article
The AI jobs crisis is here, now
“On Monday, April 29th, Luis von Ahn, the billionaire CEO of the popular language learning app Duolingo, made a public announcement that his company is officially ‘going to be AI-first.’ Duolingo, von Ahn wrote in an email to all employees that was also posted to LinkedIn, will ‘gradually stop using contractors to do work that AI can handle.’ The CEO took pains to note that ‘this isn’t about replacing Duos with AI.’ According to one such Duolingo contractor, this is not accurate. For one thing, it’s not a new initiative. And it absolutely is about replacing workers: Duolingo has already replaced up to 100 of its workers—primarily the writers and translators who create the quirky quizzes and learning materials that have helped stake out the company’s identity—with AI systems…This is a glimpse of the AI jobs crisis that is unfolding right now—not in the distant future—and that’s already more pervasive than we might think.”
Read article
AI Infrastructure
These Startups Are Building Advanced AI Models Without Data Centers
“Researchers have trained a new kind of large language model (LLM) using GPUs dotted across the world and fed private as well as public data, a move that suggests that the dominant way of building AI could be disrupted. The model, Collective-1, was built by Flower AI and Vana using a new tool, Photon, for efficient distributed training. Collective-1 has 7 billion parameters, but larger models are underway. This approach allows smaller players to collaborate across ordinary internet connections, making advanced AI development more accessible. Vana also enables users to contribute private data with ownership controls. Distributed AI could unlock sensitive, decentralized datasets and shift power away from dominant tech firms and nations with exclusive access to supercomputers.”
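Photon's internals aren't detailed in the article, but the core idea behind this kind of distributed training can be illustrated with federated averaging: each node fits a model on data that never leaves the machine, and only the parameters are exchanged. The numpy toy below sketches that principle under those assumptions; it is not Flower AI's or Vana's implementation.

```python
import numpy as np

# Toy federated-averaging loop: four "nodes" (each standing in for a
# GPU somewhere on the internet) fit a shared linear model on local
# data, exchanging only parameters, never the raw data itself.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])

def local_step(w, X, y, lr=0.1):
    # One gradient-descent step on a node's private data.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# Each node holds private data that never leaves the machine.
nodes = []
for _ in range(4):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    nodes.append((X, y))

w = np.zeros(2)  # shared global parameters
for _ in range(50):
    # Nodes train locally (in parallel in a real system)...
    local_ws = [local_step(w, X, y) for X, y in nodes]
    # ...then a coordinator averages the parameters and redistributes them.
    w = np.mean(local_ws, axis=0)

print(w)  # converges near true_w without any node sharing its data
```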
Read article
A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse
“More than two years after the arrival of ChatGPT, tech companies, office workers and everyday consumers are using A.I. bots for an increasingly wide array of tasks. But there is still no way of ensuring that these systems produce accurate information. The newest and most powerful technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. These systems use mathematical probabilities to guess the best response, not a strict set of rules defined by human engineers. So they make a certain number of mistakes. ‘Despite our best efforts, they will always hallucinate,’ said Amr Awadallah, the chief executive of Vectara, a start-up that builds A.I. tools for businesses, and a former Google executive. Companies like OpenAI and Google steadily improved their A.I. systems and reduced the frequency of these errors. But with the use of new reasoning systems, errors are rising. The latest OpenAI systems hallucinate at a higher rate than the company’s previous system, according to the company’s own tests. So these companies are leaning more heavily on a technique that scientists call reinforcement learning. With this process, a system can learn behavior through trial and error. It is working well in certain areas, like math and computer programming. But it is falling short in other areas.”
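Hallucination rates of the kind tools like Vectara's track are typically measured by checking whether each claim in a model's output is actually supported by its source material. The sketch below illustrates the idea; the word-overlap heuristic is a crude stand-in for a real entailment model, and the function names and threshold are illustrative assumptions.

```python
def support_score(sentence: str, source: str) -> float:
    # Fraction of a sentence's content words found in the source text.
    # A crude stand-in for an entailment model that judges whether the
    # source actually supports the claim.
    words = {w.lower().strip(".,") for w in sentence.split() if len(w) > 3}
    return sum(w in source.lower() for w in words) / max(len(words), 1)

def hallucination_rate(summary: str, source: str, threshold: float = 0.5) -> float:
    # Share of summary sentences not grounded in the source.
    sentences = [s.strip() for s in summary.split(".") if s.strip()]
    ungrounded = [s for s in sentences if support_score(s, source) < threshold]
    return len(ungrounded) / max(len(sentences), 1)

source = "The meeting was moved from Tuesday to Thursday at noon."
summary = "The meeting was moved to Thursday. Attendees praised the catering."
print(hallucination_rate(summary, source))  # 0.5 -- the second sentence is invented
```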
Read article
Governing AI
Why the AI Race Could Be Upended by a Judge’s Decision on Google
“A federal judge issued a landmark ruling last year, saying that Google had become a monopolist in internet search. But in a hearing that began last week to figure out how to fix the problem, the emphasis has frequently landed on a different technology, artificial intelligence. In U.S. District Court in Washington last week, a Justice Department lawyer argued that Google could use its search monopoly to become the dominant player in A.I. Google executives disclosed internal discussions about expanding the reach of Gemini, the company’s A.I. chatbot. And executives at rival A.I. companies said that Google’s power was an obstacle to their success…An antitrust lawsuit about the past has effectively turned into a fight about the future, as the government and Google face off over proposed changes to the tech giant’s business that could shift the course of the A.I. race. For more than 20 years, Google’s search engine dominated the way people got answers online. Now the federal court is in essence grappling with whether the Silicon Valley giant will dominate the next era of how people get information on the internet, as consumers turn to a new crop of A.I. chatbots to answer questions, find solutions to their problems and learn about the world.”
Read article
Conservative activist Robby Starbuck sues Meta over AI responses about him
“Conservative activist Robby Starbuck has filed a defamation lawsuit against Meta alleging that the social media giant’s artificial intelligence chatbot spread false statements about him, including that he participated in the riot at the U.S. Capitol on Jan. 6, 2021. Starbuck, who claims he was in Tennessee during the riot, says the AI also falsely accused him of Holocaust denial and criminal activity, sparking a flood of damaging misinformation. Despite his attempts to get Meta to retract the claims, he alleges the company responded only by erasing his name from AI responses. The $5 million lawsuit adds to growing legal scrutiny over false or harmful AI outputs, as experts warn disclaimers alone may not shield tech companies from liability. Meta has acknowledged the issue and apologized, but Starbuck argues the company still hasn’t fixed the root problem.”
Read article
Researchers secretly experimented on Reddit users with AI-generated comments
“A group of researchers covertly ran a months-long ‘unauthorized’ experiment in one of Reddit’s most popular communities using AI-generated comments to test the persuasiveness of large language models. Posing as real users, the researchers used AI to craft personalized replies, sometimes adopting false identities like a trauma counselor or assault survivor without user consent. Moderators of the community r/changemyview condemned the study as ‘psychological manipulation,’ filed a formal complaint with the University of Zurich, and called for the paper to be withheld. Reddit banned the accounts involved and is pursuing legal action, while the university has pledged to tighten ethical oversight moving forward.”
Read article
Safety Co-Option and Compromised National Security: The Self-Fulfilling Prophecy of Weakened AI Risk Thresholds
“In this paper, the authors show how some recent approaches in AI have enabled technologists to engage in what can be called ‘safety revisionism’, replacing established safety practices and terminology with vague, less rigorous alternatives. This shift promotes the rapid adoption of military AI systems, but at the cost of lowering safety and security standards. If this trend continues, the way we evaluate risks from foundation models in national security could lead to a dangerous race to the bottom, undermining U.S. national security interests. Safety critical and defense systems must follow trusted assurance frameworks with clear risk thresholds, and foundation models should be no exception. Therefore, evaluation frameworks for military AI systems must protect U.S. critical infrastructure and stay aligned with international humanitarian law.”
Read article
Research Radar: Race, Democracy, and AI - Spencer Overton Offers a Framework for a More Inclusive Digital Future
“In this week's Research Radar: The future of American democracy may hinge on whether artificial intelligence supercharges racial division or helps build more inclusive participation. Law professor Spencer Overton's groundbreaking two-part analysis reveals how AI technologies simultaneously threaten to amplify racial voter suppression and deception while potentially increasing participation by communities of color—with outcomes determined not by technological inevitability but by human choices about governance, design, and accountability.”
Read article
AI and Education
UAE mandates AI curriculum in schools
“The UAE Cabinet has approved a new curriculum that will make AI a mandatory subject in government schools. This program will be implemented across all educational levels, from kindergarten to Grade 12, starting in the upcoming academic year. This initiative is part of the UAE’s strategy to prepare future generations for a technologically advanced world. The curriculum will provide students with knowledge about AI, including data, algorithms, and applications. It will also cover ethical considerations and the societal impacts of AI.”
Read article
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.