Governing AI
California Wants AI Chatbots to Remind Users They Aren’t People - Gizmodo, AJ Dellinger, February 4, 2025
“A new bill proposed by California Senator Steve Padilla would require chatbots that interact with children to offer occasional reminders that they are, in fact, a machine and not a real person. The bill, SB 243, was introduced as part of an effort to regulate the safeguards that companies operating chatbots must put in place in order to protect children. That last bit is particularly germane to the current moment, as kids have been shown to be quite vulnerable to these systems. Researchers at the University of Cambridge have found that children are more likely than adults to view AI chatbots as trustworthy, even viewing them as quasi-human.” Read the full bill here.
AI-generated child sex abuse images targeted with new laws - BBC, Sima Kotecha, February 2, 2025
“Four new laws will tackle the threat of child sexual abuse images generated by artificial intelligence (AI), the UK government has announced. Possessing AI paedophile manuals - which teach people how to use AI for sexual abuse - will also be made illegal, and offenders will get up to three years in prison. Other laws set to be introduced include making it an offence to run websites where paedophiles can share child sexual abuse content or provide advice on how to groom children. And the Border Force will be given powers to instruct individuals who they suspect of posing a sexual risk to children to unlock their digital devices for inspection when they attempt to enter the UK, as CSAM is often filmed abroad. Depending on the severity of the images, this will be punishable by up to three years in prison. Software can ‘nudify’ real images and replace the face of one child with another, creating a realistic image. In some cases, the real-life voices of children are also used, meaning innocent survivors of abuse are being re-victimised.”
A CNN article quotes Yvette Cooper, the United Kingdom’s interior minister: “We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person. It is vital that we tackle child sexual abuse online as well as offline so we can better protect the public from new and emerging crimes.” A piece from The Conversation notes that AI-generated child sexual exploitation imagery tops the list when it comes to “content that most regulatory authorities across the globe agree should be censored.”
Thomson Reuters v. Ross Intelligence: Copyright Infringement and Fair Use - United States District Court, District of Delaware, February 11, 2025
A WIRED piece describes how generative AI is becoming the focus of a growing number of legal fights regarding the use of copyrighted material: “... as many major AI tools were developed by training on copyrighted works including books, films, visual artwork, and websites. Right now, there are several dozen lawsuits currently winding through the US court system, as well as international challenges in China, Canada, the UK, and other countries.”
Thomson Reuters, the maker of the legal research platform Westlaw, sued Ross Intelligence, a competitor, for using Westlaw headnotes to train its AI legal research tool. In its summary judgment, the United States District Court for the District of Delaware rejected Ross's claim that its use of Reuters’s data qualified as fair use and found Ross liable for infringing its competitor’s copyright. The ruling further complicates AI companies’ “fair use” argument for training AI on copyrighted materials, which could create barriers to further AI development. It may also limit how comprehensively and diversely AI models can be trained, leading to less innovative and capable AI tools. The case will go to trial to determine the extent of the damages.
AI and IR
France, tech companies and philanthropies back $400 million foundation to support public interest AI - Fortune, Jeremy Kahn, February 10, 2025
The French government announced the creation of Current, a new foundation dedicated to the creation of AI ‘public goods,’ at the AI Action Summit in Paris. The foundation was established Tuesday with an initial endowment of $400 million from the French government, the AI Collaborative (part of eBay founder Pierre Omidyar’s philanthropy, the Omidyar Network), and a coalition of other countries, technology companies, and philanthropic organizations. It will make grants to fund work that supports public interest projects around AI. Martin Tisné, the CEO of the AI Collaborative and France’s special envoy for public interest AI at the AI Action Summit, told Fortune he envisions many of these projects involving the creation of new public datasets that can be used to build AI models that serve the public interest.
A Time article notes that some see the event as a reality check, pushing back against what they believe to be overblown fears about the technology, while others—including top AI researchers—are concerned that critical safety issues are being ignored. TechCrunch reports that the initiative is backed by several national governments, including Germany, Chile, Kenya, Morocco, and Nigeria, while the United States is notably absent.
UK and US refuse to sign international AI declaration - BBC, Zoe Kleinman and Liv McMahon, February 11, 2025
“The UK and US have not signed an international agreement on artificial intelligence (AI) at a global summit in Paris. The statement, signed by dozens of countries including France, China and India, pledges an ‘open’, ‘inclusive’ and ‘ethical’ approach to the technology's development. In a brief statement, the UK government said it had not been able to add its name to it because of concerns about national security and ‘global governance.’ US Vice President JD Vance told delegates in Paris that too much regulation of artificial intelligence (AI) could ‘kill a transformative industry just as it's taking off’. Vance told world leaders that AI was ‘an opportunity that the Trump administration will not squander’ and said ‘pro-growth AI policies’ should be prioritised over safety. His comments appear to put him at odds with French President Emmanuel Macron, who defended the need for further regulation. ‘We need these rules for AI to move forward,’ Macron said at the summit.”
A WorldCrunch piece highlights how the U.S. delegation, led by Vice President JD Vance, rejected what it sees as “excessive” tech regulations. Vance also warned against working with China, reflecting a worldview critics see as dismissive of global cooperation and resistant to ethical AI governance. The Standard reports that the UK delegation declined to sign the declaration “because it failed to provide enough ‘practical clarity’ on ‘global governance’ of artificial intelligence, or address ‘harder questions’ about national safety.”
AI for Governance
Elon Musk’s DOGE Is Working on a Custom Chatbot Called GSAi - WIRED, Paresh Dave, Zoe Schiffer, and Makena Kelly, February 6, 2025
"Elon Musk's Department of Government Efficiency (DOGE) is pushing to rapidly develop ‘GSAi,’ a custom generative AI chatbot for the US General Services Administration. One goal of the initiative, which hasn’t been previously reported, is to boost the day-to-day productivity of the GSA’s roughly 12,000 employees, who are tasked with managing office buildings, contracts, and IT infrastructure across the federal government, according to the two people. Musk’s team also seemingly hopes to use the chatbot and other AI tools to analyze huge swaths of contract and procurement data, one of them says."
A New York Times article on developing AI to find budget savings reports that the custom AI may be used to identify budget cuts and detect waste and abuse, and that “A.I. would be a key part of their cost-reduction work” within the General Services Administration. The Washington Post reports that DOGE staff may be using AI tools to process sensitive data from the Department of Education, though it’s unclear what specific tools DOGE is using or for what purposes.
Elon Musk’s A.I.-Fuelled War on Human Agency - The New Yorker, Kyle Chayka, February 12, 2025
This article argues that, as head of the Department of Government Efficiency (DOGE), Elon Musk is remaking the federal bureaucracy in a way that devalues human agency and labor: “The federal government is, in effect, suddenly being run like an A.I. startup; Musk, an unelected billionaire, a maestro of flying cars and trips to Mars, has made the United States of America his grandest test case yet for an unproved and unregulated new technology.”
Multiple States Are Banning the Use of DeepSeek by Government Employees
GovTech reported that “the first state-level DeepSeek ban appeared in Texas, where Gov. Greg Abbott announced that the state would not allow the use of AI and social media apps affiliated with the People’s Republic of China and the Chinese Communist Party on government-issued devices… ‘Texas will continue to protect and defend our state from hostile foreign actors,’ said Abbott.” Additionally, NBC News reported that the state of New York has banned DeepSeek: “Gov. Kathy Hochul issued the directive on Monday, citing ‘serious concerns’ about DeepSeek’s apparent censorship and its potential for foreign government surveillance.” According to a recent article from The National Law Review, “The Virginia Governor signed Executive Order 26 ‘banning the use of China’s DeepSeek AI on state devices and state-run networks… China’s DeepSeek AI poses a threat to the security and safety of the citizens of the Commonwealth of Virginia… We must continue to take steps to safeguard our operations and information from the Chinese Communist Party. This executive order is an important part of that undertaking.’”
Artificial Bilingualism, Public Service Delivery, and Democratic Pluralism - Reboot Democracy, by Justin Longo, February 13, 2025
"Real-time translation tech has come a long way in recent years thanks to AI, natural language processing (NLP), and machine translation (MT). Tools like Meta’s SeamlessM4T and Google’s AudioPaLM now handle direct speech-to-speech translation in over 100 languages, and DeepL lets me read reports from the le Gouvernement du Québec in English. As someone who studies digital governance from a Canadian perspective, I think such translation technology—where language barriers dissolve seamlessly through AI intermediaries—is going to raise uncomfortable questions about what official bilingualism means in the Canadian context. Beyond the Canadian context, I think MT presents an interesting edge case for the provision of public services through AI and challenges principles of democratic pluralism."
AI and Public Engagement
National AI Opinion Monitor: AI Trust and Knowledge in America - NAIOM, Katherine Ognyanova and Vivek Singh, February 2025
According to the latest National AI Opinion Monitor (NAIOM) report, Rutgers University’s Katherine Ognyanova and Vivek Singh surveyed 5,000 Americans on public trust in AI, in the companies that use it, and in the news content produced by it: "Close to half (47%) of Americans report “a fair amount” or “a great deal” of confidence in AI to act in the public interest. Confidence is higher among men (52%), non-White respondents (55%), those in age group 25-44 (55%), graduate degree holders (60%), high-income earners ($100K+, 63%), Democrats (56%), and urban area residents (53%)."
AI and Problem Solving
AI tool helps find life-saving medicine for rare disease - Penn Medicine News, February 5, 2025
“After combing through 4,000 existing medications, an artificial intelligence tool helped uncover one that saved the life of a patient with idiopathic multicentric Castleman’s disease (iMCD). This rare disease has an especially poor survival rate and few treatment options. The patient could be the first of many to have their lives saved by an AI prediction system, which could potentially apply to other rare conditions. While Castleman’s disease is relatively rare—about 5000 are diagnosed in the US each year—the findings of this study could save the lives of many more.”
The End of Search, The Beginning of Research - One Useful Thing, Ethan Mollick, February 3, 2025
This article discusses the convergence of Reasoners and Agents in AI, highlighting their potential to revolutionize research. Reasoners enhance AI's ability to "think" and solve complex problems, while Agents autonomously pursue goals. The piece highlights the differences in quality and approach between OpenAI's Deep Research and Google's Deep Research, describing the former as a powerful example of a narrow agent capable of conducting high-quality academic research. Mollick concludes that while general-purpose agents are still in development, specialized AI systems are already transforming expert work, with further advancements expected.
Covering the launch of OpenAI’s Deep Research, Firstpost notes that the tool has outperformed Google’s Gemini Thinking and Grok-2 in some evaluations.
AI and Labor
Tech giant Workday lays off 1,750 employees in shift to AI - The Washington Post, Kelsey Ables, February 6, 2025
“Workday, the tech giant that sells workforce management software, is laying off about 1,750 employees, CEO Carl Eschenbach said in a Wednesday email that pointed to ‘increasing demand’ for artificial intelligence as having ‘the potential to drive a new era of growth’ for the company… the Bay Area-based firm will be ‘prioritizing innovation investments like AI and platform development.’ Workday’s move echoes those of other corporations that have justified cutting employees to create room for AI. Meta, after cutting thousands of jobs in 2022 and 2023, has recruited talent to boost its AI work and efforts to build the metaverse, The Post reported. Last year, Google cut hundreds of engineering and hardware workers, aiming to focus on AI.”
Covering the mass layoffs at Workday, Fast Company notes that they add to “a rough start for the tech industry in 2025, which has seen major tech giants, including Meta, Microsoft, and Amazon, trim their workforces.”
Why Is This C.E.O. Bragging About Replacing Humans With A.I.? - The New York Times, Noam Scheiber, February 2, 2025
Klarna CEO Sebastian Siemiatkowski expects Klarna's workforce to eventually fall to 2,000 due to AI adoption. A chatbot has replaced 700 customer service agents, resolving cases 9 minutes faster. Bloomberg reports that the company stopped hiring new employees a year ago, instead using AI to replace jobs. Siemiatkowski has “been able to convince employees to get on board with the shift, by promising they’ll see a chunk of any productivity gains they reap from AI in their paycheck.”
Hollywood writers say AI is ripping off their work. They want studios to sue - The Los Angeles Times, Wendy Lee, February 12, 2025
“As AI innovation advances, writers are urging entertainment companies to take legal action against AI firms that they allege are using writers’ work to train AI models without their permission. Some major studios have held discussions with AI firms about the technology, causing concerns among Hollywood talent that more of their jobs will be automated to save money.”
How the U.S. Labor Movement Is Confronting AI - Power at Work, Alex Press, February 6, 2025
Despite AI’s potential for liberating workers from tedious tasks and grunt work, in the hands of employers, the technology is already being used to replace workers and undermine their bargaining power. Rather than replacing them outright, AI’s biggest threat may be as an innovation in management technologies, with workers governed by decisions made by algorithms while simultaneously surveilled by other algorithms. As AI becomes the focus of more strikes and collective bargaining negotiations, a debate is rising to the surface: Should unions try to stop AI, or merely secure a say in its development and use?
The Anthropic Economic Index - Anthropic, Kunal Handa et al., February 10, 2025
Anthropic – the company behind the AI assistant Claude – has launched a new index which the company says aims to help track the impact of AI on the economy and workforce over time: “The Index’s initial report provides data and analysis based on millions of anonymized conversations on Claude.AI, revealing the clearest picture yet of how AI is being incorporated into real-world tasks across the modern economy.” As summarized by Gadgets360, the “initial report of the research reveals that software engineering fields are the most impacted by this new technology. The research found the arts, design, sports, entertainment, and media fields to be in the second spot in terms of jobs being impacted by AI. Apart from finding the impacted markets, the report also claimed that AI's usage is leaning more towards augmentation compared to automation.”