Governing AI
US president Trump repeals Biden’s AI safety order - Ground News, January 22, 2025
“President Trump has rescinded former President Biden's 2023 executive order aimed at establishing safety guidelines for AI, effectively shifting towards a more deregulated approach. This rescinded order had mandated that AI developers disclose safety testing results and establish various assessments related to AI risks, which many industry leaders initially welcomed.” Click here to read the order revoking the Biden administration's AI orders. Read the full list of executive orders repealed by President Trump on Inauguration Day here.
OpenAI presents its preferred version of AI regulation in a new ‘blueprint’ - TechCrunch, Kyle Wiggers, January 13, 2025
“To fuel the data centers necessary to develop and run AI, OpenAI’s blueprint recommends ‘dramatically’ increased federal spending on power and data transmission, and meaningful buildout of ‘new energy sources,’ like solar, wind farms, and nuclear. OpenAI — along with its AI rivals — has previously thrown its support behind nuclear power projects, arguing that they’re needed to meet the electricity demands of next-generation server farms.”
The AI Safety Paradox: When 'Safe' AI Makes Systems More Dangerous - The Collective Intelligence Project, James Padolsey
“The AI safety community has rallied around the goal of ethical alignment: the hope of making individual AI models reliably abide by human values. AI labs painstakingly tune their models to produce safe, ethical responses. But this well-intentioned focus might not just be insufficient—it could be actively harmful.”
AI Infrastructure
Trump Announces $100 Billion A.I. Initiative - The New York Times, Cecilia Kang & Cade Metz, January 21, 2025
“President Trump on Tuesday announced a joint venture between OpenAI, SoftBank and Oracle to create at least $100 billion in computing infrastructure to power artificial intelligence. The venture, called Stargate, adds to tech companies’ significant investments in U.S. data centers, huge buildings full of servers that provide computing power. Stargate could eventually invest as much as $500 billion over four years. The three companies plan to contribute funds to the venture, which will be open to other investors and start with 10 data centers already under construction in Texas.” Some raised questions about the project’s ambitious financial and operational goals.
Chipped: imagination, invention, innovation & the new stone age - Five-Part Series on Resilient Futures, J.A. Ginsburg, January 17, 2025
This five-part series traces the history of the silicon chip, the hardware that powers AI systems: “It has been a century since physicist Julius Edgar Lilienfeld, an immigrant to the United States, patented the idea of using a semiconductor material to make a transistor. A hundred years later, silicon microchips, some with tens of billions of transistors, are in everything from computers to cars to coffeemakers. They make our modern world possible. Now, with Artificial Intelligence, they are poised to run the world.”
FTC Issues Staff Report on AI Partnerships & Investments Study - Federal Trade Commission, News Staff, January 17, 2025
“The Federal Trade Commission today issued a staff report on the corporate partnerships and investments formed between the largest cloud service providers (CSPs)—Alphabet, Inc., Amazon.com, Inc., and Microsoft Corp.—and two of the most prominent generative AI developers—Anthropic PBC and OpenAI OpCo, LLC. The report highlights several key terms of the AI partnerships, which include: significant equity and certain revenue-sharing rights for CSP partners in their AI developer partners; certain consultation, control, and exclusivity rights CSP partners hold to varying degrees with respect to their AI developer partners; commitments that require AI developers to spend a large portion of their CSP partner’s investment on cloud services from their partner; and more…”
AI-Related Programmatic Advances at the FTC (June 2021 - January 2025) - Federal Trade Commission, January 17, 2025
“Between June 2021 and January 2025, the Federal Trade Commission (FTC) took significant steps to address challenges posed by artificial intelligence (AI). It banned companies, such as Rite Aid, from using facial recognition technologies that falsely accused individuals of shoplifting and prohibited firms from training AI models on improperly obtained data. The FTC launched ‘Operation AI Comply’ to combat AI-driven scams and finalized rules against fake reviews, including AI-generated ones. Additionally, it amended the Telemarketing Sales Rule to address deceptive AI robocalls, established the Office of Technology, and initiated studies on AI investments and partnerships.”
AI for Governance
Disagreements with Elon Musk prompted Ramaswamy’s ‘DOGE’ exit - The Washington Post, Faiz Siddiqui, Elizabeth Dwoskin, Jeff Stein, January 21, 2025
“President Donald Trump’s order establishing the ‘Department of Government Efficiency’ aims to give billionaire Elon Musk’s team sweeping access to operations at federal agencies, revamping its structure after competing visions left one of its leaders seeking an exit. The new structure — which has DOGE taking over the U.S. Digital Service, part of the Executive Office of the President — emerged after months of behind-the-scenes maneuvering between Musk and fellow billionaire entrepreneur Vivek Ramaswamy, the DOGE co-leader who will depart to run for governor of Ohio. Deep philosophical differences over how the panel should operate helped spur Ramaswamy to leave, according to more than a half-dozen people with knowledge of the situation, many of whom spoke on the condition of anonymity to describe private conversations.”
Federal Agency Outlines Recommendations for Accessible AI - Government Technology, Julia Edinger, January 16, 2025
“First, the Access Board launched a webpage hosting relevant information and recordings from its AI series. The AI series, ‘Developing Artificial Intelligence (AI) Equity, Access & Inclusion for All Series,' is the key way the Access Board is approaching this work with its partners… Thus far, the series has had five targeted sessions, the most recent of which was the Tuesday webinar. The first, in July, was an informational session. A series of public hearings followed, two focused on the disability community and one for federal agencies and AI practitioners.”
C.I.A.’s Chatbot Stands In for World Leaders - The New York Times, Julian E. Barnes, January 18, 2025
“The chatbot is part of the spy agency’s drive to improve the tools available to C.I.A. analysts and its officers in the field, and to better understand adversaries’ technical advances. Core to the effort is to make it easier for companies to work with the most secretive agency.”
Government digital document app launching in summer - BBC, Graham Fraser, January 21, 2025
“The [UK] government is to make digital versions of a range of official documents available via a dedicated app and a digital wallet, as part of what ministers say is an attempt to bring interactions with the public ‘in tune with modern life’. Veteran cards and driving licences will be the first to be incorporated into a gov.uk wallet, which is being launched this year. The government is also testing a chatbot which could be added to the app which would ‘help people find answers to complex and niche questions’. Earlier, it was announced civil servants will soon be given access to a set of tools powered by artificial intelligence (AI) and named ‘Humphrey’ after the scheming official from the classic sitcom ‘Yes, Minister.’”
AI and Engagement
The Role of Big Data and AI in Smart Cities and Urban Planning - ResearchGate, Umair Ejaz, January 2025
This study examines how cities can use AI to make urban planning processes more effective and efficient: “Big data and AI enable cities to track and analyze usage patterns in real time, such as traffic flows, energy consumption, and water usage. This allows urban planners to allocate resources more effectively, ensuring that infrastructure and services are deployed when and where they are needed most. For example, smart traffic management systems can reduce congestion and optimize traffic light timings, leading to smoother traffic flows and less fuel consumption.”
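The excerpt stays at a high level; purely as an illustration (not drawn from the study itself), the sketch below shows the kind of simple rule a smart traffic-light controller might apply, splitting green time across an intersection's approaches in proportion to measured vehicle flows. All approach names and numbers are hypothetical.

```python
# Illustrative sketch only: allocate green time at one intersection in
# proportion to measured vehicle flows on each approach. Values are
# hypothetical; a real smart-traffic system would be far more involved.

def allocate_green_time(flows_per_minute, cycle_seconds=120, min_green=10):
    """Split a signal cycle among approaches proportionally to demand."""
    total_flow = sum(flows_per_minute.values())
    if total_flow == 0:
        # No demand data: fall back to an even split.
        even = cycle_seconds / len(flows_per_minute)
        return {approach: even for approach in flows_per_minute}
    greens = {}
    for approach, flow in flows_per_minute.items():
        share = flow / total_flow
        # Enforce a minimum green phase (this can slightly stretch the
        # cycle; acceptable for a sketch).
        greens[approach] = round(max(min_green, share * cycle_seconds), 1)
    return greens

if __name__ == "__main__":
    # Hypothetical sensor readings (vehicles per minute per approach).
    flows = {"north": 42, "south": 38, "east": 12, "west": 8}
    print(allocate_green_time(flows))
```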
Virtual Cities: From Digital Twins to Autonomous AI Societies - IEEE Xplore, Andrey Nechesov, Ivan Dorokhov, Janne Ruponen, January 17, 2025
This study looks at how AI could be used to create fully functional models of cities, which in turn could aid in testing and developing new technologies: “Virtual Cities (VCs) transcend simple digital replicas of real-world systems, emerging as complex sociotechnical ecosystems where autonomous AI entities function as citizens. Agentic AI systems are on track to engage in cultural, economic, and political activities, effectively forming societal structure within VC. This paper proposes an integrated simulation framework that combines physical, structural, behavioral, cognitive, and data fidelity layers, allowing multi-scale simulation from microscopic interactions to macro-urban dynamics… Our results demonstrate that such virtual environments can support the emergence of AI-driven societies, where governance mechanisms like Decentralized Autonomous Organizations (DAOs) and an Artificial Collective Consciousness (ACC) provide ethical and regulatory oversight. By blending horizon scanning with systems engineering method for defining novel AI governance models, this study reveals how VCs can catalyze breakthroughs in urban innovation while driving socially beneficial AI development - consequently opening a new frontier for exploring human–AI coexistence.”
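The paper describes its framework conceptually; as a loose, hypothetical illustration of the basic idea of AI "citizens" acting inside a simulated city (not the authors' framework or code), a toy agent loop might look like this:

```python
# Toy illustration only: autonomous "citizen" agents interacting in a
# simulated city. Class names and behaviors are hypothetical and are not
# taken from the paper's simulation framework.

import random

class Citizen:
    def __init__(self, name):
        self.name = name
        self.energy_used = 0.0

    def step(self, city):
        # Each tick the agent consumes some energy and occasionally
        # participates in a toy governance vote.
        self.energy_used += random.uniform(0.5, 1.5)
        if random.random() < 0.1:
            city.votes += 1

class City:
    def __init__(self, n_citizens=100):
        self.citizens = [Citizen(f"agent-{i}") for i in range(n_citizens)]
        self.votes = 0

    def simulate(self, ticks=24):
        for _ in range(ticks):
            for citizen in self.citizens:
                citizen.step(self)
        total_energy = sum(c.energy_used for c in self.citizens)
        return {"votes": self.votes, "energy_units": round(total_energy, 1)}

if __name__ == "__main__":
    print(City().simulate())
```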
Google signs deal with AP to deliver up-to-date news through its Gemini AI chatbot - AP News, Matt O’Brien, January 15, 2025
“... News organizations have expressed concerns about AI companies using their material without permission — or payment — and then unfairly competing with them for advertising revenue that comes when people use a search engine or click on a news website. The New York Times and other outlets have sued OpenAI and other AI companies for copyright infringement and, on Tuesday, presented their arguments before a New York federal judge.”
Global AI Optimism Increases as Usage Grows - Google Public Policy, Kent Walker, January 14, 2025
“A new global survey from Ipsos and Google, ‘Our Life with AI: From innovation to application,’ reveals that attitudes towards AI are trending more positive as its use grows. The survey of 21,000 people across 21 countries shows that global AI usage has jumped ten percentage points to 48% and excitement about AI’s potential now exceeds concerns (57% vs. 43%, up from 50% / 50% last year).”
In Moderation: Automation in the Digital Public Sphere - Journal of Business Ethics, Diana Acosta Navas, January 18, 2025
This study looks at the principles guiding the concept of free speech on digital platforms and how these platforms can be reshaped to create better environments for online discussion: “The digital public forum has challenged many of our normative intuitions and assumptions. Many scholars have argued against the idea of free speech as a suitable guide for digital platforms’ content policies. This paper has two goals. Firstly, it suggests that there is a version of the free speech principle which is suitable for platforms that have adopted a commitment to free speech to guide their content curation strategies. I call it the Principle of Epistemic Resilience. Secondly, it aims to analyze some of the practical implications of the principle. It argues that upholding this principle in the digital public forum requires a comprehensive strategy, including (1) the automated removal and demotion of contents that threaten to cause serious harm; (2) changes to engagement optimization algorithms; and (3) changes to affordances inside the platform. These changes are necessary to create a fertile environment for deliberation, which is crucial to epistemic resilience. If such a comprehensive strategy is absent, platforms may actively undermine the societal value of speech.”
AI and Problem Solving
Atif Javed Uses AI to Ensure No One Gets Lost in Translation - Medium, Fast Forward Team, January 21, 2025
“Tarjimly uses AI to match, train, and pre-translate languages, enabling faster, more accurate connections between volunteers and those in need. By leveraging deep data on low-resource languages, such as Rohingya and Swahili, Tarjimly fine-tunes large language models (LLMs) specifically for underrepresented languages. This targeted approach addresses the shortcomings of mainstream LLMs, which often lack robust support for these languages.”
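The article does not detail Tarjimly's pipeline; the sketch below only illustrates the general technique it references, supervised fine-tuning of a pretrained multilingual translation model on a small parallel corpus. The checkpoint, language codes, and sentence pairs are placeholders, not Tarjimly's data or models.

```python
# Sketch of the general technique (fine-tuning a pretrained translation
# model on parallel text), NOT Tarjimly's actual pipeline. Checkpoint,
# language codes, and example sentences are placeholders.

from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/nllb-200-distilled-600M"  # example multilingual model
tokenizer = AutoTokenizer.from_pretrained(
    checkpoint, src_lang="eng_Latn", tgt_lang="swh_Latn"
)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Tiny hypothetical parallel corpus (English -> Swahili).
pairs = {
    "en": ["Where is the clinic?", "My child has a fever."],
    "sw": ["Kliniki iko wapi?", "Mtoto wangu ana homa."],
}
raw = Dataset.from_dict(pairs)

def tokenize(batch):
    # Encode source sentences and target translations (as labels).
    return tokenizer(batch["en"], text_target=batch["sw"],
                     truncation=True, max_length=128)

train_data = raw.map(tokenize, batched=True, remove_columns=["en", "sw"])

args = Seq2SeqTrainingArguments(
    output_dir="finetuned-low-resource-mt",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=2e-5,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

In practice, fine-tuning of this kind depends on gathering enough high-quality parallel text, which is precisely the "deep data on low-resource languages" the article describes.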
AI and Lawmaking
AI Will Write Complex Laws - Lawfare, Nathan Sanders & Bruce Schneier, January 16, 2025
“AI can be used in each step of lawmaking, and this will bring various benefits to policymakers. It could let them work on more policies—more bills—at the same time, add more detail and specificity to each bill, or interpret and incorporate more feedback from constituents and outside groups. The addition of a single AI tool to a legislative office may have an impact similar to adding several people to their staff, but with far lower cost.”
AI and IR
America Is Winning the Race for Global AI Primacy—for Now - Foreign Affairs, Colin H. Kahl, January 17, 2025
“U.S. AI labs likely remain one or two years ahead at the frontier, especially since many not-yet-released models are closed-source and therefore harder for Chinese companies to emulate. And as long as scaling state-of-the-art computing power remains vital for frontier AI progress, U.S. companies will expand their lead. As DeepSeek’s CEO Liang Wenfeng has acknowledged, China’s difficulties competing with U.S. AI firms boil down to Washington’s ‘bans on shipments of advanced chips.’”