News That Caught Our Eye #40: January 10, 2025

Published by Dane Gambrell on January 10, 2025

AI News This Week: The new congressional rules package instructs officials in the U.S. House of Representatives to continue integrating AI into House operations. A paper co-authored by dozens of researchers, practitioners and technologists lays out a research agenda for using large language models to build healthier, more deliberative digital public squares. With the Trump administration poised to reshape Biden’s AI policies, experts discuss the implications for democratic governance. Read these stories and more in this week’s Reboot Democracy news on AI, democracy and governance.


In the news this week

News that caught our eye

New Rules Call for Ongoing AI Efforts in the House; Here’s What Lawmakers Should Do Next

on January 3, 2025 in Popvox Foundation

“The new House Rules package, adopted January 3, 2025, instructs House officials to ‘continue efforts to integrate artificial intelligence technologies into the operations and functions of the House.’ This language reflects the growing recognition of AI's transformative potential in governance and the proactive approach embraced by the House of Representatives since 2023.” This post outlines next steps House agencies and officials can consider in response to this directive. Read the full resolution here.

House Bipartisan Task Force on Artificial Intelligence Delivers Report

on December 17, 2024 in Committee on Science, Space, and Technology

“‘Because advancements in artificial intelligence have the potential to rapidly transform our economy and national security, Leader Jeffries and I established the Bipartisan Task Force on Artificial Intelligence to ensure America continues leading in this strategic arena. Developing a bipartisan vision for AI adoption, innovation, and governance is no easy task, but a necessary one as we look to the future of AI and ensure Americans see real benefits from this technology,’ said Speaker Mike Johnson.”

Governing AI

Apple urged to withdraw 'out of control' AI news alerts

Zoe Kleinman, Liv McMahon and Natalie Sherman on January 7, 2025 in BBC

“Apple is facing fresh calls to withdraw its controversial artificial intelligence (AI) feature that has generated inaccurate news alerts on its latest iPhones. The product is meant to summarise breaking news notifications but has in some instances invented entirely false claims… the technology was ‘out of control’ and posed a considerable misinformation risk.”

What Will AI Policy Look Like Under the Trump Administration?

Julia Edinger on January 6, 2025 in Government Technology

“The incoming presidential administration has indicated its intent to revoke President Joe Biden’s 2023 artificial intelligence executive order (EO), and the industry has mixed reactions as to what this will mean for AI work — leaning cautiously towards optimism… The senior director said he expects one area of the EO will remain a priority for the incoming administration: the focus on upskilling and reskilling the workforce for AI.”

Exit interview: FCC’s Jessica Rosenworcel discusses her legacy on cybersecurity, AI and regulation

Derek B. Johnson on January 3, 2025 in CyberScoop

“As we enter this new era with the possibilities of artificial intelligence — it can cause problems, it can also help us solve problems — I’m ultimately an optimist. But I do think that if you’re a consumer, a viewer or a listener and you are interacting with AI-generated stuff, you deserve to know. If that’s a synthetic voice or actor that you’re interacting with, you deserve to know. I think we need to change our legal and cultural norms to reflect that for us to make positive use of artificial intelligence going forward.”

AI for Governance

How Government May Use Generative AI in 2025 and Beyond

Chris Hein on January 6, 2025 in Government Technology

“As GenAI advances, state and local government agencies will begin to build internal AI assistants for their own employees. These AI assistants will use publicly available data, internal data and regulatory guidelines applied to what-if scenarios to explore all possible outcomes. Imagine a virtual sandbox where governments can test drive infrastructure changes — adding lanes, tweaking traffic lights, building bike paths — and model the best options for improving communities before breaking ground. This is GenAI at work, helping prepare for unintended consequences and assessing impact.”

DHS working with startup to launch AI-powered fitness app

Rebecca Heilweil on January 6, 2025 in FedScoop

“A fitness startup has been working with U.S. Customs and Border Protection since 2022 to develop an artificial intelligence-enabled physical education app for agency employees… The Volt Athletics system — which is supposed to use artificial intelligence to ‘help the DHS workforce improve its overall health and wellness, especially for personnel who operate in high-stress and dangerous conditions’ — was recently disclosed in an updated inventory for AI use cases.”

Minnesota’s expanded anti-fraud efforts include AI pilot project

Colin Wood on January 7, 2025 in StateScoop

“‘As payment integrity approaches have gone from retrospective years ago, to prospective in the near past here, to now with the tools that are available to preemptive, we must leverage these new AI capabilities to protect services and crucial funds,’ Tomes said. ‘State programs have long worked to address suspicious activity, and now we’re taking it further by enhancing our tools and adopting a more advanced approach to proactively strengthen the services we provide.’ Tomes said the AI project will use the same practices used in the private sector to detect and flag ‘unusual payment activity.’”

British AI startup with government ties is developing tech for military drones

Jasper Jolly on January 7, 2025 in The Guardian

“…Faculty Science has carried out testing of AI models for the UK government’s AI Safety Institute (AISI)... A spokesperson for Faculty said: ‘We help to develop novel AI models that will help our defence partners create safer, more robust solutions,’ adding that it has ‘rigorous ethical policies and internal processes’ and follows ethical guidelines on AI from the Ministry of Defence. The spokesperson said Faculty has a decade of experience in AI safety, including on countering child sexual abuse and terrorism.”

Via genAI pilot, CDAO exposes ‘biases that could impact the military’s healthcare system’

Brandi Vincent on January 3, 2025 in DefenseScoop

“... large language models essentially process and generate language for humans. They fall into the buzzy, emerging realm of generative AI. Broadly, that field encompasses disruptive but still-maturing technologies that can process huge volumes of data and perform increasingly ‘intelligent’ tasks — like recognizing speech or producing human-like media and code based on human prompts. These capabilities are pushing the boundaries of what existing AI and machine learning can achieve.”

AI and Problem Solving

Artificial intelligence in anti-corruption – a timely update on AI technology

Dieter Zinnbauer on January 7, 2025 in U4 Anti-Corruption Resource Centre

“AI has been successful in a number of ‘classic’ anti-corruption areas – procurement integrity, compliance, fraud detection and anti-money laundering. But AI has not gained public trust (it is difficult to explain exactly how AI works) and it has not been able to overcome challenges such as resource constraints, data quality, organisational resistance and digital divide issues. AI can also add inherited biases, and it has been responsible for erroneous outputs. However, there are some promising new uses for AI in anti-corruption work: remote sensing via satellites; and managing large-scale citizen consultations.”

Google is forming a new team to build AI that can simulate the physical world

Kyle Wiggers on January 6, 2025 in TechCrunch

“...the new modeling team will collaborate with and build on work from Google’s Gemini, Veo, and Genie teams to tackle ‘critical new problems’ and ‘scale models to the highest levels of compute.’ Gemini is Google’s flagship series of AI models for tasks like analyzing images and generating text, while Veo is Google’s own video generation model. As for Genie, it’s Google’s take on a world model — AI that can simulate games and 3D environments in real time. Google’s latest Genie model, previewed in December, can generate a massive variety of playable 3D worlds.”

AI and Public Engagement

Making Sense of Large-Scale Online Conversations

Angelo Carino on December 18, 2024 in Jigsaw

“...We’re excited to announce our Sensemaking tools, a new, open-source library for large-scale conversations. Still in ‘beta,’ our Sensemaking tools leverage Google’s industry-leading, publicly available Gemini models to categorize and summarize large-scale input into clear insights while retaining nuance. By automating the most complex and time-consuming aspects of analysis, we hope to make it possible for more communities to engage in meaningful, large-scale conversations and arrive at informed decisions.”

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.