News That Caught Our Eye #40: January 10, 2025

AI News This Week: Congressional rules package instructs officials in the U.S. House of Representatives to continue to integrate AI into House operations. A paper co-authored by dozens of researchers, practitioners and technologists lays out a research agenda for further developing the use of large language models to create more deliberative and healthier digital public squares. With the Trump administration poised to reshape Biden’s AI policies, experts discuss the implications for democratic governance. Read these stories and more in this week’s Reboot Democracy news on AI, democracy and governance.

Angelique Casem

Dane Gambrell


AI and Lawmaking

New Rules Call for Ongoing AI Efforts in the House; Here’s What Lawmakers Should Do Next - Popvox Foundation, January 3, 2025

“The new House Rules package, adopted January 3, 2025, instructs House officials to ‘continue efforts to integrate artificial intelligence technologies into the operations and functions of the House.’ This language reflects the growing recognition of AI's transformative potential in governance and the proactive approach embraced by the House of Representatives since 2023.” This post outlines next steps House agencies and officials can consider in response to this directive. Read the full resolution here.

House Bipartisan Task Force on Artificial Intelligence Delivers Report - Committee on Science, Space, and Technology, December 17, 2024

“‘Because advancements in artificial intelligence have the potential to rapidly transform our economy and national security, Leader Jeffries and I established the Bipartisan Task Force on Artificial Intelligence to ensure America continues leading in this strategic arena. Developing a bipartisan vision for AI adoption, innovation, and governance is no easy task, but a necessary one as we look to the future of AI and ensure Americans see real benefits from this technology,’ said Speaker Mike Johnson.”

Governing AI

What Will AI Policy Look Like Under the Trump Administration? - Government Technology, Julia Edinger, January 6, 2025

“The incoming presidential administration has indicated its intent to revoke President Joe Biden’s 2023 artificial intelligence executive order (EO), and the industry has mixed reactions as to what this will mean for AI work — leaning cautiously towards optimism… The senior director said he expects one area of the EO will remain a priority for the incoming administration: the focus on upskilling and reskilling the workforce for AI.”

Exit interview: FCC’s Jessica Rosenworcel discusses her legacy on cybersecurity, AI and regulation - CyberScoop, Derek B. Johnson, January 3, 2025

“As we enter this new era with the possibilities of artificial intelligence — it can cause problems, it can also help us solve problems — I’m ultimately an optimist. But I do think that if you’re a consumer, a viewer or a listener and you are interacting with AI-generated stuff, you deserve to know. If that’s a synthetic voice or actor that you’re interacting with, you deserve to know. I think we need to change our legal and cultural norms to reflect that for us to make positive use of artificial intelligence going forward.”

Apple urged to withdraw 'out of control' AI news alerts - BBC, Zoe Kleinman, Liv McMahon and Natalie Sherman, January 7, 2025 

“Apple is facing fresh calls to withdraw its controversial artificial intelligence (AI) feature that has generated inaccurate news alerts on its latest iPhones. The product is meant to summarise breaking news notifications but has in some instances invented entirely false claims… the technology was ‘out of control’ and posed a considerable misinformation risk.”

AI For Governance

How Government May Use Generative AI in 2025 and Beyond - Government Technology, Chris Hein, January 6, 2025 

“As GenAI advances, state and local government agencies will begin to build internal AI assistants for their own employees. These AI assistants will use publicly available data, internal data and regulatory guidelines applied to what-if scenarios to explore all possible outcomes. Imagine a virtual sandbox where governments can test drive infrastructure changes — adding lanes, tweaking traffic lights, building bike paths — and model the best options for improving communities before breaking ground. This is GenAI at work, helping prepare for unintended consequences and assessing impact.”

DHS working with startup to launch AI-powered fitness app - FedScoop, Rebecca Heilweil, January 6, 2025

“A fitness startup has been working with U.S. Customs and Border Protection since 2022 to develop an artificial intelligence-enabled physical education app for agency employees… The Volt Athletics system — which is supposed to use artificial intelligence to ‘help the DHS workforce improve its overall health and wellness, especially for personnel who operate in high-stress and dangerous conditions’ — was recently disclosed in an updated inventory for AI use cases.”

Minnesota’s expanded anti-fraud efforts include AI pilot project - StateScoop, Colin Wood, January 7, 2025

“‘As payment integrity approaches have gone from retrospective years ago, to prospective in the near past here, to now with the tools that are available to preemptive, we must leverage these new AI capabilities to protect services and crucial funds,’ Tomes said. ‘State programs have long worked to address suspicious activity, and now we’re taking it further by enhancing our tools and adopting a more advanced approach to proactively strengthen the services we provide.’ Tomes said the AI project will use the same practices used in the private sector to detect and flag ‘unusual payment activity.’”

British AI startup with government ties is developing tech for military drones - The Guardian, Jasper Jolly, January 7, 2025

“…Faculty Science has carried out testing of AI models for the UK government’s AI Safety Institute (AISI)... A spokesperson for Faculty said: ‘We help to develop novel AI models that will help our defence partners create safer, more robust solutions,’ adding that it has ‘rigorous ethical policies and internal processes’ and follows ethical guidelines on AI from the Ministry of Defence. The spokesperson said Faculty has a decade of experience in AI safety, including on countering child sexual abuse and terrorism.”

Via genAI pilot, CDAO exposes ‘biases that could impact the military’s healthcare system’ - DefenseScoop, Brandi Vincent, January 3, 2025

“... large language models essentially process and generate language for humans. They fall into the buzzy, emerging realm of generative AI. Broadly, that field encompasses disruptive but still-maturing technologies that can process huge volumes of data and perform increasingly ‘intelligent’ tasks — like recognizing speech or producing human-like media and code based on human prompts. These capabilities are pushing the boundaries of what existing AI and machine learning can achieve.”

AI and Problem Solving

Artificial intelligence in anti-corruption – a timely update on AI technology - U4 Anti-Corruption Resource Centre, Dieter Zinnbauer, January 7, 2025

“AI has been successful in a number of ‘classic’ anti-corruption areas – procurement integrity, compliance, fraud detection and anti-money laundering. But AI has not gained public trust (it is difficult to explain exactly how AI works) and it has not been able to overcome challenges such as resource constraints, data quality, organisational resistance and digital divide issues. AI can also add inherited biases, and it has been responsible for erroneous outputs. However, there are some promising new uses for AI in anti-corruption work: remote sensing via satellites; and managing large-scale citizen consultations.”

Google is forming a new team to build AI that can simulate the physical world - TechCrunch, Kylie Wiggers, January 6, 2025

“...the new modeling team will collaborate with and build on work from Google’s Gemini, Veo, and Genie teams to tackle ‘critical new problems’ and ‘scale models to the highest levels of compute.’ Gemini is Google’s flagship series of AI models for tasks like analyzing images and generating text, while Veo is Google’s own video generation model. As for Genie, it’s Google’s take on a world model — AI that can simulate games and 3D environments in real time. Google’s latest Genie model, previewed in December, can generate a massive variety of playable 3D worlds.”

AI and Public Engagement

AI and the Future of Digital Public Squares - arXiv, Beth Goldberg, Diana Acosta-Navas, Michiel Bakker, et al., December 2024

“The tools provided by platforms are often inadequate to the complex needs of moderators. For example, many community moderators are faced with large quantities of AI-generated ‘slop,’ which is swamping moderation queues and decreasing the utility of their community. Facebook Group moderators are forced to manually evaluate ‘join requests’ of potential new members, which often requires complicated judgment calls to evaluate the authenticity of the account (Kuo, Hernani, and Grossklags 2023). Moderators need tools to handle raids, generative AI, spam, and karma farming, among other challenges.”

Making Sense of Large-Scale Online Conversations - Medium, Jigsaw, December 18, 2024

“...We’re excited to announce our Sensemaking tools, a new, open-source library for large-scale conversations. Still in ‘beta,’ our Sensemaking tools leverage Google’s industry leading, publicly available Gemini models to categorize and summarize large-scale input into clear insights while retaining nuance. By automating the most complex and time consuming aspects of analysis, we hope to make it possible for more communities to engage in meaningful, large-scale conversations and arrive at informed decisions.” 


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.