News That Caught Our Eye #34 October 31, 2024

Published by Autumn Sloboda and Domenick Gaita

This Week in AI News: The U.S. Department of Labor released a blueprint to empower workers in the AI age, recommending transparency, employee input, and protection from AI surveillance, alongside training to ensure AI enhances job security and fair wages. The Financial Times highlighted AI's growing role in government, shifting focus from fears of “Big Robot Rebellions” to ethical concerns around AI-driven decision-making in finance, education, and law enforcement. A new report emphasized Big Tech's influence in AI policy-making, calling for more diverse voices, especially from startups innovating in health and infrastructure. Globally, AI Safety Institutes are emerging to set standards, while Hong Kong released guidelines for fintech, and Albuquerque, N.M., formed a working group for local AI policy. From ethical oversight to workplace rights, this week’s News That Caught Our Eye underscores AI's expanding influence across society.


In the news this week

AI and Problem Solving

L.A. Metro Enlists AI to Spot Hidden Weapons on Trains

Nathan Solis on October 24, 2024 in Government Technology

“Los Angeles will utilize AI-powered scanners at Union Station over the next month in an effort to stop passengers with hidden weapons from boarding the rails. Commuters descending to the A and B subway lines (formerly known as the Blue and Red lines) will enter into the testing ground for Metro's 30-day pilot program, which is set to go into effect Wednesday. The program arrives amid growing concern over passenger safety, with Metro recording an uptick in arrests this year for riders carrying concealed weapons. The roughly 6-foot-tall Evolv Technology scanners use artificial intelligence to pinpoint on a person's body where they could possibly be carrying a weapon, according to the company's website. All weapons are banned on the Metro system, and it is illegal to carry a concealed firearm without a permit in California.”

Read article

AI and Problem Solving

Energy Department is looking at using AI to help with its nuclear accelerators

Rebecca Heilweil on October 25, 2024 in Fedscoop

“The nuclear physics program in the Energy Department’s Science Office wants to use artificial intelligence to help control its advanced accelerators, according to a new solicitation posted to the agency’s website. The request for applications comes as the Energy Department continues to use artificial intelligence to advance its research endeavors, which includes deploying the technology at some of the world’s fastest supercomputers. The goal, overall, is to help reduce the time needed to conduct experiments. Specifically, the solicitation focuses on applications of artificial intelligence that could advance scientific discovery, including the use of artificial intelligence in digital twins, to ascertain and derive insights from larger datasets, and make advancements in autonomous control.”

Read article

AI and Problem Solving

This AI system makes human tutors better at teaching children math

Rhiannon Williams on October 28, 2024 in MIT Technology Review

“Researchers from Stanford University developed an AI system called Tutor CoPilot on top of OpenAI’s GPT-4 and integrated it into a platform called FEV Tutor, which connects students with tutors virtually. Tutors and students type messages to one another through a chat interface, and a tutor who needs help explaining how and why a student went wrong can press a button to generate suggestions from Tutor CoPilot. The researchers created the model by training GPT-4 on a database of 700 real tutoring sessions in which experienced teachers worked one-on-one with first- to fifth-grade students on math lessons, identifying the students’ errors and then working with them to correct the errors in such a way that they learned to understand the broader concepts being taught. From this, the model generates responses that tutors can customize to help their online students.”

Read article

AI and Problem Solving

Africa’s digital decade: AI upskilling and expanding speech technology

Matt Brittin on October 28, 2024 in The Keyword

Google has announced the integration of 15 African languages into Voice Search, Gboard, and Google Translate, enabling around 300 million more people to use their voices to interact with technology. This advancement utilizes AI-driven multilingual speech recognition to convert spoken language into text, significantly enhancing accessibility. The technology learns languages in a manner similar to human language acquisition, allowing for more natural communication with digital platforms. By leveraging AI in this way, Google aims to improve user interaction and engagement across diverse linguistic communities in Sub-Saharan Africa.

Read article

AI and Problem Solving

Introducing computer use, a new Claude 3.5 Sonnet, and Claude 3.5 Haiku

on October 22, 2024 in Anthropic

Anthropic has launched upgraded AI models, Claude 3.5 Sonnet and Claude 3.5 Haiku, enhancing capabilities in coding and tool use. The new public beta for "computer use" allows developers to instruct Claude to interact with computer interfaces like a human, facilitating complex tasks such as automating repetitive processes, building and testing software, and conducting open-ended research. Companies like Replit and DoorDash are exploring these capabilities, which promise significant improvements in efficiency and effectiveness for software development and UI navigation. Early feedback indicates substantial advancements in AI-powered coding and task execution, opening up new possibilities for application development.

Read article

Governing AI

Hong Kong unveils rules for ‘responsible’ AI use as it gets ahead of disruptive technology

Mia Castagnone on October 27, 2024 in South China Morning Post

Hong Kong has launched its first guidelines for responsible AI use and is supporting blockchain technology to keep up with fintech advancements. Financial Secretary Paul Chan said AI is a key focus and asked financial institutions to create plans for using it safely, including offering training. Hong Kong wants to be a link between international businesses and China, promoting AI while addressing risks like fraud and cyberattacks. A new payment system between Hong Kong and mainland China is set for 2025. The government is working with universities and banks like HSBC to grow Hong Kong’s AI industry.

Read article

Governing AI

Albuquerque, N.M., AI Working Group to Guide, Analyze Use

News Staff on October 23, 2024 in Government Technology

The Albuquerque City Council has approved a resolution to create an artificial intelligence working group aimed at developing an official city policy for AI use. This initiative positions Albuquerque as a pioneer in New Mexico for responsible AI governance. The working group, co-sponsored by Councilors Tammy Fiebelkorn and Dan Champine, will include city employees, community stakeholders, and civil rights advocates. It will evaluate current AI applications, provide training, and conduct a cost analysis. The Department of Technology and Innovation (DTI) will oversee the group, which must report back to the City Council within nine months.

Read article

Governing AI

Educause ’24: A Summary of Federal Guidance on AI

Abby Sourwine on October 28, 2024 in Government Technology

“Since ChatGPT came out in November 2022, the education world has taken mixed approaches to generative artificial intelligence, with policies ranging from prohibition to proactive embrace. At the Educause Annual Conference in San Antonio last week, members of the U.S. Department of Education’s Office of Educational Technology (OET) summarized a shortlist of written resources published by the federal government to help state and local education leaders sift through the noise.”

Read article

Governing AI

Building Worker Power in the AI Age: A Blueprint from DOL

Dane Gambrell, Beth Simone Noveck and Seth Harris on October 30, 2024 in Reboot Democracy

The U.S. Department of Labor’s new blueprint outlines how AI can empower workers, emphasizing transparency, worker input, and collective bargaining. Key recommendations include involving employees in AI-related decisions, disclosing AI’s use and data collection practices, and protecting workers' rights to organize without AI surveillance. The guidelines also encourage unions to negotiate AI’s role in protecting job security and fair wages, while employers invest in AI skills training to boost job adaptability. By collaborating, workers, unions, and employers can ensure AI fosters progress and fair workplace practices rather than deepening inequalities.

Read article

Governing AI

Beware the AI bureaucrats

Yuval Noah Harari on October 26, 2024 in Financial Times

A recent analysis highlights the growing significance of AI in bureaucratic systems, shifting attention from fears of a "Big Robot Rebellion" to its role in decision-making across finance, education, and law enforcement. While AI tools demonstrate efficiency and advanced data analysis, their integration poses risks, as seen with social media algorithms amplifying harmful content. The article emphasizes the importance of ethical oversight to manage AI's impact effectively, ensuring that its benefits to society are realized while mitigating potential drawbacks.

Read article

Governing AI

From Safety to Innovation: How AI Safety Institutes Inform AI Governance

Prithvi Iyer on October 25, 2024 in Tech Policy Press

Governments worldwide are setting up AI Safety Institutes (AISIs) to address AI risks and ensure responsible use. Seven major jurisdictions, including the US, UK, Japan, and EU, have established or are developing AISIs. A new report compares these institutes, highlighting three main characteristics of the first wave: a focus on safety, government leadership, and technical expertise. These institutes conduct research, set safety standards, and promote international cooperation. AISIs are focused on safety evaluations and best practices but lack regulatory authority. Some critics argue their narrow focus on safety could hinder innovation or duplicate existing efforts. Future AISIs in developing countries may emphasize innovation alongside safety, potentially adopting different governance models.

Read article

AI for Governance

South Korea leverages open government data for AI development

Si Ying Thian on October 27, 2024 in GovInsider

South Korea is leveraging open government data to enhance AI development in the private sector. The government has initiated AI training programs using publicly accessible data, fostering innovation among businesses. A notable example is TTCare, an application that analyzes symptoms in pets, trained on data from the government’s AI Hub. Additionally, synthetic data is being generated to protect privacy while providing valuable insights for policy research. With 87,000 public data sets available, South Korea continues to promote the use of AI to address various challenges, demonstrating its commitment to advancing technology through open data initiatives.

Read article

AI for Governance

White House orders Pentagon and intel agencies to increase use of AI

Gerrit de Vynck on October 24, 2024 in Washington Post

“The White House is directing the Pentagon and intelligence agencies to increase their adoption of artificial intelligence, expanding the Biden administration’s efforts to curb technological competition from China and other adversaries. The edict is part of a landmark national security memorandum published Thursday. It aims to make government agencies step up experiments and deployments of AI. The memo also bans agencies from using the technology in ways that ‘do not align with democratic values,’ according to a White House news release.”

Read article

AI for Governance

Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence

on October 24, 2024 in The White House

President Joe Biden has released an AI national security memorandum laying out a framework for agencies to harness the benefits of artificial intelligence. The memorandum also bolsters the role of the AI Safety Institute, positioning the United States to promote the use of artificial intelligence in a safe, secure, and trustworthy manner. Shifting from the domestic use of AI to the protection of United States AI technology from foreign threats, the memorandum introduces safeguards against foreign risks.

Read article

AI for Governance

AI’s future depends on who’s at the table — not just who’s in the Oval Office

Gaurab Bansal on October 28, 2024 in Fedscoop

“But the truth is, regardless of who wins the presidency, the essential questions for AI regulation will remain the same: Who will write the policies that propel AI forward? Who will set the guardrails in the public interest and provide the clarity that all markets need to thrive? Right now, Big Tech is the only voice in the room. In 2022, the top five technology companies with the largest lobbying presence spent a combined $76 million on lobbying efforts alone, according to a Responsible Innovation Labs analysis of OpenSecrets data. RIL also found that over the past five years, these companies spent an average of $69 million and employed an average of 92 lobbyists. From 2022 to 2023, AI lobbying increased by 185%. At the other end of the spectrum are America’s startup founders, who you’ll have to forgive if they’re not hyper focused on politics. They are working right now to build the next great American companies powering — and powered by — AI. They are grinding tirelessly to win. They are creating AI applications to tackle health, disaster preparedness, agriculture, and critical infrastructure, among other areas of American economic strength. In other domains, they are exploring space and working toward energy security. For AI regulation — and tech policy — to work, American startup founders need to be at the table.”

Read article

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.