News That Caught Our Eye #30: October 3, 2024

Published by Autumn Sloboda, Domenick Gaita on October 3, 2024

This week in AI news: California Governor Gavin Newsom vetoed a controversial AI safety bill, citing concerns about hindering innovation, while a Stanford-led project argued for rethinking institutional checks and balances in the AI age. From new bipartisan AI legislation for pandemic preparedness to AI-powered inclusive hiring tools, this week's News That Caught Our Eye dives into AI's expanding influence on politics and governance.


In the news this week

AI and Public Engagement

Democracy Is Broken: Time for Data-Driven Decisions

Ahmed Bouzid on September 25, 2024 in CMSWire Editorial

“Why is it that commercial companies, using platforms like Facebook, Instagram, Twitter and LinkedIn, can gain deep insights into what their customers and potential buyers want, while we continue to rely on the outdated mechanism of extrapolating citizens' desires through elected representatives? The answer lies in two fundamental differences: motivation and methodology. First, businesses are motivated by a direct need to satisfy their customers and to expand their market shares. Failure to do so results in lost revenue and stagnant growth, neither of which is a good outcome for C-suite executives, the board that hires or fires them and the investors who finance these companies. This market-driven accountability forces companies to continuously innovate and adapt to consumer preferences. On the other hand, political parties and elected officials often cater more to the needs of their donors, lobbyists and pressure groups than to the electorate. The exigencies of political survival often trump the needs and desires of ordinary citizens.”

Read article

Want AI that flags hateful content? Build it.

Scott J. Mulligan on September 30, 2024 in MIT Technology Review

“The challenge asks for two different models. The first, a task for those with intermediate skills, is one that identifies hateful images; the second, considered an advanced challenge, is a model that attempts to fool the first one. ‘That actually mimics how it works in the real world,’ says Chowdhury. ‘The do-gooders make one approach, and then the bad guys make an approach.’ The goal is to engage machine-learning researchers on the topic of mitigating extremism, which may lead to the creation of new models that can effectively screen for hateful images.”

Read article

AI for Governance

Amid Concern, Police in Maine Test AI to Write Reports

Morgan Womack on September 30, 2024 in Government Technology

“Police agencies in Maine are dipping into the world of artificial intelligence, they say, to help them save on hours of paperwork so they can do more policing. But experts who have studied this technology question whether it will actually save time, or if it will only bog down and raise more distrust in the criminal justice system. Lt. James Estabrook demonstrated the potentially time-saving new tool in the parking lot of the Cumberland County Sheriff's Office in Portland this month. He hopped out of his cruiser, clicked a button on his body camera and walked through a fake traffic stop scenario. After he pretended to issue a warning to his colleague for speeding, he ended the body camera recording with the click of a button. But behind the lens, the footage was being sent to the cloud to be analyzed by AI which, within seconds, produces the first draft of a police report.”

Read article

Rethinking ‘Checks and Balances’ for the A.I. Age

Steve Lohr on September 24, 2024 in The New York Times

“In the late 1780s, shortly after the Industrial Revolution had begun, Alexander Hamilton, James Madison and John Jay wrote a series of 85 spirited essays, collectively known as the Federalist Papers. They argued for ratification of the Constitution and an American system of checks and balances to keep power-hungry ‘factions’ in check. A new project, orchestrated by Stanford University and published on Tuesday, is inspired by the Federalist Papers and contends that today is a broadly similar historical moment of economic and political upheaval that calls for a rethinking of society’s institutional arrangements. In an introduction to its collection of 12 essays, called the Digitalist Papers, the editors overseeing the project, including Erik Brynjolfsson, director of the Stanford Digital Economy Lab, and Condoleezza Rice, secretary of state in the George W. Bush administration and director of the Hoover Institution, identify their overarching concern. ‘A powerful new technology, artificial intelligence,’ they write, ‘explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions.’”

Read article

Bipartisan Senate bill seeks to leverage AI for new pandemic preparedness program

Madison Alder on September 26, 2024 in FedScoop

“The bill would establish a program called ‘MedShield’ that would use AI to protect against future pandemics. The Department of Health and Human Services would be required to implement a pandemic preparedness and response program that leverages artificial intelligence under new bipartisan Senate legislation. That bill (S. 5222), which was introduced Wednesday and announced Thursday, would call on the secretary of HHS to establish a new program called ‘MedShield’ that would protect against future pandemics by aiding collaboration between government and the private sector and use AI in several areas, including detecting pathogens and developing vaccines.”

Read article

Impact Report 2024

State of New Jersey on September 30, 2024 in Office of Innovation

“The report catalogs six years of efforts using digital technology and now AI to deliver better services to New Jerseyans: ‘Effective government service matters. When we turn to State government, it may be in a time of great uncertainty — such as when facing unemployment, a lack of stable housing, or a need to access nutrition for your children. How the government delivers in those moments is critical...Government provides the infrastructure and incentives for positive progress, as we in New Jersey are doing now through Business.NJ.gov and permit modernization efforts, and in the responsible use of generative artificial intelligence.’”

Read article

S.F. CIO Makstman on City’s Sprawling Technology, Use of AI in Government

Tribune News Service on September 26, 2024 in Government Technology

“Michael Makstman, originally from Ukraine, became San Francisco’s Chief Information Officer in 2023 after years of experience in cybersecurity. He manages a $140 million department and focuses on digital transformation, including modernizing legacy tech and improving city services. He advocates for a cautious approach to AI, testing applications without rushing its deployment. Makstman emphasizes collaboration with the city's 52 IT departments and encourages young technologists to join government efforts to solve complex, long-term problems.”

Read article

AI and International Relations

U.S. Development Agencies Should Embrace AI to Transform the U.S. Africa Relationship

Ramsey C. Day on September 25, 2024 in Carnegie Endowment for International Peace

“Artificial intelligence (AI) and the broader digital transformation are rapidly shaping the future of Africa with profound implications for U.S. national strategic, security, and economic interests. As a result, U.S. policymakers should elevate Africa’s weight within the U.S. foreign policy development process and AI should take center stage. This shift is in both America’s stated interests and the interests of African nations. If the United States does not meaningfully engage in shaping the continent’s digital landscape and AI ecosystem, then the world’s malign actors will.”

Read article

The tension between AI export control and U.S. AI innovation

John Villasenor on September 24, 2024 in Brookings

“Artificial intelligence (AI) raises an acute set of challenges with respect to export control. On the one hand, AI opens the door to potentially transformative military technologies. The United States has a strong interest in ensuring that U.S.-developed AI technology is not used by geopolitical rivals in ways that threaten national security. A key framework to further that interest is the Export Control Reform Act of 2018 (ECRA). The ECRA gives the Department of Commerce the authority to promulgate new export control rules regarding AI technologies. On the other hand, the more expansive a system of export control restrictions on AI becomes, the more cumbersome and impractical it is to enforce. In addition, adopting overly broad new export control restrictions aimed at blocking cloud-based access to AI computation by geopolitical rivals also risks impairing AI research at U.S. universities. The result would be a less robust and innovative U.S. AI ecosystem.”

Read article

AI and Elections

Artificial intelligence (AI) in action: A preliminary review of AI use for democracy support

Grahm Tuohy-Gaydos on September 1, 2024 in Westminster Foundation for Democracy

“This policy paper provides a working definition of AI for Westminster Foundation for Democracy (WFD) and the broader democracy support sector. It then provides a preliminary review of how AI is being used to enhance democratic practices worldwide, focusing on several themes including: accountability and transparency, elections, environmental democracy, inclusion, openness and participation, and women’s political leadership. The paper also highlights potential risks and areas of development in the future. Finally, the paper shares five recommendations for WFD and democracy support organisations to consider advancing their ‘digital democracy’ agenda.”

Read article

House panel moves bill that adds AI systems to National Vulnerability Database

Derek B. Johnson on September 25, 2024 in Cyberscoop

“A bill that would push the National Institute of Standards and Technology to set up a formal process for reporting security vulnerabilities in AI systems sailed through a House committee Wednesday. The AI Incident Reporting and Security Enhancement Act, introduced by Reps. Deborah Ross, D-N.C., Jay Obernolte, R-Calif., and Don Beyer, D-Va., was approved via voice vote by the House Science, Space and Technology Committee. It would direct NIST to add AI systems to the National Vulnerability Database, the federal government’s centralized repository for tracking cybersecurity vulnerabilities in software and hardware. It would also require the agency to consult with other federal agencies, like the Cybersecurity and Infrastructure Security Agency, the private sector, standards organizations and civil society groups to establish common definitions, terminology and standardized reporting rules for AI security incidents.”

Read article

Governing AI

California governor vetoes contentious AI safety bill

David Shepardson on September 30, 2024 in Reuters

“California Governor Gavin Newsom on Sunday vetoed a hotly contested artificial intelligence safety bill after the tech industry raised objections, saying it could drive AI companies from the state and hinder innovation. Newsom said the bill ‘does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data’ and would apply ‘stringent standards to even the most basic functions — so long as a large system deploys it.’ Newsom said he had asked leading experts on generative AI to help California ‘develop workable guardrails’ that focus ‘on developing an empirical, science-based trajectory analysis.’ He also ordered state agencies to expand their assessment of the risks from potential catastrophic events tied to AI use.”

Read article

US Department of Labor announces framework to help employers promote inclusive hiring as AI-powered recruitment tools’ use grows

Office of Disability Employment Policy on September 24, 2024 in U.S. Department of Labor

“The U.S. Department of Labor today announced the publication of the AI & Inclusive Hiring Framework, a new tool designed to support the inclusive use of artificial intelligence in employers’ hiring technology and increase benefits to disabled job seekers. Published by the Partnership on Employment & Accessible Technology, the framework will help employers reduce the risks of creating unintentional forms of discrimination and barriers to accessibility as they implement AI hiring technology. Funded by the department’s Office of Disability Employment Policy, the initiative will also help workers and job seekers navigate the potential benefits and challenges they may face when encountering AI-enabled technologies.”

Read article

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.