The Senate’s Bipartisan AI Working Group has released twenty pages of recommendations for AI investment in its report on “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate.”

As part of a yearlong review, the group hosted nine bipartisan AI Insight Forums in the fall of 2023, on topics ranging from “High Impact Uses of AI” and “Elections and Democracy” to “Privacy and Liability” and “National Security.” More than 150 area experts participated in the forums. 

Based on the nine issue areas, the Senate AI Working Group recommends a robust investment strategy to enhance AI research, development, and deployment and to maintain American competitiveness in artificial intelligence technologies. Funding would be deployed through a variety of statutory means, including legislation that has already passed as well as bills still pending. The strategy’s kitchen-sink inclusion of a wide array of mechanisms, programs, and agencies (DOE, DOC, NSF, NIST, NIH, and NASA) might turn out to be a bug rather than a feature if coordination across agencies falters and attention is spread too thinly.

The strategy sets an annual funding target of at least $32 billion, providing for work on biotechnology, advanced computing, robotics, and AI-ready data initiatives. Specific programs under the CHIPS and Science Act will receive support, such as the NSF Directorate for Technology, Innovation, and Partnerships, regional tech hubs, and microelectronics programs. The plan also includes funding for AI Grand Challenges modeled after successful DARPA initiatives, aimed at transforming science, engineering, and medicine. Last week’s introduction of the AI Grand Challenges Act is designed to advance this goal.

Much of the strategy focuses on national security. The recommendations emphasize AI advancements within the defense sector, highlighting investments in NNSA testbeds, AI-enhanced threat mitigation, and DARPA’s AI projects. There is a focus on developing secure algorithms for autonomous DOD platforms, improving Combined Joint All-Domain Command and Control (CJADC2), and enhancing AI tools for military operations. The recommendations also call for increased supercomputing and AI capacity within the DOD and collaboration with allies on integrated AI capabilities. Further funding is suggested for NIST’s AI testing and evaluation infrastructure, including the U.S. AI Safety Institute.

Of note in the strategy are recommendations to invest in upskilling the private-sector workforce in AI, as well as upskilling federal employees to “maximize the beneficial use of AI.” Unfortunately, the recommendations are silent about upskilling state and local public-sector professionals.

The push to fund the National AI Research Resource (NAIRR) and expand the National AI Research Institutes to involve all 50 states by encouraging passage of the CREATE AI Act (S. 2714) is perhaps one of the most important recommendations. By recognizing that data and innovation are widely distributed, the strategy acknowledges the important role of states in the AI ecosystem. To spur AI research and development, the CREATE AI Act would provide:

  • Computational resources, including an open-source software environment and a programming interface providing structured access to AI models.

  • Data, including curated datasets of user interest and an AI data commons.

  • Educational tools and services, including educational materials, technical training, and user support.

  • AI testbeds, including a catalog of open AI testbeds and a collaborative project with the National Institute of Standards and Technology.

By expanding the focus to all fifty states, the strategy recognizes the importance of collaboration to spur innovation. 

Unfortunately, the “Elections and Democracy” category consists of only two short paragraphs, short enough to include in their entirety here:

“The AI Working Group encourages the relevant committees and AI developers and deployers to advance effective watermarking and digital content provenance as it relates to AI-generated or AI-augmented election content. The AI Working Group encourages AI deployers and content providers to implement robust protections in advance of the upcoming election to mitigate AI-generated content that is objectively false, while still protecting First Amendment rights.

The AI Working Group acknowledges the U.S. Election Assistance Commission (EAC) for its work on the AI Toolkit for Election Officials, and the Cybersecurity and Infrastructure Security Agency (CISA) for its work on the Cybersecurity Toolkit and Resources to Protect Elections, and encourages states to consider utilizing the tools EAC and CISA have developed.”

The strategy’s treatment of elections focuses exclusively on risks and is completely silent about the myriad ways in which we could invest in AI to make elections more secure, improve the fairness of redistricting, and expand access to campaigning for less well-funded candidates.

Ahead of a contentious election cycle, it seems unlikely that these few sentences address the full gamut of democracy-related opportunities and concerns raised in the corresponding “Elections and Democracy” AI Insight Forum.

Here are some additional highlights:

  • Broad funding for an all-of-government “AI-ready data” initiative, and direction for research in “foundational trustworthy AI topics, such as transparency, explainability, privacy, interoperability, and security”

  • Providing local election assistance funding “to support AI readiness and cybersecurity through the Help America Vote Act (HAVA) Election Security grants”

  • Supporting a U.S. Comptroller General report “to identify any significant federal statutes and regulations that affect the innovation of artificial intelligence systems, including the ability of companies of all sizes to compete in artificial intelligence”

  • Recommending “developing legislation to establish a coherent approach to public-facing transparency requirements for AI systems”

  • Using public-private partnerships to explore mechanisms for deterring “the use of AI to perpetrate fraud and deception, particularly for vulnerable populations such as the elderly and veterans”

  • Developing “an analytical framework that specifies what circumstances would warrant a requirement of pre-deployment evaluation of AI models”

 

Read the full document for the entire list of recommended policies:

Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate

 
