If you have an item that we should include in this news download, or a source we should review for future items, please email me at [email protected].
Booker, Rounds, Heinrich Announce Bipartisan AI Grand Challenges Act
Senator Booker (April 30, 2024)
Release: “Today, U.S. Senators Cory Booker (D-NJ), Mike Rounds (R-SD), and Martin Heinrich (D-NM) announced the introduction of the AI Grand Challenges Act, a bipartisan bill to harness the promise of artificial intelligence to solve complex problems across a range of sectors, including health, energy, environment, national security, materials science, and cybersecurity – as well as address AI system-specific challenges like bias mitigation, content provenance, and explainability. The bill directs the National Science Foundation (NSF) to establish an AI Grand Challenges Program and administer prize competitions – with $1 million minimum prizes – to incentivize researchers, entrepreneurs, and innovators to harness AI to address specific and measurable challenges to benefit the United States and serve the public good.” Read the full press release for more details.
Ohio uses AI to eliminate unnecessary words in state administrative code
Axios (April 29, 2024)
Ohio Lt. Gov. Jon Husted began enlisting AI tools to cut repetitive language and outdated regulations from the state's administrative code two years before ChatGPT was released to the public. Husted's AI-enabled edits “removed about 2.2 million of those words, he said, including thousands of pages of rules for lottery games that haven't been played in decades and repetitive language that made it hard to understand the state's building and fire codes.”
CISA unveils guidelines for AI and critical infrastructure
FedScoop (April 29, 2024)
Fulfilling an obligation under Biden’s October executive order on AI, the Cybersecurity and Infrastructure Security Agency has released guidelines addressing both the opportunities and the potential weaponization of AI within critical infrastructure. Spanning 16 sectors, the guidelines instruct “operators and owners of critical infrastructure to govern, map, measure, and manage their use of the technology, incorporating the National Institute of Standards and Technology’s AI risk management framework.”
Digital Twin Helps Raleigh, N.C., Foresee, Combat Urban Heat
GovTech (April 26, 2024)
Using a combination of GIS technology and AI systems, the city of Raleigh has upgraded its ability to track heat mitigation and to inform three ongoing initiatives: the Cool Roadways Pilot Project, Street Tree Equity, and Green Stormwater Infrastructure. With these higher-fidelity models of current urban heat conditions, the city’s IT department hopes further adoption will prove a powerful tool for adapting to climate change at the local level.
Google Introduces New AI Training Course
Forbes (April 26, 2024)
Offered on Coursera, the new training course aims to upskill members of the workforce on foundational AI uses, tools, and best practices. The “Google AI Essentials” course is designed and taught by AI experts at the company, will cost $49, and combines videos, readings, and interactive exercises leading to a certificate. Registrants will learn how to write effective prompts for using generative AI tools to brainstorm ideas, plan events, speed up daily work, and make informed decisions.
ACLU seeks AI records from NSA, Defense Department in new lawsuit
FedScoop (April 26, 2024)
In a new complaint filed under the Freedom of Information Act, the ACLU is seeking the disclosure of AI usage records from the National Security Agency and the Department of Defense, arguing that transparency from the agencies about their integration of AI technology and their plans for the future is “critical to allowing members of the public to participate in the development and adoption of appropriate safeguards for these society-altering systems.” Ultimately, the ACLU wants to prevent the public from being further “left in the dark.”
DHS launches safety and security board focused on AI and critical infrastructure
FedScoop (April 26, 2024)
As the Department of Homeland Security ramps up its internal focus on AI, the agency on Friday announced the formation of its new Artificial Intelligence Safety and Security Board. The board “includes representatives of major technology companies, including OpenAI CEO Sam Altman and Alphabet CEO Sundar Pichai, as well as experts focused on artificial intelligence and civil rights… [and] leaders of companies focused on computer chips, like Lisa Su of Advanced Micro Devices and Jensen Huang, president and CEO of NVIDIA.” This collaborative effort between industry, civil society, and academia will, ideally, produce guidelines that promote innovation as much as caution.
‘I’m unable to’: How generative AI chatbots respond when asked for the latest news
Reuters Institute (April 25, 2024)
In new research on AI and journalism from the Reuters Institute, most generative AI chatbots were found to be resistant to providing the latest news from specific outlets. The outputs were largely not news-like, raising questions about how to ensure that chatbots don’t surpass search engines as the public’s main source for news inquiries – at least until bots can match engines on both recency and relevancy. The study also notes the potential for change as newsrooms and AI developers continue to negotiate content licensing deals.
Consumers Know More About AI Than Business Leaders Think
BCG (April 24, 2024)
Your assumptions about consumer enthusiasm for AI are probably wrong. Despite the popular belief that AI interest is confined to industry and academia, this recent BCG survey of consumers found that “people are more knowledgeable and excited about AI than you might think” – in fact, 75% of US consumers are aware of ChatGPT, and respondents broadly appreciated the tools for their comfort, customization, and convenience. Check out the link for more specific data and insights.
Artificial Intelligence Legislative Outlook: Spring 2024 Update
R Street (April 24, 2024)
R Street has put together another AI legislative outlook, summarizing many of the proposed federal bills and legislative frameworks that would govern algorithmic systems and processes across the nation. The article examines three comprehensive AI governance frameworks, noting that many proposals blend “hard” and “soft” regulatory techniques. Particularly insightful is the author's account of how regulators are collaborating with two Commerce Department agencies: the National Telecommunications and Information Administration (NTIA) and the National Institute of Standards and Technology (NIST).
AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance
Doshi-Velez et al. (April 23, 2024)
From researchers at Harvard University and University College London, this noteworthy report examines how the public sector uses AI and regulates AI bias. After examining jurisdictions with mature AI regulations protecting marginalized groups, the authors identify “three key pitfalls around expertise, risk frameworks and transparency, that can decrease the efficacy of regulations aimed at government AI use and suggest avenues for improvement.”
UAE tech minister: AI will be ‘the new lifeblood’ for governments and the private sector
Atlantic Council (April 22, 2024)
At an Atlantic Council Front Page event on Friday, the UAE minister of state for artificial intelligence, digital economy, and remote work applications emphasized the necessity and expectation of more AI ministers around the world in the coming years. He also spoke to the UAE’s ambitions and plans to become a global leader in artificial intelligence, citing Microsoft’s recent $1.5 billion investment in the UAE’s G42 venture. Watch the event for the full comments.
How Tech Giants Cut Corners to Harvest Data for A.I.
The New York Times (April 8, 2024)
Revisit this comprehensive reporting from several weeks ago on how the largest AI developers are skirting regulations and ignoring internal and partner policies to scrape as much data as possible for training AI models. This context matters as we watch the same corporations strike deals with government agencies and invest in new grant programs: it shows how power players wield their influence and which avenues regulators should target to protect data. In the desperate hunt for digital data to further advance AI, ethical watchdogs are more important than ever.