If you have an item that we should include in this news download, or a source we should review for future items, please email [email protected].
Misunderstanding Democratic Backsliding – Journal of Democracy, Vol. 35, No. 3, by Thomas Carothers and Brendan Hartnett, July 3, 2024.
-
“One of the most common explanations of the ongoing wave of global democratic backsliding is that democracies are failing to deliver adequate socioeconomic goods to their citizens, leading voters to forsake democracy and embrace antidemocratic politicians who undermine democracy once elected. Yet a close look at twelve important cases of recent backsliding casts doubt on this thesis, finding that while it has some explanatory power in some cases, it has little in others, and even where it applies, it requires nuanced interpretation. Backsliding is less a result of democracies failing to deliver than of democracies failing to constrain the predatory political ambitions and methods of certain elected leaders. Policymakers and aid providers seeking to limit backsliding should tailor their diplomatic and aid interventions accordingly.”
Participation and Transparency in AI System Design and Integration – Cornell University ProQuest Dissertations & Theses, by Fernando Alonso Delgado, May 2024.
-
“As AI systems proliferate across various institutions and organizations, questions of stakeholder participation and algorithmic transparency grow in importance and urgency. Yet very few in-depth empirical studies of real-world AI design and governance processes exist that we can draw lessons from for informing the future state of practice. In this dissertation, I present an in-depth qualitative analysis of the design processes and transparency practices related to Technology-Assisted Review, or TAR: an AI-driven workflow that has been in use in the U.S. civil justice system for over a decade. Through extensive interviews with computer scientists, lawyers, and judges, as well as archival analysis of government research and U.S. civil court documents, I uncover an unprecedented model for AI participatory design previously unrecognized in the literature.”
Artificial Intelligence Gives Weather Forecasters a New Edge – New York Times, by William J. Broad, July 29, 2024.
-
“In early July, as Hurricane Beryl churned through the Caribbean, a top European weather agency predicted a range of final landfalls, warning that Mexico was most likely. The alert was based on global observations by planes, buoys and spacecraft, which room-size supercomputers then turned into forecasts. That same day, experts running artificial intelligence software on a much smaller computer predicted landfall in Texas. The forecast drew on nothing more than what the machine had previously learned about the planet’s atmosphere. Four days later, on July 8, Hurricane Beryl slammed into Texas with deadly force, flooding roads, killing at least 36 people and knocking out power for millions of residents. In Houston, the violent winds sent trees slamming into homes, crushing at least two of the victims to death. The Texas prediction offers a glimpse into the emerging world of A.I. weather forecasting, in which a growing number of smart machines are anticipating future global weather patterns with new speed and accuracy.”
Feds’ AI Wildfire Detection Program Tested in Boulder, Colo. – Government Technology, by Elise Schmelzer, July 30, 2024.
-
“A new artificial intelligence program will help identify wildfires as small as an acre by scanning images taken by weather satellites orbiting about 22,000 miles above the Earth’s surface. The AI program, developed by the National Oceanic and Atmospheric Administration and recently tested in Boulder, could dramatically cut the amount of time between identifying a fire and responding — minutes and hours that are critical to containing a blaze. Called the Next Generation Fire System, NOAA officials say it can process the deluge of data from the satellites — which capture images as frequently as every 30 seconds — and detect heat from fires smaller than a football field. The program then flags potential new fires to a dashboard so humans can check the images and verify the existence of a fire.”
White House says agencies hired 200 AI experts so far through governmentwide ‘talent surge’ – Federal News Network, by Jory Heckman, July 29, 2024.
-
“...DHS announced last Friday that it has onboarded its first cohort of 15 AI experts from the private and public sectors to serve in its AI Corps. Among their duties, AI Corps members are working with DHS’ Supply Chain Resilience Center to determine how AI could be used to forecast the impacts of critical supply chain disruptions to public safety and security. The DHS AI Corps is looking at how generative AI could help the department’s Homeland Security Investigations department combat fentanyl, human trafficking, child exploitation, and other criminal networks…The White House Office of Science and Technology Policy earlier this month outlined plans to spend $100 million to train new AI experts and bring a steady pipeline of them into government service.”
Brazil proposes $4 billion AI investment plan – Reuters, July 30, 2024.
-
“Brazil's government unveiled on Tuesday a 23 billion reais ($4.07 billion) proposal for an artificial intelligence (AI) investment plan aimed at developing sustainable and socially-oriented technologies….Brazil, the largest economy in Latin America, wants to achieve technological autonomy and competitiveness in the AI sector, aiming for what it called ‘national sovereignty’ instead of a reliance on imported AI tools from other countries. The proposed investment plan foresees resources for ‘immediate impact initiatives’ in sectors such as public health, agriculture, environment, business and education. Many of those initiatives include the development of AI systems to facilitate [customer] service and other operational procedures, a government presentation showed.”
New York automation bill would limit state agency use of AI – StateScoop, by Keely Quinlan, July 18, 2024.
-
“New York state lawmakers passed a bill last month to limit how state agencies can use artificial intelligence in decision-making processes, becoming the first state to pass such legislation. The bill, known as the ‘Legislative Oversight of Automated Decision-making in Government Act,’ or LOADinG Act, passed both houses last month. If allowed by the governor, it will require state agencies to publicly disclose when they use software powered by AI or use automated decision-making, including for systems already in use. It would also require state agencies using AI to do so with direct human review and oversight, and to generate a report for the governor every two years on how they use such technologies. The bill would also prohibit state agencies from replacing government workers with AI systems and require agencies to gain approval before using any automated decision-making system.”