News That Caught Our Eye #36: November 14, 2024

This Week in AI News: A Boston Globe letter to the editor urges Massachusetts to adopt AI tools to support legislative transparency. New Jersey’s State AI Task Force published its report recommending actions the Garden State can take to encourage the ethical and effective use of AI to improve government services and the resident experience. New reports spotlight AI’s transformative potential in philanthropy and AI’s growing influence in healthcare and justice. The UK has rolled out a new AI assurance platform to promote responsible use, while the UAE is harnessing AI to fast-track national development. From public service delivery to national security, this week’s roundup collects stories about the opportunities and challenges that arise as AI becomes more deeply integrated into governance and public policy.

By Autumn Sloboda, Domenick Gaita, and Dane Gambrell


EVENT: Join us November 19 to discuss “Our Biggest Fight” with Frank McCourt – Reboot Democracy, by Autumn Sloboda, November 12, 2024

Join Reboot Democracy on Tuesday, November 19, 2024 at 5 p.m. ET for a virtual conversation with Frank H. McCourt, Jr. about his new book Our Biggest Fight: Reclaiming Liberty, Humanity, and Dignity in the Digital Age, how Big Tech created a predatory and exploitative internet, and how we can reclaim the internet for democracy. McCourt will talk with Beth Simone Noveck, Director of the Burnes Center for Social Change and the Governance Lab. The event will be held on Zoom; click here to register.

 

AI for Governance

Mass. should follow Calif.’s lead, use AI to shed light on lawmakers - Reboot Democracy, By Beth Simone Noveck, November 11, 2024

Following criticism of Massachusetts' legislative opacity and a vote granting the state auditor oversight powers, this letter to the Boston Globe argues that the state should adopt California's successful AI-driven legislative transparency model. The system could be replicated in Massachusetts for about $50K, potentially transforming the state's legislature from one of the least transparent in the country to one of the most.

 

[Press Release] New Jersey Releases Artificial Intelligence Task Force Report – By the Office of New Jersey Governor Phil Murphy, November 12, 2024

This week, New Jersey’s State Artificial Intelligence Task Force published its report to Governor Phil Murphy outlining actions the Garden State can take to encourage the ethical and effective use of emerging technologies. The report recommends actions that aim to create economic opportunities for residents and businesses, encourage ethical use of AI technologies, promote equitable outcomes, support public and private workforces, and improve government services and the resident experience. The report features free AI skills training developed for the state by InnovateUS, as well as the results from a nation-leading effort to survey New Jersey’s workforce, residents, institutions, and businesses on their views on AI technologies. 

 

The Routledge Handbook of Artificial Intelligence and Philanthropy - Routledge International Handbooks, Giuseppe Ugazio and Milos Maricic, November 6, 2024

“The Routledge Handbook of Artificial Intelligence and Philanthropy acts as a catalyst for the dialogue between two ecosystems with much to gain from collaboration: artificial intelligence (AI) and philanthropy. Bringing together leading academics, AI specialists, and philanthropy professionals, it offers a robust academic foundation for studying both how AI can be used and implemented within philanthropy and how philanthropy can guide the future development of AI in a responsible way. The contributors to this Handbook explore various facets of the AI‑philanthropy dynamic, critically assess hurdles to increased AI adoption and integration in philanthropy, map the application of AI within the philanthropic sector, evaluate how philanthropy can and should promote an AI that is ethical, inclusive, and responsible, and identify the landscape of risk strategies for their limitations and/or potential mitigation. These theoretical perspectives are complemented by several case studies that offer a pragmatic perspective on diverse, successful, and effective AI‑philanthropy synergies. As a result, this Handbook stands as a valuable academic reference capable of enriching the interactions of AI and philanthropy, uniting the perspectives of scholars and practitioners, thus building bridges between research and implementation, and setting the foundations for future research endeavors on this topic.”

 

AI And Data Science for Public Policy - LSE Public Policy Review, Kenneth Benoit, November 4, 2024

“Artificial intelligence (AI) and data science are reshaping public policy by enabling more data-driven, predictive, and responsive governance, while at the same time producing profound changes in knowledge production and education in the social and policy sciences. These advancements come with ethical and epistemological challenges surrounding issues of bias, transparency, privacy, and accountability. This special issue explores the opportunities and risks of integrating AI into public policy, offering theoretical frameworks and empirical analyses to help policymakers navigate these complexities. The contributions explore how AI can enhance decision-making in areas such as healthcare, justice, and public services, while emphasizing the need for fairness, human judgment, and democratic accountability. The issue provides a roadmap for harnessing AI’s potential responsibly, ensuring it serves the public good and upholds democratic values.”

 

Navigating Generative AI in Government - Business of Government, By Alex Richter

The IBM Center for The Business of Government has released a report exploring how generative AI can enhance government operations. The report argues that generative AI can improve decision-making, efficiency, and public service delivery, and it outlines 11 strategic pathways for government agencies to implement generative AI effectively, including adopting ethical AI practices, developing adaptive governance models, investing in data infrastructure, and providing employee training. It emphasizes public engagement and transparency as essential to responsible AI deployment and highlights the potential of AI to complement human skills in collaborative processes.

 

OpenAI further expands its generative AI work with the federal government - Fedscoop, By Rebecca Heilweil, November 4, 2024

Federal agencies are increasingly adopting ChatGPT Enterprise, a generative AI technology from OpenAI. Recent contracts include purchases by the Internal Revenue Service (IRS), which acquired 150 licenses for the Department of the Treasury, and NASA, which has used OpenAI tools since last year and purchased an annual license for the platform this past summer. The Los Alamos National Laboratory and the National Gallery of Art have also integrated ChatGPT Enterprise into their operations. On the defense side, OpenAI has partnered with the Air Force Research Laboratory to experiment with AI for reducing administrative tasks and improving efficiency. These developments highlight the growing role of generative AI in both civilian and defense government functions, with OpenAI working to establish stronger relationships with federal agencies, including seeking FedRAMP Moderate accreditation for enhanced accessibility.

 

UAE Government Annual Meetings kick off with focus on AI, family and national identity - Middle East Economy, By Yara Abi Farraj, November 4, 2024

The 2024 UAE Government Annual Meetings will review the progress of the ‘We the UAE 2031’ vision, with a focus on integrating AI technologies to enhance national development. These discussions will address how AI can drive improvements in government services, streamline decision-making, and boost efficiency across various sectors. The meetings will also explore AI’s role in aligning federal and local development plans, shaping the UAE’s future readiness, and preparing for the long-term goals outlined in the UAE Centennial 2071 plan. The aim is to leverage AI to accelerate progress, improve quality of life, and strengthen the nation’s global competitiveness.

 

AI and IR

Scale AI unveils ‘Defense Llama’ large language model for national security users - DefenseScoop, Brandi Vincent, November 4, 2024

Tech firm Scale AI has developed a custom large language model (LLM) called Defense Llama for U.S. military and national security agencies. The company says the model assists with combat planning and intelligence analysis by providing responses based on military doctrine, international law, and ethical guidelines, helping military personnel analyze complex situations such as tactical decisions and adversary behavior. This development is part of the Department of Defense's broader AI adoption strategy under the Biden administration, which aims to use AI in classified settings to improve data analysis and speed up decision-making processes.

 

Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations - Business Wire, By Morgan Gress, November 07, 2024

Palantir Technologies, Anthropic, and Amazon Web Services (AWS) have partnered to provide U.S. intelligence and defense agencies access to Claude 3 and 3.5 models on AWS. This partnership integrates Claude within Palantir’s AI Platform (AIP), leveraging AWS's secure and sustainable infrastructure. The collaboration aims to enhance government operations by enabling rapid data processing, improving decision-making in time-sensitive scenarios, and streamlining resource-intensive tasks. The platform, accredited by the Defense Information Systems Agency (DISA), equips the U.S. government with AI tools to boost efficiency and analytical capabilities in classified environments.

 

Governing AI

AI and the Regulatory Challenge: a New Framework Using the SETO Loop - Cornell University, November 6, 2024

The Brooks Tech Policy Institute, supported by the Jain Family Institute, released a report introducing the "SETO Loop" as a regulatory framework for AI. The SETO Loop outlines four steps for effective AI regulation: defining the scope of protection, assessing existing regulations across the AI production chain, choosing regulatory tools (such as bans, taxes, or transparency requirements), and determining the appropriate organization to implement regulations. Through this framework, the report aims to guide policymakers in addressing AI’s societal impacts responsibly and efficiently.

 

What Trump’s election win could mean for AI, climate and health - Nature, Jeff Tollefson et al., November 8, 2024

As President-elect Donald Trump prepares for his second term, he is signaling significant changes to U.S. science policy, particularly in the area of artificial intelligence (AI). One of his key promises is to repeal President Joe Biden's executive order on AI, which emphasizes the safe and responsible development of the technology. Trump's stance aligns with the Republican platform, which argues that such regulations hinder innovation. Instead, Trump plans to shift the responsibility for AI safety to technology companies, encouraging voluntary measures rather than government-imposed regulations. However, experts express concern over this approach. AI safety advocates, such as Suresh Venkatasubramanian from Brown University and Roman Yampolskiy from the University of Louisville, warn that loosening regulations could exacerbate risks related to biased algorithms, data privacy, and the potential dangers of superintelligent AI. While Trump’s policy may foster innovation, critics argue that without robust oversight, the unchecked development of AI could lead to harmful consequences, including the deployment of AI systems that operate unpredictably and disproportionately affect vulnerable groups.



UK government launches AI assurance platform for enterprises - ComputerWeekly, Sebastian Klovig Skelton, November 6, 2024

The UK government has launched an AI assurance platform to help businesses identify and mitigate risks associated with artificial intelligence. The platform aims to support the growing AI assurance sector, which is currently worth over £1 billion and could expand to £6.5 billion by 2035. It brings together a range of tools, services, and guidance on AI impact assessments, bias review, and responsible use. The initiative includes a self-assessment tool, AI Management Essentials (AIME), to help organizations ensure ethical and responsible AI practices. The platform is part of the UK’s broader effort to establish itself as a global hub for AI expertise, promoting safe AI development while enhancing trust in AI systems.



AI and Public Engagement

With Hawaii Civic Engagement At A Crossroads, Let’s Use AI For Good - Honolulu Civil Beat, By Dan Milz, Jennifer Kagan, Mahdi Belcaid, November 7, 2024 

In September 2023, Hawaii’s Commission on Water Resource Management held a lengthy meeting with over nine hours of emotional public testimony on Maui’s water issues, reflecting challenges in managing extensive public engagement. While Hawaii values civic participation, the volume can overwhelm decision-makers, limiting effective responses. Experts suggest using AI tools, like ChatGPT, to streamline public feedback processing, making engagement more manageable and efficient. However, AI’s effectiveness varies by data type, and ethical concerns persist. The authors call for Hawaii’s leaders to hold discussions on AI’s role in government, aiming to enhance civic engagement responsibly.
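
As a rough illustration of the kind of workflow the authors describe, the minimal sketch below uses a general-purpose LLM to condense public comments into themes. It assumes the OpenAI Python SDK and an API key in the environment, and the testimony excerpts are invented examples; it is not the commission's actual tooling, just one way such triage could work.

```python
# Hypothetical sketch: grouping public testimony into themes with an LLM.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the testimony excerpts below are invented examples.
from openai import OpenAI

client = OpenAI()

testimony = [
    "Stream diversions are drying out taro farms in East Maui.",
    "Hotels should face stricter limits on water use during drought.",
    "Restoring native forests would improve watershed recharge.",
]

prompt = (
    "Group the following public comments into themes and summarize each theme "
    "in one sentence, noting roughly how many comments fall under it:\n\n"
    + "\n".join(f"- {t}" for t in testimony)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

In practice, staff would still review the grouped summaries and the underlying testimony; the point of such a tool is to triage volume, not to replace human judgment.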

 

AI and Problem Solving

California High Schoolers Create AI Fact-Checking Tool - GovTech, By Abby Sourwine, November 06, 2024 

In an advanced computer science class at Amador Valley High School, students developed an AI-powered fact-checker to test during live political debates, aiming to automate a process typically handled by human fact-checkers. The project required students to work with AI and large language models, compiling thousands of news articles to train the tool. They used speech-to-text technology for live transcription to identify statements in real time and check their accuracy, reporting an 87% accuracy rate.
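
As a rough illustration of the pipeline described above, the toy sketch below checks transcribed debate statements against a small reference corpus using simple token overlap. It is a stand-in for the students' LLM-based system, not their actual code, and the statements and reference claims are invented.

```python
# Toy sketch of a live fact-checking loop: transcribed statements are compared
# against a reference corpus. This stands in for the students' LLM-based tool;
# the statements and reference claims below are invented examples.
import re

# Reference claims mapped to whether the record supports them.
REFERENCE = {
    "unemployment fell to 3.9 percent in 2023": True,
    "the bill eliminates all income tax": False,
}

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, used for a crude similarity measure."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def check(statement: str) -> str:
    """Return a verdict for the closest reference claim, or 'unverified'."""
    best_claim, best_overlap = None, 0.0
    for claim in REFERENCE:
        overlap = len(tokens(statement) & tokens(claim)) / len(tokens(claim))
        if overlap > best_overlap:
            best_claim, best_overlap = claim, overlap
    if best_claim is None or best_overlap < 0.5:
        return "unverified"
    return "supported" if REFERENCE[best_claim] else "contradicted"

# Simulated output of a speech-to-text step during a debate.
for line in ["Unemployment fell to 3.9 percent in 2023.",
             "This bill eliminates all income tax."]:
    print(f"{check(line):>12}: {line}")
```

A real system would replace the overlap heuristic with retrieval over a large article corpus and an LLM judgment step, which is closer to what the students describe.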



AI for Good Impact Report - ITU Publications, Deloitte, October 2024

The ITU and Deloitte have published a report highlighting how AI is advancing progress toward the UN's Sustainable Development Goals (SDGs). The report showcases real-world applications of AI, such as improved weather forecasting through machine learning and AI-powered brain-machine interfaces that help ALS patients communicate. Over 40 UN agencies, including the ITU, are leveraging AI to improve education and early warning systems and to tackle social and economic inequalities. While AI offers significant benefits, the report also addresses challenges like job displacement, privacy concerns, and environmental impacts, calling for global collaboration on responsible AI development.

 

Vatican, Microsoft create AI-generated St. Peter’s Basilica to allow virtual visits, log damage - AP News, Nicole Winfield, November 11, 2024

The Vatican and Microsoft have created a digital twin of St. Peter’s Basilica using AI and 400,000 high-resolution images. The 3D replica helps identify structural issues, such as cracks and missing mosaics, which are difficult to detect with the naked eye. It also addresses the problem of overcrowding by allowing visitors to make entry reservations. The project, which includes 22 petabytes of data, aims to support the basilica’s preservation ahead of the 2025 Jubilee, when millions are expected to visit.

 

Exploring the Intersections of Open Data and Generative AI: Recent Additions to the Observatory - Open Data Policy Lab, By Roshni Singh, Hannah Chafetz, Andrew Zahuranec, Stefaan Verhulst, October 25, 2024

The Open Data Policy Lab has expanded its Observatory of Open Data and Generative AI, showcasing over 80 real-world use cases where AI and open data intersect. New additions include AI tools for government services, such as Bayaan Platform for Abu Dhabi's decision-makers and IN.gov for Indiana residents. The observatory highlights diverse initiatives, from AI-driven career coaching in Austria to climate change chatbots. Key themes include government engagement, culturally tailored AI solutions, and improving AI’s statistical reasoning capabilities. The effort aims to explore how generative AI can benefit public services while considering ethical implications.

 

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.