AI for Governance
In New Jersey, using AI starts with empowering employees - StateScoop, By Beth Simone Noveck, December 4, 2024
New Jersey is training 10,000 state workers to responsibly use AI for better public services. An employee survey guided the state's free online training program and the recommendations of the state's AI Task Force. The training covers safe AI use, privacy, bias avoidance, and spotting "AI snake oil." AI is already speeding up unemployment benefits and improving call center resolution rates in the state. Engaging workers and promoting continuous learning are key to realizing AI's potential and increasing public trust. The article highlights New Jersey's bottom-up approach to AI governance through employee upskilling.
Institutions around the state unite to create New Mexico AI Consortium - The University of New Mexico, By Carly Bowling, December 2, 2024
The New Mexico Artificial Intelligence Consortium, a collaboration of research institutions and national laboratories, aims to advance AI development and understanding in New Mexico. Founding members include Los Alamos National Laboratory, Sandia National Laboratories, and several universities, such as UNM, NMSU, and NM Tech. The partnership seeks to position New Mexico as a leader in trustworthy AI solutions for national security, energy, and other fields. The consortium will leverage the labs' computational resources and expertise in AI security, machine learning, and high-performance computing. It also focuses on education and workforce development, offering academic programs and training opportunities to equip students at all levels with AI skills. Additionally, the group plans to support AI startups and foster industry collaboration to create an AI ecosystem that drives economic growth and job creation in the state.
AI is motivating data governance work, says Texas CIO - StateScoop, December 10, 2024
The arrival of AI is making it easier to talk about certain aspects of IT modernization in state government, Texas Chief Information Officer Amanda Crawford tells StateScoop in a recent video interview. Crawford says that agency leaders are seeing “tremendous potential” in AI, and so are now motivated to properly manage state data. “One of my colleagues likes to say that we had legacy modernization that was brought to you by the pandemic and that now we have data governance, data management and all of those policies brought to you by AI,” Crawford says. One project to improve data management, she says, is an online training course, designed to boost data literacy.
The AI We Deserve - Boston Review, By Evgeny Morozov, December 4, 2024
Despite generative AI's recent rise, Evgeny Morozov argues we must ask whether its development truly serves the public good. He sees AI as narrowly focused on problem-solving, mirroring the instrumental rationality of the military-industrial "Efficiency Lobby" that has funded it since the 1950s. Morozov draws on Hans Otto Storm's concept of "eolithism," a fluid craftsmanship exemplified by a Stone Age wanderer repurposing found objects, as key to human intelligence but neglected by AI. While today's AI is more open-ended, Morozov contends it still serves a neoliberal ethos that casts users as individualistic consumers. He points to 1970s Latin American efforts to harness computing for participatory planning as roads not taken. Morozov calls for redirecting AI toward public ends, stressing that this requires political struggle against the "Efficiency Lobby," not just technical tweaks. Realizing democratic AI, he argues, means cultivating an ecological rationality that respects citizens' creativity and dignity.
Poland Launches $240 Million AI Development Plan To Boost Economy And Defense - Forbes, By Lidia Kurasinska, November 26, 2024
Poland announced a $240 million investment in artificial intelligence to boost economic competitiveness and national security amid rising hybrid threats from Russia. The plan includes establishing an AI Fund, an advisory council, and one of Europe's first AI Factories at the AGH University of Cracow. Key initiatives include creating a Polish large language model (PLLuM) and supporting startups and small businesses. Amid increased cyberattacks, Poland's defense sector is prioritizing AI integration, with a new Artificial Intelligence Implementation Center under the Cyber Defense Forces Command. Lessons from Ukraine's use of AI in warfare are being incorporated into Poland's strategies. The investment is part of a broader National Digitization Strategy targeting $24 billion in digital projects by 2030. AI adoption in Poland has surged, with 30% of businesses and 71% of defense firms already using the technology, potentially adding $139 billion to the economy by 2030.
U.S. Central Command Employs Large Language Model-based Artificial Intelligence - AFCEA International, By Kimberly Underwood, December 2, 2024
U.S. Central Command (CENTCOM) is using AI tools, including the CENTGPT platform, to improve efficiency in operations. Initially tested during the 2023 Hamas-Israel conflict, it now aids in code generation, document processing, and office tasks. The platform helps developers detect errors and streamlines document disclosure. CENTGPT’s secure network access improves information retrieval and summarization. While promising, CENTCOM emphasizes human oversight of AI outputs. The platform, based on the Air Force’s NIPRGPT, ensures secure handling of sensitive data. This initiative aims to enhance operations and guide future AI policy.
Does the UK’s liver transplant matching algorithm systematically exclude younger patients? - AI Snake Oil, By Sayash Kapoor, Arvind Narayanan, November 11, 2024
A UK liver transplant matching algorithm appears to discriminate against younger patients under 45, who cannot obtain high enough priority scores even when very ill. The algorithm predicts 5-year survival rather than lifetime benefit, underestimating the value of transplants for younger recipients. Patient groups warned of this flaw before the algorithm's launch in 2018, but it persists after a 2022 update. The 5-year window was chosen based on available data rather than for clinical reasons, so the algorithm assesses need more than benefit. The case highlights problems with transparency, the difficulty of incorporating deservingness and donor preferences into an algorithm, and a shift toward utilitarian ethics made without adequate public debate. The authors call for better data collection, adjustments to the algorithm, and broader public discussion of using AI for medical decisions.
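As a rough illustration of the argument, here is a minimal Python sketch with entirely hypothetical patients and survival figures (not the UK scheme's actual transplant benefit score). It shows how truncating predicted benefit at a fixed 5-year horizon can rank an older patient above a younger one whose lifetime gain is far larger.

```python
# Illustrative only: hypothetical patients and survival figures, not the UK
# transplant benefit score. The point is what happens when benefit is
# truncated at a fixed 5-year horizon.

patients = [
    # name, expected remaining years WITH a transplant, WITHOUT a transplant
    {"name": "age 30, very ill",       "with_tx": 30.0, "without_tx": 2.0},
    {"name": "age 62, moderately ill", "with_tx": 9.0,  "without_tx": 1.5},
]

def lifetime_benefit(p):
    """Life-years gained over the patient's whole remaining lifetime."""
    return p["with_tx"] - p["without_tx"]

def five_year_benefit(p):
    """Life-years gained when survival is only predicted out to 5 years."""
    return min(p["with_tx"], 5.0) - min(p["without_tx"], 5.0)

for p in patients:
    print(f'{p["name"]}: 5-year benefit = {five_year_benefit(p):.1f} years, '
          f'lifetime benefit = {lifetime_benefit(p):.1f} years')

# Output with these made-up numbers:
#   age 30, very ill: 5-year benefit = 3.0 years, lifetime benefit = 28.0 years
#   age 62, moderately ill: 5-year benefit = 3.5 years, lifetime benefit = 7.5 years
# The 5-year view ranks the older patient higher, even though the younger
# patient's lifetime gain is nearly four times larger.
```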
Online Book Talk—The Tech Coup: How to Save Democracy from Silicon Valley - Harvard Kennedy School Ash Center for Democratic Governance and Innovation, By Marietje Schaake, Bruce Schneier, Danielle Allen, December 18, 2024
Former Member of the European Parliament Marietje Schaake's book argues that tech companies have seized power from governments under the guise of innovation, posing major threats to democracy through issues like facial recognition surveillance, cryptocurrency instability, and spyware proliferation. The book examines how this occurred and outlines solutions to empower elected officials and citizens to resist corporate influence and protect democracy in the digital age. The event features Schaake in discussion with cybersecurity expert Bruce Schneier, moderated by Danielle Allen, Director of Harvard's Allen Lab for Democracy Renovation.
Government Must Be Willing to Reimagine, San Jose Mayor Says - Government Technology, By Skip Descant, December 10, 2024
San Jose Mayor Matt Mahan cautiously addressed plans by the incoming Trump administration to cut federal spending and create a Department of Government Efficiency led by tech leaders Elon Musk and Vivek Ramaswamy. While defending public workers, Mahan suggested being open to re-engineering government processes with new tools and a customer focus. The mayor, a former tech CEO, noted differences between government and Silicon Valley startups, arguing the temptation in government is to keep adding rules. He expressed support for reimagining processes, but stated it "shouldn't come from a place of, government is a problem." His response reflected the challenge of balancing innovation with respecting public servants.
AI and Public Engagement
InnovateUS Workshop: How New Jersey and Boston Lead the Way in Secure AI Implementation - InnovateUS, By Santiago Garces, Dave Cole, Naman Agrawal, Amani Farooque & Ruthie Nachmany, December 3, 2024
At the InnovateUS workshop "Accessing AI Safely: Setting an AI Sandbox" (December 3, 2024), leaders from New Jersey and Boston shared their pioneering approaches to implementing AI in government. New Jersey Chief Innovation Officer Dave Cole and Boston CIO Santiago Garces presented their distinct strategies: New Jersey's comprehensive AI Assistant Sandbox has reached over 10,000 state employees with a 79% positive feedback rating, while Boston partnered with Northeastern University's Burnes Center for Social Change to develop a cost-effective, secure platform. Both initiatives demonstrate how government agencies can harness AI's power while ensuring security and responsible use. A recording of the full workshop is available.
How AI Text Translators Could Improve K-12 Engagement - GovTech, By Mara Klecker, December 2, 2024
Minnesota schools are using AI-powered translation tools like TalkingPoints to improve communication with non-English-speaking families. Teachers, such as Mounds View’s Zoe Kourajian, use these tools to send messages in multiple languages, fostering equity and stronger relationships. However, districts like St. Paul and Rochester have encountered inaccuracies, prompting reliance on interpreters and transparency about AI limitations. Schools emphasize that AI tools are supplements, not replacements, for in-depth, face-to-face interactions. Usage is growing, with districts balancing technology and human oversight to ensure effective and accurate communication.
Building an accessible future for all: AI and the inclusion of Persons with Disabilities - United Nations Regional Information Centre for Western Europe, December 2, 2024
AI has the potential to significantly improve accessibility and inclusion for persons with disabilities, offering innovations like voice-recognition software, prosthetics, and digital assistants. However, there are risks of discrimination, as AI tools may perpetuate bias and fail to account for the diversity of human experiences. For example, AI systems may misinterpret body language or communication styles, excluding people with disabilities in areas like hiring, education, or services. The lack of transparency in AI decision-making makes it difficult to detect these biases, and marginalized groups, including those with disabilities, are often left out of AI development. To ensure AI serves everyone, including persons with disabilities, regulations must protect against discrimination and promote inclusion. The UN and EU are working on frameworks like the Global Digital Compact and the European AI Act, which aim to ensure AI's development respects human rights and is accessible to all, particularly marginalized groups.
Generative Agent Simulations of 1,000 People - Cornell University, By Joon Sung Park, Carolyn Q. Zou, Aaron Shaw, Benjamin Mako Hill, Carrie Cai, Meredith Ringel Morris, Robb Willer, Percy Liang, Michael S. Bernstein, November 15, 2024
Human behavioral simulation, in the form of general-purpose computational agents that replicate human behavior across domains, could enable broad applications in policymaking and social science. We present a novel agent architecture that simulates the attitudes and behaviors of 1,052 real individuals, applying large language models to qualitative interviews about their lives and then measuring how well these agents replicate the attitudes and behaviors of the individuals they represent. The generative agents replicate participants' responses on the General Social Survey 85% as accurately as participants replicate their own answers two weeks later, and perform comparably in predicting personality traits and outcomes in experimental replications. Our architecture reduces accuracy biases across racial and ideological groups compared to agents given only demographic descriptions. This work provides a foundation for new tools that can help investigate individual and collective behavior.
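The paper's headline number is a normalized accuracy: the agent's agreement with a participant's survey answers, divided by how consistently that participant reproduces their own answers two weeks later. A minimal sketch of that normalization, using made-up survey items and our own function names (not the paper's code), might look like this:

```python
# Sketch of the normalized-accuracy idea: score an agent's agreement with a
# participant's answers relative to the participant's own test-retest
# consistency. All names and data below are hypothetical illustrations.

def agreement(a: list, b: list) -> float:
    """Fraction of items on which two sets of categorical answers match."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def normalized_accuracy(agent_answers, wave1_answers, wave2_answers) -> float:
    """Agent accuracy against wave 1, normalized by the participant's
    wave-1 vs. wave-2 self-consistency (their test-retest ceiling)."""
    raw = agreement(agent_answers, wave1_answers)
    ceiling = agreement(wave1_answers, wave2_answers)
    return raw / ceiling if ceiling > 0 else 0.0

# Hypothetical example: 10 General Social Survey-style categorical items.
wave1 = ["agree", "no", "yes", "often", "agree", "no", "yes", "rarely", "yes", "no"]
wave2 = ["agree", "no", "yes", "often", "neutral", "no", "yes", "rarely", "yes", "yes"]  # 8/10 self-consistent
agent = ["agree", "no", "yes", "rarely", "neutral", "no", "yes", "rarely", "no", "no"]   # 7/10 vs. wave 1

print(normalized_accuracy(agent, wave1, wave2))  # 0.7 / 0.8 = 0.875 in this toy case
```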
Does AI interfere in our democracy? - CommonWealth Beacon, By Jennifer Smith, December 4, 2024
The Codcast, CommonWealth Beacon's half-hour policy podcast, delves into the heart of Massachusetts's most pressing and intriguing topics. This week, CommonWealth Beacon's Jennifer Smith is joined by Bruce Schneier, fellow and lecturer in public policy at the Harvard Kennedy School, and Nathan Sanders, fellow at the Berkman Klein Center for Internet & Society. They discuss how AI has the power to strengthen civic engagement in elections and policymaking, the importance of transparency in its use, and how it can be developed to prioritize democratic values.
AI and Problem Solving
Seattle Area Buses Deploy AI Cameras to Spot Lane Violations - Government Technology, By Nicholas Deshais, December 2, 2024
King County Metro in Seattle has launched a pilot project using AI-equipped cameras on buses to monitor drivers in transit-only lanes. The cameras, mounted inside the bus, record violations for later review. The pilot, which began on November 6, aims to gather data on lane violations without issuing fines. The program, authorized by a 2024 state law, could eventually lead to ticketing. AI technology helps identify lane obstructions, with similar programs already operating in cities like New York and Washington, D.C.
Researchers May Have Solved a Decades-Old Brain Paradox With AI - SciTechDaily, Cold Spring Harbor Laboratory, November 28, 2024
Scientists at Cold Spring Harbor Laboratory (CSHL) have developed an AI algorithm inspired by the genome's efficiency, achieving unprecedented data compression and strong task performance. Professors Anthony Zador and Alexei Koulakov, along with postdocs Divyansha Lachi and Sergey Shuvaev, created the algorithm based on the idea that the genome's limited capacity might actually enhance intelligence by forcing adaptation. Their AI model, which compresses large amounts of data, performs tasks like image recognition, and even plays video games such as Space Invaders, with surprising effectiveness. While it is not yet on par with the brain's full capacity, the algorithm demonstrates a level of data compression never before seen in AI. This breakthrough could have significant applications, such as running large language models on smaller devices like smartphones, enabling faster AI performance and more efficient tech development.
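The article gives few implementation details, so the following is only a generic illustration of the bottleneck principle, not the CSHL algorithm itself: it compresses a weight matrix with a truncated SVD (a standard low-rank factorization) to show how a network layer can be described with far fewer parameters. All names and numbers are ours.

```python
# Generic illustration of compressing a weight matrix through a small
# "bottleneck" representation, in the spirit of (but not identical to)
# the genomic-bottleneck idea described in the article.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, rank = 512, 512, 16   # rank is the size of the bottleneck

# Hypothetical "trained" weights with underlying structure plus noise,
# standing in for a layer that compresses well.
W_full = (rng.standard_normal((n_in, rank)) @ rng.standard_normal((rank, n_out))
          + 0.05 * rng.standard_normal((n_in, n_out)))

# Best rank-k approximation via truncated SVD: store two thin matrices
# instead of the full weight matrix.
U, S, Vt = np.linalg.svd(W_full, full_matrices=False)
A = U[:, :rank] * S[:rank]   # shape (n_in, rank)
B = Vt[:rank, :]             # shape (rank, n_out)
W_compressed = A @ B

full_params = W_full.size
small_params = A.size + B.size
rel_err = np.linalg.norm(W_full - W_compressed) / np.linalg.norm(W_full)
print(f"parameters: {full_params} -> {small_params} "
      f"({small_params / full_params:.1%} of the original), "
      f"relative reconstruction error {rel_err:.3f}")
```

The trade-off the sketch makes visible is the one the researchers highlight: a drastically smaller description of the network, at the cost of some fidelity, which they argue can itself act as a useful constraint.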
Amazon to pilot AI-designed material for carbon removal - Reuters, By Jeffrey Dastin, December 2, 2024
Amazon plans to pilot a new carbon-removal material for its data centers, developed by AI from the startup Orbital Materials. The material, designed to filter CO2, works like a sponge at the atomic level, capturing carbon while avoiding other substances. The material is expected to cost up to 10% of the hourly charge for renting GPU chips for AI training, offering a cost-effective alternative to traditional carbon offsets. Amazon Web Services (AWS) will test the material in a data center starting in 2025 as part of a three-year partnership with Orbital. The startup, which also aims to address water use and chip cooling in data centers, is backed by companies like Nvidia and Radical Ventures.
How New AI Dashcams Could Improve Small-Town Policing - Government Technology, By Thad Reuter, December 10, 2024
The Hickman County Sheriff's Department in rural Tennessee has deployed AI-powered dash cams from Motive to improve policing and officer safety. The high-quality videos have led some suspects to take plea deals instead of going to trial, as the public increasingly demands video evidence. Motive, which also serves industries like construction and trucking, entered the public sector market about two years ago. Its AI can detect 8-15 different behaviors to monitor driver safety and provide detailed accident recordings for insurance claims and court cases. The adoption of these dashcams in Hickman County shows how AI is making inroads even in small, budget-constrained agencies. It reflects a broader trend of AI being used across the public safety sector, from real-time crime centers to court support for defendants.