AI and Problem Solving
Combine AI With Citizen Science to Fight Poverty - Nature, Editorial, February 26, 2025
“Of the myriad applications of artificial intelligence (AI), its use in humanitarian assistance is underappreciated. In 2020, during the COVID-19 pandemic, Togo’s government used AI tools to identify tens of thousands of households that needed money to buy food, as Nature reports in a News Feature this week. Typically, potential recipients of such payments would be identified when they apply for welfare schemes, or through household surveys of income and expenditure. But such surveys were not possible during the pandemic, and the authorities needed to find alternative means to help those in need. Researchers used machine learning to comb through satellite imagery of low-income areas and combined that knowledge with data from mobile-phone networks to find eligible recipients, who then received a regular payment through their phones. Using AI tools in this way was a game-changer for the country.”
An AI cellmate? It’s not as dystopian as it sounds - Raconteur, Tamlin Magee, February 19, 2025
“Coracle, a company which provides secure laptops to prisoners in the UK, is working with the University of Hertfordshire to develop an AI assistant that will tailor courses to the learning styles of individual inmates. Coracle’s founder, James Tweed, says the project can help plug the skills shortage and get ex-offenders in steadier jobs once they leave prison. ‘The prison system can be likened to the public service of last resort,’ says Tweed. ‘Individuals fall through the system – maybe they’ve been through care, they’ve fallen out of school, they might have mental health issues and educational issues – but all services have failed at some point. So they’ve ended up in prison.’ Coracle’s educational devices are used in 91 of the 123 prisons in the UK and it currently partners with 60 content providers, including the Open University, Prisoners’ Education Trust and a range of vocational programmes.”
Can AI bridge the gaps in Africa’s healthcare systems? – Africa Science Focus, February 28, 2025
Podcast: “Africa’s healthcare systems face major challenges, from workforce shortages to gaps in infrastructure. In the latest episode of Africa Science Focus, reporter Michael Kaloki speaks with AI experts about how Artificial Intelligence (AI) is driving change across the continent. Charles Waka explains how AI is optimising maternal and neonatal healthcare, improving outcomes for mothers and newborns. Ndisha Mwakala, a former health information systems advisor at the Centers for Disease Control and Prevention, discusses how limited African data was a major hurdle in developing an AI tool to identify patients most at risk of HIV and connect them to testing and treatment. Zakia Salod, South African researcher in medical AI and bioinformatics, highlights how AI-powered tools like her VAXIN8 are accelerating vaccine development. Darlington Akogo, CEO of minoHealth AI Labs, discusses how his AI tool, Moremi, streamlines disease diagnosis, treatment planning, and prescription, while Uzma Alam, programme lead for science policy engagement at the Science for Africa Foundation, stresses the need for investment in AI research to maximise its benefits.”
Building safer chatbots for public health – Meedan, by Nat Gyenes, February 21, 2025
“With the rapid advance of large language models, AI chatbots are quickly becoming go-to sources of information on everything from politics to natural disasters to public health. While traditional search engines overwhelm us with information that may be relevant to our questions, chatbots “give us an answer.” But that answer may not always be correct. Despite significant breakthroughs, AI chatbots still regularly offer answers that are biased or flat-out wrong…
Right now, relying on an AI chatbot for information about one’s sexual or reproductive health is uniquely risky. Inaccurate or irrelevant responses can lead a person to put their own health in jeopardy, and responses that are insensitive to cultural or linguistic norms can drive users away entirely. This is why Meedan’s research team has set out to build standards for public health chatbots that will help address emergent health needs by taking a safe, inclusive, multilingual approach that will consider the unique challenges faced by healthcare workers and patients in crisis settings.”
The New York City Subway Is Using Google Pixels to Listen for Track Defects – Wired, by Aarian Marshall, February 27, 2025
“Between September and January, six Google Pixel smartphones hitched free rides on four New York City subway cars. Specifically, they took the A train, as it ping-ponged the 32 miles between the northern tip of Manhattan and the southern reaches of Queens…The phones were part of a brief experiment by New York City’s Metropolitan Transportation Authority and Google into whether cheap, mostly off-the-shelf technology could supplement the agency’s track inspection work. (Google Public Sector, the division that undertook the work, didn’t charge the MTA for this initial experiment.) Today, inspections are carried out by human inspectors, who together walk all 665 miles of New York City’s subway tracks, eyes peeled for problems like broken rails, busted signals, and water damage. Thrice-annual rides by specialized, sensor-laden ‘train geometry cars,’ also capture and upload more sophisticated data on the status of the city’s rail infrastructure.”
Governing AI
Is humour the key to better AI governance? Audrey Tang thinks so. – Apolitical, a conversation between Robyn Scott and Audrey Tang, February 25, 2025
“What if we could build a civil service from the ground up with AI at its core? This conversation between Audrey Tang and Apolitical CEO Robyn Scott explores what that could look like — how AI can assist rather than replace, how citizen participation can scale through AI-enabled deliberation and how trust anchors like academic institutions can stabilise governance in a polarised world. Along the way, they touch on humour as a tool for governance, the role of open-source safety measures and why The Hitchhiker’s Guide to the Galaxy offers lessons for AI policy. As governments adopt AI, the goal isn’t to replace civil servants — it’s to make their work more effective, responsive, and human.”
Artificial Intelligence and Procedural Due Process – SSRN, by Brandon L. Garrett, January 11, 2025
Legal article about how courts can protect individuals’ right to due process when government agencies use AI systems to aid their decision making: “In this Article, I argue that the use of AI can be compatible with procedural due process. It is a choice to adopt AI that violates due process rights, just like it is a choice to send unintelligible notice or conduct unfair administrative hearings in a way that violates people’s due process rights. As a design matter, AI systems that provide notice and an opportunity to contest them are not only feasible, but a mature body of computer science research has shown that these interpretable or glass box systems perform as well—or they perform better—than black box AI systems. If AI is designed to be interpretable, or a glass box, then people can be on notice of not just the fact that AI was used, but what factors it relied on. They can then meaningfully challenge decisions made using AI. Violating due process is a design choice, and a poor one. And interpretable AI can be compatible with due process protections. Existing due process protections can protect us, but only if judges or lawmakers rigorously insist on validated, reliable and interpretable AI to protect due process.”
CT senator wants to restrict insurance companies from using AI to decide health care – Connecticut Post, by Cris Villalonga-Vivoni, March 4, 2025
“In an ever-changing, tech-driven world, artificial intelligence systems are becoming increasingly pervasive each year. At the same time, lawmakers across the country and in Connecticut are looking to legislation as a way to regulate it, especially in the health care field. One such bill, proposed by state Sen. Saud Anwar, D-South Windsor, aims to amend the state's code to prohibit health insurance carriers from using AI to determine patient care ‘to safeguard patient access to testing, medications and procedures.’ He said the bill comes after a ProPublica investigation found how Connecticut-based Cigna Insurance was refusing care to patients with the help of AI-guided algorithms during the prior authorization process.”
What to know about deepfakes bill backed by Melania Trump – Axios, by Sareen Habeshian, March 3, 2025
“The TAKE IT DOWN Act would require tech and social media platforms to remove CSAM and non-consensual intimate images within 48 hours of being notified by a victim, and it criminalizes posting such content, per Axios' Maria Curi. Under the bill, people who post such content would face penalties and prison time. The FTC could sue tech companies for not complying as an unfair or deceptive act or practice, Curi writes…The bill is sponsored by Sen. Ted Cruz (R-Texas), and has bipartisan support including from cosponsors like Sen. Amy Klobuchar (D-Minn) and Sen. Cory Booker (D-N.J.).”
AI and Labor
Kenyan AI workers form Data Labelers Association – Computer Weekly, by Sebastian Klovig Skelton, February 14, 2025
“Artificial intelligence (AI) workers in Kenya have launched the Data Labelers Association (DLA) to fight for fair pay, mental health support and better overall working conditions. Employed to train and maintain the AI systems of major technology companies, the data labellers and annotators say they formed the DLA to challenge the “systemic injustices” they face in the workplace, with 339 members joining the organisation in its first week. While the popular perception of AI revolves around the idea of an autodidactic machine that can act and learn with complete autonomy, the reality is that the technology requires a significant amount of human labour to complete even the most basic functions. Otherwise known as ghost, micro or click work, this labour is used to train and assure AI algorithms by disseminating the discrete tasks that make up the AI development pipeline to a globally distributed pool of workers. Despite Kenya becoming a major hub for AI-related labour, the DLA said data workers are massively underpaid – often earning just cents for tasks that take a number of hours to complete – and yet still face frequent pay disputes over withheld wages that are never resolved.”
A Nobel laureate on the economics of artificial intelligence - MIT Technology Review, Peter Dizikes, February 25, 2025
“For all the talk about artificial intelligence upending the world, its economic effects remain uncertain. But Institute Professor and 2024 Nobel winner Daron Acemoglu has some insights.
Despite some predictions that AI will double US GDP growth, Acemoglu expects it to increase GDP by 1.1% to 1.6% over the next 10 years, with a roughly 0.05% annual gain in productivity. This assessment is based on recent estimates of how many jobs are affected—but his view is that the effect will be targeted. ‘We’re still going to have journalists, we’re still going to have financial analysts, we’re still going to have HR employees,’ he says. ‘It’s going to impact a bunch of office jobs that are about data summary, visual matching, pattern recognition, etc. And those are essentially about 5% of the economy.’”
Arkansas’ Employment Portal Uses AI to Match Seekers, Jobs – Government Technology, March 4, 2025
“Three years after conceiving a plan to seamlessly connect job seekers, employers and state workforce services, Arkansas officials unveiled LAUNCH, a modern, user-centric platform that is free to use. The effort, in collaboration with Research Improving People’s Lives (RIPL), a national nonprofit that helps governments harness data, is described in the Workforce Innovation and Opportunity Act (WIOA) Combined State Plan for 2024-27. LAUNCH, it said, offers integrated service delivery to ‘support all residents on pathways to employment and economic security.’ It utilizes the ‘no wrong door’ philosophy enabling a single point of entry into a website or system, and uses data and artificial intelligence to match job seekers with postings and suggest learning resources.”
AI for Governance
America First, Science Last? Kratsios Hearing Signals Empty AI Strategy – Reboot Democracy, by Beth Simone Noveck, March 3, 2025
“Trump's pick to lead OSTP professed American AI leadership at his confirmation hearing while ignoring the dismantling of the very scientific institutions that sustain it. With no vision beyond deregulation and no defense of research funding, he failed to address—and Senators failed to ask—about the role of AI in modernizing government or the growing influence of Elon Musk in shaping federal AI policy.”
Making the case for Artificial Intelligence (AI) in Transforming Public Services - Microsoft, Madhavi Gosalia, February 2025
This report from Microsoft presents case studies of how government agencies around the world are using AI to improve service delivery: “The ability to use GenAI as natural language interface for harnessing the power of AI is affording us a transformative moment to reimagine every aspect of our organizations, institutions, and business. Public sector leaders have taken note of this pervasiveness of AI across sectors including education, defence, social services, healthcare, public administration & governance. Across the world, over 70 countries have published National AI policies & strategies and other countries are evaluating the next steps…This paper describes an approach to ROI calculation and real-world case studies with quantifiable benefits. It provides a compelling argument for the broad adoption of AI technologies in the public sector followed by a step-by-step approach to getting started for successful adoption, as well as resources to dive into each topic.”
Oklahoma’s new AI watches procurement for filing errors – StateScoop, by Sophia Fox-Sowell, March 5, 2025
“The State of Oklahoma on Tuesday joined a growing number of states using artificial intelligence to streamline procurement processes — to reduce filing errors, increase efficiency and better monitor how taxpayer money is spent. The Office of Management Enterprise Services, responsible for managing statewide procurement efforts in Oklahoma, announced it recently deployed Process Copilot, a generative AI platform from the German software service firm Celonis that flags mistakes in submission forms, such as missing vendor contracts or including the wrong type of procurement order. According to the announcement, the platform helped OMES identify $190 million in flagged purchase card transactions across its 118 agencies, and identified $5.6 million in transactions that exposed ways for state agencies to implement better procurement system controls.”
AI Infrastructure
Governing in the Age of AI: Building Britain’s National Data Library - Tony Blair Institute for Global Change, by Henry Li et al., February 25, 2025
Report calling for the UK to establish a first-of-its-kind data resource to support AI development: “No nation today has the infrastructure needed to fully harness AI for public good. The National Data Library (NDL) represents an opportunity for the UK to be the first. It can help create the infrastructure needed to unlock the value of public-sector data alongside frameworks to identify and collect new types of data for breakthrough insights. Instead of a complicated web of systems and slow, one-off approvals, the NDL will establish a clear, secure way to access linked data sets, supporting AI innovation, better policymaking and research that drives economic growth. It will not centralise data but will put in place the legal, technical and governance structures to ensure that high-value data sets can be used efficiently while maintaining security and trust.”
AI and Education
ChatGPT To Be Given To All Estonian High School Students – Forbes, by Dan Fitzpatrick, February 26, 2025
“The Estonian government has announced a partnership with the company behind the AI chatbot ChatGPT. OpenAI will roll out a version of ChatGPT designed for education, called ChatGPT Edu, to all secondary school students and teachers. Starting in September 2025, 10th and 11th graders will be the first to gain access…OpenAI hope that ChatGPT Edu will provide Estonia’s high school students with personalized tutoring, AI study aids and feedback to complement traditional instruction. The AI-powered assistant will also help Estonian teachers with planning, administrative tasks, and tailored student support.” Learn more in OpenAI’s blog post.
AI and Public Engagement
How AI Can Support Democracy Movements – Ash Center for Democratic Governance and Innovation, by Erica Chenoweth, February 28, 2025
In December 2024, the Ash Center convened a workshop with “democracy activists and organizers, social scientists, and tech specialists to explore promising developments in AI that could support pro-democratic social movements, with discussion focusing on how to evaluate the impacts of AI on movement outcomes and the ethical trade-offs involved in embracing these technologies.” Priorities identified in the workshop include establishing a consortium to connect AI developers with democracy movements, supporting training efforts to enable democracy activists to use AI effectively and responsibly, launching a collaborative research agenda that employs AI tools and evaluates their impact on movement outcomes, and creating a Code of Conduct for integrating AI tools into democracy movement work. The Ash Center also published a basic guide for how social activists can use AI.
What Could BG Be? Enabling a Large-Scale Conversation in Kentucky – Jigsaw, February 14, 2025
“The city of Bowling Green, Kentucky stands on the cusp of dramatic change. Over the next 25 years, Warren County, where the town is situated, is projected to nearly double in size. While many of the factors driving Bowling Green’s growth are external, including the rapid growth of Nashville an hour to its south, the opportunity remains for the community to shape its future development. As the county’s Judge Executive Doug Gorman likes to say however, ‘that growth can either happen to us, or for us.’ Today, Innovation Engine, a local strategy consultancy, in technical partnership with The Computational Democracy Project and Jigsaw, with support from Google.org, is launching ‘What Could BG Be?’ a month-long online conversation designed to allow the people of Warren County to shape the future of their community.”
AI and Public Safety
AI-Generated Police Reports – Jotwell, by Mary Fan, March 4, 2025
“One of the most prescient scholars of policing and technology, Andrew Guthrie Ferguson’s recent paper, Generative Suspicion and the Risks of AI-Assisted Police Reports, offers a fascinating overview of AI-generated police reports and the potential impact on criminal practice. Police reports might seem like dull bureaucratic minutiae. But a police report can shape a person’s fate, from whether and what charges get filed, to the plea deal that is offered, and the sentence a defendant receives. One of the first items in a criminal case for a prosecutor or defense attorney to review, the police report shapes and constrains the narrative. The report defines victims and perpetrators, provides potential impeachment material for trial, and impacts the availability of defenses. The transformation of how police reports are generated is thus important, with potential systemic impacts.”
Events
March 13, 2025, 1-2pm ET – Join the Internet Archive on Thursday, March 13 for a thought-provoking discussion on Copyright, AI, and Great Power Competition, a new paper by Joshua Levine and Tim Hwang. The piece explores how different nations approach AI policy and copyright regulation, and also what’s at stake in the battle for technological dominance. Sign up here.
March 20, 2025, 5-6pm ET – Artificial Intelligence in the Fight Against Misinformation: A Conversation with Ed Bice – Join Reboot Democracy for a conversation with Ed Bice, CEO of Meedan and 2024 Skoll Award winner. We'll explore how AI-powered tools like Meedan's "Check" platform and SynDy framework are combating misinformation across 53 countries, particularly as major platforms scale back fact-checking efforts. Learn how these innovations are protecting democratic processes in our increasingly complex information ecosystem. Sign up here.
Free AI, Governance, and Democracy Learning Opportunities with InnovateUS
- March 11, 2025, 2pm - Ending Homelessness Together: Leveraging Data and Collaboration for Lasting Solutions
- March 12, 2025, 2pm - Building Inclusive Climate Resilience with Human-Centered Design in Government
- March 12, 2025, 4pm - Reading Out Loud, Growing Strong: AI Tools for Fluency Development
To register and see more workshops, visit https://innovate-us.org/workshops.