AI for Governance
Innovation at the Library of Congress with Natalie Buda Smith - Reboot Democracy, Beth Simone Noveck and Giorgia Christiansen, January 24, 2025
On January 23, the Rebooting Democracy in the Age of AI lecture series featured Natalie Buda Smith, Director of Digital Strategy at the Library of Congress, discussing how Congress can leverage AI to enhance operations. Her team is experimenting with generative AI for legislative data analysis, developing AI-generated bill summaries, testing commercial and open-source models for legislative content, and enhancing the Congress.gov API. AI-powered tools also help monitor system performance. While still in early stages, these initiatives aim to support staff workflows rather than replace human expertise, prioritizing authenticity and accuracy in Congressional work. Watch the full recording here.
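The Congress.gov API mentioned above is publicly documented at api.congress.gov. As a minimal sketch of what programmatic access to legislative data looks like, the snippet below builds a request URL for listing bills; the endpoint path and parameter names follow the public v3 API as we understand it, but treat them as assumptions and verify against the official documentation before use.

```python
# Hypothetical sketch of querying the Congress.gov API for recent bills.
# Endpoint layout and parameter names (format, limit, api_key) are
# assumptions based on the public v3 API; verify at api.congress.gov.
from urllib.parse import urlencode

BASE_URL = "https://api.congress.gov/v3"

def bill_list_url(congress: int, api_key: str, limit: int = 20) -> str:
    """Build the URL for listing bills introduced in a given Congress."""
    query = urlencode({"format": "json", "limit": limit, "api_key": api_key})
    return f"{BASE_URL}/bill/{congress}?{query}"

# Example: a URL for the first 20 bills of the 118th Congress.
print(bill_list_url(118, "DEMO_KEY"))
```

Fetching that URL (with a real API key, obtainable free from api.congress.gov) returns JSON metadata for each bill, which is the kind of structured input the Library's AI-generated bill summaries would draw on.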
AI Prototypes for UK Welfare System Dropped as Officials Lament ‘False Starts' - The Guardian, Robert Booth, January 27, 2025
“Pilots of AI technology to enhance staff training, improve the service in job centres, speed up disability benefit payments and modernise communication systems are not being taken forward, freedom of information (FoI) requests reveal. Officials have internally admitted that ensuring AI systems are “scalable, reliable [and] thoroughly tested” are key challenges and say there have been many “frustrations and false starts”.”
‘Serious concerns’ about DWP’s use of AI to read correspondence from benefit claimants - The Guardian, Robert Booth, January 27, 2025
The UK’s Department for Work and Pensions (DWP) is facing criticism over an AI system, known as “white mail,” that it uses to process the 25,000 letters and emails it receives daily, with the aim of prioritizing vulnerable claimants more efficiently. Previously, human review took weeks, but the AI completes the task in a day. However, concerns have been raised about transparency and data privacy, as claimants are not informed of its use, despite the AI handling sensitive personal information such as medical and financial details.
GOVERNMENT AGENTS - Chief of Stuff LinkedIn Blog, Mitchell Weiss, January 24, 2025
In his essay, Harvard Business School professor and former City of Boston Chief of Staff Mitchell Weiss explores how AI agents can transform government operations by making public sector workflows more efficient. Rather than replacing human workers, these AI agents—each designed for specific tasks—could streamline processes like data collection, prototyping, and user testing, allowing government employees to focus on strategic decision-making. Weiss envisions a future where teams of AI "agents" collaborate to enhance government functions, from processing applications to improving service delivery. By integrating AI tools thoughtfully, he argues, governments can significantly boost productivity while maintaining human oversight and judgment. For more on AI Agents in Government, see last month’s essay by Tiago Peixoto of the World Bank.
Introducing ChatGPT Gov - OpenAI, January 28, 2025
OpenAI has announced a new product, ChatGPT Gov, which the company describes as “a new tailored version of ChatGPT designed to provide U.S. government agencies with an additional way to access OpenAI’s frontier models.” Rather than introducing new functionalities, ChatGPT Gov focuses on enhanced security measures, such as allowing agencies to host the tool within their own secure hosting environments. The product launch, which some see as a response to the big splash made by DeepSeek’s launch last week, signals OpenAI’s strategic push to promote the use of its tools by the U.S. government. OpenAI also wondered aloud (Mashable) whether DeepSeek had stolen data from it, a suggestion many greeted as ironic in light of the lawsuits accusing OpenAI of ripping off copyrighted data.
Measuring and Mitigating Racial Disparities in Tax Audits - The Quarterly Journal of Economics, Hadi Elzayn et al., February 2025
The study examines how audit selection algorithms contribute to racial disparities in IRS tax audits, particularly for Black taxpayers claiming the Earned Income Tax Credit (EITC). Abstract: “Tax authorities around the world rely on audits to detect underreported tax liabilities and to verify that taxpayers qualify for the benefits they claim. We study differences in Internal Revenue Service audit rates between Black and non-Black taxpayers. Because neither we nor the IRS observe taxpayer race, we propose and use a novel partial identification strategy to estimate these differences. Despite race-blind audit selection, we find that Black taxpayers are audited at 2.9 to 4.7 times the rate of non-Black taxpayers. An important driver of the disparity is differing audit rates by race among taxpayers claiming the Earned Income Tax Credit (EITC).”
Governing AI
Copyright Office Releases Part 2 of Artificial Intelligence Report, U.S. Copyright Office, January 2025
In Part 2 of its copyright guidance, the US Copyright Office “confirms that the use of AI to assist in the process of creation or the inclusion of AI-generated material in a larger human-generated work does not bar copyrightability. It also finds that the case has not been made for changes to existing law to provide additional protection for AI-generated outputs.”
The Ezra Klein Show: MAGA’s Big Tech Divide - New York Times Opinion, Jan. 28, 2025
James Pogue and Ezra Klein explore the New Right's complex and contradictory relationship with technology. While the movement's intellectuals warn about tech's degrading effects on society and human potential, they've built their influence through social media and are now embracing figures like Elon Musk. The conversation untangles how a political movement that quotes the Unabomber's manifesto reconciles its tech skepticism with backing the world's most prominent tech leaders, and what this means for the future of American politics.
AI and Lawmaking
Artificial Intelligence Guidance for Members - Parliamentary Digital Service, January 2025
The UK Parliamentary Digital Service issued guidance for Members of Parliament endorsing the responsible use of AI technologies in their work. The guidelines explain what generative AI is and what to use it for, and outline three major principles:
“1. Keep a human in the loop: Whenever using content created by AI, it is important to check that output closely for accuracy and to be certain that you are happy for it to be used.
2. Be aware of what you share: What you enter into AI tools will be stored on a server that is not managed by Parliament…
3. Use it if you wish: Finding and using an AI tool is the best way to find out whether AI can support your work.”
AI and Public Engagement
The Case for Local and Regional Public Engagement in Governing Artificial Intelligence - Medium, Stefaan Verhulst and Claudia Chwalisz, January 20, 2025
“As the Paris AI Action Summit approaches, the world’s attention will once again turn to the urgent questions surrounding how we govern artificial intelligence responsibly. Discussions will inevitably include calls for global coordination and participation, exemplified by several proposals for a Global Citizens’ Assembly on AI. While such initiatives aim to foster inclusivity, the reality is that meaningful deliberation and actionable outcomes often emerge most effectively at the local and regional levels. Building on earlier reflections in ‘AI Globalism and AI Localism,’ we argue that to govern AI for public benefit, we must prioritize building public engagement capacity closer to the communities where AI systems are deployed. Localized engagement not only ensures relevance to specific cultural, social, and economic contexts but also equips communities with the agency to shape both policy and product development in ways that reflect their needs and values.”
AI and Problem Solving
AI’s energy obsession just got a reality check - MIT Technology Review, James O’Donnell, January 28, 2025
Chinese AI startup DeepSeek has developed a chatbot at a fraction of the cost and energy consumption of its industry-giant competitors. This achievement challenges the prevailing notion that significant AI advancements require massive investments in energy-intensive data centers. DeepSeek's approach suggests that efficient AI development is possible with less powerful hardware, potentially reducing the environmental impact of AI technologies. However, experts caution that as AI becomes more accessible, overall energy demand may still increase, underscoring the need for sustainable practices in AI development. See also last week’s story about Stargate.
The Impact of 25% Tariffs on Canadian GDP - The Lens, Stephanie Kelton, January 27, 2025
Economist Stephanie Kelton used the AI language model DeepSeek as a sophisticated simulation partner for economic policy analysis. Rather than providing a simple prediction, DeepSeek reasoned step by step, constructing a formal economic model, running simulations with various parameters, and performing sensitivity analysis. The AI completed in 12 seconds what would typically take policymakers weeks, demonstrating its potential as a powerful tool for rapid policy prototyping and complex economic modeling.
AI and Public Safety
Ready for Wildfire: Using GenAI as a "Practice Partner" for Future-Ready Governments - Reboot Democracy, Michael Baskin, January 29, 2025
AI can help cities prepare for crises, not just plan for them. “To be ready for emergent futures, organizations need to shift from having planned to being prepared. Practice closes the readiness gap. Organizations and leaders that re-imagine GenAI as a ‘practice partner’ can build adaptive, resilient organizations that are ready for what’s coming. As a ‘practice partner,’ GenAI can run live open-ended scenario exercises for city governments with low cost, low barrier to entry, and high effectiveness.”