In “AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance,” researchers at Harvard and UCL examined jurisdictions with more mature regulations on government AI use – Brazil, Singapore, and Canada – and examples of implemented checklists – the Canadian Directive on Automated Decision-Making and the World Economic Forum's “AI Procurement in a Box” – to suggest how governments should design AI procurement checklists that avoid AI bias while accelerating AI adoption.
“On the one hand there are hard-to-address pitfalls associated with AI-based tools, including concerns about bias towards marginalized communities, safety, and gameability. On the other, there is pressure not to make it too difficult to adopt AI, especially in the public sector which typically has fewer resources than the private sector – conserving scarce government resources is often the draw of using AI-based tools in the first place. These tensions create a real risk that procedures built to ensure marginalized groups are not hurt by government use of AI will, in practice, be performative and ineffective.”
They make three major observations:
- Need for expertise: The implementation of AI systems in government requires specialized knowledge that current generalist civil servants may not possess. AI procurement checklists are most effective when used by experts, serving as reminders so that critical aspects are not overlooked. There is a noted shortage of such experts, which poses a significant challenge to the ethical and effective deployment of AI technologies.
- Closing loopholes: Existing gaps in the AI procurement process allow certain AI systems to bypass full scrutiny. “When defining ‘AI,’ it is very difficult to sweep in all systems that need additional oversight—we need to close the loopholes.”
- Transparency is paramount: “No expert audit will be perfect. Public transparency is a necessary component for deploying effective and ethical AI systems.” Transparency is essential for maintaining public trust and ensuring the responsible use of AI. The paper argues for more public disclosure about the AI systems being used, including details about their design, implementation, and ongoing monitoring. This openness allows for better scrutiny and helps identify potential issues that may not be evident to experts alone.
Read the full report: AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance
Report by Tom Zick, Mason Kortz, David Eaves, and Finale Doshi-Velez; Harvard University and University College London