Last week, RAND published the takeaways from its May convening on AI-Enabled Policymaking. The event brought together representatives from research institutions, government, philanthropy, and the tech industry for a series of structured conversations exploring how artificial intelligence can enable more effective policymaking. As one of the workshop participants representing The GovLab, I wanted to summarize the findings shared in RAND’s report and then offer some thoughts about how they inform future work in our shared effort to support AI-enabled policymaking in practice.
What We Covered
Over four sessions, participants discussed:
- How the concept of AI-enabled policymaking should be defined
- Technical capabilities and requirements for AI systems
- Considerations for implementing AI systems
- The technical roadmap and future priorities for AI-enabled policymaking
The discussions surfaced several key opportunities where AI tools can enhance policymaking by:
- Automating routine or formulaic tasks, such as drafting memos or legislative text, allowing staff to use time more efficiently.
- Enabling new and better forms of engagement with constituents.
- Democratizing access to high-quality information by making knowledge traditionally limited to well-funded institutions more widely available.
- Improving knowledge management by helping to track institutional and personal wisdom and know-how.
- Facilitating deliberation and negotiation to help reach areas of consensus.
- Summarizing meetings and hearings, simplifying complex or legalistic language, and translating information between different formats.
Participants also highlighted significant challenges:
- Limitations in current AI systems’ ability to effectively reason, prioritize, detect weak evidence, anticipate long-term consequences, and capture tacit knowledge – skills that policymakers commonly rely upon.
- Policymakers may not trust AI systems due to their potential to produce errors and hallucinations, concerns about bias, and other limitations in current systems.
- Institutional barriers, including ambiguous AI acceptable-use policies, a lack of clarity about tool capabilities, and challenges integrating AI tools into existing workflows, may slow adoption.
- The potential erosion of human agency, critical thinking, and judgment skills if policymakers become more dependent on AI tools.
- Information overload as policymakers are flooded with information of varying quality, making it harder to find signal in the noise.
- Biases in how models are prompted and trained.
- The need for policymakers to develop skills that will allow them to effectively use AI tools.
The discussions emphasized that AI can support more efficient, evidence-based, and participatory decisionmaking throughout each stage of the policymaking process.
Where We Go Next
RAND’s report concludes by highlighting “the need for strategic adoption and development of AI tools to enhance policymaking processes and outcomes.”
So, where’s the best place to get started?
While it is important to understand the capabilities and limitations of AI platforms, the greater challenge lies in equipping public institutions with the leadership, skills, and governance frameworks necessary to translate those tools into meaningful, trusted, and democratic policymaking.
Building on these workshop findings, one priority for the AI-enabled policymaking community, I believe, should be research investigating how AI tools are already being used to support policymaking and improve service delivery. For decades, innovative public institutions around the world have experimented with using new technologies to improve the quality and legitimacy of their decisionmaking processes, and powerful new large language models are supercharging these efforts.
Based on my research at the GovLab, for example:
- In 2024, New Jersey’s State AI Task Force developed a custom AI-enabled research toolkit and used it, alongside a digital engagement with thousands of residents, to design AI policy recommendations.
- Virginia’s Governor recently announced plans to use AI to scan all of the state’s regulations and identify those that are duplicative or contradictory.
- In Brazil, the federal Senate uses an online platform to engage thousands of citizens in developing legislative proposals and is now exploring how AI can further improve participation.
- Bowling Green, Kentucky, used AI to help facilitate an online conversation with thousands of residents to co-create a vision for the town’s future as part of the community’s long-term planning process.
As RAND’s report rightly points out, there are many difficult and complex institutional barriers to effectively implementing AI tools in the policymaking context. Studying initiatives like these would help us better understand how policymakers have overcome these barriers (and other challenges surfaced in the workshop) in practice.
While a large body of research aims to answer the question of how to effectively govern AI, efforts to evaluate real-world case studies of uses of AI for governance are few and far between.
Public institutions need stronger capacity to effectively use AI—from technical infrastructure and leadership to governance frameworks and workforce skills. The most powerful tools won't create impact if public servants cannot use them properly.
Achieving AI readiness requires sustained transformation that many American institutions are just starting to tackle. A recent Code for America analysis found that only three states demonstrated advanced AI capabilities.
On the federal side, agencies that were already chronically under-resourced are in crisis as the White House continues to cut funding and staffing, even as major AI firms race to sell their tools to government.
The challenge isn't just technical. It's building the institutional capacity needed for meaningful, trusted, and democratic policymaking.
Read the full report here.