This week, Virginia Governor Glenn Youngkin issued an executive order directing agencies to initiate an AI-powered regulatory review process. Under the pilot program, all regulations and guidance documents statewide will be scanned using a generative AI (GenAI) tool to identify inconsistencies, redundancies, and areas where language could be made simpler.
While much of the excitement around AI’s potential to improve governance has focused on service delivery, Virginia’s plans show how the technology can also improve policymaking by enabling states to more effectively and efficiently review and revise laws and policies.
But we can think even bigger about AI's potential to improve policymaking. In particular, the rise of new AI-powered research tools is creating opportunities to conduct policy research with greater speed and breadth.
Whether policymakers are defining and prioritizing problems, developing solutions, determining how to implement those solutions, or evaluating the resulting policies and programs, they must gather, synthesize, and analyze both research evidence and insights from affected groups. This research forms the foundation of evidence-based policymaking, helping officials make informed decisions that serve the public interest.
Artificial intelligence can revolutionize policymaking by enabling policy researchers to cast a wider net for evidence, accelerate the synthesis of complex information, and incorporate community perspectives more effectively than traditional methods allow. However, this transformation also presents significant challenges, from bias and transparency concerns to the risk of over-reliance on algorithmic outputs. Understanding both the promise and the pitfalls of AI-enabled research tools is essential for implementing them effectively and responsibly.
Opportunities
AI-enabled research tools can help policymakers gather more evidence from a wider range of sources, accelerate the policymaking process, and engage communities more deeply throughout the policy development process.
Expanding the Evidence Base
First, AI can enable policymakers to cast a wider net, incorporating greater volumes of evidence from a broader range of sources into their decision-making than traditional bench research would allow. AI-powered web crawlers and web search agents can rapidly scan thousands of web pages, including academic journals, government reports, news media, and social platforms, for information relevant to a prompt. Large language models can then summarize, synthesize, and draw insights from these extensive searches, making large volumes of knowledge accessible to and understandable by policymakers.
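To make the mechanics concrete, the sketch below shows, in Python, the basic retrieve-and-summarize loop such tools automate. It is illustrative only: fetch_page uses the standard library, and call_llm is a hypothetical stand-in for whatever generative-model client an agency actually uses, not any specific product’s API.

```python
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text nodes from an HTML page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def fetch_page(url: str) -> str:
    """Download a page and reduce it to plain text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a generative-model client.
    Swap in a real provider's API; echoing keeps the sketch runnable."""
    return f"[model output for prompt of {len(prompt)} chars]"

def scan_sources(question: str, urls: list[str]) -> str:
    """Cast a wide net: pull every source, then synthesize in one pass."""
    documents = [fetch_page(u) for u in urls]
    prompt = (f"Question: {question}\n\nSummarize what these sources say, "
              "noting where they disagree:\n" + "\n---\n".join(documents))
    return call_llm(prompt)
```

Production tools layer ranking, deduplication, and citation tracking on top of this loop, but the division of labor is the same: broad retrieval first, model-driven synthesis second.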
Recognizing the opportunity, developers have built a range of tools that use AI to assist with searching for and synthesizing information. These include AI-powered search engines like Perplexity AI and tools to sift through scientific literature, such as Consensus and Scite. Policy Synth is an open-source AI research toolkit designed specifically for policymakers. For resource-constrained government agencies facing increasingly complex policy challenges, AI research tools offer an opportunity to develop a more comprehensive understanding of complex issues while maintaining evidence-based rigor and accessing previously untapped sources of relevant information.
Accelerating the Policy Research Process
Second, AI can make time-intensive research tasks more efficient, allowing policies to be developed more rapidly.
AI research tools can streamline multiple stages of the policy research workflow, from initial literature searches to report writing. For example, when users enter a question into the AI research tool Elicit, the platform returns a list of papers along with summaries. Users can then ask follow-up questions to dig deeper, making the process of identifying relevant evidence and sources more efficient. Large language models such as ChatGPT, Claude, Copilot, and Gemini can also make writing up research findings more efficient, for example by quickly generating first drafts of reports or memos, providing feedback and editing assistance, and helping writers anticipate criticism or questions from others.
New tools are beginning to combine automated search and report writing into a single workflow. For example, OpenAI’s “deep research” and Google’s “AI co-scientist” combine AI agents for search, evidence synthesis, and data analysis with large language models to draft detailed reports, complete with references, in a fraction of the time it would take a researcher to produce the same output. These platforms go beyond basic search by providing contextual information, establishing links between different sources, and producing thorough analyses.
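Schematically, these combined workflows chain the same steps end to end: plan sub-queries, retrieve sources, synthesize notes, and draft a referenced report. The outline below is a hypothetical sketch of that loop, not a description of OpenAI’s or Google’s actual systems; call_llm and search are placeholder callables.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical generative-model call; replace with a real client."""
    return f"[model output for: {prompt[:60]}...]"

def deep_research_pipeline(question: str, search) -> str:
    """Schematic outline of a combined research workflow. `search` is any
    callable mapping a query string to a list of (title, url, text) tuples."""
    # 1. Plan: ask the model to decompose the question into sub-queries.
    queries = call_llm(f"List search queries for: {question}").splitlines()

    # 2. Retrieve: gather candidate sources for each sub-query.
    sources = [hit for q in queries for hit in search(q)]

    # 3. Synthesize: one note per source, keeping the citation attached.
    notes = [(title, url, call_llm(f"Summarize for '{question}': {text}"))
             for title, url, text in sources]

    # 4. Draft: write the report from the notes, appending references.
    body = call_llm("Draft a policy report from these notes:\n"
                    + "\n".join(note for _, _, note in notes))
    refs = "\n".join(f"- {title}: {url}" for title, url, _ in notes)
    return f"{body}\n\nReferences:\n{refs}"
```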
For policymakers facing tight deadlines and limited research capacity, these AI-enhanced synthesis capabilities offer a way to rapidly develop evidence-based policy proposals while maintaining analytical rigor. For example, New Jersey's AI Task Force used the AI research toolkit Policy Synth to synthesize evidence from thousands of online sources and transform findings into implementable policy recommendations in just eight weeks with only one part-time policy professional supporting the working group. New initiatives like the United Kingdom's £9.2 million investment in AI infrastructure for evidence synthesis recognize the potential of AI tools to dramatically improve the speed, quality, and usability of evidence synthesis for policymakers.
Incorporating Community Expertise
Finally, policymakers can combine AI-powered research tools with platforms that facilitate citizen input, enriching the research stage with broader perspectives while also enhancing the legitimacy of the policymaking process. A growing number of digital platforms, often aided by AI, facilitate online discussion, debate, and dialogue. Just a few examples include platforms for online deliberation, such as GoVocal, Pol.is, or Remesh; opinion prioritization tools such as All Our Ideas; and transcription and sensemaking tools that support rich in-person deliberations, such as Cortico, Dembrane, or DeliberAIde.
Public institutions are already using AI tools to incorporate community expertise into the lawmaking and policymaking process. Bowling Green, Kentucky, facilitated an online conversation with nearly 8,000 residents to co-create a vision for the town’s future as part of the community’s long-term planning process; AI helped make sense of the large volume of ideas submitted and identify areas and levels of agreement. New Jersey’s State AI Task Force used the pairwise voting platform All Our Ideas to engage 2,200 workers in prioritizing among the problems and opportunities AI presents for the state’s workforce and economy. California is deploying an AI-powered deliberation platform to gather ideas from residents about how to support communities’ recovery from the 2025 Los Angeles wildfires. And thousands of Brazilians already help shape legislative proposals through the Brazilian Senate’s online engagement platform, with the Senate now exploring how AI could make their participation more impactful. These examples demonstrate how AI can process and analyze large volumes of citizen input, identify patterns in public opinion, and extract key themes from community feedback.
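The sensemaking step in these engagements typically comes down to grouping thousands of free-text comments into a handful of themes that a human can review. Below is a minimal sketch of that step, using scikit-learn’s TF-IDF vectorizer and k-means clustering as a stand-in for the richer language models production platforms employ.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def theme_comments(comments: list[str], n_themes: int = 5) -> dict[int, list[str]]:
    """Group free-text comments into rough themes by vocabulary similarity."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
    labels = KMeans(n_clusters=n_themes, n_init=10).fit_predict(vectors)
    themes: dict[int, list[str]] = {}
    for comment, label in zip(comments, labels):
        themes.setdefault(int(label), []).append(comment)
    return themes
```

A human facilitator would still read and label each cluster: the clustering narrows the reading load; it does not replace judgment.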
For policymakers, combining knowledge synthesized from published research with findings from public engagement presents an opportunity to build a more comprehensive understanding of problems and solutions. Policymakers often face a trade-off between the amount of evidence they can review and the depth of community input they can gather. AI can ease that trade-off by making public input more actionable and useful to policymakers while also making the process of gathering and processing it more efficient. By combining AI tools that synthesize collective knowledge with platforms that gather insights from communities, policymakers can develop a more nuanced understanding of complex issues, one that is not only more evidence-based but also more responsive to public needs and concerns.
Challenges
At the same time, for AI-powered research tools to be used effectively and responsibly, policymakers must overcome challenges, including the risk of bias, a lack of transparency and accountability, deference to AI over human judgment, and the limited perspectives that AI is able to represent.
Overcoming Bias
First, there is a risk that AI-enabled research tools can deliver biased results.
One chief concern around bias in AI systems is “input bias,” a problem which occurs when the materials fed to AI tools do not accurately represent what the system purports to model. For example, facial recognition systems have been shown to be worse at recognizing the faces of darker-skinned people and women, due to training datasets that overrepresent lighter-skinned individuals and men.
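Input bias of this kind can often be caught with a simple representation audit before a dataset is used. A minimal sketch in Python, with illustrative numbers rather than real data:

```python
from collections import Counter

def representation_gap(records: list[dict], attribute: str,
                       benchmark: dict[str, float]) -> dict[str, float]:
    """Compare each group's share of a dataset against a benchmark share
    (e.g., census figures). Positive values mean over-representation."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - share
            for group, share in benchmark.items()}

# Illustrative numbers only: a skewed face dataset audited against even shares.
dataset = [{"skin_tone": "lighter"}] * 80 + [{"skin_tone": "darker"}] * 20
gaps = representation_gap(dataset, "skin_tone", {"lighter": 0.5, "darker": 0.5})
# gaps is roughly {'lighter': +0.3, 'darker': -0.3}: darker-skinned faces
# are under-represented by 30 percentage points relative to the benchmark.
```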
At the same time, we must also be cautious of “output bias” – biases present in the responses and recommendations that AI systems generate, even when trained on seemingly balanced datasets. Consider the ongoing fiasco with Grok, the chatbot developed by Elon Musk’s xAI. After a system update in July 2025, Grok began posting antisemitic conspiracy theories and making hateful comments about Jewish individuals on the social media site X. According to the company, an update to the bot’s internal instructions made Grok susceptible to extremist content in existing X user posts, causing it to "ignore its core values in certain circumstances in order to make the response engaging to the user." The incident demonstrates how an AI system’s instructions can amplify biases and harmful content in its inputs. (Just days after xAI patched the problem, the company announced a new “Grok for Government” initiative to provide federal, state, and local agencies access to the AI model, including a $200 million deal with the Pentagon.)
While facial recognition tools failing to recognize Black faces and chatbots spewing vitriolic slop are obvious examples of the bias problem, the implications of bias for AI-powered policymaking tools are potentially more subtle and far-reaching. When policymakers rely on AI research tools to analyze complex social issues or evaluate policy options, both input and output biases could skew the information that shapes critical decisions affecting millions of people. An AI tool trained on historical policy documents might perpetuate past discrimination by recommending approaches that have previously excluded marginalized communities, while output bias could lead research tools to present findings in ways that reinforce existing power structures while minimizing the concerns of underrepresented groups. Unlike a chatbot that generates obviously problematic content, biased policy research tools might produce polished, authoritative-sounding reports that embed harmful assumptions or overlook crucial perspectives, making their bias harder to detect. These challenges underscore the critical need for public institutions to implement robust ethical frameworks, accountability mechanisms, and bias mitigation strategies when deploying AI-enabled research tools in policy development.
Ensuring Transparency, Explainability, and Accountability
Second, there is a related risk that the opaque nature of AI tools can create transparency and accountability challenges for policymakers who must justify their decisions to the public. While users can observe the inputs and outputs of AI systems, the mechanisms these systems use to process information and make decisions cannot always be understood or easily explained – a challenge known as the “black box” problem. This lack of transparency has real-world consequences, as seen in automated decision-making systems in Brazil, India, and the Netherlands that have incorrectly denied welfare benefits to eligible recipients – leaving people in need without a clear understanding of why they were denied benefits to which they are entitled. When the reasoning behind AI recommendations and decisions cannot be explained, affected individuals cannot effectively understand or appeal incorrect, harmful, or biased outcomes.
For AI-enabled policy research tools, this opacity could prove especially problematic: policymakers may struggle to explain to constituents how they arrived at particular policy recommendations or to justify why certain evidence was prioritized over other considerations. The democratic process depends on public accountability and the ability to scrutinize decision-making, making it essential for public institutions to prioritize explainable AI systems and develop frameworks that clearly document how artificial intelligence informs policy development.
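What that documentation might look like in practice is an open design question. One minimal sketch, assuming an agency keeps a structured log entry for every AI-assisted research step; the field names are purely illustrative:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIResearchRecord:
    """One auditable entry recording how an AI tool informed a decision:
    what was asked, which sources the tool drew on, what it returned,
    and which accountable official reviewed the output."""
    question: str
    tool: str                # name and version of the AI system used
    sources: list[str]       # URLs or citations the tool relied on
    output_summary: str      # what the tool concluded
    human_reviewer: str      # the official who vetted the output
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
```

A log like this lets a constituent, auditor, or court trace a recommendation back to the evidence and the person who signed off on it.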
Deferring to AI Over Human Judgment
As AI research tools become increasingly sophisticated, there's a risk that policymakers may inappropriately defer to algorithmic outputs rather than exercising critical human judgment.
Under the current administration, U.S. federal agencies have rushed to adopt AI tools across diverse policy areas, including healthcare, immigration, and housing, often without adequate safeguards to prevent misuse. For example, the White House’s recent "Make America Healthy Again" (MAHA) report, spearheaded by Health and Human Services Secretary Robert F. Kennedy Jr., contained fabricated citations that experts believe were generated by artificial intelligence. Although these fundamental flaws undermined the report's credibility, the White House dismissed the fabricated citations as mere "formatting issues," illustrating how officials may downplay serious problems with AI-generated content when it supports their preferred policy positions. Complex policy decisions require human judgment, ethical consideration, and accountability that AI systems cannot provide, making it essential for institutions to position AI as an advisor rather than a decision-maker.
Echo Chambers and Limited Perspectives
Finally, AI-enabled research tools may inadvertently create echo chambers that privilege established viewpoints over innovative or alternative perspectives in policymaking. The search algorithms used to scan research databases typically favor content with high visibility, engagement metrics, or established credibility markers. Meanwhile, a growing number of websites are blocking web crawlers to prevent AI developers from using their data to train models. As a result, AI research tools may miss newer scholarship, alternative viewpoints, and knowledge locked behind paywalls or otherwise closed to crawlers. For policymakers seeking transformative approaches to complex challenges, this incomplete access to knowledge could impair innovation by narrowing the range of ideas and evidence considered. These echo chamber effects highlight the need for intentional strategies to diversify information sources and ensure that AI-enhanced research tools don't inadvertently constrain the breadth of policy thinking.
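One practical guardrail is to check how concentrated a tool’s citations are before trusting its synthesis. The sketch below computes a simple diversity score over the domains behind a set of results; what counts as too low a score would be the agency’s own judgment call:

```python
import math
from collections import Counter
from urllib.parse import urlparse

def source_diversity(urls: list[str]) -> float:
    """Normalized Shannon entropy of the domains behind a result set:
    0.0 means every citation comes from one site, 1.0 an even spread.
    A low score hints the tool may be echoing a narrow slice of the web."""
    domains = Counter(urlparse(u).netloc for u in urls)
    if len(domains) < 2:
        return 0.0
    total = sum(domains.values())
    entropy = -sum((n / total) * math.log2(n / total)
                   for n in domains.values())
    return entropy / math.log2(len(domains))
```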
Conclusion
AI-enabled research tools represent a transformative opportunity for evidence-based policymaking, offering unprecedented capabilities to expand the evidence base, accelerate research processes, and incorporate community perspectives at scale. Early efforts like Virginia's regulatory review initiative and New Jersey's AI Task Force demonstrate how these technologies can help resource-constrained governments tackle complex challenges with greater speed and comprehensiveness than traditional methods allow.
However, the promise of AI-powered policy research comes with significant risks, including bias, transparency challenges, over-reliance on algorithmic outputs, and echo chambers that may stifle policy innovation. The key to realizing AI's potential while mitigating these risks lies in ensuring that human experts remain firmly in control throughout the research and policymaking process. Only by keeping human expertise at the center can we harness these powerful tools to serve the public interest while preserving the democratic values of transparency, accountability, and inclusive governance.
Cover image by Scott Graham.