Imagine a government that actively listens to its citizens, quickly responds to their needs, and uses cutting-edge technology to improve their lives. This is the promise of artificial intelligence (AI) in public service.
However, the reality is that many people have had frustrating experiences trying to communicate with government agencies. Whether it's navigating a confusing website, waiting on hold for hours, or feeling like their feedback goes into a black hole, citizens often struggle to make their voices heard.
AI has the potential to change that. As I explained in testimony before the Senate Committee on Homeland Security and Governmental Affairs, AI could help government agencies become more responsive, efficient, and user-friendly by automating routine tasks, analyzing large volumes of data, and facilitating more personalized interactions.
But as the use of AI in government expands, it also raises important questions. How can agencies ensure that AI systems are transparent, accountable, and aligned with the public interest? What steps should they take to mitigate potential risks and unintended consequences? And how can they involve citizens in the design and oversight of these powerful technologies?
The White House Office of Management and Budget (OMB) recently released final guidance to federal agencies aimed at answering these questions. The memorandum (mostly unchanged from the November draft that went through public comment) lays out requirements for the responsible development and use of AI in government, with a particular focus on building AI skills and expertise within the federal workforce.
While the guidance is a strong step in the right direction and usefully takes a risk-based approach to internal AI governance, it misses a key opportunity to leverage AI itself as a tool for better public engagement. As we'll see, some agencies are already pioneering innovative approaches to using AI to listen to and learn from citizens. The OMB memo could have done more to highlight and encourage these efforts and to instruct agencies on how to use AI to go beyond the status quo.
Let's take a closer look at the key provisions of the OMB guidance and consider what it will take to realize the full potential of AI as a force for responsive, accountable, and human-centered government.
New Roles and Responsibilities
The OMB memorandum lays out the requirements federal agencies must follow regarding their own use of AI. Within 60 days, each agency must designate a Chief AI Officer (CAIO) to coordinate AI activities, promote innovation, and manage risks. The CAIO's responsibilities include developing compliance plans, advising on resource needs, and ensuring the agency doesn't use AI systems that fail to meet the new standards.
Agencies must also convene an AI Governance Board within 60 days to oversee AI use, and they must submit compliance plans to OMB every two years. Each year, agencies must inventory their AI systems and explain how they're being used.
As Mozilla points out: "The policy is rooted in a simple observation: not all applications of AI are equally risky or equally beneficial." It focuses the work of the CAIO on those high-risk uses while allowing low-risk uses to move forward unimpeded.
Workforce Development
One of the memo's strongest points is its emphasis on building AI skills and expertise within the federal workforce. Agencies must assess their current AI capabilities, project future needs, and make plans to recruit, hire, train, and retain both technical and non-technical AI talent.
The memo recommends a range of strategies, including designating AI Talent Leads, offering reskilling opportunities, and adopting best practices from the Office of Personnel Management. For AI systems that could impact rights or safety, agencies must also ensure role-specific training so that human operators can effectively oversee the technology.
Public Engagement
The final guidance does require agencies to solicit public input on "rights-impacting" AI systems and use that feedback to inform decisions about whether a particular application should move forward. If the feedback suggests an AI system would do more harm than good, agencies are encouraged not to use it.
However, the memo is light on specifics when it comes to how agencies should engage the public. It lists old-fashioned mechanisms like usability testing, public hearings, and outreach to federal employee unions, but is silent on the ways AI itself could facilitate better communication between agencies and the people they serve.
Missed Opportunities
The Department of Veterans Affairs, for example, is already using AI to analyze online feedback from veterans, making it easier to understand and act on. The result is improved services for the men and women who have served our country.
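To make that concrete, here is a minimal sketch of the general technique: clustering free-text feedback into recurring themes so staff can spot and prioritize common problems. This is a hypothetical illustration, not the VA's actual system; the sample comments, the theme count, and the TF-IDF-plus-k-means approach are all assumptions chosen for brevity.

```python
# Hypothetical sketch: clustering free-text citizen feedback into themes
# so staff can spot recurring issues. This is not the VA's actual
# pipeline -- just an illustration of one standard technique
# (TF-IDF vectors grouped with k-means).

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented comments standing in for real feedback.
comments = [
    "The claims portal timed out before I could upload my documents.",
    "Waited 45 minutes on hold just to reschedule an appointment.",
    "Uploading my discharge papers failed three times in a row.",
    "Phone support dropped my call twice while rescheduling.",
    "The upload page keeps rejecting PDF files over 5 MB.",
    "No one answers the phone line for appointment changes.",
]

# Convert each comment into a TF-IDF vector, dropping common English stopwords.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(comments)

# Group the comments into a small number of themes.
n_themes = 2
model = KMeans(n_clusters=n_themes, n_init=10, random_state=0)
labels = model.fit_predict(X)

# Print the most characteristic terms for each theme so a human reviewer
# can name it (e.g., "document upload failures", "phone wait times").
terms = vectorizer.get_feature_names_out()
for theme in range(n_themes):
    top = model.cluster_centers_[theme].argsort()[::-1][:4]
    size = int((labels == theme).sum())
    print(f"Theme {theme} ({size} comments): {', '.join(terms[i] for i in top)}")
```

In practice, an agency would run this kind of analysis over thousands of comments and have analysts label the resulting themes; the point is that a modest amount of standard tooling can turn a feedback "black hole" into a ranked list of fixable problems.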
Yet the OMB guidance does little to encourage other agencies to follow the VA's lead or explain how to do so.
Instead of seeing public engagement as integral to successful AI adoption, the memo treats it as a box to be checked. The new CAIOs are charged with promoting innovation and mitigating risk, but public engagement is not elevated to a core responsibility alongside them.
The Path Forward
Make no mistake: the OMB guidance is a step in the right direction. By emphasizing workforce development, risk management, and some public input, it lays the groundwork for the responsible use of AI to modernize government operations.
But realizing the full potential of AI will require a more fundamental shift. Instead of treating public engagement as an afterthought, agencies need to recognize it as an invaluable tool for designing AI systems that truly serve the public interest, and to use AI itself to make that engagement efficient and effective.
With a commitment to exploring how AI can strengthen public engagement, agencies could usher in a new era of responsive, accountable, and human-centered government, with AI use aligned to the public interest. The OMB memo is a strong start, but there's still a long way to go.