From Interim to Institution: New Jersey’s Three-Pillar Strategy for Responsible AI
In July, Code for America named New Jersey one of only three states with “advanced” AI readiness. That honor recognizes how we have combined policy, access, and training into a coherent strategy for using AI responsibly in government.
We were the first state to issue an AI use policy back in 2023. Importantly, that guidance was always meant to be interim. Its purpose was to encourage responsible experimentation at a moment when generative AI tools were brand new and untested in government.
We knew staff needed both permission and guardrails to start exploring how these technologies might make their work faster, clearer, or more accessible to the public.
Two years later, more than 15,000 state employees—over one in five—are using generative AI. They’ve logged hundreds of thousands of prompts in the NJ AI Assistant, and they’ve completed training that equips them to use the technology ethically. With this scale of adoption, it was time to update our policy.
This month, New Jersey’s Chief Technology Officer, working with the State’s Office of Innovation, issued version 2 of the State’s AI Policy.
What’s New in the 2025 Guidance
The revised guidance moves from a spirit of exploration to one of institutionalization, shifting from “try it out” to “scale it safely.”
- Training is now required. “Before accessing or using generative AI in their official capacity, all state employees should take the ‘Responsible AI for Public Professionals’ course available in the New Jersey Civil Service Commission Learning Management System.” In New Jersey, we designed our own training and shared it freely with other states through the InnovateUS initiative.
- High-risk uses require clearance. “The use of resident-facing or decisional generative AI systems must be cleared by the State Chief Technology Officer or their delegate and registered with the NJ Office of Information Technology… When resident-facing or decisional generative AI systems are used, disclosure of generative AI use must be displayed prominently to the user.” In other words, public professionals should use AI for their own productivity, and they are; heightened scrutiny applies to uses that directly affect residents.
- Secure environments for sensitive data. “Sensitive Personally Identifiable Information… may only be used under the following conditions: the tool used is a State-Approved AI Tool such as the NJ AI Assistant… [and] the tool use is approved by your Agency Chief Information Officer.” This is a major shift from the earlier blanket ban.
- Human review remains non-negotiable. “Human review of AI content should cover the following elements: accuracy; gender, racial, and other types of bias; completeness; accessibility; and style.”
Together, these updates show a deliberate shift from encouraging experimentation to building the systems and safeguards needed for AI at scale.
Why the Policy Still Matters Most
Access and training are critical, but policy remains the foundation. Without clear guardrails, public servants might not feel safe experimenting—or worse, they might adopt AI in ways that undermine trust.
The policy provides both permission and protection: it tells employees that they can use AI to draft, translate, summarize, or analyze, but it also tells them how to do it safely. It ensures that even as we scale access and build skills, AI is used for purposes that matter—helping families access food benefits, simplifying forms, improving call center response, or analyzing citizen feedback.
Lessons for Other States and Cities
I think we got this sequencing right and recommend it to others considering how to incorporate AI responsibly:
- Start with a policy that encourages experimentation in the public interest. Don’t wait for perfection—set interim guardrails so employees can begin learning. Boston did this first, and we followed its lead, pairing permission to use AI with a clear injunction to use it wisely to improve governance.
- Invest in access and training together. Giving people tools without guidance is risky; training without access is irrelevant. That’s why we created the InnovateUS training that shows how to use AI for public purposes in governing—and gave staff access to the latest models so they could test what AI can and cannot do while protecting privacy.
- Update the policy as adoption grows. Governance must evolve alongside usage, becoming more structured as risks and opportunities become clearer. Every agency has an AI lead, and we meet regularly. This forum, together with our live trainings, provides additional opportunities to surface and answer questions. We answer them in an FAQ that we can update even faster than the policy.
Beyond Policy: The Next Challenge
New Jersey’s updated guidance is less about rules on paper and more about creating the conditions for responsible, meaningful use of AI in government. But policy is not the end point.
As Chief AI Strategist for New Jersey, I believe our next challenge is to accelerate how we use these tools critically in our day-to-day work to improve service delivery. The policy is a guardrail—but the purpose is impact.
By using AI to match records across agencies, the State’s Office of Innovation enrolled more than 693,000 eligible children in the summer food program, reaching tens of thousands more families than traditional methods.
We need to do much more of that—leveraging AI not just to make government work faster, but to make it work better for the people who rely on it most.
The Bottom Line
Policy, access, and training—together—make it possible for AI to improve the daily work of public service, while keeping equity, accountability, and public trust at the center. The job now is to ensure that responsible use translates into real, measurable improvements in people’s lives.