
AI for Governance: How Institutions Can Provide AI Access Safely, Affordably, and at Scale

When “Enterprise-Wide” Meets the Budget

The State of Massachusetts just announced that it will deploy a ChatGPT-powered AI Assistant across the executive branch to nearly 40,000 employees. The administration describes a “walled-off, secure environment” that protects state data and rolls out in phases, beginning with access for the Executive Office of Technology Services and Security.

“$7-$13 per user per month.”

Publicly reported procurement documents make clear what “enterprise-wide” means in budget terms: depending on scale, annual licensing costs range from roughly $1.56 million (10,000 seats) to $3.36 million (40,000 seats), or between $7 and $13 per user per month.
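Those reported figures are internally consistent. A quick back-of-the-envelope check (a sketch for readers following along, not part of the procurement documents):

```python
# Annual licensing cost divided by seats and by 12 months
# yields the per-user monthly rate reported in the documents.
def per_user_monthly(annual_cost: float, seats: int) -> float:
    return annual_cost / seats / 12

small_tier = per_user_monthly(1_560_000, 10_000)  # 10,000-seat tier
large_tier = per_user_monthly(3_360_000, 40_000)  # 40,000-seat tier
print(f"${large_tier:.0f}-${small_tier:.0f} per user per month")  # $7-$13
```

Note that the larger tier is cheaper per seat: volume discounts lower the unit price even as the total line item grows.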

That’s a large, recurring line item. What happens when budgets plummet? Or when ChatGPT gets worse and Claude gets better? And how do we ensure that the cost is leading to greater productivity, improved services, and better outcomes for residents?

This kind of enterprise-wide contract is not unusual. As jurisdictions seek to join the AI revolution, it’s one of the faster ways to ensure employees have access to the latest technology. 

New Jersey reports costs of roughly $1 per user per month, and Boston expects to spend under $10,000 in year one while it lets staff learn how to use the tools well.

At a recent InnovateUS workshop, Accessing AI Safely: Setting Up an AI Sandbox, colleagues from the New Jersey Innovation Authority (Naman Agrawal, Director of Engineering; Dave Cole, Chief Innovation Officer; Amani Farooque, Director of Product; and Ruthie Nachmany, Product Manager) and Santiago Garces, Chief Information Officer, City of Boston, described how they rolled out access government-wide: not with a per-seat enterprise contract, but with tools built inside existing cloud agreements and priced on actual usage, with guardrails (authentication, auditability, acceptable-use policy, and training) designed in from the beginning.


In the workshop (watch the recording) and a follow up interview with both teams, we asked five key questions:

  • What does AI use cost at scale?

  • Who decides who gets access?

  • How do we log usage?

  • What pricing model makes sense if usage varies widely across staff?

  • How do we pair access with training so it’s safe — and actually useful?

If you are interested in how to buy AI effectively, join us for our upcoming workshop and discussion series on Practical Strategies for Buying and Building Public AI, where we will explore together how to acquire AI that is affordable, accountable, transparent and effective.

A Different Model: Build Inside Your Own Cloud

Both New Jersey and Boston chose to build government-hosted AI tools inside their existing cloud environments rather than license a commercial assistant for every employee.

New Jersey built its AI Assistant (not called a sandbox) using tools already available within its government cloud services. The first version was developed in about a month. It was intentionally limited in features and launched alongside free, responsible AI training for the workforce. Today, 20,000 employees have used it, and 64,000 have access.

Boston is deploying what it calls its GenAI Sandbox, an open-source project built by AI for Impact Fellows and adapted for government use, on AWS Bedrock, leveraging its existing enterprise agreement.

In both cases:

  • Authentication runs through the government’s identity system.

  • Prompts and logs remain inside the agency’s cloud environment.

  • Guardrails are configured by the government, not the vendor.

  • Costs are largely usage-based, not per-seat.

This design choice drives everything that follows.

The question is not simply “buy versus build.” It is whether the government controls the infrastructure layer or rents it.

Question 1: Who Gets Access?

Boston was the first city, and New Jersey the first state, in the United States to publish a responsible AI use policy. Both AI assistant/sandbox deployments were thus paired with formal acceptable-use policies that pre-dated the tools, covering fact-checking, disclosure, and limits on sensitive data entry. With widespread adoption, both jurisdictions have since published updated policies reflecting the questions users actually have.

New Jersey:

Generally, all executive branch employees have access to the NJ AI Assistant, which was built for the state’s 64,000+ workforce. To date, 20,000 have used the tool. Access is paired with responsible AI training, strongly encouraged by the governor and integrated into the rollout.

Boston:

All 25,000 employees and contractors are eligible for access to the AI Launchpad and BidBot. Users gain access after completing a Basics of Responsible AI course through InnovateUS. About 200 trained users are currently active as Boston evaluates early usage before scaling further.

“They began not with a per-seat enterprise contract, but with tools built inside cloud agreements they already controlled.”

Lesson:

In both cases, New Jersey (where until recently I served as the Chief AI Strategist) and Boston use free training designed for public servants by public servants, rather than by a vendor, to ensure that employees learn how to use AI responsibly in government. Access is broad, but not unstructured: both governments pair access with training and acceptable-use policies. Access is not simply “turned on,” yet both are achieving scale at low cost.

Question 2: How Was It Procured?

New Jersey:

The New Jersey Office of Innovation built the AI Assistant using products already available in its cloud environment. The first version used Microsoft’s open-source chatbot application deployed through Azure OpenAI services. Following strong adoption, the state is rebuilding the product with more robust infrastructure, incorporating LibreChat and additional user research.

Boston:

Boston is using AWS through its existing enterprise agreement (adopted from a state NASPO contract). The system is built on components that incur costs only when used. No new large licensing contract was required.

Lesson:

Neither jurisdiction began with a new, enterprise-wide commercial license. They worked within existing cloud contracts and layered AI services into infrastructure they already controlled.

Question 3: What Does It Actually Cost?

Massachusetts’ enterprise licensing model prices access per seat, with annual costs in the millions depending on scale. New Jersey and Boston structured pricing differently.

“A minority of employees will use AI frequently; many will use it occasionally. A per-seat license charges the same for both.”

New Jersey:

Approximately $1 per user per month — compared to roughly $20 per user per month for many commercial tools. The state chose a pay-as-you-go model initially to understand usage patterns before committing to provisioned infrastructure.

As of January 2026:

  • 20,000+ employees have used the AI Assistant

  • 1,000,000+ total prompts

  • 79% positive feedback rating

  • 97% successful response rate

  • Costs remain below the team’s allocated budget

NJ reports an average of seven sessions per user, with roughly three prompts per session — evidence that usage varies widely rather than uniformly across staff.

“Pricing structure matters as much as model performance.”

Boston:

Estimated first-year cost: under $10,000.

The system incurs cost only when employees use it. This allows Boston to provide broad eligibility without paying for dormant seats.

Boston’s CIO framed the economic reality clearly: “a minority of employees will use AI frequently; many will use it occasionally. A per-seat licensing model charges the same for both groups. A usage-based model reflects that distribution.”
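The arithmetic behind that observation can be sketched quickly. All numbers below (the per-seat rate, the per-prompt rate, and the usage split) are invented for illustration and are not either jurisdiction’s actual pricing:

```python
# Compare per-seat licensing with usage-based pricing when
# usage is skewed: a few heavy users, many occasional ones.
PER_SEAT_MONTHLY = 10.0   # assumed flat per-seat rate
COST_PER_PROMPT = 0.01    # assumed usage-based rate

def monthly_cost(prompts_per_user):
    seats = len(prompts_per_user)
    per_seat = seats * PER_SEAT_MONTHLY
    usage = sum(prompts_per_user) * COST_PER_PROMPT
    return per_seat, usage

# 1,000 staff: 100 heavy users (300 prompts/mo), 900 occasional (5 prompts/mo)
prompts = [300] * 100 + [5] * 900
per_seat, usage = monthly_cost(prompts)
print(f"per-seat: ${per_seat:,.0f}/mo, usage-based: ${usage:,.0f}/mo")
```

Under this (hypothetical) distribution, per-seat pricing charges every dormant seat the full rate, while usage-based pricing tracks the skew; the gap narrows only as occasional users become heavy ones.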

Lesson:

Pricing structure matters as much as model performance. Government usage patterns do not resemble uniform enterprise adoption curves.

Question 4: How Do You Decide Whether to Scale?

Both jurisdictions instrumented their systems from day one — tracking prompts, sessions, user feedback, and cost in order to learn before scaling.

For example, New Jersey tracked:

  • Number of users

  • Sessions per user

  • Prompt counts

  • Feedback widget ratings

  • Qualitative interviews and surveys

The initial version launched with minimal features. When usage grew quickly and feedback was strong, the state invested in a more robust rebuild.

Boston is taking a similar approach: monitor recurring users, monthly usage, and satisfaction before expanding commitments.

Lesson:

Evidence drives scaling decisions.

Question 5: Why Build Your Own Tool at All?

Both teams acknowledged the obvious counterargument: commercial vendors will always ship features faster.

The response was pragmatic.

A government-hosted tool allows:

  • Control over data

  • Configurable safety filters

  • Model comparison inside one interface

  • Usage analytics

  • Model routing based on cost and task complexity

  • Fewer features, but more relevant ones for public work

In Boston’s case, the system includes a feature that routes prompts to the most cost-effective model depending on task complexity — a small but telling example of designing for government economics rather than consumer experience.
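A routing feature like that can be sketched in a few lines. Everything below (the model names, per-token rates, and the complexity heuristic) is invented for illustration; Boston’s actual implementation is not public:

```python
# Hypothetical cost-based model routing: send each prompt to the
# cheapest model rated for the task's estimated complexity.
MODELS = [
    # (name, cost per 1k tokens, max complexity it is trusted with)
    ("small-fast", 0.0005, 3),
    ("mid-tier",   0.003,  7),
    ("frontier",   0.015, 10),
]

def estimate_complexity(prompt: str) -> int:
    # Crude placeholder heuristic: longer prompts count as more complex.
    return min(10, len(prompt.split()) // 20 + 1)

def route(prompt: str) -> str:
    """Return the cheapest model whose ceiling covers the task."""
    complexity = estimate_complexity(prompt)
    for name, _cost, ceiling in MODELS:  # ordered cheapest first
        if complexity <= ceiling:
            return name
    return MODELS[-1][0]  # fall back to the most capable model

print(route("Summarize this paragraph."))  # short task -> cheapest model
```

In production, the complexity estimate would come from a classifier or the cheap model itself, but the economic logic is the same: most government prompts are simple, so defaulting them to the cheapest adequate model is where the savings accrue.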

New Jersey emphasized something similar: launching quickly, learning from actual use, and improving iteratively instead of over-engineering before demand was clear.

What This Means for Other Jurisdictions

States and cities are keen to give public servants safe access to secure tools. 

But the underlying design choices — per-seat licensing versus usage-based infrastructure, vendor-managed environment versus government-hosted tool, optional training versus required access pathways — create very different budget profiles and governance dynamics.

In my view, governments helped create the AI market. They do not need to accept whatever model that market offers. The real question is whether public institutions will exercise their leverage — insisting on affordability, auditability, and measurable public value — or allow enterprise defaults to define the future of AI in government.

WATCH the workshop recording 

SIGN UP for Practical Strategies for Buying and Building Public AI, beginning in March
