Watch the InnovateUS workshop: Using AI to Understand Public Sentiment
A few years ago, I attended a town hall in my community. The mayor had just unveiled a redesign of a central square with renderings on the screen and an open mic for "community input."
One by one, residents stood up. Almost none of them wanted to talk about the square. They wanted to talk about why the square had been prioritized in the first place. Why not the bus line that had been cut? Why had the school waited three years for repairs?
The mayor was listening. The residents were engaging. And everyone left frustrated, because the two weren't the same thing, and nobody in the room had the language to name the mismatch.
I've seen a version of this play out in dozens of cities since. It's one of the reasons we built Go Vocal — a participation infrastructure used by hundreds of governments worldwide to hear from their residents at scale and turn that input into decisions.
AI has reshaped this work, but not in the way most people assume. It hasn't made democracy more automated, and it shouldn't. What it has done is remove the bottleneck that once made listening at scale practically impossible.
When a city receives 15,000 comments on a single question, no team of civil servants can read them all manually in any timeframe that would be useful for an actual decision. AI can cluster, surface sentiment, and generate traceable summaries so a human decision-maker can engage with what residents said, rather than drown in it.
What we've learned, over and over, is that the technology matters less than the timing. Governments are willing to listen. The question is whether they're listening at the right moment, in the right way, for the right decision.
A framework for listening and engaging across the policy cycle
Policy change doesn't happen in a single instance. It moves through a cycle — identifying what problems matter, developing options, making decisions, implementing them, and evaluating outcomes.
At each stage, what the government needs from citizens, and what citizens can meaningfully contribute, is different. The question isn't listening or engagement. It's matching the right form of participation to the right moment in the cycle.
Stage 1 — Agenda-setting
Listening does its best work here. The earliest phase of the cycle is also the most underinvested. Governments tend to involve citizens once a problem is already defined and options are on the table. But if you only engage people within a frame you've already set, you limit what you can learn.
Open-ended, always-on listening is different: it lets residents surface what keeps them up at night, on their own terms and their own timeline. It's how you discover the priority you didn't know to ask about.
This is participation at its most generative, not reactive, not tied to a pending decision, but genuinely shaping what gets onto the agenda in the first place.
Stage 2 — Formulation & decision
Next, engagement takes over. Once a challenge is identified and a decision is approaching, the nature of participation must shift.
Open-ended listening gives way to structured engagement: here are the options, here are the trade-offs, which direction do you favor? This has to be targeted, time-bounded, and tied to a real outcome. The input has to be actionable.
The risk of conflating the two is real. Use open-ended listening tools at the decision stage and you collect input that can't be acted on. Use structured engagement before communities have had the chance to shape the agenda, and you've already narrowed what's possible.
The tools aren't interchangeable; the stage matters.
Stage 3 — Implementation & evaluation
The final stage is closing the loop. The cycle doesn't end at the decision. Whether residents feel that what they asked for actually shaped the outcome, and whether the government demonstrates that it did what it said it would, determines whether trust is built or eroded.
An OECD study found that among people who don't trust the government, only 22% feel they can influence decisions. When people believe they actually can influence decisions, that figure rises to 69%. The feedback loop isn't administrative tidiness. It's the mechanism that builds trust and makes the whole exercise worth running.
The city of St. Louis shows what this looks like end-to-end. When the Board of Aldermen received a $250 million settlement after the Rams football team left town, they didn't shortcut the process: they used citizen input to decide how that funding should be allocated.
They started with open listening on the Go Vocal platform: more than 16,000 residents took part, contributing over 1,000 distinct ideas for their city. That listening phase surfaced seven priority challenges, which became the basis for 20 structured proposals.
The top priority emerged as replacing aging water infrastructure, which became the foundation for legislation approved by the Board.

Each stage of the process served a different function in the cycle: the Board used open listening to set the agenda, structured engagement to develop options, a decision mechanism tied to real outcomes, and public communication to close the loop.
The sequence is an architectural framework that defines how listening, deliberation, and decision-making are integrated into a single process. And it's why the process produced something that felt, and was, genuinely community-driven.
Three principles for responsible digital listening
The more governments invest in digital listening, the more an uncomfortable question surfaces: where does listening end and surveillance begin? I don't think this is a paranoid question. It's a legitimate one, and governments that don't grapple with it seriously will find that even well-intentioned listening programs erode the trust they were meant to build.
From our work at Go Vocal, where we work with hundreds of governments around the world, I think the line comes down to three principles.
1. Consent. People need to know when they are being listened to, and they need to have genuinely agreed to it. Whether the channel is an engagement platform, a community survey, or a social media feed, residents should understand that their input is being collected and how it will be used. This isn't only a legal requirement; it's foundational to the whole exercise.
A useful test: would you be comfortable explaining your listening approach, in plain language, at a town hall meeting? If not, you probably shouldn't be doing it.
2. A clear purpose tied to decisions. Digital listening should exist to improve public decisions, nothing else. It cannot be a technique for profiling communities or building political intelligence that's never acted on. When a city listens to what residents say about public safety or infrastructure, the explicit purpose is to set better priorities and allocate resources more effectively. That purpose needs to be visible upfront, not tucked into a privacy policy.
3. Transparency. This is the most consequential principle, and the most underestimated. It's not enough to collect input well — residents need to see what you did with it.
The city of St. Louis understood this from the start. At every stage of the Speak Up St. Louis process, the Board of Aldermen published official updates that explicitly connected resident input to the proposals under consideration. The feedback loop wasn't an afterthought; it was load-bearing. Without it, participation becomes extraction: you take people's ideas, and they never hear from you again.
With it, participation becomes the foundation of something more durable, the sense that being heard actually changes things. Passive listening, at best, doesn't build trust. At worst, it destroys it. The governments we see achieving lasting results are the ones that close the loop, not once, but as a practice.
Where AI makes a real difference (and where it doesn't)
Open the door to listening at scale, and you immediately run into a constraint that no amount of civic ambition can solve: volume.

The city of St. Louis received over 1,000 ideas on its Go Vocal platform in a single process. Across the governments we work with, public consultations regularly generate tens of thousands of comments, ideas, and responses. No team of civil servants, however dedicated, can manually read through a dataset that size and extract meaningful patterns in any timeframe that's useful for actual decisions.
This is the problem AI is genuinely built to solve. Not democracy. Not participation. Those remain deeply human endeavors. But the bottleneck between citizen voice and government understanding, the challenge of making sense of massive, unstructured input at the speed governance requires, is exactly where AI adds real value.
In practice, AI does three things well.
1. Clustering and thematic analysis. When residents submit hundreds or thousands of ideas, AI can group them into coherent themes — infrastructure, housing, public safety — giving decision-makers a structured map of what the community is actually saying. In St. Louis, this clustering enabled the city to move from a mass of raw ideas to a manageable set of priority challenges around which real decisions could be made.
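To make the idea concrete, here is a minimal, pure-Python sketch of lexical clustering: resident comments are turned into TF-IDF vectors and greedily grouped by cosine similarity. This is not Go Vocal's implementation (production systems typically use learned embeddings), and all comments, function names, and the similarity threshold are invented for illustration.

```python
# Toy illustration of thematic clustering of resident comments.
# Not a production method: real platforms use ML embeddings. All data
# here is invented; the threshold is hand-tuned for this tiny example.
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build simple TF-IDF vectors (word -> weight) for each document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        # Drop words that appear in every document (zero information).
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf if df[w] < n})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse word-weight vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(comments, threshold=0.06):
    """Greedy clustering: attach each comment to the first cluster whose
    seed comment is similar enough, else start a new cluster."""
    vecs = tf_idf_vectors(comments)
    clusters = []  # list of (seed_vector, [comment indices])
    for i, v in enumerate(vecs):
        for seed, members in clusters:
            if cosine(seed, v) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((v, [i]))
    return [members for _, members in clusters]

comments = [
    "Fix the potholes on Main Street",
    "The potholes near the school are dangerous",
    "We need more affordable housing downtown",
    "Housing costs are pushing families out",
    "Restore the cancelled bus line",
]
groups = cluster(comments)
for g in groups:
    print([comments[i] for i in g])
```

The output a decision-maker sees is the grouping, not the math: five raw comments collapse into a pothole theme, a housing theme, and a transit theme.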
2. Sentiment detection. AI can identify whether input is broadly positive or critical, and how that varies across topics and communities. This matters particularly when governments are also monitoring social media, which comes with a structural distortion worth naming. In our experience at Go Vocal, negative sentiment runs around 30% on social media, compared to under 10% on institutional engagement platforms. The loudest voices online are rarely the most representative ones. AI can help put those signals in context.
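The channel comparison above can be sketched in a few lines. This toy version flags a comment as negative if it contains a word from a tiny hand-made lexicon and reports the negative share per channel; real systems use trained sentiment models, and every comment, channel name, and lexicon entry here is invented.

```python
# Toy channel-level sentiment comparison with a hand-made lexicon.
# Invented data; production systems use trained sentiment models.
NEGATIVE = {"broken", "dangerous", "failed", "ignored", "waste", "angry"}

def negative_share(comments):
    """Fraction of comments containing at least one negative-lexicon word."""
    flagged = sum(1 for c in comments if NEGATIVE & set(c.lower().split()))
    return flagged / len(comments)

by_channel = {
    "social_media": [
        "another failed promise from city hall",
        "the crossing is dangerous and ignored",
        "what a waste of our money",
        "glad to see the park reopening",
    ],
    "engagement_platform": [
        "please add benches along the riverfront",
        "extend library hours on weekends",
        "the crossing is dangerous for kids",
        "more trees on elm street would help",
    ],
}

for channel, comments in by_channel.items():
    print(channel, round(negative_share(comments), 2))
```

Even this crude measure makes the structural point visible: the same underlying concern (a dangerous crossing) appears in both channels, but the overall negativity rate differs sharply by channel.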
3. Summarization. For a communications team of one or two people, which is the reality in most city governments, reading thousands of individual comments isn't a bottleneck; it's an impossibility. AI-generated summaries, organized by theme, turn that impossibility into something workable: the difference between being buried under data and being able to act on it.
But here's where the line must be drawn clearly. AI should never generate claims or recommendations on its own. Every AI-produced summary or cluster must be traceable back to the original comments from real residents. If the system tells you that people in a particular neighborhood are concerned about investment disparities, a civil servant should be able to click through and read the actual inputs that produced that conclusion.
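The traceability requirement is ultimately a data-structure decision: a summary must carry pointers back to its sources. Here is a minimal sketch of that shape, with invented class names, IDs, and comments; it shows the guardrail, not any particular platform's schema.

```python
# Sketch of the traceability guardrail: every theme summary carries the
# IDs of the resident comments behind it, so a reviewer can click
# through from claim to source. All names and data are invented.
from dataclasses import dataclass, field

@dataclass
class Theme:
    label: str
    summary: str
    source_ids: list = field(default_factory=list)  # raw comment IDs

comments = {
    101: "The water mains on our block break every winter",
    102: "Replace the lead pipes before anything else",
    103: "Bus route 70 was cut and never restored",
}

water = Theme(
    label="Water infrastructure",
    summary="Residents ask for aging mains and lead pipes to be replaced.",
    source_ids=[101, 102],
)

def trace(theme, all_comments):
    """Return the original comments a summary claim is based on."""
    return [all_comments[i] for i in theme.source_ids]

for text in trace(water, comments):
    print(text)
```

A summary object without `source_ids` would be exactly the failure mode the text warns about: a claim about what residents want that no one can verify against what residents actually said.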
Traceability is the guardrail that keeps AI-assisted listening from becoming AI-generated fiction about what communities want. The promise of AI in governance is that it enables hearing from far more people, making sense of far more input, and responding with far more precision, while keeping a human at the center of every decision that matters.
In an era when governments are routinely accused of not listening, the ability to demonstrably show that you heard from 15,000 people, understood what they said, and let it shape policy is not a small thing. It's a meaningful shift in how democratic accountability works.
We're only beginning to understand its potential.