This last week has been a marathon. Together with Dane Gambrell, I spent most of it drafting the first version of the script for our upcoming free online course, Designing Democratic Engagement with AI: Practical Strategies for Public Participation. Written for InnovateUS to deliver to public professionals, the course is a co-production of The GovLab and Danielle Allen's Allen Lab at Harvard.

First, we wrote twenty-five thousand words (hard) covering everything from how to set goals to how to select participants to how to close the loop between public input and real decisions, all with the help of artificial intelligence. Then we cut the script to less than half that length (harder). We still need to get down to 9,000 (oy), so another editing round is ahead. But a first draft is done.

The script reflects the extraordinary response we received when we shared the course curriculum with our advisory group of fifty experts from twenty-four countries. As Dane described in his post last week, we received more than three hundred comments. Many were generous, specific, and immediately actionable: suggestions for examples, requests for clearer definitions, and pointers to literature, resources, and gaps we had simply missed. We incorporated those suggestions and moved on.

But some issues were genuinely hard. Places where thoughtful people disagreed with each other, where any choice involved a real tradeoff, and where we couldn't satisfy everyone.

I had the good fortune last week to unburden myself of some of these harder choices at the global AI and Democracy seminar organized by Professor Jose Luis Martí at Pompeu Fabra University in Barcelona with a room full of political philosophers and democratic theorists who pushed back hard in exactly the ways I needed. I left with clearer thinking on at least three of the toughest calls.

Before I get to those, one “easy” one that illustrates what the hard ones feel like by contrast.

Several reviewers asked whether the course should tell institutions plainly: if you are not prepared to act on public input, don't launch an engagement at all. That is arguably a confrontational thing to say in a course aimed at practitioners who are already interested in engagement.

We believe engagement has to be connected to action, or it becomes democracy theatre and the kind of performative listening that breeds cynicism rather than trust.

But we said it anyway, clearly and early: "Before you begin, be honest about whether you are ready to engage. If you are not ready to design, run, or use the outputs of the engagement, then focus first on building the conditions that make meaningful engagement possible."

These three were harder choices to resolve, and to explain, in a one-hour program:

1. The False Hierarchy of Representativeness

One of the nine questions we ask practitioners to think carefully about is who should participate in an engagement. In our original curriculum, we organized this around a distinction between "representative" and "non-representative" selection.

For many engagement theorists and practitioners, demographic representativeness is treated, implicitly, as the gold standard.

The Pompeu Fabra seminar surfaced something that had been nagging at me: that framing creates a false hierarchy. Demographic representativeness, namely selecting participants to mirror a population by age, gender, race, income, and geography, is treated as a way to ensure legitimacy in decision-making. When you ask a cross-section of the population, the resulting actions reflect the “will of the people.”

AI allows institutions to combine multiple selection approaches in ways that were previously too costly or complex.

But it is not self-evidently superior to other approaches. You might instead select for domain expertise, for lived experience with a specific problem, for viewpoint diversity, for institutional role, or for geographic stake. You might over-sample for those who are low income. Each of these serves different purposes and reflects different theories of what makes public input legitimate and useful.

The Barcelona group were particularly pointed about this. There is a tendency in democratic theory to fetishize demographic representativeness — to treat it as the only form of selection that confers legitimacy. But a citizens' assembly on healthcare policy that mirrors census demographics may actually produce worse deliberation than one that brings together patients, caregivers, clinicians, and administrators. The question is not whether your sample looks like the population. It is: representative of what, and for what purpose?

We resolved this debate by reorganizing the topic around a different axis entirely: selected versus self-selected participation. That framing is more neutral and more useful for practitioners. Self-selection, namely opening participation to anyone who wants to engage, has real strengths: it channels genuine motivation, it reaches people who care deeply, and it can scale. Targeted selection — deliberately recruiting specific participants — allows you to reach people who might not come forward on their own and to ensure that specific kinds of knowledge or experience are in the room. The honest answer is that the most effective engagements often combine both.

What AI adds to this picture is genuinely exciting. It allows institutions to combine multiple selection approaches in ways that were previously too costly or complex. Now an institution can run a large open consultation alongside a smaller deliberative process, synthesize inputs from the different selection processes, and do so affordably. That's a much more interesting story than "try to get a representative sample."

2. How Political Should a How-To Course Be?

The second hard issue was one where our reviewers were genuinely split.

A course that leads with a political frame risks losing exactly the audience it most needs to reach.

Several pushed us to open the course with a full-throated account of the democratic crisis and suggested talking about declining trust in institutions, rising authoritarianism, the fragility of democratic norms as the driver for why to engage. They argued that without that framing, the course treats public engagement as a neutral administrative practice, a kind of civic plumbing, when it is in fact a political act with stakes.

Others pushed back just as hard. The course is designed for practitioners around the world, many of them working inside governments that are not fully democratic, or in political environments where explicit talk of "democratic backsliding" would get the course rejected before it was ever adopted. A course that leads with a political frame risks losing exactly the audience it most needs to reach.

This is genuinely hard because both sides are right.

We resolved it by making a distinction between framing and substance. The course opens by naming the participation crisis: three out of four residents across twenty-four countries say elected officials don't care what they think. That is a factual claim, not a political one, and it provides real motivation for why the course matters. But we stopped short of naming authoritarianism or framing the course as a response to democratic backsliding. The goal is to give practitioners practical tools that work across a wide range of political contexts and to trust that, when used well, these tools strengthen democratic practice.

One thing we cut for space that I still wish we'd kept: a clear distinction between public engagement and democratic engagement. Public engagement involves a wide range of approaches from co-creation to collecting comments. Democratic engagement is using that input in ways that strengthen democratic accountability and legitimacy. AI can support both, but it does not automatically turn the former into the latter. That distinction matters, and we may find another way to surface it through the coaching tool or accompanying materials.

3. Are We Taking People from Zero to Five, or Five to Ten?

Philosophers at Pompeu Fabra asked (and then answered) another hard question. Is this course trying to convert non-practitioners — people who have never run a public engagement — into practitioners? Or is it trying to help experienced practitioners do what they already do more effectively and at greater scale? Who is the more “important” audience?

Those are very different courses. The first needs to start from scratch, justify why engagement matters, and avoid assuming any prior knowledge. The second can skip the motivation and go deep on technique. The first is an on-ramp. The second is a masterclass.

We have been pulled in both directions throughout. The institutional partners we want to adopt this course often have staff who haven't run meaningful engagements. Getting them from zero to five, from no practice at all to some practice, is enormously valuable and probably where the greatest impact lies. But many of our fifty advisors are themselves deep practitioners who found the early modules too basic and wanted more sophistication.

We decided to commit to the on-ramp. We wanted more people who are not “engagement professionals” to see public engagement as a new way of working and to try it, regardless of their role. This is a one-hour primer. Its job is to give someone who has never designed a public engagement a clear framework, a vocabulary, enough examples to make it feel real, and enough confidence to try.

The course is an invitation.

What that means in practice is that we accept some frustration from expert practitioners who will find parts of the course too simple in exchange for (we hope) being genuinely useful to the much larger number of people who are starting from scratch. An AI-enabled coaching tool that people can use at their desks, subsequent live workshops, and future video courses are where the five-to-ten work might happen.

I think that's the right call. It's important to be transparent about the choices we made, as they shaped the examples we chose and how we talk about risk.

As in every good engagement, we combined multiple approaches. We invited comments from our advisory board, deliberated with experts at Pompeu Fabra and elsewhere, opened the curriculum to crowdsourced review from practitioners around the world, and are now sharing the draft script for another round of input. Then we decided (and tried to explain our reasoning transparently, including where we're still uncertain).

The course launches soon. We hope you'll take it, share it, and tell us what we got wrong. 

Designing Democratic Engagement with AI is a free, self-paced course created by InnovateUS, The GovLab, and the Allen Lab for Democracy Renovation at Harvard University.

To receive updates about the course and the accompanying AI and engagement coaching tool, sign up here.

Cover Image by Jorge Franganillo | CC BY 3.0