In February, we shared a draft of the curriculum for Designing Democratic Engagement for the AI Era — a short, practical course for public servants on how to design effective public engagement, and how AI can help make running public conversations more efficient and effective.
We asked our advisory group of more than 50 practitioners, researchers, and civic technology experts from 24 nations to tell us what the curriculum got right, what we missed, and where we needed to go deeper. We invited Reboot Democracy’s readers to review and comment on the curriculum, reflecting on:
- examples of AI-enabled engagement in practice
- risks or tensions we should address more explicitly
- practitioner stories that illustrate what works and what does not
- suggestions for tools, methods, or cases to include.
The response was generous and substantive. Across inline comments on the curriculum outline, written email responses, and an advisory group call, we received more than 300 pieces of feedback. This post summarizes what we heard, explains how we used AI tools to help us make sense of that feedback, and describes what comes next.
What We Heard from You
We have organized the curriculum feedback into two documents:
- A Comment Summary Table that groups reviewers’ substantive comments into 12 issues and identifies the resulting changes and additions to the curriculum.
- A Resources Table that catalogs reviewers’ suggestions for additional examples, case studies, tools, and frameworks, organized by module.
Separating substantive editorial issues from resource additions makes it easier to track and implement both types of suggestions.
In summary, reviewers suggested that we should:
1. Clarify framing about audience, learning pathway, and framework. The course could be clearer upfront about who it is for, what prior knowledge it assumes, and where it sits within a broader learning journey.
2. Discuss prerequisites for engagement, such as assessing institutional readiness, securing leadership commitment, and building cross-departmental support.
3. Deepen discussion of "closing the loop." Engagement must produce genuine accountability — not just responsiveness — and participants need to see how their contributions mattered, including when input was not acted on.
4. Deepen discussion of risks and opportunities in using AI. Every module should name limitations and risks alongside benefits, and avoid overstating what AI tools can do.
5. Deepen discussion of diversity, equity, and inclusion. The course needs more concrete guidance on how to reach underrepresented communities.
6. Address trust and the meaning of democratic engagement more directly. The course should distinguish between public and democratic engagement, and make clear that AI offers new tools but not shortcuts to meaningful participation.
7. Clarify discussion of participant selection. The course should address mini-publics versus maxi-publics, cognitive diversity, and attrition, and should avoid descriptions of selection methods that could read as manipulative.
8. Address data protection and privacy throughout. Data governance and privacy considerations should run as a thread across all modules.
9. Add a section on risks and adversarial behavior. While the course assumes that institutions approach engagements in good faith, there is a need to address issues like coordinated manipulation, AI-drafted submissions, and staff safety.
10. Address module overlap and structural clarity. Modules 8 through 10 (covering task design, workflows, and evaluating inputs) overlap and could be consolidated or more clearly differentiated.
11. Add real-world examples and tool references throughout. Each module needs concrete examples of impactful projects. The course should point to free or low-cost options accessible to under-resourced teams.
12. Deepen discussion of goal-setting and human judgment. Practitioners need to internalize goals themselves, and for complex problems, problem discovery should precede goal-setting.
The feedback also surfaced several tensions where there was no consensus among commenters, including how much political framing belongs in a practical how-to course, how prescriptive to be about when AI is and is not appropriate, and how to balance breadth and depth within a one-hour format.
These tensions do not have easy answers. We are navigating them carefully and will be transparent about how we approach them in the next phase of work.
How We Used AI to Synthesize Feedback
With 282 inline comments — plus written submissions and call notes — organizing and analyzing the feedback manually would have taken our small team weeks. Using AI tools, we completed this analysis in just a few days.
The process unfolded in several steps:
- Skimming the raw feedback: We began with a quick manual scan of the comments to gauge the scope and scale of the feedback, the range of issues raised, and whether any major concerns or patterns were immediately apparent.
- Extracting and organizing the raw feedback: We then used Claude (Sonnet 4.6) to extract the inline comments into a spreadsheet, with each row listing one comment, its author, and a unique identifier. This created a complete, organized record of the feedback and put the comments in tabular form, which is easier both for people to read and for AI tools to process.
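For teams that would rather script this step than run it interactively, a minimal sketch using the Anthropic Python SDK might look like the following. The file names, prompt, and model ID are illustrative assumptions, not our exact setup, and the sketch assumes the model returns clean JSON.

```python
# Hypothetical sketch: extracting inline comments into a CSV.
# File names, prompt, and model ID are illustrative placeholders.
import csv
import json

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

raw = open("curriculum_comments_export.txt", encoding="utf-8").read()

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model ID
    max_tokens=8000,
    messages=[{
        "role": "user",
        "content": (
            "Extract every inline comment from the document below. "
            "Return only a JSON array of objects with keys 'id', "
            "'author', and 'comment'. Number ids sequentially "
            "(C001, C002, ...).\n\n" + raw
        ),
    }],
)

rows = json.loads(response.content[0].text)  # assumes a clean JSON reply

with open("comments.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "author", "comment"])
    writer.writeheader()
    writer.writerows(rows)
```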
- Thematic analysis and coding: Next, we used Claude to identify common themes in the comments. The model reviewed all comments, identified recurring topics, ideas, concerns, and suggestions, and grouped similar ideas into preliminary categories. The output was a list of themes representing the main topics raised across the comments, which we manually edited for clarity and to remove repetition.
We then used the model to code the comments in the spreadsheet, assigning each one a thematic label based on the issues it raised. The thematic analysis let us explore the range of topics covered in the comments, while the coding helped us gauge the prevalence of each theme.
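As an illustration, a scripted version of the coding step could loop over the spreadsheet and ask the model to assign one theme per comment. The theme names, model ID, and file names below are placeholders, not our actual coding scheme.

```python
# Hypothetical sketch: coding each comment against an edited theme list,
# one comment at a time. Names below are illustrative placeholders.
import csv

import anthropic

THEMES = [
    "Audience and framing",
    "Closing the loop",
    "AI risks and limits",
    "Diversity, equity, and inclusion",
    # ... remaining themes from the manually edited list
]

client = anthropic.Anthropic()

def code_comment(comment: str) -> str:
    """Ask the model to pick the single best-fitting theme for a comment."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model ID
        max_tokens=50,
        messages=[{
            "role": "user",
            "content": (
                "Assign this reviewer comment to exactly one theme from "
                f"the list {THEMES}. Reply with the theme name only.\n\n"
                f"Comment: {comment}"
            ),
        }],
    )
    return response.content[0].text.strip()

with open("comments.csv", encoding="utf-8") as f:
    coded = [{**row, "theme": code_comment(row["comment"])}
             for row in csv.DictReader(f)]

with open("comments_coded.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "author", "comment", "theme"])
    writer.writeheader()
    writer.writerows(coded)
```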
- Issue synthesis: The next step was to synthesize the substantive feedback into a set of issues. We provided Claude with the draft course outline with inline comments, the spreadsheet of thematically coded comments, and a summary of the themes. The model grouped related feedback into a more specific set of substantive issues, combining comments that raised similar concerns or suggestions. For each issue, it produced a bulleted list of possible changes or additions to the curriculum and estimated how frequently the issue appeared in the comments.
In a second iteration, we provided anonymized summaries of the advisory call discussion and the feedback shared via email, instructing the model to update the table with any new issues and possible changes identified in those documents.
The output was a Comment Summary Table listing a description of each issue, the possible changes or additions, and the frequency. We then edited the table to ensure all issues were accurately captured and logically organized. Whereas the thematic analysis was descriptive, helping us understand what people said, the synthesis was interpretive, helping us understand the issues underlying the comments and what changes or additions to the curriculum they implied.
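To make the synthesis step concrete, the sketch below shows the shape of a prompt that could drive it. The wording is an illustrative reconstruction, not the exact instructions we gave Claude.

```python
# Illustrative reconstruction of an issue-synthesis prompt; the actual
# instructions we used differed.
SYNTHESIS_PROMPT = """\
You are given three documents:
1. The draft course outline with inline comments.
2. A spreadsheet of comments, each coded with a theme.
3. A summary of the themes.

Group related feedback into a set of substantive issues, combining
comments that raise similar concerns or suggestions. For each issue:
- write a short description of the issue,
- list possible changes or additions to the curriculum to address it,
- estimate how frequently the issue appears, citing comment ids.

Return the result as a table with columns:
Issue | Possible changes or additions | Frequency.
"""
```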
- Resource extraction: We then instructed the model to review all three feedback sources (inline comments, call notes, and email submissions) and identify the case studies, frameworks, and other resources that reviewers suggested. The output was a Resources Table describing each resource, the course module it falls under, and its type (case study, framework, tool, platform, and so on).
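This step followed the same pattern; an illustrative reconstruction of the instruction (not our exact wording) could look like this.

```python
# Illustrative reconstruction of a resource-extraction instruction;
# the actual wording we used differed.
RESOURCE_PROMPT = """\
Review the three attached feedback sources: the inline comments, the
advisory call notes, and the email submissions. Identify every case
study, framework, tool, platform, or other resource that a reviewer
suggests adding to the course.

Return a table with columns:
Resource | Description | Suggested module | Type.
"""
```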
- Cross-checking across models: We repeated the comment synthesis and resource extraction steps using three different models: Claude Opus 4.6 with Extended Thinking, OpenAI’s ChatGPT 5.4 Pro, and Google’s Gemini 3.0 Thinking. Manually comparing outputs across models helped us assess the quality and completeness of the synthesis, identify gaps, and ensure that all issues and resources were captured in the tables. Note that these procedural documents include raw AI-generated outputs, which may contain errors.
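A scripted version of this cross-check could run the same prompt through each provider's SDK and save the outputs for side-by-side reading. The sketch below uses the Anthropic, OpenAI, and Google Python SDKs with illustrative model IDs; we compared the outputs by hand rather than programmatically.

```python
# Hypothetical sketch: running the same prompt through three providers
# and saving the outputs for manual side-by-side comparison.
# Model IDs are illustrative placeholders.
import os

import anthropic
import google.generativeai as genai
from openai import OpenAI

prompt = open("synthesis_prompt.txt", encoding="utf-8").read()

outputs = {}

# Anthropic (reads ANTHROPIC_API_KEY from the environment)
claude = anthropic.Anthropic()
outputs["claude"] = claude.messages.create(
    model="claude-opus-4-1",  # illustrative
    max_tokens=8000,
    messages=[{"role": "user", "content": prompt}],
).content[0].text

# OpenAI (reads OPENAI_API_KEY from the environment)
openai_client = OpenAI()
outputs["openai"] = openai_client.chat.completions.create(
    model="gpt-5",  # illustrative
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Google
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
outputs["gemini"] = genai.GenerativeModel(
    "gemini-2.5-pro"  # illustrative
).generate_content(prompt).text

for name, text in outputs.items():
    with open(f"synthesis_{name}.md", "w", encoding="utf-8") as f:
        f.write(text)
```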
Human Review Throughout
To ensure the feedback was captured accurately, we carefully read all comments. We compared the AI-generated summaries and syntheses against the underlying source documents to correct errors. And we made substantive editorial judgments about how to represent and respond to your feedback. At every stage, AI-generated outputs were treated as starting points for analysis, not final products.
What Comes Next
We are drafting the course scripts based on your feedback and will share the drafts on Reboot Democracy in late March.
We will also continue reflecting on the unresolved tensions: why they are hard, where we stand on them, and how we will decide a path forward.
We are grateful to everyone who contributed to this process. The quality and candor of the feedback we received have made this a substantially better course. We look forward to sharing what we have built with you.
Designing Democratic Engagement for the AI Era is a free, self-paced course created by InnovateUS, The GovLab, and the Allen Lab for Democracy Renovation at Harvard University.
To receive updates about the course and the accompanying AI and engagement coaching tool, sign up here.