The tools and best practices of public communication are changing almost by the day.
Artificial intelligence is reshaping how messages are crafted and distributed, while trust in government, even at the state and local level, is beginning to decline.
For the more than 2,700 participants, including 1,400 unique learners, who joined the InnovateUS Amplify: Mastering Public Communication in the AI Age workshop series, these are daily realities.
Part I: AI and the Future of Public Communication by John Wihbey
Over the 2025–26 academic year, hundreds of public-sector communications professionals participated in more than a dozen workshops moderated by Jill Abramson and me at Northeastern University’s Burnes Center for Social Change.
The dialogue was remarkable, and we felt fortunate to help curate this series. Jill is the former executive editor of The New York Times and a senior fellow at the Burnes Center; I direct the AI-Media Strategies Lab (AIMES Lab) at Northeastern and specialize in the intersection of AI and media.
The workshops were designed by Beth Noveck to be conversational and practical, pairing expert speakers with audience Q&A over Zoom so that participants from across the country could attend. A huge thanks as well to Agueda Quiroga and Eileen Twiggs, who expertly produced the series.
The sessions I moderated, summarized below, focused on how communicators can responsibly adopt AI tools while maintaining the information integrity their roles demand. A consistent thread ran through conversations with experts: AI is most useful not as an all-knowing oracle, but as a specialized collaborator, one that still requires human judgment, verification, and institutional values to produce anything worth publishing.
AI 101 for Communicators — What You Need to Know
The series’ second session served as a practical orientation to the generative AI landscape, walking participants through major models—ChatGPT, Gemini, Claude, and others—and emphasizing that each has a different “psychology,” shaped by its training data and fine-tuning. The goal was not to turn communicators into technologists, but to help them develop intuition for what these tools can and cannot do.
A central argument was that communicators should think less about AI as an omniscient chatbot and more about building customized “worker bees” or specialized tools grounded in approved knowledge bases, institutional style guides, and repeatable instructions. The underlying principle: information integrity for public communicators is paramount.
I encouraged everyone to experiment with Claude Projects, Gemini Gems, and Custom GPTs, the platform features that make building such grounded assistants practical.
The session also introduced a tension that would surface throughout the series: the line between cognitive offloading (letting AI do the thinking for you) and genuine cognitive amplification, where tools help you produce work you could not have done alone.
AI-Assisted Writing: Lessons from the Field
Luciana Herman and Phil Malone of Stanford Law School demonstrated how AI can function as a writing assistant, particularly using tools like Google’s NotebookLM, which operates within a “closed universe” of uploaded documents.
Using compelling examples from their policy work, they showed how to generate a lay-audience executive summary from a pair of dense, 20-plus-page policy reports in a matter of seconds.
Their message was clear: AI can dramatically accelerate synthesis and drafting, but it cannot replace judgment. As Malone put it, think of AI as a collaborator, never something you simply hand the work to. The keyword repeated throughout the session: verify.
Telling Public Stories with Data
Lee Rainie, formerly of the Pew Research Center, traced the history of data storytelling and its evolution into the AI era. His core insight: successful public communication of data requires thinking like a journalist, leading with what’s new, framing insights as stories, and investing in visual clarity.
He shared a telling example: the single biggest traffic day in his team’s history came not from a sweeping survey about internet use, but from a single number: that 6 percent of online adults used Reddit. The finding took off because the Reddit community had never seen itself quantified before.
He closed with a warning that resonated deeply with public-sector communicators:
“Democracy dies when epistemology fails. Liberty collapses when lies and propaganda supplant accurate data and truth-detecting mechanisms—and that’s the business you’re in.”
Effective Use of Social Media: Storytelling, Trust, and Institutional Brand
Pearl Gabel emphasized that every government agency, no matter how technical, has stories worth telling.
She opened with the story of Jimmy Breslin, who, on the day of JFK’s funeral, ignored the ceremony and found the gravedigger, producing the most memorable piece of journalism from that moment.
The lesson: if you’re standing where everyone else is standing, you may be in the wrong place.
Her guidance was practical: train AI tools on your voice, avoid generic outputs, and never publish anything you cannot verify. In a crowded content environment, distinctiveness and credibility are inseparable.
“Every press conference, every weather event, every random Tuesday is content,” she told the group. “Every person in your office, in your department, is content.”
Fighting Fire with Fire: Combating Disinformation with AI
Megan Marrelli of Meedan explored whether AI can be used to counter misinformation. The discussion surfaced real tensions: trust in AI tools, ethical concerns about training data, and the difficulty of competing with the scale of false content online.
Can a closed-loop AI chatbot truly compete with the volume of misinformation flooding social media? How do you maintain user trust when the tool itself is built on technology that many associate with hallucination and unreliability? And if the underlying models were trained on ethically questionable data, what does it mean for newsrooms and public agencies to build on top of them?
Across these sessions, a shared conviction emerged: AI’s greatest value lies not in flashy outputs, but in the quieter, upstream work of research, synthesis, and drafting. These tools demand more human judgment, not less.
But tools alone are not enough. In an information environment defined by speed and fragmentation, the deeper challenge is not just how we communicate, but whether the public trusts what we say.
Part II: Rebuilding Trust in a Noisy Information Environment by Jill Abramson
Surveys over the years have consistently shown that Americans view state and local governments more favorably than the federal government. But do Americans really understand what those governments do?
Recently, however, even the trust numbers for state and local governments have begun to decline.
The people who gathered for the Amplify workshop series are well-positioned to reverse that trend: they are government officials charged with communicating with the press and public.
There was strong agreement across sessions about several core principles. Chief among them was the need for transparency. To establish trust and credibility, communications officials must clearly explain where information comes from and why it should be trusted.
As Mahen Gunaratna, former Communications Director to New Jersey Governor Phil Murphy, explained during the Navigating Crisis Communications with Confidence workshop:
“We were very intentional about putting out as much information as possible in real time… because we felt like that’s what the media and the public deserved.”
One of the standout sessions featured former Congressman Brian Baird, a clinical psychologist and persuasion expert. Drawing on his experience in Congress and beyond, Baird emphasized that effective communication starts not with the message, but with the audience. As he put it during Effective Communication for the Public Good:
“Know your audience… What is their interest? What is their knowledge level? What is their motivation?”
Baird’s insights resonated deeply with participants, particularly his emphasis on listening as a core skill of public leadership. Reflecting on his years holding town halls across Washington State, he noted:
“The voters want someone who will listen to them.”
His candor about both successes and mistakes made clear that trust is built not through polished messaging alone, but through genuine engagement.
He also underscored the constraints communicators face in crowded, fast-moving environments:
“You’ve got less than 30 seconds to tell me whether I should pay attention for the next 10 minutes.”
Another powerful voice in the series was Max Stier, CEO of the Partnership for Public Service, who spoke about the broader challenge of restoring trust in government. During Explaining Public Service: Strategies for Clear, Credible Communication, Stier argued that part of the problem is not performance, but visibility:
“No one has actually looked out for the brand… no one has felt responsible for creating a public that is educated about what the government is doing for them.”
For the communicators in the Amplify series, this presents both a challenge and an opportunity. If trust in government is declining, it is not only because of policy or performance, but because of a failure to clearly, consistently, and credibly tell the story of public service.
In an era of AI-accelerated communication, that responsibility becomes even more urgent. The tools exist. The expertise exists. The question is whether institutions are ready to use them.
Conclusion: From Tools to Trust
Across the Amplify series, two realities came into sharp focus. First, the tools of public communication are evolving rapidly, with AI reshaping how information is produced, analyzed, and shared. Second, the core challenge of public communication remains unchanged: earning and maintaining trust.
The sessions made clear that these dynamics are deeply interconnected. AI can help communicators work faster, synthesize more information, and reach broader audiences. But it also raises the stakes. In a more saturated and easily manipulated information environment, credibility is no longer assumed. It must be actively built and reinforced.
For public institutions, this means pairing technological adoption with renewed attention to fundamentals: transparency, clarity, listening, and human connection. The most effective communicators will not be those who use AI the most, but those who use it in ways that strengthen rather than supplant their relationship with the public.
The central lesson of this year-long journey is that the future of public communication will be shaped by both the tools we adopt and the trust we can sustain.