Read the full transcript here.
In this discussion, moderated by Burnes Center Director and Professor Beth Simone Noveck, we explored the evolution of misinformation, the challenges of combating it in today’s media environment, and the potential for AI-driven solutions to restore trust in information.
One of the key takeaways from the discussion was the failure of the original misinformation response model. For nearly a decade, many experts believed that if civil society organizations and journalists were well-networked, if social media platforms cooperated, and if fact-checking responses were shared across platforms, we could stop the spread and mitigate the impacts of misinformation. There was also a belief that open datasets would create more transparency and accountability. But this vision never materialized. Instead, major platforms withdrew support, regulatory efforts stalled, and misinformation became more deeply embedded in the media ecosystem. Bice explained that nothing is working the way it was originally envisioned, leading Meedan to rethink its approach.
Rather than relying on top-down strategies that depend on partnerships with social media companies, Meedan is now focusing on local newsrooms and community-driven solutions. Bice noted that significant philanthropic energy is going into local news, and that at a time when distrust in national media is so high, enabling local newsrooms to meet the information needs of their communities may be one of the best ways to restore trust in democracy. By building chatbots and AI-driven tools that help local journalists connect with their communities, Meedan hopes to create a more sustainable, community-based model for combating misinformation. This approach also avoids many of the challenges that arise when working with social media platforms. Instead of facing accusations of censorship, these tools allow journalists to distribute their content more effectively and identify key information gaps within their communities.
A key topic of discussion was the role of generative AI in the misinformation crisis. Some argue that AI is accelerating misinformation by making deepfakes and manipulated content more accessible, but Bice pushed back against the idea that this is the primary cause of media distrust. Instead, he argued that the real issue is the broader political climate, in which hyperbole and bad-faith actors have eroded public confidence in institutions. AI, however, can play a positive role if used correctly. Meedan is working on AI-powered tools that allow journalists to automate responses to frequently asked questions. Instead of spending hours debunking individual pieces of misinformation, journalists can focus on quality reporting while AI helps distribute their insights to the public more efficiently.
Beth Goldberg, Head of Research at Google’s Jigsaw, joined the conversation to discuss the risks of dual-use AI tools—technologies that can be used for both good and harm. Jigsaw recently launched a tool that allows users to customize what types of speech should be detected and moderated, but there is growing concern that extremist groups or other bad actors could use it to suppress opposing voices. Bice acknowledged this ethical dilemma, noting that Meedan maintains legal control over who can use its tools to prevent misuse. Still, he conceded that there is no perfect solution to the problem of bad actors co-opting misinformation-fighting technology.
As the conversation wrapped up, Bice shared his long-term vision. He emphasized the need to move away from reliance on large social media platforms and instead build grassroots, community-owned AI models. Bice sees a future where communities have control over the data that powers their chatbots and information-sharing systems, rather than relying on external platforms that prioritize profit over the public good. He also highlighted the importance of developing federated data-sharing models that allow trusted networks to share verified information with one another.
The discussion made it clear that the fight against misinformation must take a new direction—one that builds resilience at the community level rather than relying on corporate platforms. While the challenges ahead are significant, Bice remains optimistic that there is still a way forward. For those who missed the live event, stay tuned for more discussions on the intersection of AI, misinformation, and democracy.
Moderated by Beth Simone Noveck, this conversation continued the Rebooting Democracy in the Age of AI lecture series, hosted by the Burnes Center for Social Change, The GovLab, and the Institute for Experiential AI at Northeastern University. Sign up for all upcoming events: rebootdemocracy.ai/events