The AI Echo Chamber: New Paper on AI and Epistemic Risk

NEW PAPER: John Wihbey argues that as AI increasingly mediates the information domains that shape public knowledge, it risks producing an "epistemically anachronistic" public sphere, stifling the emergence of new ideas and limiting democratic deliberation. While AI has the potential to enhance our informational ecosystem, he argues, we must preserve space for organic human insight and discourse.

Beth Simone Noveck

In his paper AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge, Northeastern professor John P. Wihbey argues that AI poses significant epistemic risks to democratic societies by mediating and shaping the informational domains that support public knowledge and deliberation.

Wihbey pinpoints three major questions confronting scholars of AI and democracy: AI alignment, mechanism design, and the newer issue he focuses on here, epistemic risk.

The AI alignment problem refers to the challenge of ensuring that AI systems operate in accordance with human values and preferences, even as they evolve over time. Mechanism design, on the other hand, pertains to the challenge of structuring technological platforms and AI models in a way that allows humans to express their genuine views and have them accurately reflected in democratic decision-making processes. While these two challenges have been the primary focus of the AI ethics and governance discourse, Wihbey argues that epistemic risk deserves equal attention.

The question of what constitutes a healthy informational diet for democracy has long preoccupied scholars and policymakers, particularly in the study and regulation of broadcast journalism. Much of the European speech tradition rests on the notion that democracy requires a society of well-informed citizens who have access to a wide range of information and the freedom both to form opinions based on that information and to express them. The informational diet of democracy, in other words, must be varied and colorful.

However, Wihbey contends that the rise of AI presents new challenges to the democratic knowledge ecosystem. 

He worries about automated journalism. Imagine an AI system that writes news articles based only on past data. It might keep using the same outdated storylines and facts, missing out on new information, changing opinions, and fresh perspectives that are important to people today.

He also uses the example of social media moderation. AI moderation systems might mistakenly remove or limit genuine human discussion, because their decisions rest on old training data that cannot keep pace with the fast-moving, ever-changing nature of online talking points and trends.

Suppose opinion polls start using AI-generated survey responses instead of real people's answers. The AI might give incorrect predictions about what the public thinks on current topics, because it can't properly account for shifts in beliefs or changes in population groups over time. These skewed poll results could then wrongly influence how people really think and act.

As search engines use AI to condense information into short, simplified snippets, people may start relying too heavily on these AI-generated summaries. This could lead to people spending less time exploring, discovering, and thinking critically for themselves, and instead just accepting the AI's limited and potentially biased take on knowledge.

AI systems, trained on data from the human past, may struggle to capture the emergence of new knowledge, values, and preferences that arise through dynamic human interaction. This could lead to a recursive feedback loop, where AI's representation of reality shapes public perception and choice, which in turn reinforces the AI's limited epistemological framework.
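This recursive feedback loop can be caricatured in a toy simulation. All numbers and the `simulate` function below are illustrative assumptions, not drawn from Wihbey's paper: public opinion drifts organically toward an emerging view, while AI-mediated content pulls it back toward the AI's stale training snapshot, which is then retrained on the opinion it just shaped.

```python
def simulate(rounds=50, drift=0.02, ai_influence=0.6):
    """Toy model of the recursive loop: AI output nudges opinion
    toward its training snapshot, then retrains on the result.
    Values track the fraction of the public holding an emerging view."""
    organic = 0.1    # a world without AI mediation
    mediated = 0.1   # a world with AI-mediated information
    snapshot = mediated  # the AI's initial training data
    for _ in range(rounds):
        # New perspectives emerge in both worlds at the same rate.
        organic = min(1.0, organic + drift)
        nudged = min(1.0, mediated + drift)
        # AI-curated content pulls opinion back toward the snapshot.
        mediated = (1 - ai_influence) * nudged + ai_influence * snapshot
        # The AI retrains on the opinion it has just shaped.
        snapshot = mediated
    return organic, mediated
```

Under these assumptions the mediated world still changes, but at only a fraction of the organic rate (here, (1 − 0.6) of it), illustrating how the loop slows, without fully freezing, the uptake of new ideas.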

The "epistemic risk" perspective he offers challenges us to think beyond the immediate benefits and drawbacks of AI in specific domains and to consider the broader, systemic impact of AI on the health and resilience of our democratic knowledge ecosystem as a whole. It raises important questions about the long-term compatibility of AI-mediated information ecosystems with the principles of democratic deliberation and collective self-determination.

I do wonder, however, whether the risk is greater than the limitations of the current human-driven information ecosystem. Today, public knowledge is heavily mediated by the subjective judgments, biases, and agendas of human gatekeepers, such as news editors and content moderators. AI-powered systems, by contrast, have the potential to increase content diversity and surface underrepresented viewpoints by drawing upon a vast range of data sources and employing algorithms designed to prioritize balance and inclusivity. AI could help counter issues like echo chambers and political polarization by exposing individuals to a broader spectrum of ideas and information.

Moreover, as he points out, concerns about AI's epistemological limitations may be mitigated through advances in machine learning techniques, such as reinforcement learning and transfer learning, which could enable AI systems to adapt more dynamically to evolving social realities. As AI becomes more sophisticated in its ability to process and generate human-like content, it may grow more responsive to the emergence of new knowledge and values.

Nevertheless, Wihbey's argument serves as a valuable reminder and caution that the development of AI systems for knowledge production and dissemination must be approached with attention to the question of the informational diet of democracy. If AI comes to dominate key informational domains without adequate safeguards and human oversight, it could inadvertently ossify public discourse and constrain the organic evolution of human understanding. 

Ultimately, the path forward lies first in recognizing the information quality problems, and then in developing hybrid human-AI systems that harness the benefits of algorithmic content generation and curation while preserving space for the serendipitous, open-ended evolution of human insight and discourse. This will require ongoing research, experimentation, and public dialogue to strike a balance between the transformative potential of AI and the safeguarding of democratic epistemic health, undergirded by a steadfast commitment to the principles of open inquiry, pluralism, and collective self-determination.

Read the full paper: AI and Epistemic Risk for Democracy: A Coming Crisis of Public Knowledge.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.