
Dreaming Better Elections Into Reality

White paper #001: Safeguarding elections in the age of AI and synthetic content by Aleš Čáp, Sir Geoff Mulgan 

Organizational theorists James G. March and Johan Olsen describe an institution as something that "distinguishes the quality of a dream from the quality of a dream implemented."

Institutions matter because they shape how we respond to challenges, coordinate complex activities, and adapt to change over time. They're the scaffolding that turns policy aspirations into practical reality. When new technologies threaten democratic processes, we need institutions capable of rapid, coordinated response.

In their recent whitepaper, UCL's Aleš Čáp and Sir Geoff Mulgan ask: What institutions do we need to safeguard our elections against AI-generated deepfakes and synthetic content? As these technologies make it increasingly easy to create convincing fake videos, audio, and images of political figures, they pose unprecedented threats to electoral integrity—from last-minute disinformation campaigns that can't be debunked in time to synthetic content that erodes trust in all media.

Published by The Institutional Architecture Lab, the paper tackles the question of electoral deepfakes by designing the organizations we need to implement our democratic dreams, rather than by calling for new legislation or policies. Using electoral integrity as a test case, Čáp and Mulgan demonstrate why institutional design matters—and offer concrete steps countries can take to establish electoral integrity institutions for the AI age.


Why Institutions Trump Policy

The paper's most important insight is that institutions matter more than policy (my words, not theirs) because they determine how we implement change. Laws and regulations are only as effective as the organizations that enforce them. When facing complex, fast-moving threats like synthetic content, we need institutions designed for coordination, adaptation, and rapid response.

The authors also reframe the synthetic content challenge itself. This isn't fundamentally a detection problem (an arms race we can't win) but a dissemination problem. As they argue: "By focusing on how harmful synthetic contents spread and the mechanisms through which they reach and influence large audiences, we can develop more effective mitigation strategies, avoiding the gridlock associated with monitoring and moderating vast volumes of content."

This paradigm shift unlocks more promising strategies. Rather than getting stuck trying to identify every deepfake (an increasingly impossible task), institutions can target the channels through which harmful content reaches and influences audiences, sidestepping the gridlock of monitoring and moderating vast volumes of content.
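
To make the dissemination-first framing concrete, here is a minimal, hypothetical sketch of what triage by spread could look like: instead of attempting to detect every piece of synthetic content, items are ranked by reshare velocity, estimated reach, and flags from partner networks, and only the fastest-moving handful is queued for human assessment. The class, signal names, and weights are illustrative assumptions, not anything specified in the paper.

    # Purely illustrative sketch: triage content by how it spreads, not by trying
    # to classify every item as synthetic. All names, signals, and weights below
    # are assumptions for the example, not details from the paper.
    from dataclasses import dataclass

    @dataclass
    class ContentItem:
        item_id: str
        shares_last_hour: int   # reshare velocity right now
        estimated_reach: int    # audience exposed so far
        partner_flags: int      # reports from fact-checkers and civil-society networks

    def dissemination_score(item: ContentItem) -> float:
        """Rank items by spread dynamics rather than by whether they 'look' fake."""
        return (2.0 * item.shares_last_hour
                + 0.001 * item.estimated_reach
                + 5.0 * item.partner_flags)

    def triage(items: list[ContentItem], capacity: int = 10) -> list[ContentItem]:
        """Return the small, fast-moving subset worth routing to human assessment."""
        return sorted(items, key=dissemination_score, reverse=True)[:capacity]

    # Example: only the rapidly spreading clip gets queued for review.
    queue = triage([
        ContentItem("clip-a", shares_last_hour=40, estimated_reach=120_000, partner_flags=2),
        ContentItem("clip-b", shares_last_hour=3, estimated_reach=900, partner_flags=0),
    ], capacity=1)
    print([item.item_id for item in queue])  # ['clip-a']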

A New Kind of Institution

The authors propose that countries establish Electoral Integrity Institutions—specialized bodies that wouldn't duplicate existing electoral commissions but would "act as the central coordinating hub, ensuring alignment and accountability across relevant stakeholders." These institutions would operate through five core functions:

  1. FACILITATE collaboration among diverse stakeholders—government agencies, tech platforms, civil society, academia, and media

  2. SCAN the digital space by combining human networks and AI tools to identify threats

  3. ASSESS content effectively and impartially, balancing speed with responsibility

  4. ACT with both power and accountability through strong mandates and democratic oversight

  5. LEARN via feedback loops to rapidly adapt to new challenges

The paper is rich with practical guidance drawn from real-world examples. Sweden's Psychological Defence Agency shows how to balance national security with civil liberties by focusing only on foreign disinformation while treating domestic misinformation as a "vulnerability" requiring citizen education rather than enforcement. France's VIGINUM demonstrates the importance of separating detection capabilities from enforcement power—the agency identifies threats but leaves corrections to democratic institutions like courts and media. Taiwan's collaborative fact-checking platforms like Cofacts show how crowdsourced verification can work at scale across both public social media and private messaging apps.

The Flexibility Advantage

Done right, institutions offer more flexibility and adaptability than rigid laws. Consider the contrast with the recent TAKE IT DOWN Act, which, under the guise of protecting against non-consensual sexual deepfakes and revenge porn, hands broad takedown power—including to the President—to anyone seeking to force the removal of protected speech or to misuse the process for their own advantage. Despite good intentions, this badly drafted bill may end up doing more harm than good.

By contrast, adaptive institutions can adopt new practices, tools and communities and adjust their approaches as threats evolve. They offer a broader, more nimble toolkit than policy alone.

Critical Gaps and Questions: Defense or Offense?

But this excellent and thought-provoking paper also raises questions about how to define the problem institutions are designed to solve.

The paper focuses on designing an institution that responds to harm rather than one that proactively improves elections. While protecting against deepfakes matters, what institutions do we need if we want to create better elections—more participatory, more representative, more responsive to citizen needs?

The authors use electoral integrity as a compelling test case to demonstrate that institutional design gives us concrete pathways forward. When we face complex challenges that span traditional boundaries—bringing together government, technology, civil society, and citizens—we need institutions purpose-built for collaboration and adaptation. But do we design institutions to combat synthetic media, or to forge freer, fairer, better elections? 

We don't have the resources or attention to do both equally well, and when we design our institutions to be primarily reactive to problems, we miss the opportunity to be proactive about possibilities. The choice shapes not just what we build, but what kind of democracy we become.

In addition, the paper doesn't have room to fully address the substantial costs of building and running new institutions, or to fully make the case that existing electoral commissions couldn't be revamped and enhanced rather than bypassed. I would also love an expanded discussion of how to measure whether these institutions actually work.

The quality of our dreams matters. But as March and Olsen remind us, what distinguishes dreams from reality is the quality of institutions we build to implement them. In an age of AI and synthetic content, that institutional imagination may be democracy's most crucial capacity.

Read the full paper: Safeguarding Elections in the Age of AI and Synthetic Content
