This week Justice Ketanji Brown Jackson decried the "relentless attacks" on judges, warning that they ultimately "risk undermining our Constitution and the rule of law." Her warning points to a troubling pattern that extends beyond judges to those who administer our democracy at every level.
Consider Ruby Freeman and her daughter Wandrea "Shaye" Moss, African-American election workers from Georgia. After right-wing election deniers falsely accused them of ballot tampering in 2020, they could no longer go to the grocery store or walk down the street without being harassed. "I've lost my name, and I've lost my reputation," Freeman testified tearfully before Congress. "I've lost my sense of security — all because a group of people... scapegoated me and my daughter, Shaye, to push their own lies about how the presidential election was stolen."
Without the right guardrails, warns law professor Spencer Overton in a 2025 article in the Iowa Law Review, "deepfake video, synthetic social media content appearing to come from thousands of people, and other AI applications could be targeted to attack the credibility of and threaten election workers like Ruby Freeman, and deter them from serving as election workers." Imagine AI-generated video showing Freeman tampering with ballots, synthetic audio of her "confessing" to election fraud in a perfectly mimicked voice, and thousands of social media posts from AI-enabled bots spreading the Big Lie.
In a forthcoming companion piece in the Utah Law Review, however, he offers the counterbalance, showing how AI could potentially help election officials detect misinformation campaigns, translate voting materials for language minorities, and allocate resources more equitably to prevent long lines at polling places in communities of color.
Overton's magisterial, must-read two-part work is one of the first comprehensive academic examinations of how AI intersects with race and democracy. While most AI discourse is dominated by technical specialists who rarely address racial impacts, or civil rights advocates who lack technical knowhow, Overton bridges this divide and centers race in discussions of our technological future.
In addition to decades of voting rights scholarship, Overton served as president of the Joint Center for Political and Economic Studies, America's Black think tank, from 2014 to 2023, where he worked directly with the Congressional Black Caucus and other policymakers to increase diversity and advance racially equitable policies. I came to know him when we served together on the Obama presidential transition; he later worked in the Justice Department during the Holder era. His practical experience informs his analysis in ways purely academic or technical approaches cannot match.
In a field dominated by either uncritical techno-optimism or pessimistic determinism, Overton's assessment across two complementary articles offers a framework for evaluating when AI might harm or help the transition to a truly inclusive multiracial democracy. This is especially important when we recognize that race remains the most significant demographic factor shaping American voting patterns.
America's Changing Demographics: The Context for Overton's Analysis
Overton's analysis arrives at a pivotal moment in America's demographic evolution. The United States, he points out, is undergoing a profound transformation: people of color have grown from just 15% of the U.S. population in 1960 to 41% in 2020, and are projected to become a majority by 2050. Immigration patterns have shifted dramatically, with Europeans accounting for 75% of immigrants to the United States in 1960 but only 20% by 2010, as immigration from Asia, Africa, and Latin America has increased.
Overton worries that AI will exacerbate existing "racial polarization, cultural anxiety, antidemocratic attitudes, racial vote dilution, and voter suppression."
The Racial Harms of AI to Democracy
Overton identifies four primary categories of harm that AI poses to racially inclusive democracy:
Information Integrity Challenges
AI dramatically increases the "speed, scale, scope, and sophistication" of racial disinformation. Beyond the headline-grabbing deepfakes, Overton documents how AI facilitates racial impersonation and infiltration of community deliberation. For example, he notes that in 2016, 37% of fake Facebook pages created by Russian operatives targeted Black audiences, despite African Americans comprising only 12.7% of the U.S. population.
AI-powered microtargeting and manipulative chatbots pose unique dangers to communities of color that have experienced historical cultural conquest. As Overton argues, "For people who have internalized experiences like forced assimilation at Indian boarding schools, punishment for speaking Spanish at lunch in the cafeteria at school... manipulative [microtargeting] facilitated by generative AI can represent a continuation of unfair cultural conquest that violates values of autonomy, choice, expression, association, and equality."
Model Design Problems
Even without malicious intent, AI models trained on datasets that underrepresent people of color lock in frameworks and political perspectives of a shrinking share of the population. "These tools can affect the electoral process," Overton writes, "and in turn reproduce and even deepen current inequality in voter participation, political representation, and policymaking process."
Language barriers represent a particular challenge. People of color account for over 86% of limited English-proficient Americans, and foundation models are dominated by English, with other languages underrepresented. This means AI tools for democratic participation may be less accurate and effective for non-English speakers.
Surveillance and Chilling Effects
Law enforcement's warrantless deployment of AI-powered analytics of mobile phone location data and social media content disproportionately targets communities of color and chills political speech. After the killing of George Floyd, for example, police used AI tools to monitor social media posts and track protesters' whereabouts.
As Freedom House noted in a report Overton cites: "The chilling effect on free expression caused by increased surveillance is well documented. Activists and journalists who might otherwise hold governments to account for wrongdoing are more inclined to self-censor."
Election Structure Vulnerabilities
Election offices in communities of color are particularly vulnerable to AI-powered cyberattacks, nuisance open-records requests, mass frivolous voter challenges, and threats against election workers. Additionally, automated systems used in voter list maintenance and signature verification have shown significant racial disparities: one study of Wisconsin voters found that the rate at which voters were erroneously flagged as having moved was 141% higher for people of color than for whites.
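To make the arithmetic behind a figure like "141% higher" concrete, here is a minimal sketch in Python of how an auditor might compute the relative disparity in erroneous-flag rates between two groups. The counts are hypothetical placeholders chosen for illustration, not figures from the Wisconsin study.

```python
# Sketch: relative disparity in erroneous "moved" flags between two groups.
# The counts below are hypothetical placeholders, not data from the
# Wisconsin study cited above.

def erroneous_flag_rate(flagged_in_error: int, total_voters: int) -> float:
    """Share of a group's voters wrongly flagged as having moved."""
    return flagged_in_error / total_voters

# Hypothetical audit counts, for illustration only.
rate_voters_of_color = erroneous_flag_rate(flagged_in_error=2_410, total_voters=100_000)
rate_white_voters = erroneous_flag_rate(flagged_in_error=1_000, total_voters=100_000)

# "X% higher" means the first rate is (1 + X/100) times the second.
relative_disparity_pct = (rate_voters_of_color / rate_white_voters - 1) * 100

print(f"Voters of color: {rate_voters_of_color:.2%} erroneously flagged")
print(f"White voters:    {rate_white_voters:.2%} erroneously flagged")
print(f"Relative disparity: {relative_disparity_pct:.0f}% higher")
# With these placeholder counts the script prints "141% higher": the first
# group's error rate is 2.41 times the second group's.
```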
AI's Potential Benefits for Racially Inclusive Democracy
Despite these substantial risks, Overton's companion article argues that properly designed AI applications could help reduce racial disparities in political participation by:
- Enhancing community organizing by helping identify and engage low-turnout voters in communities of color
- Providing language access through translation services for language minorities
- Improving resource allocation to enable election administrators to distribute voting resources more equitably (a simple allocation sketch follows this list)
- Detecting disinformation and suppression targeting communities of color
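The resource-allocation point can be made concrete. Below is a minimal sketch, in Python and with entirely hypothetical precinct names and figures, of how an election administrator might distribute a fixed number of voting machines in proportion to expected demand so that high-turnout precincts are not systematically under-equipped. It illustrates the general idea, not a method proposed in Overton's article.

```python
# Sketch: allocate a fixed pool of voting machines across precincts in
# proportion to expected demand (expected voters x minutes per voter).
# Precinct names and numbers are hypothetical.

def allocate_machines(expected_voters: dict[str, int],
                      minutes_per_voter: float,
                      total_machines: int) -> dict[str, int]:
    """Give every precinct one machine, then distribute the rest by demand."""
    demand = {p: v * minutes_per_voter for p, v in expected_voters.items()}
    total_demand = sum(demand.values())

    allocation = {p: 1 for p in expected_voters}           # floor of one machine each
    remaining = total_machines - len(expected_voters)

    # Proportional shares of the remaining machines, then largest-remainder rounding.
    shares = {p: demand[p] / total_demand * remaining for p in demand}
    allocation = {p: allocation[p] + int(shares[p]) for p in shares}
    leftovers = total_machines - sum(allocation.values())
    for p in sorted(shares, key=lambda q: shares[q] - int(shares[q]), reverse=True)[:leftovers]:
        allocation[p] += 1
    return allocation

# Hypothetical example: three precincts sharing 20 machines.
print(allocate_machines({"Precinct A": 4_000, "Precinct B": 9_000, "Precinct C": 2_500},
                        minutes_per_voter=6.0, total_machines=20))
# -> {'Precinct A': 5, 'Precinct B': 11, 'Precinct C': 4}
```

A real system would also weigh check-in times, historical wait data, and registration trends; the point is simply that equitable allocation can be expressed as a transparent, auditable rule.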
Overton also suggests that AI could enhance government responsiveness to communities of color, with civil rights organizations using AI to detect discrimination in voting, housing, and employment, and identify racial impacts of proposed policies.
However, Overton is clear-eyed about the limitations of current "bridging technologies" that claim to build connections across social divides. Current algorithms risk silencing minority perspectives and homogenizing debates, with little evidence they effectively manage racial polarization.
A Framework for Moving Forward
Recognizing that AI's relationship to racially inclusive democracy is neither predetermined nor binary, Overton proposes four core principles that should guide both regulation and private sector practices:
1. Anticipate Racial Harms to Democracy
Overton argues that AI developers must proactively identify and mitigate potential racial harms before deployment. As he writes:
"American democracy should be protected from automated systems that facilitate foreseeable racial harms, including but not limited to racial polarization, psychometric manipulation, racially-targeted deception, vote dilution and suppression, and racial entrenchment."
This principle is particularly important given the concentrated power of AI development in companies whose "workforces and governance structures [...] do not represent the diversity of our democracy."
2. Facilitate Pluralism and Prevent Algorithmic Discrimination
Beyond simply minimizing bias, Overton calls for AI systems that actively support democratic pluralism rather than homogenizing perspectives. He explains:
"This is not simply limited to unfair bias; it includes the design of many models that scrape the web for data and mimic the content of communities and perspectives that are most visible, mathematically defaulting to averages or dominant patterns. This automation of homogenization is a significant racial harm to a pluralistic, liberal democracy that values respect for the coexistence of diverse interests and viewpoints."
This principle recognizes that technical "neutrality" often reinforces existing power structures and that truly inclusive AI requires affirmative design choices that make space for diverse perspectives.
3. Mitigate Racial Disinformation and Manipulation
The third principle addresses how AI can facilitate targeted manipulation of communities of color. Overton asserts:
"Systems should not collect broad sets of data from various contexts, and then use that data to target particular racial groups or individuals with tailored messages designed to manipulate them, deceive them in political participation, stoke cultural anxiety or racial polarization, or chill them from exercising expressive liberties."
This principle would require both legal guardrails and technical solutions like watermarking synthetic content and enhanced privacy protections specifically designed to protect communities of color.
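To give a sense of what watermarking synthetic content involves, here is a toy Python sketch of the statistical idea behind one published family of text watermarks: generation is nudged toward a keyed "green" subset of tokens, and a detector later tests whether a suspect text contains far more green tokens than chance would predict. It is a deliberately simplified illustration (real schemes key the green list on surrounding context), not a description of any specific system Overton endorses.

```python
# Toy sketch of statistical watermark *detection* for AI-generated text.
# If a generator preferred a keyed "green" subset of tokens, watermarked text
# will contain noticeably more green tokens than the ~50% expected by chance.
import hashlib
import math

SECRET_KEY = "demo-key"  # hypothetical key shared by generator and detector

def is_green(token: str) -> bool:
    """Deterministically assign each token to the green or red list."""
    digest = hashlib.sha256((SECRET_KEY + token.lower()).encode()).digest()
    return digest[0] % 2 == 0   # roughly half of all tokens are "green"

def watermark_z_score(tokens: list[str]) -> float:
    """Z-score of the green-token count against the 50% chance baseline."""
    n = len(tokens)
    greens = sum(is_green(t) for t in tokens)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

# A large z-score (say, above 4) is strong evidence of the watermark;
# ordinary human-written text should hover near zero.
sample = "officials certified the results after a bipartisan audit".split()
print(round(watermark_z_score(sample), 2))
```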
4. Provide Meaningful Accountability
Finally, Overton emphasizes that accountability mechanisms must be robust enough to address racial harms:
"Those who develop and deploy automated systems should be held accountable when their systems cause reasonably foreseeable racial harms to democracy. Private industry should adopt metrics and standards to ensure accountability, and actually enforce policies they voluntarily enact. Government should adopt laws to ensure accountability."
In his second article, Overton builds on these protective principles with a vision for how AI can actively advance racially inclusive democracy, emphasizing universal access:
"Advocates of racially inclusive democracy should anticipate and embrace technological change while recognizing the risks of concentrated corporate influence over AI-powered democracy…To address these concerns, federal and state lawmakers should enact laws to create public-option AI—publicly funded platforms, computational capacity, models, datasets, and applications that advance the public interest.”
Critical Assessment: Strengths and Limitations
Overton's analysis represents a monumental contribution to the field, integrating insights from political science, constitutional law, civil rights law, and technology policy. His rejection of both techno-utopianism and technological determinism is refreshing in a field often marked by extremes. Concrete examples make abstract concepts tangible, and his case-by-case weighing of AI's benefits and risks for democracy in communities of color concludes that some tools (like translation services) likely offer benefits that outweigh their risks, while others (like persuasion chatbots) currently pose more dangers than advantages.
The biggest challenge for Overton is that much of his initial work was completed before Trump assumed office. His vision of robust AI oversight is unlikely to find an audience among current federal policymakers, which is why I will encourage him to pivot his approach toward state and local audiences.
Also, in order to focus on racial and political consequences, Overton at times inevitably elides distinctions between different types of AI. His broad definition of artificial intelligence, which encompasses everything from simple automated systems to sophisticated generative models, sometimes obscures important technical distinctions. Computer scientists might argue that treating voter list maintenance systems and large language models as variants of the same technology oversimplifies complex technical realities.
The Stakes for Democracy
Despite these limitations, Overton's core insight remains powerful: we cannot understand the impact of AI on democracy without taking race into account.
As Overton concludes:
"A central role of AI and the law could be—and should be—to facilitate our transition to a well-functioning, inclusive, pluralistic democracy—one that respects both identity and individual autonomy and enables cross-group engagement, coalition building, and collective well-being."
In an increasingly diverse America with deepening racial polarization, whether AI entrenches or helps overcome racial hierarchy in democracy may be one of the most consequential questions of our time. Overton's framework doesn't provide all the answers, but it asks the right questions and offers a path forward that acknowledges both the promise and peril of our technological future.
The case of Ruby Freeman and Shaye Moss reminds us that these aren't abstract concerns but deeply human ones. Without thoughtful intervention, AI threatens to supercharge the racial threats to democracy they experienced. With proper design and governance, however, it might help build the inclusive democracy America has long aspired to become.