
"Time is Running Out," declared a recent Time Magazine headline covering a new statement calling for a ban on the development of "superintelligent AI." 

The Future of Life Institute (FLI) statement is the latest in a string of open letters warning that advanced artificial intelligence poses an existential threat to humanity.

Signed by hundreds of policymakers, entertainers, AI researchers, and business leaders, including "AI godfathers" Geoffrey Hinton and Yoshua Bengio, the statement reads: "We call for a prohibition on the development of superintelligence, not lifted before there is 1) broad scientific consensus that it will be done safely and controllably, and 2) strong public buy-in."

Advances in AI raise legitimate ethical and governance questions that deserve serious debate. But this statement exemplifies how the broader AI doomerism movement misdirects that debate.

The doomers’ warnings rely on undefined concepts that cannot be governed, conflate imaginary threats with real harms that demand immediate action, and ultimately serve the interests of tech corporations seeking to escape democratic accountability.

The Vagueness of "Superintelligence" Makes It Ungovernable

The statement’s first flaw is that it fails to define "superintelligent AI." While its preamble warns of systems that "can significantly outperform all humans on essentially all cognitive tasks," this description offers little clarity about which abilities, reasoning processes, or measurable outcomes would constitute superior intelligence.

This matters because artificial superintelligence (ASI) is not a scientifically defined, empirically measurable, or universally agreed-upon benchmark. Instead, it is a theoretical concept used to guide speculative debates about advanced AI systems that don't yet exist. Without clear grounding, superintelligence becomes a catch-all for anxieties about technological change.

AI doomers exploit this vagueness to make sweeping claims about existential catastrophe that are impossible to verify or falsify. In their new book, If Anyone Builds It, Everyone Dies, Eliezer Yudkowsky and Nate Soares argue that the race to build ASI will likely exterminate humanity.

However, as Adam Becker notes in his review, the authors offer little scientific evidence, relying instead on shaky analogies, unfalsifiable assumptions, and malleable definitions that accommodate whatever conclusion they wish to reach.

Effective AI governance requires policy grounded in evidence. Without this foundation, crisis rhetoric distracts us from understanding real-world harms and knowing how to use these tools for good.

Conflating Real Harms with Imaginary Ones Undermines Effective Governance

Like much of the rhetoric around ASI, the FLI statement lumps real problems we face today together with speculative fears of human extinction, distracting from the concrete work of democratic governance. The letter's preamble states that AI companies' pursuit of superintelligent systems raises concerns "ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction."

From mass layoffs and algorithmic bias in hiring, policing, and lending to deepfakes that threaten democracy and new forms of surveillance, AI's harms are neither theoretical nor distant. And unlike vague threats of human extinction, these are problems that policymakers and communities are confronting now.

Cities like San Francisco and Portland have banned government use of facial recognition technology, while New York City has enacted biometric privacy laws. States have passed more than 50 laws placing guardrails on the creation and use of deepfakes. Workers are winning contract provisions through collective bargaining that restrict employers' ability to use AI to replace them or violate their privacy.

These are not the only harms posed by the misuse of artificial intelligence, but such wins show that the challenges can be addressed through democratic oversight and collective action.

The Superintelligence Narrative Serves AI Firms Who Are Corrupting Democracy

Finally, the fixation on superintelligence amplifies a narrative that serves Big Tech's political and financial interests. 

The concept of ASI was popularized by Nick Bostrom, whose 2014 book Superintelligence argued that uncontrolled AI could threaten human survival. Though critics have linked his ideas to scientific racism, his warnings have been embraced by prominent Silicon Valley figures. By amplifying Bostrom’s dystopian narrative while constructing a utopian vision where AI solves all humanity's problems, Big Tech leaders have positioned themselves as the stewards of our technological destiny.

Embracing AI extinction fears became part of Silicon Valley's strategy to shape policy. In 2023, Matteo Wong observed how executives from OpenAI, Google, and Microsoft invoked extinction fears while lobbying lawmakers to put guardrails on frontier AI systems.

The rules they proposed were, as Wong put it, "defanged and self-serving," crafted to signal social responsibility while casting their technologies as cutting-edge. The narrative also conveniently shifted attention from controversies, such as copyright infringement and worker exploitation, that had already caught policymakers' attention.

Big Tech's strategy has evolved from performative calls for "responsible" regulation to overt efforts to reshape governance rules in ways that protect corporate power. Amazon, Meta, Tesla, Palantir, and other major tech companies are aligning with right-wing political forces that seek to weaken democratic oversight and undermine labor rights. Many are spending hundreds of millions of dollars on lobbying to roll back guardrails on their technologies.

As Becker observes, AI doomers warning of runaway machine intelligence are describing a distorted reflection of our current reality: "Instead of superintelligent AI, we have super-wealthy tech oligarchs. Like the hypothetical AI, the oligarchs want to colonize the universe, and like the hypothetical AI, they do not seem to care much about the desires and well-being of the rest of us."

By amplifying the "superintelligence" narrative, AI doomers deflect attention from the real crisis: the concentration of power in tech corporations, which is already reshaping democratic governance in their favor.

Conclusion

The AI doomerism movement distorts public understanding and misdirects governance efforts. By framing AI as an uncontrollable existential threat rather than a technology shaped by human choices, it obscures urgent challenges of corporate power, surveillance, labor exploitation, and democratic accountability. If we are serious about governing AI responsibly, the focus must shift from speculative fears to the tangible systems, actors, and policies that determine how AI shapes our lives today.
