The Dark Side of Progress: Harari's Grim AI Predictions in Nexus

Nexus offers a sweeping historical exploration of information networks, illustrating how technology has shaped and reinforced power dynamics. Harari’s pessimistic outlook on AI highlights its potential dangers but falls short in proposing solutions. The book's vivid historical examples engage, yet its lack of practical guidance leaves readers wanting more.

Beth Simone Noveck

At 500 pages, Yuval Noah Harari's Nexus: A Brief History of Information Networks from the Stone Age to AI offers a sweeping and erudite journey through the history of technology, illustrating its complex entanglement with politics and power. The book’s core thesis is profound, albeit obvious: technology is inherently political, shaped by those who wield it, and often reinforces existing power structures.  

Human information networks—from cuneiform to code, the printing press to AI agents—amplify our capacity to store and share knowledge. But we can wield technology to pursue either truth or order. That is to say, we can use communication tools to foster self-correction or to reinforce control, as totalitarian regimes have done by employing surveillance technology to spread propaganda.  

Harari traces the development of information networks from ancient oral traditions, through the centralization of religious texts, to the expansion of wealth and social divisions during the Dutch Golden Age. He examines Stalin's Soviet Union, where state-controlled information networks stifled dissent, and concludes in modern Silicon Valley, where tech giants wield unprecedented power. His aim is to show that technology's impact is not deterministic: we can use our communication tools to pursue democracy (self-correction) or totalitarian governance (centralization) of our information networks.

Despite his assertion that technology’s impact is not deterministic, Harari's outlook on AI remains decidedly fatalistic. Although he claims the future is not predetermined, his portrayal of AI suggests a dystopian sci-fi scenario where humanity’s path seems doomed to end in catastrophe. 

Unlike earlier writing technologies, AI can learn and take action without direct human oversight, and Harari argues that, despite our brilliance in creating information networks, we may be tripping over our own ambition and building the tools of our own extinction.  

He contends that the profound changes enabled by AI are leading humanity toward a grim future for two main reasons: first, AI resembles a black hole, hoovering up massive amounts of data about individuals and societies; and second, it imposes categories and patterns on the world that do not always align with truth or reality.

By continuously drawing in personal information, often without consent, AI is eroding human agency and control. AI-driven algorithms on social media, for instance, collect data to personalize content, but this personalization primarily serves corporate interests, not the users themselves. The potential for abuse extends far beyond targeted ads, as AI technology could enable surveillance states where privacy becomes a relic of the past.  

Harari points to chilling real-world applications, such as Iran's use of AI-enabled facial recognition technology to enforce strict dress codes for women. In 2023, Iranian authorities claimed to have sent over a million SMS warnings to women seen driving without a hijab, using AI to identify them on the road. This example demonstrates how AI can extend the reach of authoritarian control and make state surveillance ubiquitous.  

Harari’s second argument revolves around the limitations of AI's pattern recognition, which he describes as simplistic and rigid, failing to grasp the complexities of human life. AI’s tendency to impose narrow categories mirrors historical information networks—such as state propaganda in Stalin's Soviet Union—that were used to simplify and control narratives. In the modern context, AI-driven predictive policing can perpetuate biases by targeting marginalized communities based on past data, reinforcing rather than resolving social inequities. Similarly, AI’s use in hiring can inadvertently discriminate against candidates by reinforcing stereotypical traits associated with "successful" employees, based on biased training data. In both cases, the patterns that AI identifies do not necessarily reflect objective truth; rather, they can mirror and exacerbate existing social inequalities.  

While Harari presents an erudite and entertaining set of historical examples—and Nexus is an engaging read—the book falls short when addressing the future of AI. 

The warnings about AI echo familiar “doomerist” narratives from figures like Gary Marcus and Geoffrey Hinton, raising alarms about our inability to align AI with human values. Yet, despite asserting that the future is not determined, the parade of horribles is not balanced by any discussion of how we can use AI to strengthen democracy, improve governance, or promote social good.  

Harari essentially lays out a more nuanced and historically grounded version of the "Paperclip Apocalypse," a philosophical thought experiment imagining that an AI given the sole goal of making paper clips could stop at nothing to achieve it—even if it means destroying humanity. He argues that we currently lack effective mechanisms to align AI with our values. The tendency of social media algorithms to auto-play sensational content for increased user engagement prefigures the challenge we face with AI agents making decisions in high-stakes areas like parole and loan approvals. As Harari puts it, "The more powerful and independent computers become, the bigger the danger."

Like many doomerist narratives, Nexus gives short shrift to potential solutions and counterarguments. While it claims that we need stronger institutions to govern AI, it stops short of offering a roadmap for building them or exploring how we might use AI itself to regulate and oversee AI more effectively. Despite lengthy discussions on how older information technologies like newspapers enabled democracy, there is no comparable vision for the present or future.  

Ultimately, Nexus provides a compelling and richly detailed historical perspective on how information networks have shaped societies, persuasively arguing that technology is political and that we need effective institutions if we are to manage these new tools responsibly. Harari is right that we made poor choices with our response to social media. But if we want to make the right choice with AI, we need to spend less time sounding the alarm bell and more time designing the future we want.  

Harari's pessimism about AI leaves readers with vivid warnings but little in the way of practical advice. The book's global historical romp entertains and educates, but its lack of practical solutions leaves the question of how to steer AI’s development toward a better future largely unanswered. Harari aptly warns that AI is shaped by our choices and calls for regulation, but the specifics of how to build a safer and more ethical future remain elusive.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.