

Many revolutions begin in the shadows. Small, local, largely invisible developments quietly take hold, expand, and eventually upend life before observers notice. Only when people look back do they realize that something fundamental has shifted. 

That revolution is happening now. The ground beneath human judgment, agency, and society as a whole is quietly rearranging itself as artificial intelligence (AI) systems are woven into the fabric of every system that affects everyday life. 

To be sure, there are now high-stakes, very public arguments about the potentially existential impact of AI. But they are unfolding alongside a much less visible change: AI systems are being integrated into the structures of society, shaping how opportunity is distributed, services are delivered, risks are managed, and human rights are experienced.

This is a key insight we unearthed as we recently explored the nature of human resilience in the Age of AI. Our report on these findings, "Building a Human Resilience Infrastructure for the AI Age," draws on more than 160 detailed essays from an international and cross-disciplinary group of 386 experts. 

When we set out to ask a global array of experts about resilience, we were mostly thinking about how, over the millennia, individuals developed inner resources to cope with disruption, loss, and change. We drew on an impressive body of scholarly literature about grit, adaptation, and the human capacity to absorb shocks and bounce back. We expected answers at that scale.

What we got instead was something more urgent. The 386 experts—academics, technologists, policy thinkers, futurists, and practitioners whom we surveyed—came back in essay after essay with a different diagnosis. 

They acknowledged that individual resilience always matters. But the bigger issue, they said, is that AI is becoming a full-scale societal operating system so pervasive and so quietly powerful that relying only upon individual resilience strategies in the face of this change is like trying to carry an umbrella into a typhoon. 

They argued that, to retain autonomy and well-being, it is necessary to create highly coordinated, sweeping change at the level of humanity’s societal infrastructure to support human resilience. 

Fully 82% of respondents said AI will play a significantly larger role in shaping people’s lives and key societal functions within the next ten years or less, and 56% expect AI to influence, guide, or control most or nearly all human activities and decisions.

At the same time, 45% of experts said humans will be only “a little” or “not at all” resilient in the face of this level of change, revealing a widening gap between technological acceleration and human preparedness.

They warned that the window for proactive intervention is small. The time for developing this infrastructure is now.

Pushing back against the ‘slow drift’ toward unwitting compliance 

The central risk described by these experts is not a single Hollywood horror-style AI event. They said accelerating AI adoption is already leading to a reallocation of human agency, a setting in which people and institutions will find it hard to question, contest, or even notice what has changed. 

That drift can look like “progress” in the short term. But it has a price – the gradual weakening of human judgment, accountability, shared truth, and the individual agency that makes self-government possible. 

Experts repeatedly warned of three reinforcing dynamics: the loss of human agency, the fragmentation of shared reality, and the rise of automated complacency driven by over-trust in fluent AI systems.

Many contributors reached for the same metaphor independently: AI and machine learning have already become infrastructure, forming the “environment” in which we live – so embedded in the functions of daily life that people simply accept it, or don’t even notice it is there, and feel they cannot opt out. 

Alf Rehn, professor of innovation and design management at the University of Southern Denmark, described it in his essay this way: “AI will diffuse responsibility by design. ... Resilience in an AI-shaped world won’t just be about bouncing back. It will be about not vanishing while everything keeps running. The most dangerous kind of resilience is the kind that looks like stability but is actually surrender, because it feels good in the moment and empties the room over time. That’s why we need cognitive triage, yes, but also the wisdom to know when triage becomes abdication.”

Agenda for governments: To counter this surrender, experts recommend a comprehensive, multi-layered agenda to build a resilient infrastructure. This starts with governments. Among the many recommendations the experts offer are forging international treaties and establishing enforceable “red lines” for AI performance, conducting independent pre-deployment safety audits, and building a robust authenticity infrastructure, such as standardized watermarking and provenance-tracking, so we can distinguish between what is real and what is generated. 

They also emphasize the need for algorithmic contestability, the ability for individuals to challenge automated decisions, as a core institutional safeguard.

Agenda for AI developers: Popular online platforms are generally designed for attention capture, monetization, and surveillance. These experts urge a shift toward building “friction” and stop points into AI processes to encourage people to reflect on their choices rather than mindlessly deferring to an algorithm. They suggest that AIs should be trained to cite and honor the intellectual and psychological foundations of humanity, and that their outputs should be presented as probabilistic information rather than deterministic truth. Moreover, these experts said we need AI systems and knowledge platforms that buttress our capacity for altruism and compassion, not eliminate it.

Agenda for business leaders: Many essayists recommended that business leaders value human augmentation over replacement and adopt policies that address the psychological impact of AI-driven change. People’s self-worth, identity, and economic security are challenged when work roles change or jobs are lost. Some suggest creating “human-only zones” – areas of work where AI is intentionally prohibited to preserve the unique value of human labor and craft. Without these boundaries, the “AI hurricane,” as Srinivasan Ramani calls it, threatens to destroy the social fabric of half of humanity.

Agenda for educators: These experts say we need to move beyond simple digital literacy and toward what one referred to as “existential literacy” – the development of hybrid skills that blend adaptive mindsets, emotional intelligence, epistemic vigilance, creativity, and an understanding of AI systems that helps people develop and advance their own agency and agenda. Existential literacy encourages the development of new norms and the cultivation of a deep understanding of how technologies shape our goals, values, and identities. It teaches people to navigate life’s fundamental challenges, such as situations characterized by ambiguity, paradox, and anxiety, without surrendering their agency to a machine. 

Agenda for communities: Our respondents seek a heavy investment in local social capital. We need spaces that build positive community connections, strengthen social skills, and deepen citizen engagement. This includes pressing for distributed AI governance systems and participatory structures, such as local data trusts or citizen assemblies, that can influence how AI is deployed in our neighborhoods. One expert noted that “analog communities,” “dumb homes,” and “dumb phones” may allow people to find ways to exist outside of algorithmic mediation and constant surveillance. 

Agenda for individuals: For people, these experts say the path forward is one of active, attentive adaptation, the conscious cultivation of their own cognition and agency. Human flourishing requires that we operate in the AI “environment,” retaining epistemic vigilance, embedding “stop-and-reflect” practices into our digital lives, and consulting with others to maintain our moral accountability. We are encouraged to cherish the moments in which we must confront ambiguity, make our own decisions, and spend significantly more time away from screens. Most importantly, we must consciously cultivate in-person social relationships, lest we realize too late that this interaction with “primitive new intelligences” has cost us our irreducibly interior selves.

Throughout the report, a consistent theme emerges that resilience cannot be treated as an individual trait. It must be built as shared infrastructure—legal, educational, economic, and social—capable of preserving human agency in systems increasingly shaped by AI.

Some final thoughts from contributors to our report, which encapsulate this need:

Alison Poltock, co-founder of AI Commons UK, wrote, “We are in a moment of epistemic shift. ... The developmental frameworks shaping identity, agency, and social orientation are shifting. ... This is the terrain of vulnerability. Yet there is no shared conversation. No civic space where this new reality is named, let alone addressed. We are operating on outdated institutional architecture, strapping jetpacks to systems built for another age and allowing our children to grow up in the gap.” 

Mel Sellick, founder of the Future Human Lab, said, “AI has become the infrastructure through which all relating now happens. Even when we think we are not using AI directly, we are constantly interacting with what AI has already touched. There is no ‘outside’ anymore. Some form of AI is upstream of everything.”

Sellick’s main point is stark and haunting: “We are the last generation that knows what human capacity felt like before it became inseparable from AI.”
