Our Love-Hate Relationship with Digital Technology

Many Americans are fearful in important ways about AI – particularly generative AI and large language models (LLMs) – and yet the user base is exploding. New research from the Imagining the Digital Future Center at Elon University looks at our love-hate relationship with emerging technologies.

Lee Rainie


We worry and worry about technology and yet continue to use and use it. 

That foundational paradox makes it hard to craft policy related to digital technologies. 

How do you craft effective consumer protection rules when Americans say they cherish their privacy, but ceaselessly use free services on their devices that compromise their privacy? How do you settle on data governance regimes when people say they want control of their data, but admit they never read the terms and conditions for the apps they use? How do you think about misinformation when Americans say they want it controlled, yet also want the freedoms of the First Amendment? How do you strike a balance when people are sure facial recognition will be misused, but also want law enforcement authorities to use it to capture criminals?

I come to this screen as a longtime fan and sometimes kibitzer with the wise and wonderful leaders of the GovLab, but without great insight into answering the questions above. Instead, I come bearing evidence of the newest paradox that will nettle policy makers. It’s the artificial intelligence (AI) version of deeply conflicting attitudes and behaviors among Americans. 

At the Imagining the Digital Future Center, we have found that Americans are fearful in important ways about AI – particularly generative AI and large language models (LLMs) – and yet the user base is exploding.


On the fear side, our surveys show that people are especially concerned about the ways AI systems will erode their personal privacy, threaten their opportunities for employment, change their relationships with others, harm basic human rights, and disrupt their physical and mental health. At the level of institutions and big systems, they also have great anxiety that AI will negatively impact politics and elections, further erode the level of civility in society, worsen economic inequality, and harm both K-12 education and higher education.


Those concerns are leavened to a degree by the public’s sense that AI will be helpful in health and science discovery. Still, overall and in broad terms these are grim expectations. 

And yet … the survey results we just reported show that 52% of U.S. adults already are LLM users, making LLMs one of the fastest-adopted consumer technologies in history – if not the fastest.


More striking was the array of ways people reported having human-like encounters with LLMs, including the 65% of users who reported having spoken conversations with the bots.

The rapid spread of LLMs is explained by their utility. Two-thirds of users use them like search engines; about half use them for brainstorming ideas and summarizing documents; a third use them to create presentations and plan things like trips; a quarter use them for planning social gatherings and writing computer code. And 23% have used LLMs to look up what the models say about people they know, while 18% have looked up what a model says about them.

In the end, though, even these users – many of them quite enthusiastic – have relatively bleak views about the future impact of LLMs. Again, these users, like the general population, are upbeat about AIs’ impact on medical and scientific breakthroughs. But they also think it likely that the models will worsen social isolation, create a net loss of jobs, surpass human intelligence and foment social unrest. They also think the LLMs will develop their own identity and goals.


Maybe one of the things we should ask them is to solve the riddle of why humans say they value one thing and then do something that completely undermines it.

 

Lee Rainie is Director of the Imagining the Digital Future Center at Elon University.

 

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.