
CNN recently reported (and we showcased in this blog’s News of the Week) that parents of some victims of the Parkland shooting used AI to recreate their children's voices, generating over 54,000 AI-voiced calls to lawmakers to urge Congress to address gun violence. The fight for gun control underscores the urgent need to shift the narrative around AI, if we are to harness its benefits effectively.

In his new book, What We’ve Become, Dr. Jonathan Metzl, my colleague at the Burnes Center for Social Change, posits that the battle for gun control is hindered by a public health narrative that naively overlooks liberty concerns. The freedom to protect oneself, often cited as a defense against a perceived overreaching state (and from a non-white, immigrant “other”), frequently overrides concerns about injury or death. This liberty—or really fear—narrative helps to explain why, in the wake of the horrific 2018 Tennessee Waffle House mass killings that are the centerpiece of Metzl’s excellent book, residents elected (and later re-elected) a candidate who successfully pushed through permitless carry. Metzl’s bottom line message? If we want to reduce gun violence, advocates need to take on the liberty argument.

The same government-is-the-problem-not-the-solution narrative has direct parallels to the conversation unfolding about artificial intelligence. Titans of tech have peddled the view that they (not tech-illiterate politicians in Washington) know better how to run the world. Adrienne LaFrance in The Atlantic critiques this as an "antidemocratic, illiberal movement," while Matteo Wong highlights the doomerism promoted by tech leaders.

It’s “boom time for doom time,” as Bryan Walsh says in a recent Vox article. But this simplistic, fear-induced narrative needs to be countered by a more nuanced and complex conversation about how to use AI to help us solve big problems.

The opposite of doomerism isn’t an equally naive utopianism. Instead, what we need in our approach to policy, research and journalism is a more sober and balanced discussion about ways to advance the public interest here and now and the hurdles that impede those advances.

We could have a good laugh over the breathless hyperbole coming out of Silicon Valley (and parroted in Washington) were this narrative not infecting our policymaking, dominating media headlines, and driving the focus of academic research. As with the liberty narrative in the gun debate, we need greater awareness that the doomerist narrative is manufactured to reinforce distrust of government.

Doomerism distorts our perception of the important role government needs to play: first, to safeguard civil rights, and second, to advance the use of technology for the public good through practical investment and proactive policies that ensure these technologies are used responsibly and work for the public interest.

Speculative, alarmist outlooks are skewing the discourse away from how to do more today to advance medical breakthroughs, rescue endangered languages, fight climate change, and improve the workings of government using AI. Such efforts are overshadowed in the headlines by Chicken Little-style narratives that the sky is falling. AI ethics needs to go beyond a predominant focus on risk avoidance and address how to proactively promote public interest uses of technology that advance equity and deepen democracy.

As we have seen with guns, the narrative matters. As Metzl commented in Time Magazine: "Democrats need to tie gun safety to the defense of the American public square." When we are told that we need to safeguard our liberty against an encroaching state, we go out and buy a gun despite the risks to life and limb. When we are wrongly (or at least exaggeratedly) told that artificial general intelligence is around the corner, we invest time, money, and attention in worrying about the robot apocalypse instead of asking how we can use these data processing tools, despite their many flaws, to reduce inequality, combat discrimination, and create a better world. Yet if we are to counter this narrative, we have to both recognize and engage with it. We have to address the fears that Silicon Valley, the media, academics, and politicians have fomented, and then go beyond them, showing how AI can also respond to risks, not simply create them.

By adopting a balanced, evidence-based approach to discussing AI, we can foster a more productive dialogue that prioritizes current benefits over speculative dangers, paving the way for AI to contribute meaningfully to societal advancement.