Last week, I turned in my English literature paper. While the final draft could be typed with version history, our outlines had to be handwritten in a Bluebook.
Handwriting them was tedious, but I understand the rationale. As an editor on my high school’s newspaper, I see two extremes. Some student writers use AI as a cheating machine to “generate” articles without thought, while others do not use it at all, missing out on the real upsides of its thoughtful deployment.
When you look at the national media narrative about AI, it’s no surprise that most high schools struggle to have an open, nuanced conversation. What’s missing is a shared understanding of what skills AI can assist with, not just replace. That gap doesn’t start in schools. It reflects the broader public conversation.
If you listened to the average academic, journalist, or self-described “thought leader,” you would likely believe that AI is a civilizational risk with few upsides, on par with nuclear war or a global pandemic. Recent headlines in The New York Times warn of “brain fry” from AI use, or the existential risk of an “AI apocalypse.”
At the same time, a different reality is unfolding, largely outside that spotlight. States like Massachusetts are using AI to unlock millions of dollars in federal benefits that residents didn't know they were eligible for. Cities like Boston are using AI to issue permits to small businesses faster than ever. Students who can't afford traditional tutoring can study for the SAT on a more level playing field.
Why is there such a disconnect?
Meet the "messenger class." Jerusalem Demsas, editor-in-chief of The Argument and former writer for The Atlantic, argues that a hyper-influential milieu of those who "determine the boundaries of debate" — from influencers to nonprofit heads to political staffers — tend to push a remarkably similar narrative.
They may work in different fields but share similar knowledge-class experiences, living in the same expensive cities: New York, D.C., and San Francisco, which tend to surface a similar set of concerns. While well-intentioned, Demsas argues the messenger class acts not "as a mirror to society," but as a spotlight on their own concerns, missing the issues that matter to lower-income, working-class Americans.
Demsas illustrates the mirror-versus-spotlight effect by pointing to the media’s lack of early coverage of the opioid epidemic and its increased coverage of a “union revival” around the time universities and newsrooms themselves began unionizing.
Nowhere is this clearer than in how the messenger class covers AI and work. The media has adopted a doomerist narrative about AI's impact on workers, but it is describing AI's impact on its own labor market, not on the one most Americans face.
Most Americans don't code or write Substacks for a living. The most common jobs are home health aide and retail salesperson. The reason these workers are less immediately vulnerable than the messenger class assumes is captured by Moravec's paradox: what is easy for a human is often hard for an AI, and vice versa.
Cognitive skills like advanced calculus or essay writing can be replicated by AI, while skills such as finger dexterity that took millions of years to develop are difficult to cultivate in a machine. The slow evolution of mass-market humanoids means that positions with lower educational requirements, such as dishwasher or food prep worker, tend to involve less exposure to AI in daily tasks.
As a result, many jobs that dominate public discourse are more exposed than those that define much of the broader labor market.
By contrast, those most likely to be helped by AI — whom I’ll call the experiential class — have little to no voice.
Although Demsas explains the lopsided nature of the reporting, she misses part of the story. It is not only about risk exposure. It’s also about visibility of benefit.
To see how AI can actually improve lives, you have to look in places the messenger class rarely does. Take the Individualized Education Plan (IEP), which prescribes special education services for 15% of American public school students.
These documents can be over 50 pages of dense legal language, and a challenge for any family to decode, let alone those whose native language is not English. Projects like AI-EP are already helping families use generative AI to simplify and translate these documents, making them accessible to all.
When these use cases remain invisible, the policy conversation narrows. The focus shifts to regulating AI rather than deploying it to improve access to services, reduce administrative burden, or expand opportunity. The question is no longer just how to contain AI's harms; it's whether the conversation is broad enough to account for where AI is already working and for whom. How to get AI tools into the hands of the home health aide, the immigrant parent, or the small business owner never gets asked at all.
This is, ultimately, a participation problem.
At my school newspaper, we make a deliberate effort as a staff not to seek out the same people for comment every time. It doesn’t require a structural overhaul; it just requires a willingness to seek out perspectives that are less visible but equally important. The national publications and policymakers that shape how America thinks about AI could try it too.
If the goal is to design systems that work for the public, then the people experiencing those systems — families navigating benefits, workers adapting on the job, communities interacting with government, and high schoolers like me — need to shape the conversation from the start.
Otherwise, we risk building policy around a partial view of the problem and missing where AI is already making a difference.