Next week, the new edition of Marion Nestle’s fabulous What to Eat Now will hit the shelves. (Pre-order it here.) The average American supermarket carries over thirty thousand products, so an evidence-based guide is more important than ever. The chapter on the Fish Counter is already available as a short volume, and it incisively makes the case that, despite years of research on the dangers of methylmercury and other contaminants, we’re largely left to decide for ourselves whether fish’s health benefits outweigh its significant risks. Without expertise in fish biology, it’s hard to know what to do. If you’re a kid or pregnant, the short answer is: caveat emptor.
Photo Credit: Steven Barclay Agency
Artificial intelligence poses a similar dilemma: how do we reap the benefits without swallowing the risks?
After all, there are a lot of stupid AIs (like the new so-called NEO robot or the friend.com surveillance necklace). All major generative AI companies use our data to train their models by default. They have inconsistent policies and data privacy protections, including for children. Americans overwhelmingly believe AI will diminish key human capacities, like empathy, deep thinking, and personal agency. Meanwhile, companies blame AI transformation and automation for what are really bad management decisions to fire workers, transferring wealth from workers to shareholders, as Amazon did this week.
It shouldn't have to be this way, but just as with food, each of us bears the burden of making wise choices about AI.
Federal policymakers are not going to step in anytime soon. The White House is focused on how to promote American corporate dominance. “Whoever has the largest AI ecosystem,” declares the Trump AI Action Plan, “will set the global standards and reap broad economic and security benefits.”
At the state level, legislatures have concentrated on preventing risks—safeguarding against deepfakes and AI bias. Since 2019, lawmakers have introduced more than 1,600 AI-related bills. The vast majority focus on guardrails—restrictions, audits, and bans—rather than on proactive strategies for using AI for the public good, as my colleagues leading the Rethink AI initiative on local government and artificial intelligence point out. These state-level rules, too, are at risk of federal preemption.
There’s no “nutrition label” for AI tools, no clear guidance on which ones are trustworthy, safe for kids, or designed to augment rather than exploit our labor and data. We do not yet fully understand how to get the most from these powerful new data-processing technologies while avoiding the risks.
So, we’re left to navigate this new ecosystem, much like at the fish counter: reading the fine print, cross-checking the claims, and balancing the potential benefits with the unseen risks.
AI risks making us dumber when we rely on it. Evidence suggests, however, that when people learn to work alongside AI, the outcomes for many workers are consistently better. AI-assisted teams outperform both humans and machines working alone. An NBER study found that call center operators who edited AI-suggested responses handled more calls, made fewer mistakes, and saw faster wage growth. Across industries, firms using AI with humans in the loop saw productivity rise by 10–25% overall, and by 20–40% at the task level.
My AI for Impact students at Northeastern built an AI tool that helps Massachusetts highway engineers find safety rules 78% faster by amplifying, not replacing, human expertise: engineers working with AI to improve resident services. An MIT review of 100 workplace studies found that human-AI teams were consistently more accurate and efficient than either humans or machines working alone. In other words: AI works best as a partner, not a predator.
AI is not an all-or-nothing proposition; it’s a spectrum that runs from harmful automation to powerful augmentation of human capacity. Our task is to move toward the latter.
I’ve been writing for some time now about how we need to invest in learning how to control our tools, lest they control us.
Studies show that organizations that invest in upskilling see higher productivity, better morale, and fewer layoffs. A U.S. Census Bureau study of 180,000 firms found that those investing in retraining saw productivity gains of up to 25% within a few years.
Companies that paired AI adoption with training and job redesign achieved double the performance improvements of those that didn’t. States like New Jersey lead by example, training every public employee in AI. We developed that training by asking workers what they want and need. When employees shape how AI is used, the technology works better, and so do they.
The danger isn’t that AI will make us dumber—it’s that governments, companies, and schools won’t make us smarter with it.
We need to know when to use these tools—and when not to. We need to understand what they’re good at, their risks, and how to recognize when others misuse them to perpetuate bias, perpetrate surveillance, and undermine human agency.
As with climate change, food safety, and other critical issues, shifting the burden to already beleaguered individuals is far from ideal. We need better regulation, more accountability, and greater transparency and explainability.
We need to direct public spending toward AI that serves the public interest, not corporate profit. And we need to stop introducing unregulated commercial tools into schools without the investments in teaching, research, or oversight necessary to ensure AI improves student learning rather than developers’ bottom lines.
To fight for this future, we need to be armed with the knowledge to use AI well. We cannot wait—any more than we can wait for the FDA or grocery stores or companies to “self-regulate” and ensure the safety of our food supply.
Just as we distinguish between albacore and skipjack, we need to learn to read the labels on the technologies shaping our lives. If the metaphor holds: AI, like fish, can be good for us—when handled wisely. The future of AI, like our food supply, depends not only on what companies sell us, but on what we’re willing to accept and how wisely we decide to consume it.
That’s what we’re building at InnovateUS: a peer-to-peer community where public workers learn—and teach—how to use AI for good.