What struck me in this conversation with B Cavello is how quickly the interview frame shifts from technology to institutional capacity. In a survey of more than 100 food security practitioners across 20 UN regions, governance consistently ranked among the top challenges facing food systems.
That finding matters. It suggests that the constraint is not better prediction or optimization, but the ability of public systems to make fair, adaptive, and legitimate decisions under pressure.
I keep returning to the gap between where innovative energy flows and where genuine need lives. The finding that only 3.6% of AI-for-good projects focus on Zero Hunger says something about the future we're building toward.
When we talk about food security in the context of AI, we tend to imagine precision agriculture. The practitioners B surveyed are pointing instead to land governance, distribution systems, and institutional strain. B also points out that this is in part due to the self-imposed role that AI technologists play.
B’s call to action, to make human flourishing the first priority, is a challenge to the AI community.
For policymakers, that signal requires reflection. If the core bottleneck is governance capacity, then the AI strategy cannot be hyper-focused on maximizing efficiency gains. Cavello’s observation is one I hear across many contexts: we do not lack technical ambition, we lack collective focus.
The question for governments is not always whether AI can optimize the system, but whether we are willing to organize it around legitimacy, resilience, and public purpose.
A conversation with B Cavello, Director of Emerging Technologies at the Aspen Institute's Aspen Digital program.
This interview has been edited for length and clarity.
Elana Banin: Tell us about your role at the Aspen Institute and what you’re working on at the intersection of AI, innovation, and food security.
B Cavello: I’m the Director of Emerging Technologies at the Aspen Institute. I work within Aspen Digital, one of the Institute’s policy programs. The work my team does focuses on creating the better future we want and on the role technology plays in helping us get there. And more specifically, how can people shape the technologies to actually build that future?
One thing we all know is that food security is an ingredient for a better future. Solving world hunger is the most “Miss America” answer out there; everyone has known it’s a problem basically since the dawn of humanity, and yet it remains a problem even with all of these incredible technologies available to us.
There are many claims that AI and other new technologies will vastly improve the world, and I would love for that to be true. So the goal of our work is, if we take that at face value, if we believe people really want to make the world better, how can we help them focus on the problems that matter most?
In our food security work, we brought in experts and practitioners from the global food security community and asked where help was needed. Where would energy, investment, and innovation actually bring the greatest impact?
Now we’re at the point of actually bridging two communities, the AI and machine learning research ecosystem and the food security ecosystem, to say, let’s get real. What are we actually going to do together to make progress on this issue?
Elana: You recently conducted a global survey of more than 100 food security practitioners across 20 UN regions. What did the findings tell you about how food insecurity is experienced, and where systems are breaking down?
B: First off, it depends on where you are and what your context is. We asked practitioners around the world to rank six challenges, including production and resource management; access and distribution; finance and risk management; labor, equity, and inclusion; nutrition and food utilization; and institutional capacity and governance.
In less developed regions, where food insecurity is greatest, production was ranked as the number one challenge. Interestingly, in the most developed regions, like here in the United States, production was last on the list, while access and distribution ranked much higher.
Across the world, whether it held the number one or number two slot, everyone ranked institutional capacity and governance as critically important.
The Reboot Democracy audience won’t be surprised. We know that state capacity and institutional capacity are key factors in the human flourishing we want to achieve. But I think it’s still striking because when we picture “AI for world hunger,” we tend to imagine rows of seeds sprouting.
And there absolutely is a role for agriculture in this conversation. But we need to appreciate that food systems are actually complex supply chains and complex governance systems. There are lots of places for improvement across the board.
People are especially feeling the strain of institutions that are ill-equipped for the kind of changing world we’re living in, not just because of technological change, but because of climate change, social change, and all of these shifts happening simultaneously. Institutional capacity and governance, as well as the ability to adapt and make effective decisions, are increasingly critical.
It’s also worth asking why I, as Director of Emerging Technologies, am focused on institutional capacity and governance at all. There’s a common frame, and I’ve probably used it myself, that says, “That’s not a tech problem, it’s a social problem.” But something the Reboot Democracy community can appreciate is that those categories are not mutually exclusive. Building technological systems and institutional capacity in ways that make it easier for people to solve social problems is a core ingredient in food security.
Yes, it’s a social problem, and technology has a role to play.
And when we look at the ecosystem, where are tech innovators actually focusing their energy? Not on food security. One analysis found that only 3.6% of AI-for-SDG projects — even among people explicitly trying to do AI for good — focus on Zero Hunger. There’s a real misalignment between where tech innovators are putting their energy and where the need is greatest.
Elana: How do you envision AI fitting into the food system across the supply chain, from land governance to distribution?
B: There are so many examples. I’ll name a few that I find particularly inspiring.
When we asked food security practitioners what strategies show the most promise, the top two centered on transparency, accountability, and participatory decision-making in food systems, particularly land governance. How we make decisions about how land is used is a very live question, including here in the U.S., as we have conversations about data centers. So many of these questions come back to how we govern the things we share, like our planet.
One way AI can play a role is simply by making it easier to include more people in these processes. A lot of land-use decisions tend to happen in spaces that aren’t open or accessible, especially in regions where people speak many different languages within a single geographic area. There’s a real need to make information more available.
There’s an organization in India, OpenNyAI, that works to make legal documents and the legal process legible to people in their own natural language. How do we make it baseline possible for people to participate in governing their collective resources? That’s one contribution technology can make.
We’ve also been working with collaborators at LGND.ai, an organization focused on using time-series satellite imagery to answer natural-language questions about land. You could ask: when was this area converted to a farm? When was that facility built? When was there a fire here? Having more of a conversation with Earth systems data gives people a genuinely new lens on questions we may never have been able to answer before.
And then there are the smaller-scale examples I love, like the Barilla pasta factory, where a computer vision system blows little puffs of air to remove bad seeds or stones as grain falls into a trough.
There are countless ways AI can support food systems. But we need to make that technology broadly accessible, and we need to ensure that the people who are actually the experts in doing the work are centered in figuring out how it should be deployed.
Elana: What institutional reforms are needed for these changes to take place?
B: I’m not an expert here. I’m trying to use this work as a conduit for experts in food security. What we’re doing right now, this bridging exercise, is about interrogating that question. I don’t want to get ahead of myself and prescribe specific reforms.
But as a rule, greater transparency, greater accountability, greater participation. There is an appetite for that.
Historically, participation was hard. Getting people to testify before Congress meant traveling to Capitol Hill. Then we had the pandemic, and everyone was doing Zoom calls. And it turns out, we have the technology. We can make our institutions more participatory and more transparent. We can do this.
Elana: You deliberately avoided mentioning AI in the survey. Why?
B: It’s a funny thing: we’re working on AI for food security, and we didn’t talk about AI at all. That was very intentional.
When you say “AI,” people immediately anchor on ChatGPT. They don’t think about the Barilla factory blowing stones out of grain. We didn’t want people to focus on any particular incarnation of the technology.
But there’s a bigger reason. The goal of Feeding the Future is not just to ask how we use today’s AI to address food security issues. It’s actually a challenge to the AI community. It’s us saying: do better. Build AI that actually solves our problems.
Jack Clark recently told Ezra Klein in an interview that the field doesn’t lack funding, brilliant people, or cool technology. What it lacks is the ability to know which problems are worth focusing on.
That is the spirit of this work: here are the problems worth focusing on. Here’s where to get started. We’re going to connect you, AI researcher, with the people you need to know, so that when you invest your time and energy into building tech for these problems, it will actually benefit the world.
Elana: You’ve been active in the movement for public AI. What is that?
B: Public AI is a movement to make AI more accessible, more accountable, and more protected from enclosure, from being locked up by a few private interests.
It challenges the notion that the only way innovation can happen is among a handful of private firms, and instead asks what it would look like to put the means of production into the public's hands.
In the United States, there's the National AI Research Resource, the Empire AI initiative, and Cal Compute coming up in California, initiatives about ensuring that public and public-interest institutions, like research universities, have access. Not just to query a language model, but to actually build systems and make the things we want to see.
Some folks in the Public AI Network are doing incredible work on AI access in libraries, ensuring people can use these systems at kiosks in their local libraries.
Internationally, there’s significant innovation happening in Europe, ALIA in Spain, Swiss AI has Apertus, and serious questions are being asked about what we want these models to be and do.
In my decade of working in responsible AI, I’ve noticed that when people hear “public accountability,” they often interpret it as a list of things you should not do. There’s a place for that. We need the bright lines. Last week’s current events have really driven that home.
But public accountability also means listening to the public's goals, demands, and desires. There’s that classic meme: I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes. And yet the way technology is being developed isn’t always responsive to those concerns.
When we think about what public accountability really looks like, we’re trying to paint a different vision, one where we actively want innovation on the hard stuff. We want to solve world hunger for real. Let’s do it. What technology will we need to get there?
Elana: How do attitudes about AI differ globally versus in the United States in your experience?
B: Pew published a study at the end of last year showing that Americans rank among the most negative globally about AI. And I think with good reason.
Many countries facing greater food insecurity show more optimism about technology’s potential. So there’s a tension. The people who arguably have the most up-close-and-personal experience with technology are among the most concerned about it.
That’s a challenge for policymakers and civic leaders in the U.S. We have to be much more careful about how we implement these tools so people don’t feel like technology is being forced on them.
We also lack a positive public vision. Companies tell us things are going to be great. But we don’t have many examples of these technologies visibly working for us. There’s an opportunity to prove it and to involve people, through transparent and accountable processes, in shaping what that future looks like.
Elana: For U.S. government decision-makers at the state level, particularly those dealing with food insecurity, agricultural resilience, or land governance, what lessons from this global work are most applicable at home?
B: There’s a ton of opportunity in this space. People are already using AI tools throughout our food systems and supply chains, and that’s a beautiful thing. We love to see people using innovative technologies to make systems more efficient. At the same time, there’s a lot of fear and skepticism, so here are a few things governments can do.
First, put clear resilience checks in place. We see this in the cybersecurity space, but it applies broadly. We don’t want to create systems that become more fragile precisely because they’ve become so efficient.
We saw this at the beginning of the pandemic. Because we had such lean, streamlined supply chains, when a disruption hit, it cascaded throughout the entire ecosystem. Everyone was racing to find toilet paper. As we introduce these technologies, we need to stress-test them and make sure they’re not just robust, but potentially antifragile, performing better in moments of disruption, not worse.
Second, it’s important to remember that the technology being built today is not being developed for food security. It’s built for other things and then adapted, bolted on after the fact. Another thing government decision-makers can do is use the government megaphone. Say, this is what matters. Let’s focus here.
Specifically, we need more technology to address the most pressing challenges of institutional capacity and governance, and, in particular, to make fairer decisions about land use in the food security space.
There is a real opportunity to use public platforms to direct attention and investment toward the hardest and most important problems, and to bring people along in the process.
Elana: Tell us what’s next for the Feeding the Future initiative, and where can people find out more?
B: The Feeding the Future initiative is, in some ways, just getting started, even though we’ve already learned so much. If you want to see what we’ve learned, the full report is available.
We’d love for people to follow along, but even more, we’d love for people to participate. Right now, we’re actively recruiting participants for our AI for Food Security Working Group. We would particularly love to hear from food security practitioners across the country, people working in government food system administration, or agricultural offices.
We’d also love to hear from machine learning researchers and AI experts, people who are building these tools and want to collaborate seriously with experts closest to these problems.
Learn More
The full findings from the Feeding the Future survey are available at aspendigital.org. Aspen Digital is currently forming a working group to bridge AI researchers and food security practitioners — you can sign up for updates or nominate an expert to participate.