In late September, all 193 UN Member States endorsed a resolution on artificial intelligence, a diplomatic milestone. The resolution establishes three pillars for global cooperation: a forum for sharing governance approaches, a panel of scientists to provide technical guidance, and a fund to help countries build oversight capacity.
The real test ahead lies in how these pillars are implemented. A handful of companies control the foundational models, and a handful of political actors across the United States, China, and the European Union shape the regulatory frameworks. Yet the people most affected by AI systems, such as teachers, doctors, parents, civil servants, and citizens, are routinely shut out of decisions about how those systems work.
If the UN helps countries adopt or contain AI without supporting democratic governance, it risks locking in the same top-heavy power dynamics that have long undermined its legitimacy.
The UN was never designed to regulate fast-moving technologies. Its institutions are slow, deliberative, and consensus-based by design. Enforcement has never been its strength.
But the UN does excel at convening diverse actors, coordinating across borders, and establishing shared baselines. That coordinating role matters, but only if it becomes infrastructure for adaptive, democratic governance, not just another stage for symbolic consensus.
What Actually Works
From my new seat as Communications Director at the Burnes Center, I see how public servants and civic leaders are already governing AI in ways that center democratic practice.
What the evidence makes clear is that legitimacy doesn’t flow from global declarations. It’s earned from the ground up.
In California, the Burnes Center and Innovate Public Schools are co-creating an AI-powered tool with parents of children in special education. The goal is to build civic power. Parent leaders are trained to use and explain the tool, then go on to recruit and support other families. Through workshops, WhatsApp-based AI literacy courses, and school-based organizing, the project is turning technology into a catalyst for community advocacy.
In Vietnam, the Academy of Public Administration and Governance is training civil servants not just to use AI, but to govern it. Through a national reform effort, the Academy introduced hands-on AI learning programs, developed 136 international case studies using large language models, and emphasized critical thinking and contextual application. Civil servants are learning to audit algorithms, ask the right questions about bias and applicability, and adapt solutions to local needs, all with limited resources and modest infrastructure.
These stories demonstrate what becomes possible when governance starts with the people closest to impact. Systems governed through democratic means can shift power and evolve in step with technology.
Three Tests for Success
The UN’s resolution opens the door to cooperation. But it will only matter if it passes three critical tests:
1. Operate at the speed of deployment
The first Global Dialogue isn’t scheduled until mid-2026. By then, AI will be embedded in thousands more schools, hospitals, and public systems. A yearly summit producing symbolic declarations won’t cut it. The UN should establish regional governance hubs that convene quarterly and create practitioner networks that share real-time lessons across borders. If governance guidance lags behind model releases, the effort will be irrelevant before it begins.
2. Build democratic capacity, not just technical capacity
The new fund should strengthen the institutions that govern AI, not just the tools that run it. That means training officials in accountability, supporting public participation, and funding civil society oversight. It should back participatory budgeting for AI procurement and public literacy programs that teach people to question algorithmic decisions. At least 100 community-driven governance experiments should be funded in the first two years, with systems in place to track what works and share it widely.
3. Shift power, not just perspectives
Geographic diversity on its own isn’t meaningful if decision-making stays concentrated. Who defines what risk means? Who shapes the evidence base? The scientific panel’s agenda should include questions submitted by frontline practitioners, and “best practices” must draw from low-resource and fragile contexts that might not otherwise have a say.
What Happens Next
The most effective AI governance starts at the edges of decision-making, where people are solving real problems with the stakes on their doorsteps. The parents in California. The civil servants in Vietnam. They aren't waiting for permission from Geneva or Washington. They're moving ahead.
What local leaders need is support, not oversight.
That’s the job of global institutions now. Not to lead these movements, but to recognize them, resource them, and then move out of their way.