Growing up in Kakuma refugee camp in Kenya, my childhood was defined by barriers.
Opportunity was scarce, and my peers and I were locked out of the global economy, excluded from the systems that determine who gets to learn, earn, and advance.
When I returned to Kakuma in 2023 to run a digital literacy workshop, I saw something I had never seen before.
Young people were using tools like ChatGPT to learn new languages, improve their writing, and explore career pathways. In a place where opportunity had always been constrained by geography and infrastructure, they were bypassing those barriers in real time.
They were not receiving aid. They were building capability. That experience changed how I think about AI.
What I witnessed in Kakuma was a technology that actively expanded people’s ability to build their futures, rather than just stabilizing them in crisis.
Nhial Deng leading a digital literacy workshop for young people in Kakuma.
AI, in this sense, is emerging as an economic opportunity layer—one that allows people to access skills, income, and professional advancement even when formal systems fall short.
The challenge is that today, this layer is emerging unevenly and without purposeful design.
Across contexts, from refugee camps in East Africa to underserved communities in North America, people are already using AI to translate, learn, freelance, and navigate institutions. But they are doing so without reliable access, protections, or clear pathways to stable economic outcomes.
If AI is already functioning as an informal pathway to economic opportunity, the role of government is to make that access intentional.
What Global Practices Reveal
My work across Canadian and international policy spaces has shown that this shift, from accidental to intentional access, is already taking shape in distinct ways across different national contexts.
Nhial Deng speaking at the Global Refugee Forum in Geneva, Switzerland, about the role of technology in expanding access to education and opportunities for displaced youth.
The following three examples are instructive:
Canada offers a model of infrastructure investment as an equity strategy. The government’s $300 million AI Compute Access Fund helps small and medium-sized enterprises afford high-performance computing.
The equity argument is indirect but important: local businesses are the primary engines of community-level employment. While AI adoption carries a real risk of displacing certain roles, it also reshapes job demand.
When local businesses gain the ability to adopt AI, they change the types of jobs they offer, creating demand for workers who can operate AI tools in logistics, customer service, and business operations. Compute access at the firm level can translate into opportunity at the worker level if it is connected to workforce pathways.
Singapore offers a model of national scale. Through its SkillsFuture initiative, the government has built a system of more than 1,600 AI-related courses, with over 105,000 individuals trained in the past year alone. Singapore has become one of the fastest-growing workforces in the world in adopting AI skills because the state treated AI literacy as a public good and built deliberate infrastructure around it. While not every country can replicate Singapore's conditions, the intentional, government-backed investment in AI skills can move an entire workforce at scale.
Kenya offers a third model, closer to the Kakuma experience and resonating across the Global South. The UNDP Africa Centre of Competence for Digital and AI Skilling, developed in partnership with Kenya's Ministry of Information and Microsoft, embeds AI and digital skills training inside government institutions, with cohorts drawing from Kenya, Uganda, Rwanda, Tanzania, Nigeria, and beyond. Beyond civil servants, the Kenya AI Skilling Alliance (KAISA) targets gig workers, women, and underserved communities, connecting AI literacy to livelihoods in the informal economy.
Ultimately, AI policy is as much about shaping how access translates into economic opportunity as it is about governing the technology itself. The countries making the most progress are those that have treated this as a deliberate public investment rather than an accidental byproduct of market diffusion.
The 2026 New Delhi AI Impact Summit Declaration, endorsed by more than 90 countries, reflects growing global consensus on this point.
It positioned AI not only as a tool for innovation but as a driver of social and economic empowerment, calling for AI deployment to be aligned with human development outcomes, particularly where traditional systems have failed to deliver inclusion.
Three Lessons for Governments
The task ahead is uneven, but it is not without direction. The experiences of Canada, Singapore, and Kenya suggest three principles that governments, at the national or subnational level, can apply.
1. Connect AI literacy directly to employment, not just education.
Training only creates opportunity if it leads to real jobs. Singapore's SkillsFuture works in part because it is tied to employer demand: companies help shape the programs, and workers develop competency for roles that already exist or are actively emerging. Kenya's KAISA model similarly connects skilling to the informal and gig economy pathways where most workers actually earn their income.
The most durable version of this principle is the earn-while-you-learn model: apprenticeship structures that integrate AI into existing roles, so workers develop competency on the job rather than preparing in the abstract for a future that may not materialize as they expect.
In practice, this might look like training healthcare administrators to use AI documentation tools, supporting logistics workers in operating AI-assisted supply chains, or helping small-business employees integrate AI into customer operations.
This is already taking shape in the U.S.
According to Georgetown's Center for Security and Emerging Technology, AI-focused apprenticeship tracks currently achieve a completion rate of 68 percent, roughly 25 percentage points higher than the average for all non-military apprenticeships. The U.S. Department of Labor’s AI Literacy Framework, released in February 2026, creates a pathway for federal workforce funding to support this model. It is one example of how a national framework can enable local implementation.
2. Build partnerships that embed access inside trusted institutions, not just digital platforms.
In Kakuma, young people could use AI because they had just enough connectivity, exposure, and support to experiment. But that kind of access is fragile and uneven. While tech platforms provide the baseline infrastructure, access only becomes sustained when it is embedded in institutions that communities already trust, such as government training centers, community colleges, public libraries, or employer partnerships, rather than offered as a standalone product that individuals must navigate on their own.
The goal is not to subsidize tools that are already free. It is to provide structured, supported environments where people can develop competency that translates into real economic outcomes: reliable connectivity, guided learning, and human support for the workers and communities who have been most excluded from the digital transition. Governments at every level have existing institutional infrastructure that can serve this function. The question is whether they invest in activating it.
3. Build accountability frameworks before harms compound.
In many parts of the world, weak governance has allowed digital tools to amplify harm rather than distribute benefits. AI is no different, and the risks are already visible.
Algorithmic bias in hiring and screening tools is among the most documented. A 2024 University of Washington audit study found that leading AI resume screening models systematically disadvantaged Black male candidates across job types and contexts. A Brookings analysis confirmed similar patterns of racial and gender bias in AI hiring systems more broadly. Workers who gain AI skills through public investment, but then encounter AI-filtered hiring processes on the other end, can be excluded by the very systems they were trained to enter.
Data exploitation is another concern. Participants in publicly funded AI programs generate commercially valuable data about their skills and economic circumstances, and without clear protections, that data can be extracted rather than used to benefit them.
Poorly governed AI products targeting low-income and low-literacy communities represent a third risk, particularly in contexts where consumer protection frameworks have not kept pace with rapid deployment.
The New Delhi Declaration again offers a global reference point for addressing these risks, emphasizing accountability across the full AI lifecycle from design to deployment. But global frameworks are only as effective as their local implementation.
Governments need to name the specific harms they are protecting against, and build standards of transparency and accountability that match the actual scope of risk, which extends well beyond “users” of AI tools to workers, residents, and communities who encounter AI in hiring, benefits, insurance, and public services.
From Access to Opportunity
I saw this shift in Kakuma, where young people used AI to access knowledge and pathways that had long been out of reach. I see it now in policy spaces—in Ottawa, in Nairobi, and in the corridors of the New Delhi summit—where governments are beginning to recognize that access to AI is quickly becoming a determinant of economic participation on par with access to electricity or the internet.
AI matters not because it replaces traditional systems of support, but because it can expand what people can learn, earn, and pursue today. Without intentional design, those gains remain fragile. The question is whether governments will treat this as a peripheral innovation or as core economic infrastructure.
From Singapore's deliberate investment in national AI literacy, to Kenya's community-anchored skilling models, to the multilateral accountability principles endorsed in New Delhi, the direction is clear: connect infrastructure to jobs, embed access in trusted institutions, and build safeguards alongside deployment.
For communities like the one I grew up in, the AI opportunity is not about efficiency. It is about whether marginalized people are included in the systems that define opportunity.
And that shift, from access to opportunity, is everything.