Artificial intelligence (AI) has arrived in Latin American judicial offices without, in most cases, anyone yet having determined whether its arrival was welcome, under what conditions it was acceptable, or what specific risks apply to the region's realities.
UNESCO data for 2025 confirm that 44% of the world's courts and tribunals already use these tools daily, while only 9% have official guidelines governing them.
Ecuador was no stranger to this reality, and in response, it initiated a process worth examining, not only for its results, but for the methodology that made them possible.
Between 2025 and 2026, the Public Defender's Office of Ecuador, in collaboration with the UPF Barcelona School of Management (UPF-BSM) and the support of UNESCO, developed the Basic Guidelines for the Responsible Use of AI in the Judiciary of Ecuador, which will be made public in a few weeks once the final draft is ready.
The agreement signing between UPF Barcelona School of Management and the Ecuadorian judiciary
The process, devised by European and local experts, involved the National Court of Justice, the Judiciary Council, and the Attorney General's Office, and was developed through an internal deliberative mechanism.
Developing the basic AI guidelines was a collective diagnostic process that prioritized the contribution of judicial officials themselves, who supplied contextual knowledge that no external technical framework could have produced.
This methodological choice proved decisive, and it shapes the nature of the process’s most significant finding: participatory governance surfaced risks rooted in Ecuador’s institutional and cultural realities that standardized AI safety frameworks fail to detect.
None of the international reference instruments to date responded to the demands of the Ecuadorian reality for the use of AI in legal cases involving indigenous peoples, nationalities, and their original languages.

Not the UNESCO Guidelines for the Use of AI Systems in Courts and Tribunals, which served as the starting point for our process; not the European AI Act or the CEPEJ Ethical Charter; not even instruments produced in Latin America itself.
AI tools available in the global market have been designed and trained predominantly from data that reflect the logic of the Western justice system, without incorporating the customs, normative systems, and forms of conflict resolution of the Kichwa, Shuar, and other nationalities of Ecuador.
This is especially important as Ecuador is a country that constitutionally recognizes “plurinationality” and the rights of indigenous peoples to maintain, develop, and freely strengthen their identity, sense of belonging, ancestral traditions, and forms of social organization.
In addition, the data used to train these AI models included little or no content in the original languages of these peoples, information that is already scarce in digital environments.
These limitations are not technical defects that can be corrected by software updates. They represent a structural shortcoming that stems from the way these systems were conceived.
Judicial officials know this from experience: when an AI system processes testimony in Kichwa without understanding the language, it also fails to grasp the cultural context, meanings, and worldview embedded in it. As a result, the system can misinterpret testimony, posing serious risks in legal proceedings.
The Moratorium as an Act of Regulatory Integrity
The institutional response, faithfully reflected in the Ecuadorian Guidelines, is notable for being the first in the region to take the form of an explicit moratorium: a temporary suspension.
This moratorium prohibits the use of AI tools to translate, transcribe, or analyze testimonies expressed in indigenous languages until Ecuador has validated specific tools for that purpose, with the active participation of interpreters and certified cultural experts.
This decision could be interpreted as a limitation, but it actually constitutes an act of regulatory integrity, since it publicly recognizes that the available technology does not meet the conditions that the right to effective defense and the guarantee of due process require.
Moving forward without this condition being fulfilled would risk reproducing old forms of exclusion, racism, and discrimination based on origin, ethnic identity, or culture.
The difference is that these harms would now appear under the guise of technological modernity, which both the International Convention on the Elimination of All Forms of Racial Discrimination and Ecuadorian law prohibit.
Ecuador has opted for a moratorium rather than an absolute prohibition because a ban cannot be a permanent solution: it would risk becoming a restrictive regulation of rights. Access to the efficient, effective justice that AI could facilitate must be available to all the country's inhabitants.
The principle of technological sovereignty, as contained in the Ecuadorian Guidelines, establishes the duty to move progressively towards the use of the country's own tools.
But the fiscal and institutional realities of Ecuador, as in most countries in the region, do not allow it to bear, in the short term, the cost of developing language models in environments with limited technological and linguistic resources.
AI Infrastructure as a Public Good
It is precisely here that the conceptual framework for understanding AI infrastructure as a public good offers a path forward, organized around universal access, public mission objectives, and real democratic control.
Under this approach, Ecuador does not need to develop its own tools; it needs to participate in cooperative ecosystems where data, models, and computational infrastructure are treated as common goods rather than private assets.
South-South cooperation with Peru and Bolivia, which share the challenge of Quechua and Aymara and have accumulated experience in linguistic digitization, is an immediate option.
Open-source initiatives such as Mozilla Common Voice, which has begun incorporating Andean languages, offer a concrete entry point for communities without the capacity for self-development.
Community validation, in which indigenous communities participate actively and certified interpreters themselves evaluate tools before any moratorium can be lifted, is both a contribution to knowledge and a condition of legitimacy.
A Two-Layer Model for Participatory AI Governance
A central feature of the Ecuadorian methodology is its two-layered approach to regulating AI in the justice system.
A first layer is reflected in the Basic Guidelines. With limited resources but strong institutional commitment and international support, Ecuador established clear guidelines for how judicial officials should use AI. This includes the introduction of the moratorium on its use in cases related to indigenous peoples that we have described.
A second layer, beginning in 2027, will focus on the governance of AI systems used in the justice system. This will include rules for how AI tools are bought, developed, or adopted from national and international providers. It will also establish the standards these systems must meet before restrictions, such as the moratorium, can be lifted.
What the Ecuadorian case demonstrates, with replicable evidence, is that processes attentive to local realities and their limitations, together with participatory governance of AI, are not only an ethical imperative but also a mechanism for detecting and managing risks that purely technical, technology-first approaches are not designed to recognize or address.
The principles of interculturality and plurinationality did not emerge from a diversity checklist, nor were they imported from a foreign model.
They emerged because the two-layered process undertaken in Ecuador listened to those who know, from the inside, what is at stake, understood the institutional reality, and offered a realistic response with more substantive long-term goals in mind.
That is the lesson that can and should be incorporated at a global level: treating AI governance not merely as an instrument but as a method that places people at the center before technology takes that place.