Over the past few years, artificial intelligence (AI) policy and legislation initiatives around the world have flourished—and Latin America is no exception to this trend. Countries in the region have been developing local regulations, policies, and initiatives from various perspectives to benefit from the use of this technology while preventing risks and harms. However, the current regulatory landscape is still nascent, which provides a unique opportunity for the region. This post will discuss the current landscape and provide a series of recommendations for the region to shape AI safety and governance.
Several bills are under discussion in legislatures across the region. Legislatures in Argentina, Brazil, Colombia, Costa Rica, Chile, Mexico, Peru, and Uruguay have seen numerous initiatives intended to regulate the development and use of AI—whether bills that regulate general uses of AI or initiatives focused on specific areas, such as health or disinformation. Nevertheless, most of them will not make it to the finish line. In general, these bills tend to be quite vague: they fail to provide actionable measures or a clear definition of AI, or they grant the executive branch excessive powers to define when and how regulations should apply. Peru, for example, passed the first AI law in the region in July 2023. The law aims to “promote the use of AI in favor of the country’s economic and social development,” while respecting human rights and ensuring that AI use is ethical, sustainable, transparent, replicable, and responsible. As civil society actors have highlighted, the law is merely declarative rather than actionable, as it does not propose any concrete measures to attain its goals. Moreover, bills usually mention human rights only in passing, without operational steps for how they will be guaranteed or for relevant procedures, such as human rights impact assessments.
In addition, legislative initiatives have been clearly influenced by frameworks developed in the Global North. Most of the bills have been noticeably shaped by the European Union’s AI Act, which entered into force in August 2024. One of the most salient characteristics of the EU AI Act is its risk-based approach, which categorizes AI systems into risk levels and assigns corresponding obligations; with a few alterations in some cases, bills in the region have adopted this approach. For instance, the bill recently passed by the Brazilian Senate—one of the most developed in the region—takes a risk-based approach, as do bills in Argentina, Chile, Colombia, Costa Rica, Peru, and Uruguay. Thus, while it may be too early to say—the Peruvian law is the only one already passed in the region—a “Brussels Effect” may be taking shape around AI regulation.
In terms of AI safety, some of the bills include relevant requirements. Brazil’s bill, for instance, includes a specific section on safety—addressing transparency and the obligations to generate adequate documentation and implement trust evaluations, among others—as well as a section on algorithmic impact evaluations that outlines a detailed methodology for their implementation. Other bills—for example, in Argentina and Costa Rica—as well as the Peruvian law require impact evaluations, transparency, and the creation of public registries. However, except in the Brazilian bill, which was the result of a three-year discussion that included relevant stakeholders, requirements are generic and inconsistent across the different bills. In some cases, they are mentioned as principles, with no further detail on how they should be implemented. For instance, a Costa Rican bill—which was, ironically, entirely drafted by ChatGPT—states that “developers must implement technical and organizational measures to mitigate algorithmic biases and prevent unjust discrimination. The use of representative and diversified data will be promoted, as well as the revision and periodic audit of algorithms, to mend any biases and guarantee equitable results,” without mentioning any further actionable steps.
At the same time, the region is going through a “cooperation phase.” In 2024, the second Ministerial Summit on the Ethics of AI in Latin America and the Caribbean, held in Montevideo, Uruguay, “brought together ministers, high-level authorities, and experts from 20 countries to discuss the implementation of AI public policies and strategies that promote innovation and mitigate harms.” It followed the first summit, hosted in 2023 in Santiago, Chile, which resulted in the Santiago Declaration. The resulting Montevideo Declaration focused on strengthening regional dialogues on the governance and use of AI in Latin America and the Caribbean and was approved together with a roadmap outlining the first actions to be prioritized over the following year. Summits will take place every year “to analyze and discuss the development of regional policies on AI, and to follow up on the implementation of the approved Roadmap and its reviews.” Other recent regional initiatives include the Ibero-American Forum of Digital Parliamentarians—hosted in October 2024 to strengthen Latin American and Caribbean legislators’ capacities in AI regulation and policy design, with a focus on promoting ethical, responsible, and inclusive use of AI—and the Cartagena Declaration, adopted by 17 countries in the region to foster regional cooperation on responsible AI. Although these initiatives share the goal of advancing the development and use of AI models that are “ethical, responsible, and inclusive,” a more detailed and actionable approach is needed.
Latin America can benefit from AI, but it must establish specific guardrails to prevent harms and guarantee human rights. AI safety considerations should inform governance initiatives from a Latin American perspective; thus, regional initiatives offer a crucial opportunity. To this end, these initiatives should address the following:
- First, while the regional initiatives do not mandate that countries regulate in a particular way, they could be instrumental in raising awareness of the importance of incorporating robust AI safety measures into policies and regulations, taking into account the particular risks and harms the region is already experiencing—for example, misleading responses in Spanish from AI models, as well as concerns about how personal data is used for AI training in Latin America, where personal data protections are not as strong as in the EU. Regional initiatives could also play a vital role in facilitating a shared understanding of AI safety across countries, leading to common definitions and approaches that reflect the region’s specific challenges, along with actionable steps to consider. Despite the practical approach of the roadmap mentioned above, it does not address AI safety considerations.
- Second, regional initiatives should engage meaningfully with the AI safety dialogues currently being held at the international level, as these discussions are still defining essential concepts and standards that will directly affect countries in the Global Majority. The Seoul Declaration outlines the AI Action Summit’s goal of “fostering international cooperation and dialogue on artificial intelligence.” For this forum and the AI Safety Institute Network to become truly international, regional initiatives in Latin America should be able to participate and share their needs and concerns. As experts such as Rumman Chowdhury have highlighted, given that countries in the Global Majority lack the resources of those in the Global North, supporting the development of regional AI safety institutes could be a valuable approach. The regional initiatives described above could be a good starting point, empowering Latin American countries to effectively influence outcomes.
- Moreover, the initiatives provide a unique opportunity to ensure that human rights are effectively protected and guaranteed. Given that local bills seem to follow the EU’s lead, it is worth noting that civil society organizations such as Access Now have repeatedly argued that the EU AI Act’s risk-based approach is incompatible with the protection of human rights, as it tolerates some level of risk as long as specific requirements are met. The regional initiatives could make this point visible and support a rights-based approach instead, clarifying what the protection of human rights means in practice and outlining concrete steps to achieve it, including the implementation of human rights impact assessments.
- Finally, while country representatives have adopted these regional agreements, that commitment may not survive changes of administration, which are not uncommon in the region: policies and decisions implemented by one administration may be completely disregarded by the next, compromising the long-term sustainability of these initiatives. Additionally, there are no mechanisms to ensure adherence to the principles these initiatives establish, and each country has its own viewpoint on AI regulation. For instance, while Brazil currently appears to favor regulation, Argentina’s current administration has promoted a lack of regulation to attract investors and become an “AI hub.” Achieving effective, long-lasting results at the regional level may therefore be challenging. Agreements should favor specific, actionable terms—such as those included in the roadmap—over vague principles that risk remaining mere declarations of intent. In this context, strong commitments, transparency, and accountability will be key to achieving substantive results.
As shown, the Latin American AI governance landscape has made important strides toward responsible AI over the past few years. However, the field is still nascent, presenting a unique opportunity for the region to engage in serious discussions that go beyond political conversations. These initiatives should evolve from declarations of intent into actionable policies with tangible impacts, avoiding the adoption of frameworks that are unsuitable for the region’s specific needs, while addressing technical AI safety considerations that ensure the protection of human rights.
Acknowledgements and disclosures
The findings, interpretations, and conclusions expressed in this piece are solely those of the author and do not reflect the opinions of the Trust and Safety Foundation.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).
Commentary
Regional cooperation crucial for AI safety and governance in Latin America
February 13, 2025