Commentary

A new writing series: Re-envisioning AI safety through global majority perspectives

February 12, 2025


  • The Brookings AI Equity Lab will publish a series of research commentaries examining how various Global Majority countries are defining AI safety and interrogating Western definitions of the concept.
  • By collaborating with a wide array of stakeholders from the Global Majority, authors will drive a significant shift in our approach to AI safety. This shift will move past Western-centric models, incorporating a more diverse set of perspectives and lived experiences.
A visitor places her hands on "Tangible Earth," a digital globe displaying real-time global meteorological data fed through the internet from about 300 locations worldwide, at an exhibition pavilion inside the media center for the G8 Hokkaido Toyako Summit in Rusutsu, on Japan's northern island of Hokkaido, on July 6, 2008.
Introduction

The concept of artificial intelligence (AI) safety encompasses research, strategies, and policies aimed at ensuring these systems are reliable, aligned with human values, and do not cause serious harm. While this field traditionally addresses both immediate risks (e.g., algorithmic bias and system reliability) and longer-term risks, such as questions of AI alignment and existential threats to humanity, the dominant discourse reflects a distinctly Western epistemological framework. This Western-centric paradigm primarily serves the interests of technological institutions and stakeholders in high-income nations, often privileging abstract future scenarios over pressing sociotechnical harms that disproportionately affect marginalized communities. Current AI safety frameworks also frequently overlook the lived experiences of communities both within high-income countries and across Global Majority nations, where diverse linguistic traditions, cultural practices, and value systems remain underrepresented in technical architectures and policy discussions. This systemic exclusion not only perpetuates existing power asymmetries but also compromises the development of genuinely robust AI systems, leaving them unable to account for the complex ways in which these technologies interact with and impact non-Western contexts. Such narrowly conceived approaches to AI safety risk entrenching global inequities and missing crucial insights that could emerge from more inclusive and culturally nuanced perspectives.

The inclusion of diverse linguistic communities, cultural values, and social norms is fundamental to developing equitable AI systems. Global Majority communities, including those within Africa, the Caribbean, Latin America, the Middle East, Oceania, and Central, South, and Southeast Asia, face significant underrepresentation both in AI development teams and in the training data embedded within AI systems. The systemic exclusion of these populations poses risks as governments across Africa, Asia, and Latin America increasingly procure AI technologies developed primarily within Western contexts while also relying on computing infrastructure from large tech companies. China, meanwhile, has significantly increased AI exports to countries in these regions over the past decade, but more transparency is needed to ensure that these systems align with local values and existing data privacy protection standards. While global frameworks for AI safety present a vital opportunity to reshape the discourse around responsible AI development, they must move beyond Western-centric definitions of “safety” to meaningfully incorporate diverse epistemologies and lived experiences. The field can develop more nuanced and inclusive approaches to AI safety by examining the specific risks, opportunities, and cultural considerations unique to non-Western contexts.

Over the course of February 2025, the Brookings AI Equity Lab, a project housed within the Center for Technology Innovation (CTI), will publish a series of research commentaries sharing how various Global Majority countries are defining and interrogating Western definitions of the concept. In particular, the pieces will address how countries and regions including Africa, the Caribbean, Latin America, Southeast Asia, and Oceania are grappling with Western approaches to safety, and will offer proposals for greater diversification in approaches. The commentaries aim to systematically examine and address the limitations of current AI safety frameworks through a global lens and to deconstruct and reimagine the Western-centric assumptions that have historically dominated AI safety discussions. By engaging with diverse stakeholders across the Global Majority, the authors will catalyze a paradigm shift in how we conceptualize and implement AI safety measures, moving beyond traditional Western-centric frameworks to encompass a broader range of perspectives and lived experiences. On February 19, 2025, these authors will share their perspectives at a Brookings event on the topic.

Current Limitations in AI Safety

The landscape of global AI safety has evolved significantly in recent years, marked by high-profile convenings, the advancement of technical benchmarks, and the launch of specialized AI safety institutes. The inaugural AI Safety Summit at Bletchley Park in November 2023 represented a watershed moment in elevating AI safety discourse to the international stage, followed by subsequent gatherings including the AI Seoul Summit (May 2024) and the Paris AI Action Summit (February 2025). However, these initiatives have faced substantive criticism regarding their inclusivity and transparency, particularly concerning stakeholder selection processes and the limited socioeconomic diversity of participating nations. While resulting frameworks like the Bletchley Declaration and Seoul AI Declaration articulate aspirational goals for international cooperation in risk mitigation and governance, their concrete impact on advancing global equity in AI development remains unclear. Despite proliferating commitments to support Global Majority countries in building AI capabilities, translating these pledges into measurable progress faces significant challenges. The path toward global parity in AI development necessitates substantial resource allocation and robust accountability mechanisms—including transparent fund management and rigorous impact assessment frameworks. The tension between declarative commitments and actionable change underscores the need for more systematic approaches to fostering genuine inclusion in global AI development, including concrete mechanisms for equitable participation and capability building.

Current technical benchmarks for AI systems reflect deeply embedded Western assumptions about safety and capability, resulting in evaluation frameworks that inadequately capture global linguistic and cultural diversity. Widely adopted benchmarks, such as the Massive Multitask Language Understanding (MMLU), demonstrate limited scope in assessing AI performance across non-Western languages, knowledge systems, and social contexts. Other benchmarks, like TruthfulQA and SuperGLUE, remain anchored in U.S.-centric knowledge, privileging familiarity with American legal frameworks, political history, and cultural references. The Anglocentric design of prominent language models also reinforces the predominance of English-language evaluation benchmarks. However, researchers have begun to address these limitations by developing benchmarks that enable models to perform more robust tasks in different African languages, refine multiple-choice reasoning for Arabic language models, improve mathematical reasoning in African languages, increase understanding of Indonesian culture and languages, and include severely underrepresented languages like Quechua and Haitian Creole, while also adapting established benchmarks such as MMLU for Indic languages. These interventions, while promising, highlight the pressing need for systematic approaches to developing and evaluating AI systems that meaningfully engage with Global Majority contexts, moving beyond mere linguistic translation to encompass deeper cultural dimensions of safety and capability.
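To illustrate the mechanics at issue, the sketch below shows a minimal MMLU-style multiple-choice evaluation loop that reports accuracy per language rather than as a single pooled number. It is a hypothetical harness written for this discussion, not the code of any project cited above; the `Item` fields and the `ask_model` stub are illustrative assumptions.

```python
# Minimal sketch of an MMLU-style multiple-choice evaluation loop, grouped by
# language so that coverage gaps are visible. All names here, including the
# ask_model stub, are illustrative assumptions rather than any cited project's API.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Item:
    language: str       # ISO code, e.g., "sw" (Swahili) or "ht" (Haitian Creole)
    question: str
    choices: list[str]  # candidate answers
    answer: int         # index of the correct choice

def ask_model(question: str, choices: list[str]) -> int:
    """Trivial baseline that always picks the first choice; replace with a real model call."""
    return 0

def evaluate(items: list[Item]) -> dict[str, float]:
    """Return per-language accuracy rather than a single aggregate score."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for item in items:
        total[item.language] += 1
        if ask_model(item.question, item.choices) == item.answer:
            correct[item.language] += 1
    return {lang: correct[lang] / total[lang] for lang in total}
```

The point of the per-language breakdown is that aggregate scores can mask near-zero performance in underrepresented languages; a benchmark that only reports a pooled accuracy leaves those gaps invisible.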

Contextualized Approaches to AI Safety

The six contributors to the forthcoming commentaries focus on five regions: Africa, Southeast Asia, Oceania, the Caribbean, and Latin America. The contributors discuss present-day issues around AI safety, incorporating perspectives on AI development, implementation, and governance within these regions. Their valuable perspectives are complemented by a growing number of efforts across the Global Majority to diversify where AI safety mechanisms are developed, how these methods are developed, and whom these approaches benefit. The next section explains why each region was selected for the series.

Africa

Numerous efforts have arisen to improve AI safety in Africa, though most initiatives focus primarily on AI governance through national strategies in Algeria, Benin, Egypt, Ghana, Kenya, Mauritius, Morocco, Nigeria, Rwanda, Senegal, Sierra Leone, South Africa, Tunisia, and Uganda, along with continental strategies published by the African Union. However, African researchers remain underrepresented in developing significant AI and machine learning models, contributing less than 0.05% of publications in AI conferences. To bolster AI safety research, initiatives like the ILINA Program run seminars and organize fellowships for early-career AI safety researchers in Africa. Similar efforts on the continent, like AI Safety Cape Town, also conduct research, run fellowships, host workshops, and hold regular meetups within South Africa. Academic research and technical frameworks for AI safety are still emerging on the continent, and African countries must invest resources to accelerate these efforts. In her forthcoming commentary, Grace Chege, a junior research scholar within ILINA, will examine the emerging landscape of open-access AI within Africa and its implications for the growth of technical AI safety work throughout the continent. Her work provides a balanced outlook on efforts to drive context-specific AI safety approaches in Africa despite systemic developmental challenges and the dependency dynamics between open-access AI model producers and adopters.

Southeast Asia 

Southeast Asia is one of the world’s most active regions in AI safety and regulation, with at least 10 countries establishing national AI policies or strategies and Singapore alone issuing 25 AI governance initiatives. ASEAN, the Association of Southeast Asian Nations, has developed documents such as the ASEAN Guide on AI Governance and Ethics and recently released a joint statement with the United States focused on promoting safe, secure, and trustworthy AI. Grassroots efforts focused on AI safety include AI Safety Asia, which conducts policy research, upskills civil servants, and promotes interdisciplinary collaboration. While many local multilingual large language models have been developed throughout the region, along with robust evaluation benchmarks, smaller countries like Brunei, Laos, East Timor, and Myanmar remain excluded from AI safety and governance conversations, necessitating further investment and support. Coauthors Shaun Ee and Jam Kraprayoon will discuss Southeast Asia’s limited participation in global AI safety discussions in their commentary and provide recommendations to steer the region toward more deeply integrated AI safety efforts, focusing on localized multilingual evaluations, improved regional infrastructure and talent development, and new mechanisms for inclusive public engagement.

Latin America 

Within Latin America, countries like Argentina, Brazil, Chile, Colombia, Mexico, Peru, and Uruguay have led efforts to draft and implement national AI strategies and policies. Many of these mechanisms cover aspects of AI safety, such as prohibiting AI systems that pose unacceptable levels of risk, implementing robust risk management strategies to mitigate algorithmic bias, and using systems in proportion to their respective purposes. Complementing similar initiatives in Africa, impactRIO in Brazil and AI Safety Colombia conduct AI safety research, facilitate courses, and organize events for their community members. Like other regions in the Global Majority, Latin America needs context-specific approaches to AI safety that encourage the development of multilingual and multicultural benchmarks enabling robust system evaluations. Author Maia Levy-Daniel will discuss the current landscape of AI governance within Latin America and provide a series of recommendations for the region to shape AI safety and governance. She advocates for developing technical AI safety measures and adopting context-specific governance frameworks to move toward actionable policies with tangible impacts.

Oceania 

Of the regions covered in this project, efforts to coordinate AI safety are still nascent in the Global Majority countries of Oceania and the Caribbean, and the understanding of harms within these respective regions remains limited. While UNESCO has noted significant disparities in AI readiness for small island developing states (SIDS), AI safety efforts are more prominent in non-Global Majority Oceania countries like Australia and New Zealand, which have already made significant strides in establishing national AI strategies and action plans. However, AI has the potential to exacerbate climate and economic risks in SIDS, warranting the development of locally relevant approaches to mitigate AI harms. Author Ben Kereopa-Yorke will posit that current AI safety discourse in Oceania operates under colonial misconceptions, given its prioritization of growth and control over sustainability and sovereignty. He unpacks AI safety in Oceania through an Indigenous lens, proposing new frameworks based on environmental justice and digital sovereignty to address the real, present harms of AI development.

Caribbean 

In the (non-Latin) Caribbean, no country has published a national AI strategy, and there is even less research on technical mechanisms for AI safety in this region. While UNESCO has been influential in guiding AI policy frameworks through the UNESCO Caribbean Artificial Intelligence Initiative and the Caribbean Artificial Intelligence Policy Roadmap, Caribbean countries must work toward independent efforts to steer safe AI. Although global interest in AI has benefited Caribbean countries like Anguilla, which generated $32 million in “.ai” domain registrations in 2023, governments within the region must also work to diversify their inclusion in the global AI economy. To understand and critically unpack these issues, author Craig Ramlal will examine how mainstream AI safety strategies exclude the Caribbean and other small island developing states and, in doing so, miss crucial contextual nuances. Additionally, he proposes efforts to strengthen policy coordination, develop accountability mechanisms, and increase agency over data collection and use to move the Caribbean from being a passive data provider to an active AI producer.

Broadening Perspectives on Global AI Safety

While the writing series launches with a focus on just five regions within the Global Majority, AI safety must include all regions of the world to be a truly global conversation. The Middle East, South Asia, and Central Asia also offer new areas for further inquiry into expanding perspectives on AI safety. For example, the United Arab Emirates has invested $500 million in AI research, infrastructure, and development, buoyed by its state-sponsored Falcon series of models. Newer models within the Falcon family have also achieved greater efficiency at smaller sizes, indicating potential to mitigate the infrastructure challenges that prevent Global Majority communities from adopting AI tools. The UAE government has also released policy documents regarding AI, and G42, a leading AI development holding company, published a Frontier AI Safety Framework in February 2025, which aims to establish governance structures for responsible AI innovation within the company’s respective projects. India continued the trend of establishing AI safety institutes, launching its AI Safety Institute in January 2025 and becoming the first Global Majority country to do so. Countries within Central Asia maintain a lower profile within global AI governance and safety discourse. Tajikistan was the first country in Central Asia to develop a national AI strategy in 2021, with Uzbekistan following in 2024; Turkmenistan, Kazakhstan, and Kyrgyzstan have yet to publish strategies. Growing progress within these regions warrants closer attention and promises to further democratize AI safety.

Future work should explore these regions in more detail to develop a more nuanced understanding of the limitations of AI safety in these contexts and of how developers, researchers, governments, and organizations can move the needle forward. We also acknowledge that AI risks are not limited to populations within the Global Majority. Marginalized communities in higher-income Western countries—such as Black, Latinx, and Indigenous people, as well as those who are disabled, LGBTQ, or socioeconomically disadvantaged—face disproportionate harm from AI systems in health care, education, finance, social benefit disbursement, and employment.

Recommendations Towards Globalized AI Safety

As the Paris AI Action Summit convenes, several key interventions, proposed in the recommendations below, are necessary to advance more equitable and globally inclusive approaches to AI safety. These recommendations emphasize the importance of examining AI’s sociotechnical limitations in Global Majority contexts, developing culturally informed evaluation frameworks, and fostering meaningful participation from affected communities in AI development. By addressing the sociotechnical elements of AI safety while centering traditionally marginalized perspectives, these interventions aim to reshape how we conceptualize and implement AI safety measures on a global scale.

  • Examine sociotechnical limitations of AI: Individual governments, research organizations, and tech companies should prioritize conducting rigorous multimethod research examining AI systems’ limitations across Global Majority contexts, with particular attention to infrastructural constraints, cultural epistemologies, linguistic diversities, and value systems. This research should integrate ethnographic methodologies and participatory design approaches to understand how AI technologies interact with existing social structures, cultural practices, and local knowledge systems. Such investigation must move beyond purely technical assessments to examine the complex sociotechnical ecosystems in which AI systems operate. 
  • Develop culturally informed benchmarks: Current evaluation frameworks predominantly reflect Western knowledge systems and assessment standards. For example, the United States Uniform Bar Exam is treated as a standard for evaluating large language model capabilities despite overstated performance claims from companies such as OpenAI. Researchers should work to expand technical benchmarks to incorporate diverse forms of expertise and understanding in Global Majority contexts, including region-specific educational assessments such as the West African Senior School Certificate Examination (WASSCE), Brazil’s National High School Exam (ENEM), and the Medical Licensing Examination of Thailand (MLET); a sketch of what such a regionally grounded evaluation could look like follows this list. These contextually grounded evaluation metrics would provide more meaningful assessments of AI capabilities across different cultural and linguistic domains.
  • Advance novel safety frameworks: The rich linguistic and cultural diversity within Global Majority contexts necessitates innovative approaches to AI safety evaluation. Researchers should develop new theoretical frameworks and technical methodologies that move beyond Western-centric safety paradigms to encompass multiple epistemological traditions. These approaches should establish robust evaluation standards that authentically reflect global cultural variations, value systems, and linguistic diversity while maintaining technical rigor and empirical validity. 
  • Enable meaningful global majority participation: Positioning affected communities at the forefront of technological innovation can help address systemic inequities in AI development. Governments, philanthropic funders, and large tech companies should support Global Majority researchers in developing contextually appropriate datasets and training robust AI models that reflect local realities. This requires addressing fundamental challenges around infrastructure access, technical capacity-building, financial resources, and development priorities.  
  • Democratize safety discourse: Contemporary AI safety discussions have disproportionately focused on speculative existential risks while minimizing attention to present-day societal harms. There is an opportunity to move beyond exclusive forums like the series of AI safety summits into more inclusive spaces that facilitate substantive dialogue and concrete action toward diversifying participation in AI development. Additionally, governments should work to establish transparent mechanisms for measuring progress in expanding global participation and implementing culturally informed safety measures. 
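To make the benchmark recommendation above concrete, the sketch below shows one way items drawn from regional exams could be recorded and scored so that results are reported per source exam rather than as a single aggregate. This is a hypothetical illustration under assumed names: the record fields, the example entries, and the `grade` stub are not an existing dataset schema or any organization’s tooling.

```python
# Hedged sketch: a record format for regionally grounded benchmark items and a
# report stratified by source exam. Field names, example entries, and the
# grade() stub are illustrative assumptions, not an existing dataset schema.
from collections import defaultdict

# Each item records its provenance so results can be reported per exam rather
# than collapsed into a single aggregate score.
items = [
    {"source": "WASSCE", "region": "West Africa", "language": "en",
     "subject": "economics", "question": "...", "reference_answer": "..."},
    {"source": "ENEM", "region": "Brazil", "language": "pt",
     "subject": "history", "question": "...", "reference_answer": "..."},
]

def grade(model_answer: str, reference_answer: str) -> bool:
    """Placeholder grader; a real one might use exact match, rubrics, or expert review."""
    return model_answer.strip().lower() == reference_answer.strip().lower()

def report_by_exam(items: list[dict], model_fn) -> dict[str, float]:
    """Score model_fn on each item and report accuracy stratified by source exam."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for item in items:
        total[item["source"]] += 1
        if grade(model_fn(item["question"]), item["reference_answer"]):
            correct[item["source"]] += 1
    return {exam: correct[exam] / total[exam] for exam in total}
```

Stratifying results this way would surface, for instance, a model that scores well on U.S.-centric items but poorly on WASSCE-derived ones, a gap that a pooled accuracy number would hide.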

The imperative to globalize approaches to AI safety extends far beyond current Western-centric paradigms of technological governance. As AI systems increasingly mediate crucial aspects of human experience, their potential to perpetuate harm through linguistic and cultural exclusion poses particular risks for Global Majority communities, which this commentary and the forthcoming pieces will elevate for discussion. Systemic marginalization derives not only from the technical limitations of AI models but also from fundamental questions about whose knowledge systems, values, and lived experiences shape our understanding of AI safety. Addressing these challenges requires a transformative approach that moves beyond superficial inclusion to enable meaningful participation from communities historically underrepresented in both AI development and safety discourse. This project seeks to foster sustained dialogue that broadens perspectives on AI safety and advances concrete mechanisms for ensuring AI systems serve the interests of all communities—regardless of geographic location, socioeconomic position, or cultural context. This is crucial for developing robust and beneficial AI systems that operate effectively across diverse global contexts.

Forthcoming pieces can be found here on the home page of AI Equity Lab and on TechTank.

Register for our upcoming event here.

  • Footnotes
    1. Within this project, we separate Caribbean countries from Latin America, which is classified as a cultural region within North, Central, and South America where the dominant languages are Spanish and Portuguese.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).