
In the realm of international development, the 'white saviour' trope has long been a subject of critique. This phenomenon, often rooted in colonialist attitudes, positions Western individuals or entities as benevolent rescuers of non-Western communities, usually without acknowledging or addressing systemic multidimensional inequalities, colonial and racial privilege, and the local agency of indigenous communities. The white saviour complex has not only perpetuated harmful stereotypes but has also undermined the efforts and voices of those it claims to help.
As artificial intelligence (AI) emerges as a potentially transformative global force, we see a new manifestation of the white saviour industrial complex within emerging global AI governance.
Global governance and the (colonial) race towards AI supremacy
The race towards AI supremacy has increasingly mirrored colonial-era power dynamics, with powerful states and corporations striving to establish dominance in global AI technology and its governance. In this contemporary digital race, wealthier nations, primarily from the Global North, leverage their significant resources and technological advancements to dictate the terms of AI development and deployment. This pursuit often marginalises and sidelines the contributions and needs of the Global Majority, perpetuating patterns of exploitation and inequality.
The development of AI systems is inextricably linked to the continuities of historical injustices. There are global economic and political power imbalances in AI production, with value being extracted from the Majority world to benefit Western technology companies. This perpetuates an 'international division of digital labour' that concentrates the most stable, well-paid AI jobs in the West, while exporting the most precarious, low-paid work to the Majority world.
The competitive drive for AI supremacy is not just about technological innovation but also about control.
Additionally, AI development is often shaped by Western values and knowledge, marginalising non-Western alternatives and limiting possibilities for decolonising AI, a reflection of a broader pattern of 'hegemonic knowledge production'.
As the old adage goes, "there is nothing new under the sun": the epistemic challenges in global AI governance are reflective of historical structural inequities. Much of the research and policy development in many academic fields has historically been conducted by scholars and institutions based in the Global North. This dominance has shaped the research agenda, methodologies, and policy recommendations in ways that do not align with the needs and perspectives of the Global Majority.
Scholars from the Global North often have more resources, better funding, and greater access to academic networks, which allows them to dominate many fields. Likewise, the ethical, legal, social, and policy (ELSP) research on AI development and deployment is often led by Western academics who prioritise issues, solutions, and policies that resonate with Western perspectives. Consequently, Western academics often overlook other contexts and needs, creating a body of research and policy that does not reflect the socioeconomic realities and lived experiences of the Global Majority, or that risks perpetuating existing forms of inequity.
Navigating the white saviour complex in global AI governance
Much like traditional international development initiatives, AI governance often involves Western-developed solutions being implemented in non-Western contexts through a 'copy and paste' approach, with several significant consequences. Western nations often export their governance models as 'golden standards', assuming that these frameworks will be universally applicable. However, this approach neglects the unique social, political, and economic landscapes of non-Western countries. For instance, regulatory frameworks designed for Western contexts may not be suitable for countries with different governance structures or developmental priorities.
These solutions, often designed for Western innovation ecosystems, do not consider non-Western local nuances, needs, or cultural contexts. As a result, they are often ineffective or counterproductive: rather than fostering self-sufficiency, they can lead to a loss of agency and autonomy, perpetuating cycles of dependency and underdevelopment, with devastating consequences.
Furthermore, the imposition of Western AI governance frameworks can reinforce a cycle of dependency, where non-Western countries rely on external expertise and solutions rather than developing their own capacities. This dynamic can stifle local innovation and self-sufficiency, leading to long-term detrimental effects on local governance and technological development.
While there are increased calls for a decolonial-informed approach (DIA) to AI governance, in many global AI forums and discussions Western technical experts and policymakers dominate the conversation, often marginalising those who are most affected by AI technologies. This exclusion mirrors historical patterns of disenfranchisement and reinforces geopolitical power imbalances.
The ethical considerations surrounding AI governance, such as data privacy, bias mitigation, and transparency, may be inadequately addressed when Western frameworks are applied without local adaptation. For example, data privacy laws that work in Western contexts may not consider the cultural attitudes towards privacy in non-Western societies, leading to ethical dilemmas. While these frameworks are important and often abide by Western versions of 'democracy' and the 'rule of law', this emerging ethical imperialism may not fully encompass the lived experience, diverse values, and ethical considerations of different cultures in our global society. Imposing a singular ethical perspective can be seen as a form of ethical imperialism, where Western norms are prioritised over non-Western local traditions and beliefs.
Technologies, including AI systems, are not value-neutral; they are developed by individuals and organisations that bring their own values, beliefs, and biases into the design process. For instance, if the majority of AI researchers come from a particular cultural or socioeconomic background, their perspectives will likely dominate the development process, leading to systems that reflect their worldviews. This can result in algorithms that prioritise certain types of data or decision-making processes that align with those values, while neglecting others. These embedded values can influence how technologies are designed, implemented, and utilised, potentially perpetuating existing power dynamics and intersectional inequalities.
Another concern is that many organisations that increasingly fund access to digital public goods (DPG), digital public infrastructure (DPI), and AI-related policies still maintain an inherently colonial culture in which, as one commentator writes, "Suddenly, we find ourselves in a world where the act of calling out racism is more offensive than racism itself."
We must critically address these undertones of injustice to ensure that AI advancements contribute to equitable digital development rather than reinforcing historical injustices and systemic power imbalances.
The South African context: an insidious manifestation of the white saviour trope
In South Africa, the white saviour trope has subtle nuances but very harmful effects. The country's history of apartheid has left deep structural inequalities, with a privileged minority often holding significant power and influence. Today, this dynamic is evident in how privileged minorities are supported to position themselves as the 'voices of African people' and as 'advocating African values' on the global stage in discussions related to frontier technologies and the overall digital economy.
This phenomenon is unsurprising since, as one assessment observes, "The dualism that stems from the legacy of demographic and spatial exclusion in South Africa is reflected in the digital economy landscape, and a large share of South Africans remain disconnected from the opportunities it has created."
Certain privileged individuals and groups assume the role of leaders representing broader Indigenous local communities, often legitimised by their association with Indigenous subordinates. These 'African voices' may not genuinely reflect the diverse perspectives and needs of the Indigenous majority. Instead, their viewpoints often align more closely with their own interests or those of the Western institutions they are affiliated with.
These self-appointed spokespersons, backed by generous funding, often act as intermediaries between local communities and international entities. However, their legitimacy and commitment to true diversity, equity, and inclusion (DEI) are often questionable, as they benefit from the status quo of being palatable to international donors. As such, they have limited motivation to challenge systemic issues in practice: when situations call for the allyship, ethics, and decolonisation that these individuals publicly advocate, they end in cognitive dissonance and default to protecting their own interests.
By ignoring these dynamics, maintaining colonial practices in international development assistance (IDA), and encouraging privileged minorities to dominate the narrative on the socio-technical disruptions associated with new technologies for the Global Majority, we hinder genuine progress towards real equity and epistemic justice, and we risk perpetuating scenarios where the voices of the marginalised and the real victims remain unheard.
Moving towards truly responsible global AI governance
To counter the white saviour industrial complex in global AI governance, a shift towards more inclusive and equitable practices is necessary: one that places positionality and reflexivity at the centre of global AI governance and of ELSP research on the digital economy more broadly. This shift could be based upon the following approaches.
Inclusive representation is paramount: voices from the Global Majority and marginalised communities should be included in global AI governance discussions. This means creating platforms and opportunities, including resource allocation, for diverse perspectives to be heard and considered.
Context-specific solutions should be considered. AI solutions should be tailored to local contexts; ethical, legal, social, cultural, and economic factors should be weighed by engaging with local experts and communities to understand their unique needs and challenges. Local experts should also be supported with the capacity to create home-grown solutions and to contribute to technical discussions on the global stage.
Moreover, funding to boost collaborative frameworks must be prioritised. Developing collaborative governance frameworks that involve multiple stakeholders, including governments, civil society, and the private sector, can help create more balanced and effective policies. These frameworks require concerted funding and should prioritise bottom-up co-creation and mutually beneficial partnerships, rather than top-down imposition.
Ethical pluralism is key. Recognising and respecting the plurality of ethical perspectives is essential. Global AI governance should be flexible enough to incorporate different ethical frameworks and values, allowing for a more nuanced and comprehensive approach to AI ethics.
Finally, there is a need to decolonise research and policy. In both AI governance and development economics, it is important to decolonise research and policy-making processes. This involves reflexivity about researcher positionality, valuing indigenous knowledge systems, promoting local research initiatives, and ensuring that policy recommendations are grounded in local realities, including through the thought leadership of indigenous technical experts representing their communities.
The white saviour industrial complex in global AI governance reflects broader historical and systemic issues. As AI continues to shape our world, it is imperative that we address these issues head-on. We can move towards a more just and equitable global AI governance landscape by reflecting on the atrocities of the past and ensuring that our collective efforts to foster inclusivity in the digital age respect local contexts and embrace ethical pluralism, reflexivity, and a decolonial-informed approach. This exercise will not only enhance the effectiveness of truly global AI solutions for good, but will also empower communities worldwide to shape their own technological futures.
Shamira is a pioneering policy entrepreneur. As founder and executive director of DepHUB, she is the first indigenous African woman to establish an independent think tank in South Africa. Shamira was a 2023-2024 Policy Leader Fellow at the EUI Florence School of Transnational Governance. She is an active member of many global expert working groups. Shamira has published a wide range of knowledge products focused on diverse areas such as measuring the data-driven digital economy, sustainable digital transformation, and the multidimensional aspects of crafting human-centred, responsible transnational AI governance that benefits the Global Majority.
This post was first published on .