My husband and I love to eat and to learn about history. So shortly after we married, we chose to honeymoon along the southern coast of Spain. The region, historically ruled by Greeks, Romans, Muslims, and Christians in turn, is famed for its stunning architecture and rich fusion of cuisines.

Little did I know how much this personal trip would intersect with my reporting. Over the last few years, an increasing number of scholars have argued that the impact of AI is repeating the patterns of colonial history. European colonialism, they say, was characterized by the violent capture of land, extraction of resources, and exploitation of people—for example, through slavery—for the economic enrichment of the conquering country. While it would diminish the depth of past traumas to say the AI industry is repeating this violence today, it is now using other, more insidious means to enrich the wealthy and powerful at the great expense of the poor.

I had already begun to investigate these claims when my husband and I set off on our journey through Seville, Córdoba, Granada, and Barcelona. As I simultaneously read The Costs of Connection, one of the foundational texts that first proposed the concept of “data colonialism,” I realized that these cities were the birthplaces of European colonialism—cities through which Christopher Columbus traveled as he voyaged back and forth to the Americas, and through which the Spanish crown transformed the world order.

In Barcelona especially, physical remnants of this past abound. The city is known for its Catalan modernism, an iconic aesthetic popularized by Antoni Gaudí, the mastermind behind the Sagrada Familia. The architectural movement was born in part from the investments of wealthy Spanish families who amassed riches from their colonial businesses and funneled the money into lavish mansions.


One of the most famous, known as the Casa Lleó Morera, was built early in the 20th century with profits made from the sugar trade in Puerto Rico. While tourists from around the world today visit the mansion for its beauty, Puerto Rico still suffers from food insecurity because for so long its fertile land produced cash crops for Spanish merchants instead of sustenance for the local people.

As we stood in front of the intricately carved façade, which features flora, mythical creatures, and four women holding the four greatest inventions of the time (a lightbulb, a telephone, a gramophone, and a camera), I could see the parallels between this embodiment of colonial extraction and global AI development.

The AI industry does not seek to capture land as the conquistadors of the Caribbean and Latin America did, but the same desire for profit drives it to expand its reach. The more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources—data—it can harvest from their activities, their movements, and even their bodies.


Nor does the industry exploit labor through mass-scale slavery, which necessitated the propagation of racist beliefs that dehumanized entire populations. But it has developed new ways of exploiting cheap and precarious labor, often in the Global South, shaped by implicit ideas that such populations don’t need, or are less deserving of, livable wages and economic stability.

MIT Technology Review’s new AI Colonialism series, which will be publishing throughout this week, digs into these and other parallels between AI development and the colonial past by examining communities that have been profoundly changed by the technology. In part one, we head to South Africa, where AI surveillance tools, built on the extraction of people’s behaviors and faces, are re-entrenching racial hierarchies and fueling a digital apartheid.


In part two, we head to Venezuela, where AI data-labeling firms found cheap and desperate workers amid a devastating economic crisis, creating a new model of labor exploitation. The series also looks at ways to move away from these dynamics. In part three, we visit ride-hailing drivers in Indonesia who, by building power through community, are learning to resist algorithmic control and fragmentation. In part four, we end in Aotearoa, the Maori name for New Zealand, where an Indigenous couple are wresting back control of their community’s data to revitalize its language.

Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development—the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more—a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way.

Now a new generation of scholars is championing a “decolonial AI” to return power from the Global North back to the Global South, from Silicon Valley back to the people. My hope is that this series can provide a prompt for what “decolonial AI” might look like—and an invitation, because there’s so much more to explore.

