This is an excerpt from Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini, published on October 31 by Random House. It has been lightly edited. 

The term “x-risk” is used as a shorthand for the hypothetical existential risk posed by AI. While my research supports the idea that AI systems should not be integrated into weapons systems because of the lethal dangers, this isn’t because I believe AI systems by themselves pose an existential risk as superintelligent agents. 

AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking systems can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for them to have fatal outcomes for individuals. Existing AI systems that cause demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real.

One problem with minimizing existing AI harms by saying hypothetical existential harms are more important is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity. 

I am not opposed to preventing the creation of fatal AI systems. Governments concerned with lethal use of AI systems can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.


Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget the pernicious ways our societies perpetuate structural violence. The Norwegian sociologist Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to health care, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.

Given what my “Gender Shades” research revealed about algorithmic bias from some of the leading tech companies in the world, my concern is about the immediate problems and emerging vulnerabilities with AI and whether we could address them in ways that would also help create a future where the burdens of AI did not fall disproportionately on the marginalized and vulnerable. AI systems with subpar intelligence that lead to false arrests or wrong diagnoses need to be addressed now. 

When I think of x-risk, I think of the people being harmed now and those who are at risk of harm by AI systems. I think about the risk and reality of being excoded. You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant-screening algorithm denies you access to housing. All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.


This is why my research cannot be confined just to industry insiders, AI researchers, or even well-meaning influencers. Yes, academic conferences are important venues. For many academics, presenting published papers is the capstone of a specific research exploration. For me, presenting “Gender Shades” at New York University was a launching pad. Deserting the island of decadent desserts, I felt motivated to put my research into action—beyond talking shop with AI practitioners, beyond the academic presentations, beyond private dinners. Reaching academics and industry insiders is simply not enough. We need to make sure everyday people at risk of experiencing AI harms are part of the fight for algorithmic justice.

Read our interview with Joy Buolamwini here
