The notion that artificial intelligence will help us prepare for the world of tomorrow is woven into our collective fantasies. Based on what we’ve seen so far, however, AI seems much more capable of replaying the past than predicting the future.

That’s because AI algorithms are trained on data. By its very nature, data is an artifact of something that happened in the past. You turned left or right. You went up or down the stairs. Your coat was red or blue. You paid the electric bill on time or you paid it late. 

Data is a relic–even if it’s only a few milliseconds old. And it’s safe to say that most AI algorithms are trained on datasets that are significantly older. Beyond vintage and accuracy, you also need to consider who collected the data, where it was collected, and whether the dataset is complete or has gaps.

There’s no such thing as a perfect dataset–at best, it’s a distorted and incomplete reflection of reality. When we decide which data to use and which data to discard, we are influenced by our innate biases and pre-existing beliefs.

“Suppose that your data is a perfect reflection of the world. That’s still problematic, because the world itself is biased, right? So now you have the perfect image of a distorted world,” says Julia Stoyanovich, associate professor of computer science and engineering at NYU Tandon and director of the Center for Responsible AI at NYU.

Can AI help us reduce the biases and prejudices that creep into our datasets, or will it merely amplify them? And who gets to determine which biases are tolerable and which are truly dangerous? How are bias and fairness linked? Does every biased decision produce an unfair result? Or is the relationship more complicated?

Today’s conversations about AI bias tend to focus on high-visibility social issues such as racism, sexism, ageism, homophobia, transphobia, xenophobia, and economic inequality. But there are dozens and dozens of known biases (e.g., confirmation bias, hindsight bias, availability bias, anchoring bias, selection bias, loss aversion bias, outlier bias, survivorship bias, omitted variable bias and many, many others). Jeff Desjardins, founder and editor-in-chief at Visual Capitalist, has published a fascinating infographic depicting 188 cognitive biases–and those are just the ones we know about.

Ana Chubinidze, founder of AdalanAI, a Berlin-based AI governance startup, worries that AIs will develop their own invisible biases. Currently, the term “AI bias” refers mostly to human biases that are embedded in historical data. “Things will become more difficult when AIs begin creating their own biases,” she says.

She foresees that AIs will find correlations in data and assume they are causal relationships–even if those relationships don’t exist in reality. Imagine, she says, an edtech system with an AI that poses increasingly difficult questions to students based on their ability to answer previous questions correctly. The AI would quickly develop a bias about which students are “smart” and which aren’t, even though we all know that answering questions correctly can depend on many factors, including hunger, fatigue, distraction, and anxiety. 

Nevertheless, the edtech AI’s “smarter” students would get challenging questions and the rest would get easier questions, resulting in unequal learning outcomes that might not be noticed until the semester is over–or might not be noticed at all. Worse yet, the AI’s bias would likely find its way into the system’s database and follow the students from one class to the next.
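To make the feedback loop concrete, here is a minimal sketch in Python. The placement rule, the difficulty cap, and the “rough first week” are all hypothetical, invented for illustration; the point is only that a label assigned from a few noisy early answers can limit what a student is ever allowed to attempt afterward.

```python
# A minimal, hypothetical sketch of the feedback loop described above. The
# placement rule, thresholds, and student model are invented; no real
# edtech system works exactly this way.
import random

def placement(first_week_answers):
    """Label a student from a handful of early answers."""
    return "advanced" if sum(first_week_answers) >= 4 else "remedial"

def run_semester(true_skill, bad_first_week, rng):
    # Week 1: five placement questions. A rough week (hunger, fatigue,
    # anxiety) drags down the score in ways the model never observes.
    first_week = [1, 1, 0, 0, 0] if bad_first_week else [1, 1, 1, 1, 0]
    track = placement(first_week)

    # Rest of the semester: the track caps question difficulty, so a
    # "remedial" student never sees the hard questions that would reveal
    # the mislabel.
    max_difficulty = 10 if track == "advanced" else 4
    mastered = 0
    for _ in range(40):
        difficulty = rng.randint(1, max_difficulty)
        if rng.random() < true_skill and difficulty > mastered:
            mastered = difficulty

    # The label is written back to the database and follows the student
    # into the next class.
    return track, mastered

rng = random.Random(0)
# Two students with identical underlying skill; only the first week differs.
print(run_semester(true_skill=0.9, bad_first_week=False, rng=rng))
print(run_semester(true_skill=0.9, bad_first_week=True, rng=rng))
```

Both simulated students have the same underlying skill; the only difference is the noise in their first five answers, yet one of them spends the semester locked out of the harder material, and the label travels forward with them.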

Although the edtech example is hypothetical, there have been enough cases of AI bias in the real world to warrant alarm. In 2018, Reuters reported that Amazon had scrapped an AI recruiting tool that had developed a bias against female applicants. In 2016, Microsoft’s Tay chatbot was shut down after making racist and sexist comments.

Perhaps I’ve watched too many episodes of “The Twilight Zone” and “Black Mirror,” because it’s hard for me to see this ending well. If you have any doubts about the virtually inexhaustible power of our biases, please read Thinking, Fast and Slow by Nobel laureate Daniel Kahneman. To illustrate our susceptibility to bias, Kahneman asks us to imagine a bat and a baseball that together sell for $1.10. The bat, he tells us, costs a dollar more than the ball. How much does the ball cost?

As human beings, we tend to favor simple solutions. It’s a bias we all share. As a result, most people leap intuitively to the easiest answer–that the bat costs a dollar and the ball costs a dime–even though that answer is wrong and a few moments more thought reveals the correct one. I actually went in search of a piece of paper and a pen so I could write out the algebra–something I haven’t done since I was in ninth grade.
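For the record, here’s what the paper-and-pen version looks like: if the ball costs x, the bat costs x + $1.00, so x + (x + $1.00) = $1.10, which gives 2x = $0.10 and x = $0.05. The ball costs a nickel, and the bat costs $1.05.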

Our biases are pervasive. The more granular our datasets become, the more they will reflect our ingrained biases. The problem is that we are using those biased datasets to train AI algorithms and then using those algorithms to make decisions about hiring, college admissions, financial creditworthiness and the allocation of public safety resources.

We’re also using AI algorithms to optimize supply chains, screen for diseases, accelerate the development of life-saving drugs, find new sources of energy and search the world for illicit nuclear materials. As we apply AI more widely and grapple with its implications, it becomes clear that bias itself is a slippery and imprecise term, especially when it is conflated with the idea of unfairness. Just because a solution to a particular problem appears “unbiased” doesn’t mean that it’s fair, and vice versa. 

“There is really no mathematical definition for fairness,” Stoyanovich says. “Things that we talk about in general may or may not apply in practice. Any definitions of bias and fairness should be grounded in a particular domain. You have to ask, ‘Whom does the AI impact? What are the harms and who is harmed? What are the benefits and who benefits?’”
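A toy example makes the point. The sketch below, with made-up numbers and no real system behind it, scores a single set of lending-style decisions against two widely used fairness criteria: demographic parity, which compares approval rates across groups, and equal opportunity, which compares approval rates among qualified applicants. The same decisions pass the first test and fail the second.

```python
# A toy illustration with made-up numbers: the same decisions pass one
# common fairness test and fail another.
def rates(y_true, y_pred):
    """Selection rate for everyone, and approval rate among the qualified."""
    selection_rate = sum(y_pred) / len(y_pred)
    qualified = [p for t, p in zip(y_true, y_pred) if t == 1]
    true_positive_rate = sum(qualified) / len(qualified)
    return selection_rate, true_positive_rate

# Hypothetical ground truth (1 = qualified) and model decisions (1 = approved)
# for two groups of ten applicants each.
group_a_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
group_a_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
group_b_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
group_b_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

sel_a, tpr_a = rates(group_a_true, group_a_pred)
sel_b, tpr_b = rates(group_b_true, group_b_pred)

# Demographic parity holds: both groups are approved at the same 50% rate,
# so by that measure the model looks even-handed...
print("selection rates:", sel_a, sel_b)                # 0.5 vs 0.5
# ...but equal opportunity fails: qualified applicants in group A are approved
# five times out of six, while every qualified applicant in group B is approved.
print("true positive rates:", round(tpr_a, 2), tpr_b)  # 0.83 vs 1.0
```

Which of those criteria matters, and how much disparity is tolerable, is exactly the kind of question Stoyanovich argues has to be answered within a specific domain rather than in the abstract.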

The current wave of hype around AI, including the ongoing hoopla over ChatGPT, has generated unrealistic expectations about AI’s strengths and capabilities. “Senior decision makers are often shocked to learn that AI will fail at trivial tasks,” says Angela Sheffield, an expert in nuclear nonproliferation and applications of AI for national security. “Things that are easy for a human are often really hard for an AI.”

In addition to lacking basic common sense, Sheffield notes, AI is not inherently neutral. The notion that AI will become fair, neutral, helpful, useful, beneficial, responsible, and aligned with human values if we simply eliminate bias is fanciful thinking. “The goal isn’t creating neutral AI. The goal is creating tunable AI,” she says. “Instead of making assumptions, we should find ways to measure and correct for bias. If we don’t deal with a bias when we are building an AI, it will affect performance in ways we can’t predict.” If a biased dataset makes it more difficult to reduce the spread of nuclear weapons, then it’s a problem.

Gregor Stühler is co-founder and CEO of Scoutbee, a firm based in Würzburg, Germany, that specializes in AI-driven procurement technology. From his point of view, biased datasets make it harder for AI tools to help companies find good sourcing partners. “Let’s take a scenario where a company wants to buy 100,000 tons of bleach and they’re looking for the best supplier,” he says. Supplier data can be biased in numerous ways and an AI-assisted search will likely reflect the biases or inaccuracies of the supplier dataset. In the bleach scenario, that might result in a nearby supplier being passed over for a larger or better-known supplier on a different continent.
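As a hypothetical sketch of how that can happen, imagine a ranker trained on historical purchasing data. The features, weights, and supplier names below are invented and have nothing to do with Scoutbee’s actual models; the point is only that a system rewarded for matching past behavior will favor the suppliers the data already says the most about.

```python
# Hypothetical sketch: a ranker whose training data over-represents large,
# well-documented suppliers. Features, weights, and suppliers are invented.
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    distance_km: float      # distance from the buyer's plant
    historical_orders: int  # how often this supplier appears in past data
    price_per_ton: float

def score(s: Supplier) -> float:
    # Made-up weights that mimic what a model can learn from skewed history:
    # heavy reward for being well represented in past orders, weak penalties
    # for distance and price.
    return (0.6 * (s.historical_orders / 1000)
            - 0.2 * (s.distance_km / 10000)
            - 0.2 * (s.price_per_ton / 500))

suppliers = [
    Supplier("Regional Chemical GmbH", distance_km=120, historical_orders=40, price_per_ton=410),
    Supplier("Global Bleach Corp", distance_km=9200, historical_orders=950, price_per_ton=430),
]

for s in sorted(suppliers, key=score, reverse=True):
    print(f"{s.name}: score={score(s):.3f}")
# The distant, heavily documented supplier outranks the nearby, cheaper one,
# not because it is better but because the data says more about it.
```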

From my perspective, these kinds of examples support the idea of managing AI bias issues at the domain level, rather than trying to devise a universal or comprehensive top-down solution. But is that too simple an approach? 

For decades, the technology industry has ducked complex moral questions by invoking utilitarian philosophy, which posits that we should strive to create the greatest good for the greatest number of people. In The Wrath of Khan, Mr. Spock says, “The needs of the many outweigh the needs of the few.” It’s a simple statement that captures the utilitarian ethos. With all due respect to Mr. Spock, however, it doesn’t take into account that circumstances change over time. Something that seemed wonderful for everyone yesterday might not seem so wonderful tomorrow.    

Our present-day infatuation with AI may pass, much as our fondness for fossil fuels has been tempered by our concerns about climate change. Maybe the best course of action is to assume that all AI is biased and that we cannot simply use it without considering the consequences.

“When we think about building an AI tool, we should first ask ourselves if the tool is really necessary here or should a human be doing this, especially if we want the AI tool to predict what amounts to a social outcome,” says Stoyanovich. “We need to think about the risks and about how much someone would be harmed when the AI makes a mistake.”

Author’s note: Julia Stoyanovich is the co-author of a five-volume comic book on AI that can be downloaded free from GitHub.
