Joy Buolamwini, the renowned AI researcher and activist, appears on the Zoom screen from home in Boston, wearing her signature thick-rimmed glasses. 

As an MIT grad, she seems genuinely interested in seeing old covers of MIT Technology Review that hang in our London office. An edition of the magazine from 1961 asks: “Will your son get into college?” 

I can tell Buolamwini finds the cover amusing. She takes a picture of it. Times have changed a lot since 1961. In her new memoir, Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Buolamwini shares her life story. In many ways she embodies how far tech has come since then, and how much further it still needs to go. 

Buolamwini is best known for a pioneering paper she co-wrote with AI researcher Timnit Gebru in 2018, called “Gender Shades,” which exposed how commercial facial recognition systems often failed to recognize the faces of Black and brown people, especially Black women. Her research and advocacy led companies such as Google, IBM, and Microsoft to improve their software so it would be less biased, and to back away from selling their technology to law enforcement.

Now, Buolamwini has a new target in sight. She is calling for a radical rethink of how AI systems are built. Buolamwini tells MIT Technology Review that, amid the current AI hype cycle, she sees a very real risk of letting technology companies pen the rules that apply to them—repeating the very mistake, she argues, that has previously allowed biased and oppressive technology to thrive.

“What concerns me is we’re giving so many companies a free pass, or we’re applauding the innovation while turning our head [away from the harms],” Buolamwini says. 

A particular concern, says Buolamwini, is the basis upon which we are building today’s sparkliest AI toys, so-called foundation models. Technologists envision these multifunctional models serving as a springboard for many other AI applications, from chatbots to automated movie-making. They are built by scraping masses of data from the internet, inevitably including copyrighted content and personal information. Many AI companies are now being sued by artists, music companies, and writers, who claim their intellectual property was taken without consent.

The current modus operandi of today’s AI companies is unethical—a form of “data colonialism,” Buolamwini says, with a “full disregard for consent.”  


“What’s out there for the taking, if there aren’t laws—it’s just pillaged,” she says. As an author, Buolamwini says, she fully expects her book, her poems, her voice, and her op-eds—even her PhD dissertation—to be scraped into AI models. 

“Should I find that any of my work has been used in these systems, I will definitely speak up. That’s what we do,” she says.  

Buolamwini says a real innovation worth boasting about would be, for example, models that companies could show were built on legitimately sourced data and have a positive climate impact.

“I see an opportunity to learn from so many mistakes of the past when it comes to oppressive systems powering advanced technologies. My hope is we go in a different direction,” she says.

I ask what aspect of AI systems she would audit today if she could repeat the success of “Gender Shades.” Without missing a beat, Buolamwini says that instead of one single audit, she would like AI to have an audit culture, where systems get rigorously tested before they are deployed in the real world. She’d like to see AI developers ask essential questions about any system they come up with, such as: How does it do what it does, and should we even be using it? 

“Right now we’re in guinea pig mode, where the systems haven’t been fully tested … That’s irresponsible, especially when these systems are being used in ways that impact people’s lived experiences and life opportunities,” she says. 
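To make the idea of an algorithmic audit concrete, here is a minimal sketch, my own illustration rather than Buolamwini’s or the AJL’s actual tooling, of the disaggregated evaluation that “Gender Shades” popularized: instead of reporting a single aggregate accuracy, error rates are computed separately for each demographic subgroup, which is what surfaces disparities.

```python
from collections import defaultdict

def disaggregated_error_rates(records):
    """Compute per-subgroup error rates for a classifier's predictions.

    `records` is a list of (subgroup, true_label, predicted_label) tuples.
    Returns {subgroup: error_rate}. A wide spread between subgroups is
    exactly the kind of disparity an audit like "Gender Shades" surfaces.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for subgroup, truth, prediction in records:
        totals[subgroup] += 1
        if prediction != truth:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical predictions from a gender classifier, grouped into the
# intersectional subgroups (skin type and gender) "Gender Shades" used.
records = [
    ("darker_female", "F", "M"),
    ("darker_female", "F", "F"),
    ("lighter_male", "M", "M"),
    ("lighter_male", "M", "M"),
]
for group, rate in sorted(disaggregated_error_rates(records).items()):
    print(f"{group}: {rate:.0%} error rate")
```

On this toy data the audit reports a 50% error rate for darker-skinned women and 0% for lighter-skinned men, the shape of the gap the real paper measured across commercial systems.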

An accidental activist

Buolamwini has been at the forefront of raising awareness around AI harms for the better part of a decade. But her path to becoming a prominent researcher and activist was far from typical. 

Her memoir gives us an inside look into how she evolved from a young computer science student who didn’t want to deal with “consequences and humans,” as she puts it, into someone who could not look away when confronted with deeply flawed technology that she realized was not created for people like her. 

In a striking moment in the book, she describes discovering that common facial recognition software would recognize a white Halloween mask but not her own face. Although she was unsettled, Buolamwini’s initial reaction was that someone else, perhaps a more seasoned AI researcher, would take care of the problem. But the more examples she saw of algorithmic bias, the harder it became not to do something herself. 


While still a PhD researcher, propelled by the findings of her research on facial recognition, Buolamwini founded the Algorithmic Justice League (AJL), a nonprofit organization fighting algorithmic bias and injustice, and became a prominent voice warning about the harm posed by biased AI systems. In 2019, she twice testified before Congress as an expert on facial recognition technology and its effects on civil rights and liberties.

While Buolamwini’s story is in some ways inspirational, it’s also a warning. The pressure of raising awareness about AI harms and speaking up against some of the world’s most powerful companies took a toll. In a particularly sad and touching part of the book, Buolamwini describes checking herself into an emergency room the night before an exam, exhausted from the stress of back-to-back congressional testimonies on top of extensive travel and advocacy. 

Years of intense work setting up the AJL and advising governments on AI while working on her PhD left her in such overwhelming “burnout mode” that in 2021 she wrote a letter to her professors saying she might drop out. She asked them to make a case for the actual benefit of an MIT PhD, which would be her fourth university degree.

She took a month off from writing her dissertation to rest, and took up skateboarding. “Maybe it was not too late to become a professional skateboarder and maybe even represent Ghana at the next Olympics,” Buolamwini writes, tongue in cheek. 

In the end, she decided to finish her doctorate, graduating in 2022. She resolved to do so, she writes, after realizing that, as a woman of color, she would be judged a failure for dropping out of a degree program in a way her white colleagues would not.

Big risk, big reward

Buolamwini also describes an episode in which she went up against a tech “Goliath,” Amazon. Her PhD research on auditing facial recognition systems elicited public attacks from senior executives at the company, which was at the time, in 2019, competing with Microsoft for a $10 billion contract to provide cloud computing services to the Pentagon. After research by Buolamwini and Inioluwa Deborah Raji, another AI researcher, showed that Amazon’s facial recognition technology was biased, an Amazon vice president, Matt Wood, claimed that her paper and the press coverage about it were “misleading” and drew “false conclusions.”


Amazon was the only technology company that took a combative approach to Buolamwini’s research, she says. She worried she’d put Raji, who at the time was still an undergraduate, in danger: “Prospective computer science departments might perceive her as too much of a risk. Future employers in the tech industry could blacklist her,” Buolamwini writes. 

“I was concerned that if other researchers saw what Amazon was doing to us and no academics stood up in defense, other researchers would perceive the professional risk of this kind of research as being too high,” Buolamwini adds. 

Many people came to fight in her corner. The American Civil Liberties Union of Massachusetts and the Georgetown Law Center on Privacy and Technology supported her publicly on Twitter (now X). In the research community, Margaret Mitchell and Gebru, who at the time were leading Google’s AI ethics work, organized an open letter in support of Buolamwini’s work. Seventy-five researchers, including Turing Award winner Yoshua Bengio, signed it.

Disappointingly, she says, most of the public support she received was from outside MIT. In her book, she writes that she felt she had been “largely abandoned by the majority of MIT leadership at that moment.” 

“I assumed the Media Lab, which showcased my work to attract students and news coverage, would defend me. It was disappointing to feel like I had to plead for protection and then receive very little,” Buolamwini writes. (We have contacted MIT Media Lab for comment on this allegation, and will update this piece if they reply.)

Her advice to young people in tech who want to speak up about the problems they see? “Don’t jump in the deep end not knowing that it’s deep,” Buolamwini says. She wants more people to be able to call out injustices when they see them.

But at the same time, she advises them to surround themselves with people who can offer support and protection. “Your voice matters,” she says.
