Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The wording is deliberate. “If we were going for a Rorschach-test type of statement, we would have said ‘existential risk’ because that can mean a lot of things to a lot of different people,” says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. “That’s why we went with ‘risk of extinction’ even though a lot of us are concerned with various other risks as well,” says Hendrycks.

We’ve been here before: AI doom follows AI hype. But this time feels different. The Overton window has shifted. What were once extreme views are now mainstream talking points, grabbing not only headlines but the attention of world leaders. “The chorus of voices raising concerns about AI has simply gotten too loud to be ignored,” says Jenna Burrell, director of research at Data and Society, an organization that studies the social implications of technology.

What’s going on? Has AI really become (more) dangerous? And why are the people who ushered in this tech now the ones raising the alarm?   

It’s true that these views split the field. Last week, Yann LeCun, chief scientist at Meta, and joint recipient with Hinton and Bengio of the 2018 Turing Award, called the doomerism “preposterous.” Aidan Gomez, CEO of AI firm Cohere, said it was “an absurd use of our time.”

Others scoff too. “There’s no more evidence now than there was in 1950 that AI is going to pose these existential risks,” says Signal president Meredith Whittaker, who is cofounder and former director of the AI Now Institute, a research lab that studies the social and policy implications of artificial intelligence. “Ghost stories are contagious; it’s really exciting and stimulating to be afraid.”

“It is also a way to skim over everything that’s happening in the present day,” says Burrell. “It suggests that we haven’t seen real or serious harm yet.”

An old fear

Concerns about runaway, self-improving machines have been around since Alan Turing. Futurists like Vernor Vinge and Ray Kurzweil popularized these ideas with talk of the so-called Singularity, a hypothetical date at which artificial intelligence outstrips human intelligence and machines take over. 


But at the heart of such concerns is the question of control: how do humans stay on top if (or when) machines get smarter? In a paper called “How Does Artificial Intelligence Pose an Existential Risk?” published in 2017, Karina Vold, a philosopher of artificial intelligence at the University of Toronto (who signed the CAIS statement), lays out the basic argument behind the fears.    

There are three key premises. One, it’s possible that humans will build a superintelligent machine that can outsmart all other intelligences. Two, it’s possible that we will not be able to control a superintelligence that can outsmart us. And three, it’s possible that a superintelligence will do things that we do not want it to.

Putting all that together, it is possible to build a machine that will do things that we don’t want it to—up to and including wiping us out—and we will not be able to stop it.   

There are different flavors of this scenario. When Hinton raised his concerns about AI in May, he gave the example of robots rerouting the power grid to give themselves more power. But superintelligence (or AGI) is not necessarily required. Dumb machines, given too much leeway, could be disastrous too. Many scenarios involve thoughtless or malicious deployment rather than self-interested bots. 

In a paper posted online last week, Stuart Russell and Andrew Critch, AI researchers at the University of California, Berkeley (who also both signed the CAIS statement), give a taxonomy of existential risks. These range from a viral advice-giving chatbot telling millions of people to drop out of college, to autonomous industries that pursue their own harmful economic ends, to nation-states building AI-powered superweapons.

In many imagined cases, a theoretical model fulfills its human-given goal but does so in a way that works against us. For Hendrycks, who studied how deep learning models can sometimes behave in unexpected and undesirable ways when given inputs not seen in their training data, an AI system could be disastrous because it is broken rather than all-powerful. “If you give it a goal and it finds alien solutions to it, it’s going to take us for a weird ride,” he says.

The problem with these possible futures is that they rest on a string of what-ifs, which makes them sound like science fiction. Vold acknowledges this herself. “Because events that constitute or precipitate an [existential risk] are unprecedented, arguments to the effect that they pose such a threat must be theoretical in nature,” she writes. “Their rarity also makes it such that any speculations about how or when such events might occur are subjective and not empirically verifiable.”

So why are more people taking these ideas at face value than ever before? “Different people talk about risk for different reasons, and they may mean different things by it,” says François Chollet, an AI researcher at Google. But it is also a narrative that’s hard to resist: “Existential risk has always been a good story.”


“There’s a sort of mythological, almost religious element to this that can’t be discounted,” says Whittaker. “I think we need to recognize that what is being described, given that it has no basis in evidence, is much closer to an article of faith, a sort of religious fervor, than it is to scientific discourse.”

The doom contagion

When deep learning researchers first started to rack up a series of successes—think of Hinton and his colleagues’ record-breaking image-recognition scores in the ImageNet competition in 2012 and DeepMind’s first wins against human champions with AlphaGo in 2015—the hype soon turned to doom then, too. Celebrity scientists, such as Stephen Hawking and fellow cosmologist Martin Rees, as well as celebrity tech leaders like Elon Musk, raised the alarm about existential risk. But these figures weren’t AI experts.

Eight years ago, AI pioneer Andrew Ng, who was chief scientist at Baidu at the time, stood on a stage in San Jose and laughed off the entire idea. 

“There could be a race of killer robots in the far future,” Ng told the audience at Nvidia’s GPU Technology Conference in 2015. “But I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.” (Ng’s words were reported at the time by tech news website The Register.)

Ng, who cofounded Google’s AI lab in 2011 and is now CEO of Landing AI, has repeated the line in interviews since. But now he’s on the fence. “I’m keeping an open mind and am speaking with a few people to learn more,” he tells me. “The rapid pace of development has led scientists to rethink the risks.”

Like many, Ng is concerned by the rapid progress of generative AI and its potential for misuse. He notes that a widely shared AI-generated image of an explosion at the Pentagon spooked people enough last month that the stock market dropped.

“With AI being so powerful, unfortunately it seems likely that it will also lead to massive problems,” says Ng. But he still stops short of killer robots: “Right now, I still struggle to see how AI can lead to our extinction.”

Something else that’s new is the widespread awareness of what AI can do. Earlier this year, ChatGPT brought this technology to the public. “AI is a popular topic in the mainstream all of a sudden,” says Chollet. “People are taking AI seriously because they see a sudden jump in capabilities as a harbinger of more future jumps.” 

The experience of conversing with a chatbot can also be unnerving. Conversation is typically understood as something people do with other people. “It added a kind of plausibility to the idea that AI was human-like or a sentient interlocutor,” says Whittaker. “I think it gave some purchase to the idea that if AI can simulate human communication, it could also do XYZ.”


“That is the opening that I see the existential risk conversation sort of fitting into, extrapolating without evidence,” she says.

There’s reason to be cynical too. With regulators catching up to the tech industry, the issue on the table is what sorts of activity should and should not get constrained. Highlighting long-term risks rather than short-term harms (such as discriminatory hiring or misinformation) refocuses regulators’ attention on hypothetical problems down the line.

“I suspect the threat of genuine regulatory constraints has pushed people to take a position,” says Burrell. Talking about existential risks may validate regulators’ concerns without undermining business opportunities. “Superintelligent AI that turns on humanity sounds terrifying, but it’s also clearly not something that’s happened yet,” she says.

Inflating fears about existential risk is good for business in other ways too. Chollet points out that top AI firms need us to think that AGI is coming, and that they are the ones building it. “If you want people to think what you’re working on is powerful, it’s a good idea to make them fear it,” he says.

Whittaker takes a similar view. “It’s a significant thing, to cast yourself as the creator of an entity that could be more powerful than human beings,” she says.

None of this would matter much if it were simply about marketing or hype. But deciding what the risks are, and what they’re not, has consequences. In a world where budgets and attention spans are limited, harms less extreme than nuclear war may get overlooked because we’ve decided they aren’t the priority.

“It’s an important question, especially with the growing focus on security and safety as the narrow frame for policy intervention,” says Sarah Myers West, managing director of the AI Now Institute.

When UK Prime Minister Rishi Sunak met with heads of AI firms, including Sam Altman and Demis Hassabis, in May, his government issued a statement saying: “The PM and CEOs discussed the risks of the technology, ranging from disinformation and national security, to existential threats.”

The week before, Altman told the US Senate that his worst fears were that the AI industry would cause significant harm to the world. Altman’s testimony helped spark calls for a new kind of agency to address such unprecedented harm.

With the Overton window shifted, is the damage done? “If we’re talking about the far future, if we’re talking about mythological risks, then we are completely reframing the problem to be a problem that exists in a fantasy world and its solutions can exist in a fantasy world too,” says Whittaker.

But Whittaker points out that policy discussions around AI have been going on for years, longer than this recent buzz of fear. “I don’t believe in inevitability,” she says. “We will see a beating back of this hype, it will subside.”
