Geoffrey Hinton, a computer scientist whose pioneering work on deep learning in the 1980s and ’90s underpins all of the most powerful AI models in the world today, has been awarded the 2024 Nobel Prize in physics by the Royal Swedish Academy of Sciences.

Speaking on the phone to the Academy minutes after the announcement, Hinton said he was flabbergasted: “I had no idea this would happen. I’m very surprised.”

Hinton shares the award with fellow computer scientist John Hopfield, who invented a type of pattern-matching neural network that could store and reconstruct data. Hinton built on this technology, known as a Hopfield network, to develop the Boltzmann machine, an early neural network that learns to recognize characteristic patterns in data.
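To give a concrete sense of what a Hopfield network does, here is a minimal sketch in Python. It is illustrative only, not the laureates' code: the bipolar pattern, the Hebbian outer-product storage rule used here, and the function names are assumptions made for the example. The network stores a pattern in its connection weights and then reconstructs it from a corrupted cue.

```python
# Minimal Hopfield network sketch (an illustrative assumption, not the laureates' code).
# Stores bipolar (+1/-1) patterns with the Hebbian outer-product rule and
# reconstructs a stored pattern from a noisy cue.
import numpy as np

def train_hopfield(patterns):
    """Build the weight matrix by summing outer products of the stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Repeatedly update all units; the state settles toward a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1       # break ties toward +1
    return state

# Usage: store one six-unit pattern, then recover it from a copy with one unit flipped.
stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])
print(recall(W, noisy))             # prints the original stored pattern
```

Run as written, the corrupted cue snaps back to the stored pattern after a single update, which is the "store and reconstruct" behavior described above.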

Hopfield and Hinton borrowed methods from physics, especially statistical techniques, to develop their approaches. In the words of the Nobel Prize committee, the pair are recognized “for foundational discoveries and inventions that enable machine learning with artificial neural networks.”
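To make that borrowing concrete: the Hopfield network is governed by an energy function with the same form as the Ising model of interacting magnetic spins in statistical physics (a standard textbook formulation, not quoted from the Nobel citation):

$$E(\mathbf{s}) = -\tfrac{1}{2}\sum_{i \neq j} w_{ij}\, s_i s_j$$

where each unit state $s_i$ is $+1$ or $-1$ and the $w_{ij}$ are symmetric connection weights. Updating units one at a time never increases $E$, so the network settles into a local energy minimum corresponding to a stored pattern. Hinton's Boltzmann machine instead samples states from the Boltzmann distribution over such an energy, which is where its name comes from.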

But since May 2023, when MIT Technology Review helped break the news that Hinton was now scared of the technology that he had helped bring about, the 76-year-old scientist has become much better known as a figurehead for doomerism—the idea there’s a very real risk that near-future AI could produce catastrophic results, up to and including human extinction.  

Doomerism wasn’t new, but Hinton—who won the Turing Award, the top prize in computer science, in 2018—brought new credibility to a position that many of his peers once considered kooky.

What led Hinton to speak out? When I met with him in his London home last year, Hinton told me that he was awestruck by what the latest large language models could do. OpenAI’s latest flagship model, GPT-4, had been released a few weeks before. Based on what Hinton had seen, he was now convinced that such technology—based on deep learning—would quickly become smarter than humans. And he was worried about what motivations it would have when it did.  


“I have suddenly switched my views on whether these things are going to be more intelligent than us,” he told me at the time.  “I think they’re very close to it now and they will be much more intelligent than us in the future. How do we survive that?”

Hinton’s views set off a months-long media buzz and made the kind of existential risks that he and others were imagining (from economic collapse to genocidal robots) into mainstream concerns. Hundreds of top scientists and tech leaders signed open letters warning of the potential catastrophic downsides of artificial intelligence. A moratorium on AI development was floated. Politicians assured voters they would do what they could to prevent the worst.

Despite the buzz, many consider Hinton’s views to be fantastical. Yann LeCun, chief scientist at Meta AI and Hinton’s fellow recipient of the 2018 Turing Award, has called doomerism “preposterously ridiculous.”

Today’s prize rewards foundational work in a technology that has become part of everyday life. It is also sure to shine an even brighter light on Hinton’s more scaremongering opinions.
