Geoffrey Hinton, a VP and Engineering Fellow at Google—and a pioneer of deep learning who developed some of the most important techniques at the heart of modern AI—is leaving the company after 10 years, the New York Times reported today.

According to the Times, Hinton says he has new fears about the technology he helped usher in and wants to speak openly about them, and that a part of him now regrets his life’s work.

Hinton, who will be speaking live to MIT Technology Review in his first post-resignation interview at EmTech Digital on Wednesday, was a joint recipient with Yann LeCun and Yoshua Bengio of the 2018 Turing Award—computing’s equivalent of the Nobel.

“Geoff’s contributions to AI are tremendous,” says LeCun, who is chief AI scientist at Meta. “He hadn’t told me he was planning to leave Google, but I’m not too surprised.”

The 75-year-old computer scientist has divided his time between the University of Toronto and Google since 2013, when the tech giant acquired Hinton’s AI startup DNNresearch. Hinton’s company was a spin-out from his research group, which was doing cutting-edge work with machine learning for image recognition at the time. Google used that technology to boost photo search and more.

Hinton has long called out ethical questions around AI, especially its co-option for military purposes. He has said that one reason he chose to spend much of his career in Canada is that it is easier to get research funding that does not have ties to the U.S. Department of Defense. 

Hinton is best known for an algorithm called backpropagation, which he first proposed with two colleagues in the 1980s. The technique, which allows artificial neural networks to learn, underpins nearly all of today’s machine learning models. In a nutshell, backpropagation is a way to adjust the connections between artificial neurons over and over until a neural network produces the desired output.
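That loop—run the network forward, measure the error, push the error backward to nudge each connection—can be sketched in a few lines. The following is a minimal illustration in Python with NumPy (not Hinton's original formulation, and all names and hyperparameters here are arbitrary choices for the example): a tiny two-layer network learns the XOR function by repeated backpropagation updates.

```python
import numpy as np

# A minimal backpropagation sketch: a 2-8-1 sigmoid network learns XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized connections (weights) and biases.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate: how much to adjust connections each step
for step in range(5000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through each
    # layer to get the gradient of the squared error w.r.t. every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Adjust each connection slightly in the direction that lowers the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # outputs should approach 0, 1, 1, 0
```

Repeating those tiny adjustments thousands of times is all it takes for the network to discover a function no single layer could represent—the core idea that now underpins models with billions of connections.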


Hinton believed that backpropagation mimicked how biological brains learn. He has been searching for better approximations ever since, but has never improved on it.

“In my numerous discussions with Geoff, I was always the proponent of backpropagation and he was always looking for another learning procedure, one that he thought would be more biologically plausible, and perhaps a better model of how learning works in the brain,” says LeCun.

“Geoff Hinton certainly deserves the greatest credit for many of the ideas that have made current deep learning possible,” says Yoshua Bengio, who is a professor at the University of Montreal and scientific director of the Montreal Institute for Learning Algorithms. “I assume this also makes him feel a particularly strong sense of responsibility in alerting the public about potential risks of the ensuing advances in AI.”

MIT Technology Review will have more on Hinton throughout the week. Be sure to tune in to Will Douglas Heaven’s live interview with Hinton at EmTech Digital on Wednesday, May 3, at 1:30 p.m. ET. Tickets are available from the event website.
