This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

David Baker is sleep-deprived but happy. He’s just won the Nobel prize, after all. 

The call from the Royal Swedish Academy of Sciences woke him in the middle of the night. Or rather, his wife did. She answered the phone at their home in Washington State and screamed that he’d won the Nobel Prize for Chemistry. The prize is the ultimate recognition of his work as a biochemist at the University of Washington.

“I woke up at two [a.m.] and basically didn’t sleep through the whole day, which was all parties and stuff,” he told me the day after the announcement. “I’m looking forward to getting back to normal a little bit today.”

Last week was a major milestone for AI, with two Nobel prizes awarded for AI-related discoveries. 

Baker wasn’t alone in winning the Nobel Prize for Chemistry. The Royal Swedish Academy of Sciences also awarded it to Demis Hassabis, the cofounder and CEO of Google DeepMind, and John M. Jumper, a director at the same company. Hassabis and Jumper were recognized for their work on AlphaFold, a tool that can predict how proteins are structured, while Baker was recognized for his work using AI to design new proteins. Read more about it here.

Meanwhile, the physics prize went to Geoffrey Hinton, a computer scientist whose pioneering work on deep learning in the 1980s and ’90s underpins all of the most powerful AI models in the world today, and fellow computer scientist John Hopfield, who invented a type of pattern-matching neural network that can store and reconstruct data. Read more about it here.


Speaking to reporters after the prize was announced, Hassabis said he believes that it will herald more AI tools being used for significant scientific discoveries. 

But there is one problem. AI needs masses of high-quality data to be useful for science, and databases containing that sort of data are rare, says Baker. 

The prize is a recognition of the whole community of people working as protein designers. It will help move protein design from the “lunatic fringe of stuff that no one ever thought would be useful for anything to being at the center stage,” he says.  

AI has been a gamechanger for biochemists like Baker. Seeing what DeepMind was able to do with AlphaFold made it clear that deep learning was going to be a powerful tool for their work. 

“There’s just all these problems that were really hard before that we are now having much more success with thanks to generative AI methods. We can do much more complicated things,” Baker says. 

Baker is already busy at work. He says his team is focusing on designing enzymes, which carry out all the chemical reactions that living things rely upon to exist. His team is also working on medicines that only act at the right time and place in the body. 

But Baker is hesitant to call this a watershed moment for AI in science. 

In AI there’s a saying: Garbage in, garbage out. If the data that is fed into AI models is not good, the outcomes won’t be dazzling either. 

The power of the Chemistry Nobel Prize-winning AI tools lies in the Protein Data Bank (PDB), a rare treasure trove of high-quality, curated and standardized data. This is exactly the kind of data that AI needs to do anything useful. But the current trend in AI development is training ever-larger models on the entire content of the internet, which is increasingly full of AI-generated slop. This slop in turn gets sucked into datasets and pollutes the outcomes, leading to bias and errors. That’s just not good enough for rigorous scientific discovery.


“If there were many databases as good as the PDB, I would say, yes, this [prize] probably is just the first of many, but it is kind of a unique database in biology,” Baker says. “It’s not just the methods, it’s the data. And there aren’t so many places where we have that kind of data.”

Now read the rest of The Algorithm

Deeper Learning

Adobe wants to make it easier for artists to blacklist their work from AI scraping

Adobe has announced a new tool to help creators watermark their work and opt out of having it used to train generative AI models. The web app, called Adobe Content Authenticity, also gives artists the opportunity to add “content credentials,” including their verified identity, social media handles, or other online domains, to their work.

A digital signature: Content credentials are based on C2PA, an internet protocol that uses cryptography to securely label images, video, and audio with information clarifying where they came from—the 21st-century equivalent of an artist’s signature. Creators can apply them to their content regardless of whether it was created using Adobe tools. The company is launching a public beta in early 2025. Read more from Rhiannon Williams here.

Bits and Bytes

Why artificial intelligence and clean energy need each other
A geopolitical battle is raging over the future of AI. The key to winning it is a clean-energy revolution, argue Michael Kearney and Lisa Hansmann, from Engine Ventures, a firm that invests in startups commercializing breakthrough science and engineering. They believe that AI’s huge power demands represent a chance to scale the next generation of clean energy technologies. (MIT Technology Review)


The state of AI in 2025
AI investor Nathan Benaich and Air Street Capital have released their annual analysis of the state of AI. Their predictions for the next year? Big, proprietary models will start to lose their edge, and labs will focus more on planning and reasoning. Perhaps unsurprisingly, the investor also bets that a handful of AI companies will begin to generate serious revenue. 

Silicon Valley, the new lobbying monster
Big Tech’s tentacles reach everywhere in Washington, DC. This is a fascinating look at how tech companies lobby politicians to influence how AI is regulated in the United States. (The New Yorker)
