This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Lately I’ve lost a lot of sleep over climate change. It’s just over five weeks until Christmas, and last weekend in London, it was warm enough to have a pint outside without a coat. As world leaders gather in Egypt for the final week of climate conference COP27 to “blah, blah, blah,” this week I’m focusing on the carbon footprint of AI. 

I’ve just published a story about the first attempt to calculate the broader emissions of one of the most popular AI products right now—large language models—and how that work could help nudge the tech sector to do more to clean up its act.

AI startup Hugging Face calculated the emissions of its large language model BLOOM, and its researchers found that the training process emitted around 25 metric tons of carbon dioxide. Those emissions doubled, however, when they took into account the wider hardware and infrastructure costs of running the model. They published their work in a paper posted on arXiv that has yet to be peer reviewed.
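For a sense of the arithmetic behind estimates like this, the standard approach is to multiply the energy the training hardware consumed by the carbon intensity of the electricity grid it ran on. Here is a minimal sketch in Python; every number in it is an illustrative assumption, not a figure from the Hugging Face paper:

```python
# Back-of-the-envelope training-emissions estimate:
# emissions = energy consumed (kWh) x grid carbon intensity (kg CO2e per kWh).
# All numbers below are illustrative assumptions, not figures from the paper.

gpu_power_kw = 0.4          # assumed average draw per GPU, in kilowatts
num_gpus = 384              # assumed size of the training cluster
training_hours = 24 * 100   # assumed ~100 days of training
pue = 1.2                   # assumed data-center overhead (power usage effectiveness)
grid_intensity = 0.057      # kg CO2e per kWh; low values reflect a nuclear-heavy grid

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
emissions_tonnes = energy_kwh * grid_intensity / 1000

print(f"Estimated training emissions: {emissions_tonnes:.1f} metric tons CO2e")
```

Swap in a coal-heavy grid intensity of around 0.7 kilograms per kilowatt-hour and the same run emits more than ten times as much, which is why where a model is trained matters so much.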

The finding in itself isn’t hugely surprising; BLOOM is far “cleaner” than large language models such as OpenAI’s GPT-3 and Meta’s OPT because it was trained on a French supercomputer powered mostly by nuclear energy. The real significance of this work is that it points to a better way to calculate AI models’ climate impact: going beyond just the training to the way models are used in the real world.

The training is just the tip of the iceberg: while it’s very polluting, it only has to happen once. Once released into the wild, AI models power things like tech companies’ recommendation engines or efforts to classify user comments. Each of those actions uses far less energy than a training run, but they can happen a billion times a day. That adds up.
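A quick back-of-the-envelope calculation shows why deployment can come to dominate. This sketch uses purely hypothetical figures, chosen only to illustrate the scale:

```python
# Illustrative comparison of a one-off training cost with ongoing
# inference emissions. All figures are hypothetical assumptions.

training_emissions_kg = 25_000      # one-off training footprint, ~25 metric tons CO2e
per_query_emissions_kg = 2e-6       # assumed 2 milligrams CO2e per inference
queries_per_day = 1_000_000_000     # a billion requests a day

daily_inference_kg = per_query_emissions_kg * queries_per_day
days_to_match = training_emissions_kg / daily_inference_kg

print(f"Inference emits {daily_inference_kg:,.0f} kg CO2e per day")
print(f"Deployment matches the training footprint after {days_to_match:.1f} days")
```

Under these made-up numbers, serving the model overtakes its entire training footprint in under two weeks, and every day after that adds to the deployment side of the ledger.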

Tech companies want us to focus only on the emissions from training AI models, because it makes them look better, David Rolnick, an assistant professor of computer science at McGill University who works on AI and climate change, told me.

But the true carbon footprint of artificial intelligence is likely to be bigger than even Hugging Face’s work suggests, Rolnick argues, when you take into account how AI is being used to boost extremely polluting industries—not to mention its broader, societal knock-on effects. For example, recommendation algorithms are often used in advertising, which in turn drives people to buy more things, which causes more emissions. 

And while AI may play a part in fighting climate change, it’s also contributing to our planet’s death spiral. The tech sector as a whole is estimated to account for 1.8% to 3.9% of global greenhouse-gas emissions. Although only a fraction of those emissions are caused by AI and machine learning, AI’s carbon footprint is still very high for a single field within tech.

The Hugging Face paper is a good way to begin addressing that, because it tries to provide honest data on the broader emissions attributable to an AI model. Tech companies such as Google and Meta, which dominate the sector, don’t publish this data, so we really don’t have a remotely accurate picture of AI’s carbon footprint right now.

Demanding that tech companies provide more data about the climate impact of building, training, and using AI is a start. We should also shift away from our obsession with building ever-bigger AI models and find more energy-efficient ways to do AI research, such as fine-tuning existing models rather than training new ones from scratch.
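To make the fine-tuning point concrete, here is a minimal sketch of why adapting an existing model is so much cheaper than training one from scratch. It assumes PyTorch and Hugging Face’s transformers library, and the model choice is arbitrary; the idea is simply that most weights stay frozen, so only a tiny fraction ever needs updating:

```python
# A sketch of energy-efficient fine-tuning: reuse a pretrained model and
# update only a small fraction of its weights. Model choice is illustrative.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze the pretrained encoder; only the small classification head trains.
for param in model.bert.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Training {trainable:,} of {total:,} parameters "
      f"({100 * trainable / total:.4f}%)")
```

Fewer trainable parameters means less compute per update, which translates directly into less energy than a from-scratch training run.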

Deeper Learning

Inside Alphabet X’s new effort to combat climate change with AI and seagrass 

MIT Technology Review got a sneak peek at Tidal, a new climate change mitigation project from X, the moonshot division of Google’s parent company, Alphabet. Tidal uses cameras, computer vision, and machine learning to track the carbon stored in the biomass of the oceans. The project aims to improve our understanding of underwater ecosystems in order to inform and incentivize efforts to protect the oceans amid mounting threats from pollution, overfishing, ocean acidification, and global warming.

With projects like Tidal, X is creating tools to ensure that industries can do more to address environmental dangers and that ecosystems can survive in a hotter, harsher world. It’s also leaning heavily into its parent company’s areas of strength, drawing on Alphabet’s robotics expertise as well as its ability to derive insights from massive amounts of data using artificial intelligence. Read James Temple’s story about it.

Bits and Bytes

Elon Musk is starting to see the consequences of laying off AI teams
When the billionaire took over Twitter, he laid off half the company’s staff, including the ethical-AI team and the machine-learning teams that worked to keep the platform’s infrastructure safe, secure, and reliable. The results were almost immediate: the site is starting to break down. We spoke to a former Twitter engineer about how it’s likely to pan out. (MIT Technology Review)

A lawsuit could rewrite the rules of AI copyright
In the first US class action lawsuit over the training of AI systems, Microsoft, GitHub, and OpenAI are being sued for allegedly violating copyright law by reproducing open-source code using AI. GitHub Copilot was trained on huge amounts of publicly available code and, like large language models, can regurgitate what it has learned without crediting the original source. A lawsuit challenging the legality of this approach could have major knock-on effects for other AI systems trained by scraping the web, from text generation to image making. (The Verge)

Are the US and China really in an AI cold war? 
This is a really interesting series that unpacks some of the problematic narratives around the race for AI development between the US and China. (Protocol)

Amazon’s new robot can handle most items in its warehouse
The new robot, called Sparrow, can pick items out of shelves or bins so they can be packed into boxes. That has traditionally been too tricky for robots, because warehouse items come in so many different shapes and sizes. Sparrow uses machine learning and cameras to identify objects, which could help speed up warehouse operations. (Wired)

Supermodel generator
A new text-to-image AI called Aperture, from Lexica, is reportedly about to drop this week, and it seems able to generate very realistic-looking photos of supermodels. I’m very curious to see the model in action, because other popular image-generating AIs, such as DALL-E and Stable Diffusion, struggle to generate fingers and hands, as well as human faces that don’t look as if they have melted in the sun.
