This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Is it hot where you are? It sure is here in London. I’m writing this newsletter with a fan blasting at full power in my direction and still feel like my brain is melting. Last week was the hottest week on record. It’s yet another sign that climate change is “out of control,” the UN secretary general said. 

Punishing heat waves and extreme weather events like hurricanes and floods will become more common as the climate crisis worsens, making it more important than ever to produce accurate weather forecasts.  

AI is proving increasingly helpful with that. In the past year, weather forecasting has been having an AI moment. 

Three recent papers from Nvidia, Google DeepMind, and Huawei have introduced machine-learning methods that can predict the weather at least as accurately as conventional methods, and much faster. Last week I wrote about Pangu-Weather, an AI model developed by Huawei that can forecast not only the weather but also the path of tropical cyclones. Read more here.

Huawei’s Pangu-Weather, Nvidia’s FourCastNet, and Google DeepMind’s GraphCast are making meteorologists “reconsider how we use machine learning and weather forecasts,” Peter Dueben, head of Earth system modeling at the European Centre for Medium-Range Weather Forecasts (ECMWF), told me for the story. 

ECMWF’s weather forecasting model is considered the gold standard for medium-range forecasting (up to 15 days ahead). Pangu-Weather achieved accuracy comparable to the ECMWF model’s, while Google DeepMind claims in a non-peer-reviewed paper to have beaten it 90% of the time in the combinations tested.


Using AI to predict weather has a big advantage: it’s fast. Traditional forecasting models are big, complex computer algorithms based on atmospheric physics and take hours to run. AI models can create forecasts in just seconds. 

But they are unlikely to replace conventional weather prediction models anytime soon. AI-powered forecasting models are trained on historical weather data that goes back decades, which means they are great at predicting events that are similar to the weather of the past. That’s a problem in an era of increasingly unpredictable conditions.

We don’t know if AI models will be able to predict rare and extreme weather events, says Dueben. He thinks the way forward might be for AI tools to be adopted alongside traditional weather forecasting models to get the most accurate predictions. 
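
To make that caveat concrete, here is a minimal, purely illustrative Python sketch of the data-driven idea: a model fitted to a synthetic "historical record" produces forecasts instantly, but has no basis for conditions it has never seen. The synthetic data, the simple linear model, and the forecast function are all assumptions made for this example; none of this reflects how Pangu-Weather, GraphCast, or FourCastNet are actually built.

```python
# Toy sketch of a data-driven forecaster (not the real models): it learns only
# from a historical archive, so it is fast but bound to the patterns it has seen.
import numpy as np

rng = np.random.default_rng(0)

# Fake "historical record": 40 years of daily temperatures with a seasonal cycle.
days = np.arange(40 * 365)
history = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)

# Build (yesterday's temperature, season) -> (today's temperature) training pairs.
X = np.column_stack([history[:-1], np.sin(2 * np.pi * days[1:] / 365)])
y = history[1:]

# Fit a linear model by least squares: training and inference take seconds,
# unlike an hours-long physics-based simulation.
coeffs, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(X))]), y, rcond=None)

def forecast(yesterday_temp, day_of_year):
    features = np.array([yesterday_temp, np.sin(2 * np.pi * day_of_year / 365), 1.0])
    return features @ coeffs

# A typical summer day is predicted reasonably well...
print(forecast(25.0, 200))
# ...but an unprecedented 45 C heat wave lies outside everything in the training
# record, so the model has no grounding for it -- the article's caveat.
print(forecast(45.0, 200))
```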

Big Tech’s arrival on the weather forecasting scene is not purely based on scientific curiosity, reckons Oliver Fuhrer, the head of the numerical prediction department at MeteoSwiss, the Swiss Federal Office of Meteorology and Climatology. 

Our economies are becoming increasingly dependent on weather, especially with the rise of renewable energy, says Fuhrer. Tech companies’ businesses are also linked to weather, he adds, pointing to anything from logistics to the number of search queries for ice cream.  

The field of weather forecasting could gain a lot from the addition of AI. Countries track and record weather data, which means there is plenty of publicly available data out there to use in training AI models. When combined with human expertise, AI could help speed up a painstaking process. What’s next isn’t clear, but the prospects are exciting. “Part of it is also just exploring the space and figuring out what potential services or business models might be,” Fuhrer says. 


Deeper Learning

AI-text detection tools are really easy to fool

Within weeks of ChatGPT’s launch, there were fears that students would use the chatbot to spin up passable essays in seconds. In response, startups began making products that promise to spot whether text was written by a human or a machine. Turns out it’s relatively simple to trick these tools and avoid detection. 

Snake-oil alert: I’ve written about how difficult—if not impossible—it is to detect AI-generated text. As my colleague Rhiannon Williams reports, new research has found that most of the tools claiming to spot such text perform poorly. Researchers tested 14 detection tools and found that while they were good at spotting human-written text (96% accuracy on average), accuracy fell to 74% for AI-generated text, and to just 42% when that text had been slightly tweaked. Read more.

Bits and Bytes

AI companies are facing a flood of lawsuits over privacy and copyright
What America lacks in AI regulation, it makes up for in multimillion-dollar lawsuits. In late June, a California law firm launched a class action lawsuit against OpenAI, claiming that the company violated the privacy of millions of people when it scraped data from the internet to train its model. Now, actor and comedian Sarah Silverman is suing OpenAI and Meta for using her copyrighted work to train their AI models. These cases, along with existing copyright lawsuits brought by artists, could set an important precedent for how AI is developed in the US. 


OpenAI has introduced a new concept: “superalignment” 
It’s a bird … It’s a plane … It’s superalignment! OpenAI is assembling a team of researchers to work on “superintelligence alignment.” That means they’ll focus on solving the technical challenges that would be involved in controlling AI systems that are smarter than humans. 

On one hand, I think it’s great that OpenAI is working to mitigate the harm that could be done by the superintelligent AI it is trying to build. But on the other hand, such AI systems remain wildly hypothetical, and existing systems cause plenty of harm today. At the very least, I hope OpenAI comes up with more effective ways to control this generation of AI models. (OpenAI)

Big Tech says it wants AI regulation, so long as users bear the brunt
This story gives a nice overview of the lobbying happening behind the scenes around the AI Act. While tech companies say they support regulation, they are pushing back against EU efforts to impose stricter rules around their AI products. (Bloomberg)

How elite schools like Stanford became fixated on the AI apocalypse
Fears about existential AI risk didn’t come from nowhere. In fact, as this piece explains, it’s a billionaire-backed movement that’s recruited an army of elite college students to its cause. And they’re keen to capitalize on the current moment. (The Washington Post)
