In September 2017, about two minutes before shaking from a magnitude 8.2 earthquake reached Mexico City, blaring sirens alerted residents that a quake was coming. Such alerts, which are now available in the United States, Japan, Turkey, Italy, and Romania, among other countries, have changed the way we think about the threat of earthquakes. They no longer have to take us entirely by surprise.

Earthquake early warning systems can send alarms through phones or broadcast a loud signal to affected regions three to five seconds after a potentially damaging earthquake begins. First, seismometers close to the fault pick up the beginning of the quake, and finely tuned algorithms estimate its probable size. If it is moderate or large, the resulting alert travels faster than the quake's shaking, giving seconds to minutes of warning. This window of time is crucial: in these brief moments, people can shut off electricity and gas lines, move fire trucks into the streets, and find safe places to go.
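The arithmetic behind that window is simple: damaging shear waves travel through the crust at only a few kilometers per second, while an alert moves at effectively the speed of light. A minimal sketch, using typical textbook values rather than the parameters of any real warning system:

```python
# Back-of-the-envelope estimate of early-warning lead time. The speed
# and latency below are typical textbook values, not the parameters of
# any specific warning system.

S_WAVE_SPEED_KM_S = 3.5     # damaging shear waves travel roughly 3.5 km/s
DETECTION_LATENCY_S = 5.0   # time to detect the quake and issue the alert

def warning_seconds(distance_km: float) -> float:
    """Seconds of warning a site distance_km from the epicenter receives,
    assuming the alert itself arrives essentially instantly."""
    travel_time = distance_km / S_WAVE_SPEED_KM_S
    return max(0.0, travel_time - DETECTION_LATENCY_S)

for d in (20, 100, 350):
    print(f"{d} km from the epicenter: ~{warning_seconds(d):.0f} s of warning")
```

The farther a site sits from the epicenter, the more warning it gets; very close to the fault, detection latency eats the entire window, which is why early warning helps distant population centers most.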

The magnitude 9 Tohoku-Oki earthquake of 2011 was preceded by two slow earthquakes.
AP IMAGES

But these systems have limitations. There are false positives and false negatives. What’s more, they react only to an earthquake that has already begun—we can’t predict an earthquake the way we can forecast the weather. And so many earthquake-prone regions are left in a state of constant suspense. A proper forecast could let us do a lot more to manage risk, from shutting down the power grid to evacuating residents.

When I started my PhD in seismology in 2013, the very topic of earthquake prediction was deemed unserious, considered as far outside the realm of mainstream research as the hunt for the Loch Ness Monster.

But just seven years later, a lot had changed. When I began my second postdoc in 2020, I observed that scientists in the field had become much more open to earthquake prediction. The project I was a part of, Tectonic, was using machine learning to advance earthquake prediction. The European Research Council was sufficiently convinced of its potential to award it a four-year, €3.4 million grant that same year. 

Today, a number of well-respected scientists are getting serious about the prospect of prediction and are making progress in their respective subdisciplines. Some are studying a different kind of slow-motion behavior along fault lines, which could turn out to be a useful indicator that the devastating kind of earthquake we all know and fear is on the way. Others are hoping to tease out hints from other data—signals in seismic noise, animal behavior, and electromagnetism—to push earthquake science toward the possibility of issuing warnings before the shaking begins. 

In the dark

Earthquake physics can seem especially opaque. Astronomers can view the stars; biologists can observe an animal. But those of us who study earthquakes cannot see into the ground—at least not directly. Instead, we use proxies to understand what happens inside the Earth when its crust shakes: seismology, the study of the sound waves generated by movement within the interior; geodesy, the application of tools like GPS to measure how Earth’s surface changes over time; and paleoseismology, the study of relics of past earthquakes concealed in geologic layers of the landscape. 


There is much we still don’t know. Decades after the theory of plate tectonics was widely accepted in the 1960s, our understanding of earthquake genesis hasn’t progressed far beyond the idea that stress builds to a critical threshold, at which point it is released through a quake. Different factors can make a fault more susceptible to reaching that point. The presence of fluids, for instance, is significant: the injection of wastewater from oil and gas production has caused huge increases in seismic activity across the central US in the last decade. But when it comes to knowing what is happening along a given fault line, we’re largely in the dark. We can construct an approximate map of a fault by using seismic waves and mapping earthquake locations, but we can’t directly measure the stress it is experiencing, nor can we quantify the threshold beyond which the ground will move.

For a long time, the best we could do regarding prediction was to get a sense of how often earthquakes happen in a particular region. For example, the last earthquake to rupture the entire length of the southern San Andreas Fault in California was in 1857. The average time period between big quakes there is estimated to be somewhere between 100 and 180 years. According to a back-of-the-envelope calculation, we could be “overdue.” But as the wide range suggests, recurrence intervals can vary wildly and may be misleading. The sample size is limited to the scope of human history and what we can still observe in the geologic record, which represents a small fraction of the earthquakes that have occurred over Earth’s history.
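The "overdue" arithmetic here really is back-of-the-envelope. A minimal sketch, using only the 1857 rupture date and the 100-to-180-year range mentioned above:

```python
# Sketch of the "overdue" reasoning for the southern San Andreas Fault.
# The 1857 date and the 100-180-year range come from the text; the rest
# is simple arithmetic, not a real seismic-hazard model.

LAST_RUPTURE = 1857
INTERVAL_LOW, INTERVAL_HIGH = 100, 180  # estimated recurrence range, years

def overdue_status(current_year: int) -> str:
    elapsed = current_year - LAST_RUPTURE
    if elapsed > INTERVAL_HIGH:
        return f"{elapsed} years elapsed: past even the high estimate"
    if elapsed > INTERVAL_LOW:
        return f"{elapsed} years elapsed: inside the estimated window"
    return f"{elapsed} years elapsed: below the low estimate"

print(overdue_status(2024))  # 2024 - 1857 = 167 years, inside the window
```

As the wide range suggests, a verdict like "inside the estimated window" carries enormous uncertainty: the window itself is an average over a small sample of past events.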


In 1985, scientists began installing seismometers and other earthquake monitoring equipment along the Parkfield section of the San Andreas Fault, in central California. Six earthquakes in that section had occurred at unusually regular intervals compared to earthquakes along other faults, so scientists from the US Geological Survey (USGS) forecast with a high degree of confidence that the next earthquake of a similar magnitude would occur before 1993. The experiment is largely considered a failure—the earthquake didn’t come until 2004.

Instances of regular intervals between earthquakes of similar magnitudes have been noted in other places, including Hawaii, but these are the exception, not the rule. Far more often, recurrence intervals are given as averages with large margins of error. For areas prone to large earthquakes, these intervals can be on the scale of hundreds of years, with uncertainty bars that also span hundreds of years. Clearly, this method of forecasting is far from an exact science. 

Tom Heaton, a geophysicist at Caltech and a former senior scientist at the USGS, is skeptical that we will ever be able to predict earthquakes. He treats them largely as stochastic processes, meaning we can attach probabilities to events, but we can’t forecast them with any accuracy. 

“In terms of physics, it’s a chaotic system,” Heaton says. Underlying it all is significant evidence that Earth’s behavior is ordered and deterministic. But without good knowledge of what’s happening under the ground, it’s impossible to intuit any sense of that order. “Sometimes when you say the word ‘chaos,’ people think [you] mean it’s a random system,” he says. “Chaotic means that it’s so complicated you cannot make predictions.” 

But as scientists’ understanding of what’s happening inside Earth’s crust evolves and their tools become more advanced, it’s not unreasonable to expect that their ability to make predictions will improve. 

Slow shakes

Given how little we can quantify about what’s going on in the planet’s interior, it makes sense that earthquake prediction has long seemed out of the question. But in the early 2000s, two discoveries began to open up the possibility. 

First, seismologists discovered a strange, low-amplitude seismic signal in a tectonic region of southwest Japan. It lasted anywhere from hours to several weeks and recurred at somewhat regular intervals; it wasn’t like anything they’d seen before. They called it tectonic tremor.

Meanwhile, geodesists studying the Cascadia subduction zone, a massive stretch off the coast of the US Pacific Northwest where one plate is diving under another, found evidence of times when part of the crust slowly moved in the opposite of its usual direction. This phenomenon, dubbed a slow slip event, happened in a thin section of Earth’s crust located beneath the zone that produces regular earthquakes, where higher temperatures and pressures have more impact on the behavior of the rocks and the way they interact.

The scientists studying Cascadia also observed the same sort of signal that had been found in Japan and determined that it was occurring at the same time and in the same place as these slow slip events. A new type of earthquake had been discovered. Like regular earthquakes, these transient events—slow earthquakes—redistribute stress in the crust, but they can take place over all kinds of time scales, from seconds to years. In some cases, as in Cascadia, they occur regularly, but in other areas they are isolated incidents.

Scientists subsequently found that during a slow earthquake, the risk of regular earthquakes can increase, particularly in subduction zones. The locked part of the fault that produces regular earthquakes is stressed both by steady plate motion and by the irregular, periodic backward motion that slow earthquakes produce at depths below where regular quakes begin. These elusive slow events became the subject of my PhD research, but (as is often the case with graduate work) I certainly didn’t resolve the problem. To this day, it is unclear what exact mechanisms drive this kind of activity.

Could we nevertheless use slow earthquakes to predict regular earthquakes? Since their discovery, almost every big earthquake has been followed by several papers showing that it was preceded by a slow earthquake. The magnitude 9 Tohoku-Oki earthquake, which occurred in Japan in 2011, was preceded by not one but two slow ones. There are exceptions: for example, despite attempts to prove otherwise, there is still no evidence that a slow earthquake preceded the 2004 earthquake in Sumatra, Indonesia, which created a devastating tsunami that killed more than 200,000 people. What’s more, a slow earthquake is not always followed by a regular earthquake. It’s not known whether something distinguishes those that could be precursors from those that aren’t. 

It may be that some kind of distinctive process occurs along the fault in the hours leading up to a big quake. Last summer a former colleague of mine, Quentin Bletery, together with Jean-Mathieu Nocquet, both at Géoazur, a multidisciplinary research lab in the south of France, published an analysis of crustal-deformation data from the hours leading up to 90 large earthquakes. They found that in the two hours or so preceding an earthquake, the crust along the fault begins to deform at a faster rate in the direction of the coming rupture, right up until the instant the quake begins. What this tells us, Bletery says, is that an acceleration process occurs along the fault ahead of the earthquake’s motion: something that resembles a slow earthquake.
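The heart of such an analysis is a comparison of deformation rates: is motion along the fault in the final window faster than the long-term background rate? A toy illustration with synthetic data; the displacement series and window choices below are invented, whereas the actual study stacked GPS records from before 90 real earthquakes:

```python
# Toy illustration of precursory acceleration in a deformation record.
# Synthetic data only: steady creep, then accelerating motion just
# before a hypothetical quake at the end of the series.

def average_rate(displacements, t0, t1):
    """Mean displacement rate (units per time step) between indices t0 and t1."""
    return (displacements[t1] - displacements[t0]) / (t1 - t0)

creep = [0.01 * t for t in range(80)]                           # steady background motion
accel = [creep[-1] + 0.05 * (t + 1) ** 1.5 for t in range(20)]  # speeding up
series = creep + accel                                          # quake "occurs" at the end

background = average_rate(series, 0, 79)   # long-term rate
pre_quake = average_rate(series, 80, 99)   # rate in the final window
print(f"background rate {background:.3f}, pre-quake rate {pre_quake:.3f}")
```

In this toy record the pre-quake rate clearly exceeds the background rate; the real difficulty, as the article goes on to note, is that the equivalent signal in nature sits at or below the noise floor of today's instruments.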


“This does support the assumption that there’s something happening before. So we do have that,” he says. “But most likely, it’s not physically possible to play with the topic of prediction. We just don’t have the instruments.” In other words, the precursors may be there, but we’re currently unable to measure their presence well enough to single them out before an earthquake strikes. 

Bletery and Nocquet conducted their study using traditional statistical analysis of GPS data, but such data might contain information that lies beyond the reach of our traditional models and frames of reference. Seismologists are now applying machine learning in ways they haven’t before. Though it is still early days, the machine-learning approach could reveal hidden structures and causal links in what would otherwise look like a jumble of data.

Finding signals in the noise

Earthquake researchers have applied machine learning in a variety of ways. Some, like Mostafa Mousavi and Gregory Beroza of Stanford, have studied how to use data from a single seismic station to predict the magnitude of an earthquake, which can be tremendously useful for early warning systems and may also help clarify what factors determine an earthquake’s size.

Brendan Meade, a professor of earth and planetary science at Harvard, is able to predict the locations of aftershocks using neural networks. Zachary Ross at Caltech and others are using deep learning to pick seismic waves out of data even with high levels of background noise, which could lead to the detection of more earthquakes.

Paul Johnson of the Los Alamos National Laboratory in New Mexico, who became something between a mentor and a friend after we met during my first postdoc, is applying machine learning to help make sense of data from earthquakes generated in the lab. 

There are a number of ways to create laboratory earthquakes. One relatively common method involves placing a rock sample, cut down the center to simulate a fault, inside a metal framework that puts it under a confining pressure. Localized sensors measure what happens as the sample undergoes deformation.  

In Italy, increased agitation among animals was linked to strong earthquakes, including the deadly Norcia quake in 2016.
SIPA USA VIA AP

In 2017, a study out of Johnson’s lab showed that machine learning could help predict with remarkable accuracy how long it would take for the fault the researchers created to start quaking. Unlike many methods humans use to forecast earthquakes, this one uses no historical data—it relies only on the vibrations coming from the fault. Crucially, what human researchers had discounted as low-amplitude noise turned out to be the signal that allowed machine learning to make its predictions.
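The key insight is that statistical features of the continuous acoustic "noise," such as its variance, track how close the lab fault is to failure. A toy sketch with a synthetic signal; the signal model below is an invention for illustration, while the actual study applied random-forest regression to features of real acoustic-emission data:

```python
# Toy version of the lab result: statistical features of the continuous
# acoustic "noise" track how close a lab fault is to failure. The signal
# model below is invented for illustration only.
import random

random.seed(0)

def window_variance(signal, start, width):
    """Variance of the signal inside one analysis window."""
    w = signal[start:start + width]
    mean = sum(w) / len(w)
    return sum((x - mean) ** 2 for x in w) / len(w)

# Fake acoustic record: the noise amplitude grows as failure approaches.
n = 1000  # samples; "failure" occurs at the last sample
signal = [random.gauss(0.0, 0.1 + 2.0 * t / n) for t in range(n)]

early_var = window_variance(signal, 0, 200)    # long before failure
late_var = window_variance(signal, 800, 200)   # just before failure
print(f"variance early {early_var:.3f} vs late {late_var:.3f}")
```

A feature like this, computed in a sliding window, is exactly the kind of input a regression model can map onto time-to-failure without any earthquake catalog at all.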

In the field, Johnson’s team applied these findings to seismic data from Cascadia, where they identified a continuous acoustic signal coming from the subduction zone that corresponds to the rate at which that fault is moving through the slow earthquake cycle—a new source of data for models of the region. “[Machine learning] allows you to make these correlations you didn’t know existed. And in fact, some of them are remarkably surprising,” Johnson says. 

Machine learning could also help us create more data to study. In a 2021 paper in Nature Communications, Beroza, Mousavi, and Margarita Segou, a researcher at the British Geological Survey, showed that machine learning can identify perhaps as many as 10 times more earthquakes in seismic data than we were previously aware of, making it a powerful tool for building more robust catalogs of past earthquakes. These improved data sets can help us—and machines—understand earthquakes better.

“You know, there’s tremendous skepticism in our community, with good reason,” Johnson says. “But I think this is allowing us to see and analyze data and realize what those data contain in ways we never could have imagined.”

Animal senses

While some researchers are relying on the most current technology, others are looking back at history to formulate some pretty radical studies based on animals. One of the shirts I collected over 10 years of attending geophysics conferences features the namazu, a giant mythical catfish that in Japan was believed to generate earthquakes by swimming beneath Earth’s crust. 


The creature is seismology’s unofficial mascot. Prior to the 1855 Edo earthquake in Japan, a fisherman recorded some atypical catfish activity in a river. In a 1933 paper published in Nature, two Japanese seismologists reported that catfish in enclosed glass chambers behaved with increasing agitation before earthquakes—a phenomenon said to predict them with 80% accuracy. 


Catfish are not the only ones. Records dating back as early as 373 BCE show that many species, including rats and snakes, left a Greek city days before it was destroyed by an earthquake. Reports noted that horses cried out, and some fled San Francisco, in the early morning hours before the 1906 earthquake.

Martin Wikelski, a research director at the Max Planck Institute of Animal Behavior, and his colleagues have been studying the possibility of using the behavior of domesticated animals to help predict earthquakes. In 2016 and 2017 in central Italy, the team attached motion detectors to dogs, cows, and sheep. They determined a baseline level of movement and set a threshold for what would indicate agitated behavior: a 140% increase in motion relative to the baseline for periods lasting longer than 45 minutes. They found that the animals became agitated before eight of nine earthquakes greater than magnitude 4, including the deadly magnitude 6.6 Norcia earthquake of 2016. And there were no false positives—no times when the animals were agitated and an earthquake did not occur. They also found that the closer the animals were to the earthquake’s source, the more advance warning their seemingly panicked behavior could provide.
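The detection rule the team used is simple enough to sketch directly: flag agitation when activity exceeds the baseline by more than 140% for longer than 45 minutes. The one-reading-per-minute layout and the activity values below are assumptions made for illustration:

```python
# Threshold rule from the study: agitation = activity more than 140%
# above baseline (i.e., above 2.4x baseline), sustained for longer than
# 45 minutes. The data layout here is an assumption for illustration.

THRESHOLD_FACTOR = 2.4   # baseline + 140% of baseline
MIN_DURATION_MIN = 45    # must persist longer than 45 minutes

def agitation_alert(activity, baseline):
    """True if activity (one reading per minute) stays above the
    threshold for a run longer than MIN_DURATION_MIN minutes."""
    run = 0
    for reading in activity:
        run = run + 1 if reading > THRESHOLD_FACTOR * baseline else 0
        if run > MIN_DURATION_MIN:
            return True
    return False

calm = [1.0] * 120                               # two quiet hours
agitated = [1.0] * 30 + [2.6] * 60 + [1.0] * 30  # one hour of elevated motion
print(agitation_alert(calm, 1.0), agitation_alert(agitated, 1.0))
```

Requiring the elevated motion to be sustained is what keeps short bursts of ordinary activity, a passing dog or a feeding, from triggering false alarms.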

Wikelski has a hypothesis about this phenomenon: “My take on the whole thing would be that it could be something that’s airborne, and the only thing that I can think of is really the ionized [electrically charged] particles in the air.”

Electromagnetism isn’t an outlandish theory. Earthquake lights—glowing emissions from a fault that resemble the aurora borealis—have been observed during or before numerous earthquakes, including the 2008 Sichuan earthquake in China, the 2009 L’Aquila earthquake in Italy, the 2017 Mexico City earthquake, and even the September 2023 earthquake in Morocco.

Friedemann Freund, a scientist at NASA’s Ames Research Center, has been studying these lights for decades and attributes them to electrical charges that are activated by motion along the fault in certain types of rocks, such as gabbros and basalts. It is akin to rubbing your sock on the carpet and freeing up electrons that allow you to shock someone. 

Some researchers have proposed different mechanisms, while others discount the idea that earthquake lights are in any way related to earthquakes. Unfortunately, measuring electromagnetic fields in Earth’s crust or surface is not straightforward. We don’t have instruments that can sample large areas of an electromagnetic field. Without knowing in advance where an earthquake will be, it is challenging, if not impossible, to know where to install instruments to make measurements. 

At present, the most effective way to measure such fields in the ground is to set up probes where there is consistent groundwater flow. Some work has been done to look for electromagnetic and ionospheric disturbances caused by seismic and pre-seismic activity in satellite data, though the research is still at a very early stage.

Small movements

Some of science’s biggest paradigm shifts started without any understanding of an underlying mechanism. The idea that continents move, for example—the basic phenomenon at the heart of plate tectonics—was proposed by Alfred Wegener in 1912. His theory was based primarily on the observation that the coastlines of Africa and South America match, as if they would fit together like puzzle pieces. But it was hotly contested. He was missing an essential ingredient that is baked into the ethos of modern science—the why. It wasn’t until the 1960s that the theory of plate tectonics was formalized, after evidence was found of Earth’s crust being created and destroyed, and at last the mechanics of the phenomenon were understood. 

In all those years in between, a growing number of people looked at the problem from different angles. The paradigm was shifting. Wegener had set the wheels of change in motion.

Perhaps that same sort of shift is happening now with earthquake prediction. It may be decades before we can look back on this period in earthquake research with certainty and understand its role in advancing the field. But some, like Johnson, are hopeful. “I do think it could be the beginning of something like the plate tectonics revolution,” he says. “We might be seeing something similar.” 

Allie Hutchison is a writer based in Porto, Portugal.
