Revolutionizing Disaster Prevention: New Earthquake Prediction Model Unveiled by Scientists

[Image: Earthquake rubble]

An earthquake is a sudden, intense shaking of the ground caused by the movement of tectonic plates or by volcanic activity. Earthquakes can occur anywhere in the world, causing significant damage to buildings and infrastructure and significant loss of life. Seismologists study earthquakes to understand their causes and to anticipate future events, but predicting the exact timing and location of an earthquake remains a challenge.

A new earthquake model developed at Northwestern University considers the full history of a fault’s earthquakes to better forecast the next one.

Northwestern University researchers have published a study that could help solve one of seismology’s main challenges — predicting when the next big earthquake will occur on a fault.

Seismologists traditionally believed that large earthquakes on faults occur at roughly regular intervals, with the time to the next quake about equal to the time between the previous two. However, the Earth doesn’t always comply: earthquakes can occur sooner or later than expected. Until now, seismologists lacked a way to explain this unpredictability.

Now they do. The Northwestern research team of seismologists and statisticians has developed an earthquake probability model that is more comprehensive and realistic than what is currently available. Instead of just using the average time between past earthquakes to forecast the next one, the new model considers the specific order and timing of previous earthquakes. It helps explain the puzzling fact that earthquakes sometimes come in clusters — groups with relatively short times between them, separated by longer times without earthquakes.

“Considering the full earthquake history, rather than just the average over time and the time since the last one, will help us a lot in forecasting when future earthquakes will happen,” said Seth Stein, William Deering Professor of Earth and Planetary Sciences in the Weinberg College of Arts and Sciences. “When you’re trying to figure out a team’s chances of winning a ball game, you don’t want to look only at the last game and the long-term average. Looking back over additional recent games can also be helpful. We now can do a similar thing for earthquakes.”

The study was published recently in the Bulletin of the Seismological Society of America. The authors of the study are Stein, Northwestern professor Bruce D. Spencer and recent Ph.D. graduates James S. Neely and Leah Salditch. Stein is a faculty associate of Northwestern’s Institute for Policy Research (IPR), and Spencer is an IPR faculty fellow.

“Earthquakes behave like an unreliable bus,” said Neely, now at the University of Chicago. “The bus might be scheduled to arrive every 30 minutes, but sometimes it’s very late, and other times it’s too early. Seismologists have assumed that even when a quake is late, the next one is no more likely to arrive early. Instead, in our model, if it’s late, it’s now more likely to come soon. And the later the bus is, the sooner the next one will come after it.”

Traditional model and new model

The traditional model, used since a large earthquake in 1906 destroyed San Francisco, assumes that slow motions across the fault build up strain, all of which is released in a big earthquake. In other words, a fault has only short-term memory — it “remembers” only the last earthquake and has “forgotten” all the previous ones. This assumption goes into forecasting when future earthquakes will happen and then into hazard maps that predict the level of shaking for which earthquake-resistant buildings should be designed.

However, “Large earthquakes don’t occur like clockwork,” Neely said. “Sometimes we see several large earthquakes occur over relatively short time frames and then long periods when nothing happens. The traditional models can’t handle this behavior.”

In contrast, the new model assumes that earthquake faults are smarter, with longer-term memory, than seismologists have assumed. The long-term fault memory arises because an earthquake sometimes does not release all the strain that has built up on the fault over time, so some strain remains after a big earthquake and can help cause another. This explains why earthquakes sometimes come in clusters.

“Earthquake clusters imply that faults have long-term memory,” said Salditch, now at the U.S. Geological Survey. “If it’s been a long time since a large earthquake, then even after another happens, the fault’s ‘memory’ sometimes isn’t erased by the earthquake, leaving left-over strain and an increased chance of having another. Our new model calculates earthquake probabilities this way.”
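The leftover-strain idea can be illustrated with a toy simulation. The sketch below is not the published model; it simply assumes (for illustration) that strain accumulates at a steady rate, that each quake fires at a randomly drawn strain threshold, and that a quake releases only a fraction of the accumulated strain. Setting that fraction to 1.0 mimics the traditional full-release, short-memory picture; anything less leaves leftover strain that shortens the wait for the next quake.

```python
import random

def simulate_quakes(n_events, mean_interval=135.0, release_fraction=1.0, seed=1):
    """Toy strain-budget simulation (illustrative only, not the published model).

    Strain accumulates at one unit per year. A quake fires when strain
    reaches a randomly drawn threshold, then releases `release_fraction`
    of it. release_fraction=1.0 mimics the traditional full-release,
    short-memory picture; values below 1 leave leftover strain, so the
    next quake tends to arrive sooner.
    """
    rng = random.Random(seed)
    strain, t, times = 0.0, 0.0, []
    for _ in range(n_events):
        threshold = rng.uniform(0.5, 1.5) * mean_interval  # failure strain
        wait = max(threshold - strain, 0.0)  # leftover strain shortens the wait
        t += wait
        strain = threshold * (1.0 - release_fraction)  # partial release leaves a remainder
        times.append(t)
    return times

full = simulate_quakes(2000, release_fraction=1.0)     # short-memory fault
partial = simulate_quakes(2000, release_fraction=0.7)  # long-memory fault
```

In runs like this, the intervals between quakes in the partial-release case are shorter on average and more irregular relative to their mean, producing the mix of tight clusters and long quiet stretches described above.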

For example, although large earthquakes on the Mojave section of the San Andreas fault occur on average every 135 years, the most recent one occurred in 1857, only 45 years after one in 1812. Although this wouldn’t have been expected using the traditional model, the new model shows that because the 1812 earthquake occurred after a 304-year gap since the previous earthquake in 1508, the leftover strain caused a sooner-than-average quake in 1857.
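The dates in that example can be checked with simple arithmetic. The "leftover" figure below rests on an illustrative assumption not stated in the article: that a quake relieves roughly one average interval's worth (135 years) of accumulated strain.

```python
avg_interval = 135               # years: average recurrence, Mojave section
gap_before_1812 = 1812 - 1508    # 304-year quiet period before the 1812 quake
gap_before_1857 = 1857 - 1812    # only 45 years before the 1857 quake

# Illustrative assumption: the 1812 quake relieved about one average
# interval's worth of the 304 years of accumulated strain, leaving the
# remainder to hasten the next quake.
leftover = gap_before_1812 - avg_interval
print(gap_before_1812, gap_before_1857, leftover)  # 304 45 169
```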

“It makes sense that the specific order and timing of past earthquakes matters,” said Spencer, a professor of statistics. “Many systems’ behavior depends on their history over a long time. For example, your risk of spraining an ankle depends not just on the last sprain you had, but also on previous ones.”

Reference: “A More Realistic Earthquake Probability Model Using Long‐Term Fault Memory” by James S. Neely, Leah Salditch, Bruce D. Spencer and Seth Stein, 27 December 2022, Bulletin of the Seismological Society of America.
DOI: 10.1785/0120220083
