
Artificial intelligence has revealed new evidence that rapid decarbonization alone will not prevent global warming from exceeding 1.5 degrees Celsius, and the hottest years of this century are poised to break recent temperature records.
Researchers have determined that the global target of limiting warming to 1.5 degrees Celsius above pre-industrial levels is now virtually unattainable.
A study published on December 10 in Geophysical Research Letters indicates that the coming years are likely to set unprecedented heat records. The researchers estimate a 50% probability that global temperatures will exceed 2 degrees Celsius, even if current goals to achieve net-zero greenhouse gas emissions by the 2050s are met.
While past studies, including key reports from the Intergovernmental Panel on Climate Change (IPCC), have suggested that decarbonization at this rate could keep warming below 2 degrees, the new findings challenge this optimism, underscoring the difficulty of meeting climate targets.
“We’ve been seeing accelerating impacts around the world in recent years, from heatwaves and heavy rainfall and other extremes. This study suggests that, even in the best-case scenario, we are very likely to experience conditions that are more severe than what we’ve been dealing with recently,” said Stanford Doerr School of Sustainability climate scientist Noah Diffenbaugh, who co-authored the study with Colorado State University climate scientist Elizabeth Barnes.
This year is set to beat 2023 as Earth’s hottest year on record, with global average temperatures expected to exceed 1.5 degrees Celsius (nearly 2.7 degrees Fahrenheit) above the pre-industrial baseline, the period before people began burning fossil fuels widely to power industry. According to the new study, there is a nine-in-ten chance that the hottest year this century will be at least half a degree Celsius hotter still, even under rapid decarbonization.
Using AI to refine climate projections
For the new study, Diffenbaugh and Barnes trained an AI system to predict how high global temperatures could climb, depending on the pace of decarbonization.
When training the AI, the researchers used temperature and greenhouse gas data from vast archives of climate model simulations. To predict future warming, however, they gave the AI the actual historical temperatures as input, along with several widely used scenarios for future greenhouse gas emissions.
“AI is emerging as an incredibly powerful tool for reducing uncertainty in future projections. It learns from the many climate model simulations that already exist, but its predictions are then further refined by real-world observations,” said Barnes, who is a professor of atmospheric science at Colorado State.
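The training-then-constraining setup described above can be sketched in a few lines. This is a hypothetical illustration only: the data below are synthetic, and an ordinary least-squares fit stands in for the neural network used in the study. The idea is the same, though: fit a statistical model on archived climate-model simulations, then feed it an observed temperature history instead of a simulated one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: simulated 30-year global temperature anomaly
# trajectories (features) paired with each simulation's eventual peak
# warming (target). All numbers are illustrative, not from the study.
n_sims, n_years = 500, 30
trends = rng.uniform(0.01, 0.04, n_sims)              # deg C per year
noise = rng.normal(0.0, 0.1, (n_sims, n_years))
X = trends[:, None] * np.arange(n_years) + noise      # simulated histories
y = trends * 80 + rng.normal(0.0, 0.2, n_sims)        # "peak warming" target

# Fit ordinary least squares (a stand-in for the study's neural network).
Xb = np.hstack([X, np.ones((n_sims, 1))])             # add bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# Prediction step: supply an "observed" temperature history rather than a
# simulated one, mirroring how the study refines projections with data.
observed = 0.02 * np.arange(n_years) + rng.normal(0.0, 0.1, n_years)
peak_estimate = np.hstack([observed, 1.0]) @ w
print(f"predicted peak warming: {peak_estimate:.2f} deg C")
```

The design point is the mismatch between training and prediction inputs: the model learns the mapping from temperature history to peak warming across many simulations, and the real-world observations then select which part of that learned mapping applies to our actual climate.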
The study adds to a growing body of research indicating that the world has almost certainly missed its chance to achieve the more ambitious goal of the 2015 Paris Climate Agreement, in which nearly 200 nations pledged to keep long-term warming “well below” 2 degrees while pursuing efforts to avoid 1.5 degrees.
A second new paper from Barnes and Diffenbaugh, published Dec. 10 in Environmental Research Letters with co-author Sonia Seneviratne of ETH Zurich, suggests that many regions, including South Asia, the Mediterranean, Central Europe, and parts of sub-Saharan Africa, will surpass 3 degrees Celsius of warming by 2060 in a scenario in which emissions continue to increase – sooner than anticipated in earlier studies.
Extremes matter
Both new studies build on 2023 research in which Diffenbaugh and Barnes predicted the years remaining until the 1.5 and 2 degrees Celsius goals are breached. But because these thresholds are based on average conditions over many years, they don’t tell the full story of how extreme the climate could become.
“As we watched these severe impacts year after year, we became more and more interested in predicting how extreme the climate could get even if the world is fully successful at rapidly reducing emissions,” said Diffenbaugh, the Kara J Foundation Professor and Kimmelman Family Senior Fellow at Stanford.
For a scenario in which emissions reach net-zero in the 2050s – the most optimistic scenario widely used in climate modeling – the researchers found a nine-in-ten chance that the hottest year this century will be at least 1.8 degrees Celsius hotter globally than the pre-industrial baseline, with a two-in-three chance for at least 2.1 degrees Celsius.
For a scenario in which emissions decline too slowly to reach net-zero by 2100, Diffenbaugh and Barnes found a nine-in-ten chance that the hottest year will be 3 degrees Celsius hotter globally than the pre-industrial baseline. In this scenario, many regions could experience temperature anomalies at least triple what occurred in 2023.
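The odds quoted above ("nine-in-ten," "two-in-three") are exceedance probabilities over a distribution of predicted hottest-year temperatures. A minimal sketch of that calculation, assuming a normal distribution whose mean and spread are purely illustrative (not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble of predicted hottest-year temperatures, in deg C
# above the pre-industrial baseline. The loc/scale values are invented
# for illustration and chosen so the odds resemble those in the article.
predictions = rng.normal(loc=2.3, scale=0.35, size=10_000)

# Exceedance probability = fraction of the ensemble at or above a threshold.
p_exceed_1p8 = np.mean(predictions >= 1.8)
p_exceed_2p1 = np.mean(predictions >= 2.1)
print(f"P(hottest year >= 1.8 deg C): {p_exceed_1p8:.2f}")
print(f"P(hottest year >= 2.1 deg C): {p_exceed_2p1:.2f}")
```

With these assumed parameters the two probabilities land near 0.9 and 0.7, which is how a single predictive distribution can yield both a "nine-in-ten" and a "two-in-three" statement at different thresholds.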
Investing in adaptation
The new predictions underline the importance of investing not only in decarbonization but also in measures to make human and natural systems more resilient to severe heat, intensified drought, heavy precipitation, and other consequences of continued warming. Historically, those efforts have taken a back seat to reducing carbon emissions, with decarbonization investments outstripping adaptation spending in global climate finance and policies such as the 2022 Inflation Reduction Act.
“Our results suggest that even if all the effort and investment in decarbonization is as successful as possible, there is a real risk that, without commensurate investments in adaptation, people and ecosystems will be exposed to climate conditions that are much more extreme than what they are currently prepared for,” Diffenbaugh said.
References: “Data-Driven Predictions of Peak Warming Under Rapid Decarbonization” by Noah S. Diffenbaugh and Elizabeth A. Barnes, 10 December 2024, Geophysical Research Letters.
DOI: 10.1029/2024GL111832
“Combining climate models and observations to predict the time remaining until regional warming thresholds are reached” by Elizabeth A. Barnes, Noah S. Diffenbaugh and Sonia I. Seneviratne, 10 December 2024, Environmental Research Letters.
DOI: 10.1088/1748-9326/ad91ca
The Geophysical Research Letters study was supported by Stanford University and the Regional and Global Model Analysis program area of the U.S. DOE Office of Biological and Environmental Research as part of the Program for Model Diagnosis and Intercomparison.
The Environmental Research Letters study was supported by Stanford University, the European Union’s Horizon 2020 and Horizon Europe programs, the Swiss State Secretariat for Education, Research and Innovation (SERI), and the Stanford Woods Institute for the Environment.
4 Comments
The problem with depending on so-called artificial intelligence (which, in the context of this article, usually means neural networks, or NNs) lies in the feedback loop. When an NN renders a picture incorrectly, or designs a physical prototype that can be manufactured quickly, there is almost immediate feedback as to whether the job has been done properly. If a prototype fails to do what is expected, the NN can be retrained and another design produced quickly. That matters, because it is difficult to impossible to understand WHY the NN did what it did.
However, in the case of forecasting future climate, it may be decades before there is enough stochastic observational data to realize that the forecast was, is, and will continue to be wrong. One might rationalize the problems by claiming that the forecast just needs to be improved. Unfortunately, it isn’t a matter of just improving the accuracy or precision. Sometimes the output is simply wrong! And there may be decades of investment that will never accomplish what was promised.
I have spent some time working with the recently introduced AI Large Language Models such as ChatGPT and Copilot, asking them questions about climatology. Typically, the first response(s) sound like boilerplate marketing, and I know that they are wrong. The first time this happened and I challenged the claim, I expected an argument. Instead, it came back with an apology, admitted it was wrong, and revised its claims. Unfortunately, it wasn’t long before it made another mistake, which I challenged. This went on until I had backed it into a ‘logical’ corner, at which point it started contradicting its previous admissions of mistakes and got stuck in a loop, without further progress.
There are two points to be made from this exercise: 1) If users are naive and know little about climatology, they will likely go away thinking they have ‘learned’ something about climatology. This is likely the way an NN climate forecast will be handled, as evidenced in this article; 2) The training of an NN, and of all AI, depends on what is available in the published literature. If the published literature is wrong, it is almost certain that the NN’s conclusions will be wrong. It is possible that, by some quirk, it might come to the right conclusion. However, coming to the right conclusion for the wrong reason is the greatest ‘sin’ in science, because it was just sheer luck and can’t be depended on to always be right.
One solution to this problem is for AI researchers to invent a way to understand how and why the NN reached its conclusion, and to formalize that into a testable hypothesis that doesn’t require years to validate. A validation approach also has to be found to compare, and perhaps weight, the value of contradictory research results, much as in-depth chess programs do. I don’t think that the Stanford work is ready for prime time yet, and unknowingly doing the wrong thing can result in more damage than just remediating the obvious trends.
AI will soon become an environmental disaster in its own right.
As for its scientific value in climatology, it’s just gonna make sh!t up faster than the existing “climate models”.
We’re not gonna burn.
Another phony climate scare article. The climate is fine – the dinosaurs got along with a far hotter climate for 600,000,000 years – lush jungles, a green Sahara with waterfalls, birds flying, rainbows – while we’ve only been here for a couple hundred thousand years, marked mostly by stupid people yelling made-up stories at each other – just like now – with, by the way, lots of money to be had from the global alarmists’ drive to take over the world.
But it turns out, every one of you is smarter than all the groups yelling – because they’re brainless (call it group brainlessness) – where YOU have a marvelous ‘super-computer’ in your head that, if you trust it, will outperform all the groups quoting dumb stuff someone told them to think.
Try it.
I don’t mind. The heating bills are through the roof anyway.