
The world’s largest simulation of the cosmos lays a new computational foundation for studying dark matter and astrophysics together at extreme scale.
Researchers used the Frontier supercomputer to conduct the largest astrophysical simulation to date, modeling both atomic matter and dark matter across universe-scale volumes. The work was made possible by advances in HACC, a code developed to run on exascale-class supercomputers, which perform more than a quintillion calculations per second. This breakthrough in cosmological hydrodynamics simulations will help match observational data with theoretical models.
Universe Simulation Breakthrough
The universe just expanded—at least in the realm of computer simulations.
Earlier this month, researchers at the Department of Energy’s Argonne National Laboratory harnessed the power of the world’s fastest supercomputer to execute the largest astrophysical simulation of the universe ever achieved.
This groundbreaking simulation was made possible by the Frontier supercomputer at Oak Ridge National Laboratory. The calculations set a new standard for cosmological hydrodynamics, offering a pioneering approach to modeling both atomic matter and dark matter simultaneously. The simulation’s scale matches that of large telescope surveys, a capability that was previously out of reach at this magnitude.

Advanced Cosmological Simulations at Exascale
“There are two components in the universe: dark matter — which, as far as we know, only interacts gravitationally — and conventional matter, or atomic matter,” said project lead Salman Habib, division director for Computational Sciences at Argonne.
“So, if we want to know what the universe is up to, we need to simulate both of these things: gravity as well as all the other physics including hot gas, and the formation of stars, black holes and galaxies,” he said. “The astrophysical ‘kitchen sink’ so to speak. These simulations are what we call cosmological hydrodynamics simulations.”
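Habib’s two-component picture maps directly onto how such codes are built. The sketch below is purely illustrative and is not HACC’s actual implementation (HACC uses particle-mesh and short-range solvers on GPUs); every name, constant and parameter in it is a hypothetical placeholder. It shows the core of a gravity-only run — pushing collisionless particles with gravitational kicks — which is the part a “kitchen sink” hydrodynamics run would have to extend with pressure forces, cooling, star formation and black-hole feedback inside the same timestep loop.

```python
# Minimal, illustrative N-body sketch (NOT the HACC algorithm): direct-summation
# gravity with a leapfrog (kick-drift-kick) integrator. A gravity-only cosmology
# code evolves dark matter particles roughly like this; a cosmological
# hydrodynamics code must additionally evolve gas (pressure forces, cooling,
# star formation, black-hole feedback), which is what makes it far more expensive.
import numpy as np

G = 1.0           # gravitational constant in arbitrary code units (illustrative)
SOFTENING = 0.05  # force softening to avoid singularities at small separations

def accelerations(pos, mass):
    """Direct O(N^2) gravitational accelerations; production codes instead use
    particle-mesh plus short-range methods to scale to trillions of particles."""
    dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]      # pairwise separation vectors
    r2 = np.sum(dx**2, axis=-1) + SOFTENING**2              # softened squared distances
    inv_r3 = r2**-1.5
    np.fill_diagonal(inv_r3, 0.0)                           # no self-force
    return G * np.einsum('ijk,ij,j->ik', dx, inv_r3, mass)  # sum over source particles

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick step; gas physics would add pressure-gradient 'kicks'
    and thermodynamic updates at this point in a hydrodynamics code."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    pos = pos + dt * vel                             # drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    return pos, vel

# Tiny demo: 256 particles in a unit box, evolved for a few steps.
rng = np.random.default_rng(0)
pos = rng.random((256, 3))
vel = np.zeros((256, 3))
mass = np.full(256, 1.0 / 256)
for _ in range(10):
    pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
print("center of mass:", pos.mean(axis=0))
```

Even this toy version hints at the cost: the gravity solve is common to both kinds of run, but each additional piece of “kitchen sink” physics adds work inside every timestep, which is why the hydrodynamics simulations described below are so much harder to carry out.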
Overcoming Computational Challenges
Not surprisingly, cosmological hydrodynamics simulations are significantly more computationally expensive and much more difficult to carry out than simulations of an expanding universe that involve only the effects of gravity.
“For example, if we were to simulate a large chunk of the universe surveyed by one of the big telescopes such as the Rubin Observatory in Chile, you’re talking about looking at huge chunks of time — billions of years of expansion,” Habib said. “Until recently, we couldn’t even imagine doing such a large simulation like that except in the gravity-only approximation.”

Leveraging High-Performance Computing
The supercomputer code used in the simulation is called HACC, short for Hardware/Hybrid Accelerated Cosmology Code. It was developed around 15 years ago for petascale machines. In 2012 and 2013, HACC was a finalist for the Association for Computing Machinery’s Gordon Bell Prize in computing.
Later, HACC was significantly upgraded as part of ExaSky, a special project led by Habib within the Exascale Computing Project, or ECP — a $1.8 billion DOE initiative that ran from 2016 to 2024. The project brought together thousands of experts to develop advanced scientific applications and software tools for the upcoming wave of exascale-class supercomputers capable of performing more than a quintillion, or a billion-billion, calculations per second.
As part of ExaSky, the HACC research team spent the last seven years adding new capabilities to the code and re-optimizing it to run on exascale machines powered by GPU accelerators. A requirement of the ECP was for codes to run approximately 50 times faster than they could before on Titan, the fastest supercomputer at the time of the ECP’s launch. Running on the exascale-class Frontier supercomputer, HACC was nearly 300 times faster than the reference run.
Exascale Achievements in Universe Modeling
The novel simulations achieved their record-breaking performance by using approximately 9,000 of Frontier’s compute nodes, powered by AMD Instinct™ MI250X GPUs. Frontier is located at ORNL’s Oak Ridge Leadership Computing Facility, or OLCF.
“It’s not only the sheer size of the physical domain, which is necessary to make direct comparison to modern survey observations enabled by exascale computing,” said Bronson Messer, OLCF director of science. “It’s also the added physical realism of including the baryons and all the other dynamic physics that makes this simulation a true tour de force for Frontier.”
In addition to Habib, the HACC team members involved in this achievement and in the simulations building up to the Frontier work include Michael Buehlmann, JD Emberson, Katrin Heitmann, Patricia Larsen, Adrian Pope, Esteban Rangel and Nicholas Frontiere, who led the Frontier simulations.
Prior to runs on Frontier, parameter scans for HACC were conducted on the Perlmutter supercomputer at the National Energy Research Scientific Computing Center, or NERSC, at Lawrence Berkeley National Laboratory. HACC was also run at scale on the exascale-class Aurora supercomputer at Argonne Leadership Computing Facility, or ALCF.
In early November 2024, researchers at the Department of Energy’s Argonne National Laboratory used Frontier, the fastest supercomputer on the planet, to run the largest astrophysical simulation of the universe ever conducted. This movie shows the formation of the largest object in the Frontier-E simulation. The left panel shows a 64x64x76 Mpc/h subvolume of the simulation (roughly 1e-5 of the full simulation volume) around the large object, with the right panel providing a closer look. In each panel, the gas density field is shown colored by its temperature. In the right panel, the white circles show star particles and the open black circles show AGN particles. Credit: Argonne National Laboratory, U.S. Dept. of Energy
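As a rough back-of-envelope check of the caption’s numbers only (assuming, hypothetically, that the full simulation volume is a cube; the actual box size is not stated here), the quoted subvolume and fraction imply a full box a few gigaparsecs per side:

```python
# Back-of-envelope check of the caption's numbers (assumes a cubic full volume).
sub_volume = 64 * 64 * 76            # (Mpc/h)^3, the subvolume shown in the left panel
fraction = 1e-5                      # caption: roughly 1e-5 of the full simulation volume
full_volume = sub_volume / fraction  # ~3.1e10 (Mpc/h)^3 implied total volume
box_side = full_volume ** (1 / 3)    # ~3,100 Mpc/h, i.e. roughly 3 Gpc/h on a side
print(f"implied full box: ~{box_side:.0f} Mpc/h per side")
```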
OLCF, ALCF and NERSC are DOE Office of Science user facilities.
This research was supported by the ECP and the Advanced Scientific Computing Research and High Energy Physics programs under the DOE Office of Science.