Supercomputers illuminate deep Earth: How giant 'blobs' shape our magnetic shield

 
Researchers at the University of Liverpool and their collaborators have achieved a significant advance in understanding Earth's interior, leveraging the capabilities of modern supercomputing. For the first time, they have demonstrated how two massive, ultra-hot formations deep within Earth's mantle affect the creation and long-term dynamics of our planet's magnetic field, using sophisticated numerical models powered by high-performance computing.
 
The Earth’s magnetic field, the invisible shield that protects life from dangerous solar and cosmic radiation, is generated by the turbulent motion of molten iron in the outer core, a process known as the geodynamo. Understanding what governs the geodynamo’s behavior over millions of years requires not only sophisticated palaeomagnetic measurements from ancient rocks but also large-scale, three-dimensional simulations that test how variations deep within the planet affect core dynamics.
 
In their study, the research team combined palaeomagnetic datasets, which record changes in the magnetic field over geological time, with supercomputer-based dynamo simulations to reveal the importance of thermal heterogeneity at the core–mantle boundary. Two continent-sized regions of intensely hot rock, located roughly 2,900 km beneath Africa and the Pacific, sit atop Earth’s outer core and create strong, lateral temperature contrasts that profoundly influence the flow of molten iron below.
 
These “blobs,” known in geophysics as Large Low-Velocity Provinces (LLVPs) because they slow down seismic waves, were already observed by seismic imaging, but their significance for magnetic field generation had been unclear until now. By incorporating the effects of thermal heterogeneity into supercomputer models of the geodynamo, researchers found that these deep mantle structures help explain key features of Earth’s ancient and modern magnetic field, including the persistence of a dominant dipolar structure and subtle longitudinal variations that earlier homogeneous models could not reproduce.
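To make this concrete, the kind of laterally varying boundary condition involved can be sketched in a few lines of Python. The example below is illustrative only: a smooth, degree-two heat-flux pattern with two maxima roughly 180 degrees apart in longitude, loosely mimicking the contrasts imposed above the LLVPs; the amplitude, grid, and peak longitude are placeholder values rather than those used in the study.

```python
import numpy as np

# Illustrative sketch only: a smooth, degree-2 heat-flux pattern with two
# maxima roughly 180 degrees apart in longitude, loosely mimicking the
# lateral contrasts imposed above the LLVPs. The mean flux (q_mean),
# amplitude (epsilon), and peak longitude are placeholder values, not
# those used in the study.
n_lat, n_lon = 90, 180
colat = np.linspace(0.0, np.pi, n_lat)          # colatitude [0, pi]
lon = np.linspace(0.0, 2.0 * np.pi, n_lon)      # longitude  [0, 2*pi)
COLAT, LON = np.meshgrid(colat, lon, indexing="ij")

q_mean = 1.0                   # mean core-mantle boundary heat flux (non-dimensional)
epsilon = 0.3                  # relative amplitude of the lateral variation (assumed)
lon_peak = np.deg2rad(10.0)    # place one maximum near Africa (assumed)

# Real spherical-harmonic (l=2, m=2) shape: sin^2(colatitude) * cos(2*(lon - lon_peak))
q_cmb = q_mean * (1.0 + epsilon * np.sin(COLAT) ** 2 * np.cos(2.0 * (LON - lon_peak)))

print(f"heat-flux range: {q_cmb.min():.2f} to {q_cmb.max():.2f}")
```

In a real dynamo code, a pattern of this kind would be applied as the thermal boundary condition at the top of the simulated core.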
 
Running these simulations is a remarkable computational feat. The equations that govern the interaction of heat, fluid motion, magnetic induction, and rotation in Earth’s core are exceptionally complex and demand high-performance computing (HPC) clusters with massive parallel processing capability. Even with today’s most powerful machines, exploring how the magnetic field evolves over hundreds of millions of years, and how it responds to boundary conditions set by deep mantle structures, represents an immense computational challenge.
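The system being solved can be written compactly. A commonly used non-dimensional Boussinesq form is shown below; conventions and scalings vary between dynamo codes, so the study’s exact formulation may differ, but the coupling of rotation, buoyancy, magnetic forces, and induction is the same.

```latex
% One commonly used non-dimensional Boussinesq formulation of the geodynamo
% problem (scalings vary between codes; the study's exact formulation may
% differ). u: velocity, B: magnetic field, T: temperature, p: pressure;
% E, Ra, Pr, Pm are the Ekman, Rayleigh, Prandtl and magnetic Prandtl numbers.
\begin{align}
E\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  &= -\nabla p - 2\,\hat{\mathbf{z}}\times\mathbf{u}
     + Ra\,\frac{\mathbf{r}}{r_o}\,T
     + \frac{1}{Pm}\,(\nabla\times\mathbf{B})\times\mathbf{B}
     + E\,\nabla^{2}\mathbf{u}, \\
\frac{\partial \mathbf{B}}{\partial t}
  &= \nabla\times(\mathbf{u}\times\mathbf{B}) + \frac{1}{Pm}\,\nabla^{2}\mathbf{B}, \\
\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T
  &= \frac{1}{Pr}\,\nabla^{2} T, \qquad
\nabla\cdot\mathbf{u} = 0, \quad \nabla\cdot\mathbf{B} = 0.
\end{align}
```

The mantle-imposed heterogeneity described above enters through the thermal boundary condition applied to T at the outer edge of the core, which is what makes the lateral pattern of heat flux matter for the dynamo.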
 
According to Professor Andy Biggin of the University of Liverpool, “strong contrasts in the spatial pattern of core–mantle heat flux … have influenced the geodynamo for at least the last few hundred million years.” This implies that Earth’s magnetic field, while often approximated as a simple bar magnet aligned with the rotation axis, has subtle asymmetries imprinted by deep Earth processes that are only now coming into focus thanks to HPC-enabled modeling.
 
The implications of this research extend beyond geomagnetism. By demonstrating how deep mantle thermal structures influence core dynamics, these models offer a new framework for understanding the long-term evolution of the planet, including the connections between internal dynamics and surface phenomena such as continental assembly and breakup, climate shifts, and the formation of mineral resources. More accurate reconstructions of Earth’s ancient magnetic field also serve as essential constraints in palaeogeographical studies and plate-tectonic history.
 
For the supercomputing community, this breakthrough exemplifies how HPC is becoming indispensable to Earth sciences. Supercomputers are not merely accelerators of computation; they are exploratory instruments that allow scientists to build and test virtual Earths, probing regimes that cannot be accessed through direct observation or laboratory experiments. By enabling models that integrate data spanning hundreds of millions of years with high-resolution physics, supercomputers are transforming our understanding of the deep interior of the planet we call home.
 
With the ongoing growth of computational power, driven by larger HPC systems and improved algorithms, researchers can continue to enhance their models, broaden the range of conditions they examine, and incorporate the latest observational data. These advances will deepen our insights into the geodynamo, Earth’s thermal history, and the intricate connections between our planet’s interior and surface life.
 
According to the study’s authors, combining palaeomagnetic records with dynamo simulations introduces a “new means to constrain the properties and time evolution of the core–mantle boundary.” This approach provides a clearer perspective on the forces that have safeguarded life on Earth for millions of years. Thanks to supercomputing, we are now seeing a far more dynamic and interconnected Earth than previously imagined.

Supercomputers unravel the mystery of missing Tatooine-like planets

The idea of planets with twin suns, like Tatooine from Star Wars, has fascinated both scientists and the public for years. Despite binary stars being common throughout our galaxy, planets orbiting two stars (circumbinary planets) remain unexpectedly scarce. Researchers from the University of California, Berkeley, in collaboration with the American University of Beirut, now offer a compelling answer to this puzzle. Their explanation, grounded in Einstein’s general theory of relativity, emerged from sophisticated computational models powered by cutting-edge supercomputing technology.
 
Of the more than 6,000 confirmed exoplanets identified to date, only a handful orbit binary stars, a statistic that stands in stark contrast to expectations, given that stars commonly form in pairs. The team’s analysis shows that as tight binary stars spiral closer over millions of years due to tidal interactions, general relativistic precession, a slow rotation of the orbit’s orientation caused by the warping of spacetime that Einstein predicted, changes the dynamics of the entire system in a way that destabilizes potential planets.
 
Planets in a circumbinary orbit experience gravitational tugs from both stars. Under Newtonian physics alone, this complex interplay is already difficult for planets to navigate. But when the binary stars’ orbit itself begins to precess, that is, when its orientation slowly rotates due to relativistic effects, the system can enter a state of secular resonance. At this point, the precession of the stars’ orbit matches that of the planet’s orbit, steadily pumping up the eccentricity of the planet’s path. Eventually, the planet’s orbit becomes highly elongated and chaotic. It can either be flung outward into interstellar space or drawn inward, where it risks destruction by one of its host stars.
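The relativistic ingredient is a textbook result rather than anything exotic: the apsidal precession rate of the binary’s own orbit, written here in generic notation (not drawn from the paper), is

```latex
% General-relativistic apsidal precession rate of a binary with total mass
% m_1 + m_2, semi-major axis a_b and eccentricity e_b (standard textbook
% expression; notation is generic, not taken from the study).
\dot{\varpi}_{\mathrm{GR}} \;=\; \frac{3\,\bigl[G\,(m_1+m_2)\bigr]^{3/2}}{c^{2}\,a_b^{5/2}\,\bigl(1-e_b^{2}\bigr)}
```

Because this rate grows steeply as the binary separation a_b shrinks, while the planet’s own precession (driven by the binary’s quadrupole field) slows as the binary tightens, tidal inspiral naturally drives the two rates toward one another, which is how the system is swept into the resonance.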
 
This resonant disruption, described in the study “Capture into Apsidal Resonance and the Decimation of Planets around Inspiraling Binaries,” was elucidated through orbit-averaged simulations that explore the dynamical evolution of circumbinary systems under a range of initial conditions. These simulations, computationally intensive and numerically sophisticated, map out the phase space of binary–planet interactions and reveal how frequently planets are captured into destructive resonances as binaries tighten. According to the study, roughly eight out of every ten potential planets in tight binary systems encounter this resonance, and three out of four are eventually destroyed or ejected, leaving behind only a few survivors on distant, hard-to-detect orbits.
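A toy calculation conveys the geometry of that sweep. The sketch below (not the authors’ orbit-averaged code) compares the binary’s relativistic precession rate with a leading-order, coplanar, near-circular estimate of the planet’s quadrupole-driven precession, and locates the separation at which the two cross; the masses and the planet’s orbit are invented placeholder values.

```python
import numpy as np

# Toy sketch (not the paper's orbit-averaged code): compare two secular
# apsidal precession rates as the binary semi-major axis a_b shrinks.
#   - GR precession of the binary's own orbit, ~ a_b^(-5/2)
#   - quadrupole-driven precession of a coplanar, near-circular planet, ~ a_b^2
# The rates cross at some a_b, which is where an apsidal (secular) resonance
# can be encountered. Masses and the planet's orbit are placeholder values.

G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m s^-1
M_sun = 1.989e30         # kg
AU = 1.496e11            # m

m1, m2 = 1.0 * M_sun, 0.8 * M_sun       # assumed binary masses
M = m1 + m2
a_p = 1.0 * AU                          # assumed planet semi-major axis
n_p = np.sqrt(G * M / a_p**3)           # planet mean motion (rad/s)

a_b = np.linspace(0.02, 0.3, 500) * AU  # binary separations to scan

# GR apsidal precession of the (near-circular) binary orbit
prec_gr = 3.0 * (G * M) ** 1.5 / (c**2 * a_b**2.5)

# Leading-order quadrupole-driven precession of the planet's orbit
prec_planet = 0.75 * n_p * (m1 * m2 / M**2) * (a_b / a_p) ** 2

# Locate the crossing, where the two precession rates match
crossing = a_b[np.argmin(np.abs(prec_gr - prec_planet))]
print(f"rates cross near a_b ~ {crossing / AU:.3f} AU")
```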
 
Such high-resolution modeling is intrinsically dependent on supercomputing capabilities. Simulating the long-term evolution of three-body systems, where two stars and a planet influence one another gravitationally, requires solving coupled differential equations with precision over billions of simulated years. Conventional computing alone is insufficient for this scale of calculation; only through HPC systems can researchers explore vast ensembles of scenarios, integrate relativistic effects accurately, and uncover the nuanced mechanisms that shape planetary destinies.
 
For the supercomputing community, this research offers both inspiration and affirmation of the critical role HPC plays in astrophysics. By enabling simulations that incorporate general relativity alongside classical dynamics, supercomputers open windows into processes that cannot be observed directly, but that govern the architecture of planetary systems throughout the galaxy. They allow scientists to test theoretical ideas against virtual models of reality, refining our understanding of how planets form, persist, or perish in the cosmos.
 
The Berkeley-led team clarifies that their findings do not mean binary stars are devoid of planets. Instead, they show that while planets often form around binary stars, most are pushed into orbits that current detection tools, including NASA’s Kepler and TESS, struggle to find. A few planetary survivors may remain, hidden in distant, long-period orbits that will require innovative search techniques to uncover.
 
Looking ahead, researchers plan to apply similar modeling techniques to other astrophysical contexts, such as the environments around pairs of supermassive black holes, to understand how relativistic dynamics influence large-scale cosmic structures. In doing so, they continue to push the boundaries of computational astrophysics, using supercomputers not just as tools for calculation but as engines of discovery in the quest to understand our universe.

ML, supercomputing unite to revolutionize high-power laser optics

Researchers at the University of Strathclyde in Scotland are leveraging advanced computational techniques to transform scientific discovery. By integrating machine learning algorithms with powerful supercomputer models, they have significantly accelerated the design process for robust optical components used in high-power laser systems. This innovative approach not only shortens design cycles but also uncovers new physical phenomena, marking a breakthrough with wide-reaching impacts across science, industry, and emerging technologies.
 
High-power lasers are vital to advancements in nuclear fusion, high-field physics, and advanced manufacturing, but their optical components must endure extreme intensities without failing. Traditional optics are often large, expensive, and challenging to scale, which restricts the development of next-generation laser facilities. To overcome these limitations, Strathclyde’s multidisciplinary team is developing plasma photonic structures: temporary, self-assembled mirrors formed in ionized gas that can fulfill the same roles at a much smaller and more cost-effective scale.
 
The central challenge lies in navigating a highly complex parameter space where interdependent variables determine performance. Traditional design methods involve resource-intensive, trial-and-error iterations that may require hundreds of thousands to millions of individual evaluations before an acceptable design can be identified. By coupling machine learning algorithms with supercomputer-driven physical models, specifically deep kernel Bayesian optimization (DKBO) paired with particle-in-cell (PIC) simulations, researchers have reduced this process to just a few dozen iterations, enabling rapid identification of high-reflectivity, robust plasma mirror designs.
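The essence of that loop can be sketched in a few dozen lines. The example below uses a plain Gaussian-process surrogate as a stand-in for the deep-kernel model, and a cheap analytic function in place of a particle-in-cell run; the parameter names and ranges are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Illustrative surrogate-guided optimization loop: a plain Gaussian-process
# surrogate stands in for the deep-kernel model, and a cheap analytic
# "reflectivity" function stands in for a particle-in-cell simulation.
# Parameter names and ranges are invented for this sketch.

rng = np.random.default_rng(0)

def run_simulation(x):
    """Placeholder for an expensive PIC run: returns a mock reflectivity
    that peaks at a combination of plasma density and modulation depth
    unknown to the optimizer."""
    density, depth = x
    return np.exp(-((density - 0.6) ** 2 + (depth - 0.3) ** 2) / 0.05)

bounds = np.array([[0.0, 1.0],   # normalized plasma density (assumed range)
                   [0.0, 1.0]])  # normalized modulation depth (assumed range)

# Seed the surrogate with a handful of random "simulations".
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
y = np.array([run_simulation(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for it in range(25):                      # a few dozen iterations, as in the study
    gp.fit(X, y)
    # Upper-confidence-bound acquisition over a random candidate pool.
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2000, 2))
    mean, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(mean + 2.0 * std)]
    y_next = run_simulation(x_next)       # the expensive HPC step in reality
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

best = X[np.argmax(y)]
print(f"best design found: density={best[0]:.2f}, depth={best[1]:.2f}, "
      f"reflectivity={y.max():.3f}")
```

The structure is the important part: the surrogate proposes the next candidate, the expensive simulation evaluates it, and the result is fed back to refine the surrogate before the next proposal.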
 
This achievement depends on computationally intensive supercomputer simulations to model the spatio-temporal evolution of transient plasma structures and evaluate performance metrics such as reflectivity and pulse compression. The simulations, executed at high resolution with millions of interacting particles, are inherently demanding and could not be conducted at scale without HPC resources. In fact, the team’s use of national supercomputing services, including the ARCHER2 UK National Supercomputing Service, exemplifies how targeted computational power can transform scientific inquiry.
 
According to lead author Dr. Slavi Ivanov of Strathclyde’s Department of Computer and Information Sciences, the integration of DKBO with particle-in-cell models enables not just faster design optimization but also unexpected discovery. In their work, the optimization framework found regimes where incident laser pulses are compressed by the plasma mirror structure, a phenomenon that emerged from the simulations rather than human intuition, underscoring the capacity of machine-assisted design to reveal new physics.
 
Professor Dino Jaroszynski, co-author and distinguished laser physicist, described the research as an engine of discovery that expands the objectives beyond mere performance targets. “By specifying innovative or unconventional design goals, we can uncover mechanisms that might otherwise remain hidden,” he noted, suggesting that this approach could redefine how optical components are conceived for extreme environments.
 
The implications of this work extend well beyond high-power lasers themselves. The general nature of the machine learning and simulation framework means it can be adapted to other optical elements, from beam splitters to focusing devices, and even to real-time experimental optimization workflows where objective functions are derived from empirical measurements. This flexibility opens new pathways for rapid, HPC-enabled design across photonics, telecommunications, and other advanced technologies.
 
Importantly for the supercomputing community, this research illustrates how machine learning and HPC models can be coupled in powerful synergy. Machine learning provides an intelligent search strategy that dramatically reduces the number of required simulation runs, while the supercomputer executes the high-fidelity physical models necessary to evaluate each candidate design. This integrated loop, where algorithms guide simulations and simulations train algorithms, is becoming a hallmark of contemporary computational science.
 
As high-performance computing infrastructure continues to advance in both capability and accessibility, hybrid approaches such as deep kernel Bayesian optimization are becoming essential tools for addressing complex, multidisciplinary challenges. From the design of next-generation optical components to the discovery of previously unknown physical phenomena, the integration of machine learning with high-fidelity simulation is accelerating innovation and narrowing the gap between theoretical research and practical application.
 
For the supercomputing community, the Strathclyde plasma mirror project illustrates how supercomputing has evolved beyond traditional numerical analysis into a collaborative force in scientific discovery, enabling researchers to navigate vast design spaces, reveal unexpected behaviors, and redefine how technologies are engineered for extreme operating conditions.

Supercomputing reveals hidden galactic architecture around the Milky Way

Leveraging the capabilities of modern high-performance computing (HPC), astronomers have unraveled a cosmic mystery: the Milky Way and its closest neighboring galaxies are embedded in a sprawling sheet of matter that shapes the movement of surrounding galaxies. This breakthrough, featured in Nature Astronomy, was achieved using advanced simulations powered by cutting-edge supercomputers to model the mass distribution and dynamics of our local universe.
 
For decades, cosmologists have grappled with an apparent contradiction in galactic motion. While most galaxies in the universe recede from one another in accord with the expansion described by the Hubble–Lemaître law, our immediate neighborhood, the Local Group comprising the Milky Way, the Andromeda Galaxy, and dozens of dwarf galaxies, exhibits surprisingly coherent motion patterns that ordinary mass distributions failed to explain. The Andromeda Galaxy itself moves toward the Milky Way at about 100 km/s, a phenomenon long understood as gravitational interaction within the Local Group. Yet the behavior of other nearby galaxies did not align with theoretical expectations.
 
Now, an international team led by doctoral researcher Ewoud Wempe and Professor Amina Helmi at the University of Groningen has shown that the key to this puzzle lies not within the confines of the Local Group alone but in an extended, planar mass structure surrounding it. Using sophisticated cosmological simulations constrained by observational data, including the positions, masses, and velocities of 31 galaxies just beyond the Local Group, the researchers demonstrated that the vast majority of dark matter and visible matter in our vicinity is organized in a flat sheet extending tens of millions of light-years. Above and below this planar structure are vast voids with minimal matter.
 
What sets this discovery apart is the critical role of supercomputing in constructing these “virtual twin” universes. The team’s simulations began with initial conditions seeded by early-universe observations and then evolved forward using numerical methods that solve Einstein’s equations of gravity together with fluid dynamics for dark matter and baryonic matter. Such calculations involve millions of interacting elements and demand parallel computation at scale, the exclusive domain of HPC systems. By performing these simulations on powerful supercomputers, astronomers were able to trace the gravitational influence of the large-scale sheet on galaxy motions and verify that this configuration reproduces observed velocities with high fidelity.
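The constrained simulations themselves are far beyond a desktop exercise, but the basic idea of evolving a set of masses forward under gravity from specified initial conditions can be illustrated with a toy Newtonian integrator. Everything below is a placeholder: a few dozen bodies in arbitrary units, with no cosmological expansion, dark-matter physics, or observational constraints.

```python
import numpy as np

# Toy sketch only: evolving a handful of point masses forward in time with a
# leapfrog (kick-drift-kick) integrator. The real constrained simulations
# evolve millions of elements with cosmological expansion and dark matter;
# the masses, positions, and timestep below are arbitrary placeholders.

G = 1.0                      # gravitational constant in code units
soft = 0.05                  # softening length to avoid force singularities

def accelerations(pos, mass):
    """Pairwise softened Newtonian accelerations for all bodies."""
    diff = pos[None, :, :] - pos[:, None, :]          # r_j - r_i
    dist2 = (diff ** 2).sum(-1) + soft ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                     # no self-force
    return G * (diff * inv_d3[:, :, None] * mass[None, :, None]).sum(axis=1)

rng = np.random.default_rng(1)
n = 32
pos = rng.normal(size=(n, 3))        # initial positions (placeholder)
vel = 0.1 * rng.normal(size=(n, 3))  # initial velocities (placeholder)
mass = np.full(n, 1.0 / n)

dt, steps = 0.01, 1000
acc = accelerations(pos, mass)
for _ in range(steps):               # kick-drift-kick leapfrog
    vel += 0.5 * dt * acc
    pos += dt * vel
    acc = accelerations(pos, mass)
    vel += 0.5 * dt * acc

print("final centre of mass:", (mass[:, None] * pos).sum(0) / mass.sum())
```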
 
According to Helmi, this marks the first time that the distribution and velocity field of dark matter in the region surrounding our galactic neighborhood have been quantitatively constrained in a manner consistent with both ΛCDM cosmology and observed local dynamics. “Astronomers have been trying to solve this problem for decades,” Helmi said. “It is extraordinary that, based purely on the motions of galaxies, we can infer a mass distribution that matches the observed positions and motions of galaxies within and just outside the Local Group.”
 
For the supercomputing community, this achievement is profoundly inspirational. It highlights how modern HPC infrastructures, with their massive parallelism, high memory bandwidth, and optimized numerical libraries, are enabling scientists to probe cosmic questions that were once deemed intractable. These simulations not only illuminate the hidden architecture of our cosmic neighborhood but also exemplify how simulation-based science complements observation, allowing researchers to explore scenarios that cannot be directly imaged or measured.
 
Beyond resolving a decades-old enigma in galactic astronomy, this work reinforces the broader scientific view that large-scale structures, from filaments of the cosmic web to planar mass configurations like the one now identified around the Milky Way, are fundamental to understanding the universe’s evolution. Supercomputers are not just tools for speeding up calculations; they are essential engines of discovery that empower scientists to simulate the universe with realism and precision.
 
As supercomputing technology advances, both in terms of hardware and algorithms, scientists are poised to create increasingly detailed “virtual universes.” These sophisticated simulations will not only put our cosmological models to the test but also inform future telescope and space mission observations, leading to a richer understanding of our cosmic context.
 
According to the study’s authors, uncovering the influence of the Local Sheet on galactic motion is more than just resolving a persistent mystery; it demonstrates the remarkable discoveries possible when computational power, observational insights, and scientific curiosity are combined on a cosmic scale.

Supercomputing drives materials breakthrough for green computing: 3D graphene-like electronic behavior unlocks new low-energy electronics

Marking a major advance in sustainable computing, researchers at the University of Liverpool have developed a groundbreaking three-dimensional material that mirrors the remarkable electronic properties of two-dimensional graphene, while offering the durability needed for practical use.
 
Detailed in the journal Matter, this innovation holds the potential to enable greener, more energy-efficient electronics and underscores the essential role of supercomputing in discovering and designing new materials, a development that could reshape the landscape of high-performance and low-power computing.
 
Graphene is a single layer of carbon atoms organized in a honeycomb pattern. This material has fascinated scientists and engineers due to its exceptional electrical, thermal, and mechanical characteristics. Electrons in graphene act like massless Dirac fermions, which allows for extremely fast electron movement with minimal energy loss. Despite these impressive qualities, applying graphene's unique properties to practical, large-scale devices has faced persistent obstacles: its ultra-thin structure is fragile, hard to incorporate into bulk technologies, and expensive to manufacture at scale.
 
The new study addresses this by demonstrating that hafnium tin, HfSn₂, a fully three-dimensional crystal, can mimic graphene’s fast, two-dimensional electron flow. In the HfSn₂ structure, honeycomb layers are arranged in a special chiral stacking pattern that preserves the signature electronic behavior of graphene, specifically, high electron mobility with low energy dissipation, despite the material being fully 3D. This electronic behavior is associated with Weyl points in the material’s band structure, points where conduction and valence bands touch, allowing electrons to move with minimal resistance.
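The “graphene-like” behavior referred to here is the standard low-energy picture near a Dirac or Weyl point, which can be written in generic form (a textbook expression, not specific to HfSn₂):

```latex
% Generic low-energy Hamiltonian near a Dirac/Weyl band-touching point
% (textbook form, not taken from the study). sigma are Pauli matrices,
% k is momentum measured from the touching point, v_F is the Fermi velocity.
H(\mathbf{k}) \;=\; \hbar\, v_F\, \boldsymbol{\sigma}\cdot\mathbf{k},
\qquad
E_{\pm}(\mathbf{k}) \;=\; \pm\,\hbar\, v_F\,\lvert \mathbf{k} \rvert
```

The linear, cone-like dispersion is what makes the carriers behave as if they were massless; in a 3D Weyl material this linear behavior extends along all three momentum directions rather than being confined to a single atomic sheet.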
 
These insights emerged from a combination of theoretical modeling, crystallographic simulations, and experimental characterization, and could not have been realized without high-performance computational tools. Supercomputers enable researchers to explore how atomic arrangement, chemical bonding, and quantum mechanical effects interplay across multiple length scales, from electrons to crystals, and to identify Weyl electronic states and transport properties that are inaccessible to simpler computational methods.
 
In particular, density functional theory (DFT) and related ab initio simulation frameworks, inherently computationally intensive, were crucial in predicting how electrons behave within the 3D honeycomb lattice and how different stacking arrangements influence transport. These simulations, typically run on supercomputing clusters equipped with optimized parallel solvers and high memory bandwidth, allow researchers to map out electronic band structures and isolate topological features such as Weyl points with high precision. Without this scale of computation, evaluating the energetic and structural feasibility of such new materials would be prohibitively slow and less reliable.
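For readers who want a feel for where such band touchings come from, the minimal example is the nearest-neighbour tight-binding model of a single honeycomb layer, sketched below in Python. This toy model is a stand-in for intuition only; the study’s results rest on full DFT calculations of the 3D HfSn₂ crystal, and the hopping energy and bond length used here are generic graphene-like values.

```python
import numpy as np

# Minimal nearest-neighbour tight-binding sketch of a single honeycomb
# (graphene-like) layer, used only to illustrate the band-touching "Dirac
# cone" picture; the study's results come from full DFT calculations of the
# 3D HfSn2 crystal, not from this toy model. The hopping energy t and bond
# length a are generic placeholder values.
t = 2.7          # nearest-neighbour hopping (eV, typical graphene value)
a = 1.42e-10     # nearest-neighbour bond length (m)

a1 = np.array([1.5, np.sqrt(3) / 2]) * a      # honeycomb lattice vectors
a2 = np.array([1.5, -np.sqrt(3) / 2]) * a

def bands(kx, ky):
    """Two tight-binding bands E = +/- t|f(k)| of the honeycomb lattice."""
    f = 1 + np.exp(1j * (kx * a1[0] + ky * a1[1])) \
          + np.exp(1j * (kx * a2[0] + ky * a2[1]))
    return t * np.abs(f), -t * np.abs(f)

# The K point, where the two bands touch and the dispersion becomes linear
K = np.array([2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a)])
e_plus, e_minus = bands(*K)
print(f"gap at K point: {e_plus - e_minus:.3e} eV (should be ~0)")

# Linear slope near K sets the Fermi-velocity scale: E ~ (3/2) t a |k - K|
dk = 1e7  # small momentum offset (1/m)
e_near, _ = bands(K[0] + dk, K[1])
print(f"slope near K: {e_near / dk:.3e} eV*m  vs  (3/2)*t*a = {1.5 * t * a:.3e} eV*m")
```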
 
The ability to use supercomputer-driven simulations to screen candidate materials accelerates the discovery process dramatically. Instead of relying solely on costly and time-consuming experimental synthesis of countless samples, researchers can now refine materials candidates through in silico modeling, identifying promising structures that combine desired electronic properties with robustness and environmental resilience.
 
Why does this matter for the green computing agenda? Modern computing systems, from mobile devices to data centers, consume vast amounts of energy. Next-generation logic and spintronic devices (which exploit electron spin as well as charge) require materials that combine low-energy electronic transport with stability under operational conditions. A 3D material that mimics graphene’s electron transport while being easier to integrate into conventional device architectures could lead to significantly lower energy consumption in future information processing and memory technologies, directly addressing sustainability challenges in both artificial intelligence and high-performance computing sectors.
 
Moreover, supercomputing plays a central role beyond discovery; it enables multiscale modeling that connects atomic-scale electronic behavior with device-level performance predictions. By integrating quantum mechanical simulations with larger-scale finite-element and mesoscopic models, researchers can assess how new materials will behave under real operational loads, including temperature variation, stress, and electron-phonon interactions, before ever fabricating a prototype.
 
The discovery of HfSn₂ highlights a compelling convergence of materials science, quantum physics, and high-performance computing. Together, these disciplines are enabling new approaches to energy-efficient electronics. As researchers increasingly rely on supercomputing resources to navigate complex materials landscapes, the pace of breakthroughs aimed at reducing the environmental footprint of computing is expected to accelerate, pointing toward a more sustainable and environmentally responsible digital infrastructure.