Scholars harness supercomputers to peer inside black holes, through code, not telescopes

A team of computational astrophysicists has broken new ground, using the planet's most powerful supercomputers to simulate, in full fidelity, how matter spirals into a black hole and lights up in a blaze of radiation. Their results, published on December 3, 2025, by the Institute for Advanced Study (IAS) and the Flatiron Institute, deliver what may be the most detailed, realistic model yet of "luminous black hole accretion."

From Toy Models to Full-Blown Virtual Realities

For decades, astrophysicists have studied black hole accretion, the process by which gas, dust, and other matter fall into black holes, using simplified models. These toy-model approximations treated radiation as if it were a fluid, glossing over the real physics of how light moves through warped spacetime around a black hole.
 
Thanks to a new computational algorithm coupled with access to exascale-class supercomputers, namely Frontier at Oak Ridge and Aurora at Argonne, the researchers directly solved the full radiation-transport equations under general relativity, without simplifying assumptions.
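For a sense of what "full radiation transport" means: in flat space, the specific intensity I_ν of light along a ray obeys the radiative transfer equation (shown schematically below; the general-relativistic version traces rays along null geodesics of the curved spacetime and is far more involved):

```latex
\frac{\mathrm{d} I_\nu}{\mathrm{d} s} = j_\nu - \alpha_\nu I_\nu
```

where j_ν is the gas's emissivity and α_ν its absorption coefficient. The older "radiation as a fluid" approach replaces this equation with averaged moment equations plus a closure assumption, which is exactly the simplification the new algorithm avoids.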
 
Lead author Lizhong Zhang describes it as "observing" black hole behavior not through telescopes, but through the computer, effectively creating a digital observatory of regions impossible to image directly.

What the Simulations Reveal

  • The simulations show that, even in a radiation-dominated, highly turbulent environment, matter forms a dense, thin thermal disk near the black hole, embedded inside a magnetically dominated envelope. The envelope appears to stabilize the system, a surprising sign of structural order emerging from chaos.
  • Around the disk, the model captures winds and sometimes powerful jets: outflows of matter and energy that match what astronomers see in real systems like ultraluminous X-ray sources and X-ray binaries.
  • When the team compared the simulated radiation spectra to real observations, the match was strong. That suggests the simulation is more than theoretical; it may faithfully represent how black holes behave in nature.

Why Supercomputers Were Critical

Modeling a black hole's accretion in full detail is computationally brutal. Gravity warps spacetime (general relativity), matter behaves under magneto-hydrodynamics (MHD), and radiation interacts with gas, all tightly coupled in nonlinear, dynamic ways. Solving that in 3D over time requires billions of calculations per second and software optimized down to the metal.
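A rough back-of-the-envelope sketch shows why the cost explodes. The work scales as grid cells × variables per cell × operations per update × time steps; every number below is an illustrative assumption, not a figure from the study:

```python
# Back-of-the-envelope cost of a 3D radiation-MHD simulation.
# All numbers are illustrative assumptions, not values from the paper.

cells = 512 ** 3          # a 512^3 spatial grid
variables = 50            # MHD fields plus many radiation-angle bins per cell
flops_per_update = 500    # assumed operations per variable per cell per step
steps = 1_000_000         # time steps needed to evolve the turbulent flow

total_flops = cells * variables * flops_per_update * steps
print(f"total operations: {total_flops:.2e}")  # already in exascale territory
```

Even with these modest assumptions the count lands above 10^18 operations, which is why machines like Frontier and Aurora, rather than ordinary clusters, were required.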
 
The combination of cutting-edge algorithm design (led by co-authors such as Christopher White and Patrick Mullen) with the brute force of exascale machines allowed the team to finally do this computation, the kind of problem that would have been intractable a decade ago.

What's Next: Cosmic Simulations Go Big

This is just the first in a series of papers. The team plans to apply their model to a wider range of black hole systems, from stellar-mass holes (a few times the mass of the Sun) to the supermassive giants that lurk at the centers of galaxies.
 
If successful, this work could reshape our understanding of how black holes grow and affect their surroundings, from the jets they shoot out to the winds they drive, and how they light up in X-rays and other wavelengths.

Big Picture: When Code Becomes Our Telescope

We're living in an era where code + supercomputing = cosmic telescope. With enough computational power and smart algorithms, researchers can simulate regions of the universe that not even our most advanced telescopes can resolve. The result is a kind of synthetic observation, a digital microscope turned on the universe's darkest objects.
 
It's a perspective shift: rather than just watching the universe, we're now capable of recreating pieces of it in silico, exploring how extreme gravity, magnetism, and radiation dance together around black holes.
 
The cosmic circus is no longer only for telescopes; now, supercomputers get front-row seats.

Seeing the unseeable: how AI and supercomputers provide a clearer view of black holes

The world gasped when the first image of a black hole, M87* at the heart of the galaxy Messier 87, was released in 2019. The hazy ring with a dark core confirmed what Einstein predicted decades prior: black holes cast "shadows" where no light escapes.
 
However, for scientists at the Perimeter Institute for Theoretical Physics (PI) in Canada, this image was merely the starting point. Now, thanks to supercomputing and the rise of artificial intelligence (AI), researchers are uncovering layers of cosmic fog, enabling them not only to see black holes but also to understand their dynamics with unprecedented precision.

From fuzzy ring to data-rich portraits

The tool doing much of this heavy lifting is a machine-learning model developed by PI researcher Avery Broderick and his team. Their system, called ALINet, can generate billions of candidate images, a thousand times faster than traditional methods, enabling scientists to compare real observational data against thousands of theoretical black hole models in a matter of hours.
 
Traditionally, interpreting data from the Event Horizon Telescope (EHT) meant painstakingly reconstructing images, then matching them by hand to models of how black hole plasma behaves. That process could take weeks, even on powerful hardware. Now, with ALINet, what once took a month can be achieved in a day, using a fraction of the computational cores.
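Stripped of the neural network, the underlying comparison step is straightforward: score each candidate model against the observed data and keep the best fit. ALINet's internals aren't described in this article, so the sketch below shows only the classical chi-squared baseline it accelerates, with a toy 1D ring profile standing in for real EHT data:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_noise = 0.05

# Toy "observation": brightness vs. radius for a ring of radius 1.3
# (arbitrary units), standing in for real interferometric data.
r = np.linspace(0.0, 3.0, 200)

def ring_profile(radius, width=0.2):
    """Gaussian ring of the given radius, evaluated on the grid r."""
    return np.exp(-0.5 * ((r - radius) / width) ** 2)

observed = ring_profile(1.3) + rng.normal(0.0, sigma_noise, r.size)

# Candidate models: rings of different radii (stand-ins for theoretical images).
candidates = np.linspace(0.5, 2.5, 201)
chi2 = [np.sum((observed - ring_profile(radius)) ** 2) / sigma_noise ** 2
        for radius in candidates]

best = candidates[int(np.argmin(chi2))]
print(f"best-fit ring radius: {best:.2f}")
```

The speedup ALINet delivers comes from generating and scoring candidates far faster than this brute-force loop, but the statistical question being answered is the same.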

Denoising the cosmos, even through the galactic haze

The challenge isn’t just speed. The center of our own galaxy, home to Sagittarius A*, the supermassive black hole at the Milky Way’s heart, lies behind a dense curtain of interstellar gas, dust and turbulent plasma. That material distorts radio waves, blurring and scattering the signals that astronomers receive.
 
Broderick’s team has now trained neural networks to perform “de-scattering,” essentially deblurring cosmic interference and letting scientists peer through the galactic veil. Early results published in 2025 show this can almost completely reverse the scattering at the EHT’s operational wavelength, offering a much clearer view of Sgr A*.
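Scattering by interstellar plasma acts roughly like a blurring kernel convolved with the true image, so "de-scattering" is at heart a deconvolution problem. The article doesn't detail the neural-network method, so the snippet below shows only the classical analogue, Wiener deconvolution of a 1D signal with a known Gaussian kernel, which the learned approach generalizes to unknown, turbulent scattering:

```python
import numpy as np

rng = np.random.default_rng(1)

# True 1D "image": two sharp point sources.
n = 128
true = np.zeros(n)
true[40], true[80] = 1.0, 0.7

# Blur kernel standing in for interstellar scattering (Gaussian, known here).
x = np.arange(n)
kernel = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()
kernel = np.roll(kernel, -(n // 2))  # center at index 0 for circular convolution

blurred = np.real(np.fft.ifft(np.fft.fft(true) * np.fft.fft(kernel)))
blurred += rng.normal(0.0, 1e-3, n)  # measurement noise

# Wiener deconvolution: divide in Fourier space, regularized by a
# noise-to-signal ratio so near-zero frequencies don't blow up.
H = np.fft.fft(kernel)
nsr = 1e-4
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H)
                               / (np.abs(H) ** 2 + nsr)))

print(f"brightest restored pixel: {int(np.argmax(restored))}")
```

Here the kernel is known exactly; the hard part of the real problem is that the galaxy's scattering screen is not, which is where the trained networks earn their keep.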

Supercomputing + AI: a combo that changes the game

This isn’t just about pretty pictures. Supermassive black holes, M87*, Sagittarius A*, and many more, are extreme gravitational laboratories. Understanding their behavior helps physicists probe deep questions: how matter behaves under extreme gravity, how space–time warps, how quantum effects might play out in the most intense conditions in the cosmos.
 
In fields beyond imaging, AI + high-performance computing (HPC) is already making waves. Teams have used distributed AI models running on supercomputers to detect gravitational wave signals from colliding black holes, and do so far faster than older methods. The success of such efforts shows that combining AI with raw compute scale isn’t just clever, it’s essential for the next frontier of astrophysics.

Why this matters, and why now

With tools like ALINet, astronomers can now treat black hole observations as data-rich investigations rather than fuzzy guesses. Instead of asking "Does this look like a ring?", scientists can now ask, "What spin, mass, and plasma configuration best matches the data?" They can also get answers rapidly, enabling more frequent updates as new observations come in.
 
For humanity, this means black holes, once relegated to science fiction and unreachable math, are becoming real, measurable entities. AI and supercomputers are turning the unknown into the known.
 
As Broderick puts it, this is "enabling technology," transforming a month-long computational slog into a swift, repeatable analysis. The cosmos just got sharper.

Big numbers, big bets: Dell scales up HPC for the AI era

Dell Technologies (Dell) posted strong third-quarter results for fiscal 2026, with $27.0 billion in revenue, up 11% year-over-year, and diluted EPS of $2.28. Its Infrastructure Solutions Group (ISG) was the standout: servers and networking delivered $10.1 billion in revenue, up 37% YoY, and overall ISG revenue hit $14.1 billion, up 24%.
 
Dell says this growth stems from surging demand for AI servers, with $12.3 billion in new AI-server orders during the quarter alone, and a year-to-date pipeline of about $30 billion, mixed across enterprise, sovereign-cloud, and large-scale "neocloud" customers.
 
In plain terms: Dell is investing heavily in high-performance computing infrastructure. This includes building large HPC clusters, deploying custom AI servers, and providing flexible scaling options for global enterprises and sovereign cloud buyers. Their ability to provide complete HPC solutions, including compute, networking, support, and storage, makes them a key partner for organizations needing powerful, scalable computing resources, from research institutions to cloud providers.

The GPU King: Nvidia’s Q3 Rocket Fuel for HPC Infrastructure

NVIDIA delivered blow-out third-quarter results: $57.0 billion in revenue, a 62% increase over last year. Data center revenue alone hit a record $51.2 billion, up 66% YoY.
 
Nvidia executives highlighted that demand for its latest GPU architecture, NVIDIA Blackwell, remains red-hot and that cloud GPUs are “sold out.” The firm sees this demand driven by exploding workloads in training and inference for generative AI, large-language models, HPC, and emerging “agentic” AI. 
 
On margins and profitability, Nvidia remains a beast: non-GAAP gross margin came in around 73.6%, with operating income and EPS both rising sharply.
 
Bottom line: Nvidia is arguably the single most influential driver of high-performance AI and HPC compute capacity today. Its GPUs, systems, and software stack (e.g., CUDA) have become the backbone for data centers, research labs, and cloud providers racing to build next-gen AI infrastructure.

Dell vs. Nvidia: Two Sides of the HPC Coin

  • Business Model: Dell sells servers, networking gear, storage, services, and full-stack HPC and AI infrastructure; Nvidia sells GPU accelerators (and full systems), the compute "engines" behind AI/HPC workloads.
  • Q3 FY26 Results (scale): Dell posted $27B in revenue, with servers and networking up 37% YoY, strong cash flow, and a $30B+ pipeline of AI-server orders; Nvidia posted $57B in revenue, including $51.2B from data centers, with GPU demand "off the charts" and high margins.
  • Value Prop in HPC: Dell offers custom, turnkey compute, networking, and support, ideal for enterprises, sovereign clouds, and large HPC deployments; Nvidia offers massive compute density and efficiency, the horsepower behind cutting-edge AI training/inference and HPC workloads.
  • Strategic Strength: Dell excels at engineering and integration, combining compute, infrastructure, global support, and customization; Nvidia leads on technology, GPU performance, software ecosystem, scale, and brand dominance in AI/HPC.
  • Best Fit Use Cases: Dell suits organizations that want turnkey HPC clusters, enterprise AI deployments, or regulated/sovereign environments; Nvidia suits entities needing raw GPU compute for AI training, large-scale inference, simulation, and scientific computing where maximum performance matters.
 
In other words: Dell builds the highway; Nvidia builds the engines that run fastest on it.

Why This Matters and What’s Next

With both firms posting record results, the HPC and AI-infrastructure space is clearly firing on all cylinders. For enterprises and institutions in any region, this means two things:
  • Access to enterprise-grade HPC infrastructure is becoming easier and more affordable. Institutions needing heavy compute (data analysis, big data, simulation, AI modeling) can now tap into turnkey server/GPU clusters from Dell, powered by Nvidia GPUs.
  • AI and HPC scale are accelerating. Given Nvidia’s GPU dominance and Dell’s global delivery + support capabilities, the barrier to entry for building powerful AI-powered compute environments is dropping. We might soon see more data-heavy, compute-intensive startups or public-sector deployments outside traditional tech hubs.
Looking ahead, if current order backlogs, demand for AI servers, and GPU supply hold, we could be on the brink of a new wave of HPC deployments across research, modeling, enterprise AI, climate modeling, healthcare genomics, and other data-heavy fields.
 
This quarter's numbers from Dell and Nvidia aren't just financial wins; they signal that high-performance computing is shifting from niche to mainstream. For anyone involved in software and big data, this is a trend worth paying attention to.