Scholars harness supercomputers to peer inside black holes, through code, not telescopes

A team of computational astrophysicists has broken new ground, using some of the planet's most powerful supercomputers to simulate, in full fidelity, how matter spirals into a black hole and lights up in a blaze of radiation. Their results, announced on December 3, 2025, by the Institute for Advanced Study (IAS) and the Flatiron Institute, deliver what may be the most detailed, realistic model yet of "luminous black hole accretion."

From Toy Models to Full-Blown Virtual Realities

For decades, astrophysicists have studied black hole accretion, the process by which gas, dust, and other matter fall into black holes, using simplified models. These toy-model approximations treated radiation as if it were a fluid, glossing over the real physics of how light moves through warped spacetime around a black hole.
 
Thanks to a new computational algorithm coupled with access to exascale-class supercomputers, namely Frontier at Oak Ridge National Laboratory and Aurora at Argonne National Laboratory, the researchers directly solved the full radiation-transport equations under general relativity, without simplifying assumptions.
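The full general-relativistic problem is far beyond a snippet, but a one-dimensional toy conveys the kind of equation being solved. The sketch below integrates the classic transfer equation dI/ds = j - alpha*I along a single ray in flat spacetime; the emissivity and opacity profiles are invented for illustration, while the published simulations solve the frequency- and angle-dependent relativistic version, fully coupled to the magnetized gas.

```python
import numpy as np

# Minimal sketch: integrate the radiative-transfer equation dI/ds = j - alpha * I
# along a single ray in flat spacetime. The published simulations solve the full
# angle-dependent, general-relativistic version coupled to MHD; the emissivity
# j(s) and opacity alpha(s) used here are illustrative toys.

def integrate_ray(s, j, alpha, I0=0.0):
    """First-order explicit integration of specific intensity along a ray."""
    I = I0
    for k in range(len(s) - 1):
        ds = s[k + 1] - s[k]
        I += (j[k] - alpha[k] * I) * ds   # Euler step of dI/ds = j - alpha * I
    return I

s = np.linspace(0.0, 10.0, 2000)          # path length through the emitting gas
j = np.exp(-((s - 5.0) ** 2))             # toy emissivity peaked mid-ray
alpha = 0.1 * np.ones_like(s)             # toy constant absorption coefficient

print(f"Emergent intensity: {integrate_ray(s, j, alpha):.4f}")
```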
 
Lead author Lizhong Zhang describes it as "observing" black hole behavior not through telescopes, but through the computer, effectively creating a digital observatory of regions impossible to image directly.

What the Simulations Reveal
  • The simulations show that, even in a radiation-dominated, highly turbulent environment, matter forms a dense, thin thermal disk near the black hole, embedded inside a magnetically dominated envelope. The envelope appears to stabilize the system, a surprising sign of structural order emerging from chaos.
  • Around the disk, the model captures winds and sometimes powerful jets: outflows of matter and energy that match what astronomers see in real systems like ultraluminous X-ray sources and X-ray binaries.
  • When the team compared the simulated radiation spectra to real observations, the match was strong. That suggests the simulation is more than theoretical; it may faithfully represent how black holes behave in nature.

Why Supercomputers Were Critical

Modeling a black hole's accretion in full detail is computationally brutal. Gravity warps spacetime (general relativity), matter behaves under magneto-hydrodynamics (MHD), and radiation interacts with gas, all tightly coupled in nonlinear, dynamic ways. Solving that in 3D over time requires quintillions of calculations per second and software optimized down to the metal.
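To see why exascale hardware matters, here is a rough back-of-envelope estimate in Python. The grid size, per-cell cost, step count, and sustained throughput are all assumptions chosen to illustrate the scale, not figures from the paper.

```python
# Back-of-envelope cost of evolving a 3D radiation-MHD grid. Every number
# below is an illustrative assumption, not a figure from the study.

cells = 1024 ** 3             # a 1024^3 simulation grid (assumed)
flops_per_cell = 100_000      # per-step cost incl. radiation transport (assumed)
timesteps = 10_000_000        # steps to span many orbits near the hole (assumed)

total_flops = cells * flops_per_cell * timesteps
sustained = 1e17              # assume ~10% of an exaFLOP/s machine, sustained

hours = total_flops / sustained / 3600
print(f"Total work: {total_flops:.2e} FLOPs, ~{hours:.1f} hours at {sustained:.0e} FLOP/s")
```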
 
The combination of cutting-edge algorithm design (led by co-authors such as Christopher White and Patrick Mullen) with the brute force of exascale machines allowed the team to finally perform this computation, one that would have been intractable a decade ago.

What's Next: Cosmic Simulations Go Big

This is just the first in a series of papers. The team plans to apply their model to a wider range of black hole systems, from stellar-mass black holes (a few times the mass of the Sun) to the supermassive giants that lurk at the centers of galaxies.
 
If successful, this work could reshape our understanding of how black holes grow and affect their surroundings, from the jets they shoot out to the winds they drive, and how they light up in X-rays and other wavelengths.

Big Picture: When Code Becomes Our Telescope

We're living in an era where code + supercomputing = cosmic telescope. With enough computational power and smart algorithms, researchers can simulate regions of the universe that not even our most advanced telescopes can resolve. The result is a kind of synthetic observation, a digital microscope turned on the universe's darkest objects.
 
It's a perspective shift: rather than just watching the universe, we're now capable of recreating pieces of it in silico, exploring how extreme gravity, magnetism, and radiation dance together around black holes.
 
The cosmic circus is no longer only for telescopes; now, supercomputers get front-row seats.

Seeing the unseeable: How AI and supercomputers provide a clearer view of black holes

The world gasped when the first image of a black hole, M87* at the heart of the galaxy Messier 87, was released in 2019. The hazy ring with a dark core confirmed what Einstein's theory of general relativity predicted more than a century earlier: black holes cast "shadows" where no light escapes.
 
However, for scientists at the Perimeter Institute for Theoretical Physics (PI) in Canada, this image was merely the starting point. Now, thanks to supercomputing and the rise of artificial intelligence (AI), researchers are peeling back layers of cosmic fog, enabling them not only to see black holes but also to understand their dynamics with unprecedented precision.

From fuzzy ring to data-rich portraits

The tool doing much of this heavy lifting is a machine-learning model developed by PI researcher Avery Broderick and his team. Their system, called ALINet, can generate billions of candidate images, a thousand times faster than traditional methods, enabling scientists to compare real observational data against thousands of theoretical black hole models in a matter of hours.
 
Traditionally, interpreting data from the Event Horizon Telescope (EHT) meant painstakingly reconstructing images, then matching them by hand to models of how black hole plasma behaves. That process could take weeks, even on powerful hardware. Now, with ALINet, what once took a month can be achieved in a day, using a fraction of the computational cores.
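The generate-and-compare loop is easy to caricature. The sketch below ranks a stack of stand-in model images against a stand-in observation using a simple per-pixel chi-square, vectorized so thousands of candidates are scored at once. ALINet’s real architecture and the EHT’s actual likelihood are far more sophisticated; every array and number here is a placeholder.

```python
import numpy as np

# Hypothetical sketch of the "compare data against many models" step: score a
# whole library of candidate images against one observation in a single
# vectorized pass, then pick the best-fitting candidate.

rng = np.random.default_rng(0)
observed = rng.random((64, 64))          # stand-in for a reconstructed EHT image
sigma = 0.05                             # assumed per-pixel uncertainty
models = rng.random((5000, 64, 64))      # stand-in library of candidate images

chi2 = (((models - observed) / sigma) ** 2).sum(axis=(1, 2))
best = int(np.argmin(chi2))
print(f"Best-fitting model: index {best}, chi^2 = {chi2[best]:.1f}")
```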

Denoising the cosmos, even through the galactic haze

The challenge isn’t just speed. The center of our own galaxy, home to Sagittarius A*, the supermassive black hole at the Milky Way’s heart, lies behind a dense curtain of interstellar gas, dust and turbulent plasma. That material distorts radio waves, blurring and scattering the signals that astronomers receive.
 
Broderick’s team has now trained neural networks to perform “de-scattering,” essentially deblurring cosmic interference and letting scientists peer through the galactic veil. Early results published in 2025 show this can almost completely reverse the scattering at the EHT’s operational wavelength, offering a much clearer view of Sgr A*.
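The team’s actual network design isn’t described here, so the snippet below is purely an illustrative stand-in: a small convolutional image-to-image model of the kind commonly used for deblurring, mapping a scattered input image to a cleaned estimate. Every layer choice and size is an assumption.

```python
import torch
import torch.nn as nn

# Toy stand-in for a "de-scattering" network: a small convolutional
# image-to-image model that maps a scatter-blurred image to a deblurred
# estimate. Architecture, training data, and loss are all assumed here.

descatter = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

scattered = torch.rand(1, 1, 64, 64)     # stand-in for a scatter-blurred Sgr A* image
with torch.no_grad():
    estimate = descatter(scattered)      # network's deblurred estimate
print(estimate.shape)                    # torch.Size([1, 1, 64, 64])
```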

Supercomputing + AI: a combo that changes the game

This isn’t just about pretty pictures. Supermassive black holes (M87*, Sagittarius A*, and many more) are extreme gravitational laboratories. Understanding their behavior helps physicists probe deep questions: how matter behaves under extreme gravity, how space–time warps, and how quantum effects might play out in the most intense conditions in the cosmos.
 
In fields beyond imaging, AI + high-performance computing (HPC) is already making waves. Teams have used distributed AI models running on supercomputers to detect gravitational wave signals from colliding black holes, and do so far faster than older methods. The success of such efforts shows that combining AI with raw compute scale isn’t just clever, it’s essential for the next frontier of astrophysics.
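The pipelines behind those results aren’t detailed here, so the following is only a shape-of-the-approach sketch: a tiny 1D convolutional classifier that labels strain time-series segments as "signal" or "noise." The layer sizes, segment length, and sample rate are all assumptions.

```python
import torch
import torch.nn as nn

# Illustrative-only sketch of AI-based gravitational-wave detection: a small
# 1D CNN classifying strain segments as noise-only vs. signal-present. Real
# pipelines are vastly larger and trained at supercomputer scale.

detector = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=16, stride=4), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=16, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                    # logits: [noise, signal]
)

strain = torch.randn(8, 1, 4096)         # batch of 1-second, 4096 Hz segments (assumed)
logits = detector(strain)
print(logits.shape)                      # torch.Size([8, 2])
```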

Why this matters, and why now

With tools like ALINet, astronomers can now treat black hole observations as data-rich investigations rather than fuzzy guesses. Instead of asking "Does this look like a ring?", scientists can now ask, "What spin, mass, and plasma configuration best matches the data?" They can also get answers rapidly, enabling more frequent updates as new observations come in.
 
For humanity, this means black holes, once relegated to science fiction and unreachable math, are becoming real, measurable entities. AI and supercomputers are turning the unknown into the known.
 
As Broderick puts it, this is "enabling technology," transforming a month-long computational slog into a swift, repeatable analysis. The cosmos just got sharper.

Big numbers, big bets: Dell scales up HPC for the AI era

Dell Technologies (Dell) posted strong third-quarter results for fiscal 2026, with $27.0 billion in revenue, up 11% year-over-year, and diluted EPS of $2.28. Its Infrastructure Solutions Group (ISG) was the standout: the servers-and-networking line delivered $10.1 billion in revenue, up 37% YoY, and overall ISG revenue hit $14.1 billion, up 24%.
 
Dell says this growth stems from surging demand for AI servers, with $12.3 billion in new AI-server orders during the quarter alone, and a year-to-date pipeline of about $30 billion, mixed across enterprise, sovereign-cloud, and large-scale "neocloud" customers.
 
In plain terms: Dell is investing heavily in high-performance computing infrastructure. This includes building large HPC clusters, deploying custom AI servers, and providing flexible scaling options for global enterprises and sovereign cloud buyers. Their ability to provide complete HPC solutions, including compute, networking, support, and storage, makes them a key partner for organizations needing powerful, scalable computing resources, from research institutions to cloud providers.

The GPU King: Nvidia’s Q3 Rocket Fuel for HPC Infrastructure

Nvidia delivered blowout third-quarter results: $57.0 billion in revenue, a 62% increase over last year. Data-center revenue alone hit a record $51.2 billion, up 66% YoY.
 
Nvidia executives highlighted that demand for its latest GPU architecture, NVIDIA Blackwell, remains red-hot and that cloud GPUs are “sold out.” The firm sees this demand driven by exploding workloads in training and inference for generative AI, large-language models, HPC, and emerging “agentic” AI. 
 
On margins and profitability, Nvidia remains a beast: non-GAAP gross margin came in around 73.6%, with operating income and EPS both rising sharply.
 
Bottom line: Nvidia is arguably the single most influential driver of high-performance AI and HPC compute capacity today. Its GPUs, systems, and software stack (e.g., CUDA) have become the backbone for data centers, research labs, and cloud providers racing to build next-gen AI infrastructure.

Dell vs. Nvidia: Two Sides of the HPC Coin

Business Model
  • Dell: Selling servers, networking gear, storage, services, and full-stack HPC and AI infrastructure.
  • Nvidia: Selling GPU accelerators (and full systems), the compute “engines” behind AI/HPC workloads.

Q3 FY26 Results (scale)
  • Dell: Revenue of $27B; servers-and-networking revenue up 37% YoY; strong cash flow; a roughly $30B pipeline of AI-server orders.
  • Nvidia: Revenue of $57B; data-center revenue of $51.2B; GPU demand “off the charts”; high margins.

Value Prop in HPC
  • Dell: Custom, turnkey compute + networking + support, ideal for enterprises, sovereign clouds, and large HPC deployments.
  • Nvidia: Massive compute density and efficiency, enabling cutting-edge AI training/inference and HPC workloads; the horsepower behind them.

Strategic Strength
  • Dell: Engineering and integration, combining compute + infrastructure + global support + customization.
  • Nvidia: Tech leadership, GPU performance, software ecosystem, scale, and brand dominance in AI/HPC.

Best-Fit Use Cases
  • Dell: Organizations that want turnkey HPC clusters, enterprise AI deployments, or regulated/sovereign environments.
  • Nvidia: Entities needing raw GPU compute for AI training, large-scale inference, simulation, and scientific computing where maximum performance matters.
 
In other words: Dell builds the highway; Nvidia builds the engines that run fastest on it.

Why This Matters and What’s Next

With both firms posting record results, the HPC and AI-infrastructure space is clearly firing on all cylinders. For enterprises and institutions in any region, this means two things:
  • Access to enterprise-grade HPC infrastructure is becoming easier and more affordable. Institutions needing heavy compute (data analysis, big data, simulation, AI modeling) can now tap into turnkey server/GPU clusters from Dell, powered by Nvidia GPUs.
  • AI and HPC scale are accelerating. Given Nvidia’s GPU dominance and Dell’s global delivery + support capabilities, the barrier to entry for building powerful AI-powered compute environments is dropping. We might soon see more data-heavy, compute-intensive startups or public-sector deployments outside traditional tech hubs.

Looking ahead, if current order backlogs, demand for AI servers, and GPU supply hold, we could be on the brink of a new wave of HPC deployments across research, enterprise AI, climate modeling, healthcare genomics, and other data-heavy fields.
 
This quarter's numbers from Dell and Nvidia aren't just financial wins; they signal that high-performance computing is shifting from niche to mainstream. For anyone involved in software and big data, this is a signal worth paying attention to.

SC25 pushes network frontiers as Pegatron unveils modular server ambitions

In St. Louis, the high-performance computing world thrives on pushing limits, and this year’s SC25 conference delivered another leap forward, both on the show floor and across the wires of the legendary SCinet network.
 
Pegatron, a global leader in electronics manufacturing, showcased its next-generation server roadmap, emphasizing the company’s vision for modular, power-efficient systems engineered for the AI-accelerated era. The company’s SC25 press release highlighted a strategic expansion into advanced rack-scale design, with an emphasis on flexibility, field-replaceable modules, and full-stack energy optimization. But even that technical momentum was matched, if not eclipsed, by the sheer scale of the network beneath attendees’ feet.

SCinet Hits a New Threshold: 13.72 Tbps

SCinet, the volunteer-built engineering marvel that powers every Supercomputing conference, announced its highest throughput ever recorded: 13.72 terabits per second (Tbps) for SC25.
 
To put this into perspective, SCinet’s wide-area network (WAN) backbone has grown at a pace few global networks can match (a quick back-of-envelope calculation follows the list below):
  • SC25 (St. Louis): 13.72 Tbps
  • SC24 (Atlanta): 8.71 Tbps
  • SC23 (Denver): 6.71 Tbps
  • SC22: 5.01 Tbps
  • SC19: 4.22 Tbps
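
To make that headline number concrete, here is a quick back-of-envelope calculation in Python; the 1 PB dataset size is an assumption chosen purely for illustration.

```python
# How long would SCinet's SC25 backbone take to move a 1-petabyte dataset
# at full capacity? Dataset size is assumed purely for illustration.

bandwidth_bps = 13.72e12      # SC25 peak WAN capacity, in bits per second
dataset_bits = 1e15 * 8       # 1 PB expressed in bits

seconds = dataset_bits / bandwidth_bps
print(f"1 PB at full capacity: {seconds:.0f} s (~{seconds / 60:.1f} minutes)")
# -> roughly 583 s, a little under 10 minutes
```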
Every year, SCinet is torn down and rebuilt by an army of volunteer engineers, network architects, and researchers from around the world, who converge to create the fastest temporary network on Earth. Its sole mission: enable the bleeding-edge demos that define the HPC community.
 
As datasets balloon and GPU clusters grow hungrier by the day, SCinet’s growth isn’t a luxury; it’s a necessity.

Pegatron’s Modular Pivot: A Server for the AI Era

In its SC25 release, Pegatron detailed its next-gen server platform built around modularity, thermal efficiency, and rapid deployment, all themes dominating this year’s conference.
 
Key takeaways from Pegatron’s announcement include:
• Modular AI-ready infrastructure
Pegatron outlined blade-style compute modules designed to scale from traditional HPC to dense GPU and accelerator configurations.
• Energy-optimized design
The company emphasized new power-distribution and cooling architectures intended to support the surge of high-wattage AI accelerators without sacrificing stability or serviceability.
• Manufacturing muscle
Leveraging its global supply chain, Pegatron aims to support hyperscalers, enterprise AI builders, and research labs that need rapid, consistent deployment cycles as models grow more compute-intensive.
 
Pegatron’s SC25 presence signals its intent to be more than an OEM; it wants to shape the future of rack-scale AI infrastructure.

Why the Two Stories Intersect

SCinet’s explosive bandwidth growth and Pegatron’s hardware ambitions aren’t isolated trends; they’re parallel responses to the same fundamental shift: AI workloads are becoming the dominant driver of HPC system design.
 
Training runs now require:
  • Uncompressed terabyte-scale dataset transfers
  • Multi-site distributed training
  • Real-time visualization pipelines
  • Exascale-class telemetry
At SC25, the relationship between compute, cooling, networking, and manufacturing has never been more visible. Pegatron’s modular hardware approach pairs naturally with a world where SCinet-class networks will soon be the norm, not the exception.

A Future Built on Collaboration and Momentum

SCinet’s volunteers, the invisible heroes of the SC conference, have once again demonstrated what’s possible when the global HPC community collaborates without restraint.
 
Pegatron’s announcement adds another layer of optimism: that the companies powering AI and HPC infrastructure are evolving just as quickly as the workloads they support.
 
SC25 feels like a hinge moment. Faster networks. Smarter servers. Greener cooling systems. More modular racks. And an industry that’s learning to innovate at the pace of AI itself.
 
The bar has officially been raised. And judging by the energy on the SC25 floor, the community seems ready to clear it again next year.

Darren Burgess, Castrol’s Data Center Cooling

Castrol expands its thermal management empire with strategic investment in ECS

In St. Louis, the rising heat of next-generation AI met its match at SC25 as Castrol announced a strategic investment in Electronic Cooling Solutions (ECS), a Santa Clara–based thermal engineering firm known for its deep bench of CFD modeling, reliability testing, and design-for-deployment expertise. The move signals Castrol’s shift from “fluid supplier” to full-stack thermal partner for data centers navigating the swelling power demands of artificial intelligence and high-performance computing.
 
Between keynotes, we sat down with Darren Burgess, Castrol’s Data Center Cooling specialist from Austin, Texas. In a conversation that bounced from Bitcoin mines to hyperscale design rooms, Burgess laid out why Castrol is betting big on immersion cooling, and why ECS is the linchpin.

Immersion Cooling’s Momentum and Why Single-Phase Leads Today

Burgess described immersion cooling as “the simplest path to big power savings,” emphasizing single-phase immersion as the star of today’s deployments. Bitcoin miners have already paved the way: predictable thermals, easy heat capture, fewer moving parts, and measurable reductions in energy overhead.
 
“The industry is learning what miners figured out early,” Burgess told us. “When the density goes up, air just taps out.”
 
Two-phase immersion may be the future, but Castrol is positioning it carefully. “It’s coming,” Burgess said. “But the industry needs predictable supply chains and stability first. That’s where Castrol’s global network becomes an advantage.”

The Glycol Problem No One Talks About

Castrol’s data center expansion isn’t just about immersion. Burgess highlighted a quieter but critical battleground: the chemistry inside traditional hydronic loops, specifically propylene glycol (PG 25), a staple in cooling systems whose stability is often taken for granted.
 
“PG is like a living system,” Burgess said. “If you don’t monitor it, corrosion becomes an invisible tax. Fluid health isn’t optional anymore, it’s uptime insurance.”
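
To make “fluid health” concrete, here is a toy sketch of what an automated coolant check might look like. The thresholds, field names, and bands below are hypothetical illustrations, not Castrol specifications.

```python
# Toy fluid-health check for a glycol cooling loop. All thresholds are
# hypothetical; real monitoring programs track many more chemistry markers
# (inhibitor packages, conductivity, bacterial growth, etc.).

THRESHOLDS = {
    "ph": (7.5, 9.5),                 # acceptable pH window (assumed)
    "glycol_pct": (22.0, 28.0),       # target band around a PG 25 mix (assumed)
    "inhibitor_ppm": (1000.0, None),  # minimum corrosion-inhibitor level (assumed)
}

def check_fluid(sample: dict) -> list[str]:
    """Return out-of-range warnings for a coolant sample."""
    warnings = []
    for key, (low, high) in THRESHOLDS.items():
        value = sample[key]
        if (low is not None and value < low) or (high is not None and value > high):
            warnings.append(f"{key} out of range: {value}")
    return warnings

print(check_fluid({"ph": 7.1, "glycol_pct": 24.5, "inhibitor_ppm": 850.0}))
# -> ['ph out of range: 7.1', 'inhibitor_ppm out of range: 850.0']
```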
 
Castrol is developing next-gen formulations, including detoxified ethylene glycol options with higher-temperature tolerance.

ECS + Castrol: A Full-Stack Thermal Alliance

The newly announced investment gives Castrol something it has never possessed at a global scale: deep thermal engineering capabilities that touch every layer of system design.
ECS brings:
  • Room-to-rack thermal modeling
  • System-level CFD
  • Failure-mode and reliability analysis
  • Immersion and liquid cooling design validation
  • Acclimation, condensation, and corrosion forensic services
Their portfolio includes AI module liquid-cooling designs up to 17 kW, corrosion root-cause tracing, and environmental acclimation studies for hyperscale data centers.
 
With Castrol’s investment, Bharat Vats, an industry veteran and former CEO of Atom Power, has been named President and CEO of ECS. His mandate: scale up ECS’s impact across hyperscalers, OEMs, cloud providers, and energy-intensive AI labs.
 
“Working with Castrol opens the door for ECS to reach the entire data center ecosystem,” Vats said. “Together, we can accelerate the shift to more efficient cooling architectures.”

Why This Investment Matters Now

A recent Castrol-commissioned survey found that 74% of data-center experts now believe liquid cooling is the only path forward for today’s AI power densities. Yet many operators hesitate due to integration complexity and a lack of trusted partners.
 
Castrol believes combining its supply-chain muscle with ECS’s engineering precision will remove those barriers.
 
Peter Huang, Castrol’s Global VP of Data Centre Thermal Management, put it plainly: “The industry needs partners that can guide them from whiteboard to deployment. Castrol wants to be that end-to-end partner.”

A Turning Point for AI-Era Data Centers

SC25 has made one thing obvious: thermal is no longer a back-of-house concern. It is the governing constraint of AI. The players who master heat will be the ones who shape the computing landscape of the next decade.
 
With Castrol expanding from automotive lubricants into immersion, hydronics, and now full-stack thermal design, and ECS bringing decades of analysis and validation expertise, the partnership lands at a pivotal moment.
 
Together, they’re sending a clear message to hyperscalers and AI labs everywhere: The future isn’t just faster. The future runs colder.