Supercomputing’s next frontier: NVIDIA, CoreWeave unite to build the AI factories of tomorrow

In a defining moment for the high-performance computing (HPC) and artificial intelligence (AI) landscape, NVIDIA and CoreWeave have announced an expanded collaboration to accelerate the construction of massive AI factories: purpose-built data centers optimized for large-scale AI workloads. This partnership marks a significant leap forward for the supercomputing community, combining cutting-edge hardware, software innovation, and strategic infrastructure expansion to meet the growing demand for AI compute resources.
 
At the heart of the announcement is a $2 billion investment by NVIDIA in CoreWeave’s Class A common stock, underscoring NVIDIA’s confidence in CoreWeave’s strategy and setting the stage for an ambitious build-out of more than 5 gigawatts of AI-optimized compute capacity by 2030. These facilities, often referred to as AI factories, are expected to become the backbone of next-generation AI research, training, and deployment, offering unprecedented access to accelerated computing for enterprises, startups, and scientific institutions alike.
 
This deepening partnership goes beyond financial backing. Under the expanded agreement, CoreWeave will adopt NVIDIA CPU and storage platforms and deploy multiple generations of NVIDIA accelerated computing architectures across its cloud infrastructure, including future innovations such as the Rubin AI platform, Vera CPUs, and advanced BlueField storage systems. CoreWeave’s purpose-built software stacks, such as CoreWeave Mission Control and its reference architectures, will be jointly tested and validated to ensure seamless performance at scale.
 
For the supercomputing community, this represents more than a business transaction; it heralds the maturation of an ecosystem where dense GPU clusters, optimized interconnects, and advanced orchestration software come together to deliver supercomputing-class performance for AI workloads. These AI factories will support ultra-large neural network training, complex simulations, and inference tasks that push the limits of parallel processing and memory bandwidth, work that would be inconceivable without HPC-grade infrastructure underpinning the operations.
 
CoreWeave’s CEO, Michael Intrator, encapsulated this vision in his “The Year AI Gets to Work” blog post: the era of AI is no longer about possibility, but about making AI operational at global scale, powering real-world impact across industries and scientific fields. In his reflection, Intrator emphasized that AI has crossed a crucial threshold: the challenge has shifted from asking what is possible to asking how to deliver it everywhere it is needed. This “working” phase requires infrastructure that can keep up with the relentless pace of innovation, and that is exactly what the expanded collaboration with NVIDIA seeks to enable.
 
What makes this partnership especially noteworthy for HPC practitioners is the tight integration of evolving hardware platforms with cloud-native supercomputing architectures. CoreWeave has been among the first cloud providers to deploy NVIDIA’s advanced GPU platforms, such as the GB200 NVL72 systems, at scale, demonstrating that purpose-built AI infrastructure can rival traditional supercomputer installations in both performance and flexibility. These deployments exemplify how the modern supercomputing stack is increasingly GPU-centric, designed to support massive parallel workloads with efficiency and resilience.
 
Moreover, the collaboration underscores a broader industry trend: the convergence of HPC and AI infrastructure, where the traditional boundaries between scientific computing, enterprise AI, and cloud-native services continue to blur. The AI factories envisioned by NVIDIA and CoreWeave will serve not only core AI model training and inference but also data-intensive simulation tasks, real-time reasoning engines, and agentic AI, all workloads that demand HPC-level compute, networking, and orchestration.
 
For the supercomputing community, this development is inspirational on multiple fronts. It validates the central role of accelerated computing architectures in driving the next wave of AI and scientific discovery. It illustrates how deep collaboration between hardware innovators and infrastructure builders can unlock new levels of performance and accessibility. And it signals that the age of supercomputers is expanding from traditional national-lab behemoths into a distributed ecosystem of cloud-native AI super-infrastructure that anyone with visionary applications can tap into.
 
As we enter the AI era, our ability to build these AI factories, expand them, and make them widely accessible will shape the years to come. The NVIDIA and CoreWeave partnership stands as a model for realizing these remarkable opportunities.

Supercomputing advances the quest to resolve the Hubble tension in cosmology

In a significant step toward solving a longstanding puzzle in cosmology, a team led by Simon Fraser University is leveraging supercomputing power to investigate the Hubble tension, a paradox at the core of modern astrophysics that questions our grasp of the universe’s expansion. Their latest findings merge creative theoretical perspectives with sophisticated numerical simulations, suggesting that primordial magnetic fields may be crucial in reconciling conflicting measurements of the cosmic expansion rate. Importantly, these advances were only possible thanks to state-of-the-art supercomputing infrastructure.
 
The Hubble tension refers to the persistent discrepancy between two independent methods of measuring the rate of expansion of the universe. Local measurements using Type Ia supernovae and other distance indicators yield a higher value for the Hubble constant (H₀), roughly 73 kilometers per second per megaparsec, than estimates derived from the cosmic microwave background, the afterglow of the Big Bang observed by missions such as Planck, which favor a value closer to 67. This mismatch has challenged the standard cosmological model (ΛCDM) and inspired a plethora of hypotheses that require rigorous theoretical and numerical assessment.
 
In the new study, the research team proposes that primordial magnetic fields, weak magnetic fields that may have permeated the early universe, could have subtly altered the physics of recombination, the epoch when electrons and protons first combined to form neutral atoms. This alteration affects the interpretation of the cosmic microwave background and, consequently, inferences about the Hubble constant. If confirmed, the existence and influence of such fields would not merely ease the tension between different measurements; they could also illuminate the origin of cosmic magnetism observed throughout galaxies and intergalactic space.
 
However elegant the theory, testing it against the wealth of cosmological data requires formidable computational effort. Over the past three years, the international collaboration, which includes SFU’s Levon Pogosian along with Karsten Jedamzik, Tom Abel, and Yacine Ali-Haimoud, has utilized SFU’s Cedar supercomputer and its successor, Fir, to run large-scale simulations of recombination processes under various magnetic field scenarios. These simulations incorporate the physics of the early universe at high resolution and are used to generate predicted observational signatures that can be directly compared against data from the Hubble Space Telescope, Planck, and ground-based observatories.
 
Supercomputing plays an indispensable role in this endeavor. The complex dynamics of recombination and its imprint on cosmological observables involve solving coupled systems of equations governing plasma physics and radiative transfer, and then confronting the results with data through statistical inference. By breaking down these calculations into parallel tasks, HPC systems such as Cedar and Fir allow researchers to execute large parameter sweeps and statistical fits that would otherwise take prohibitively long on conventional machines. The result is a computational feedback loop in which simulations refine theoretical models, which in turn guide the next generation of simulations.
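
To make the pattern concrete, here is a minimal Python sketch of how such a sweep over primordial magnetic field parameters might be fanned out across worker processes; the function name, parameter names, and grid values are hypothetical placeholders, since the collaboration’s actual pipeline is not described here.

    # Minimal sketch of an HPC-style parameter sweep (hypothetical function and
    # parameter names; the collaboration's real pipeline is far more involved).
    from concurrent.futures import ProcessPoolExecutor
    from itertools import product

    def recombination_history(b_field_nG, clumping_factor):
        """Placeholder for an expensive solver that recomputes recombination
        with a primordial magnetic field of the given strength (nanogauss)."""
        # ... numerical integration of the recombination equations goes here ...
        return {"B_nG": b_field_nG, "clumping": clumping_factor, "H0_fit": None}

    # Illustrative grid of field strengths and baryon-clumping factors.
    b_fields = [0.0, 0.01, 0.05, 0.1, 0.5]
    clumpings = [1.0, 1.5, 2.0, 3.0]

    if __name__ == "__main__":
        combos = list(product(b_fields, clumpings))
        # Each grid point is independent, so the sweep parallelizes trivially
        # across the cores of a node; on a cluster, MPI ranks or scheduler job
        # arrays extend the same idea across many nodes.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(recombination_history,
                                    [b for b, _ in combos],
                                    [c for _, c in combos]))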
 
According to Pogosian, “We wouldn’t have been able to carry out our research without the supercomputer. It was crucial for our tests and calculations.” The ability to process vast datasets in parallel not only saves time but dramatically expands the scope of inquiry, enabling tests of subtle physical effects in regimes where analytical approximations fail.
 
The simulations have yielded encouraging outcomes: the primordial magnetic field hypothesis “survives the most detailed and realistic tests available today,” and the work provides clear targets for future observational campaigns. In the coming years, next-generation observatories and more refined simulations will be key to determining whether these ancient magnetic fields indeed influenced the evolution of the early universe.
 
For the supercomputing community, this research embodies the inspirational synergy between numerical simulation and fundamental physics. Here, HPC is not a mere amplifier of computational throughput; it is an enabler of discovery, allowing scientists to probe phenomena at the intersection of theory and observation. As cosmologists continue to confront deep questions about the universe’s origin, composition, and fate, supercomputers like Cedar and Fir stand at the forefront of a new era in astrophysical research.

Supercomputers power evolutionary insight: From house sparrows to conservation strategies

Amidst growing concerns over biodiversity loss and environmental change, scientists are employing advanced computational methods to reveal the genetic and evolutionary factors that contribute to species' resilience. At the Norwegian University of Science and Technology (NTNU), researchers are at the forefront of this movement, utilizing decades of ecological data on house sparrows in northern Norway and harnessing the powerful computational capabilities of NTNU’s flagship supercomputer, IDUN. These efforts not only enhance our knowledge of evolutionary dynamics in wild populations but also provide robust quantitative tools that could inform conservation strategies for a diverse range of species.
 
House sparrows (Passer domesticus), though ubiquitous across much of the world, present a compelling model for studying evolution in fragmented, wild populations. Along the coast of Helgeland, archipelagos of small islands have been the site of continuous biological monitoring for over three decades. Biologists have meticulously recorded the life histories, from birth to death, of tens of thousands of individual sparrows, amassing an unparalleled dataset of genetic, morphological, and ecological measurements.
 
In a recent study published in Evolution, NTNU researchers applied a sophisticated statistical method known as genomic prediction (GP) to this extensive dataset, aiming to assess the accuracy of predicting genetic traits across distinct wild populations. Although widely used in agriculture and breeding programs, genomic prediction has rarely been applied within the context of wild populations due to the complexity and scale of the data.
 
Where observational fieldwork leaves off, supercomputing fills the gap. Kenneth Aase, a Ph.D. research fellow at NTNU’s Department of Mathematical Sciences, emphasizes that testing model assumptions and running high-dimensional simulations requires computational resources capable of handling large datasets and complex statistical models. For the most challenging computations in his analyses, Aase turns to IDUN, NTNU’s powerful HPC system, enabling large-scale simulations and hypothesis testing that would be infeasible on standard computing platforms.
 
Supercomputers such as IDUN provide not only raw processing power but also the ability to manage multifactorial models involving hundreds of thousands to millions of genetic markers, environmental variables, and phenotypic traits. This capability enables researchers to simulate the interaction of genetic variation and environmental pressures over time, a crucial step in understanding evolutionary trajectories in fluctuating habitats.
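
The core statistical idea is easy to sketch. In the toy Python example below, a phenotype is ridge-regressed on tens of thousands of SNP genotypes in one group of individuals and then predicted in a second group, which is the essence of genomic prediction in its GBLUP-like kernel form; the data are synthetic and the marker count deliberately modest, whereas the NTNU analyses work with real sparrow genotypes, many more markers, and considerably richer statistical models.

    # Toy illustration of genomic prediction: ridge-regress a phenotype on SNP
    # genotypes in a training population, then predict it in a second set of
    # individuals. All data are synthetic; real analyses use genotypes from
    # genetically distinct island populations and more sophisticated models.
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, n_snps = 1000, 300, 20_000

    # Simulate 0/1/2 allele counts and a sparse set of true marker effects.
    X_train = rng.integers(0, 3, size=(n_train, n_snps)).astype(float)
    X_test = rng.integers(0, 3, size=(n_test, n_snps)).astype(float)
    beta = np.zeros(n_snps)
    beta[rng.choice(n_snps, 200, replace=False)] = rng.normal(0, 0.05, 200)
    y_train = X_train @ beta + rng.normal(0, 1.0, n_train)
    y_test = X_test @ beta + rng.normal(0, 1.0, n_test)

    # Ridge regression in its kernel ("GBLUP-like") form: with far more markers
    # than individuals, solving an n-by-n system is much cheaper than p-by-p.
    lam = 1e4
    K = X_train @ X_train.T
    alpha = np.linalg.solve(K + lam * np.eye(n_train), y_train - y_train.mean())
    y_pred = X_test @ (X_train.T @ alpha) + y_train.mean()

    # Prediction accuracy: correlation between predicted and observed values.
    print("prediction accuracy:", np.corrcoef(y_pred, y_test)[0, 1])

It is exactly this kind of dense linear algebra, repeated across many simulated datasets and model variants, that becomes impractical on a laptop and routine on a system like IDUN.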
 
The insights emerging from this work extend far beyond the sparrow populations themselves. By evaluating how genomic prediction performs across separated island populations, the researchers revealed limitations and opportunities in applying such models to wild species with distinct genetic backgrounds. These findings inform not only evolutionary biology but also conservation strategies for species facing rapid environmental change.
 
Crucially, the computational framework developed and tested with IDUN simulations lays the groundwork for broader applications. The GPWILD project, funded by a European Research Council grant, aims to generalize these methods to other species, including Svalbard reindeer, Scottish deer, and arctic foxes, each with unique evolutionary dynamics and conservation challenges.
 
As climate change and habitat loss continue to exert pressure on wild populations globally, quantitative tools that couple genomic data with supercomputing-enabled modeling become indispensable. They allow scientists to evaluate adaptive potential, predict responses to environmental stressors, and identify populations at greatest risk of decline, all through simulation frameworks that capture the complex interplay of genetics and ecology.
 
For SC Online readers, the NTNU house sparrow initiative highlights a key insight: supercomputers now play a pivotal role beyond physical sciences and artificial intelligence, serving as powerful catalysts in evolutionary biology and conservation research. By merging decades of detailed ecological data with high-performance computing simulations and advanced statistical models, scientists are forging innovative approaches to better understand and safeguard the natural world amid rapid global change.

Supercomputing accelerates breakthroughs in diabetes drug discovery

Showcasing the transformative impact of high-performance computing on biomedical research, scientists at The Herbert Wertheim UF Scripps Institute for Biomedical Innovation & Technology have leveraged the HiPerGator supercomputer to fast-track the discovery of new treatments for Type 2 diabetes. By employing advanced computational simulations, their research is overcoming some of the toughest challenges in drug design, reducing development timelines, and significantly improving predictive accuracy at the earliest stages.
 
Type 2 diabetes affects tens of millions of people worldwide and is characterized by the body’s reduced sensitivity to insulin, a hormone essential for glucose metabolism. Current treatment options, while effective for some patients, carry limitations and significant side effects, particularly for individuals with chronic kidney disease. A research team led by molecular biologist Patrick Griffin, Ph.D., set out to design compounds that improve insulin sensitivity by modulating a complex cellular protein known as PPAR gamma, a “master regulator” of fat cell and insulin metabolism that has long eluded safe, effective therapeutic targeting.
 
Crucially, the team integrated multiple technologies in their workflow, combining biochemical assays, structural analyses, and high-fidelity molecular simulations performed on HiPerGator, one of academia’s most powerful supercomputers. These simulations allowed researchers to model the dynamic motion and flexibility of PPAR gamma when bound to potential therapeutic compounds, yielding insights that would be exceedingly difficult to obtain through laboratory experiments alone.
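
The article does not name the simulation engine the team used, but the flavor of such a run can be sketched in a few lines of Python with the open-source OpenMM toolkit; the input file, force field choice, and run settings below are illustrative assumptions, and the system-preparation steps (ligand parameterization, solvation, equilibration) that a real PPAR gamma study would require are omitted.

    # Minimal sketch of a 100 ns molecular dynamics run with OpenMM. File names
    # and settings are illustrative; system-preparation steps are omitted.
    from openmm import LangevinMiddleIntegrator, unit
    from openmm.app import (PDBFile, ForceField, Simulation, DCDReporter,
                            StateDataReporter, PME, HBonds)

    pdb = PDBFile("complex.pdb")  # hypothetical pre-prepared protein-ligand system
    forcefield = ForceField("amber14-all.xml", "amber14/tip3pfb.xml")
    system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                     nonbondedCutoff=1.0 * unit.nanometer,
                                     constraints=HBonds)

    # 2 fs timestep with a Langevin thermostat at 300 K.
    integrator = LangevinMiddleIntegrator(300 * unit.kelvin,
                                          1.0 / unit.picosecond,
                                          0.002 * unit.picoseconds)
    sim = Simulation(pdb.topology, system, integrator)  # picks a GPU platform when available
    sim.context.setPositions(pdb.positions)
    sim.minimizeEnergy()

    sim.reporters.append(DCDReporter("trajectory.dcd", 50_000))  # one frame per 100 ps
    sim.reporters.append(StateDataReporter("run_log.csv", 50_000,
                                           step=True, temperature=True))
    sim.step(50_000_000)  # 100 ns at 2 fs per step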
 
Molecular dynamics simulations are indispensable tools in modern drug discovery. For this project, a single 100-nanosecond simulation run on HiPerGator required approximately six hours, and with 26 candidate compounds and three replicates for each, the total compute time approached 20 days of continuous processing. This illustrates not only the computational intensity of structure-based drug design but also the indispensable role of HPC in making such calculations feasible.
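
Spelled out, the arithmetic behind that figure is simple: 26 compounds × 3 replicates = 78 independent runs, and 78 runs × roughly 6 hours each ≈ 468 hours, or just under 20 days of compute if the runs were executed back to back rather than in parallel.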
 
Without access to a high-performance infrastructure like HiPerGator, such simulations could take months or longer on conventional computing systems, a pace that stands at odds with the urgency of unmet medical needs. HiPerGator’s vast array of CPU and GPU resources provides the parallel processing capabilities necessary to execute numerous complex simulations concurrently, enabling researchers to explore multiple molecular interactions and conformations in a compressed timeframe.
 
Beyond accelerating individual simulation runs, supercomputing enables scientists to adopt iterative, data-driven design strategies. By rapidly simulating how different chemical modifications influence protein dynamics, researchers can refine their hypotheses and prioritize the most promising compounds for subsequent experimental validation. This creates a computational feedback loop that bridges theory and laboratory work, ultimately streamlining the early phases of drug development.
 
The implications of this work extend well beyond diabetes. The framework established by Griffin’s team, integrating structural characterization with HPC-driven simulations and biological testing, provides a transferable blueprint for other drug discovery challenges, particularly those involving “difficult” signaling proteins with complex, multifaceted roles in human physiology.
 
As supercomputing resources such as HiPerGator evolve, with increased core counts and architectures tailored for scientific modeling and artificial intelligence, their impact on biomedical innovation is set to expand dramatically. For diseases that have long defied conventional treatments, advanced computational power now opens a new frontier, enabling researchers to test hypotheses in silico with speed and precision previously unimaginable.
 
For SC Online readers, this story underscores a clear reality: supercomputers are no longer just tools of physics, climate, or astrophysics research; they have become indispensable engines of discovery in biology and medicine. By enabling detailed simulations that inform experimental science, HPC platforms like HiPerGator are helping transform the pace and promise of drug discovery for diseases that affect millions worldwide.

Supercomputers illuminate the cosmic life cycle: Charting stars off the beaten path

In the grand cosmic ballet, stars live tumultuous lives, forming in collapsing clouds of gas, burning for millions of years, and ultimately exploding as supernovae that reshape entire galaxies. Now, thanks to cutting-edge astronomical surveys and the next generation of supercomputer simulations, scientists are beginning to see where and how these cataclysmic events unfold across the vast tapestry of space, even in places once thought unlikely.
 
A collaborative team of astronomers has produced the first large-scale census of evolved massive stars, those on the brink of explosive death, across the nearby spiral galaxy M33. By overlaying high-resolution gas maps from the NSF’s Very Large Array and ALMA with catalogs of thousands of red supergiants, Wolf–Rayet stars, and known supernova remnants, researchers uncovered a surprising truth: a majority of future stellar explosions are likely to occur outside the dense clouds where stars are born.
 
This revelation reshapes our understanding of how galaxies evolve. Supernovae don’t merely spew heavy elements into dense star-forming regions; many detonate within the more diffuse interstellar medium. In these off-the-beaten-path locales, their shock waves travel farther before dissipating, stirring gas over larger scales and influencing the cosmic ecosystem in ways that traditional models hadn’t fully captured.

Supercomputing: The Engine Behind Cosmic Insight

Bringing this level of detail to astrophysics isn’t possible without supercomputing, the computational backbone of modern galaxy simulations. Observational efforts like the Local Group L-Band Survey provide exquisite maps of gas and stars, but only large-scale cosmological simulations can trace millions to billions of years of galactic evolution, modeling how stars interact with their environments over cosmic time.

These simulations, ambitious in both scale and physics, run on some of the world’s most powerful supercomputers, incorporating gravity, hydrodynamics, radiative feedback, and turbulent gas flows.
 
Models such as FIRE, Illustris, TIGRESS, and SILCC integrate complex subgrid physics to approximate processes occurring at scales far smaller than individual simulation cells. The new stellar census from M33 provides a critical benchmark for these simulations, giving astrophysicists real-world data against which to test and refine their codes.
 
Without high-performance computing, tracking the intricate interplay between massive stars and their gaseous surroundings across an entire galaxy, from cold molecular clouds to tenuous atomic hydrogen, would be unthinkable. Supercomputers enable researchers to explore how stellar winds, supernova blasts, and runaway stars shape the evolution of galaxies over billions of years, bridging the gap between theoretical physics and observable astrophysical phenomena.

Refining the Future of Galaxy Modeling

The realization that many stars meet their end far from dense clouds is reshaping our view of galactic evolution. This new understanding challenges long-held beliefs about where energy and momentum are distributed throughout galaxies, alters predictions for galactic winds and the spread of elements, and drives simulation models to include more accurate feedback mechanisms. As new data from ALMA and future telescopes like the Next Generation Very Large Array become available, astronomers will continue to refine their insights with supercomputers playing a critical role in making sense of it all.
 
In this era of astronomical breakthroughs, supercomputing is more than just a tool for simulating the cosmos; it is a key to understanding our own cosmic origins. By combining detailed observations with immense computational power, scientists are piecing together the life cycles of stars and, through them, the evolution of galaxies. This blend of data and simulation marks a pivotal step forward in humanity’s journey to understand the universe.