In a defining moment for the high-performance computing (HPC) and artificial intelligence (AI) landscape, NVIDIA and CoreWeave have announced an expanded collaboration to accelerate the construction of massive AI factories, purpose-built data centers optimized for large-scale AI workloads. This partnership marks a significant leap forward for the supercomputing community, combining cutting-edge hardware, software innovation, and strategic infrastructure expansion to meet the growing demand for AI compute resources.
At the heart of the announcement is a $2 billion investment by NVIDIA in CoreWeave’s Class A common stock, underscoring NVIDIA’s confidence in CoreWeave’s strategy and setting the stage for an ambitious build-out of more than 5 gigawatts of AI-optimized compute capacity by 2030. These facilities, often referred to as AI factories, are expected to become the backbone of next-generation AI research, training, and deployment, offering unprecedented access to accelerated computing for enterprises, startups, and scientific institutions alike.
This deepening partnership goes beyond financial backing. Under the expanded agreement, CoreWeave will adopt NVIDIA CPU and storage platforms and deploy multiple generations of NVIDIA accelerated computing architectures across its cloud infrastructure, including future innovations such as the Rubin AI platform, Vera CPUs, and advanced BlueField storage systems. CoreWeave's purpose-built software stack, including CoreWeave Mission Control and its reference architectures, will be jointly tested and validated to ensure seamless performance at scale.
For the supercomputing community, this represents more than a business transaction; it heralds the maturation of an ecosystem where dense GPU clusters, optimized interconnects, and advanced orchestration software come together to deliver supercomputing-class performance for AI workloads. These AI factories will support ultra-large neural network training, complex simulations, and inference tasks that push the limits of parallel processing and memory bandwidth: work that would be inconceivable without HPC-grade infrastructure underpinning the operations.
CoreWeave's CEO, Michael Intrator, captured this vision in his "The Year AI Gets to Work" blog post: the era of AI is no longer about possibility, but about making it operational at a global scale, powering real-world impact across industries and scientific fields. Intrator emphasized that AI has crossed a crucial threshold, with the central challenge shifting from asking what is possible to asking how to deliver it everywhere it is needed. This "working" phase requires infrastructure that can keep pace with the relentless rate of innovation, and that is exactly what the expanded collaboration with NVIDIA seeks to enable.
What makes this partnership especially noteworthy for HPC practitioners is the tight integration of evolving hardware platforms with cloud-native supercomputing architectures. CoreWeave has been among the first cloud providers to deploy NVIDIA’s advanced GPU platforms, such as the GB200 NVL72 systems, at scale, demonstrating that purpose-built AI infrastructure can rival traditional supercomputer installations in both performance and flexibility. These deployments exemplify how the modern supercomputing stack is increasingly GPU-centric, designed to support massive parallel workloads with efficiency and resilience.
Moreover, the collaboration underscores a broader industry trend: the convergence of HPC and AI infrastructure, where the traditional boundaries between scientific computing, enterprise AI, and cloud-native services continue to blur. The AI factories envisioned by NVIDIA and CoreWeave will serve not only core AI model training and inference but also data-intensive simulation tasks, real-time reasoning engines, and agentic AI: workloads that demand HPC-level compute, networking, and orchestration.
For the supercomputing community, this development is inspirational on multiple fronts. It validates the central role of accelerated computing architectures in driving the next wave of AI and scientific discovery. It illustrates how deep collaboration between hardware innovators and infrastructure builders can unlock new levels of performance and accessibility. And it signals that the age of supercomputers is expanding from traditional national-lab behemoths into a distributed ecosystem of cloud-native AI super-infrastructure that anyone with visionary applications can tap into.
As we enter the AI era, our ability to construct, expand, and broaden access to these AI factories will shape the years to come. The NVIDIA and CoreWeave partnership stands as a model for realizing these remarkable opportunities.