Lawrence Livermore National Laboratory Selects QLogic InfiniBand Networks for One of the World's Most Powerful Supercomputers

QLogic has announced that its Quad Data Rate (QDR) InfiniBand switches and adapters are delivering unprecedented levels of high performance computing (HPC) for unclassified research. Specifically, QLogic 12000 Series switches and 7300 Series adapters now provide the QDR InfiniBand infrastructure for four HPC clusters, code-named Sierra, Muir, Ansel and Titan, totaling more than 4,000 nodes and associated switch ports at the Lawrence Livermore National Laboratory (LLNL). LLNL's mission involves not only national security but also the application of advanced science and technology to the important issues of today.

Sierra is part of LLNL's Multi-programmatic and Institutional Computing (M&IC) program and supports "Grand Challenge" and other unclassified research efforts at LLNL. Grand Challenge projects are collaborative research efforts with academic institutions and other labs that push the envelope of "capability" computing, which dedicates powerful HPC systems to taking on the most difficult and complex problems.

"Our performance requirements escalate faster than we can address them year after year," said Brian Carnes, program director for M&IC at LLNL. "With these latest results, Sierra would rank just inside the top 25 of the world's most powerful supercomputers on the current TOP500 list. Sierra boosts Livermore Computing's total unclassified computing resources to more than 870 teraflops. Combined with some of the world's most talented simulation scientists, Sierra will enable Livermore Computing to further advance our leadership in science and technology."

"Sierra's performance and scalability outperform all other platforms for several of our critical simulation codes," said Matt Leininger, deputy for advanced technology projects at LLNL. "This translates into higher productivity, more cost-effective computing, and accelerated scientific discovery."

"Livermore Computing was able to double its HPC resources with an affordable, standards-based solution from QLogic and Dell," said Jesse Parker, vice president and general manager, Network Solutions Group, QLogic. "This is primarily due to the fact that unlike competitive offerings, QLogic QDR InfiniBand solutions are designed to deliver superior message rates and protect cluster investments by delivering increased application performance as computing resources are scaled across multiple dimensions."

Sierra, a Dell-based supercomputing system, is clocking in at record-breaking speeds of up to 261 teraflops (trillion floating-point operations per second), driving faster results in climate modeling and atmospheric simulations, supernova and planetary science, materials and strength modeling, laser and plasma simulations, and biology and life sciences. Each QLogic 12000 Series QDR InfiniBand switch in the Sierra cluster can move the data equivalent of more than 600 DVDs per second, and every QLogic 7300 Series QDR InfiniBand adapter can send and receive more than 25 million messages per second. This network performance, coupled with the power of the Dell computing nodes, enables the Sierra cluster to tackle a broader and more difficult set of research challenges.
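The "600 DVDs per second" figure can be sanity-checked with simple arithmetic. The sketch below is a back-of-envelope estimate under assumptions not stated in the release: a fully populated director-class switch with 864 QDR ports, QDR signaling of 40 Gbit/s per port (about 32 Gbit/s of payload after 8b/10b encoding), and a 4.7 GB single-layer DVD.

```python
# Back-of-envelope check of the switch throughput claim.
# Assumed (not from the release): 864 QDR ports per director switch,
# 32 Gbit/s effective data rate per port, 4.7 GB per single-layer DVD.

PORTS = 864                 # assumed port count, director-class switch
DATA_RATE_GBIT = 32         # QDR effective payload rate per port (Gbit/s)
DVD_BYTES = 4.7e9           # single-layer DVD capacity (bytes)

# Aggregate throughput across all ports, in bytes per second
aggregate_bytes_per_s = PORTS * DATA_RATE_GBIT / 8 * 1e9

# Express that throughput as DVDs moved per second
dvds_per_s = aggregate_bytes_per_s / DVD_BYTES
print(f"{dvds_per_s:.0f} DVDs per second")  # roughly 735 under these assumptions
```

Under these assumptions the aggregate works out to roughly 3.5 TB/s, or on the order of 700 DVDs per second, which is consistent with the "more than 600 DVDs per second" claim in the release.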

"Dell's Data Center Solutions group custom designs servers for many of the leading HPC sites such as Lawrence Livermore Labs with a strong focus on density, power and performance," said Andy Rhodes, director of marketing, Dell Data Center Solutions. "By working with fellow performance leaders like QLogic we are able to deliver HPC solutions such as Sierra that deliver superior results for our joint customers."

InfiniBand Networks Powered by QLogic

QLogic offers a comprehensive, end-to-end portfolio of InfiniBand networking products for HPC, including QDR host channel adapters; QDR multi-protocol director and edge switches; pass-through modules; and intuitive tools to install, operate, and maintain high-performance fabrics. Delivering the industry's highest message rate, lowest MPI latency and highest effective bandwidth, QLogic InfiniBand adapters enable MPI and TCP applications to scale to thousands of nodes with industry-leading price-performance. As among the most comprehensive and flexible HPC cluster interconnect fabric solutions on the market, QLogic QDR InfiniBand switches incorporate the industry's only management tools that let an administrator install and boot an InfiniBand fabric in minutes, helping to accommodate the growing demand for high performance computational clusters and grids.