Mellanox FDR 56Gb/s InfiniBand Interconnect Solutions in the TOP500 Deliver Unrivaled ROI

Mellanox Accelerates Half of the World’s Petaflops Systems; Delivers Scalable Networking for Next Generation Supercomputers

Mellanox Technologies has maintained its position as the leading global interconnect solution provider for the TOP500 list of supercomputers. For the first time, the November 2011 TOP500 list includes FDR 56Gb/s InfiniBand-based systems, all based on Mellanox interconnect solutions. Mellanox FDR InfiniBand-connected systems demonstrate the highest x86 clustering efficiency, delivering the highest performance per node. According to the list, InfiniBand-connected systems grew at a CAGR of 15 percent from 2008 to 2011, connecting and delivering leading performance and efficiency to 210 supercomputers, closing the gap with the 224 Ethernet-based clusters on the list. Mellanox’s scalable interconnect solutions accelerate all five of the InfiniBand-based Petascale systems in the TOP10 and power the most power-efficient Petascale system, which is 1.7X more efficient than the average of the Petascale systems combined. The advanced offloads and accelerations within Mellanox InfiniBand solutions enable the most performance-efficient system on the TOP500 list, at nearly 96 percent system and CPU efficiency.

From November 2010 to November 2011, the total number of InfiniBand-connected CPU cores on the TOP500 list grew 24 percent and the amount of InfiniBand-based system performance grew 44 percent. This growth highlights the surging demand for InfiniBand as a way to maximize computing resources, productivity and scalable performance in the world’s fastest computer systems.

Mellanox ConnectX InfiniBand adapters and switch systems optimize server and storage performance and provide the scalable, low-latency, and power-efficient interconnect for the world’s fastest supercomputers, representing 50 percent of the Petascale systems and 50 percent of the TOP10 (five systems). InfiniBand connects the majority of the TOP100 with 55 percent (55 systems), the TOP200 with 59.5 percent (119 systems), the TOP300 with 50.7 percent (152 systems), and the TOP400 with 46.8 percent (187 systems) versus Ethernet or any other technology. Mellanox end-to-end FDR 56Gb/s InfiniBand solutions deliver leading performance and utilization and provide users with the best return-on-investment for their high-performance computing server and storage infrastructure.

“Mellanox InfiniBand interconnect solutions continue to demonstrate leading performance, efficiency, scalability and reliability for the world’s fastest supercomputer systems,” said Eyal Waldman, president, chairman and CEO of Mellanox Technologies. “With over half of the world’s Petaflop systems, as well as the top three most efficient systems on the list, Mellanox FDR 56Gb/s InfiniBand and 10/40GbE interconnect solutions with PCI Express 3.0 provide the best return-on-investment with leading system efficiency without sacrificing performance.”

Highlights of InfiniBand usage on the November 2011 TOP500 list include:

  • Mellanox InfiniBand provides the highest system utilization on the TOP500, up to 96 percent, and connects the top three and eight of the top ten most efficient systems.
  • InfiniBand is the most used interconnect in the TOP100, TOP200, TOP300 and TOP400: 55 percent of the TOP100, 59.5 percent of the TOP200, 50.7 percent of the TOP300 and 46.8 percent of the TOP400.
  • InfiniBand connects half of the world’s most powerful Petaflop systems on the list.
  • InfiniBand connects 7X the number of Cray-based systems in the TOP500 and 3X the number of Cray-based systems in the TOP100.
  • Clusters continue to be the dominant system architecture with 82 percent of the TOP500 list.
  • Mellanox end-to-end scalable HPC InfiniBand solutions accelerate 92 percent of the GPU-based systems on the list.
  • Mellanox InfiniBand interconnect solutions present in the TOP500 are used by a diverse list of applications, from large-scale, high-performance computing to commercial technical computing and enterprise data centers.

Supporting Resources: