Mellanox InfiniBand Delivers World Record Message Rate Performance for High-Performance Applications

MPI Performance of 90 Million Messages per Second Provides Highest Performance and Scalability for Compute-Demanding CPU/GPU Systems

Mellanox Technologies has announced that the Company’s ConnectX-2 InfiniBand adapters and IS5000 InfiniBand switches have demonstrated world-record MPI performance for node-to-node communications. Recent cluster benchmarking performed at the Mellanox performance optimization lab highlights an MPI message rate of nearly 90 million messages per second for node-to-node communications, more than three times that of comparable InfiniBand solutions. Further benchmarks showed that Mellanox end-to-end InfiniBand connectivity provided applications with linearly scalable message rates as nodes and processes were added.
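Message-rate microbenchmarks typically have many ranks blast small messages between a pair of nodes and divide the total count by the elapsed time. A minimal sketch of that arithmetic, and of the per-message time budget implied by the cited figure (the helper function and parameters here are illustrative, not from Mellanox's published test setup):

```python
def message_rate(total_messages, elapsed_seconds):
    """Aggregate messages per second, as reported by
    message-rate microbenchmarks (total sent / wall time)."""
    return total_messages / elapsed_seconds

# The press release cites nearly 90 million messages/s node-to-node.
cited_rate = 90e6

# Implied aggregate budget per message, in nanoseconds.
ns_per_message = 1e9 / cited_rate  # about 11.1 ns across all ranks
```

At roughly 11 ns of aggregate budget per message, sustaining such a rate requires the transport work to be offloaded from the host CPUs, which is the point the announcement emphasizes.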

“Delivering the highest node-to-node MPI message rate, coupled with complete transport offload and MPI accelerations such as MPI collectives offload, enables HPC users to build balanced, very efficient, bottleneck-free CPU/GPU systems,” said Gilad Shainer, senior director of HPC and technical computing at Mellanox Technologies. “Mellanox’s high-performance interconnect solutions are designed to support the growing needs of scientists and researchers worldwide with higher application performance, faster parallel communications and the highest scalable message rate.”

Mellanox’s end-to-end InfiniBand connectivity, consisting of the ConnectX-2 line of I/O adapter products, cables and the comprehensive IS5000 family of fixed and modular switches, delivers industry-leading performance, efficiency and economies of scale for the best return on investment in performance interconnects. Mellanox provides its worldwide customers with advanced, highest-performing, end-to-end networking solutions for the world’s most compute-demanding applications.