ISC 2005: PathScale's New InfiniPath Continues to Set Performance Records

PathScale, developer of innovative software and hardware solutions that accelerate the performance and efficiency of Linux clusters, has released new benchmark results showing that its new InfiniPath interconnect for InfiniBand dramatically outperforms competing interconnect solutions, delivering the lowest latency across a broad spectrum of cluster-specific benchmarks. The results were announced today at the International Supercomputer Conference 2005 in Heidelberg, Germany.

The InfiniPath HTX Adapter is a low-latency cluster interconnect for InfiniBand™ that plugs into standard HyperTransport technology-based HTX slots on AMD Opteron servers. Optimized for communications-sensitive applications, InfiniPath is the industry's lowest-latency Linux cluster interconnect for message-passing (MPI) and TCP/IP applications.

PathScale InfiniPath achieved an MPI latency of 1.32 microseconds (as measured by the standard MPI "ping-pong" benchmark), an n1/2 message size of 385 bytes, and a TCP/IP latency of 6.7 microseconds. These results are 50 percent to 200 percent better than those of the recently announced Mellanox and Myricom interconnect products. InfiniPath also produced industry-leading results on more comprehensive metrics that predict how real applications will perform.
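The n1/2 message size (half-power point) cited above is the message length at which delivered bandwidth reaches half of its peak, so a smaller n1/2 means the interconnect approaches full bandwidth with smaller messages. For readers who want to reproduce the kind of measurement quoted here, the following is a minimal MPI ping-pong latency sketch, assuming two ranks on two nodes and any standard MPI installation; the iteration count and zero-byte payload are illustrative choices, not PathScale's exact benchmark parameters.

    /*
     * Minimal MPI "ping-pong" latency sketch (illustrative, not
     * PathScale's benchmark harness). Run with two ranks, one per
     * node, e.g.: mpirun -np 2 -hostfile hosts ./pingpong
     * Half the measured round-trip time approximates one-way latency.
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int iters = 10000;
        char buf[1];                 /* zero-byte payload is sent */
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 0, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, 0, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("one-way latency: %.2f microseconds\n",
                   (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }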
"When evaluating interconnect performance for HPC applications, it is essential to go beyond the simplistic zero-byte latency and peak streaming bandwidth benchmarks," said Art Goldberg, COO of PathScale. "InfiniPath delivers the industry's best performance on simple MPI benchmarks and provides dramatically better results on more meaningful interconnect metrics such as the n1/2 message size (or half-power point), latency across a spectrum of message sizes, and latency across multiprocessor nodes. These benchmarks give better indications of real-world application performance. We challenge users to benchmark their own applications on an InfiniPath cluster and see what this breakthrough performance means to them."

PathScale InfiniPath uniquely exploits multiprocessor nodes and dual-core processors to deliver greater effective bandwidth as CPUs are added. Existing serial offload HCA designs cause messages to stack up when multiple processors try to access the adapter. By contrast, InfiniPath's messaging parallelization enables multiple processors or cores to send messages simultaneously, maintaining constant latency while dramatically improving small-message capacity, further reducing the n1/2 message size, and substantially increasing effective bandwidth (a sketch of this kind of multi-pair measurement appears at the end of this release).

"We compared the performance of PathScale's InfiniPath interconnect on a 16-node/32-CPU test run with VASP, a quantum mechanics application used frequently in our facility, and found that VASP running on InfiniPath was about 50 percent faster than on Myrinet," said Martin Cuma, Scientific Applications Programmer for the Center for High-Performance Computing at the University of Utah. "Standard benchmarks do not give an accurate picture of how well an interconnect will perform in a real-world environment. Performance improvements will vary across applications due to their parallelization strategies, but InfiniPath almost always delivers better performance than other interconnects when you scale to larger systems and run communications-intensive scientific codes. InfiniPath has proven to be faster and to scale better for our parallel applications than other cluster interconnect solutions that we tested."

PathScale InfiniPath Performance Results

PathScale has published a white paper with a technical analysis of several application benchmarks comparing the new InfiniPath interconnect with competitive interconnects. The white paper can be downloaded from www.pathscale.com/whitepapers.html

PathScale Customer Benchmark Center

PathScale has established a fully integrated InfiniPath cluster at its Customer Benchmark Center in Mountain View, California. Potential customers and ISVs are invited to remotely test their own MPI and TCP/IP applications and experience first-hand the performance advantages of the InfiniPath low-latency interconnect.

InfiniPath Availability

More than 25 leading Linux system vendors around the world have signed on to resell the InfiniPath HTX Adapter, including vendors in every major European market. The InfiniPath HTX Adapter card will ship in late June and can be ordered immediately from vendors participating in the PathScale FastPath reseller program, as described at www.pathscale.com/authorized_resellers.html
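As a rough illustration of the multi-pair behavior described in the release, the sketch below pairs each MPI rank on one node with a rank on a second node and times all ping-pong pairs concurrently; reporting each pair's latency shows whether timing holds steady as more cores drive the adapter at once. Rank layout, iteration count, and message size are illustrative assumptions, not PathScale's published test configuration.

    /*
     * Multi-pair ping-pong sketch (illustrative). Launch an even number
     * of ranks, half on each of two nodes, e.g.:
     *   mpirun -np 8 -hostfile hosts ./multipair
     * Rank r < N/2 pairs with rank r + N/2; all pairs run concurrently.
     * The sketch assumes at most 256 ranks.
     */
    #include <mpi.h>
    #include <stdio.h>

    #define ITERS 10000
    #define MSG   8                      /* small 8-byte payload */

    int main(int argc, char **argv)
    {
        char buf[MSG] = {0};
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        int half = nprocs / 2;
        int peer = (rank < half) ? rank + half : rank - half;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            if (rank < half) {
                MPI_Send(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else {
                MPI_Recv(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG, MPI_CHAR, peer, 0, MPI_COMM_WORLD);
            }
        }
        double lat = (MPI_Wtime() - t0) / (2.0 * ITERS) * 1e6;

        /* Collect each rank's one-way latency and print the sending pairs. */
        double lats[256];
        MPI_Gather(&lat, 1, MPI_DOUBLE, lats, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        if (rank == 0)
            for (int r = 0; r < half; r++)
                printf("pair %d: %.2f microseconds one-way\n", r, lats[r]);

        MPI_Finalize();
        return 0;
    }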