Mellanox Technologies Simplifies 10Gb/s High Performance Computing

Mellanox Technologies Ltd., the leader in InfiniBand solutions, today announced availability of the HPC (High Performance Computing) Gold Collection to facilitate rapid installation and deployment of 10Gb/s high performance computing clusters. HPC Gold includes the key InfiniBand software components needed to build large-scale compute clusters in a single, flexible, easy-to-install package. Included is a snapshot of the latest drivers and management tools from the OpenIB.org project and a choice of MPI (Message Passing Interface) implementations optimized for 10Gb/s, 30Gb/s, and future InfiniBand networks. The software is integrated, tested, and validated together on mainstream 32- and 64-bit platforms, including Intel PCI Express-enabled systems, Intel® Extended Memory 64 Technology, Itanium, and AMD Opteron. Together with Mellanox HCA silicon devices, HPC Gold offers a turnkey solution for OEMs, integrators, motherboard manufacturers, and software vendors, enabling affordable, industry-standard clustering solutions with 10Gb/s and 30Gb/s InfiniBand equipment.

"Building clusters often requires significant tuning and configuration effort with the varied mix of protocols, devices, libraries, and new 64-bit architectures coming to market," said Michael Kagan, Vice President of Architecture for Mellanox Technologies, Ltd. "InfiniBand, being an industry standard interconnect, significantly reduces this complexity with native 64-bit support, in-band management capabilities, and the ability to simultaneously support high performance data traffic for MPI, storage, and TCP/IP. We are pleased to offer the HPC Gold Collection, which encapsulates these capabilities in a single, easy-to-install package."

HPC Gold offers programming and performance features attractive to both commercial and open source application developers. The package eases support and development of cluster computing applications by offering a choice of integrated, performance-tuned programming tools based on the Message Passing Interface (MPI) standard, the most commonly used protocol in cluster and supercomputer systems. Included are optimized MPI implementations from two popular open source projects, from NCSA (the National Center for Supercomputing Applications) and Ohio State University. Both MPI stacks offer excellent bandwidth and latency and fully exploit the performance benefits of InfiniBand. The NCSA package offers a rich set of development tools, including the ability to "compile once, run anywhere," eliminating the need for developers to re-compile for every platform. HPC Gold also supports tools from third-party vendors.

Furthermore, HPC Gold enables Grid computing by simplifying the process of connecting InfiniBand clusters to the Grid, i.e., building a "Cluster of Clusters" (CoC). NCSA's MPI is Grid-enabled, with the ability to spawn a single MPI job across geographically distributed clusters. Because multiple communication devices can be loaded at runtime, a high-performance InfiniBand network can be used for intra-cluster communication while other protocols are used, transparently, for inter-cluster communication.

"NCSA has years of experience using our MPI protocol with a broad range of scientific applications and production supercomputing systems," said Rob Pennington, Interim Director of NCSA.
"The performance we see with InfiniBand is exciting, and clearly the HPC Gold Collection with NCSA's MPI, 10Gb/s and 30Gb/s connectivity, PCI Express local I/O, and broad 64-bit platform support will help users get the most performance out of their clusters as possible." HPC Gold is available now from Mellanox and can be downloaded free of charge from www.mellanox.com.