APPLICATIONS
NCSA, Intel and Mellanox to feature MPI over InfiniBand at SC2002
BALTIMORE--The National Center for Supercomputing Applications (NCSA), Intel, and Mellanox have collaborated to develop an open source MPI stack optimized for 10 gigabit-per-second (4X) InfiniBand networks. The stack will be used in an InfiniBand demonstration in the National Computational Science Alliance (Alliance)/NCSA booth at SC2002 (R1249). The demonstration will feature the open source Cactus Computational Toolkit, used in a wide range of scientific disciplines, and NAMD, an application used to simulate large biomolecular systems.

The demonstration will build on NCSA's previous work with applications that used the first InfiniBand solutions, the Message Passing Interface (MPI) protocol, the industry-standard protocol for HPC implementations, and the Virtual Machine Interface (VMI), NCSA software developed to support messaging across high-performance networks.

"Our work with Intel has offered us the opportunity to build high-performance clusters using industry-standard InfiniBand technology on an open source software base," said Rob Pennington, director of NCSA's Computing and Data Management Directorate. "Our demonstration of InfiniBand HPC clusters in action is important because it offers a glimpse of a new generation of high-performance computing interconnect."

The demonstration will use the NCSA/Alliance booth's clustered Linux systems running MPI applications and NCSA's VMI software. VMI is a middleware communication layer that supports MPI codes in a cluster environment and can support different interconnect technologies without requiring the application to be recompiled (a minimal code sketch illustrating this portability appears at the end of this article). Combined with VMI, the InfiniBand architecture provides a low-latency, high-performance interconnect infrastructure for HPC clusters.

The InfiniBand cluster demonstration is a product of ongoing work between NCSA and Intel to deliver InfiniBand architecture-based HPC solutions. NCSA has recently joined Intel's InfiniBand evaluation program to continue its work on InfiniBand fabric connectivity, including delivery of a 10 Gb/s InfiniBand cluster. Mellanox has provided 10 Gb/s InfiniHost HCA and InfiniScale switch hardware to the collaboration.

"Mellanox is extremely pleased with the great support that the NCSA is offering for InfiniBand solutions," said Michael Kagan, vice president of architecture at Mellanox Technologies. "This collaboration of world class system support from Intel, MPI software leadership from the NCSA, and InfiniHost silicon is proving to be a winning combination for introducing a generational step forward to the 10 Gb/s clustering that InfiniBand has to offer for the HPC market."

Demonstrations on the SC2002 exhibit floor will take place Monday, Nov. 18, from 7 to 9 p.m.; Tuesday, Nov. 19, and Wednesday, Nov. 20, from 10 a.m. to 6 p.m.; and Thursday, Nov. 21, from 10 a.m. to 4 p.m. Stop by the Alliance/NCSA booth for a demonstration schedule.
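
For readers less familiar with the programming model, the following is a minimal, hypothetical MPI program in C. It is not NCSA's demonstration code and does not depend on Cactus, NAMD, or VMI internals; it simply illustrates the point made above: because the application is written only against the standard MPI interface, a middleware layer such as VMI can carry its messages over InfiniBand or another fabric without the program being recompiled.

/* Minimal, illustrative MPI ping-pong between two ranks.
 * The code contains no interconnect-specific calls; the network
 * underneath (e.g. InfiniBand via a layer such as VMI) is chosen
 * by the MPI middleware, not by the application. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    char msg[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            /* Rank 0 sends a short message to rank 1. */
            snprintf(msg, sizeof msg, "hello from rank 0");
            MPI_Send(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Rank 1 receives and prints it. */
            MPI_Recv(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received: %s\n", msg);
        }
    }

    MPI_Finalize();
    return 0;
}

In a typical cluster environment, a program like this is compiled once with an MPI compiler wrapper such as mpicc and launched across nodes with mpirun; the choice of interconnect is handled entirely below the MPI layer.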