SYSTEMS
InfiniCon Systems Demonstrates 10Gbps InfiniBand Cluster
InfiniCon Systems, the premier provider of shared I/O and switching solutions for next-generation server networks, is demonstrating a 16-node InfiniBand cluster running a high-performance computing (HPC) application based on the award-winning NAMD code at SC2003 (SuperComputing 2003), November 17-20 in Phoenix, Arizona.

Winner of a 2002 Gordon Bell Award at SC2002, NAMD is a molecular dynamics code that renders an atom-by-atom blueprint of large biomolecular systems. Developed at the University of Illinois, NAMD is used at universities and research laboratories worldwide, enabling scientists to harness the nation's fastest supercomputers to decipher the tiniest components of living cells.

"10Gbps networking, via InfiniBand, is being applied today to enable much more powerful and scalable computing environments," stated Todd Matters, InfiniCon chief technology officer. "Our NAMD demonstration at SC2003 underscores InfiniCon's leadership in the HPC market, established by the readiness of our InfinIO product suite to support large-scale, real-world application deployments."

The NAMD HPC demonstration (Booth #2228) consists of InfiniCon's InfinIO 3000 Switch configured to support a 10Gbps, full bisection bandwidth fabric for 16 Opteron-based, clustered servers from Aspen Systems. The cluster uses the industry-standard Message Passing Interface (MPI) for server-to-server communications; a minimal MPI sketch appears at the end of this article.

SC2003 attendees will be able to see a real-time graphical simulation of a glycerol molecule entering a cell through a membrane protein, and to view data comparing the InfiniBand-based InfinIO 3000 fabric against performance and scaling results from alternative interconnect technologies such as Gigabit Ethernet.

"Attaining maximum performance for small simulations such as this isolated glycerol channel is a critical challenge for the continuing development of interactive molecular dynamics simulation methods by our group," stated Klaus Schulten, professor of physics and director of the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign.

Satisfying HPC Requirements for Performance and Scale

Featuring low latency, high bandwidth, and remote direct memory access (RDMA) transport technology, which lets servers move data directly into and out of one another's memory without taxing the remote host CPU, the InfiniBand Architecture is exceptionally well suited to HPC application requirements. Its design attributes eliminate the bottlenecks of traditional server networking and equip end users to build powerful distributed clusters that leverage advances in processor technology and low-cost, commodity hardware.

InfiniCon's InfinIO 3000 Switch Series packs thirty-two 10Gbps InfiniBand ports into 1U of rack space, making it the densest switching solution in the industry and the leading platform to enable an enhanced model of HPC clustering. The InfinIO 3000 was recently selected by Fujitsu as the interconnect for the fastest Linux-based cluster in the world, a 512-node fabric that will deliver a peak performance of 12.4 teraflops (trillion floating-point operations per second).
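(For scale, 12.4 trillion floating-point operations per second spread across 512 nodes works out to a theoretical peak of roughly 24 billion operations per second, or about 24 gigaflops, per node.)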
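To make the MPI reference above concrete, the following is a minimal ping-pong sketch of the kind commonly used to compare interconnect latency, the sort of measurement behind the booth's comparison against Gigabit Ethernet. It is illustrative only: it is not NAMD or InfiniCon code (NAMD itself is built on the Charm++ runtime, which can run atop MPI), and the message size and iteration count are arbitrary choices for the example.

/*
 * Minimal MPI ping-pong sketch (illustrative only; not InfiniCon or
 * NAMD code).  Two ranks bounce a message back and forth and report
 * the average one-way time, the kind of server-to-server MPI traffic
 * described in the article.
 *
 * Build and run, assuming a typical MPI installation:
 *   mpicc pingpong.c -o pingpong
 *   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES  (64 * 1024)  /* arbitrary message size for the sketch */
#define ITERATIONS 1000

int main(int argc, char **argv)
{
    int rank, size, i;
    char *buf;
    double start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with at least 2 ranks.\n");
        MPI_Finalize();
        return EXIT_FAILURE;
    }

    buf = malloc(MSG_BYTES);

    MPI_Barrier(MPI_COMM_WORLD);    /* start both ranks together */
    start = MPI_Wtime();

    for (i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* rank 1 echoes every message straight back */
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("avg one-way time for %d-byte messages: %.1f us\n",
               MSG_BYTES, elapsed / (2.0 * ITERATIONS) * 1e6);

    free(buf);
    MPI_Finalize();
    return EXIT_SUCCESS;
}

Because MPI abstracts the transport, the same program runs unchanged over Gigabit Ethernet or InfiniBand; the MPI library's interconnect support determines the latency and bandwidth actually observed, which is exactly what demonstrations like the one described above set out to compare.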