ACADEMIA
Stanford University Advances High-Performance Computing
Stanford Engineering School Speeds Communications for Research Efforts in Large-Scale Simulations and Visualization: Cisco today announced that Stanford University, a premier research and education institution, is able to conduct first-of-a-kind complex computer simulations at its High-Performance Computing Center (HPCC) using Cisco 7008 InfiniBand Server Fabric Switches as the platform for server and inter-switch connectivity. After stringent benchmarking to determine the overall price-performance of the top-ranked InfiniBand switches on the market, Stanford selected the Cisco switches based on superior reliability, high availability, and serviceability.
The new Stanford HPCC supports sponsored research efforts and credit-based courses within the School of Engineering, and has already become a leading center for large-scale simulations of computational fluid dynamics (CFD) and other engineering problems that require massively parallel computing resources.

"The goal was to evaluate and choose the best price-performance options for each key cluster component and establish a reproducible best practice for rapid deployment," said Steve Jones, founder of the HPCC and HPC manager for flow physics and computational engineering. "We were looking for an interconnect based on InfiniBand technology, but it wasn't just about finding the best hardware component. We wanted a complete solution including the message-passing layer -- a solid hardware and software combination."

The Cisco 7008 Server Fabric Switches support dual-speed InfiniBand 4X double data rate (DDR) and single data rate (SDR) interfaces that deliver 20 and 10 gigabits per second of bandwidth per port, respectively. The non-blocking cross-sectional bandwidth and low port-to-port latency enable the creation of high-performance server fabrics within large-scale clusters. The Cisco solution also integrates easily into the open, standards-based compute environment of the HPCC, which includes Linux-based servers and Rocks cluster management software. Additional benefits of the Cisco Server Fabric Switch solution include ease of installation and management, a clean driver and software stack that simplifies support efforts, and highly skilled Cisco engineers with an in-depth understanding of HPC paradigms.

The results have been very successful for the Stanford HPCC and its prestigious base of researchers. "By finding an optimal combination of foundational cluster technologies, we've been able to refine the art of cluster deployment and operation," said Jones. "The InfiniBand solution has helped us achieve very scalable results with CFD and other simulation codes. 
Just as important, we've had no failures requiring application restarts. When codes can take more than a week to run, it's imperative that we provide cluster solutions with the best possible sustained uptimes." The HPCC server clusters are now enabling first-of-a-kind simulations for the study of structural dynamics, contact problems, nonlinear aeroelasticity of fighter aircraft, fluid-structure interaction, underwater acoustics, inverse problems, and shape optimization. "Storage systems could also benefit from the high-speed, low-latency characteristics of an InfiniBand fabric," said Jones. "We plan to further explore InfiniBand switching solutions to continue to push cluster platforms beyond current capacities and capabilities."
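The 20 and 10 gigabit-per-second per-port figures quoted above follow directly from how InfiniBand link rates are composed. As a back-of-the-envelope sketch (the 2.5 Gbps per-lane SDR signaling rate and the 4-lane width of a "4X" link are standard InfiniBand values, not details taken from this announcement):

```python
# Sketch: how InfiniBand per-port bandwidth figures are composed.
# Assumes the standard InfiniBand lane parameters: a 1X SDR lane
# signals at 2.5 Gbps, a "4X" link aggregates 4 lanes, and DDR
# doubles the per-lane signaling rate.

LANE_RATE_SDR_GBPS = 2.5   # signaling rate of one SDR lane
LANES_4X = 4               # a 4X link bundles four lanes

def link_rate_gbps(lanes: int, data_rate_multiplier: int) -> float:
    """Raw signaling rate of an InfiniBand link, in Gbps."""
    return lanes * LANE_RATE_SDR_GBPS * data_rate_multiplier

sdr_4x = link_rate_gbps(LANES_4X, 1)  # single data rate
ddr_4x = link_rate_gbps(LANES_4X, 2)  # double data rate

print(f"4X SDR: {sdr_4x:.0f} Gbps per port")  # 10 Gbps
print(f"4X DDR: {ddr_4x:.0f} Gbps per port")  # 20 Gbps
```

Note that these are raw signaling rates; SDR and DDR links use 8b/10b encoding, so the effective data rate available to applications is about 20 percent lower.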