Joint Industry and Government Initiative to Demonstrate Long Distance InfiniBand
Cisco Systems, Intel Corporation, Lawrence Livermore National Laboratory, Microsoft, Naval Research Laboratory, Obsidian Research, the OpenIB Alliance and Qwest Communications today announced they are demonstrating extended computing resources using InfiniBand technology as part of SCinet at SC05. Sponsored by the IEEE and ACM, SC05 is the premier international conference on high performance computing, networking and storage. These leading telecommunications and technology organizations are jointly providing the equipment, software and applications for industry and research partners to demonstrate the value of high-performance, low-latency direct access networking at 10 gigabits per second over a long distance infrastructure.

InfiniBand is a high performance, switched fabric interconnect standard for servers, and OpenIB is an industry alliance that supplies open source InfiniBand software. Both are quickly becoming the preferred standards in high performance computing, grid and enterprise data centers. The demonstration will link servers, clusters, storage systems, switches and optical service platforms over circuits provided by Qwest, originating at the Intel data center in DuPont, Wash., and extending over 50 miles to the Washington State Convention Center via the Pacific Wave NorthWest GigaPop at the University of Washington.

During the conference, several demonstrations will highlight some of the possible applications for long distance InfiniBand:

-- Remote data center replication
-- High performance interfaces to the WAN for InfiniBand-based clusters and supercomputers
-- Grid computing
-- High performance media streaming
-- Campus area InfiniBand: aggregating departmental clusters into super-clusters
-- Campus and metro InfiniBand storage

At each endpoint, InfiniBand over optical (either DWDM or SONET OC-192) is converted by Longbow XRs from Obsidian Research.
The Longbow enables globally distributed InfiniBand fabrics to seamlessly cross-connect by encapsulating 4X InfiniBand over OC-192c SONET, ATM or 10GbE WANs at full InfiniBand data rates. The conversion is fully transparent to the InfiniBand fabric and is interoperable with OpenIB's software stack and subnet manager. The Naval Research Laboratory (NRL) initiated and supported early development of this capability.

"Microsoft's keynote demonstration at Supercomputing 2005 showcased the improved productivity for scientists made possible by seamless access from the workstation to structured data stores, personal desk-side clusters for interactive analysis and large heterogeneous pools of computing resources for detailed studies," said Kyril Faenov, director of high performance computing, Microsoft Corp. "The high-bandwidth connectivity to Intel's DuPont location allowed us to seamlessly incorporate a 256-core Intel Xeon cluster running Windows Compute Cluster Server 2003 into the mix of computing resources."

"The goal of this demonstration is to show that InfiniBand and the OpenIB software can support advanced simulation, computing and visualization across wide area networks," said Bill Boas, vice-chair of OpenIB and a computer scientist at Lawrence Livermore National Laboratory. "Our researchers need such capabilities to enable the range of simulations they perform for our stockpile stewardship mission for the National Nuclear Security Administration of the Department of Energy."

"InfiniBand is an industry standard that is currently deployed worldwide, and this demonstration highlights the benefits that are driving so many organizations to make InfiniBand their interconnect of choice," noted Jim Pappas, director of technology initiatives, Server Platforms Group, Intel Corporation. "Companies that need high performance, low latency, enterprise-level communication, particularly over significant distances, will find this demonstration especially relevant."
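The claim that OC-192c can carry a 4X InfiniBand link "at full InfiniBand data rates" can be sanity-checked with simple arithmetic. The per-lane signaling rate, 8b/10b line coding and OC-192c payload figure below are standard values assumed for illustration; they are not taken from the announcement:

```python
# Back-of-the-envelope check that an OC-192c circuit can carry a
# 4X InfiniBand link at its full data rate. All figures are
# standard assumed values, not taken from the announcement.
lanes = 4
lane_signal_gbps = 2.5                           # SDR InfiniBand signaling per lane
ib_data_gbps = lanes * lane_signal_gbps * 8 / 10  # 8b/10b coding: 10 Gbps -> 8.0 Gbps

oc192c_payload_gbps = 9.58                       # approximate OC-192c SONET payload
print(ib_data_gbps, ib_data_gbps <= oc192c_payload_gbps)  # 8.0 True
```

Since the 8 Gbps of InfiniBand payload fits comfortably inside the roughly 9.58 Gbps OC-192c envelope, the encapsulation need not throttle the fabric.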
Cisco Systems has contributed the Cisco ONS 15454 SONET Multiservice Provisioning Platform (MSPP) chassis with Cisco ONS 15454-OC192-LR2 line cards and the Cisco ONS 15454 DWDM Multiservice Transport Platform (MSTP). Intel's contribution includes use of 128 nodes (512 cores) from the Intel Dual-Core HPC Cluster physically located in DuPont, Wash. Based on off-the-shelf technologies, including the next-generation dual-core Intel Xeon processor and an InfiniBand interconnect, the cluster represents a new generation of systems that rapidly increase performance while holding steady or reducing requirements for power, heat and floor space. Industry collaborators and end users can access the machine through the Intel Remote Access Service and use it to test-drive their codes and accelerate their move to Intel multi-core computing. Alone, this cluster delivers a theoretical peak performance of 3.2 teraflops. NRL and Obsidian Research donated the Longbow XR InfiniBand range extenders, and Qwest provided the optical fiber infrastructure.
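The quoted 3.2-teraflop peak can be cross-checked by dividing it across the cluster's 512 cores; the clock speed and FLOPs-per-cycle figures in the comments are assumptions about that processor generation, not figures from the release:

```python
# Implied per-core rate for the quoted 3.2 TFLOPS peak over 512 cores.
cores = 512
quoted_peak_gflops = 3200          # 3.2 teraflops, as stated in the release
gflops_per_core = quoted_peak_gflops / cores
# 6.25 GFLOPS/core is consistent with a roughly 3.1 GHz Xeon of that
# era retiring 2 double-precision FLOPs per cycle (assumed, not stated).
print(gflops_per_core)  # 6.25
```

The quoted aggregate figure is therefore plausible for 512 cores of dual-core Xeon hardware of that period.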