ACADEMIA
IU showcases innovative approach to networking at SC11 SCinet Research Sandbox
An experimental network was created to support testing by several universities during the SCinet Research Sandbox (SRS), part of this year's SC11 conference in Seattle, Washington. SRS let researchers assess experimental networking methods in a 100Gbps environment provided by SCinet, ESnet, and Internet2.
The first-of-its-kind production network was equipped with multi-vendor, OpenFlow-capable switches. IU's SRS entry, "The Data Superconductor: An HPC cloud using data-intensive scientific applications, Lustre-WAN, and OpenFlow over 100Gb Ethernet," used the Lustre file system and cutting-edge network infrastructure to address the challenges created by the exponential growth in the volume of digital scientific research data.
A complete cluster and file system operated at each end of the 2,300-mile 100Gbps link running between Indianapolis and Seattle. In a series of demonstrations, IU researchers achieved a peak throughput of 96Gbps for network benchmarks, 6.5Gbps using IOR (a standard file system benchmark), and 5.2Gbps with a mix of eight real-world application workflows.
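For context, IOR is an MPI-based parallel I/O benchmark; the sketch below is not IOR itself, only a minimal single-stream Python illustration of the kind of sequential-write throughput measurement such file system benchmarks report. The mount path and transfer size are assumptions chosen for illustration.

```python
import os
import time

# Assumed test location and size; IOR runs in parallel across many clients,
# while this is only a single-stream illustration of the measurement.
TEST_FILE = "/mnt/dc-wan/benchmark.tmp"
BLOCK = b"\0" * (4 * 1024 * 1024)      # 4 MiB per write call
TOTAL_BYTES = 8 * 1024**3              # 8 GiB written in total

def sequential_write_throughput() -> float:
    """Return sequential write throughput, in Gbps, for one stream."""
    written = 0
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        while written < TOTAL_BYTES:
            f.write(BLOCK)
            written += len(BLOCK)
        f.flush()
        os.fsync(f.fileno())           # include the time to reach the file servers
    elapsed = time.time() - start
    return written * 8 / elapsed / 1e9

if __name__ == "__main__":
    print(f"single-stream write: {sequential_write_throughput():.2f} Gbps")
    os.remove(TEST_FILE)
```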
As of press time (Nov. 22), this appears to be the fastest data transfer ever achieved with a 100Gbps network at a distance of thousands of miles.
"100 Gigabit per second networking combined with the capabilities of the Lustre file system could enable dramatic changes in data-intensive computing," said Stephen Simms, manager of the High Performance File Systems group at Indiana University. "Lustre's ability to support distributed applications, and the production availability of 100 gigabit networks connecting research universities in the US, will provide much needed and exciting new avenues to manage, analyze, and wrest knowledge from the digital data now being so rapidly produced."
US scientists need this capability to enhance scientific competitiveness and open new frontiers of digital discovery. The rapid acceleration of data growth presents obstacles for researchers who manage and transfer large data sets and participate in widely distributed collaborations.
IU's Data Superconductor is optimized for file system operations over the wide area network, and includes features for collaborating across administrative domains through multi-site workflows and for distributing data from instruments to compute resources.
Since IU's Data Superconductor is a Lustre-based, high-performance file system, it requires no special tools or software to transfer data. It also behaves as a standard POSIX-compliant file system, and features cross-domain authorization capabilities developed at Indiana University.
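Because the wide-area file system is mounted like any local POSIX file system, ordinary file operations are all an application needs, and no separate transfer tool is involved. The following is a minimal sketch of that idea; the mount point (/mnt/dc-wan) and directory layout are assumptions for illustration, not details from IU's demonstration.

```python
import shutil
from pathlib import Path

# Hypothetical mount point for the wide-area Lustre file system; the actual
# paths used in IU's demonstration are not documented in this article.
LUSTRE_WAN_MOUNT = Path("/mnt/dc-wan")

def publish_dataset(local_file: str, remote_subdir: str) -> Path:
    """Copy a local result file onto the shared Lustre-WAN mount.

    No special transfer software is needed: the wide-area file system is
    mounted like a local POSIX file system, so an ordinary copy suffices.
    """
    dest_dir = LUSTRE_WAN_MOUNT / remote_subdir
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(local_file).name
    shutil.copy2(local_file, dest)   # standard POSIX reads and writes underneath
    return dest

if __name__ == "__main__":
    print(publish_dataset("simulation_output.h5", "projects/demo"))
```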
Notes Robert Henschel, manager of the High Performance Applications group, "The beauty of dealing with data distribution at a file system level is its simplicity. With a centralized file system serving thousands of computational resources around the world, user data can be available everywhere, all of the time."
IU also demonstrated how high-performance applications can dynamically signal resource requests to the network.
"We used the Extensible Session Protocol (XSP) and OpenFlow to dynamically move one application's traffic between Seattle and Indianapolis from a congested path to one with unused capacity, vastly improving performance," said Matt Davy, director of InCNTRE and chief network architect for the IU GlobalNOC. "IU is dedicated to exploring and demonstrating these types of exciting new advancements in high performance networking—the Sandbox challenge was a great opportunity for us to showcase the work we are doing in these areas."
Internet2, a key collaborator on IU's SRS entry, contributed a 100GbE circuit between Indianapolis and Chicago, as well as the optical system that carries that traffic to Seattle at 100Gbps. In addition, Brocade contributed MLXe Ethernet routers equipped with 100GbE blades and a 15.36Tbps fabric for increased performance with less infrastructure and operational overhead. The 100GbE blades let IU aggregate multiple ports into a single logical link for greater bandwidth and reduced management. IBM provided a pair of G8264 switches with OpenFlow firmware.
IBM, Brocade, Ciena, DataDirect Networks, Whamcloud, and TU Dresden provided support for IU's SC11 demonstrations.