SYSTEMS
Max Planck Institute for Gravitational Physics Chooses Woven Systems
Woven’s Non-Blocking 10 Gigabit Ethernet Fabric Achieves Over 30 Teraflops of Performance With Unprecedented Efficiency for Ethernet in Large-Scale Clusters: Woven Systems today announced that the prestigious Max Planck Institute for Gravitational Physics (Albert Einstein Institute, Hannover, Germany) is using Woven’s EFX 1000 10 Gigabit Ethernet (10 GE) Fabric Switch and TRX 100 Ethernet Switch in a large HPC cluster to search for gravitational waves predicted by Albert Einstein’s General Theory of Relativity. The Woven Ethernet Fabric provides access to more than one petabyte of data supplied by a worldwide network of gravitational wave detectors; the data is distributed to the compute cluster nodes via the Woven all-Ethernet solution.

“Gravitational wave research is one of the most exciting fields of science. It will open a completely new window onto the universe, and it requires very large-scale and sophisticated computing technologies. Our research is pushing the state of the art in computational analysis, and Woven’s innovative technology gives us a higher-performing and more flexible 10 GE network than traditional HPC switch suppliers,” says Professor Bruce Allen, director of the Institute. “The price/performance and flexibility of the Woven 10 Gigabit Ethernet Fabric is unmatched by any other switching solution we could find. This allows us to get more computing cycles for our money. It also makes it easier to evolve and upgrade the system in the future, as our needs and hardware base change.”

During the Institute’s extensive acceptance testing, the cluster running over the Woven Ethernet Fabric achieved over 30 Teraflops of performance on the HPC Linpack benchmark, which places the system on par with the Top 50 of the www.Top500.org list from November 2007. Using Woven’s vSCALE Dynamic Congestion Avoidance capability in a non-blocking 10 Gigabit Ethernet Fabric, the cluster reached 64% of its theoretical peak performance. “The HPC Linpack experts we consulted tell us that they have never seen such high Gigabit Ethernet efficiencies on such a large cluster,” Professor Allen adds.

The Institute’s data center, located in Hannover, Germany, is used for research in the division of Observational Relativity and Cosmology. Its most active research area develops and implements data analysis algorithms that search for evidence of the gravitational waves predicted by Einstein’s General Theory of Relativity. The Institute is part of an international collaboration that shares data from the latest generation of sensitive detectors based in the USA (LIGO), Italy (VIRGO), and Germany (GEO). The Institute also helps operate the distributed computing project Einstein@Home, which searches for gravitational wave signals from pulsars.

Max Planck’s fully non-blocking 10 GE Woven Fabric consists of a single EFX 1000 10 Gigabit Ethernet Fabric Switch, configured with 144 10 GE ports, and 34 TRX 100 “Top-of-Rack” Ethernet Switches. Each 48-port TRX 100 provides Ethernet connectivity for individual servers with Intel Quad-Core processors: each server has a dedicated 1 Gbps Ethernet port on the TRX 100, which in turn has four separate 10 GE uplinks to the EFX 1000 at the core of Woven’s 10 Gigabit Ethernet Fabric. The EFX 1000 also provides Ethernet connectivity to a large storage system housing a petabyte of measurement data. Collectively the system has a storage capacity in excess of 1,100 Terabytes and more than 5,000 CPU cores.
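As a rough sanity check on the figures quoted above, the short Python sketch below works out the implied theoretical peak (Rpeak) from the reported 30 Teraflops sustained at 64% efficiency, and the top-of-rack oversubscription from the 48 x 1 Gbps server ports and 4 x 10 GE uplinks per TRX 100. Only numbers stated in this announcement are used; the 5,000-core count is approximate, and the per-core figure is an inference, not vendor-supplied data.

    # Back-of-the-envelope check of the figures quoted in the announcement.
    SUSTAINED_TFLOPS = 30.0   # Rmax reported for the HPC Linpack run
    EFFICIENCY = 0.64         # reported fraction of theoretical peak
    CORES = 5000              # "more than 5,000 CPU cores" (approximate)

    # Implied theoretical peak of the cluster and rough per-core peak.
    rpeak_tflops = SUSTAINED_TFLOPS / EFFICIENCY
    print(f"Implied Rpeak: ~{rpeak_tflops:.1f} TFlops")                  # ~46.9 TFlops
    print(f"Per-core peak: ~{rpeak_tflops * 1000 / CORES:.1f} GFlops")   # ~9.4 GFlops

    # Oversubscription at each TRX 100 "Top-of-Rack" switch:
    # 48 server-facing 1 Gbps ports versus 4 x 10 Gbps uplinks to the EFX 1000.
    downlink_gbps = 48 * 1
    uplink_gbps = 4 * 10
    print(f"Top-of-rack oversubscription: {downlink_gbps / uplink_gbps:.1f}:1")  # 1.2:1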
“The Max Planck Institute showcases Woven’s unique 10 Gigabit Ethernet Fabric technology, which is now part of this important research at the leading edge of astronomy and cosmology,” says Joe Ammirato, Woven’s vice president of marketing. “The EFX 1000 10 Gigabit Ethernet Fabric Switch was designed specifically for advanced projects like this, which require non-blocking 10 GE throughput with ultra-low latency and jitter on a large scale.” Please see Woven Systems this week at Interop Las Vegas in booth 637.