Woven Systems Ethernet switch powers world's fastest Ethernet cluster

Max Planck Institute’s ATLAS cluster becomes the world's fastest Ethernet cluster based on the unprecedented efficiency of Woven's vSCALE technology

Woven Systems today announced that the ATLAS compute cluster at the Max Planck Institute for Gravitational Physics in Hannover, Germany, achieved the top ranking among the 285 Gigabit Ethernet clusters in the most recent TOP500 list of supercomputer sites. The ATLAS cluster leverages the Dynamic Congestion Avoidance feature in Woven’s 10 Gigabit Ethernet (GE) fabric to reach a performance level of 32.8 teraflops, making it the fastest Ethernet cluster in the world today.

“What is remarkable about the ATLAS cluster is that we were able to take the lead very cost-effectively with a creative combination of more processors at lower clock rates and higher Ethernet switching efficiency,” explained Professor Bruce Allen, director of the Max Planck Institute for Gravitational Physics. “Woven’s 10 Gigabit Ethernet Fabric switch is able to deliver sustained performance at an impressive 64 percent of the theoretical peak. The HPC Linpack experts we consulted tell us that they have never seen such a high level of Ethernet efficiency on such a large cluster. Without the Woven switch, ATLAS would not be the world’s fastest Ethernet cluster. It’s that simple.”

The Institute attributes Woven’s unprecedented efficiency to the vSCALE ASIC’s Dynamic Congestion Avoidance capability, which continuously and intelligently balances traffic loads in real time across the available paths in the non-blocking 10 GE fabric. A cluster’s computational efficiency is critical to its performance, and that efficiency is determined by the network’s ability to handle inter-processor communications. Efficiency is calculated with the Linpack benchmark as the ratio of measured performance to the theoretical peak performance of all processors in the cluster. Of the 285 Gigabit Ethernet clusters on the TOP500 list, only six achieved a computational efficiency greater than 60 percent, and of those, ATLAS was the most powerful. Of the remaining Gigabit Ethernet clusters, 76 had efficiencies of 50 percent or less, 97 had efficiencies of 51-55 percent, and 106 had efficiencies of 56-60 percent.

The ATLAS Cluster

The new ATLAS compute cluster is located in Hannover, Germany, at the Max Planck Institute for Gravitational Physics, also known as the Albert Einstein Institute. The division of Observational Relativity and Cosmology, which built and operates the cluster, is part of an international collaborative effort to make direct detections of gravitational waves, first predicted by Einstein’s General Theory of Relativity in 1916. The cluster must run a multitude of complex data analysis algorithms that scrutinize past and present measurements for the patterns expected to result from passing waves. The four next-generation detectors deployed around the world generate over a terabyte of data daily, which is sent to all participating members for analysis.

The ATLAS cluster consists of 1342 compute nodes occupying 32 full racks. The use of Intel Xeon quad-core processors gives the cluster over 5,000 CPU cores. Each server node has its own dedicated Gigabit Ethernet connection to a Woven TRX 100 Ethernet switch, which in turn has four separate 10 GE uplinks to the EFX 1000 Ethernet Fabric Switch at the core of the Ethernet fabric.
Because this configuration is not over-subscribed, it assures non-blocking throughput performance for both inter-processor communications and storage access.

“We specifically designed vSCALE to create an Ethernet fabric that constantly performs at peak efficiency, and the Max Planck Institute has confirmed this in the world’s fastest Ethernet cluster,” said Joe Ammirato, Woven’s vice president of marketing. “Gravitational waves are seminal to Einstein’s General Theory of Relativity, which is at the core of modern cosmology. Woven is proud to play a role in the Institute’s search for the existence of these waves as the enabling interconnect technology for the ATLAS cluster.”
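For readers who want to check the efficiency figures quoted above, the short sketch below restates the arithmetic in Python: TOP500 computational efficiency is the measured Linpack result (Rmax) divided by the theoretical peak of all processors (Rpeak), so a 32.8-teraflop result at roughly 64 percent efficiency implies a theoretical peak of about 51 teraflops. The Rpeak value here is derived from the announcement's numbers, not taken from the TOP500 entry itself.

```python
# Minimal sketch of the Linpack/TOP500 efficiency arithmetic quoted above.
# Efficiency = Rmax / Rpeak, where Rmax is the measured Linpack result and
# Rpeak is the theoretical peak of all processors in the cluster.

def efficiency(rmax_tflops: float, rpeak_tflops: float) -> float:
    """Computational efficiency as a fraction of theoretical peak."""
    return rmax_tflops / rpeak_tflops

# Figures quoted in the announcement.
rmax = 32.8          # measured Linpack performance, in teraflops
quoted_eff = 0.64    # "64 percent of the theoretical peak"

# Working backwards, the implied theoretical peak (an estimate derived from
# the quoted numbers, not a published Rpeak figure): about 51 TFlop/s.
implied_rpeak = rmax / quoted_eff
print(f"Implied Rpeak: {implied_rpeak:.1f} TFlop/s")
print(f"Efficiency check: {efficiency(rmax, implied_rpeak):.0%}")
```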
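The announcement does not describe how the vSCALE ASIC's Dynamic Congestion Avoidance works internally, so the following is only a toy illustration of the general principle it names: steering traffic onto the least-loaded of several equal-cost fabric paths rather than pinning flows to paths statically. The path count, flow sizes, and selection rule are invented for illustration and should not be read as Woven's implementation.

```python
# Toy model of dynamic path balancing versus static per-flow assignment.
# Illustrates the general idea of congestion-aware load balancing only;
# it is not a description of the vSCALE ASIC's logic.
import random

NUM_PATHS = 4              # e.g. four 10 GE uplinks from an edge switch
NUM_FLOWS = 200            # hypothetical flow count
random.seed(1)

# Heavy-tailed flow sizes: a few large flows dominate, which is what makes
# load-blind static assignment collide badly in practice.
flow_sizes = [random.paretovariate(1.5) for _ in range(NUM_FLOWS)]

# Static assignment: each flow is pinned to a path chosen without regard
# to the load already on that path (as a hash-based scheme would do).
static_load = [0.0] * NUM_PATHS
for size in flow_sizes:
    static_load[random.randrange(NUM_PATHS)] += size

# Dynamic assignment: each new flow goes to the currently least-loaded path.
dynamic_load = [0.0] * NUM_PATHS
for size in flow_sizes:
    least = min(range(NUM_PATHS), key=lambda p: dynamic_load[p])
    dynamic_load[least] += size

def hot_path_ratio(loads):
    """Load on the busiest path relative to the average (1.0 is ideal)."""
    return max(loads) / (sum(loads) / len(loads))

print(f"static assignment, busiest path: {hot_path_ratio(static_load):.2f}x average")
print(f"dynamic balancing, busiest path: {hot_path_ratio(dynamic_load):.2f}x average")
```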