Myricom Demonstrates Low-Latency 10-Gigabit Ethernet

SPECIAL COVERAGE FROM ISC2006 -- New Software Achieves HPC-Calibre Performance With Standard Ethernet Switches; Sets New Benchmark for Open Solutions -- At the International Supercomputer Conference in Germany this week, Myricom is introducing new message-passing software for its Myri-10G products, delivering 10-Gigabit Ethernet latencies formerly associated only with specialty High-Performance Computing (HPC) interconnects.

Myricom has extended its Myrinet Express (MX) software, already widely used in HPC clusters interconnected with Myrinet, to operate also over 10-Gigabit Ethernet. "MX over Ethernet" works with Myricom's dual-protocol Myri-10G network-interface cards (NICs) and conventional 10-Gigabit Ethernet switches to achieve latencies 5 to 10 times lower than with TCP/IP over Ethernet. MX/Ethernet delivers 2.4 µs latency at the application level, 1.2 GigaBytes/s (9.6 Gigabits/s) of throughput, and very low host-CPU utilization. These performance metrics are nearly on par with those achieved by MX over Myrinet.

MX/Ethernet extends the power of HPC beyond clusters and grids to enterprise applications already employing Ethernet switching. Setting a new standard in open Ethernet networking, MX/Ethernet is "plug-and-play," transparent to Ethernet switch makers, less expensive than proprietary HPC solutions, and applicable both to HPC and to enterprises. Besides latencies 5 to 10 times lower than TCP/IP over Ethernet, MX/Ethernet keeps host-CPU utilization well below 1%, e.g., 1 µs of host-CPU time to transfer one MegaByte in 833 µs.

"Networking experts have known for ages that there is no technical reason why Ethernet can't perform better," says Dr. Chuck Seitz, founder & CEO of Myricom. "What we're doing with MX/Ethernet is creating an open solution that brings the best of HPC to Ethernet with no compromises in terms of standards, interoperability, or software compatibility.
One can think of this convergence as 'HPC for everyone.'"

How It Works

Myricom's Myri-10G solutions introduced a convergence, at 10-Gigabit/s data rates, of Myrinet, the most successful specialty network for HPC applications, and mainstream Ethernet. Myri-10G created an open, flexible new standard for HPC interconnects and high-end enterprise networking. Dual-protocol Myri-10G NICs initially achieved optimal performance running MX software using the Myrinet protocol through Myri-10G switches. MX's kernel-bypass techniques achieve low latency and low host-CPU overhead by allowing application programs to communicate directly with firmware in the programmable Myri-10G NICs.

Now, the availability of MX/Ethernet extends MX's techniques to standard 10-Gigabit Ethernet switching, so OEMs and cluster integrators can achieve HPC performance with purely open technology. Myricom will make the MX/Ethernet protocols fully open and accessible, just as it did with the earlier Myrinet protocols and source code.

MX/Ethernet with Myri-10G NICs uses 10-Gigabit Ethernet as a layer-2 network, with an MX EtherType identifying MX packets. The EtherType, part of the Ethernet standards since the earliest days, identifies the protocol of an Ethernet frame. For example, there are EtherTypes for the Internet Protocol (IP), the Address Resolution Protocol (ARP), and AppleTalk, and all of these protocols can be carried concurrently on the same Ethernet network. Ethernet switches normally ignore the EtherType. Myri-10G NICs can carry TCP/IP and other traffic alongside MX traffic, but achieve the best performance when they bypass TCP/IP.

Early MX/Ethernet trials were conducted using a Fulcrum Microsystems FM2224 24-port 10-Gigabit Ethernet switch and a Fujitsu XG700 12-port 10-Gigabit Ethernet switch. Both of these layer-2 switches have relatively low latency, and both delivered user-level latency and host-CPU utilization comparable with pure Myrinet configurations.
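The frame layout behind this layer-2 approach can be sketched in a few lines. The actual MX EtherType value is not given in the text, so the sketch below uses 0x88B5, the IEEE-reserved "local experimental" EtherType, purely as a stand-in; the point is that the protocol identifier sits in the standard 2-byte field after the MAC addresses, which switches forward without inspecting.

```python
import struct

# Hypothetical stand-in value: 0x88B5 is the IEEE "local experimental"
# EtherType; the real MX EtherType is not stated in the article.
MX_ETHERTYPE = 0x88B5

def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    """Assemble an Ethernet frame: dst MAC (6) + src MAC (6) + EtherType (2) + payload."""
    header = struct.pack("!6s6sH", dst_mac, src_mac, MX_ETHERTYPE)
    return header + payload

frame = build_frame(b"\x01\x02\x03\x04\x05\x06",
                    b"\xaa\xbb\xcc\xdd\xee\xff",
                    b"mx-message")

# A layer-2 switch forwards on the destination MAC and ignores the
# EtherType, which is why MX, IP, and ARP frames can share one network
# with no switch changes.
ethertype = struct.unpack("!H", frame[12:14])[0]
print(hex(ethertype))  # 0x88b5
```

Demultiplexing on the receiving NIC is the mirror image: read bytes 12-13, and hand MX frames to the kernel-bypass path while IP frames go to the ordinary TCP/IP stack.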
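The host-CPU-utilization claim quoted earlier (1 µs of host-CPU time to transfer one MegaByte in 833 µs) is easy to check with back-of-the-envelope arithmetic; all input figures below come from the text, and only the arithmetic is ours.

```python
# Figures from the press release (decimal MegaByte, matching "1.2 GigaBytes/s"):
MB = 1_000_000          # bytes transferred
transfer_time_us = 833  # wall-clock time for the transfer, in microseconds
cpu_time_us = 1         # host-CPU time consumed, in microseconds

throughput_GBps = MB / (transfer_time_us * 1e-6) / 1e9
cpu_utilization = cpu_time_us / transfer_time_us

print(f"throughput ≈ {throughput_GBps:.2f} GB/s")       # throughput ≈ 1.20 GB/s
print(f"host-CPU utilization ≈ {cpu_utilization:.2%}")  # host-CPU utilization ≈ 0.12%
```

Both results are consistent with the article's headline numbers: 1 MB in 833 µs is the quoted 1.2 GB/s, and 1 µs of CPU time per transfer is roughly 0.12% utilization, comfortably under the stated 1%.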
These solutions will be limited to smaller clusters that can be served by a single 10-Gigabit Ethernet switch, because of the performance losses incurred when larger networks are built by connecting multiple Ethernet switches. Inasmuch as there are no high-port-count, low-latency, full-bisection 10-Gigabit Ethernet switches on the market today, MX/Myrinet with 10-Gigabit Myrinet switches will continue to be preferred for large clusters because of the economy and scalability of Myrinet switching.

"MX/Ethernet provides strong new evidence that 10-Gigabit Ethernet will become the interconnect technology of choice for HPC clusters, initially for small clusters but, as 10-Gigabit Ethernet switch technology advances, for larger clusters as well," says Seitz. "Gigabit Ethernet dominates HPC clustering and the TOP500 Supercomputer list today, but is too slow for high-end cluster hosts with multiple multi-core processors. Both in HPC and in enterprise networking, the MX/Ethernet innovation will enable 10-Gigabit Ethernet to gain market share rapidly over specialty networks. I'd like to think Bob Metcalfe would be proud."

Myricom at ISC2006

Myricom's ISC2006 booth (C19-22) features a cluster of eight dual 3 GHz dual-core Intel Xeon (Woodcrest) hosts, 32 processor cores in total, connected with Myri-10G NICs to both a 128-port 10-Gigabit Myrinet switch and a Fulcrum Microsystems FM2224 10-Gigabit Ethernet switch. The cluster can be booted into either Linux or Microsoft Windows Compute Cluster Server 2003. With an Rpeak of 384 Gflops and an Rmax of ~300 Gflops, this cluster is as fast as those that qualified for the TOP500 Supercomputer list just a few years ago. The cluster was provided by Myricom's Swiss cluster integrator, Dalco AG, a highly experienced supplier of Myrinet clusters used in many applications, including the design of Formula-1 race cars. Visitors to the Myricom booth will be able to observe benchmarks and applications in both MX/Myrinet and MX/Ethernet modes.
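The quoted Rpeak of 384 Gflops follows from the booth cluster's configuration. The host count, socket count, core count, and clock rate are all from the text; the 4 flops-per-cycle figure is our assumption, typical for SSE floating-point units of that processor generation.

```python
# Cluster configuration from the article:
hosts = 8
sockets_per_host = 2   # "dual ... Intel Xeon" hosts
cores_per_socket = 2   # dual-core Woodcrest
clock_ghz = 3.0

# Assumption (not stated in the article): double-precision flops per
# core per cycle for SSE-era Xeons.
flops_per_cycle = 4

cores = hosts * sockets_per_host * cores_per_socket
rpeak_gflops = cores * clock_ghz * flops_per_cycle
print(cores, rpeak_gflops)  # 32 384.0
```

Under that assumption the arithmetic reproduces both figures in the text: 32 processor cores and a theoretical peak of 384 Gflops, with the ~300 Gflops Rmax implying roughly 78% Linpack efficiency.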