Mellanox Delivers 300% I/O Bandwidth Advantage
Mellanox Technologies Ltd., the leader in InfiniBand technology, today announced that data centers and compute clusters can immediately gain a 300% I/O bandwidth increase by integrating Intel Extended Memory 64 Technology (Intel EM64T) and PCI Express servers into the InfiniBand I/O fabric. This dramatic increase over legacy PCI-X based machines is a direct result of the optimized interaction of Intel EM64T and PCI Express with the high bandwidth, low latency capabilities of InfiniBand, the only industry-standard 10Gb/s compute and storage interconnect with remote direct memory access (RDMA) and hardware transport. In addition to the 3X bandwidth improvement, using InfiniBand with PCI Express yields a measured 35% node-to-node latency reduction at the protocol level for implementations of the Message Passing Interface (MPI) commonly used by technical compute clusters.

"The impressive performance that Intel EM64T and PCI Express servers interconnected with InfiniBand generate will accelerate the migration from expensive multi-processor machines to scalable server clusters," said Kevin Deierling, vice president of product marketing for Mellanox Technologies. "These complementary technologies from Intel and Mellanox are essential to realizing the benefits of grid computing."

"No I/O technology exploits the benefits of Intel EM64T and PCI Express technology more fully than InfiniBand," said Jim Pappas, director of initiative marketing for Intel's Enterprise Platform Group. "The combination of these technologies, powered by the latest Intel Xeon processor-based platforms, is the key ingredient of a perfectly balanced server and cluster architecture that is quickly proliferating into data centers and high performance computing environments. Intel EM64T, PCI Express and InfiniBand technology are changing the shape of business and technical computing."
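For a sense of scale, the quoted 35% latency reduction works out as follows. The baseline value in this sketch is purely illustrative, since the announcement gives only the percentage:

```python
# Illustrative arithmetic for the 35% MPI latency-reduction claim.
# The baseline latency is a hypothetical placeholder, not a figure
# from the announcement, which quotes only the percentage.
baseline_latency_us = 10.0   # assumed PCI-X era node-to-node MPI latency
reduction = 0.35             # reduction quoted in the announcement

pcie_latency_us = baseline_latency_us * (1 - reduction)
print(f"{baseline_latency_us:.1f} us -> {pcie_latency_us:.2f} us")
# prints "10.0 us -> 6.50 us"
```

At cluster scale such per-message savings compound, since tightly coupled MPI applications exchange many small messages per timestep.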
Intel® Extended Memory 64 Technology and InfiniBand

The introduction of Intel Extended Memory 64 Technology in the recently announced 3.60GHz Intel® Xeon™ processor enables addressing of large databases and complex technical computing data sets that span more than 4GB, the ceiling imposed by 32-bit addressing. While Intel EM64T solves the addressability of large data sets for a single node, Mellanox's scalable InfiniBand interconnect takes this to the next level by enabling the distribution of multiple large data sets across many servers for parallel processing while maintaining the same efficiency. Fluent Inc., a supplier of flow modeling software that runs over a cluster of servers, has seen up to a 25% performance advantage using an Intel EM64T enabled cluster interconnected with InfiniBand, compared to the same cluster running in 32-bit mode.

"We expect that an Intel Xeon processor-based platform with Intel EM64T running over InfiniBand will be an excellent solution for Fluent customers, who will benefit from the combination of 64-bit capability to address large problem sizes and the high speed, low latency communication medium necessary for superior scaling on large parallel clusters," said Paul Bemis, VP of product marketing at Fluent. "We are seeing significant interest in this solution and are very pleased to be partnering with Intel and Mellanox to ensure success in this rapidly evolving technology area."

PCI Express and InfiniBand

Throughput benchmarks run on servers connected by dual port InfiniBand HCAs deliver 2628 Mbytes/s (21Gb/s) of aggregate data throughput across an 8X PCI Express slot, a 300% bandwidth improvement over the 853 Mbytes/s (6.8Gb/s) of aggregate data throughput across a 133MHz PCI-X slot. Today, leading server manufacturers such as SuperMicro, in addition to other major server OEMs, are delivering solutions that offer these significant performance advantages with InfiniBand and PCI Express technologies.
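The throughput figures above can be sanity-checked with simple unit conversions (a sketch; 1 Gb/s is taken as 1000 Mbit/s here, matching the rounding in the quoted numbers):

```python
# Sanity-check of the throughput figures quoted in the announcement.
pcie_mb_s = 2628   # dual-port InfiniBand HCA in an 8X PCI Express slot
pcix_mb_s = 853    # same class of HCA in a 133 MHz PCI-X slot

# Ratio of the two measurements: ~3.08, i.e. roughly the 3X increase cited.
ratio = pcie_mb_s / pcix_mb_s
print(f"PCI Express / PCI-X throughput ratio: {ratio:.2f}x")

# Converting Mbytes/s to Gb/s (8 bits per byte, 1000-based units):
print(f"{pcie_mb_s} MB/s = {pcie_mb_s * 8 / 1000:.1f} Gb/s")  # ~21.0 Gb/s
print(f"{pcix_mb_s} MB/s = {pcix_mb_s * 8 / 1000:.1f} Gb/s")  # ~6.8 Gb/s
```

Both conversions reproduce the parenthetical Gb/s values in the text, confirming the figures are internally consistent.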
"We are delighted that Mellanox's PCI Express InfiniBand HCA solutions are immediately available in tandem with the launch of our PCI Express based server solutions," said Hermann von Drateln, Director of Business Development at SuperMicro. "Our PCI Express servers with InfiniBand provide end users with the performance improvements they desire in clustered database, storage and High Performance Computing (HPC) applications."

InfiniBand, Always A Generation Ahead

InfiniBand has garnered support from all first tier server vendors and leading storage vendors. The OpenIB alliance (www.openib.org) is driving common software that will unify and accelerate the development of future InfiniBand solutions. With a roadmap to 120Gb/s per port, InfiniBand has established a performance leadership position in the interconnect market. As the leading supplier of silicon products for InfiniBand technology, Mellanox is executing on a product roadmap that delivers both industry leading performance and unsurpassed value for compute server, communications, and storage connectivity.

Come Visit Us at LinuxWorld 2004 (San Francisco, CA, August 2-5)

A live demonstration of Mellanox's InfiniBand technology interconnecting a cluster of Intel EM64T and PCI Express based servers will be running at LinuxWorld 2004. The demonstration is located at SuperMicro's booth (#1465) during exposition hours.