Mellanox Announces Availability of Turnkey NFS-RDMA SDK for InfiniBand

Linux Network File System with InfiniBand RDMA Delivers Significant Cost, Scaling and Performance Benefits to End-User Applications

Mellanox Technologies today announced the general availability of the NFS-RDMA SDK (Network File System over Remote Direct Memory Access Software Development Kit) for its InfiniBand adapter products. The SDK supports the OpenFabrics Enterprise Distribution (OFED) version 1.2 software stack and delivers 1.3GB/s (gigabytes per second) of read throughput and 600MB/s (megabytes per second) of write throughput over a single InfiniBand link – a tenfold improvement over existing NFS over Gigabit Ethernet solutions available in the market.

“Mellanox InfiniBand solutions offer the best price/performance when it comes to delivering compute and storage capacity scaling and performance using multi-core, CPU-based commodity servers,” said Sujal Das, director of software product management at Mellanox Technologies. “With the release of the NFS-RDMA SDK for InfiniBand, we are accelerating the time to market for OEMs and end users, enabling them to gain a competitive edge and better ROI for their network file system-based applications.”

Unprecedented Network File System Performance

Using the Mellanox MTD2000 storage platform reference design (based on Intel CPUs, Mellanox InfiniHost III adapters, OFED 1.2, and RHEL5 or SLES 10 SP1) as an NFS-RDMA server with up to four NFS-RDMA clients, read throughput of 1.3GB/s was achieved for file sizes ranging from 64 to 1024 megabytes, and write throughput of 550 to 590MB/s for file sizes ranging from 64 to 512 megabytes, as measured with the IOzone file system benchmark (www.iozone.org). Read and write throughput was sustained across all record sizes from 4 to 512 kilobytes.
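For context, the following is a minimal sketch of how an IOzone sweep over the quoted file and record sizes might be scripted. The mount point, test-file path, and harness structure are illustrative assumptions rather than part of the SDK; IOzone's own documentation (www.iozone.org) is the authority on its options.

    #!/usr/bin/env python3
    """Illustrative IOzone sweep over an NFS-RDMA mount (hypothetical paths)."""
    import subprocess

    MOUNT_POINT = "/mnt/nfs_rdma"              # assumed NFS-RDMA client mount point
    FILE_SIZES_MB = [64, 128, 256, 512, 1024]  # file sizes cited in this release
    RECORD_SIZES_KB = [4, 16, 64, 256, 512]    # record sizes cited in this release

    for size_mb in FILE_SIZES_MB:
        for rec_kb in RECORD_SIZES_KB:
            # -i 0 = write/rewrite test, -i 1 = read/reread test,
            # -s = file size, -r = record size, -f = test file location
            subprocess.run(
                ["iozone", "-i", "0", "-i", "1",
                 "-s", f"{size_mb}m", "-r", f"{rec_kb}k",
                 "-f", f"{MOUNT_POINT}/iozone.tmp"],
                check=True,
            )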
Cost-effective Scaling and Resource Consolidation for Real Applications

Through low-latency node-to-node connectivity and low price and power per megabyte of available bandwidth, InfiniBand enables cost-effective scaling of both compute and storage capacity. The high storage throughput of InfiniBand and NFS-RDMA translates into an estimated 5X I/O price/performance improvement over existing enterprise-class storage platforms based on Gigabit Ethernet interfaces. Use of white-box, Linux-based commodity storage platforms such as the MTD2000 can further reduce the capital cost of deployment.

Applications where storage and compute capacity growth are critical can benefit from Mellanox’s NFS-RDMA SDK. These include clustered databases, CAD (computer-aided design), CAE (computer-aided engineering), DCC (digital content creation), EDA (electronic design automation), financial services, order management, and web services. For example, in web services applications where managing storage capacity growth and delivering high levels of transaction performance at minimal cost are critical, NFS-RDMA over InfiniBand has proven to deliver compelling value.

“To deliver end-user satisfaction with applications that demand multi-terabytes of file-based storage capacity, cost and power savings are as important as the ability to scale in a flexible way,” said Ekechi Nwokah, Storage Architect at Alexa Internet, an Amazon.com subsidiary. “Our data mining applications using NFS-RDMA and Mellanox-based InfiniBand solutions are enabling us to deliver on that promise with maximum ROI.”

Availability

The NFS-RDMA SDK is a free, open-source package available now from Mellanox (www.mellanox.com/products/nfs_rdma_sdk.php). The package accelerates OEM development of a complete NFS-RDMA storage solution and includes both the NFS-RDMA client and server software stacks, which are compatible with virtually any commodity Linux-based x86 server; a representative client-side mount sequence is sketched at the end of this release. Come see a demonstration of the NFS-RDMA SDK using the Mellanox MTD2000 platform reference design (www.mellanox.com/products/mtd2000.php) at the Mellanox booth (#314) at LinuxWorld 2007 in San Francisco, August 6-9, 2007.
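For readers new to NFS-RDMA, here is a minimal sketch of bringing up a client-side mount. It assumes the mainline Linux RDMA transport module (xprtrdma) and the conventional NFS-RDMA port 20049; the server name and export path are placeholders, and the OFED 1.2-era SDK may document different module names and steps, so consult the SDK's own guide for the supported procedure.

    #!/usr/bin/env python3
    """Hypothetical NFS-RDMA client mount helper (run as root)."""
    import subprocess

    SERVER = "mtd2000.example.com"   # placeholder NFS-RDMA server hostname
    EXPORT = "/export/data"          # placeholder exported filesystem
    MOUNT_POINT = "/mnt/nfs_rdma"    # local mount point

    # Load the client-side RDMA transport for the kernel NFS stack.
    subprocess.run(["modprobe", "xprtrdma"], check=True)

    # Mount the export over RDMA; 20049 is the standard NFS-RDMA port.
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "rdma,port=20049",
         f"{SERVER}:{EXPORT}", MOUNT_POINT],
        check=True,
    )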