SYSTEMS
Texas Memory Systems Demonstrates High-Performance InfiniBand Solid State Storage
Texas Memory Systems demonstrates a new InfiniBand interface that delivers up to 3 Gigabytes per second of sustained random data access over just four interface ports.

Texas Memory Systems announced it has developed solid state storage technology that utilizes an InfiniBand interface, an open I/O architecture that provides scalable performance from 2.5 Gigabits per second to 120 Gigabits per second. Texas Memory Systems will demonstrate its 4x InfiniBand interface this week at SC|05, the Supercomputing conference in Seattle, Washington. Using 4x InfiniBand, a single RamSan solid state disk will offer up to 3 Gigabytes per second of sustained random data access over just four interface ports.

The InfiniBand interface will allow Texas Memory Systems' RamSan solid state storage to connect natively to the high-bandwidth, low-latency servers used in high-performance computing and Oracle grid computing environments. Currently, InfiniBand enables increased network bandwidth of up to 10 Gigabits per second, provides redundant connectivity support, allows for shared resources, and offers lower CPU utilization.

"High performance computing environments are increasingly adopting InfiniBand as the network of choice for server-to-server interconnects due to its low latency," said Woody Hutsell, Texas Memory Systems' Executive Vice President. "Conveniently, this same low-latency network has enormously high bandwidth, which perfectly accommodates our high-bandwidth solid state disk systems and is critically important to our plans for a next-generation disk storage system that we will announce next year."

Texas Memory Systems is working closely with other InfiniBand technology providers to ensure flawless interoperability across every component in the system. Host adapters from one such provider, Mellanox, will be used by Texas Memory Systems as part of its demonstration at the SC|05 conference.

"Texas Memory Systems' native InfiniBand solid state storage opens storage I/O bottlenecks for bandwidth-hungry applications, bypassing gateways or separate Fibre Channel SANs," said Thad Omura, Vice President of Product Marketing for Mellanox Technologies. "In addition, the convergence of computing and storage traffic on the same InfiniBand fabric simplifies the network and eases the management of the entire cluster."

InfiniBand-based RamSan solid state disk systems are expected to be generally available in early 2006.
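As a rough illustration of how the cited figures fit together (this calculation is not part of the announcement): a 4x InfiniBand link at the SDR signaling rate runs 4 lanes at 2.5 Gbit/s each, and 8b/10b encoding leaves roughly 8 Gbit/s, or about 1 GB/s, of usable bandwidth per port per direction. The sketch below, assuming those standard InfiniBand SDR parameters, shows that four such ports provide roughly 4 GB/s of theoretical throughput, which comfortably covers the 3 GB/s sustained rate claimed for the RamSan demonstration.

```python
# Back-of-the-envelope check of the bandwidth figures quoted in the article.
# Assumes 4x InfiniBand SDR signaling (2.5 Gbit/s per lane) with 8b/10b encoding;
# these are standard InfiniBand SDR parameters, not values from the announcement.

LANE_RATE_GBPS = 2.5       # SDR signaling rate per lane, in Gbit/s
LANES_PER_PORT = 4         # "4x" link width
ENCODING_EFFICIENCY = 0.8  # 8b/10b encoding: 8 data bits per 10 signaled bits
PORTS = 4                  # four interface ports, as cited for the RamSan demo

# Usable payload bandwidth of one 4x port, in Gbit/s and GB/s
port_gbit_per_s = LANE_RATE_GBPS * LANES_PER_PORT * ENCODING_EFFICIENCY
port_gbyte_per_s = port_gbit_per_s / 8

# Theoretical aggregate across four ports
aggregate_gbyte_per_s = port_gbyte_per_s * PORTS

print(f"Per 4x port: {port_gbit_per_s:.1f} Gbit/s = {port_gbyte_per_s:.1f} GB/s")
print(f"Four ports:  {aggregate_gbyte_per_s:.1f} GB/s theoretical peak")
print("Cited sustained random-access rate: 3 GB/s (within this ceiling)")
```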