JNI Corp Announces Dual 10 Gb PCI-X to InfiniBand HCA Modules

BALTIMORE - JNI Corporation (Nasdaq:JNIC) today announced two new dual-port 10 gigabit-per-second (Gb/s) PCI-X to InfiniBand® host channel adapter (HCA) modules for server cluster applications, targeting leading database and high-performance computing applications such as Oracle 9i RAC, IBM DB2 Universal Database and MPI/Pro from MPI Software Technology. The InfiniBand Architecture has been selected by the industry's leading OEMs for their next generation of server clustering solutions. The JNI IBX-4x02i-C and IBX-4x02m-C are built on second-generation InfiniBand silicon from IBM Microelectronics and Mellanox Technologies, respectively. JNI's InfiniStar™ line of HCA modules ships with the industry's only enterprise-class software stack and will be featured in demonstrations at the SC2002 Conference, Nov. 17-22 in Baltimore, Md. (JNI Booth #1845, MPI Software Technology Booth #1903).

"JNI Corporation is extremely pleased to be working with the two leading InfiniBand ASIC suppliers in the industry," said Shaun Walsh, general manager, I/O Solutions Group, JNI Corp. "JNI's InfiniStar HCAs provide enterprise-class software drivers for Linux and Microsoft Windows. Additionally, our HCAs are fully compatible with key open-source software available on sites such as SourceForge. These relationships will enable us to continue to offer our customers the broadest level of support for their InfiniBand server cluster applications."

Benefits of InfiniBand

InfiniBand is an open standard supported by more than 100 leading companies through the InfiniBand Trade Association (IBTA). An important feature of InfiniBand is its support for Remote Direct Memory Access (RDMA) technology. RDMA allows server-to-server and server-to-storage data transfers to occur with minimal interruption to the host system. InfiniBand's high performance and low latency have made it the choice of the major OEM systems providers for next-generation servers. All InfiniBand HCA modules support RDMA in hardware, so server networks are no longer constrained by I/O processing overhead and bandwidth limitations. As a result, high-performance server networks that use dense, low-cost rack systems and blades have become attractive alternatives for many applications.
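To make the RDMA concept concrete, the sketch below shows how an application might post a hardware RDMA WRITE. It uses the open-source libibverbs verbs API for illustration only; JNI's own driver stack is not described in this release, and the function shown assumes a queue pair (qp) that has already been connected, a local buffer registered with ibv_reg_mr(), and a remote address and key (rkey) exchanged with the peer out of band.

/* Illustrative sketch, not JNI's API: post an RDMA WRITE so the HCA
 * moves data directly into the peer's memory with no remote CPU work. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                    void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge;
    struct ibv_send_wr wr, *bad_wr = NULL;

    /* Scatter/gather entry describing the registered local buffer. */
    memset(&sge, 0, sizeof(sge));
    sge.addr   = (uintptr_t)local_buf;
    sge.length = (uint32_t)len;
    sge.lkey   = mr->lkey;              /* local key from ibv_reg_mr() */

    /* Work request: the adapter performs the transfer in hardware,
     * which is why RDMA avoids I/O processing overhead on the hosts. */
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* completion on local CQ */
    wr.wr.rdma.remote_addr = remote_addr;        /* peer buffer address */
    wr.wr.rdma.rkey        = rkey;               /* peer's remote key */

    return ibv_post_send(qp, &wr, &bad_wr);
}

Once the work request is posted, the host CPU is free to continue other work; completion is reported asynchronously through a completion queue, which is the "minimal interruption" property the release describes.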