Mellanox InfiniBand Accelerates IBM System Cluster 1350

ConnectX 20Gb/s InfiniBand Adapters Provide Unprecedented Performance for Leading Compute Clusters

Mellanox Technologies today announced that its ConnectX 20Gb/s adapters with PCI Express Gen2 support provide the optimal low-latency, high-performance interconnect for the IBM System Cluster 1350, IBM's high-performance, high-density cluster solution. ConnectX adapters deliver the highest data throughput and lowest latency of any standard PCI Express adapter available today, thereby accelerating applications and data transfers in High Performance Computing (HPC) environments.

“Our customers require an interconnect with best-in-class price/performance for their high-performance computing and business performance computing workloads,” said Sergio Amoni, director of marketing for System x and BladeCenter at IBM. “Mellanox ConnectX adapters deliver unparalleled performance for the most demanding applications and are ideally suited for the extreme-performance supercomputing environments served by the System Cluster 1350.”

“Mellanox’s ConnectX InfiniBand adapters deliver unparalleled I/O connectivity for high-throughput and latency-sensitive clusters,” said Thad Omura, vice president of product marketing at Mellanox Technologies. “The IBM Cluster 1350 with Mellanox ConnectX connectivity provides an ideal solution for a broad range of application environments, including financial services, government and education, industrial design and manufacturing, life sciences, and web serving and collaboration.”

In addition to ConnectX, Mellanox’s single-port InfiniHost III Lx 20Gb/s InfiniBand adapters and its line of copper InfiniBand cables have been certified by IBM for use with the IBM System Cluster 1350. The InfiniHost III Lx adapter is an ideal solution for cost-sensitive applications seeking to benefit from InfiniBand’s high-performance, low-latency characteristics.