PathScale Introduces World’s Lowest-Latency Cluster Interconnect

Standards-Based Technology Breakthrough Cuts Linux Cluster Latency to as Little as One-Third That of Existing Interconnects

-- PathScale InfiniPath Interconnect Offers Industry's Lowest Latency for Commodity AMD Opteron Processor-Based Cluster Nodes --

PathScale, developer of innovative software and hardware solutions that accelerate the performance and efficiency of Linux clusters, today announced the PathScale InfiniPath Interconnect. Building on the success of the PathScale EKOPath Compilers, PathScale InfiniPath is the industry's lowest-latency Linux cluster interconnect, delivering SMP-class performance to commodity-priced clustered computing. The PathScale InfiniPath Interconnect leverages three important industry standards, HyperTransport, InfiniBand, and the AMD64 architecture, to maximize performance and make low-latency interconnects affordable to a broader range of high-performance computing users.

Unprecedented Linux Cluster Performance

The PathScale InfiniPath Interconnect dramatically increases cluster performance, scalability and throughput, empowering high-performance computing (HPC) users to apply the flexibility and cost effectiveness of Linux clusters both to parallel applications and to applications that previously ran on large, expensive, proprietary symmetric multiprocessing (SMP) computers. Migrating applications from these systems to commodity clusters effectively requires an interconnect with comparable latencies, and PathScale delivers the only low-latency Linux cluster interconnect that meets these application requirements.

With an MPI latency of 1.5 microseconds and a bidirectional data rate of 1.8 gigabytes per second, the PathScale InfiniPath Interconnect offers the industry's lowest latency with extremely high bandwidth, delivering unmatched application scalability for Linux clusters. These characteristics combine to improve cluster efficiency and allow HPC applications to scale effectively to thousands of computing nodes. In addition, the performance of an InfiniPath-enabled cluster will continue to improve with the speed of the processor: the faster the processor, the lower the latency of the InfiniPath Adapter and the greater the efficiency of the cluster.

"Sandia's Advanced Simulation and Computing Program applications are very demanding on system interconnect performance and require extremely low latency for scalability," said Douglas Doerfler, Principal Member of Technical Staff at Sandia National Laboratories. "The PathScale EKOPath compilers on our AMD Opteron systems meet these high demands, and we look forward to testing the PathScale InfiniPath Interconnect."

Greater efficiency also allows users to move complex computational jobs into larger cluster environments and obtain faster results. Examples of HPC applications that can benefit from increased cluster efficiency include computational fluid dynamics, reservoir simulation, weather forecasting, crash analysis, weapons simulation, and molecular modeling. Key enterprise applications such as business intelligence and financial modeling can also realize substantial benefits from PathScale InfiniPath.
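For context, MPI latency figures such as the 1.5-microsecond number cited above are conventionally measured with a two-node ping-pong micro-benchmark: two MPI ranks bounce a tiny message back and forth, and half of the average round-trip time is reported as the one-way latency. The sketch below illustrates the idea in C using only standard MPI calls. It is a generic, illustrative example, not PathScale benchmark code, and the iteration count and one-byte payload are arbitrary choices; it assumes whatever MPI implementation and C compiler are available on the cluster.

/* Minimal MPI ping-pong latency sketch (illustrative only, not PathScale
 * benchmark code). Rank 0 and rank 1 bounce a one-byte message back and
 * forth; half of the average round-trip time approximates the one-way MPI
 * latency figure quoted above. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;   /* enough repetitions to average out timer noise */
    int rank, i;
    char buf = 0;              /* one-byte payload keeps the message as small as possible */
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* start both ranks together */
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("approximate one-way MPI latency: %.2f microseconds\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}

Such a benchmark is run with two ranks placed on two different nodes (for example, mpirun -np 2 with one rank per node) so that the measurement crosses the interconnect rather than staying within a single node's shared memory.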
Adherence to Industry Standards

PathScale is a strong advocate of using industry standards to drive commodity pricing for HPC technologies, and the PathScale InfiniPath Interconnect is built on widely adopted standards for connecting cluster system components. These include the InfiniBand switched-fabric architecture, Linux, the Message Passing Interface (MPI), HyperTransport and the HTX connector, and the AMD64 processor architecture.

PathScale InfiniPath connects AMD Opteron processor-based cluster nodes through external InfiniBand 4X switches. InfiniPath has been tested with the leading InfiniBand switch suppliers, including Topspin, Voltaire, Mellanox and InfiniCon. This ensures full interoperability and allows InfiniPath to be fully managed by the subnet management offerings of the leading InfiniBand switch vendors.

The PathScale InfiniPath Interconnect attaches directly to the HyperTransport port on the Opteron. The InfiniPath ASIC can be placed directly onto a processor motherboard or implemented as an adapter card that plugs into an industry-standard HyperTransport HTX slot. Compared to existing high-speed interconnects, the PathScale InfiniPath Interconnect offers significantly lower latency and higher bandwidth at lower, commodity-like prices.

"The acceptance of AMD Opteron processor-based servers in HPC markets has been phenomenal, and the standards-based, high-performance InfiniPath interconnect will help that trend continue," said Ben Williams, vice president of enterprise and server/workstation business for AMD's Microprocessor Business Unit. "InfiniPath takes full advantage of the AMD Opteron processor's Direct Connect Architecture and HyperTransport technology."

Iwill, a leading manufacturer of motherboards for the AMD Opteron market, recently announced the Iwill DK8-HTX™ dual-processor AMD Opteron motherboard. Iwill is the industry's first motherboard manufacturer to support the new industry-standard HyperTransport HTX slot, and the DK8-HTX is the first motherboard to support the PathScale InfiniPath HTX Adapter.

Initial Partners Already Engaged

PathScale InfiniPath offers significant opportunities for resellers, integrators and OEMs because it enables them to further differentiate their product offerings. Many of the leading AMD system OEMs, including Linux Networx, Microway, Angstrom, Appro, GridCore, Dalco, Hard Data and TeamHPC, have already committed to resell PathScale InfiniPath to their HPC customers who require ultra-low latency. Because InfiniPath can be deployed either as an adapter card or on the motherboard, it offers these partners even greater flexibility and potential differentiation. PathScale has worked with AMD system OEMs over the last six months, bidding PathScale InfiniPath in many large-scale cluster proposals planned for deployment in 2005.

"Latency has been the last great barrier to real application scalability on commodity Linux clusters," said Scott Metcalf, CEO of PathScale. "InfiniPath removes that barrier and accelerates the migration away from expensive, large-scale SMP solutions for HPC applications."

PathScale InfiniPath will be demonstrated live at the Supercomputing SC2004 conference in Pittsburgh, PA, on November 8-11. InfiniPath will be shown both at the PathScale booth (#1849), running on Microway servers with Iwill DK8-HTX motherboards, and at the AMD booth (#1841). The PathScale InfiniPath HTX Adapter will be generally available from PathScale and its Authorized FastPath Partners in the second quarter of 2005, with engineering samples available to select OEMs earlier.