QLogic InfiniPath Host Channel Adapters Now Supported by Scali MPI Connect
Customers Using Scali MPI Connect Can Now Scale Large Clusters Higher with Support for HCAs Delivering the Lowest Latency and 10x the Message Rate

QLogic, the only end-to-end provider of host adapters to switched fabrics for storage and high-performance computing (HPC) networks, and Scali, the leader in high-performance clustering solutions for datacenters, announced that Scali MPI Connect v5.3.1 will support the QLogic InfiniPath family of InfiniBand host channel adapters (HCAs). MPI (the Message Passing Interface) is the standard that programmers use to run a program across thousands of processors and share data and messages among them. Customers use the advanced communication libraries of Scali MPI Connect to speed the execution of their applications. Now, with Scali MPI Connect support for InfiniPath HCAs, customers can further enhance the performance of their applications using HCAs that deliver the industry's lowest latency and 10x the message rate of competitive products.
"The performance of QLogic InfiniPath adapters makes them an ideal fit for Scali customers seeking a complete solution for their intensive computing needs," said Rick Friedman, Scali's vice president of marketing. "We are excited to work with QLogic and expect that our customers will be enthusiastic about using superior QLogic interconnect products."

Scali MPI Connect: Enabling Leading Interconnect Technologies

Scali MPI Connect is a fully integrated MPI solution that enables companies to take advantage of leading interconnect technologies to build high-performance clusters. Heterogeneous support allows customers to use these capabilities without constraints imposed by hardware platform, operating system, interconnect or configuration.

"Working with Scali MPI Connect allows us to extend our customer reach and provide end users the highest-performing interconnect for the HPC community," said Scott Metcalf, general manager, SIG Business Unit, QLogic Corp. "ISVs using Scali MPI will be able to avoid the expense of testing and supporting numerous variations of their applications for Linux clusters, a major benefit of the arrangement."

InfiniPath HCAs: The Performance Leaders for Cluster Interconnect
Available for servers with HyperTransport and PCI Express slots, InfiniPath HCAs are the performance leaders for cluster interconnect. With Scali MPI Connect support, InfiniPath adapters now interoperate smoothly with applications from the numerous ISVs who support the Scali MPI Connect standard. Customers can run many different applications on top of InfiniPath to solve their higher-level problems in less time and to scale those problems more efficiently on large clusters.