Mellanox Releases ScalableSHMEM 2.0, ScalableUPC 2.0

Solutions Provide Unprecedented Scalability of SHMEM and PGAS/UPC Applications over InfiniBand

Mellanox Technologies has released ScalableSHMEM 2.0 and ScalableUPC 2.0 for High Performance Computing applications. These parallel programming interfaces extend Mellanox’s I/O capabilities — low latency, high throughput, low CPU overhead, Remote Direct Memory Access (RDMA) and advanced collective offloads — into areas of the high-performance computing market that require the one-sided communication and shared-memory semantics of the SHMEM and PGAS/UPC programming models. Used in conjunction with Mellanox CORE-Direct collective offloads and Mellanox Accelerated Messaging (MXM), ScalableSHMEM and ScalableUPC provide users with the highest performance, efficiency and scalability for their parallel applications.
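To illustrate the one-sided, shared-memory style of communication these interfaces target, below is a minimal sketch of an OpenSHMEM "put" written against the generic OpenSHMEM C API. The function names follow the public OpenSHMEM specification rather than anything specific to ScalableSHMEM, and the two-PE scenario is purely illustrative.

    #include <stdio.h>
    #include <shmem.h>

    int main(void) {
        shmem_init();                      /* initialize the OpenSHMEM runtime */
        int me   = shmem_my_pe();          /* this processing element's rank */
        int npes = shmem_n_pes();          /* total number of PEs in the job */

        /* Symmetric allocation: the same remotely accessible buffer
           exists at the same logical address on every PE. */
        int *dest = shmem_malloc(sizeof(int));
        *dest = -1;
        shmem_barrier_all();

        /* One-sided put: PE 0 writes directly into PE 1's memory
           without PE 1 posting a matching receive. */
        if (me == 0 && npes > 1) {
            int value = 42;
            shmem_int_put(dest, &value, 1, 1);
        }
        shmem_barrier_all();

        if (me == 1)
            printf("PE %d received %d via one-sided put\n", me, *dest);

        shmem_free(dest);
        shmem_finalize();
        return 0;
    }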

“We are pleased to provide the highest performance capabilities for SHMEM and PGAS-based applications, utilizing our leading I/O solutions and our network offloading technology,” said Gilad Shainer, Senior Director, HPC and Technical Computing at Mellanox Technologies. “The new release is a result of a tight collaboration with Oak Ridge National Laboratory and our OEM partners as part of our mutual plans to provide solutions on the path for Exascale computing.”

“We have been strong proponents of getting PGAS environments such as OpenSHMEM, UPC and Chapel optimized for high-performance InfiniBand networks,” said Stephen Poole, Senior Technical Director of the ESSC and Chief Scientist of the CSMD at Oak Ridge National Laboratory. “We are pleased to partner and work with Mellanox to increase InfiniBand usage through support of new communication libraries and optimized performance capabilities.”

Mellanox ScalableSHMEM 2.0 is developed in conjunction with the API definition from OpenSHMEM.org, and Mellanox ScalableUPC 2.0 is developed in conjunction with the Berkeley UPC project.

Availability

Mellanox ScalableSHMEM 2.0 and ScalableUPC 2.0 are available as individual packages running over the industry-standard OpenFabrics Communication stack.