PSSC Labs releases new CBeST cluster management stack

PSSC Labs has refreshed its CBeST (Complete Beowulf Software Toolkit) cluster management package. CBeST is a proven platform deployed on more than 2,200 PowerWulf Clusters to date, and with this refresh PSSC Labs adds a host of new features and upgrades to ensure users have everything needed to manage, monitor, maintain, and upgrade their HPC cluster.

PSSC Labs is a standout in the supercomputing server market. Unlike other entrants in this field, PSSC Labs delivers customized turnkey solutions that offer lower power consumption, higher performance, and faster time to production, all with immediate access to support from highly skilled, US-based engineers.
 
The company prides itself on creating the greenest servers in three main categories: power, density, and reusability. For example, its CloudOOP 12000 rack can hold up to 320 terabytes of data, with 10 terabytes of usable SSD storage per unit. Each terabyte draws about 5 watts of electricity, making the CloudOOP 12000 the most space- and power-efficient server system on the market. The CloudOOP 12000 server draws up to 40% less power than comparable models by stripping out unnecessary components that only add energy use and cost. It also uses Micron SSDs instead of traditional spinning disks, which further reduces power consumption, saving about 30–50 watts per server. Although that may sound small, at the scale of a multi-terabyte customer's server fleet it adds up to significant power and cost savings.
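To make the scaling claim concrete, here is a back-of-the-envelope estimate using the article's 30–50 watts-per-server figure. The fleet size and electricity price are hypothetical assumptions, not PSSC Labs numbers:

```shell
# Rough annual savings estimate from replacing spinning disks with SSDs.
# Per-server savings (30-50 W) comes from the article; fleet size and
# electricity price are hypothetical.
SERVERS=1000              # hypothetical fleet size
SAVINGS_LOW_W=30          # low end of per-server savings (watts)
SAVINGS_HIGH_W=50         # high end of per-server savings (watts)
KWH_PRICE_CENTS=12        # hypothetical electricity price (cents/kWh)
HOURS_PER_YEAR=8760

low_kwh=$(( SERVERS * SAVINGS_LOW_W * HOURS_PER_YEAR / 1000 ))
high_kwh=$(( SERVERS * SAVINGS_HIGH_W * HOURS_PER_YEAR / 1000 ))

echo "Annual savings: ${low_kwh}-${high_kwh} kWh"
echo "At ${KWH_PRICE_CENTS}c/kWh: \$$(( low_kwh * KWH_PRICE_CENTS / 100 ))-\$$(( high_kwh * KWH_PRICE_CENTS / 100 )) per year"
```

Under these assumptions a 1,000-server fleet saves roughly 260–440 MWh per year, which is why a few tens of watts per server matters at scale.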
 
The CBeST software stack is integrated into PSSC Labs’ PowerWulf Clusters to deliver a preconfigured solution with all the necessary hardware, network settings, and cluster management software in place prior to shipping. Thanks to its component-based design, CBeST is the most flexible cluster management software package available.
 
“PSSC Labs is unique in that we manufacture all of our own hardware and develop our own cluster management toolkits in-house. While other companies simply cobble together third-party hardware and software, PSSC Labs custom builds every HPC cluster to achieve performance and reliability boosts of up to 15%,” said Alex Lesser, Vice President of PSSC Labs. “Our highly skilled and deeply knowledgeable engineers can modify every CBeST component to complement the customer’s unique hardware specifications and computing needs, and are here to provide responsive support for the lifetime of the product. The end result is a superior, ready-to-run HPC solution at a cost-effective price.”
 
New CBeST Version 4 features include:
 
Support for CentOS 7 & RedHat 7
·         Previous versions of CBeST supported only CentOS 6 and RedHat 6
 
Diskless Compute Node Support
·         Cost -- Because the compute nodes have no disks, the cost is reduced. The budget typically allocated for traditional hard disks/SSDs can either be saved entirely or reinvested into other areas of the cluster (network storage, additional RAM, or even extra compute nodes).
·         Stability -- Hard drives are the most failure-prone component. Eliminating them also removes the biggest potential point of failure from each compute node.
·         Performance -- Since the operating system runs in a minimal footprint of RAM as opposed to a hard drive, performance is generally superior.
·         Security -- Some companies and government agencies have IT security requirements for the disposal of failed storage devices. Diskless compute nodes eliminate this issue.
·         Management/Provisioning -- Compute node software can be managed from a single chroot (change root) environment. It's also very simple to test software changes/upgrades: users back up the existing image, make their changes, and reboot the nodes. If something goes wrong, they revert to the backup and reboot again to restore the nodes to their previous state.
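The back-up, modify, and revert cycle described above can be sketched in a few shell commands. The image path here is a throwaway temp directory so the sketch is safe to run; a real CBeST image root, and the exact provisioning and reboot commands, are assumptions rather than documented CBeST interfaces:

```shell
# Hypothetical sketch of the diskless-node image update workflow.
set -e

IMAGE=$(mktemp -d)                  # stand-in for the node image root
echo "v1" > "$IMAGE/etc_version"    # pretend this file is the image content
BACKUP="${IMAGE}.bak"

# 1. Back up the existing image before changing anything.
cp -a "$IMAGE" "$BACKUP"

# 2. Make a change (in real life: chroot into the image and install or
#    upgrade packages, e.g. `chroot "$IMAGE" yum -y update`).
echo "v2" > "$IMAGE/etc_version"

# 3. If the change misbehaves on the nodes, restore the backup...
rm -rf "$IMAGE"
mv "$BACKUP" "$IMAGE"

# ...and rebooting the compute nodes would return them to the prior state.
cat "$IMAGE/etc_version"
```

Because diskless nodes boot from this single image, one backup/restore on the head node is enough to roll the whole cluster back; no per-node reinstallation is needed.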
 
Support for the latest high speed network fabrics
·         Support for Intel Omni-Path (56 Gbps & 100 Gbps) Network Backplane
·         Support for Mellanox EDR Infiniband (100 Gbps) Network Backplane
·         Higher speed network fabrics allow faster computational speed and overall cluster performance
 
Support for the latest processor and coprocessor technologies including
·         Intel Xeon Phi
·         NVIDIA P100 GPU
·         Altera FPGAs
 
Offering support for these new processor and co-processor technologies widens the breadth of computational problems that can be solved with PowerWulf Clusters. Support for Xeon Phi and NVIDIA P100 GPUs is key because these devices are often central to deep learning, machine learning, and artificial intelligence applications.
 
Every PowerWulf HPC Cluster with CBeST includes a one-year unlimited phone/email support package (additional years of support are available). Prices for a custom-built PowerWulf solution start at $20,000.