SCIENCE
Cycle Computing Leverages Opscode's Hosted Chef to Support Large Scale Compute Cluster Protein Analysis
- Written by: Webmaster
Opscode announced that Cycle Computing used Opscode’s hosted Chef to test the limits of cloud computing, building a 10,000-core cluster in the cloud for a major biomedical client. Using Chef, Cycle Computing provisioned the entire cluster in 45 minutes, bringing the cost of a reliable, secure run down to roughly $8,500, versus an estimated $5 million to build equivalent physical infrastructure.
Cycle Computing delivers secure and flexible HPC and data solutions for a range of clients, helping them maximize use of their existing infrastructure and speed computations on desktops, servers, and on demand in the cloud. Cycle clients experience faster time-to-market, decreased operating costs, and unprecedented service and support. Recently, Cycle Computing decided to test the limits of cloud HPC by provisioning 10,000 cores on Amazon’s Elastic Compute Cloud (EC2) as an HPC cluster using batch-scheduling technology.
“We augmented our Chef infrastructure and streamlined configuration scripts to increase scalability,” said Jason Stowe, CEO for Cycle Computing. “We developed a novel converge timing system which increased both the peak number of nodes supported and the nodes deployed per unit time. Opscode’s Chef enabled us to easily and reliably configure our systems to meet our demanding scale requirements.”
Cycle Computing deployed the open-source Condor grid-management system along with CycleServer, a cluster telemetry, analytics, and visualization engine for HPC environments, and CycleCloud, a cloud service that runs atop Amazon’s EC2. CycleCloud is engineered to launch large-scale clusters on demand, giving clients supercomputing power for extensive computational analysis. Additionally, Cycle employed Opscode’s Chef infrastructure-automation engine to automate software installation on each server. With these tools, it took Cycle a mere 15 minutes to fire up the first 2,000 cores in the virtual cluster, and within 45 minutes all 10,000 virtual cores on the 1,250 servers were spinning.
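In Chef, per-server setup of this kind is expressed as recipes: declarative Ruby code describing the packages, files, and services each node should converge to. Cycle’s actual cookbooks were not published with this announcement, so the following is only a minimal illustrative sketch of what a recipe installing a Condor worker might look like; the package name, file path, and node attribute are assumptions:

```ruby
# Hypothetical sketch of a Chef recipe for a Condor worker node.
# Package/service names and the node attribute are illustrative only.

# Install the Condor batch-scheduling software from the platform's
# package repositories.
package 'condor' do
  action :install
end

# Write a minimal local configuration pointing this worker at the
# cluster's central manager (attribute name is an assumption).
file '/etc/condor/condor_config.local' do
  content "CONDOR_HOST = #{node['condor']['central_manager']}\n"
  mode '0644'
end

# Ensure the Condor daemon starts now and on every boot.
service 'condor' do
  action [:enable, :start]
end
```

Because recipes are declarative and idempotent, the same code can converge 2 nodes or 1,250 — which is what makes this style of automation suited to firing up thousands of cores at once.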
Scientific researchers installed their protein-analysis application code and ran the job for eight hours at a total cost of approximately $8,500, including storage capacity and fees for using CycleCloud as a service. That pencils out to just over $1,000 per hour for a 10,000-core cluster. By comparison, building out a dedicated datacenter for a project of this size, with physical servers, storage, and switching, plus operation and maintenance costs, is estimated at somewhere near $5 million.
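The per-hour figure follows directly from the numbers quoted above; a quick back-of-envelope check (all inputs taken from the article):

```ruby
# Sanity-check the cost figures reported for the 8-hour, 10,000-core run.
total_cost = 8_500.0   # USD, including storage and CycleCloud service fees
run_hours  = 8
cores      = 10_000

cost_per_hour      = total_cost / run_hours           # just over $1,000/hour
cost_per_core_hour = total_cost / (cores * run_hours) # cost of one core for one hour

puts format('$%.2f per cluster-hour', cost_per_hour)
puts format('$%.4f per core-hour', cost_per_core_hour)
```

At roughly a dime per core-hour, the gap against a multi-million-dollar physical build-out is what makes the economics of the experiment notable.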
“The beauty of open-source solutions like Chef is that resourceful, innovative companies such as Cycle Computing are able to pool a variety of tools to affordably solve a myriad of problems,” said Adam Jacob, chief product officer and co-founder of Opscode. “Researchers at large life science companies who need to gather results in days and not weeks can leverage hosted, automated infrastructure technology to run their own code, rather than creating a datacenter, saving a significant chunk of change.”