Michigan's FLUX offers researchers new supercomputing options
Five researchers in the College of Literature, Science, and the Arts (LSA) and the College of Engineering (CoE) are piloting a high-performance computing (HPC) cluster. Known as FLUX, the cluster is ultimately designed to deliver greatly enhanced research supercomputing resources in more cost-effective and energy-efficient ways.
This effort is a component of the Computing and Information Resources for Research as a Utility Service (CIRRUS) project and is one of the first steps toward expanding the University of Michigan's research supercomputing capacity to make it more current, efficient and effective for research. The pilot began May 3 and is expected to expand to a larger system later this summer.
“With FLUX, we’re taking a holistic view of HPC, provisioning for both compute- and data-intensive research,” says Dan Atkins, associate vice president for research cyberinfrastructure. “This service model minimizes idle computing capacity with allocation-on-demand and resource scheduling policies that improve efficiency. It also provides services to those not owning machines or needing capacity beyond what they own.”
The FLUX initiative offers researchers new options beyond the private ownership model now prevalent at U-M and other research universities. It includes access to a substantial software library, and the hardware and software will be refreshed regularly to stay current with evolving technology. Usage policies and rates for the full-scale FLUX service are expected to be made public in early fall.
“As part of my negotiations for joining U-M’s astronomy faculty this year, I asked for a 48-core HPC cluster and 24 terabytes of storage,” says Chris Miller, assistant professor of astronomy and pilot participant. “The university offered me guaranteed access to the hardware I needed on FLUX through the end of my tenure review. The offer was hard to turn down. I did not want to manage, operate, maintain, or update my own cluster. The fact that U-M had a plan in place to meet my HPC needs was a welcome surprise.”
The FLUX system is managed and operated by CoE’s Center for Advanced Computing (CAC) under a partnership with the Office of Research Cyberinfrastructure (ORCI) and Information and Technology Services.
The pilot-phase system consists of a modest number of processing units using the Lustre cluster file system and is housed at the Michigan Academic Computing Center.
“With the implementation of the first phase of FLUX, U-M now has a research computing environment that supports the scalable and expandable model of a private cloud,” says Andrew Caird, director of HPC for CoE. “As the FLUX cluster grows and the CIRRUS project is expanded, we will efficiently and effectively address the research computing needs of much of the community of computationally based researchers at U-M.”