TACC's Ranger Supercomputer Surpasses 1.1 Million Jobs in Less Than Two Years
- Written by: Tyler O'Neal
The Ranger supercomputer, one of the most powerful systems in the world for open science research, has run more than 1.1 million jobs in under two years.
When it entered full production on Feb. 4, 2008, this first-of-its-kind system marked the beginning of the Petascale Era in high-performance computing (HPC), in which systems approach a thousand trillion operations per second (a petaflop) and manage a thousand trillion bytes of data (a petabyte).
"Ranger has already enabled hundreds of research projects and thousands of users to do very large-scale computational science in diverse domains," said Jay Boisseau, director of the Texas Advanced Computing Center (TACC). "We're very proud of the tremendous impact it has had on open science, and the impact is growing as it matures and more researcher applications are optimized to use its tremendous capabilities."
Bill Barth, director of TACC's HPC group, said, "The demand for time on Ranger has been very high and instrumental in making TeraGrid the nation's largest resource for open science computational research. The system has run more than 600 million central processing unit (CPU) hours so far."
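To put that figure in rough context, here is a back-of-the-envelope sketch (not from the article) converting 600 million CPU hours into an approximate average utilization, assuming Ranger's published configuration of 62,976 cores and the roughly 22 months between full production and this milestone:

```python
# Back-of-the-envelope utilization estimate for Ranger.
# Assumptions (not stated in the article): 62,976 cores (3,936 nodes x 16
# cores, Ranger's published configuration) and ~22 months of wall-clock time.
CPU_HOURS_DELIVERED = 600e6          # "more than 600 million CPU hours"
CORES = 3_936 * 16                   # 62,976 cores
WALL_HOURS = 22 * 30 * 24            # ~22 months, approximated as 30-day months

available = CORES * WALL_HOURS       # ~1.0 billion core hours available
print(f"Average utilization: ~{CPU_HOURS_DELIVERED / available:.0%}")
# Average utilization: ~60%
```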
As for the user who ran the millionth job, Barth said it was a small post-processing job (16 processors) completed by Dr. Yonghui Weng, a research associate in Professor Fuqing Zhang's hurricane research group in the Department of Meteorology at Pennsylvania State University.
"Researchers need to perform a variety of tasks on Ranger and they are all important to the research process," Barth said. "In addition, we have different types of researchers—ones who are interested in running large single-simulation problems, and ones who are interested in running thousands or millions of really small problems. Our job is to support science at whatever scale."
Weng's research explores the potential of on-demand HPC to support hurricane forecast operations and evaluates high-resolution ensembles in pursuit of the Hurricane Forecast Improvement Program (HFIP) goals for developing and implementing the next-generation hurricane forecast system.
Weng said he has been using Ranger consistently since July 2008 to produce improvements in hurricane forecast accuracy. Zhang's hurricane research group at Penn State is sponsored by grants from the National Science Foundation, the Office of Naval Research, and the National Oceanic and Atmospheric Administration's HFIP project.
"During the hurricane season from July to October, I run an operational hurricane ensemble data assimilation system twice per day, and my team runs an operational deterministic forecast system at the same frequency," Weng said. "In addition to the operational jobs during hurricane season, we use Ranger for sensitivity experiments, model development, and exploration of dynamics and predictability of hurricanes."
To illustrate the variety of ways one researcher can use a system like Ranger, Weng said he ran a cloud-scale ensemble analysis and prediction experiment that used 23,808 processors, and a deterministic forecast job that used 8,192 processors in real time during Hurricane Ike.
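For scale, a quick sketch (not from the article) translating those processor counts into Ranger node counts, assuming the system's published 16-core nodes (four quad-core Opteron processors per node):

```python
# Translate Weng's processor counts into Ranger node counts.
# Assumption (not in the article): 16 cores per node, Ranger's published spec.
CORES_PER_NODE = 16

for label, cores in [("ensemble analysis/prediction", 23_808),
                     ("deterministic forecast", 8_192)]:
    print(f"{label}: {cores} cores = {cores // CORES_PER_NODE} nodes")
# ensemble analysis/prediction: 23808 cores = 1488 nodes
# deterministic forecast: 8192 cores = 512 nodes
```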
"The system is wonderful and I'm impressed with the TACC support staff which make our jobs run so efficiently," Weng said.
"During the first several months of large-scale system deployment, every tweak is important," Barth said. "As time goes on the system settles out and begins to operate as a well-oiled machine. It's still many people's full-time jobs to keep Ranger running, but at the same time we can start to think about deploying new systems."
The Ranger supercomputer is funded through the National Science Foundation (NSF) Office of Cyberinfrastructure "Path to Petascale" program. The system is a collaboration among the Texas Advanced Computing Center (TACC), The University of Texas at Austin's Institute for Computational Engineering and Sciences (ICES), Sun Microsystems, Advanced Micro Devices, Arizona State University, and Cornell University.
Ranger is a key system of the NSF TeraGrid (www.teragrid.org), a nationwide network of academic HPC centers, sponsored by the NSF Office of Cyberinfrastructure, that provides scientists and researchers with access to large-scale computing, networking, data-analysis, and visualization resources and expertise.