Pittsburgh System Marks a Watershed in Weather Prediction

PITTSBURGH, PA -- A watershed has been crossed in the relationship between weather prediction and computers. In tests on the Terascale Computing System (TCS) at Pittsburgh Supercomputing Center (PSC), two computational models used to predict weather have sustained the best performance yet recorded, doubling previous records, says John Michalakes, a software engineer at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. "This signals a fundamental improvement," he says, "for climate and weather computational science in the United States."

Michalakes leads the working group for software architecture and implementation for the Weather Research and Forecast (WRF) project, a large, multi-institutional effort that includes the National Science Foundation, the Department of Commerce, the Department of Defense and university scientists. He and his colleagues are working with PSC staff and with Compaq to exploit the newly installed TCS -- which comprises 3,000 Compaq EV68 Alpha microprocessors -- to its fullest potential for weather modeling.

Speaking on Nov. 14 at the Supercomputing 2001 conference in Denver, Michalakes presented results from a series of late-October runs testing the WRF model, a next-generation community regional model designed to serve both research and operational forecasting. He also ran a predecessor model, the Penn State/NCAR MM5, currently in wide use.

Performance data show that, using 512 processors, MM5 runs on the TCS at 100 gigaflops (billions of calculations per second), more than double its performance on a leading vector system, a 40-processor Fujitsu VPP5000. Even using only 128 processors, the TCS outperforms the vector system. The WRF model, still under development, achieved 80 gigaflops on 512 TCS processors.

This level of performance is significant, says Michalakes, because U.S. weather-prediction and climate research have suffered in recent years from the inability to acquire and run on the fastest computing systems.

MM5 performance benchmarks are available at: http://www.mmm.ucar.edu/mm5/mpp/performance

Vector systems, such as the VPP5000, represent a fundamentally different approach to supercomputer design from the "massively parallel" approach of the TCS. Vector processing uses pipelining and high memory bandwidth to achieve very high single-processor performance by exploiting fine-grained parallelism within the inner loops of atmospheric codes. The design, however, significantly increases the cost per processor relative to microprocessors like the EV68. "Vector systems also require greater software engineering effort to tailor code and data structures for efficient execution, especially in atmospheric physics routines," Michalakes noted.

The VPP5000 has a peak performance rating of 9.6 gigaflops per processor, compared to two gigaflops for each TCS processor. Using 40 processors, the VPP5000 runs the MM5 weather model at a little more than 40 gigaflops. As recently as a year ago, this outstripped all but the very largest available U.S. systems.

"The situation was that if you could get a vector machine you would," says Michalakes, "and that meant you had to be overseas. That's not the case anymore. With the advent of the latest generation of microprocessors -- the Compaq EV68 and the new IBM Power 4 systems, which sustain a significant percentage of the performance of vector processors -- the advantage of vector systems is going away. We've never seen these NWP models run this fast on any other architecture. This represents a watershed in high-performance computing and atmospheric modeling."

Numerical weather prediction (NWP) has had a long and fruitful relationship with computing, Michalakes explained in his Nov. 14 talk.
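The peak and sustained throughput figures quoted above can be checked with simple arithmetic. The sketch below uses only the numbers reported in the article (processor counts, per-processor peak ratings, and sustained MM5 results) to compare each system's sustained performance against its theoretical peak:

```python
# Back-of-the-envelope check of the figures quoted in the article.
# "Sustained" values are the reported MM5 benchmark results;
# "peak" is processor count times the per-processor peak rating.

systems = {
    # name: (processors, peak gigaflops per processor, sustained gigaflops)
    "Fujitsu VPP5000": (40, 9.6, 40.0),
    "PSC TCS (EV68)": (512, 2.0, 100.0),
}

for name, (procs, peak_per_proc, sustained) in systems.items():
    peak = procs * peak_per_proc
    efficiency = sustained / peak
    print(f"{name}: peak {peak:.0f} GF, sustained {sustained:.0f} GF "
          f"({efficiency:.0%} of peak)")
```

The arithmetic shows why the result was striking at the time: despite a nearly fivefold advantage in per-processor peak rating, the vector system's sustained MM5 throughput was less than half that of the 512-processor TCS run.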
In the 1960s, NWP revolutionized weather forecasting by consistently providing better predictions than subjective forecasts. Since then, forecasting has improved side by side with the evolution of computing technology, and advances in computing continue to drive better forecasting as weather researchers develop improved numerical models.

The WRF project is an effort to develop a next-generation "mesoscale" model. Distinguished from global climate models and from microscale models, mesoscale models forecast at scales ranging from a continent down to a single state. Designed to be fully operational in 2004, WRF is expected to replace MM5, now in use as a community model by more than 200 institutions, including several operational forecast centers, such as the U.S. Air Force Weather Agency.

Like MM5, WRF is designed to be portable among a range of computing systems, including IBM, Silicon Graphics, Compaq, Fujitsu, NEC, and clustered Linux systems. The ongoing effort with the TCS, says Michalakes, will be to improve scaling -- the ability to use more and more processors without loss of efficiency per processor.

For more information about WRF, see http://www.wrf-model.org

For more information about the TCS, see http://www.psc.edu

The Pittsburgh Supercomputing Center is a joint effort of Carnegie Mellon University and the University of Pittsburgh together with Westinghouse Electric Company. It was established in 1986 and is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry.
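The scaling goal Michalakes describes -- adding processors without losing efficiency per processor -- is conventionally measured as parallel efficiency, E(p) = T(1) / (p * T(p)). A minimal sketch follows; the timings in it are invented for illustration and are not WRF or MM5 measurements:

```python
# Parallel scaling efficiency: E(p) = T(1) / (p * T(p)).
# Perfect scaling gives E = 1.0; values below 1.0 reflect the
# per-processor efficiency loss the article describes.
# The timings below are hypothetical, not WRF benchmark data.

def efficiency(t1, p, tp):
    """Efficiency of a p-processor run taking tp seconds,
    relative to a single-processor run taking t1 seconds."""
    return t1 / (p * tp)

t1 = 1000.0  # hypothetical single-processor run time (seconds)
for p, tp in [(128, 8.6), (256, 4.7), (512, 2.6)]:
    print(f"{p:4d} procs: speedup {t1/tp:6.1f}, "
          f"efficiency {efficiency(t1, p, tp):.0%}")
```

In this framing, "improving scaling" means pushing E(p) closer to 1.0 at large processor counts, typically by reducing communication overhead and load imbalance.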