INDUSTRY
LCI Conference Continues with Hardware and Software Sessions
By Gary Montry -- One more issue pertaining to large I/O systems: "operability" is not a synonym for "capability." An interesting talk by Andrew Uselton and Brian Behlendorf of Lawrence Livermore National Laboratory described the difficulties they had with the I/O system delivered with BlueGene/L. They "sweated bullets" (their term, not mine) for six months trying to get the I/O system to perform up to design specs; internally, they referred to the effort as "the death march." The system, as delivered, "worked." However, the severely oversubscribed network design left initial performance at only half of the 30+ GB/sec design target. It is akin to spending two hundred grand on a Ferrari and discovering that, without considerable tuning, it won't get you to the market any faster than your neighbor's Buick. Not that I'm blaming IBM; this talk could just as easily have described systems from any other manufacturer, since there was no sensible way to build an I/O system at that scale without oversubscription at the time. It simply points out that these complex systems, which push the state of the art, do not come out of the box ready for prime time.
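To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The ~30 GB/sec target and the 50 percent shortfall are the figures reported above; the 2:1 oversubscription ratio is a hypothetical number of my own choosing (the talk did not quote one), picked purely to illustrate how oversubscription bounds delivered bandwidth.

    # Back-of-the-envelope: why an oversubscribed I/O network caps throughput.
    # The 30 GB/s target and 50% shortfall come from the talk; the 2:1
    # oversubscription ratio is a hypothetical value chosen to illustrate it.

    target_gb_s = 30.0        # design target for aggregate I/O bandwidth
    oversubscription = 2.0    # hypothetical: offered load vs. shared-link capacity

    # With N:1 oversubscription, sustained aggregate throughput is bounded by
    # the shared links, so delivered bandwidth is at best target / N.
    delivered_gb_s = target_gb_s / oversubscription
    shortfall = 1.0 - delivered_gb_s / target_gb_s

    print(f"delivered ~{delivered_gb_s:.0f} GB/s ({shortfall:.0%} below target)")
    # -> delivered ~15 GB/s (50% below target)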
The second day was a sandwich of hardware and software sessions. The morning keynote by Norman Miller (UC Berkeley) discussed the use of cluster-enabled climate modeling software to predict the impact of global warming on California's Sierra Nevada snowpack. It's not a pretty picture, and the work has thrust him into the state government political system. The message here is the success of the open-source WRF (Weather Research & Forecasting) project. Miller and his colleague Jin have added unique capabilities to the WRF code in order to do these simulations, and they will deliver those improvements back to the WRF project for use by other climate researchers.

A short session on DARPA's HPCS program featured presentations from IBM on its PERCS project and from Cray on the Cascade offering. Both presentations were light on technical details, as might be expected. The important fact to take away from this program, highlighted by the IBM speaker (Govindaraju), is that the last factor of 10x in performance took IBM five years, while the PERCS project targets a 100x performance gain over the next five years.

The evening session was the HPC body-building session, where descriptions of several new big machines were paraded before us and muscles were flexed. The parade included Roadrunner (LANL), Abe (NCSA), Ranger (TACC), Jaguar (ORNL), and the Red Storm upgrade (SNLA). The prize for price goes to Ranger, a Sun-built system designed to deliver 529 teraflops at an acquisition cost of $30 million. That works out to slightly less than six cents per megaflop ($30 million divided by 529 million megaflops is about 5.7 cents), more than a factor of two below the typical price range for large clusters.

Finally, Brent Gorda (LLNL) announced the "Cluster Challenge" for Supercomputing '07 in November. The idea is for teams of undergraduates to build a cluster that runs on a single 30-amp circuit and to run some applications on it, getting a feel for the difficulty of provisioning clusters. Brent came up with the idea after realizing that, outside of the laboratories and HPC-centric universities, there is not much knowledge of or experience in how to obtain and provision clusters. Application deadlines are approaching, so if you are interested in fielding a team for the challenge, contact him at bgorda@llnl.gov.

Gary Montry is an independent software consultant specializing in parallel applications development and optimization and in attached processor software. Gary can be reached at gary@spsoft.com.