Big Ben Delivers Short Bursts of Turbulence via TeraGrid
At SC05 (Seattle, Nov. 12-17), the international conference on high-performance computing, networking and storage, University of Minnesota researchers Paul Woodward and David Porter used Big Ben, the 10-teraflop Cray XT3 system at the Pittsburgh Supercomputing Center (PSC), to simulate turbulence in real time. Using the high-performance optical-fiber "backbone" of the TeraGrid, the National Science Foundation's cyberinfrastructure program, the researchers transmitted results from the simulation in Pittsburgh for run-time visualization at the PSC booth.

"Scientific productivity requires rapid answers to 'what if' questions," said Woodward, speaking at the PSC booth. "Batch processing gives answers only in weeks or months. Exploratory runs on local resources also take weeks or months. Our challenge was to use the XT3 to compress time to an hour from what normally takes a week or two. We wanted to use a large machine to do a smaller problem fast, which is actually the hardest thing to do."

Woodward and Porter, both astrophysicists, ran their turbulence code, PPM (Piecewise Parabolic Method), on the XT3 to simulate turbulent fluid dynamics in shear mixing layers, such as those that occur in high Mach number galactic jets spanning thousands of light-years. The XT3 features exceptional interprocessor bandwidth, the speed at which processors share information with each other, which improves performance for many parallel applications; this feature in particular encouraged Woodward and Porter to adapt PPM to carry out smaller runs at high efficiency. "This interconnect is fast, with very low latency," said Woodward, "and the processors are fast, which made it feasible to bring the running time down for a medium-sized problem."

They used 512 XT3 processors, roughly a quarter of the system's 2,090, and relied on special software, developed by PSC scientists Nathan Stone and Raghurama Reddy, to interact with the simulations in real time. As the numbers crunched in Pittsburgh, the researchers volume-rendered images from the data and displayed them on the Seattle show floor. The TeraGrid pipeline between PSC and the researchers' site carries up to 240 megabits per second.

The PSC software, called Portals Direct I/O (PDIO), can route simulation data from Big Ben's processors in real time to remote users anywhere on the wide-area network. PDIO assembles data fragments written by each processor into complete files at the receiving end. For the PPM simulation, PDIO aggregated 512 data streams from the XT3 every 10 time steps into 11 streams and sent them via the TeraGrid to the booth in Seattle, where they were reassembled into a single file that could be visualized with software created by Porter.

"The ability to have instant response from a supercomputer simulation can be very useful," said Woodward. "Maybe you want to change the Mach number, and see what that does. With the TeraGrid backbone, we can get as much bandwidth as we need. This is a whole pipeline of utility programs that we have tied together in an automated fashion. Support from PSC has helped enormously. We have worked with several other centers, and the level of support we received on this project from PSC was outstanding."

The data was volume-rendered with HVR (Hierarchical Volume Renderer), software developed by Porter, and displayed in three formats representing three key variables from the simulation: vorticity, divergence of velocity (which shows shock waves), and density.
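The general pattern PDIO implements, many per-processor output fragments funneled into a few wide-area streams and stitched back into a single file at the receiving end, can be illustrated with a small sketch. The code below is hypothetical and is not the PSC PDIO implementation; the fragment header layout and the function names (parse_fragments, reassemble) are assumptions made for illustration only.

```python
# Hypothetical sketch of the receiving side of a PDIO-like pipeline:
# each compute rank writes fragments tagged with a time step and rank,
# and the receiver merges them into one file per time step.
import struct
from collections import defaultdict

HEADER = struct.Struct("!III")  # (time step, rank, payload length) -- assumed layout

def parse_fragments(stream_bytes):
    """Split one raw network stream into (step, rank, payload) fragments."""
    fragments, pos = [], 0
    while pos < len(stream_bytes):
        step, rank, length = HEADER.unpack_from(stream_bytes, pos)
        pos += HEADER.size
        fragments.append((step, rank, stream_bytes[pos:pos + length]))
        pos += length
    return fragments

def reassemble(streams, out_pattern="step_{:05d}.raw"):
    """Merge fragments arriving on several streams into one file per step,
    ordered by rank so the result matches a single-writer file layout."""
    by_step = defaultdict(dict)
    for stream_bytes in streams:
        for step, rank, payload in parse_fragments(stream_bytes):
            by_step[step][rank] = payload
    for step, ranks in sorted(by_step.items()):
        with open(out_pattern.format(step), "wb") as f:
            for rank in sorted(ranks):
                f.write(ranks[rank])
```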
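Of the three rendered quantities, density is carried directly by the simulation, while vorticity and velocity divergence are fields derived from the velocity. As a rough illustration of how such diagnostics can be computed on a uniform grid before rendering (an assumed post-processing step, not Porter's HVR or the PPM code itself), consider this NumPy sketch:

```python
# Illustrative finite-difference diagnostics for a velocity field (u, v, w)
# stored as 3-D arrays indexed [x, y, z] with uniform grid spacing dx.
import numpy as np

def divergence(u, v, w, dx=1.0):
    """div(u) = du/dx + dv/dy + dw/dz; strong compressions mark shock waves."""
    return (np.gradient(u, dx, axis=0)
            + np.gradient(v, dx, axis=1)
            + np.gradient(w, dx, axis=2))

def vorticity_magnitude(u, v, w, dx=1.0):
    """|curl(u)|, which highlights the eddies in a turbulent mixing layer."""
    wx = np.gradient(w, dx, axis=1) - np.gradient(v, dx, axis=2)
    wy = np.gradient(u, dx, axis=2) - np.gradient(w, dx, axis=0)
    wz = np.gradient(v, dx, axis=0) - np.gradient(u, dx, axis=1)
    return np.sqrt(wx**2 + wy**2 + wz**2)
```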