Introduction to IU's Big Red PowerPC Cluster and Storage Resources via the TeraGrid

Plan to attend this tutorial to learn about, and gain hands-on experience with, Indiana University's new 20.4 TFLOPS Big Red supercomputer and IU's HPSS archival storage system via the TeraGrid. The primary purpose of this tutorial is to enable existing and new TeraGrid users to learn about the Big Red system so that they can easily use codes already ported and optimized for it (e.g., WRF, NAMD, MILC) or rapidly migrate other existing applications to Big Red, thereby accelerating scientific discovery.

Indiana University is offering this TeraGrid-related tutorial at the following conferences:

  Indy '07 Bioinformatics conference, Indianapolis, IN
    Tutorial offered 31 May, 8:00 am - noon
    http://compbio.iupui.edu/indy/

  TeraGrid '07 conference, Madison, WI
    Tutorial offered 4 June, 8:30 am - noon
    http://www.union.wisc.edu/teragrid07/

  BIBE '07 (Bioinformatics and Bioengineering), Boston, MA
    Tutorial offered 14 October 2007
    http://www.cs.gsu.edu/BIBE07/index.php

Within a TeraGrid environment dominated by Intel processors, Big Red's use of PowerPC processors and the Power instruction set presents a perceived barrier to adoption that does not exist for Intel-based systems. This tutorial specifically introduces the compilers and optimizations that provide the best performance with the Power instruction set and the PowerPC processor (see the sample compile lines at the end of this description). In addition, because massive computations increasingly depend on massive data sets as inputs and produce massive data sets as outputs, participants will gain a working knowledge of IU's archival HPSS system and of how to store and access files via GridFTP (a sample transfer also appears below).

Indiana University's Big Red system, a 20.4 TFLOPS IBM e1350 cluster, is presently the second-largest supercomputer integrated with the TeraGrid. Big Red includes 1,024 dual-core IBM PowerPC 970MP processors, configured in 512 JS21 blades, with a Myrinet 2000 interconnect. Big Red supports the TotalView debugger and the Vampir performance analysis system, and is capable of excellent performance on applications scaling into the 1,000 to 2,000 processor range.
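
For illustration, the following is a minimal sketch of the kind of Power-specific compile line the tutorial covers, assuming the IBM XL compilers; the exact compiler wrappers, module names, and paths on Big Red may differ.

  # Assumed example: IBM XL C and Fortran with PowerPC 970 tuning.
  # -qarch/-qtune target the 970MP's implementation of the Power ISA;
  # -qhot enables higher-order loop transformations.
  xlc   -O3 -qarch=ppc970 -qtune=ppc970 -qhot -o mycode mycode.c
  xlf90 -O3 -qarch=ppc970 -qtune=ppc970 -qhot -o mysim  mysim.f90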
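
Likewise, a sketch of moving a file to and from an HPSS archive over GridFTP with the standard Globus client; the hostname and paths below are placeholders, not the actual IU endpoints.

  # Obtain a grid proxy, then copy a file to the archive and back.
  grid-proxy-init
  globus-url-copy -vb file:///home/user/results.tar \
      gsiftp://hpss.example.iu.edu/hpss/user/results.tar
  globus-url-copy -vb gsiftp://hpss.example.iu.edu/hpss/user/results.tar \
      file:///home/user/results.tar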