German University Invests Over $18 Million in Supercomputing

Dresden University of Technology (TUD) has signed a contract with Silicon Graphics to provide a high-performance computing environment representing an investment of over $18 million, a procurement that will distinguish TUD as a Center for Scientific Computing. In two project phases to be completed within twelve months, a state-of-the-art, innovative and flexibly usable infrastructure with a computational power of more than a dozen teraflops will be implemented. This will enable investigators in scientific areas such as physics, materials science, engineering, bioinformatics and nanotechnology to find answers to new types of challenging problems.

As the central component, SGI will install a large SGI Altix shared-memory system containing 6,000 gigabytes of contiguously usable main memory and more than 1,500 processor cores based on the most recent Intel Itanium 2 dual-core technology. This HPC platform will pave the way for a new category of capability computing, serving as a concentrated resource for selected projects, acting as a knowledge accelerator and allowing researchers to work on challenging problems beyond the scope of traditional number crunching.

Beyond providing high computational performance, the procurement -- running under the designation "HPC/Storage Complex for Data Intensive Computing" -- is specifically designed to achieve very high data bandwidths by drawing on an intelligently architected, multi-level storage system. This tiered storage system will enable very fast storing, moving and archiving of extremely large datasets. To this end, SGI plans to install a Storage Area Network (SAN) complex containing 60 terabytes (TB) of online disk capacity, which provides a bandwidth of 8 GB/s (gigabytes per second) to the Altix system and is capable of feeding a petabyte-sized (PB) archive tape robot at a high data rate. A second SAN of 50 TB will be connected to the throughput system, with the option of efficient access to the first SAN and hence to the archiving system. Both SAN systems are based on the SGI InfiniteStorage SAN solution, using Fibre Channel disk array systems from DataDirect Networks; a PB-sized tape library system from Storage Technology Corp. will serve as the archive robot. Hierarchical storage management, including life-cycle management and data storage and retrieval, is provided by the SGI(R) InfiniteStorage DMF (Data Migration Facility) software. Shared file system functionality on the HPC system is implemented through SGI(R) InfiniteStorage CXFS, while the throughput system will use a Lustre file system, as commonly deployed in many of the large US laboratories. Both platforms will run Novell's SUSE(R) LINUX Enterprise Server operating environment.

Complementary to this system, SGI will integrate a PC farm from Linux Networx with roughly 700 single-system boards; acting as a platform for capacity computing, the PC farm will serve the throughput requirements of many hundreds of users throughout the Dresden campus. The procurement is one of the largest HPC contracts to be tendered within Europe in 2005. According to Prof. Hermann Kokenge, Rector of TUD, "The system will effectively strengthen the innovative capabilities of the university, the Dresden area, and the surrounding region. It will provide a critical mass of additional computing power and novel working facilities with which to make groundbreaking discoveries."
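To put the storage figures quoted above in perspective, the following is a minimal back-of-envelope sketch in Python. It assumes decimal units (1 TB = 10^12 bytes, 1 GB = 10^9 bytes); the variable names and the full-rate streaming scenario are illustrative assumptions, not SGI specifications.

    # Back-of-envelope check of the storage figures quoted above (illustrative only).
    # Assumes decimal units: 1 TB = 10**12 bytes, 1 GB = 10**9 bytes.

    TB = 10**12
    GB = 10**9

    online_san_capacity = 60 * TB    # online disk capacity attached to the Altix
    san_bandwidth = 8 * GB           # quoted bandwidth to the Altix, bytes per second

    # Time to stream the entire online SAN to or from the Altix at the quoted rate
    drain_seconds = online_san_capacity / san_bandwidth
    print(f"60 TB at 8 GB/s: {drain_seconds:.0f} s (about {drain_seconds / 3600:.1f} hours)")
    # prints: 60 TB at 8 GB/s: 7500 s (about 2.1 hours)

In other words, at the quoted sustained bandwidth the entire online disk tier could, in principle, be moved to or from the Altix in roughly two hours.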
Accumulated HPC Resources for Bold Questions

How can one discover highly robust organic materials that may replace metallic alloys in osteal (bone) surgery? How is it possible to grow novel types of crystals? What methods allow background noise within a vehicle to be reduced? How can cellular growth processes be tracked and understood via automated cell microscopy? How can one analyze and influence the genetic causes of illnesses? These are only a few of the questions and application areas that will be tackled by researchers using the new TUD computing environment.

No matter which area of research a scientist is concerned with -- be it the analysis of bio-molecular reactions, methods for protein docking or quantum chemistry, the folding of three-dimensional structures, the analysis of films, or the study of turbulent flows in electro-fluid materials under the influence of external magnetic fields using methods of computational fluid dynamics -- the Altix platform provides new perspectives for many computation-based scientific methods. Selected projects will have the opportunity to utilize up to two-thirds of the whole system for some period of time if required. Hundreds of processors working in parallel can then use the memory as a single, contiguously addressable entity, load enormously large datasets in one piece, efficiently perform their calculations on them, or investigate them for patterns or similarities.

"We intend to enable bold and complex projects on the SGI Altix. Our focus is on providing a novel type of HPC tool to the scientific computing community," said Prof. Wolfgang E. Nagel, Director of ZIH (Center for Information Services & HPC). "Our efforts do not center on the usual simulation scenarios; we are more concerned with providing a platform which gives our users the opportunity to extract new and concise knowledge from huge amounts of structured or unstructured data encompassing a lot of hidden information."

In-memory computing is just one of the innovations offered to scientists by ZIH via the SGI HPC platform; for the first time it will be possible to load several complete scientific databases simultaneously into the memory subsystem and to search them for certain correlations at unprecedented speed. The problem that has beset and hindered these kinds of investigations up to now -- the need for time-intensive I/O processing and disk accesses -- is eliminated by in-memory computing.

To make capability computing feasible for alternating projects, it must be possible to rapidly load the HPC platform for a single run and then to rapidly unload it, making the resources available for the next user. The SGI solution can load 4 TB of data into memory within 10 minutes and, at the end of a project run, is capable of saving 25 TB of computing results to the archive system within 4 hours. Nagel: "This is outstanding and allows scientists to use the machine as a real theory accelerator."

"We are pleased to implement a project of this size and ambition in Germany, which will be considered a significant achievement by the global HPC community," added Robert Ubelmesser, Director of Strategic HPC Projects, Europe, SGI. "The idea of data-intensive scientific computing, with all its challenges and opportunities, has been pursued by ZIH in a visionary manner. We take pride in providing the enabling technology for this future-oriented concept."
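Translating the load and archive figures quoted above (4 TB to memory in 10 minutes, 25 TB to the archive in 4 hours) into sustained transfer rates gives the following minimal sketch. Only those two figures come from the text; the decimal unit convention and everything else in the snippet are illustrative assumptions.

    # Implied sustained rates from the figures quoted above:
    # 4 TB loaded into memory in 10 minutes, 25 TB written to the archive in 4 hours.
    # Assumes decimal units: 1 TB = 10**12 bytes, 1 GB = 10**9 bytes.

    TB = 10**12
    GB = 10**9

    memory_load_rate = 4 * TB / (10 * 60)        # bytes per second
    archive_write_rate = 25 * TB / (4 * 3600)    # bytes per second

    print(f"memory load:   {memory_load_rate / GB:.1f} GB/s")    # about 6.7 GB/s
    print(f"archive write: {archive_write_rate / GB:.1f} GB/s")  # about 1.7 GB/s

That is, the quoted loading time corresponds to a sustained rate of roughly 6.7 GB/s into memory, and the archiving time to roughly 1.7 GB/s onto the archive tier.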
According to Hannes Schwaderer, Executive Director of Intel GmbH: "Intel's Itanium 2 architecture is the fastest-growing CPU architecture for HPC deployments. We're pleased by its success at universities, and we're proud to now also provide Dresden with a very powerful system based on the Itanium processor architecture, after having gained the Leibniz Computing Center in Munich as a customer that takes advantage of thousands of our processor cores. The combination of dual-core Itanium 2 CPUs with SGI's innovative shared-memory technology in the Altix systems will provide Dresden with the capability to answer very complex questions."

Two-Phase Delivery -- Starting in Autumn 2005

A third of the total capacity -- memory and processing power -- is planned to be installed in autumn 2005. It will primarily serve ZIH as a preparation environment and allow users to optimize algorithms and prepare themselves for the new possibilities. An SGI Altix 3000 BX2 system will be installed in this first phase. The installation is to be completed in the second phase of the project, scheduled for summer 2006. When the system is completely installed, a next-generation Altix system will have taken over the HPC workload.

Award of Tender After Tough Competition

"This is the third time in a row that Dresden has selected SGI as its preferred HPC partner -- it is a 128-processor SGI Origin 3800 system we currently use for running our HPC shared-memory jobs," explains ZIH Director Nagel. "However, SGI was required to prevail in a tough, very challenging competition. We made our decision in favor of SGI because the company is capable of delivering a system with such a uniquely large shared memory. This is a distinguishing factor -- enabling us to provide our clients with a unique quality of service for their novel and challenging investigations." Nagel concluded: "We will get an extremely balanced and versatile computing and storage complex -- with excellent components and a consistently high level of bandwidth -- that allows us to offer a powerful total resource for challenging new scientific computing problems in the homogeneous as well as the heterogeneous requirement regime."