NCSA at SC06
- Written by: Writer
- Category: INDUSTRY
NCSA participated in the technical program and on the exhibit floor at SC06 in Tampa, Nov. 11-17. Presentations, papers, and posters on advanced computing technology, cyberenvironments, scientific visualizations, and the innovation and discovery they enable are archived here.

MAEviz: A Cyberenvironment for Earthquake Mitigation
Terry McLaren, NCSA

Cyberinfrastructure provides a powerful way to deliver expertise that can reduce delays in bringing new techniques and data to bear on real-world issues. MAEviz, a network-centric hazard risk management cyberenvironment under development by the Mid-America Earthquake Center, NCSA, and the University of Michigan, combines portal, rich-client, grid, workflow, and provenance capabilities to enable systems-level analysis, the coordination of decisions across organizations, and dynamic enhancement of the cyberenvironment itself with the latest data and algorithms from researchers.

Innovative Systems Research at NCSA
Mike Showerman, NCSA

Petascale computing is now a realizable goal that will impact all scientific and engineering applications, not just those requiring the highest level of capability. But the best pathway to petascale computing is unclear. Tomorrow's computing systems may include multi-core processors, reprogrammable logic devices, heterogeneous elements, or other emergent technologies, all of which pose technical challenges that must be addressed before their full potential can be realized. To address these issues, NCSA's Innovative Systems Laboratory (ISL) tests and evaluates the performance of new systems for key scientific and engineering applications. ISL focuses on architectures that promise to significantly decrease the cost of computational science and engineering applications or to greatly extend their range.

NCSA's Vision for the Future of Cyberinfrastructure
Thom Dunning, NCSA Director

NCSA has been a leader in the development and deployment of cyberinfrastructure to support science and engineering for the past 20 years. Today the center focuses on several thrust areas that will further advance computational science and engineering:
- Cyberenvironments will enable researchers to fully exploit available resources -- including computing systems, data stores, instruments, and analysis and visualization tools -- and to extend their collaborations into virtual labs that transcend geographic boundaries.
- Tailored cyber-resources will ensure that computing, data, and networking resources meet researchers' specific needs and that timely, optimal results can be obtained.
- Innovative systems research will determine the pathway to petascale performance, impacting all scientific and engineering applications.
- Advanced scientific visualization provides critical insights into complex systems and brings the thrill of scientific discovery to the public.
- Education and training efforts will ensure that the next generation of scientists and engineers is prepared to leverage the power of cyberinfrastructure.

Galaxy Cluster Simulations at NCSA Using FLASH and Teuthis
Paul Ricker, NCSA and University of Illinois at Urbana-Champaign

Large-scale surveys of galaxy clusters in several wavebands are recognized as a key method for constraining the properties of dark energy. Simulations of galaxy cluster formation are important for these surveys because they allow astronomers to optimize survey design and cluster-finding algorithms, and because they characterize the expected systematic errors in mass-observable relationships. Accomplishing these tasks requires workflow and data management tools designed for use with simulations. We demonstrate the use of Teuthis, a grid-enabled simulation management tool, with the adaptive-mesh code FLASH to initialize, carry out, and analyze a parameter study of galaxy cluster simulations of the type used in mass-observable scatter studies.
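
To make "parameter study" concrete: FLASH runs read their runtime settings from a flash.par file, and a study sweeps such settings across many run directories. The sketch below only illustrates that pattern -- it is not Teuthis code, and the parameter names in it are hypothetical placeholders.

    // Illustrative sketch only (not Teuthis): lay out run directories for a
    // FLASH-style parameter study. The parameter names below are hypothetical;
    // real FLASH runtime parameters are code- and version-specific.
    #include <cstdio>
    #include <filesystem>
    #include <fstream>
    #include <string>
    #include <vector>

    int main() {
        // Hypothetical study axes: cluster mass and gas fraction.
        std::vector<double> masses   = {1e14, 3e14, 1e15};  // solar masses
        std::vector<double> gasFracs = {0.10, 0.15};

        int run = 0;
        for (double m : masses) {
            for (double fg : gasFracs) {
                std::string dir = "run_" + std::to_string(run++);
                std::filesystem::create_directory(dir);
                std::ofstream par(dir + "/flash.par");  // FLASH reads flash.par
                par << "# auto-generated for this study\n"
                    << "cluster_mass = " << m  << "\n"   // placeholder name
                    << "gas_fraction = " << fg << "\n";  // placeholder name
                // A manager like Teuthis would also stage each run onto grid
                // resources, track provenance, and gather outputs for analysis.
            }
        }
        std::printf("generated %d run directories\n", run);
        return 0;
    }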

The Environmental Cyberinfrastructure Demonstrator (ECID) Cyberenvironment

National environmental observatories will soon provide large-scale data from diverse sensor networks and community models. Much attention is focused on piping data from sensors to archives and users, but a need that is often neglected, and is equally critical to the observatories' long-term success, is truly integrating these resources into the everyday research activities of scientists and engineers across the community and enabling their results and innovations to be brought back into the observatory. This talk will give an overview of the Environmental Cyberinfrastructure Demonstrator (ECID) cyberenvironment for observatory-centric environmental research and education. ECID, which integrates five major components -- a collaborative portal, workflow engine, event manager, metadata repository, and social network personalization capabilities -- is designed to address these issues.

Persistent Infrastructure at NCSA: Cyberinfrastructure for Science and Engineering
John Towns, NCSA

The need for high-end computing resources is driven by the increasing fidelity, and attendant sophistication, of the computational models used to describe natural and engineered systems. Increasingly, it is also driven by the growing quantity of data that must be managed, analyzed, visualized, and understood. This talk will give an overview of the 50 teraflops of computing resources, and the attendant data and storage resources, that NCSA provides to the research community, education, and industry. It will also show how this hardware infrastructure is complemented by more than 100 scientific and engineering applications and a dedicated 24/7 support staff.

NCSA and TeraGrid: Where Imagination Becomes Reality
John Towns, NCSA

NCSA is a founding partner of the National Science Foundation's TeraGrid and continues to provide leadership, technology, and user engagement for the program. NCSA has also been the leader in computing capacity, consistently providing the most cycles to users, and it offers unique computing capability via the largest SMP system available on the TeraGrid. This presentation will provide insight into NCSA's role on the TeraGrid in advancing science and engineering research and education.

Motif Network: A Computing Environment for Comprehensive Analysis of Proteins and Functional Domains Using High-Performance Computing
Eric Jakobsson, NCSA

The ability to analyze genome sequences has revolutionized biology. Much of this revolution, however, has occurred via relatively non-intensive computing, and the high-performance computing that has been done has taken place in specialized facilities. This is quite different from the situation for biomolecular simulation, for example, where many users run massive simulations on supercomputers. New forms of biological databases, and their growing size and consequent comprehensiveness, make it possible to perform much richer analysis and draw deeper inferences from biological data. What is needed is for bioinformatics tools to be grid-enabled, and for useful automated workflows to be created, so that the capability of using high-performance computing for genomic analysis can be extended to all biological researchers. The Bioportal project from the Renaissance Computing Institute of North Carolina, in collaboration with NCSA, has produced the prototype of Motif Network, a powerful computational environment for integrated analysis of genes, proteins, and functional domains.
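
The abstract does not describe Motif Network's internals, but the core computation such an environment scales up -- locating functional motifs across many protein sequences -- can be illustrated with a toy exact-match scanner. The sequence and motif below are made up; production analyses use profile- or HMM-based domain scans run in parallel on HPC systems.

    // Toy illustration (not Motif Network code): report every exact
    // occurrence of a short motif in a protein sequence.
    #include <iostream>
    #include <string>
    #include <vector>

    std::vector<size_t> findMotif(const std::string& seq, const std::string& motif) {
        std::vector<size_t> hits;
        for (size_t pos = seq.find(motif); pos != std::string::npos;
             pos = seq.find(motif, pos + 1)) {
            hits.push_back(pos);  // record each match position
        }
        return hits;
    }

    int main() {
        // Made-up sequence and motif, for illustration only.
        const std::string protein = "MSEQGKTAAGSGKTTLWQGKT";
        for (size_t p : findMotif(protein, "GKT"))
            std::cout << "motif at position " << p << "\n";
        return 0;
    }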

Biomedical Computing at NCSA
Ian Brooks, NCSA

As part of its continuing mission to expand the role of high-performance computing, NCSA has initiated a new program focused on the needs of the biomedical community. This talk will describe our plans and preliminary work in three areas of the biomedical field: building a cyberenvironment for infectious disease informatics, which will encompass surveillance, modeling, and response; developing a cyberenvironment for algorithmic medicine, which will concentrate initially on active prescriptions; and medical imaging applications.

Toward Integrating Hardware Accelerators into Applications
Steven Lumetta, University of Illinois at Urbana-Champaign

Hardware-based acceleration of computationally intense portions of applications has demonstrated promise in specific application areas. However, the need for both strong software and hardware skills, along with portability concerns, has deterred widespread adoption of these hybrid systems. Several groups at the University of Illinois have been working to develop models that include compiler and operating system support for partitioning applications across processors and reconfigurable logic. This talk will describe the current state of our infrastructure, highlight some of our successes and publicly available resources, and discuss future directions.

Computational Science on the TeraGrid with LEAD
Brian Jewett, University of Illinois at Urbana-Champaign

This presentation will cover challenges, successful experiences, and future plans for carrying out computational science on the TeraGrid within LEAD. The discussion will include an overview of the LEAD (Linked Environments for Atmospheric Discovery) project and of how we use LEAD tools such as the Ensemble Broker and the Siege client to prepare, launch, execute, and post-process large suites of atmospheric flow simulations on TeraGrid computers. Existing and future capabilities of LEAD and near-term evaluation and testing will also be discussed.

Cyberenvironments and Cyberinfrastructure: Powering Cyber-research in the 21st Century
James Myers, NCSA

The term "cyberenvironment" was coined to describe cyberinfrastructure with a focus on end-to-end scientific productivity and support for ongoing scientific discourse and the science-driven creation and evolution of digital resources. The presentation will detail the vision for cyberenvironments, outline the design patterns and technologies required to create them, and present some early cyberenvironments being developed by NCSA and its collaborators.

Charm++: Scalable Cosmological Simulations and Porting to the Cell
Celso L. Mendes, David Kunzman, and Filippo Gioachin, University of Illinois at Urbana-Champaign

Charm++, a runtime library that allows C++ objects to communicate with each other efficiently in a parallel system, improves the performance of parallel applications while also improving programmer productivity. Using Charm++, parallel applications like the molecular dynamics code NAMD, which won a Gordon Bell award in 2002, have been efficiently scaled to thousands of processors. In this presentation, after briefly introducing the main features of Charm++, we will present results obtained in a real scientific application and describe other ongoing Charm++ developments. First, we will present the performance achieved with ChaNGa, a new cosmological simulator based on Charm++. The performance data collected via automatic tracing by our runtime can be visualized and navigated through Projections, our graphical performance browser. We will show various views enabled in Projections, which help the user understand the parallel behavior of ChaNGa. Next, we will describe our current steps to port Charm++ to the Cell processor. After discussing why Charm++ is a good fit for the Cell, we will describe an API that we developed to enable dispatching of work to the Cell's SPE processors. We will present code based on this API running on a Cell-based blade, along with the corresponding performance achieved.
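
For readers unfamiliar with the model, a Charm++ program pairs an interface (.ci) file -- from which the charmc compiler generates declaration and definition headers -- with ordinary C++ classes ("chares") whose entry methods the runtime can invoke remotely. A minimal hello-world sketch (ours, not from the talk):

    // hello.ci -- Charm++ interface file; charmc generates hello.decl.h
    // and hello.def.h from this:
    //
    //   mainmodule hello {
    //     mainchare Main {
    //       entry Main(CkArgMsg* m);
    //     };
    //   };

    // hello.cpp -- the main chare is ordinary C++:
    #include "hello.decl.h"

    class Main : public CBase_Main {
    public:
        Main(CkArgMsg* m) {
            // Runs once at startup; CkNumPes() reports the processor count.
            CkPrintf("Hello from Charm++ on %d processors\n", CkNumPes());
            CkExit();  // shut down the runtime cleanly
        }
    };

    #include "hello.def.h"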

Computational Science and Engineering Online (CSE-Online): A Grid-Enabled Cyberenvironment for Research and Education in Computational Science
Thanh Truong, University of Utah

Truong will demonstrate CSE-Online, which introduces a paradigm shift: data, tools, and computing resources are delivered transparently to the user's desktop environment. CSE-Online can access computing resources on the TeraGrid and currently offers more than 30 tools for research and teaching in computational chemistry, biology, and materials science.

Prototyping Cyberinfrastructure for Interactive Ocean Observatories
Larry Smarr, California Institute for Telecommunications and Information Technology

The National Science Foundation is in the early stages of funding the deployment of a new generation of global ocean observatories. This presentation will describe work at the University of California-San Diego, the University of Washington, Oregon State University, and partnering institutions on prototyping a Laboratory for the Ocean Observatory Knowledge Integration Grid (LOOKING). This work envisions a service-oriented architecture that will allow remote interactive control of deep undersea instruments connected by fiber optics to the national cyberinfrastructure. NCSA is a partner in this research, focusing on scientific visualization and real-time data analysis. More information can be found at the joint Center for Earth Observations and Applications and California Institute for Telecommunications and Information Technology booth (#1647).

Simulating the Large Synoptic Survey Telescope Processing Grid on TeraGrid
Ray Plante, NCSA

We are taking unique advantage of the distributed resources of the TeraGrid to understand how to move and process data coming from the Large Synoptic Survey Telescope (LSST). This new telescope will begin collecting data in 2014 from a mountaintop in Chile. Each night it will produce 18 TB of raw data, which will be transferred to a base facility for real-time processing and then on to the archive center for extensive processing. We use three TeraGrid sites to represent the three sites of the LSST grid, where we deploy a prototype data management and processing system. This distributed system allows us to test our software architecture, evaluate various grid tools for handling data, and understand the effect of the network on the overall performance of the system.
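
The 18 TB/night figure makes clear why network behavior dominates the design. A back-of-the-envelope check (our arithmetic, assuming decimal terabytes and a full 24-hour transfer window; neither assumption is from the talk):

    // Sustained bandwidth needed to move one night of LSST raw data.
    // Assumptions (ours): 18 TB = 18e12 bytes, spread evenly over 24 hours.
    #include <cstdio>

    int main() {
        const double bytesPerNight = 18e12;        // 18 TB, decimal
        const double seconds       = 24 * 3600.0;  // one day
        const double gbitsPerSec   = bytesPerNight * 8 / seconds / 1e9;
        std::printf("sustained rate: %.2f Gb/s\n", gbitsPerSec);  // ~1.67 Gb/s
        return 0;
    }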

Data Management for the Dark Energy Survey
Greg Daues, NCSA

The Dark Energy Survey (DES; projected 2010-2015) will address the nature of dark energy using four independent and complementary experiments. The DES will produce 200 TB of raw data to be processed into science-ready images and catalogs and co-added into deeper, higher-quality images and catalogs; in total, the DES dataset will approach 1 PB. The data rate, volume, and duration of the survey require a new type of data management system (DMS) characterized by a high degree of automation and robustness on high-performance computing infrastructure. The DES DMS consists of:
- processing pipelines with built-in quality assurance (QA) testing;
- a distributed archive to support automated data processing and calibration within a grid computing environment;
- a catalog archive database to support science analyses;
- web portals for control, monitoring, and scientific analyses; and
- the hardware platforms required for operations.
In this presentation we demonstrate the features of the DMS through two web portals. Through the control portal we highlight advances in astronomical algorithms, pipeline launching on HPC platforms, and distributed pipeline monitoring and QA assessment. The archive portal provides a user-friendly interface for community access to multiple archive sites and databases, enabling the download of image files and catalog data.
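
The abstract names the pattern -- pipelines with QA gates that can halt or flag a run -- without showing an implementation. A minimal sketch of that pattern (entirely illustrative; the stage and check names are hypothetical, not DES code):

    // Illustrative sketch of a pipeline with built-in QA gates: each stage
    // runs in order, and a failed QA check stops the run for operator review.
    #include <cstdio>
    #include <functional>
    #include <string>
    #include <vector>

    struct Stage {
        std::string name;
        std::function<bool()> run;  // returns false when a QA check fails
    };

    int main() {
        std::vector<Stage> pipeline = {
            {"calibrate", [] { /* apply bias/flat corrections */ return true; }},
            {"qa:noise",  [] { /* compare noise to threshold  */ return true; }},
            {"coadd",     [] { /* stack overlapping exposures */ return true; }},
            {"qa:depth",  [] { /* verify target image depth   */ return true; }},
        };
        for (const Stage& s : pipeline) {
            std::printf("running %s\n", s.name.c_str());
            if (!s.run()) {
                std::printf("QA gate failed at %s; flagged for review\n",
                            s.name.c_str());
                return 1;
            }
        }
        std::printf("pipeline complete\n");
        return 0;
    }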