Fermilab Receives SciDAC Funds for Advanced Computing Tools in Physics Research
by Mike Perricone, Fermilab Office of Public Affairs

Fermilab physicists and computer scientists can stake a larger claim to the future of high-energy physics—and to the next generation of computing—thanks to their part in the first-ever awards in DOE’s Scientific Discovery through Advanced Computing Program (SciDAC), announced August 14 in Washington, D.C., by Secretary of Energy Spencer Abraham.

Through the SciDAC awards, Fermilab will receive approximately $1.28 million a year for the next three years as a participant in three nationwide collaborations: the Particle Physics Data Grid (PPDG); Advanced Computing for 21st Century Accelerator Science and Technology; and the National Computational Infrastructure for Lattice Gauge Theory.

Together, the efforts share ambitious scientific goals: creating computer tools that will allow physicists to work at their home base with up-to-the-second experimental data from sources anywhere in the world, and adapting those access tools to design the high-energy physics discovery machines of the future more efficiently and economically.

“We know that there are basic questions of nature that can only be answered by building new accelerators to reach higher energy levels,” said Fermilab Computing Division Head Matthias Kasemann. “These facilities will absolutely require world-wide collaboration, and computing is a critical element in making this collaboration work.”

As the major next step enabled by the SciDAC awards, the PPDG, Advanced Computing and Lattice Gauge Theory collaborations will help create a new generation of scientific simulation codes for “terascale” computers: machines capable of performing trillions of operations per second (“teraflops”) while handling trillions of bytes of data (“terabytes”).

The collaborations will integrate terascale computing into developing the concept of a “collaboratory,” or true collaborative laboratory. Scientists at far-flung institutions could work as if they were side by side at a central site, with the collaboratory distributing data for sharing in the analysis of particle physics experiments; in the design and development of future particle accelerators in new energy realms; and in the continuing expansion of theoretical principles and calculations in quantum chromodynamics (QCD), the sector of the Standard Model that describes the strong force between particles.

“The requirements of high-energy physics push computing technology to ever-higher levels and offer extraordinary opportunities for collaborative efforts,” said Fermilab Director Michael Witherell. “SciDAC represents a promising new approach to integrating science and computing at the Department of Energy.”

Lattice Gauge: Calculating a Strategy

The elementary particles called quarks are held together by a force called the strong force. The mathematical description of that force is called Quantum Chromodynamics (QCD); the computational method is called Lattice Gauge Theory. Studying quarks and the forces among them is a continuing challenge.

“The only way to get meaningful results in QCD is through large-scale numerical simulations,” said theorist Robert Sugar of the University of California at Santa Barbara, principal investigator of the Lattice Gauge Theory collaboration. “In recent years, there have been major advances in the methods for these calculations,” Sugar continued, “and the Fermilab theory group has been a leader in developing the technologies to make these calculations feasible. But, to be effective, these methods require big increases in computing power.”
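To give a flavor of what large-scale numerical simulation means here, the toy sketch below is a lattice gauge calculation in Python. It uses simple U(1) phases on an invented 8^4 lattice in place of QCD’s SU(3) matrices and computes the average “plaquette,” the smallest loop observable; it is meant only to show why the work grows with the four-dimensional volume, not to represent the collaboration’s actual codes.

# Toy illustration of why lattice gauge theory is compute-hungry.
# Real QCD places SU(3) matrices on every link of a 4-D lattice; here we use
# U(1) phases (a common teaching stand-in) just to show the structure:
# gauge fields live on links, observables are built from small loops
# ("plaquettes"), and the work grows with the 4-D volume.
import numpy as np

L = 8                                   # sites per direction (tiny by research standards)
rng = np.random.default_rng(0)
# theta[mu, x, y, z, t] is the U(1) link angle leaving each site in direction mu
theta = rng.uniform(0, 2 * np.pi, size=(4, L, L, L, L))

def shift(field, mu):
    """Field translated by one lattice spacing in direction mu (periodic)."""
    return np.roll(field, -1, axis=mu)

def average_plaquette(theta):
    """Mean of cos(plaquette angle) over all sites and plane orientations."""
    total, count = 0.0, 0
    for mu in range(4):
        for nu in range(mu + 1, 4):
            # angle around the elementary square in the (mu, nu) plane
            plaq = (theta[mu] + shift(theta[nu], mu)
                    - shift(theta[mu], nu) - theta[nu])
            total += np.cos(plaq).sum()
            count += plaq.size
    return total / count

print(f"average plaquette on an {L}^4 lattice:", average_plaquette(theta))
# Physics-quality runs use lattices of roughly 32^4 or larger and repeat sweeps
# like this millions of times, which is why terascale machines are needed.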
Fermilab physicist Paul Mackenzie is one of the institutional leaders in the Lattice Gauge Theory collaboration, which has 65 members at 11 institutions (Fermilab, University of Illinois at Urbana-Champaign, University of California-Santa Barbara, M.I.T., Boston University, Columbia University, University of Washington, University of Utah, University of Arizona, Brookhaven National Lab, Thomas Jefferson Lab)—“pretty much the entire U.S. lattice gauge community,” Mackenzie said.

“Fermilab’s role will be the development of large, cost-effective clusters of commodity computers for lattice calculations,” Mackenzie said. “Essentially, these are ‘off-the-shelf’ computers. But we will be integrating the machines to make them perform together.”

Fermilab’s collider experiments are experienced at using computer clusters and have continued developing the concept for Collider Run II of the Tevatron. The Lattice Gauge Theory effort, already underway at the New Muon Lab on site, will also have a major focus on producing the clusters more economically. Mackenzie put the overall goal at reducing costs from the current $10-$20 per megaflop down to as little as a dollar per megaflop. (At a dollar per megaflop, a teraflop-scale machine would cost on the order of a million dollars rather than tens of millions.)

Fermilab built its own homemade supercomputer, called ACPMAPS, for lattice QCD calculations in the last decade, and similar computers have been built at Columbia and M.I.T. But the SciDAC program was the catalyst for these various groups to get together and apply for joint funding.

“The U.S. has traditionally been a leader in the field of QCD,” said Sugar, “but recently our nation has lagged behind the Europeans and Japanese in the computing facilities required for this research. The SciDAC grant will allow us to take some very important first steps in creating this needed infrastructure.”

When the computer clusters are built, the huge amounts of data they generate must be stored somewhere—and physicists must be able to find what they need. SciDAC will fund an effort in Storage Resource Management: placing the data on tapes, for example, and using robots to carry out physicists’ commands for search and retrieval. “It means matching tens of thousands of tape cartridges to tens of thousands of tape drives, and doing it quickly and reliably,” said Donald Holmgren of Fermilab’s Computing Division. “We’ll be developing software for a physicist to issue a command to find the kind of data needed, instead of ‘Go To Tape Number 123456.’”

The Lattice Gauge Theory collaboration will build computer facilities at Fermilab, Brookhaven (in conjunction with Columbia University) and Thomas Jefferson National Accelerator Facility to serve as hubs making data available to scientists and students at widespread locations.

The Grid: Crossing Borders and Boundaries

The World Wide Web was developed to exchange information among particle physicists, but particle physics experiments now generate more data than the Web can handle. So physicists often put data on tapes and ship the tapes from one place to another—an anachronism in the Age of the Internet. But that’s changing, and SciDAC will accelerate the change.

A major element of Fermilab’s contributions to the Compact Muon Solenoid (CMS) detector being built for CERN, the European Particle Physics Laboratory, is the formulation of a distributed computing system for widespread access to data when CERN’s Large Hadron Collider begins operation later this decade.
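As a rough sketch of the data-grid idea behind such a distributed computing system, the hypothetical Python fragment below shows “middleware” resolving a logical dataset name to the cheapest available copy, so a physicist asks for data by what it is rather than where it is stored. The dataset label, site names and costs are invented for illustration; real grid middleware also handles authentication, transfer protocols and tape staging.

# Minimal sketch of the data-grid idea: a replica catalog maps a logical
# dataset name to the sites that hold copies, and simple "middleware" picks
# the cheapest copy to read. Site names, datasets, and costs are invented.
from dataclasses import dataclass

@dataclass
class Replica:
    site: str          # where a copy of the data lives
    cost: float        # relative cost to reach it (bandwidth, latency, tape staging)

# Hypothetical replica catalog: logical name -> physical copies
CATALOG = {
    "run2/dzero/jets-2001": [
        Replica("fnal-enstore-tape", cost=10.0),   # local, but must be staged from tape
        Replica("fnal-disk-cache", cost=1.0),      # local disk cache
        Replica("in2p3-disk", cost=5.0),           # copy at a European site
    ],
}

def locate(dataset: str) -> Replica:
    """Return the cheapest available replica of a logical dataset."""
    replicas = CATALOG.get(dataset)
    if not replicas:
        raise KeyError(f"no replica registered for {dataset!r}")
    return min(replicas, key=lambda r: r.cost)

# A physicist asks for data by what it is, not where it is:
best = locate("run2/dzero/jets-2001")
print(f"fetch from {best.site} (relative cost {best.cost})")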
Fermilab’s DZero experiment has established its own computing grid, called SAM, which offers access to experiment collaborators at six sites in Europe. With SciDAC support, the nine-institution PPDG collaboration (Fermilab, SLAC, Lawrence Berkeley Lab, Argonne, Brookhaven Lab, Thomas Jefferson National Accelerator Facility, Caltech, University of Wisconsin, University of California-San Diego) will develop the distributed computing concept for particle physics experiments at the major U.S. high-energy physics research facilities. Both DZero and the US/CMS collaboration are member experiments.

The goal: to offer access to the worldwide research community, developing what’s called “middleware” to make maximum use of the bandwidth, or transmission capacity, available on the network. For example, middleware can act as a sort of “search engine” for resources when a physicist isn’t sure of the location of needed data. Middleware can determine the best access point for the data, and the best way to transmit it over the network. Instead of using one process at one speed, middleware can manage different processes at different speeds simultaneously to make the best use of the available bandwidth.

The PPDG collaboration will use the computing resources of the experiments themselves (such as DZero at Fermilab, BABAR at SLAC, and US/CMS, which included the data grid concept in its original planning) for distribution. By reducing travel as a requirement for access, the PPDG hopes to dramatically increase participation by students and educational use by universities.

“This is a very exciting opportunity for people in high-energy physics software development to collaborate with leaders in the computer science field at the universities,” said computer scientist Ruth Pordes of Fermilab’s Computing Division, who serves as the collaboration coordinator. “All these proposals offer a new dimension to collaboration across laboratories and across technical and scientific domains.”

Kasemann described Fermilab as providing the “test bed” as the distributed computing concept meets the real-world demands of high-energy physics. Fermilab’s experiments will offer the first implementation and the first trials of the PPDG, but Kasemann pointed out that the benefits aren’t directed only at decentralized experimenters: access to distributed computing power will expand computing capabilities beyond those available at a single central site. “We as physicists need this greater computing power to fulfill our mission,” Kasemann said.

Advanced Computing for 21st Century Accelerators

The accelerators that fuel the demands for expanded computing power are also demanding greater computing power in their design and operation. The collaboration on Advanced Computing for 21st Century Accelerator Science and Technology involves 14 institutions (Fermilab, SLAC, Los Alamos National Lab, Lawrence Berkeley National Lab, Brookhaven National Lab, Thomas Jefferson Lab, Stanford University, UCLA, University of California-Berkeley, University of Maryland, USC, Tech-X, Sandia National Lab, University of California-Davis), developing high-performance simulation codes to use existing accelerators more efficiently and to help streamline the design of future accelerators.
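For a sense of what those simulation codes compute, the toy Python example below tracks a single particle’s transverse position and angle through a repeating cell of focusing and defocusing magnets using 2-by-2 transfer matrices. The focal length and drift length are invented, and production accelerator codes track millions of particles with detailed electromagnetic fields; this is only a minimal illustration of the kind of calculation involved.

# Toy beam-dynamics tracking: transport a particle's transverse coordinates
# (position x, angle x') through a repeating focusing/defocusing ("FODO")
# magnet cell using 2x2 linear transfer matrices. Parameters are invented.
import numpy as np

def drift(length):
    """Transfer matrix for a field-free drift of the given length (meters)."""
    return np.array([[1.0, length],
                     [0.0, 1.0]])

def thin_quad(focal_length):
    """Thin-lens quadrupole: focusing if focal_length > 0, defocusing if < 0."""
    return np.array([[1.0, 0.0],
                     [-1.0 / focal_length, 1.0]])

# One FODO cell: focusing quad, drift, defocusing quad, drift (hypothetical values)
f, L_drift = 2.5, 1.0
cell = drift(L_drift) @ thin_quad(-f) @ drift(L_drift) @ thin_quad(f)

# Track a particle that starts 1 mm off-axis with zero angle through 100 cells
state = np.array([1e-3, 0.0])
for turn in range(100):
    state = cell @ state
print(f"after 100 cells: x = {state[0]*1e3:.3f} mm, x' = {state[1]*1e3:.3f} mrad")

# Stability check: the motion stays bounded when |trace(cell)| < 2
print("stable lattice:", abs(np.trace(cell)) < 2)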
The Fermilab effort will develop simulation software for improving the performance of the lab’s accelerators, and will contribute to design studies for future machines and improvements, including a proton driver upgrade, ionization cooling for neutrino factories and muon colliders, and the Linear Collider, one of the possibilities for the next generation of high-energy physics accelerators.

“It’s more cost-effective to simulate accelerators than to build prototypes that may require several changes,” said Fermilab physicist Panagiotis Spentzouris. “It’s also important to note the opportunities that are opened up for students. I have two computer science students from Kansas State University working with me.”

SciDAC is an integrated program to help create a new generation of scientific simulation codes. All told, the SciDAC awards will offer $57 million to 51 projects in this fiscal year, advancing fundamental research in such areas as climate modeling, fusion energy sciences, chemical sciences and nuclear astrophysics, as well as high-energy physics and high-performance computing.

For more information visit www.fnal.gov or www.science.doe.gov/scidac