NASA Relies on Supercomputing in America's Quest to Return to Space
- Written by: Writer
- Category: AEROSPACE
With the eyes of scientists and space flight enthusiasts gazing skyward once more, NASA centers from coast to coast are relying on supercomputing technology from Silicon Graphics to help safely realize the agency's first space shuttle mission in two years. The anticipated July launch of Space Shuttle Discovery marks the genesis of a renewed commitment to space exploration in the US -- one aimed at returning man to the moon by 2020 and someday sending astronauts to Mars.

From finding ways to prevent ice from forming on fuel tanks, to analyzing whether and how debris may break off and strike the shuttle's surface, to assessing how re-entry may affect a repair, NASA research and flight centers have spent years diagnosing and then overcoming the potential vulnerabilities unique to shuttle missions. For the kind of compute, visualization and storage technology needed to drive NASA's Return to Flight initiative, five NASA facilities have turned to SGI, which in 2004 rapidly manufactured and deployed NASA's history-making Columbia supercomputer. Named to honor the crew members lost in the Feb. 1, 2003 shuttle accident, Columbia is a powerful asset in NASA's Return to Flight effort, but it is by no means the only one.

Use of SGI technology to support NASA's Return to Flight includes:

- Marshall Space Flight Center. Using SGI visualization and server systems, Marshall engineers are designing a heating unit to be installed on the expansion joints of the shuttle's liquid oxygen line; the heater's design hinders the buildup of ice during launch. Marshall scientists are also analyzing the shuttle's propulsion systems on the same systems.
- Michoud Assembly Facility. This government-owned component of Marshall Space Flight Center is using SGI technology to complete impact analysis simulations of foam, ice and other debris, and to model and analyze the design of the shuttle's external tank.
- Kennedy Space Center. Kennedy's Ice/Debris Facility, where NASA gets its first close-up look at launch films, uses a highly advanced SGI imaging system that allows engineers to analyze launch footage frame by frame, at resolution exceeding HD quality (a simplified sketch of this kind of frame-by-frame review appears at the end of this article). NASA recently upgraded the lab's display system, enhancing the decision support center's ability to assess the effects of debris on the shuttle vehicle during future flights.
- Johnson Space Center. Engineers at Johnson used SGI servers to run sophisticated fluid dynamics calculations as part of their effort to assess the redesigned bipod closeout, the hardware that attaches the shuttle's external fuel tank to the orbiter during liftoff. Results from these simulations are key inputs to the NASA-developed debris trajectory prediction codes (a simplified trajectory sketch appears at the end of this article).
- Ames Research Center. A full range of SGI technologies, including NASA's Columbia supercomputer with its 10,240 Intel Itanium 2 processors, is being used to support several of the agency's Return to Flight activities. These include investigation and analysis of cracks in the main propulsion system's fuel line, aerodynamics studies of the shuttle's ascent, debris transport analyses, development of an automated plotting tool for debris paths, and internal and external aerothermal fluid dynamics studies. The Columbia system is a vital resource for Return to Flight activities underway throughout NASA.

"For more than two decades, SGI and NASA have charted the very frontiers of computing," said Bob Bishop, chairman and chief executive officer, SGI.
"We're proud to continue our collaboration at a most exciting moment in the agency's history." In thousands of exhaustive tests and analyses aimed at modifying the shuttle vehicle to ensure safer lift-off and re-entry, NASA scientists worked with increasingly massive datasets. SGI computing, visualization and storage solutions were particularly helpful in running NASA's complex scientific applications, due in large part to SGI's third-generation NUMAflex architecture. This unique global shared-memory architecture enables researchers to hold large data sets entirely in memory, allowing for faster and more interactive data analysis, and resulting in more incisive conclusions.