Computer scientists at Sandia National Laboratories in Livermore, Calif., have for the first time successfully demonstrated the ability to run more than a million Linux kernels as virtual machines.

The achievement will allow cyber security researchers to more effectively observe behavior found in malicious botnets, or networks of infected machines that can operate on the scale of a million nodes. Botnets, said Sandia’s Ron Minnich, are often difficult to analyze since they are geographically spread all over the world.

Sandia scientists used virtual machine (VM) technology and the power of the lab's Thunderbird supercomputing cluster for the demonstration. (Photo caption: Sandia computer scientists Ron Minnich, foreground, and Don Rudish, background, who ran more than a million Linux kernels as virtual machines on the Thunderbird cluster. Photo by Randy Wong)

Running a high volume of VMs on one supercomputer — at a similar scale as a botnet — would allow cyber researchers to watch how botnets work and explore ways to stop them in their tracks. “We can get control at a level we never had before,” said Minnich.

Previously, Minnich said, researchers had only been able to run up to 20,000 kernels concurrently (a “kernel” is the central component of most computer operating systems). The more kernels that can be run at once, he said, the more effective cyber security professionals can be in combating the global botnet problem. “Eventually, we would like to be able to emulate the computer network of a small nation, or even one as large as the United States, in order to ‘virtualize’ and monitor a cyber attack,” he said.

A related use for millions to tens of millions of operating systems, Sandia’s researchers suggest, is to construct high-fidelity models of parts of the Internet.

“The sheer size of the Internet makes it very difficult to understand in even a limited way,” said Minnich. “Many phenomena occurring on the Internet are poorly understood, because we lack the ability to model it adequately. By running actual operating system instances to represent nodes on the Internet, we will be able not just to simulate the functioning of the Internet at the network level, but to emulate Internet functionality.”

A virtual machine, originally defined by researchers Gerald J. Popek and Robert P. Goldberg as “an efficient, isolated duplicate of a real machine,” is essentially a set of software programs running on one computer that, collectively, acts like a separate, complete unit. “You fire it up and it looks like a full computer,” said Sandia’s Don Rudish. Within the virtual machine, one can then start up an operating system kernel, so “at some point you have this little world inside the virtual machine that looks just like a full machine, running a full operating system, browsers and other software, but it’s all contained within the real machine.”
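For a concrete sense of what "firing it up" involves, the sketch below boots a single VM with its own Linux kernel, using the open-source QEMU/KVM hypervisor as a stand-in; the article does not say which virtualization software Sandia used, and the kernel and initrd file names here are hypothetical.

```python
# Boot one guest Linux kernel inside a VM via QEMU/KVM (a stand-in hypervisor;
# not necessarily what Sandia used). File names are hypothetical.
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",               # use hardware virtualization if available
    "-m", "64",                  # give the guest 64 MB of RAM
    "-kernel", "bzImage",        # hypothetical guest kernel image
    "-initrd", "initrd.img",     # hypothetical initial RAM disk
    "-append", "console=ttyS0",  # send the guest console to the serial port
    "-nographic",                # no display; everything stays in the terminal
])
```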

The Sandia research, two years in the making, was funded by the Department of Energy’s Office of Science, the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing (ASC) program and by internal Sandia funding.

To complete the project, Sandia utilized its Albuquerque-based 4,480-node Dell high-performance computer cluster, known as Thunderbird. To arrive at the one million Linux kernel figure, Sandia’s researchers ran 250 VMs, each booting its own kernel, on every one of Thunderbird's 4,480 physical machines, for a total of more than 1.1 million kernels. Dell and IBM both made key technical contributions to the experiments, as did a team at Sandia’s Albuquerque site that maintains Thunderbird and prepared it for the project.
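The arithmetic behind that figure is straightforward, as the short check below shows:

```python
# Worked numbers behind the "more than a million kernels" figure:
# 250 VMs per physical node, each booting its own Linux kernel,
# across all 4,480 Thunderbird nodes.
vms_per_node = 250
physical_nodes = 4_480
total_kernels = vms_per_node * physical_nodes
print(f"{total_kernels:,} kernels")   # 1,120,000
```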

The capability to run a high number of operating system instances inside of virtual machines on a high performance computing (HPC) cluster can also be used to model even larger HPC machines with millions to tens of millions of nodes that will be developed in the future, said Minnich. The successful Sandia demonstration, he asserts, means that development of operating systems, configuration and management tools, and even software for scientific computation can begin now before the hardware technology to build such machines is mature.

“Development of this software will take years, and the scientific community cannot afford to wait to begin the process until the hardware is ready,” said Minnich. “Urgent problems such as modeling climate change, developing new medicines, and research into more efficient production of energy demand ever-increasing computational resources. Furthermore, virtualization will play an increasingly important role in the deployment of large-scale systems, enabling multiple operating systems on a single platform and application-specific operating systems.”

Sandia’s researchers plan to take their newfound capability to the next level.

“It has been estimated that we will need 100 million CPUs (central processing units) by 2018 in order to build a computer that will run at the speeds we want,” said Minnich. “This approach we’ve demonstrated is a good way to get us started on finding ways to program a machine with that many CPUs.” Continued research, he said, will help computer scientists to come up with ways to manage and control such vast quantities, “so that when we have a computer with 100 million CPUs we can actually use it.”

The National Science Foundation (NSF) has awarded the University of Tennessee, Knoxville, $10 million to develop a computer system that will interpret the massive amounts of data created by the current generation of high-performance computers in the agency's national computer grid.
 
Sean Ahern, a computer scientist with UT Knoxville's College of Engineering and Oak Ridge National Laboratory, will create and manage the Center for Remote Data Analysis and Visualization, which will store and examine data generated by computer simulations like those used for weather and climate, large experimental facilities like the Spallation Neutron Source (SNS), and widely distributed arrays of sensors.
 
"Next-generation computing is now this-generation computing," Ahern said. "What's lacking are the tools capable of turning supercomputer data into scientific understanding. This project should provide those critical capabilities."
 
Ahern and colleagues at UT's National Institute for Computational Sciences will develop Nautilus, a shared-memory computer system that will have the capability to store vast amounts of data, all of which can be accessed by each of its 1,024 processor cores. Nautilus will be one of the largest shared-memory computers in the world, Ahern said. It will be located alongside UT's other supercomputer, Kraken, which is the world's most powerful academic supercomputer.
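The defining feature of a shared-memory machine such as Nautilus is that every processor core can reach the same data in place, rather than having the data carved up and copied between separate nodes. The toy Python sketch below illustrates that model in miniature on a single workstation; it is a generic illustration with made-up sizes, not Nautilus software.

```python
# Several worker processes attach to one block of shared memory and read the
# same array without copying it -- a miniature version of the shared-memory
# model. All names and sizes here are hypothetical.
import numpy as np
from multiprocessing import Process
from multiprocessing.shared_memory import SharedMemory

def worker(shm_name, shape, start, stop):
    shm = SharedMemory(name=shm_name)                       # attach, don't copy
    data = np.ndarray(shape, dtype=np.float64, buffer=shm.buf)
    print(f"rows {start}-{stop}: partial sum = {data[start:stop].sum():.2f}")
    shm.close()

if __name__ == "__main__":
    rows, cols = 1_000, 1_000                               # stand-in for a huge dataset
    shm = SharedMemory(create=True, size=rows * cols * 8)   # 8 bytes per float64
    data = np.ndarray((rows, cols), dtype=np.float64, buffer=shm.buf)
    data[:] = np.random.default_rng(0).random((rows, cols))

    # Four workers, one shared copy of the data.
    procs = [Process(target=worker, args=(shm.name, (rows, cols), i * 250, (i + 1) * 250))
             for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    shm.close()
    shm.unlink()
```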
 
Nautilus will be used for three major tasks: visualizing data results from computer simulations with many complex variables, such as weather or climate modeling; analyzing large amounts of data coming from experimental facilities like the SNS; and aggregating and interpreting input from a large number of sensors distributed over a wide geographic region. The computer also will have the capability to study large bodies of text and aggregations of documents.
 
"Large supercomputers like Kraken working on climate simulation will run for a week and dump 100 terabytes of data into thousands of files. You can't immediately tell what's in there," Ahern said. "This computer will help scientists turn that data into knowledge."
 
Nautilus will be part of the TeraGrid XD, the next phase of the NSF's high-performance network that provides American researchers and educators with the ability to work with extremely large amounts of data.
 
Like Kraken, Nautilus will be part of UT's Joint Institute for Computational Sciences on the ORNL campus.
 
The new machine, manufactured by high-performance computing specialist SGI, will employ the company's new shared-memory processing architecture. It will have four terabytes of shared memory and 16 graphics processing units. The system will be complemented with a one-petabyte file system.
 
Through Ahern and co-principal investigator Jian Huang, UT Knoxville is the lead institution on the project. ORNL will provide statistical analysis support, Lawrence Berkeley National Laboratory will provide remote visualization expertise, the National Center for Supercomputing Applications at the University of Illinois will deploy portal and dashboard systems, and the University of Wisconsin will provide automation and workflow services. Huang is on the faculty of UT Knoxville's Department of Electrical Engineering and Computer Science.
 
Nautilus will be joined by another NSF facility at the University of Texas that will use another data-access technique for analysis. The NSF funded both projects under the American Recovery and Reinvestment Act of 2009.
 
"For many types of research, visualization provides the only means of extracting the information to understand complex scientific data," said Barry Schneider, NSF program manager for the project. "The two awards, one to the Texas Advanced Computing Center at the University of Texas at Austin and the other to NICS at the University of Tennessee, will be deploying new and complementary computational platforms to address these challenges."

Two of the nation’s fastest supercomputers will aid a research team, led by a University of Alabama computational chemist, in guiding both the development of new nuclear fuels and clean-up efforts from past nuclear fuel and weapon production.

The U.S. Department of Energy awarded the team, led by Dr. David Dixon, UA professor and Robert Ramsay Chair of Chemistry, 250 million processor hours on supercomputers at Oak Ridge National Laboratory and Argonne National Laboratory.

“Supercomputer simulations can provide detailed information, at the molecular level, about new types of materials that are going to potentially be used for nuclear fuels,” said Dixon. The simulations build detailed pictures of complex phenomena by using computer codes to solve the complex mathematical equations of quantum mechanics.
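The article does not name the specific codes, but quantum chemistry simulations of this kind ultimately approximate solutions of the many-electron Schrödinger equation, written schematically (with the nuclei held fixed) as

```latex
\hat{H}\,\Psi = E\,\Psi, \qquad
\hat{H} = -\sum_{i}\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
          - \sum_{i,A}\frac{Z_{A}e^{2}}{4\pi\varepsilon_{0}\,r_{iA}}
          + \sum_{i<j}\frac{e^{2}}{4\pi\varepsilon_{0}\,r_{ij}},
```

where the three sums are the electrons' kinetic energy, their attraction to the nuclei, and their mutual repulsion.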

Experiments to test both potential new fuels and nuclear clean-up techniques are expensive, particularly because radioactive materials are involved, so simulating experimental results saves a great deal of money, Dixon said.

“This will help guide the experimentalists to new kinds of systems,” Dixon said. “We are also trying to provide details that the experimentalists may not be able to directly measure.”

Developing better ways to process thorium as a nuclear reactor fuel and gaining a better understanding of uranium chemistry are examples of the group’s goals for the project, Dixon said. Results from the project could also eventually help with clean-ups underway at places like the Hanford Site in Washington or the Savannah River Site in South Carolina and at nuclear reactor facilities that store waste, he said.

Team members of the project, which begins Jan. 1, include researchers from the University at Buffalo, Los Alamos National Laboratory, Washington State University, Lawrence Berkeley National Laboratory, the University of Minnesota, Argonne National Laboratory and Rice University.

The supercomputer time allocations come from the Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program.  The researchers are provided remote access to the supercomputers as well as support from computer experts who design code and optimize it for the supercomputers.

“This computer time grant will also let us improve our understanding of radioactive nuclides to improve our capability to provide energy to the nation and to clean up the environment,” Dixon said.

The research focuses on the actinides, 14 heavy, radioactive chemical elements. Because the nuclei are so heavy, Einstein’s theory of relativity has to be combined with quantum mechanics for a proper treatment. This raises, Dixon said, the computational costs considerably, necessitating access to the most powerful computers in the U.S.
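Concretely, "combining relativity with quantum mechanics" usually means replacing the non-relativistic description of each electron with a Dirac-type operator, or a controlled approximation to it, of the form

```latex
\hat{h}_{\mathrm{D}} = c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta\,m_{e}c^{2} + V(\mathbf{r}),
```

whose four-component structure (carried by the matrices α and β) is a large part of why actinide calculations cost so much more than those for lighter elements.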

“By addressing these problems computationally, we can provide, at less cost, new understanding about the properties of compounds containing the actinides, and that will have a major impact on technology.”

A method smartphones use to simplify images when storage space is limited could help answer tough chemistry problems. In a report appearing in ACS Central Science, researchers apply this technique, called compressed, or compressive, sensing, to quickly and efficiently address central questions, like predicting how molecules vibrate. As these predictions get better and faster, researchers could get closer to the ideal of a "virtual laboratory," which could address many issues without ever lifting a pipette.

Alán Aspuru-Guzik and colleagues explain that compressed sensing has already been applied to experiments to reduce the amount of data that must be collected to reproduce a given signal. But its application to calculations of molecular properties has been limited. Compressed sensing could help by removing zeroes (extraneous information) from matrices, which are arrays of numbers widely used in science to understand and analyze physical phenomena. One of the most computer-intensive calculations that chemists perform using these matrices is the simulation of vibrational spectra, essentially painting a picture of how a molecule bends and stretches. This movement is critical to a chemical's properties. So, Aspuru-Guzik's team decided to apply compressive sensing to address this challenge.
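As a rough illustration of the idea (a generic sketch, not the authors' code), compressed sensing recovers a sparse signal from far fewer measurements than unknowns by favoring solutions with many zeros, typically through L1-regularized regression:

```python
# Generic compressed-sensing demo: recover a sparse vector x from m << n
# random measurements via L1-regularized (Lasso) regression. All sizes and
# the penalty strength are hypothetical choices.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, m, k = 400, 100, 8                       # unknowns, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x_true                              # compressed measurements

model = Lasso(alpha=1e-3, max_iter=50_000)  # L1 penalty promotes sparsity
model.fit(A, y)
x_rec = model.coef_

print("relative recovery error:",
      np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

Here a few hundred unknowns are recovered from a quarter as many measurements because only a handful of entries are nonzero; in the chemists' setting, the analogous sparsity lives in the matrices used to compute vibrational spectra.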

The researchers solved the vibrational spectrum of anthracene, a molecule relevant to molecular electronics, about three times faster with compressed sensing than with traditional methods. Although compressed sensing is a form of approximation, they were able to show that the result was sufficiently accurate. The team also demonstrated that starting from cheap, low-accuracy calculations reduced the number of expensive, high-accuracy ones they needed.

Oklahoma State University has received Phase II funding through Grand Challenges Explorations, an initiative created by the Bill & Melinda Gates Foundation that enables individuals worldwide to test bold ideas to address persistent health and development challenges.  

Drs. Gary Foutch, AJ Johannes and Jim Smay of the chemical engineering department along with Dr. Mason Reichard of the OSU Center for Veterinary Health Sciences will continue to pursue an innovative global health research project, titled "Shear Extrusion to Treat Fecal Waste."  They will be assisted by Jennifer Thomas, a postdoctoral student from CVHS, and Md Waliul Islam and Jagdeep Podichetty, chemical engineering graduate students. 

"In Phase I we documented that extrusion technology could be effective in sanitizing fecal wastes," Foutch said.  "In Phase II the Gates Foundation has asked us to confirm that the technology can destroy Ascaris, also known as giant round worm, which affects about one quarter of the world's population." 

In 2011, Foutch and Johannes were awarded a Phase I grant for "Simple Treatment of Fecal Waste."  They have since added Smay and Reichard to the team.  Grand Challenges Explorations (GCE) Phase II grants recognize successful projects with further funding to test concepts from Phase I.  

These grants seek to engage individuals worldwide who can apply innovative approaches to some of the world's toughest and most persistent global health and development challenges.  GCE invests in early-stage ideas that have the potential to help bring people out of poverty and realize their human potential. 

During Phase I, Foutch and Johannes developed a small-scale device that can effectively disinfect and dewater feces and other solid wastes. The device results in less surface and ground water contamination and reduces the associated spread of disease agents.  Other benefits include odor reduction and less attraction to insects.  

In Phase II, Foutch, Johannes and colleagues will develop and field test a next-generation stand-alone extruder that can effectively sanitize different types of solid sludge in the field, and is also capable of water recovery via evaporation. They will also design a sanitation module from their extruder that can be incorporated into the Omni-Ingestor technology, which is a modular system that combines sanitation with waste removal and transportation. 
