Bright Cluster Manager to manage Cray CX1 and other departmental clusters at Sandia National Laboratories

Sandia has chosen Bright Cluster Manager to manage its Cray CX1 desk-side cluster, as well as other departmental clusters. The Cray CX1 system and the Bright Cluster Manager software and services will be delivered by SICORP, a local Bright Computing reseller.

James H. Laros III, Principal Member of the Technical Staff and computer scientist at Sandia, comments: "We are using several clusters of different sizes, architectures and complexities, and they are essential tools for our research. We require a cluster management solution that provides the flexibility to meet our diverse requirements, spanning from production to research and development."

"We are proud to have Bright Cluster Manager selected by Sandia National Laboratories for several of their clusters. Sandia will benefit from the ease of use, rich feature set and flexibility offered by Bright Cluster Manager," said Dr Matthijs van Leeuwen, CEO of Bright Computing.

The traditional approach to cluster management is to use many different tools that each provide only part of the required functionality. The problem with this "toolkit" approach is that the various tools were rarely designed to work together: each tool may have a different user interface, different configuration files, and a different daemon and database. Making all the tools work together requires a lot of expertise and scripting, and rarely leads to a truly easy-to-use and scalable solution.

Bright Cluster Manager takes a more fundamental and integrated approach to cluster management that does not rely on separate, unrelated tools, and therefore leads to an extremely easy-to-use, scalable and flexible solution.

"SICORP's choice for cluster management is Bright Cluster Manager," says SICORP Business Development Manager Craig Hendren. "A reliable and light-weight daemon, dynamic role-setting, and the ease of provisioning clusters are all key factors for us when we integrate clusters for our clients. The developers at Bright Computing have included all of the necessary functionality as well as many nice-to-have features for managing, scaling, monitoring, and supporting clusters."

Bright Cluster Manager is a Linux-based cluster management software solution specifically designed to make HPC clusters of any size easy to install, use and manage. Its intuitive graphical user interface offers consistent access to all management and monitoring functionality for cluster administrators. Its HPC user environment provides a comprehensive range of HPC software development tools for cluster users.

Computer scientists at Sandia National Laboratories in Livermore, Calif., have for the first time successfully demonstrated the ability to run more than a million Linux kernels as virtual machines.

The achievement will allow cyber security researchers to more effectively observe behavior found in malicious botnets, or networks of infected machines that can operate on the scale of a million nodes. Botnets, said Sandia’s Ron Minnich, are often difficult to analyze since they are geographically spread all over the world.

Sandia scientists used virtual machine (VM) technology and the power of the lab's Thunderbird supercomputing cluster for the demonstration.

[Photo caption: Sandia National Laboratories computer scientists Ron Minnich (foreground) and Don Rudish (background) have successfully run more than a million Linux kernels as virtual machines, an achievement that will allow cybersecurity researchers to more effectively observe behavior found in malicious botnets. They utilized Sandia's powerful Thunderbird supercomputing cluster for the demonstration. (Photo by Randy Wong)]

Running a high volume of VMs on one supercomputer — at a similar scale as a botnet — would allow cyber researchers to watch how botnets work and explore ways to stop them in their tracks. “We can get control at a level we never had before,” said Minnich.

Previously, Minnich said, researchers had only been able to run up to 20,000 kernels concurrently (a “kernel” is the central component of most computer operating systems). The more kernels that can be run at once, he said, the more effective cyber security professionals can be in combating the global botnet problem. “Eventually, we would like to be able to emulate the computer network of a small nation, or even one as large as the United States, in order to ‘virtualize’ and monitor a cyber attack,” he said.

A related use for millions to tens of millions of operating systems, Sandia’s researchers suggest, is to construct high-fidelity models of parts of the Internet.

“The sheer size of the Internet makes it very difficult to understand in even a limited way,” said Minnich. “Many phenomena occurring on the Internet are poorly understood, because we lack the ability to model it adequately. By running actual operating system instances to represent nodes on the Internet, we will be able not just to simulate the functioning of the Internet at the network level, but to emulate Internet functionality.”

A virtual machine, originally defined by researchers Gerald J. Popek and Robert P. Goldberg as “an efficient, isolated duplicate of a real machine,” is essentially a set of software programs running on one computer that, collectively, acts like a separate, complete unit. “You fire it up and it looks like a full computer,” said Sandia’s Don Rudish. Within the virtual machine, one can then start up an operating system kernel, so “at some point you have this little world inside the virtual machine that looks just like a full machine, running a full operating system, browsers and other software, but it’s all contained within the real machine.”

The Sandia research, two years in the making, was funded by the Department of Energy’s Office of Science, the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing (ASC) program and by internal Sandia funding.

To complete the project, Sandia utilized its Albuquerque-based 4,480-node Dell high-performance computing cluster, known as Thunderbird. To reach the one-million-kernel figure, Sandia's researchers ran 250 VMs, each booting its own Linux kernel, on every one of Thunderbird's 4,480 physical machines. Dell and IBM both made key technical contributions to the experiments, as did a team at Sandia's Albuquerque site that maintains Thunderbird and prepared it for the project.
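As a quick sanity check, the arithmetic behind the "more than a million" figure can be reproduced from the numbers in the article (the variable names below are illustrative, not drawn from any Sandia configuration):

```python
# Figures reported in the article: 4,480 physical Thunderbird nodes,
# each hosting 250 virtual machines with one Linux kernel apiece.
physical_nodes = 4480
vms_per_node = 250

total_kernels = physical_nodes * vms_per_node
print(f"{total_kernels:,} Linux kernels")  # 1,120,000 Linux kernels
```

At 4,480 × 250 = 1,120,000 concurrent kernels, the demonstration comfortably clears the one-million mark and is roughly 56 times the previous 20,000-kernel ceiling mentioned above.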

The capability to run a high number of operating system instances inside of virtual machines on a high performance computing (HPC) cluster can also be used to model even larger HPC machines with millions to tens of millions of nodes that will be developed in the future, said Minnich. The successful Sandia demonstration, he asserts, means that development of operating systems, configuration and management tools, and even software for scientific computation can begin now before the hardware technology to build such machines is mature.

“Development of this software will take years, and the scientific community cannot afford to wait to begin the process until the hardware is ready,” said Minnich. “Urgent problems such as modeling climate change, developing new medicines, and research into more efficient production of energy demand ever-increasing computational resources. Furthermore, virtualization will play an increasingly important role in the deployment of large-scale systems, enabling multiple operating systems on a single platform and application-specific operating systems.”

Sandia’s researchers plan to take their newfound capability to the next level.

“It has been estimated that we will need 100 million CPUs (central processing units) by 2018 in order to build a computer that will run at the speeds we want,” said Minnich. “This approach we’ve demonstrated is a good way to get us started on finding ways to program a machine with that many CPUs.” Continued research, he said, will help computer scientists to come up with ways to manage and control such vast quantities, “so that when we have a computer with 100 million CPUs we can actually use it.”

ARLINGTON, VA – The following two brief releases highlight recent achievements by the Office of Naval Research. The first development is an implantable computer chip that gives the blind a chance at sight. The second is a new computer network aimed at illuminating all the available Naval medical resources in one visual map display.

Video recordings of the Nov. 2-4, 2011 Cloud Computing Forum & Workshop IV hosted by the National Institute of Standards and Technology (NIST) are now available for on-line viewing.

The three-day November meeting featured, among other highlights, the unveiling of the public draft of the U.S. Government Cloud Computing Technology Roadmap.

The videos from the meeting include:

  • keynote addresses by NIST Director Patrick Gallagher and U.S. Chief Information Officer Steve VanRoekel;
  • presentation on USG Cloud Computing Technology Roadmap Highlights; and
  • panel discussions on:
    • Cloud without Borders: International Perspectives
    • The Case for USG Cloud Computing Priorities
    • USG Security Challenges and Mitigations

To view the workshop videos, go to

HOPKINTON, MA -- EMC Corporation (NYSE:EMC) today reported financial results for the second quarter of 2001, reflecting the extension of its leadership position in the most advanced and fastest-growing areas of the information storage market despite the extremely difficult economic conditions facing many of its customers around the world.
