SANTA ANA, CA -- MSC.Software Corp. (NYSE:MNS), a leading global provider of simulation software, services and systems, today announced that Wilmington, MA-based Beacon Power Corporation (Nasdaq:BCON), a leading manufacturer of energy storage systems, utilized MSC.Software simulation tools and professional services to simulate the complex physical stresses associated with a revolutionary energy storage system.

PALO ALTO, CA -- Sun Microsystems, Inc. (Nasdaq: SUNW) today announced another new benchmark world record for PeopleSoft 8 General Ledger. A part of PeopleSoft's industry-leading Financial Management solution, PeopleSoft General Ledger (GL) gives customers extensive financial control, flexible system design, streamlined global processing and audience-relevant reporting.

Watson—the IBM supercomputer that last year mounted an impressive performance against two humans on the game show “Jeopardy”—will be beefing up its healthcare expertise with the help of the Cleveland Clinic.
The hope is to pump as much of the Clinic's medical expertise as possible into Watson and ultimately turn the supercomputer into a useful tool for healthcare providers. The idea is that a single person can't stay on top of all the latest medical research, but Watson's prowess in unlocking answers buried in huge volumes of information could prove helpful in the healthcare space.
“You'd like to bring all that knowledge to bear on the clinical problem and the patient sitting in front of you,” said Dr. James Stoller, chair of the Clinic's education institute. “Watson at its best — if this is successful — would help us at the bedside but not be a good stand-in.”
Watson also could be considered the latest student enrolled in the Clinic's Lerner College of Medicine. Medical students will interact with Watson on hypothetical medical cases as part of a problem-based learning curriculum. The supercomputer will assist students by helping them navigate the latest research and suggest various hypotheses that support potential diagnoses and treatment options.
“We're looking to the Cleveland Clinic to provide expertise and guidance on what kind of content would be appropriate to improve Watson's core capability in understanding the language and knowledge used in medicine,” said Eric Brown, a researcher and manager at the IBM T.J. Watson Research Center.
Students as well as Watson will benefit from the interaction. Over time, Watson will become more adept at understanding medical language and at piecing together evidence to support a diagnosis.
Researchers from IBM and the Clinic will discuss the role of Watson in the healthcare field at 4 p.m. today during the Cleveland Clinic Medical Innovation Summit, which runs through tomorrow at the InterContinental Hotel on the Clinic's main campus. Watson made an appearance at last year's summit and handily beat teams of Clinic cardiologists in a game modeled after “Jeopardy.”

Computer scientists at Sandia National Laboratories in Livermore, Calif., have for the first time successfully demonstrated the ability to run more than a million Linux kernels as virtual machines.

The achievement will allow cyber security researchers to more effectively observe behavior found in malicious botnets, or networks of infected machines that can operate on the scale of a million nodes. Botnets, said Sandia’s Ron Minnich, are often difficult to analyze since they are geographically spread all over the world.

Sandia scientists used virtual machine (VM) technology and the power of the lab's Thunderbird supercomputing cluster for the demonstration. (Photo caption: Sandia National Laboratories computer scientists Ron Minnich, foreground, and Don Rudish, background, who ran more than a million Linux kernels as virtual machines on Thunderbird. Photo by Randy Wong)

Running a high volume of VMs on one supercomputer — at a similar scale as a botnet — would allow cyber researchers to watch how botnets work and explore ways to stop them in their tracks. “We can get control at a level we never had before,” said Minnich.

Previously, Minnich said, researchers had only been able to run up to 20,000 kernels concurrently (a “kernel” is the central component of most computer operating systems). The more kernels that can be run at once, he said, the more effective cyber security professionals can be in combating the global botnet problem. “Eventually, we would like to be able to emulate the computer network of a small nation, or even one as large as the United States, in order to ‘virtualize’ and monitor a cyber attack,” he said.

A related use for millions to tens of millions of operating systems, Sandia’s researchers suggest, is to construct high-fidelity models of parts of the Internet.

“The sheer size of the Internet makes it very difficult to understand in even a limited way,” said Minnich. “Many phenomena occurring on the Internet are poorly understood, because we lack the ability to model it adequately. By running actual operating system instances to represent nodes on the Internet, we will be able not just to simulate the functioning of the Internet at the network level, but to emulate Internet functionality.”

A virtual machine, originally defined by researchers Gerald J. Popek and Robert P. Goldberg as “an efficient, isolated duplicate of a real machine,” is essentially a set of software programs running on one computer that, collectively, acts like a separate, complete unit. “You fire it up and it looks like a full computer,” said Sandia’s Don Rudish. Within the virtual machine, one can then start up an operating system kernel, so “at some point you have this little world inside the virtual machine that looks just like a full machine, running a full operating system, browsers and other software, but it’s all contained within the real machine.”

The Sandia research, two years in the making, was funded by the Department of Energy’s Office of Science, the National Nuclear Security Administration’s (NNSA) Advanced Simulation and Computing (ASC) program and by internal Sandia funding.

To complete the project, Sandia utilized its Albuquerque-based 4,480-node Dell high-performance computer cluster, known as Thunderbird. To arrive at the one million Linux kernel figure, Sandia’s researchers ran 250 VMs, each with its own kernel, on each of Thunderbird’s 4,480 physical machines, for a total of 1,120,000 kernels. Dell and IBM both made key technical contributions to the experiments, as did a team at Sandia’s Albuquerque site that maintains Thunderbird and prepared it for the project.
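The kernel count follows from simple arithmetic, which can be checked with a quick sketch (the variable names are ours, for illustration):

```python
# Back-of-the-envelope check of the scale figures reported by Sandia.
physical_nodes = 4480   # machines in the Thunderbird cluster
vms_per_node = 250      # virtual machines (one Linux kernel each) per machine
total_kernels = physical_nodes * vms_per_node
print(total_kernels)    # 1120000, i.e. "more than a million" kernels
```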

The capability to run a high number of operating system instances inside of virtual machines on a high performance computing (HPC) cluster can also be used to model even larger HPC machines with millions to tens of millions of nodes that will be developed in the future, said Minnich. The successful Sandia demonstration, he asserts, means that development of operating systems, configuration and management tools, and even software for scientific computation can begin now before the hardware technology to build such machines is mature.

“Development of this software will take years, and the scientific community cannot afford to wait to begin the process until the hardware is ready,” said Minnich. “Urgent problems such as modeling climate change, developing new medicines, and research into more efficient production of energy demand ever-increasing computational resources. Furthermore, virtualization will play an increasingly important role in the deployment of large-scale systems, enabling multiple operating systems on a single platform and application-specific operating systems.”

Sandia’s researchers plan to take their newfound capability to the next level.

“It has been estimated that we will need 100 million CPUs (central processing units) by 2018 in order to build a computer that will run at the speeds we want,” said Minnich. “This approach we’ve demonstrated is a good way to get us started on finding ways to program a machine with that many CPUs.” Continued research, he said, will help computer scientists to come up with ways to manage and control such vast quantities, “so that when we have a computer with 100 million CPUs we can actually use it.”

Speeds Hadoop deployments, improves operational efficiencies, and significantly reduces costly IT support and maintenance requirements
Zettaset has announced Fast-PATH, an advanced software configuration management solution that automates and accelerates Hadoop deployment, significantly reducing the over-reliance on costly and time-consuming professional services that burdens today’s Big Data environment.
Hadoop is rapidly evolving, but has not yet reached the level of maturity and sophistication that traditional relational databases offer.  As a result, users expecting lower operational costs by using Hadoop software and infrastructure are surprised to find they must spend enormous sums for software support and maintenance in the form of recurring “subscription” fees.
Zettaset Fast-PATH streamlines Hadoop deployment for the enterprise with software automation that eliminates many manual configuration processes, thereby reducing ongoing support requirements. Fast-PATH automates multiple Hadoop functions, including provisioning, installation, configuration, and testing of the software.  As a result, cluster deployment can be achieved more rapidly, with much less IT intervention and associated cost.
In recent benchmark testing, Zettaset Fast-PATH was able to fully install a 50-node Hadoop cluster in 140 minutes (2 hours and 20 minutes). The benchmark time included installation of the Hadoop distribution, as well as installation of Kerberos, HBase, Hive, Encryption, Key Management, and Zettaset’s patented High-Availability framework on all nodes.
“Hadoop and other Big Data technologies are complex and challenging to set up, sometimes generating large costs for support and maintenance. This is not a scalable model for customers who want to efficiently move Hadoop into production networks,” said CEO Jim Vogt of Zettaset. “Fast-PATH provides Hadoop users with a powerful solution that accelerates time to deployment and simplifies ongoing management, without putting an unnecessary drain on limited IT resources.  We believe this innovation will spur wider adoption of Hadoop and Big Data technology in medium-sized enterprises, as well as in IT organizations that are more highly resourced.”
“Because Hadoop is not a monolithic platform, deploying and securing it has not been a straightforward process,” said Tony Baer, Principal Analyst for Ovum. “Our clients have long been asking for single, unified install capabilities that leverage wizard-based automation to take the bottlenecks and guesswork out of securing Hadoop.”
Zettaset Fast-PATH Facts:
•       Fast-PATH can automatically deploy a fully functional Hadoop cluster, with the nodes, services, and distribution of the customer’s choice, with minimal user intervention
•       Provisions, installs, configures, and tests 10 nodes in 45 minutes
•       Provisions, installs, configures, and tests 50 nodes in 140 minutes
•       Times include installation of the Hadoop distribution, plus Kerberos, HBase, Hive, Encryption, Key Management, and Zettaset’s patented High-Availability framework on all nodes
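As a rough sanity check on those benchmark figures, one can estimate the marginal install time per node. Note that the linear-scaling assumption and the 100-node extrapolation below are our illustrative guesses, not numbers published by Zettaset:

```python
# Rough scaling sketch based on the two published Fast-PATH data points.
# Assumes install time grows roughly linearly with cluster size; that
# assumption is ours and is only loosely supported by two measurements.
benchmarks = {10: 45, 50: 140}  # cluster size (nodes) -> install time (minutes)

extra_minutes = benchmarks[50] - benchmarks[10]   # 95
extra_nodes = 50 - 10                             # 40
minutes_per_node = extra_minutes / extra_nodes    # 2.375 min per added node

# Hypothetical extrapolation to a 100-node cluster under that assumption:
est_100_nodes = benchmarks[50] + minutes_per_node * (100 - 50)
print(f"~{minutes_per_node:.1f} min/node, ~{est_100_nodes:.0f} min for 100 nodes")
```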
Additional Zettaset Orchestrator features and benefits include:
·       High availability – provides automated failover for all Hadoop services and ensures cluster reliability for critical applications
·       Enterprise security – includes encryption for data at rest and in motion, fine-grained role-based access control, and policy enforcement via automated AD/LDAP integration
·       Activity monitoring – for audit, reporting, and compliance purposes
For more information on Zettaset Orchestrator and the new Fast-PATH installer, please visit
