Appro has announced that it has been awarded a subcontract from Lockheed Martin for 147.5TF of Appro 1U-Tetra supercomputers in support of the DoD High Performance Computing Modernization Program (HPCMP). The HPCMP supports DoD objectives to strengthen national prominence by advancing critical technologies and expertise through the use of High Performance Computing (HPC). Research scientists and engineers benefit from HPC innovation to solve complex US defense challenges.

As a subcontractor to Lockheed Martin, Appro will provide system integration, project management, support and technical expertise for the installation and operation of the supercomputers. Lockheed Martin, as the prime contractor, will provide overall systems administration, computer operations management, applications user support, and data visualization services supporting five major DoD Supercomputing Resource Centers (DSRCs). The agreement is based on a common goal of helping customers reduce the complexity of deploying, managing and servicing commodity High Performance Computing solutions while lowering their total cost of ownership.

The following are the supercomputing centers where Appro clusters will be deployed through the end of 2010:
Army Research Laboratory DSRC at Aberdeen Proving Ground, MD,
US Air Force Research Laboratory DSRC at Wright Patterson AFB, OH,
US Army Engineer Research and Development Center DSRC in Vicksburg, MS,
Navy DoD Supercomputing Resource Center at Stennis Space Center, MS,
Arctic Region Supercomputing Center DSRC in Fairbanks, AK.

“We are extremely pleased to work with Lockheed Martin and be part of providing advanced cluster technologies and expertise in High Performance Computing (HPC) in support of the DoD High Performance Computing Modernization Program (HPCMP),” said Daniel Kim, CEO of Appro. "Lockheed Martin leads its industry in innovation and has raised the bar for reducing costs, decreasing development time, and enhancing product quality for this important government program, and our products and solutions are a perfect fit for their demanding expectations."

Scientists at the Department of Energy’s (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab) have been awarded massive allocations on the nation’s most powerful supercomputer to advance  innovative research in improving the combustion of hydrogen fuels and increasing the efficiency of nanoscale solar cells. The awards were announced today by Energy Secretary Steven Chu as part of DOE’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. 

The INCITE program selected 57 research projects that will use supercomputers at Argonne and Oak Ridge national laboratories to create detailed scientific simulations to perform virtual experiments that in  most cases would be impossible or impractical in the natural world. The program allocated 1.7 billion processor-hours to the selected projects. Processor-hours refer to how time is allocated on a supercomputer. Running a 10-million-hour project on a laptop computer with a quad-core processor would take more than 285 years. 
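The 285-year figure follows from simple arithmetic; here is a minimal sketch of the conversion, assuming the laptop's four cores work through the allocation perfectly in parallel:

```python
# Rough conversion of a processor-hour allocation into wall-clock time.
# Assumption (not from the announcement): all four cores run in parallel
# with no overhead.
allocation_processor_hours = 10_000_000
cores = 4

wall_clock_hours = allocation_processor_hours / cores    # 2,500,000 hours
years = wall_clock_hours / (24 * 365.25)                  # hours -> years
print(f"{years:.0f} years")                               # -> 285
```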

“The Department of Energy’s supercomputers provide an enormous competitive advantage for the United States,” said Secretary Chu. “This is a great example of how investments in innovation can help lead the way to new industries, new jobs, and new opportunities for America to succeed in the global marketplace.” 

Reducing Dependence on Fossil Fuels 

One strategy for reducing U.S. dependence on petroleum is to develop new fuel-flexible combustion technologies for burning hydrogen or hydrogen-rich fuels obtained from a gasification process. John Bell and Marcus Day of Berkeley Lab’s Center for Computational Sciences and Engineering were awarded 40 million hours on the Cray supercomputer “Jaguar” at the Oak Ridge Leadership Computing Facility (OLCF) for “Simulation of Turbulent Lean Hydrogen Flames in High Pressure” to investigate the combustion chemistry of such fuels.

Hydrogen is a clean fuel that, when consumed, emits only water, making it a potentially promising part of our clean energy future. Researchers will use the Jaguar supercomputer to better understand how hydrogen and hydrogen compounds could be used as a practical fuel for transportation and power generation.

Nanomaterials Have Big Solar Energy Potential 
Nanostructures, tiny materials 100,000 times finer than a human hair, may hold the key to improving the efficiency of solar cells – if scientists can gain a fundamental understanding of nanostructure behaviors and properties. To better understand and demonstrate the potential of nanostructures, Lin-Wang Wang of Berkeley Lab’s Materials Sciences Division was awarded 10 million hours on the Cray supercomputer at OLCF. Wang’s project is “Electronic Structure Calculations for Nanostructures.” 

Currently, nanoscale solar cells made of inorganic systems suffer from low efficiency, in the range of 1–3 percent. In order for nanoscale solar cells to have an impact in the energy market, their efficiencies must be improved to more than 10 percent. The goal of Wang’s project is to understand the mechanisms of the critical steps inside a nanoscale solar cell, from how solar energy is absorbed to how it is converted into usable electricity. Although many of the processes are known, some of the corresponding critical aspects of these nanosystems are still not well understood.

Because Wang studies systems with 10,000 atoms or more, he relies on large-scale allocations such as his INCITE award to advance his research. To make the most effective use of his allocations, Wang and collaborators developed the Linearly Scaling Three Dimensional Fragment (LS3DF) method. This allows Wang to study systems that would otherwise take over 1,000 times longer on even the biggest supercomputers using conventional simulation techniques. LS3DF won an ACM Gordon Bell Prize in 2008 for algorithm innovation. 

Advancing Supernova Simulations 
Berkeley Lab’s John Bell is also a co-investigator on another INCITE project, “Petascale Simulations of Type Ia Supernovae from Ignition to Observables.” The project, led by Stan Woosley of the University of California, Santa Cruz, uses two supercomputing applications developed by Bell’s team: MAESTRO, which models the convective processes inside certain stars in the hours leading up to ignition, and CASTRO, which models the massive explosions known as Type Ia supernovae. The project received 50 million hours on the Cray supercomputer at OLCF.

Type Ia supernovae (SN Ia) are the largest thermonuclear explosions in the modern universe. Because of their brilliance and nearly constant luminosity at peak, they are also a “standard candle” favored by cosmologists to measure the rate of cosmic expansion. Yet, after 50 years of study, no one really understands how SN Ia work. This project aims to use these applications to model the beginning-to-end processes of these exploding stars. 

Read more about the INCITE program: http://www.energy.gov/news/9834.htm

Northrop Grumman Corporation has formed the Maximizing and Optimizing Renewable Energy (M.O.R.E.) POWER initiative, which leverages the on-demand supercomputing resources of RMSC (Rocky Mountain Supercomputing Centers) and Northrop Grumman's unique site selection tool to help identify the most efficient and productive networks of wind and solar farms for renewable energy projects. In a proof of concept supported by the Montana Governor's Office of Economic Development, M.O.R.E. POWER was shown to reduce the financing and operating costs of a network of wind energy farms and accelerate their return on investment.

"More than 15 years of research and operations in weather and climate modeling, supercomputing applications, and optimization technology for the U.S. government has been applied to the site selection tool that enables M.O.R.E. POWER, and this technology has now been adapted for wind and solar farm networks," said Dr. Robert Brammer, vice president and chief technology officer for Northrop Grumman's Information Systems sector. "Significant progress is being made in Montana and Northrop Grumman looks forward to a continued partnership with the state and RMSC in support of establishing the Rocky Mountain region as a renewable energy leader in North America."

Renewable energy sources are inherently variable in their power output, which presents integration challenges for the electrical grid. M.O.R.E. POWER lowers the cost of operations by identifying an optimized network of farm locations that minimizes intermittency through site diversity while still maximizing saleable energy. This is accomplished by selecting a distributed network of farms that are not dependent on the same localized wind and/or cloud cover conditions.
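Northrop Grumman has not published the internals of its optimization model, but the trade-off described above can be illustrated with a toy greedy site selector that rewards total output and penalizes the variability of the combined generation. Everything in the sketch below (the data, the scoring rule, the `select_sites` helper) is hypothetical and illustrative, not M.O.R.E. POWER's actual method:

```python
import numpy as np

def select_sites(production, k, penalty=1.0):
    """Greedily pick k candidate farm sites, rewarding mean combined output
    and penalizing the standard deviation of the network-wide generation.
    `production` has shape (n_sites, n_hours)."""
    chosen, remaining = [], list(range(production.shape[0]))
    for _ in range(k):
        def score(site):
            combined = production[chosen + [site]].sum(axis=0)
            return combined.mean() - penalty * combined.std()
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Illustrative data: 20 candidate sites, one year of hourly output.
rng = np.random.default_rng(0)
hourly_output = rng.gamma(shape=2.0, scale=1.0, size=(20, 8760))
print(select_sites(hourly_output, k=5))
```

A real model would also fold in the constraints mentioned below, such as power-grid locations, geographic restrictions and total power generation targets.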

The M.O.R.E. POWER solution employs a network optimization model developed by Northrop Grumman for wind and/or solar farm site selection. The solution uses wind and solar radiation databases developed by Northrop Grumman as the basis for choosing the most productive alternative energy farm locations. The network optimization model and databases are hosted on RMSC computational resources where the actual computations are performed.

"M.O.R.E. POWER answers the 'where' question in green energy development by calculating which combination of candidate farm locations will result in the highest wind or solar energy production and the least variance in power generation," said Earl J. Dodd, executive director for RMSC. "M.O.R.E. POWER was designed for renewable energy developers and investors, as well as state governments and regional energy authorities. Once a wind or solar energy site has been built, this service could also provide operational forecasts to maintain maximum efficiency of the facility, a farm and even multiple farms geographically dispersed."

Northrop Grumman's investment in regional climate modeling is addressing the potential mitigation and adaptation needs of climate change. This investment is also being leveraged with M.O.R.E. POWER to address the needs of renewable energy. M.O.R.E. POWER is capable of finding an optimized network of farm locations for combined wind and solar networks, and of accommodating various constraints including the number of sites, geographic restrictions, power-grid locations and total power generation.

Northrop Grumman and RMSC are collaborating to provide M.O.R.E. POWER services for the state of Montana to help establish governance guidelines for the state's expansion of wind generation.

Nine supercomputers have been tested, validated and ranked by the new “Graph500” challenge, first introduced this week by an international team led by Sandia National Laboratories. The list of submitters and the order of their finish was released Nov. 17 at the SC10 supercomputing conference in New Orleans.

The machines were tested for their ability to solve complex problems involving random-appearing graphs, rather than for their speed in solving a basic numerical problem, today’s popular method for ranking top systems.

“Some, whose supercomputers placed very highly on simpler tests like the Linpack, also tested them on the Graph500, but decided not to submit results because their machines would shine much less brightly,” said Sandia computer scientist Richard Murphy, a lead researcher in creating and maintaining the test.

Murphy developed the Graph500 Challenge with researchers at the Georgia Institute of Technology, University of Illinois at Urbana-Champaign, and Indiana University, among others.

Complex problems involving huge numbers of related data points arise in medicine, where large numbers of medical entries must be correlated; in the analysis of social networks, with their huge numbers of electronically connected participants; and in international security, where huge numbers of shipping containers moving around the world and their ports of call must be tracked.

Such problems are solved by creating large, complex graphs with vertices that represent the data points — say, people on Facebook — and edges that represent relations between the data points — say, friends on Facebook. These problems stress the ability of computing systems to store and communicate large amounts of data in irregular, fast-changing communication patterns, rather than the ability to perform many arithmetic operations. The Graph500 benchmarks are indicative of the ability of supercomputers to handle such complex problems.
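The benchmark's search kernel is a breadth-first search over such a graph, and performance is reported as the rate at which edges are traversed during the search. Below is a minimal sketch of that measurement on a toy adjacency-list graph (illustrative only, not the official Graph500 reference code, which generates far larger synthetic graphs):

```python
import time
from collections import deque

def bfs_edge_rate(adj, root):
    """Breadth-first search from `root` over an adjacency-list graph,
    returning edges examined per second (a rough analogue of the
    'edges traversed per second' figures quoted in the rankings)."""
    visited = {root}
    queue = deque([root])
    edges_examined = 0
    start = time.perf_counter()
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            edges_examined += 1
            if w not in visited:
                visited.add(w)
                queue.append(w)
    elapsed = time.perf_counter() - start
    return edges_examined / elapsed

# Toy graph: vertices are people, edges are friendships.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(f"{bfs_edge_rate(adj, 0):,.0f} edges per second")
```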

The Graph500 benchmarks present problems in different input sizes. These are described as huge, large, medium, small, mini and toy. No machine proved capable of handling problems in the huge or large categories.

“I consider that a success,” Murphy said. “We posed a really hard challenge and I think people are going to have to work to do ‘large’ or ‘huge’ problems in the available time.” More memory, he said, might help.

The abbreviations “GE/s” and “ME/s” used in the rankings below describe each machine’s performance in giga-edges per second and mega-edges per second, that is, a billion and a million edges traversed per second, respectively.

Competitors were ranked first by the size of the problem attempted and then by edges per second.
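In other words, problem scale is the primary sort key and traversal rate only breaks ties, which is why a slower run at a larger scale can finish ahead of a faster run at a smaller one. A small sketch of that ordering, using made-up machine names and illustrative numbers:

```python
# Hypothetical results: (machine, problem scale, edges traversed per second).
results = [
    ("Machine A", 28, 477_500_000),
    ("Machine B", 29, 50_500_000),
    ("Machine C", 36, 6_600_000_000),
]

# Sort by scale first (descending), then by edges per second (descending).
ranked = sorted(results, key=lambda r: (r[1], r[2]), reverse=True)
for rank, (name, scale, eps) in enumerate(ranked, start=1):
    print(f"#{rank}: {name} on Scale {scale} at {eps:,} edges/s")
```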

The rankings were:

Rank #1 – Intrepid, Argonne National Laboratory – 6.6 GE/s on Scale 36 (Medium)

Rank #2 – Franklin, National Energy Research Scientific Computing Center – 5.22 GE/s on Scale 32 (Small)

Rank #3 – cougarxmt, Pacific Northwest National Laboratory – 1.22 GE/s on Scale 29 (Mini)

Rank #4 – graphstorm, Sandia National Laboratories – 1.17 GE/s on Scale 29 (Mini)

Rank #5 – Endeavor, Intel Corporation – 533 ME/s on Scale 29 (Mini)

Rank #6 – Erdos, Oak Ridge National Laboratory – 50.5 ME/s on Scale 29 (Mini)

Rank #7 – Red Sky, Sandia National Laboratories – 477.5 ME/s on Scale 28 (Toy++)

Rank #8 – Jaguar, Oak Ridge National Laboratory – 800 ME/s on Scale 27 (Toy+)

Rank #9 – Endeavor, Intel Corporation – 615.8 ME/s on Scale 26 (Toy)

A more detailed description of the Graph500 benchmark and additional results are available at graph500.org. Any organization may participate in the ratings. The next Graph500 Challenge list is expected to be released at the International Supercomputing Conference 2011 next summer, and then at SC 2011 again in the fall.

Sandia National Laboratories is a multiprogram laboratory operated and managed by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration. With main facilities in Albuquerque, N.M., and Livermore, Calif., Sandia has major R&D responsibilities in national security, energy and environmental technologies, and economic competitiveness.

Appro once again takes a leadership role with the introduction of the new 3U Appro HF1 server, the industry's first high-frequency server based on the Intel Xeon processor 5600 series. The product will be available to order in limited quantities in 2010 and will ship in volume in Q1 2011.

The Appro HF1 high-frequency server is a rack-mount, industry-standard platform offering optimized overclocking capabilities and liquid cooling to maintain stable thermal operation. The server is finely tuned to provide exceptional speed and performance for the financial industry, delivering the reliability and price/performance needed to support high-speed trading applications.

The server offers dual Intel Xeon X5680 processors overclocked to up to 4.4GHz on each CPU core, with a theoretical peak performance of up to 211GF. It offers up to 48GB of memory operating at up to 1440MHz. This server provides extreme processor and memory performance while offering server-class reliability, availability, serviceability and manageability. The Appro HF1 server integrates two on-board Gigabit Ethernet ports and seven PCI-e Gen 2 slots per server, providing ease of integration, network compatibility, and fast deployment. The Appro HF1 server also offers remote server management capabilities, high-speed interconnect options and a variety of configurations, including the Linux operating system.
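The quoted peak is consistent with the usual peak-FLOPS arithmetic for a dual-socket, six-core system at that clock, assuming four double-precision floating-point operations per core per cycle (an assumption about the SIMD units, not a figure taken from Appro's announcement):

```python
# Back-of-the-envelope peak for the HF1 configuration described above.
sockets = 2
cores_per_socket = 6     # the Intel Xeon X5680 is a six-core part
clock_ghz = 4.4          # overclocked frequency cited by Appro
flops_per_cycle = 4      # assumed: 2-wide SSE add + 2-wide SSE multiply

peak_gflops = sockets * cores_per_socket * clock_ghz * flops_per_cycle
print(f"{peak_gflops:.1f} GFLOPS")   # -> 211.2, in line with the quoted ~211GF
```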

Ideal customers are in the financial industry, especially high-frequency and electronic trading operations, hedge funds and proprietary trading firms. Other industries that can take advantage of the high CPU clock frequency and overclocked memory will also benefit from this solution.

"Appro's announcement today represents a fundamental change of how we use technology," said John Lee, vice president of advanced technology solutions at Appro. "Now, customers can have high performance computing that is differentiated for their market segment demands. Appro offers innovative, customer-focused server platforms and cluster technologies that optimize the performance and utilization of IT investments."

"Performance-passionate customers in high frequency trading and financial systems need cutting-edge performance at the boundary of innovation," said Rajeeb Hazra, general manager of high-performance computing at Intel Corporation. "The Appro HF1 server exemplifies Appro's innovation in high performance computing systems. By taking full advantage of the smaller, faster and more energy-efficient transistors in our Intel Xeon Processor 5600 series, the Appro HF1 server offers an excellent platform for high-speed processing deployments and lower-latency solutions."
