Lab physicist Ron Soltz is part of an international team seeking to go back in time 14 billion years to the origins of the universe. In what would have been science fiction not so long ago, scientists may now have the means to time travel experimentally -- the Large Hadron Collider (LHC) particle accelerator at CERN in Switzerland.

From left: Teresa Kamakea, Jeff Cunningham (seated) and Ron Soltz examine a map showing the international ALICE collaboration on the Green Linux Compute Cluster.

Soltz and LLNL colleagues have been involved in the design and now the operation of "A Large Ion Collider Experiment," or ALICE, one of the four principal particle detector experiments on the LHC accelerator, a 17-mile-long magnetic ring that can smash particles together at the highest energy ever achieved. Massive particle detectors record and analyze the collision debris as part of an effort to better understand the origins and the very substance of the universe (see the April 2, 2010 edition of Newsline).

"We are recreating what the universe looked like microseconds after the Big Bang," said Soltz, who serves as the computing coordinator for the DOE Office of Science, funded institutions in ALICE. "And creating the hottest, densest matter ever; the heaviest atoms at the highest energy.

"To figure out what happened requires collecting and analyzing enormous amounts of data from the experiment and to do this we need a lot of high-performance computing power," he said.

LLNL has partnered with Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center (NERSC) to create a Tier-2 U.S. site on the grid to manage storage and processing of experimental data from ALICE. In a project funded by the DOE Office of Nuclear Physics, LLNL and LBNL are providing the primary computing and storage resources for ALICE collaborators in North and South America. These computing resources are being made available to the more than 1,000 collaborators worldwide using the ALICE Grid.

ALICE is the only one of the four experiments built specifically to measure the collisions of lead ions, which will occur during one month out of every year. The data collected from proton collisions during the rest of the LHC running will be used by ALICE for baseline measurements, in contrast to the other LHC experiments, which will be searching for the Higgs particle and physics beyond the Standard Model. ALICE will collect some 10 terabytes of data per day, 10 percent of which will be transmitted from Switzerland to LLNL and LBNL's NERSC over DOE's Energy Sciences Network (ESnet).
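As a rough back-of-the-envelope check (a sketch only; the 10-terabyte and 10-percent figures come from the article, while decimal terabytes and a perfectly steady 24-hour transfer are assumptions), that works out to about one terabyte per day crossing the Atlantic, or a sustained rate of a little under 100 megabits per second:

    # Python sketch: sustained rate implied by "10 TB/day, 10 percent transmitted".
    daily_data_tb = 10.0          # data recorded by ALICE per day, in terabytes
    transmitted_fraction = 0.10   # share sent to LLNL and NERSC over ESnet

    transmitted_bits_per_day = daily_data_tb * transmitted_fraction * 1e12 * 8
    seconds_per_day = 24 * 60 * 60

    sustained_mbit_s = transmitted_bits_per_day / seconds_per_day / 1e6
    print(f"~{sustained_mbit_s:.0f} Mbit/s sustained")   # prints ~93 Mbit/s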

LLNL's ALICE cluster, installed in September, is called the Green Linux Compute Cluster, or GLCC, and sits on Livermore Computing's special 10-gigabit-per-second collaboration network that also houses the Green Data Oasis. The compute nodes are from Dell (basically identical to Sierra nodes, but without any interconnects) and the storage is from SuperMicro. The system totals about 10 teraFLOPS (trillion floating-point operations per second). This cluster is now the third-largest cluster in ALICE as measured by jobs completed per 24-hour period, and the associated 650-terabyte storage element used to store processed data is among the most heavily used. The cluster is currently processing a final round of simulations in preparation for the first heavy-ion collision data, expected in November.

"LHC experiments such as ALICE generate so much data it is not possible to store and process it all in one place, so CERN is using the grid to funnel jobs around the world. These experiments require an international collaboration," Soltz said, noting that "Lawrence Berkeley and Livermore laboratories have a history of supporting ALICE science. Researchers from both labs have already collaborated to design and build the experiment's electromagnetic calorimeter detector, and the partnership between NERSC and GCE to provide computing resources is an extension of that tradition."


A view of the LHC cryo-magnet in the tunnel.
Photo credit: Maximilien Brice/CERN

Livermore will concentrate on analyzing ALICE data and understanding the underlying science. By studying what happens when nuclei of lead atoms smash together at nearly the speed of light, generating mini fireballs orders of magnitude hotter than the core of the sun, scientists hope to recreate the quark-gluon plasma state of matter that first existed nearly 14 billion years ago, just millionths of a second after the Big Bang. Physicists believe this will provide insight into the very nature of the matter that makes up the universe.

"Jeff Cunningham and Jeff Long of Livermore Computing have done an outstanding job getting the Green Linux Compute Cluster up and into production," Soltz said. "This opens a way for us to enter an exciting new realm of physics."

The GLCC represents Livermore Computing's first grid computing project involving international collaboration. It was made possible by the Green Collaboration Environment, established to support international collaborations, notably the Program for Climate Model Diagnosis and Intercomparison (PCMDI) and Hyperion, the high-performance computing testbed developed in collaboration with industrial partners. GLCC also represents the first time the DOE Office of Science has invested in computing at LLNL.

The EDGI (European Desktop Grid Initiative) project is looking for four subcontractors:
  • Two subcontractors to bring in new application user communities;
  • Two subcontractors to act as infrastructure providers.

The two infrastructure subcontractors are expected to extend the resources the original partners bring into the EDGI infrastructure.

These subcontractors can bring the following types of resources:

  1. Academic cloud;
  2. Local desktop grid (e.g. a university-level desktop grid) that is ready to support the Service Grid applications ported by EDGeS and those that will be ported by EDGI;
  3. Existing public desktop grid that is ready to support the Service Grid applications ported by EDGeS or that will be ported by EDGI.

The selection criteria include the size of the resource the subcontractor will bring into the infrastructure.

The two application user community subcontractors are expected to extend the number of applications ported to the EDGI infrastructure with new application domains. The selection criteria include the size of the community using the application domain. For all four subcontractors, a common task is to contribute to the dissemination of EDGI technology through their user communities.

The EDGI project
The goal of the EDGI project is to extend the EDGeS (Enabling Desktop Grids for eScience) infrastructure with scientific clouds and with extensions to Unicore- and KnowArc-based Grids. The EDGeS infrastructure already connects gLite-based Service Grids to Desktop Grids using BOINC, XtremWeb or OurGrid Desktop Grid technology. There is already a portfolio of two dozen scientific applications that have been ported to this infrastructure.

We expect that unlocking vast numbers of Grid computing resources is of interest to new scientific user communities, and that linking an existing computational infrastructure to the European e-Infrastructure is of interest because of the new opportunities it offers to users of these infrastructures. The EDGI project started on 1 June 2010.

As part of the EDGI project, we are issuing a call for subcontractors in order to get in touch with potential new user communities and infrastructure providers.

What do we expect?
We are looking for organisations that represent a user group with a real computing need that can potentially be fulfilled by a large-scale Grid, and for existing infrastructures that want to serve a large user community better by connecting to the EDGI infrastructure.

Organisations that have an application with a potentially large impact are also considered.

Our call is not about Grid research; we are not looking for basic Grid algorithms or basic application development, nor are we looking to extend the software foundations of Grids. The application must be proven and mature in itself; the only remaining need should be porting it to the EDGI Grid. The infrastructure of a new infrastructure provider must already exist. Nor are we looking for organisations that have an application they use only themselves, or for isolated, very small infrastructures. There must be a (potentially) large user community, preferably Europe-wide, requiring the application or needing the infrastructure.

We expect that the organisation submitting a response to this call has full control of, or access to, the application or the infrastructure, and that it has the technical capability and the commitment to participate actively in making the application Grid-enabled or in connecting the infrastructure. We expect that by the end of the EDGI project the application will be fully operational on the Grid and benefiting users, and that the connected infrastructures will be able to exchange jobs automatically.

The successful applicant is also expected to bring into the project at least a matching unfunded effort (more effort will receive a higher score during the evaluation of applications). Furthermore, subcontractors are expected to participate in at least two project meetings and one dissemination event.

What we offer
What we (the EDGI project) offer is, for applications, analysis of the application, assistance in making it Grid-enabled, testing on our Grid and later porting onto the production EDGI Grid; and, for infrastructures, analysis of the connection possibilities through the EDGeS 3G Bridge, assistance in writing or configuring a connector, and testing of a workflow on the combined infrastructure. In addition, we offer a financial contribution of 20,000 euro. Successful applicants will be invited to join the EDGI project as subcontractors, following the rules laid out by the European Commission for subcontracts in Framework 7 projects.

This call offers applicants a unique opportunity to supply users or user communities with a very large amount of computational resources in the near future, and to have an application that is Grid-enabled, i.e. one that can run on almost any Grid in the world. If you connect your infrastructure, it becomes part of the European infrastructure for eScience. Being part of the EDGI consortium also gives you access to a number of Europe's leading Grid experts and provides additional visibility for you and your user community.

Even if you are not selected as an application user community subcontractor (we can only select two), you can still benefit from collaborating with EDGI in porting your application(s), since in that case EDGI will port your application(s) to the EDGI infrastructure, requiring only minimal consultancy from your side. The same is true for infrastructure providers.

This call is open both to academic organisations, which we expect to represent a large Europe-wide user community, and to industrial organisations, which we expect to be part of a broad user community that includes at least several user companies. In both cases we expect the potential impact of applying Grid/Cloud technology to be very high.

The annex to the printed version of this call describes the formal application and open selection procedure and contains the application form to be filled out. You can also find this on the EDGI project website.

Please feel free to contact the EDGI consortium or its representative at any time. Our goal is to select the best possible candidates, and we will therefore help you fill in the application form.

The EDGI consortium
http://edgi-project.eu
mailto:edgi-asc@mail.edgi-project.eu

Presentation will address the role of today’s optical transport networks in implementing a successful cloud strategy

Stephan Rettenberger, Vice President of Marketing at ADVA Optical Networking, will be presenting "Cloud Computing – Opportunities and Challenges for Service Providers" at the IIR WDM and Next-Generation Optical Networking Conference 2010, June 14-17, 2010 at the Fairmont Monte Carlo, Monaco.

Europe's premier optical networking event moves to Monaco for 2010 and will once again bring together over 400 optical professionals for four days of structured networking and debate. The educational operator-led program and professionally organized networking agenda make WDM & Next Generation Optical Networking an unmissable event in the industry calendar.

DETAILS:

Cloud computing – Opportunities and challenges for service providers

This presentation will address:

• Cloud computing trends changing the ICT industry

• The role of the transport network for enabling cloud services

• Service provider opportunities to capitalize on the cloud revolution

• Ways to increase efficiency by evolving intra-cloud connectivity networks

• Enabling new applications through reliable and secure access to cloud resources

WHEN:            Thursday, June 17, 2010, 1.50-2.15 pm

WHERE:         Main conference hall

Supercomputer simulations at the Department of Energy’s Oak Ridge National Laboratory are helping scientists unravel how nucleic acids could have contributed to the origins of life.

A research team led by Jeremy Smith, who directs ORNL’s Center for Molecular Biophysics and holds a Governor’s Chair at the University of Tennessee, used molecular dynamics simulation to probe an organic chemical reaction that may have been important in the evolution of ribonucleic acids, or RNA, into early life forms.

Certain types of RNA called ribozymes are capable of both storing genetic information and catalyzing chemical reactions – two necessary features in the formation of life. The research team looked at a lab-grown ribozyme that catalyzes the Diels-Alder reaction, which has broad applications in organic chemistry.

“Life means making molecules that reproduce themselves, and it requires molecules that are sufficiently complex to do so,” Smith said. “If a ribozyme like the Diels-Alderase is capable of doing organic chemistry to build up complex molecules, then potentially something like that could have been present to create the building blocks of life.”

The research team found a theoretical explanation for why the Diels-Alder ribozyme needs magnesium to function. Computational models of the ribozyme’s internal motions allowed the researchers to capture and understand the finer details of the fast-paced reaction. The static nature of conventional experimental techniques such as chemical probing and X-ray analysis had not been able to reveal the dynamics of the system.

“Computer simulations can provide insight into biological systems that you can’t get any other way,” Smith said. “Since these structures are changing so much, the dynamic aspects are difficult to understand, but simulation is a good way of doing it.”

Smith explained how their calculations showed that the ribozyme’s internal dynamics included an active site, or “mouth,” which opens and closes to control the reaction. The concentration of magnesium ions directly impacts the ribozyme’s movements.

“When there’s no magnesium present, the mouth closes, the substrate can’t get in, and the reaction can’t take place. We found that magnesium ions bind to a special location on the ribozyme to keep the mouth open,” Smith said.
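As a purely illustrative sketch of the kind of trajectory analysis involved (the file names, atom indices and "open" threshold below are hypothetical and are not taken from the published study), one could track the width of such an active-site "mouth" frame by frame and compare runs with and without magnesium:

    import numpy as np

    def mouth_opening(traj_xyz, idx_a, idx_b):
        """Distance between two atoms flanking the active-site 'mouth' for
        every frame of a trajectory shaped (n_frames, n_atoms, 3)."""
        return np.linalg.norm(traj_xyz[:, idx_a, :] - traj_xyz[:, idx_b, :], axis=1)

    # Hypothetical inputs: coordinate arrays for runs with and without Mg2+,
    # two atoms on opposite rims of the mouth, and an arbitrary threshold
    # (in angstroms) above which the mouth counts as "open".
    with_mg = np.load("traj_with_mg.npy")        # placeholder file name
    without_mg = np.load("traj_without_mg.npy")  # placeholder file name
    IDX_A, IDX_B, OPEN_THRESHOLD = 120, 347, 11.0

    for label, traj in [("with Mg2+", with_mg), ("without Mg2+", without_mg)]:
        d = mouth_opening(traj, IDX_A, IDX_B)
        print(f"{label}: mean opening {d.mean():.1f} A, "
              f"open in {100 * (d > OPEN_THRESHOLD).mean():.0f}% of frames")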

The research was published as “Magnesium-Dependent Active-Site Conformational Selection in the Diels-Alderase Ribozyme” in the Journal of the American Chemical Society. The research team included Tomasz Berezniak and Mai Zahran, who are Smith’s graduate students, and Petra Imhof and Andres Jäschke from the University of Heidelberg.

Smith’s research was supported by Laboratory Directed Research and Development program funding. The bulk of the simulations were performed on the Kraken supercomputer at the UT/ORNL National Institute for Computational Sciences, supported by a National Science Foundation Teragrid allocation, and the resulting data were analyzed on the Heidelberg Linux Cluster System at the Interdisciplinary Center for Scientific Computing of the University of Heidelberg.

Grid computing can jet-propel research and development. An EU-funded programme that lets European and Chinese grids work together has already produced results in aircraft design, drug development and weather prediction. 

In 2007, the EU-funded BRIDGE project (Bilateral Research and Industrial development enhancing and integrating Grid Enabled technologies) set out to link European and Chinese computing grids and enable researchers to carry out joint research.

The project was inspired by the realisation that China is rapidly becoming a world leader in research and development, as well as a booming market for European products. Developing the infrastructure to link computing grids was seen as a key step towards future scientific and industrial cooperation.

“If Europe does not want to lose ground, the response can only be to synchronise with these developments,” says Gilbert Kalb, BRIDGE project coordinator.

Building a shared infrastructure

The BRIDGE team’s first challenge was to make the software systems that manage the European and Chinese grids compatible. The European Grid infrastructure, GRIA, and the Chinese system, CNGrid GOS, provide comparable services, but are organised differently.

The team were able to get GRIA and GOS to work together by building a new software superstructure to access them and tap their capabilities. The system included new gateways into the two grids, plus a shared platform to manage overall workflow, access needed applications, and translate higher-level commands into steps that each grid could carry out.
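The article does not detail the software itself, but the basic pattern of one shared layer translating a generic request into grid-specific steps can be sketched roughly as follows (a Python illustration; every class, method and job name here is invented and does not correspond to the actual BRIDGE, GRIA or CNGrid GOS interfaces):

    from abc import ABC, abstractmethod

    class GridGateway(ABC):
        """Common interface the shared workflow layer talks to."""
        @abstractmethod
        def submit(self, executable: str, inputs: list) -> str: ...

    class GriaGateway(GridGateway):
        def submit(self, executable, inputs):
            # Translate the generic request into GRIA-specific calls here.
            return "gria-job-001"   # placeholder job handle

    class GosGateway(GridGateway):
        def submit(self, executable, inputs):
            # Translate the same request into CNGrid GOS-specific calls here.
            return "gos-job-001"    # placeholder job handle

    def run_workflow(steps, gateways):
        """Dispatch each workflow step to whichever grid should execute it."""
        for grid_name, executable, inputs in steps:
            job_id = gateways[grid_name].submit(executable, inputs)
            print(f"{executable} -> {grid_name} ({job_id})")

    run_workflow(
        steps=[("gria", "dock_molecules", ["ligands.sdf"]),
               ("gos", "score_results", ["poses.out"])],
        gateways={"gria": GriaGateway(), "gos": GosGateway()},
    )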

Not surprisingly, security was an important consideration on both sides. Kalb says that many of the scientific and industrial problems that BRIDGE was developed to address require intensive cooperation, yet involve highly sensitive information.

BRIDGE resolved this issue by letting selected processes remain private. That allows one group to contribute data or results to all collaborating parties without having to share proprietary software or analytic tools.

“You can interface in terms of the input and the output, while the algorithms remain hidden,” says Kalb.

Putting BRIDGE to work

The BRIDGE team tested the intercontinental grid they built by attacking three problems, each of which made different demands on the system.

Discovering new drugs remains an extremely costly process. One way to speed research is to use computers to simulate the chemical fit between millions of small molecules and proteins that play vital roles in disease-causing organisms. A molecule that binds strongly to a key protein has the potential to be turned into a potent new drug. This kind of research demands enormous computing power.

Researchers in Europe and China contributed four different docking tools – programs that calculate bonding between a small molecule and a particular protein. Each program used a different approach and produced somewhat different results.

The researchers then examined millions of molecules to see if they held promise against malaria or the H5N1 bird flu virus. By combining the results of the four different simulations, they were able to identify promising molecules more efficiently.

“Making the outcomes of these different docking tools comparable is very new,” says Kalb.
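One simple way to make such outputs comparable, sketched here purely for illustration (the scores, molecule count and rank-averaging scheme are invented examples, not the specific method BRIDGE used), is to rank each tool's scores separately and then average the ranks per molecule:

    import numpy as np

    # Hypothetical docking scores for five candidate molecules from four tools.
    # Lower score = stronger predicted binding; each tool uses its own scale.
    scores = {
        "tool_a": np.array([-9.1, -6.4, -8.7, -5.2, -7.9]),
        "tool_b": np.array([-55.0, -40.2, -61.3, -33.8, -47.5]),
        "tool_c": np.array([-7.8, -7.0, -8.2, -4.9, -6.5]),
        "tool_d": np.array([-102.0, -88.5, -110.4, -71.2, -95.3]),
    }

    def consensus_ranks(score_table):
        """Average each molecule's rank across tools (rank 0 = best score)."""
        per_tool_ranks = [np.argsort(np.argsort(s)) for s in score_table.values()]
        return np.mean(per_tool_ranks, axis=0)

    avg_rank = consensus_ranks(scores)
    print("Molecules from most to least promising:", np.argsort(avg_rank).tolist())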

The four-pronged approach produced promising results. The BRIDGE infrastructure has already been adopted in Egypt to target the malaria parasite.

BRIDGE was also used to solve a complex aeronautic problem – designing and positioning wing flaps to maximise lift and minimise noise as an aircraft lands.

Like drug discovery, these aerodynamic simulations required huge computational resources. In addition, because different parts of each simulation took place in different research centres, optimising the flow of work from centre to centre was also challenging.

The BRIDGE team was able to meet these challenges, carry out intensive distributed computations, and determine optimal wing flap parameters. “It proved to be an effective method for solving multi-objective and multi-disciplinary optimisation in aircraft design,” Kalb says.

Weather data on the fly

Weather and climate represent a third area where international cooperation is vital. The BRIDGE researchers set out to link three large meteorological databases located in Europe, North America and Asia.

The key challenge they faced with this project was to handle enormous volumes of data efficiently.

“You could do a calculation in the United States and transfer the results to Europe, or you could fetch the data from the USA and do the calculations here,” says Kalb. “The best way to do it depends on what calculation and what data and what’s the best available way to transfer the data from place to place. Bridge does all this on the fly.”
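The trade-off Kalb describes can be illustrated with a very rough cost model (a Python sketch using invented numbers; BRIDGE's actual scheduling logic is not described in the article): compare the time to compute where the data lives and ship back the results against the time to ship the input data and compute locally, and pick the cheaper option.

    def choose_placement(input_gb, result_gb, remote_compute_h, local_compute_h,
                         link_gbit_s):
        """Very rough comparison of two placements; a real scheduler would also
        weigh queue times, cost and current link load."""
        transfer_s = lambda gb: gb * 8 / link_gbit_s       # seconds on the link
        remote = remote_compute_h * 3600 + transfer_s(result_gb)
        local = transfer_s(input_gb) + local_compute_h * 3600
        if remote < local:
            return "compute remotely, ship results", remote
        return "ship input data, compute locally", local

    # Example: 500 GB of input, 2 GB of results, a 1 Gbit/s transatlantic link,
    # and a local cluster roughly twice as fast as the remote one.
    choice, seconds = choose_placement(input_gb=500, result_gb=2,
                                       remote_compute_h=6, local_compute_h=3,
                                       link_gbit_s=1.0)
    print(f"{choice}: about {seconds / 3600:.1f} hours")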

“Because there was a big organisation behind it, and our work fits very well, it was taken up right away,” says Kalb. “I believe that meteorologists are already using it to access data and perform certain calculations.”

To Kalb, the importance of what BRIDGE accomplished goes far beyond any single piece of research. He feels that the project has built the foundation for the kind of multinational collaboration that is needed to tackle global problems.

“Problems like energy and climate change can only be attacked or really solved with efforts from different players around the world, and we’ve built a platform to do that,” he says. “We proved that this is feasible and useful. Now it’s time for other people to jump on this, develop it further, and use it.”

The BRIDGE project received funding from the Sixth Framework Programme for research.

Source:  ICT Results site (http://cordis.europa.eu/ictresults)
