We are all familiar with the change water goes through under the influence of falling temperatures, evolving from a disordered liquid to an ordered solid. What we may not know, however, is that such transitions - known as phase transitions - can also be found within the nucleus of a single atom.

In this case the ordering arises when nucleons (protons or neutrons) pair up to make the nucleus more stable. Physicists from Oak Ridge National Laboratory and the University of Tennessee have offered the first realistic description of a phase transition in an atomic nucleus, using ORNL's Jaguar supercomputer to analyze the odd behavior of germanium-72, a medium-mass nucleus with 32 protons and 40 neutrons, as it is heated and rotated.

Nuclei typically lose their pairing, and therefore their order, when they are exposed to high heat and rotation. In germanium-72, however, that pairing re-emerges and peaks at a critical temperature of nearly 2 billion degrees Fahrenheit. Any realistic nuclear theory must take this behavior into account. The team's results are documented in the Nov. 19 edition of Physical Review Letters (http://link.aps.org/doi/10.1103/PhysRevLett.105.212504).
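
For context, a quick back-of-the-envelope conversion of the quoted figure (an illustrative calculation, not taken from the paper) puts that critical temperature into the units more commonly used in nuclear physics:

```latex
% Converting the quoted ~2 x 10^9 degrees Fahrenheit to kelvin, then to an energy scale
T_K = \left(T_F - 32\right)\cdot\tfrac{5}{9} + 273.15
    \approx \left(2\times10^{9}\right)\cdot\tfrac{5}{9}
    \approx 1.1\times10^{9}\ \mathrm{K}
\qquad
k_B T \approx \left(8.617\times10^{-5}\ \mathrm{eV/K}\right)\left(1.1\times10^{9}\ \mathrm{K}\right)
      \approx 0.1\ \mathrm{MeV}
```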

Delegates attending the 157th session of the CERN Council have congratulated the laboratory on the LHC's successful first year of running, and looked forward to a bright future for basic science at CERN. Top of the agenda was the opening of CERN to new members. Formal discussions can begin now with Cyprus, Israel, Serbia, Slovenia and Turkey for accession to Membership, while Brazil's candidature for Associate Membership was also warmly received.

"It is very pleasing to see the increasing global support for basic science that these applications for CERN membership indicate," said CERN Director General Rolf Heuer. "Basic science responds to our quest to understand nature, and provides the very foundations of future innovation." 

CERN was established in 1954 by 12 European states, and its membership had grown to 20 by the end of the 1990s, with many countries from beyond the European region also playing an active role. Discussions on opening CERN to membership from beyond Europe, while at the same time allowing CERN to participate in future projects beyond Europe, reached a conclusion at the Council's June session this year. As of now, any country may apply for Membership or Associate Membership of CERN, and if CERN wishes to participate in projects outside Europe, mechanisms are in place to make that possible.

Under the scheme agreed by Council in June, Associate Membership is an essential prerequisite for Membership. Countries may therefore apply for Associate Membership alone, or for Associate Membership as a route to Membership. At this meeting, Council formally endorsed model agreements for both cases, and these will now serve as the basis for negotiations with candidates, which could lead to CERN welcoming its first Associate Members as early as next year.

The other highlight of the meeting was the success of the LHC programme in 2010. Dozens of scientific papers have been published by the LHC experiments on the basis of data collected this year. These papers re-measure the physics of the Standard Model and take the LHC's first steps into new territory.

"The performance of the LHC this year has by far exceeded our expectations," said President of the CERN Council, Michel Spiro. "This bodes extremely well for the coming years, and I'm eagerly looking forward to new physics from the LHC." 

The LHC switched off for 2010 on 6 December. Details of the 2011 LHC run and plans for 2012 will be set following a special workshop to be held in Chamonix from 24-28 January, while the first beams of 2011 are scheduled for mid-February.

Cloud computing has proven to be a cost-efficient model for many commercial web applications, but will it work for scientific computing? Not unless the cloud is optimized for it, writes a team from the Lawrence Berkeley National Laboratory.

After running a series of benchmarks designed to represent a typical midrange scientific workload—applications that use fewer than 1,000 cores—on Amazon's EC2 system, the researchers found that EC2's interconnect severely limits performance and causes significant variability. Overall, the cloud ran six times slower than a typical midrange Linux cluster and 20 times slower than a modern high-performance computing system.

The team's paper, "Performance Analysis of High Performance Computing Applications on the Amazon Web Services Cloud," was honored with the Best Paper Award at the IEEE's International Conference on Cloud Computing Technology and Science (CloudCom 2010) held Nov. 30-Dec. 1 in Bloomington, Ind.

"We saw that the communication pattern of the application can impact performance, Applications like PARATEC with significant global communication perform relatively worse than those with less global communication," says Keith Jackson, a computer scientist in the Berkeley Lab’s Computational Research Division (CRD) and lead author of the paper.

He also notes that the EC2 cloud performance varied significantly for scientific applications because of the shared nature of the virtualized environment, the network, and differences in the underlying non-virtualized hardware.
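
One generic way to quantify that kind of variability (a minimal sketch, not the methodology used in the paper; the benchmark command is a placeholder) is to repeat a run several times and report the spread of wall-clock times relative to their mean:

```python
# Generic sketch: measure run-to-run variability of a benchmark as the
# coefficient of variation (stdev / mean) of its wall-clock time.
import statistics
import subprocess
import time

def time_runs(cmd, reps=10):
    """Run `cmd` repeatedly and return the wall-clock times in seconds."""
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True)
        times.append(time.perf_counter() - t0)
    return times

if __name__ == "__main__":
    runs = time_runs(["sleep", "1"])          # placeholder; substitute the real benchmark
    mean = statistics.mean(runs)
    cov = statistics.stdev(runs) / mean
    print(f"mean {mean:.2f} s, coefficient of variation {cov:.1%}")
```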

The benchmarks and performance monitoring software used in this research were adapted from the large-scale codes used in the National Energy Research Scientific Computing Center's (NERSC) procurement process. NERSC is located at the Berkeley Lab and serves approximately 4,000 Department of Energy (DOE) supported researchers annually in disciplines ranging from cosmology and climate to chemistry and nanoscience. In this study, the researchers essentially cut these benchmarks down to midrange size before running them on the Amazon cloud.

"This set of applications was carefully selected to cover both diversity of science areas and the diversity of algorithms," said John Shalf, who leads NERSC’s Advanced Technologies Group."They provide us with a much more accurate view of the true usefulness of a computing system than ‘peak flops’ measured under ideal computing conditions." 

The benchmark modifications and performance analysis in this research were done in collaboration with the DOE's Magellan project, funded by the American Recovery and Reinvestment Act. "The purpose of the Magellan Project is to understand how cloud computing may be used to address the computing needs for the Department of Energy's Office of Science. Understanding how our applications run in these environments is a critical piece of the equation," says Shane Canon, who leads the Technology Integration Group at NERSC.

In addition to Canon, Jackson and Shalf, Berkeley Lab's Lavanya Ramakrishnan, Krishna Muriki, Shreyas Cholia, Harvey Wasserman and Nicholas Wright are also authors on the paper.

"This was a real collaborative effort between researchers in Berkeley Lab's CRD, Information Technologies and NERSC divisions, with generous support from colleagues at UC Berkeley—it is a great honor to be recognized by our global peers with a Best Paper Award," adds Jackson.

The award is the second such honor for Jackson and Ramakrishnan this year. Along with Berkeley Lab colleagues Karl Runge of the Physics Division and Rollin Thomas of the Computational Cosmology Center, they won the Best Paper Award at the Association for Computing Machinery's ScienceCloud 2010 workshop for "Seeking Supernovae in the Clouds: A Performance Study."

The Department of Energy's Office of Advanced Scientific Computing Research and the National Science Foundation funded the work, and CITRIS at the University of California, Berkeley, donated Amazon EC2 time.

