Scientists colliding football- and sphere-shaped ions discover evidence supporting a paradigm shift in the birth of the quark-gluon plasma

Peering into the seething soup of primordial matter created in particle collisions at the Relativistic Heavy Ion Collider (RHIC)—an "atom smasher" dedicated to nuclear physics research at the U.S. Department of Energy's Brookhaven National Laboratory—scientists have come to a new understanding of how particles are produced in these collisions. This understanding represents a paradigm shift consistent with the presence of a saturated state of gluons, super-dense fields of the glue-like particles that bind the building blocks of ordinary matter. 

As described in a paper just published in Physical Review Letters, the discovery emerged as the scientists searched for a way to separate collision results by shape at RHIC, a DOE Office of Science User Facility.

"We want to compare differently shaped collision events to be able to tease out the influence of geometry on the patterns of particles we see streaming into our detector," said Brookhaven physicist Paul Sorensen, a member of RHIC's STAR detector collaboration. Those experiments would help scientists better understand fluctuations in the ions' internal composition and their influence on the characteristics of the matter created at RHIC—a soup of subatomic quarks and gluons known as quark-gluon plasma (QGP). 

"This study shows that we can separate the data from differently shaped ion collisions," Sorensen said. "And along the way, we also found surprising and possibly even more interesting results regarding the role the particles' internal structure likely plays in creating the QGP." 

Gold vs. Uranium

RHIC was first set up to collide gold ions to create and study the quark-gluon plasma. Even the earliest results hinted that the shape of the collision zone matters. 

For example, in collisions where the spherical gold ions pass through each other in an off-center way, overlapping like a Venn diagram, they create an oblong, football-shaped interaction region. In these collisions, the flow of particles emerging around the "equator" of the collision zone is enhanced relative to that at the "poles," especially when compared with more completely overlapping smashups. Scientists have been studying the detailed characteristics of this "elliptical flow" to better understand the properties of the quark-gluon plasma and the internal characteristics of the colliding ions. 

But the off-center gold-gold collisions also cause the positively charged protons packed inside the colliding nuclei to swirl. And that swirling positive charge produces a very powerful magnetic field. To study the effects of the oblong interaction region without the magnetic field, scientists at RHIC turned to uranium.  

Unlike gold ions, uranium ions have an oblong, football-like shape—similar to the overlap region in the off-center gold collisions. Comparing collisions where two "footballs" collide tip-to-tip, producing a spherical overlap, with those where the footballs collide upright, or body-to-body—in both cases with the ions completely overlapping—would give scientists a way to study the effect of shape without the swirl-induced magnetic field seen in the gold smashups. But the first question was whether the scientists could sort out results from the two differently oriented types of uranium collisions.

"There's no way to selectively orient the ions so they collide the way you want them to," Sorensen said. "You are stuck with what you get. The key is to be able to figure out which is which from the data."

Sorting out the shapes

The physicists set out to sort the results by analyzing how, and how many, particles emerge from the collisions. There should be more elliptical flow in upright body-body events than for the tip-tip collisions.  The tip-tip collisions should also, on average, produce more particles than the body-body ones, because as the oblong football shapes pass through one another tip to tip, there are more chances for the internal particles to undergo multiple collision events within the same nucleus. So they plotted the data from the whole range of uranium collisions on a graph to look for a downward trend in elliptical flow with increasing particle production.

They also plotted a similar set of measurements from gold-gold collisions, where the spherical shape and number of internal collisions doesn't change. For these data, used as a control, they expected the pattern of flow to remain steady even as the number of particles produced increased. 

Initially, the sets of data from uranium and gold collisions both showed the same downward sloping trend. But when the scientists narrowed their analysis to include only the 0.1 percent of collisions with the most complete particle overlap, the hoped-for signature appeared—a downward sloping trend for uranium, and a nearly flat line for gold. 

"This was a big success because we now have a way to 'manipulate' the geometry. We can pick out samples of uranium collision events with a desired orientation by selecting only those with low particle production (if we want body-body collisions) or high particle production (for tip-tip)," Sorensen said. 

By selecting events with different orientations the scientists can now explore the effect of geometry on other research questions. For example, they can study whether the flow that persists in the oblong shape affects things like the separation of positive and negative charges to disentangle the effects of flow from those of the magnetic field. They can also compare the tendency of particles traveling through the short vs. long side of football-shaped events to get stuck, or "quenched," in the plasma.

Yet even with the success of being able to separate tip-tip from body-body collisions, one thing still didn't fit. The downward trend in flow with particle production measured by the STAR physicists wasn't as steep as the one predicted by the model they were using to analyze the data.

"This was really confusing, perplexing, and a bit demoralizing," Sorensen said. "But that's often an indication that you are about to learn something new and interesting. So it was both perplexing and exciting." 

The model describes the initial density of the colliding particles. But even when the scientists tried different ways of doing the calculations—changing the assumptions they included about the number of particles, the shape of the nucleus, and how the number of particles can fluctuate—they couldn't get the calculations to fit the data. 

They turned to Prithwish Tribedy, a student at the Variable Energy Cyclotron Center in Kolkata, India, who had been working with Brookhaven's nuclear theory group and has since become a postdoctoral fellow for STAR. Tribedy had been performing calculations related to gluon saturation—another model of the initial state that scientists suggest emerges as gluons multiply and linger in ions accelerated close to the speed of light, as they are at RHIC. When Tribedy used those calculations to model the particle production that should be expected from the differently oriented uranium collisions, Sorensen said, "The calculations basically nailed the data."

It turns out that, to be successful, the model needs to account for the quark and gluon substructure of the protons and neutrons, Sorensen said. 

"That's not surprising, but what is surprising is that the number of particles produced in the collisions doesn't seem to depend on how many times any given quark collides with another quark, just whether it had a collision or not. This dependence on collision or no collision, rather than the number of collisions, would be something like returning from a party without being able to remember whether you had met one person or a hundred people," Sorensen said.  

Before these measurements, the standard picture suggested that the number of particles produced would depend on the number of collisions, so these results directly rule out that picture. The new results are consistent with at least two models that don't require a dependence on the number of collisions. 

"One of these is the gluon saturation model—which is why we think that model did a better job at predicting how well we'd be able to resolve the tip-tip vs. body-body collisions," Sorensen said. d4061115 star detector hr

"This result is just one piece of evidence. But it does support a new picture of how particles are born and that picture looks in many ways a lot like gluon saturation," Sorensen said. Future research will focus on further testing the idea of gluon saturation in comparison with other alternative explanations.

Research at RHIC is supported by the DOE Office of Science (Office of Nuclear Physics).

Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time.

Since May 2015, Dr. Samuel Jones (University of Victoria, Canada) has been working at the Heidelberg Institute for Theoretical Studies (HITS) as a visiting scientist. The Alexander von Humboldt Foundation awarded Dr. Jones a Humboldt Research Fellowship for two years, which allows him to carry out a research project with an academic host in Germany. Samuel Jones was invited by Prof. Friedrich Röpke, head of the Physics of Stellar Objects (PSO) research group. He will stay at HITS until April 2017, investigating the evolution and explosion of stars whose chemical imprint is postulated to be stamped on some of the oldest stars in the Universe.

Born near Birmingham, Samuel Jones studied astrophysics and music technology at Keele University, UK. After his Ph.D. in 2014, he took up a postdoctoral position at the University of Victoria, Canada, in the group of Dr. Falk Herwig. Samuel Jones is one of the Principal Investigators of “NuGrid,” a group of more than 50 astrophysicists from 21 institutions in 8 countries working on stellar evolution, supernovae, and nucleosynthesis.

At HITS, he is a visiting scientist in the Physics of Stellar Objects (PSO) group led by Friedrich Röpke. “The collective expertise of Fritz and the group provides the optimal setting in which to take on some of the unsolved problems in stellar physics,” he says. Jones investigates so-called “light massive stars,” stars that are 8 to 12 times more massive than our Sun. “It is not clear if they turn into neutron stars or into white dwarfs … or neither,” Jones explains. Having modeled their lives from the cradle to their dying moments, he is now turning his attention to their final few seconds. In his project, Jones is using the LEAFS (Level-set based Astrophysical Flame Simulations) code co-authored by Friedrich Röpke. “It's multi-scale, multi-physics,” he explains. “The time scales of interest in stellar evolution range from millions of years to the order of seconds, and the spatial scales from a few hundred million kilometers to the sub-centimeter.” This research aims to help stellar physicists better understand the origin of neutron stars and the abundances of the chemical elements in the oldest stars we know of.

From left, Greg Aldering, Kyle Boone, Hannah Fakhouri and Saul Perlmutter of the Nearby Supernova Factory. Behind them is a poster of a supernova spectrum. Matching spectra among different supernovae can double the accuracy of distance measurements. (Photo by Roy Kaltschmidt/Berkeley Lab)

Less than 20 years ago the world learned that the universe is expanding ever faster, propelled by dark energy. The discovery was made possible by Type Ia supernovae; extraordinarily bright and remarkably similar in brightness, they serve as "standard candles" essential for probing the universe's history.

In fact, Type Ia supernovae are far from standard. Intervening dust can redden and dim them, and the physics of their thermonuclear explosions differs -- a single white dwarf (an Earth-sized star as massive as our sun) may explode after borrowing mass from a companion star, or two orbiting white dwarfs may collide and explode. These "normal" Type Ia's can vary in brightness by as much as 40 percent. Brightness dispersion can be reduced by well-proven methods, but cosmology continues to be done with catalogues of supernovae that may differ in brightness by as much as 15 percent.

Now members of the international Nearby Supernova Factory (SNfactory), based at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), have dramatically reduced the scatter in supernova brightnesses. Using a sample of almost 50 nearby supernovae, they identified supernova twins -- pairs whose spectra are closely matched -- which reduced their brightness dispersion to a mere eight percent. The distance to these supernovae can be measured about twice as accurately as before.

The SNfactory results are reported in "Improving cosmological distance measurements using twin Type Ia supernovae," accepted for publication by the Astrophysical Journal (ApJ).

Comparing apples to apples

"Instead of concentrating on what's causing the differences among supernovae, the supernova-twins approach is to look at the spectra and seek the best matches, so as to compare like with like," says Greg Aldering, the Berkeley Lab cosmologist who leads the SNfactory. "The assumption we tested is that if two supernovae look the same, they probably are the same."

Hannah Fakhouri, the lead author of the ApJ paper, initiated the twin study for her doctoral thesis. She says that the theoretical advantages of a twins match-up had long been discussed at Berkeley Lab; for the researchers who founded the SNfactory, including her thesis advisor, Nobel laureate Saul Perlmutter, one of the main goals was gathering a dataset of sufficient quality to test hypotheses like supernova twinning.

Fakhouri's timing was good; she was able to take advantage of precise spectrophotometry -- simultaneous measures of spectra and brightness -- of numerous nearby Type Ia's, collected using the SNfactory's SuperNova Integral Field Spectrograph (SNIFS) on the University of Hawaii's 2.2-meter telescope on Mauna Kea.

"Nearby" is relative; some SNfactory supernovae are more than a billion light years away. But all yield more comprehensive and detailed measurements than the really distant supernovae also needed for cosmology. The twin study used data from the first years of the SNfactory's observations; further work will use hundreds of high-quality Type Ia spectra from the SNfactory, so far the only large database in the world that can be used for this work.

Despite the surprising results, Fakhouri describes the initial research as "a long slog," requiring hard work and attention to detail. One challenge was making fair comparisons of time series, in which spectra are taken at frequent intervals as a supernova reaches maximum luminosity, then slowly fades; different colors (wavelengths) brighten and fade at different rates.

Because of demands on telescope time and other issues like weather, the time series of different supernovae can't be sampled uniformly. SNfactory member Rollin Thomas, of Berkeley Lab's Computational Cosmology Center, recommended a mathematical procedure called Gaussian Process regression to fill the gaps. Fakhouri says the outcome "was a big breakthrough."
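The gap-filling step can be sketched in a few lines. This is a generic toy version, not the SNfactory pipeline: the lightcurve shape, epochs, and kernel choices below are invented for illustration, and it uses scikit-learn's off-the-shelf Gaussian Process regressor.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Toy Type Ia lightcurve: flux rises to a peak and then fades, sampled at
# irregular epochs (weather and telescope scheduling leave gaps).
def true_flux(t):
    return np.exp(-0.5 * (t / 10.0) ** 2)  # smooth peak near t = 0 days

t_obs = np.sort(rng.uniform(-15, 30, 12))   # 12 irregular observation epochs
f_obs = true_flux(t_obs) + rng.normal(0, 0.02, t_obs.size)

# GP regression interpolates the gaps with a smooth curve plus uncertainty.
kernel = RBF(length_scale=8.0) + WhiteKernel(noise_level=0.02 ** 2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t_obs[:, None], f_obs)

t_grid = np.linspace(-15, 30, 200)
f_pred, f_std = gp.predict(t_grid[:, None], return_std=True)

# The peak epoch is read off the interpolated curve, not a raw sample.
print("estimated peak epoch (days):", round(t_grid[np.argmax(f_pred)], 1))
```

The practical point is the same one Thomas made: once every supernova's time series is interpolated onto a common grid, spectra taken at mismatched epochs can be compared fairly.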

Cleaning up the spectra and ranking the supernovae for twinness was done completely "blind" -- the researchers had no information about the supernovae except their spectra. "The unblinding process was suspenseful," Fakhouri says. "We might have found that twinning was completely useless." The result was a relief: the closer the twins' spectra, the closer their brightnesses.

The result strongly suggests that the long-accepted 15-percent uncertainty in Type Ia brightness is not merely statistical; it masks real but unknown differences in the nature of the supernovae themselves. The twin method's dramatic reduction of brightness dispersion suggests that hidden unknowns about the physical explosion processes of twins have been severely reduced as well, a strong step toward using such supernovae as true standard candles.

The best of the bunch

When Fakhouri received her doctorate, graduate student Kyle Boone, second author of the ApJ paper, took over the final steps of the analysis. "I started by comparing the twin method to other methods for reducing dispersion in brightness."

The conventional approach has been to fit a curve through a series of data points of brightness versus time: a lightcurve. Dimmer Type Ia's have narrower lightcurves and are redder; this fact is used to "standardize" supernovae, that is, to adjust their brightnesses to a common system.
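A minimal sketch of this "broader is brighter" standardization, using synthetic numbers (the slope and scatter values are illustrative, not fitted SNfactory data): fit the width-magnitude relation, then subtract it to put all supernovae on a common brightness system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy sample: peak magnitude depends linearly on lightcurve width
# ("broader is brighter"); all coefficients here are illustrative.
n = 300
width = rng.normal(1.0, 0.1, n)                               # stretch-like width
mag = -19.3 - 1.5 * (width - 1.0) + rng.normal(0, 0.05, n)    # peak magnitude

# Fit the width-magnitude relation and subtract it to "standardize".
slope, intercept = np.polyfit(width - 1.0, mag, 1)
mag_std = mag - slope * (width - 1.0)

print(f"raw scatter:          {mag.std():.3f} mag")
print(f"standardized scatter: {mag_std.std():.3f} mag")
```

Removing the width trend shrinks the brightness scatter, which is exactly what the lightcurve method buys; the twin method goes further by matching whole spectra instead of a single fitted shape parameter.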

The twin method, says Boone, "beats the lightcurve method without even trying. Plus, we found this can be done with just one spectrum -- an entire lightcurve is not needed."

Other recent methods are more subtle and detailed, but all have drawbacks compared to twinning. "The main competing technique gives excellent results but depends on wavelengths in the near infrared, where dispersion of the starting brightness is much less," Boone says. "That will be difficult to use with distant supernovae, whose high redshift makes near-infrared wavelengths inaccessible."

Fakhouri says, "Supernovae offer unique advantages for cosmology, but we need multiple techniques," including statistical methods charting how dark energy has shaped the structure of the universe. "The great thing about nature is that it provides different kinds of probes that can be decoupled from one another."

Supernovae are a singular asset, notes Aldering: "Supernovae found dark energy, and they still provide the strongest constraints on dark energy properties."

Says Boone, "We are working to see how well the twins technology can be applied to a very large sample of well-characterized, high-redshift supernovae that a space telescope like WFIRST could provide." NASA plans to launch WFIRST, the Wide-Field Infrared Survey Telescope, in the mid-2020s. Among other investigations, it will capture the spectra of many thousands of distant Type Ia supernovae.

When based on a reference sample of well-measured supernovae large enough for every new supernova to find its perfect twin, twin-supernova technology could lead to precise measures of dark energy's effect on the universe over the past 10 billion years. Each point in space and time so labeled will be an accurate milestone on the journey that led to the universe we live in today.

Figure 1: A simulation of supernova SN1987A suggests that lower-density material can form tendrils that push into the star’s outer layers as it explodes from the core (center left).

Simulations of a supernova suggest that density variations inside a star help propel heavy elements from its core

When the core of a massive star collapses under its own gravity, it can trigger a supernova—an incredibly energetic explosion that flings the star’s content far out into space. Detailed simulations now suggest that density variations between layers inside the star could help accelerate that core material to remarkable speeds.

A team of researchers led by Shigehiro Nagataki of the RIKEN Astrophysical Big Bang Laboratory based their simulation on supernova SN1987A, which formed from a blue supergiant star that exploded in the Large Magellanic Cloud, a nearby galaxy. Material from the star’s core, including clumps of a radioactive isotope of nickel (56Ni), streaked out in long ‘fingers’. Up to 17 per cent of this nickel moves faster than 3,000 kilometers per second—similar to the speeds of hydrogen and helium blown from the star’s outer layers. Astronomers have struggled to explain how this core material could exit the star so quickly.

One possibility is that as the exploding core pushed relatively low-density material into the star’s outer layers, it created a form of turbulence (known as Rayleigh–Taylor instability) similar to the mushroom cloud of gas and ash from a volcanic eruption. But the precise origin and location of this instability have remained a mystery.

Nagataki’s team’s simulation now offers an answer. They based their model on a blue supergiant, which is more than 16 times the mass of our own Sun. It has a nickel core surrounded by successive onion-like layers that are rich in sulphur, silicon, oxygen and carbon, helium and finally hydrogen. Simulations performed for several different types of supernova explosions revealed the conditions that most closely matched astronomical observations (Fig. 1).

The simulation produced high-speed nickel clumps when there were density fluctuations of 25 per cent or more at the interfaces between the helium and carbon–oxygen layers, and between the helium and hydrogen layers. The simulated explosion was also asymmetric, with more material funneled into jets on opposite sides of the star, and with one jet more powerful than the other. This model is the most accurate reproduction of SN1987A to date, says Nagataki.

The underlying density fluctuations could arise from convection currents inside the star, says Nagataki. “As for the asymmetric and jet-like explosion,” he adds, “it can be produced at the center of the supernova.”

The simulation successfully reproduces the first few hours after the birth of SN1987A. The team now hopes to extend their model to cover the entire 28-year lifetime of the supernova.

It's like something out of an interplanetary chess game. Astrophysicists at the University of Toronto have found that a close encounter with Jupiter about four billion years ago may have resulted in another planet's ejection from the Solar System altogether.

The existence of a fifth giant gas planet at the time of the Solar System's formation - in addition to Jupiter, Saturn, Uranus and Neptune that we know of today - was first proposed in 2011. But if it did exist, how did it get pushed out?

For years, scientists have suspected the ouster was either Saturn or Jupiter.

"Our evidence points to Jupiter," said Ryan Cloutier, a PhD candidate in U of T's Department of Astronomy & Astrophysics and lead author of a new study published in The Astrophysical Journal.

Planet ejections occur as a result of a close planetary encounter in which one of the objects accelerates so much that it breaks free from the massive gravitational pull of the Sun. However, earlier studies which proposed that giant planets could possibly eject one another did not consider the effect such violent encounters would have on minor bodies, such as the known moons of the giant planets, and their orbits.

So Cloutier and his colleagues turned their attention to moons and orbits, developing supercomputer simulations based on the modern-day trajectories of Callisto and Iapetus, the regular moons orbiting around Jupiter and Saturn respectively. They then measured the likelihood of each one producing its current orbit in the event that its host planet was responsible for ejecting the hypothetical planet, an incident which would have caused significant disturbance to each moon's original orbit.

"Ultimately, we found that Jupiter is capable of ejecting the fifth giant planet while retaining a moon with the orbit of Callisto," said Cloutier, who is also a graduate fellow at the Centre for Planetary Sciences at the University of Toronto at Scarborough. "On the other hand, it would have been very difficult for Saturn to do so because Iapetus would have been excessively unsettled, resulting in an orbit that is difficult to reconcile with its current trajectory."
