Evidence of broadside collision with dwarf galaxy discovered in Milky Way

'Shell structures' are the first of their kind found in the galaxy

Nearly 3 billion years ago, a dwarf galaxy plunged into the center of the Milky Way and was ripped apart by the gravitational forces of the collision. Astrophysicists announced today that the merger produced a series of telltale shell-like formations of stars in the vicinity of the Virgo constellation, the first such "shell structures" to be found in the Milky Way. The finding offers further evidence of the ancient event and new possible explanations for other phenomena in the galaxy.

Astronomers identified an unusually high density of stars called the Virgo Overdensity about two decades ago. Star surveys revealed that some of these stars are moving toward us while others are moving away, which is also unusual, as a cluster of stars would typically travel in concert. Based on emerging data, astrophysicists at Rensselaer Polytechnic Institute proposed in 2019 that the overdensity was the result of a radial merger, the stellar version of a T-bone crash.

"When we put it together, it was an 'aha' moment," said Heidi Jo Newberg, Rensselaer professor of physics, applied physics, and astronomy, and lead author of The Astrophysical Journal paper detailing the discovery. "This group of stars had a whole bunch of different velocities, which was very strange. But now that we see their motion as a whole, we understand why the velocities are different, and why they are moving the way that they are." Stars identified in the research have formed {module INSIDE STORY}

The newly announced shell structures are curved planes of stars, shaped like umbrellas, left behind as the dwarf galaxy was torn apart, its stars bouncing up and down through the center of the galaxy as it was incorporated into the Milky Way, an event the researchers have named the "Virgo Radial Merger." Each cycle creates another shell: the dwarf galaxy's stars pass quickly through the galactic center, slow as the Milky Way's gravity pulls them back, stop at their farthest point, and then turn around to crash through the center again. Simulations matched to survey data can be used to count how many such cycles the dwarf galaxy has completed, and therefore when the original collision occurred.

The new paper identifies two shell structures in the Virgo Overdensity and two in the Hercules Aquila Cloud region, based on data from the Sloan Digital Sky Survey, the European Space Agency's Gaia space telescope, and the LAMOST telescope in China. Supercomputer modeling of the shells and the motion of the stars indicates that the dwarf galaxy first passed through the galactic center of the Milky Way 2.7 billion years ago.
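To make that cycle-counting idea concrete, the sketch below integrates a single star on a purely radial orbit through a simplified, spherically symmetric galactic potential and counts its turnaround points, one per shell. It is a toy illustration only: the Plummer potential, the enclosed mass, the scale length, and the 40 kpc starting radius are assumptions chosen for demonstration, not the setup of the supercomputer simulations described in the paper.

```python
# Toy sketch of the radial-merger geometry described above: a star from the
# disrupted dwarf falls through the Galactic center, climbs back out, stalls
# at its farthest point (where shells pile up), and falls back in again.
# The potential, mass, scale length, and starting radius are assumptions for
# illustration, not values from the published supercomputer simulations.

G = 4.30091e-6                   # gravitational constant, kpc (km/s)^2 / Msun
M = 1.0e12                       # assumed enclosed Milky Way mass, Msun
A = 15.0                         # assumed Plummer scale length, kpc
KMS_TO_KPC_PER_MYR = 1.0227e-3   # converts km/s into kpc per Myr

def accel(x_kpc):
    """Plummer-sphere pull along the radial orbit, in km/s per Myr."""
    a = -G * M * x_kpc / (x_kpc**2 + A**2) ** 1.5   # (km/s)^2 per kpc
    return a * KMS_TO_KPC_PER_MYR                   # -> km/s per Myr

x, v, dt = 40.0, 0.0, 1.0   # position (kpc), velocity (km/s), time step (Myr)
turnarounds = []            # times (Myr) when the star reverses direction
for step in range(2700):    # ~2.7 Gyr, the inferred age of the merger
    v_new = v + accel(x) * dt                      # symplectic Euler step
    x_new = x + v_new * KMS_TO_KPC_PER_MYR * dt
    if v * v_new < 0:       # velocity changes sign only at the farthest point
        turnarounds.append(step * dt)
    x, v = x_new, v_new

# Each turnaround deposits stars into a new shell; counting cycles in a model
# that matches the survey data is what lets the collision be dated.
print(f"turnarounds in 2.7 Gyr: {len(turnarounds)}")
print("at times (Myr):", [int(t) for t in turnarounds])
```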

Newberg is an expert on the halo of the Milky Way, a spherical cloud of stars that surrounds the spiral arms of the central disk. Most, if not all, of those stars appear to be "immigrants," stars that formed in smaller galaxies that were later pulled into the Milky Way. As the smaller galaxies coalesce with the Milky Way, their stars are pulled by so-called "tidal forces," the same kind of differential forces that cause tides on Earth, and they eventually form a long cord of stars moving in unison within the halo. Such tidal mergers are fairly common and have been the focus of much of Newberg's research over the past two decades.

More violent "radial mergers" are considered far less common. Thomas Donlon II, a Rensselaer graduate student and first author of the paper, said that they were not initially seeking evidence of such an event.

"There are other galaxies, typically more spherical galaxies, that have a very pronounced shell structure, so you know that these things happen, but we've looked in the Milky Way and hadn't seen really obvious gigantic shells," said Donlon, who was lead author on the 2019 paper that first proposed the Virgo Radial Merger. As they modeled the movement of the Virgo Overdensity, they began to consider a radial merger. "And then we realized that it's the same type of merger that causes these big shells. It just looks different because, for one thing, we're inside the Milky Way, so we have a different perspective, and also this is a disk galaxy and we don't have as many examples of shell structures in disk galaxies."

View a video simulation of the formation of the stellar shell structures. 

The finding has potential implications for a number of other stellar phenomena, including the Gaia Sausage, a formation of stars believed to have resulted from the merger of a dwarf galaxy between 8 and 11 billion years ago. Previous work supported the idea that the Virgo Radial Merger and the Gaia Sausage resulted from the same event; the much lower age estimate for the Virgo Radial Merger means that either the two are different events, or the Gaia Sausage is much younger and could not have caused the creation of the thick disk of the Milky Way, as previously claimed. A recently discovered spiral pattern in position and velocity data for stars close to the sun, sometimes called the Gaia Snail, and a proposed event called the Splash may also be associated with the Virgo Radial Merger.

"There are lots of potential tie-ins to this finding," Newberg said. "The Virgo Radial Merger opens the door to a greater understanding of other phenomena that we see and don't fully understand, and that could very well have been affected by something having fallen right through the middle of the galaxy less than 3 billion years ago."

Pitt engineer fights fire with data

Pitt IE professor receives $270K from NSF for Wildfire Management Optimization

The wildfires that consumed the West Coast of the U.S. this year were part of a larger pattern. Experts warn that climate change is increasing the severity and extent of wildfires, and their impact on communities, the environment, and the economy is growing.

Industrial engineer and professor Oleg Prokopyev at the University of Pittsburgh's Swanson School of Engineering is using optimization to tackle this problem. Prokopyev will collaborate with Lewis Ntaimo and Jianbang Gan at Texas A&M University on the project, titled "Collaborative Research: Fuel Treatment Planning Optimization for Wildfire Management."

The National Science Foundation recently awarded $550,000 for the work, with $270,000 designated for Pitt.

"One strategy for mitigating forest fires is fuel treatment, which involves strategically removing some of the vegetation--the 'fuel' for the fire--with controlled burns, grazing or mechanical thinning," said Prokopyev. "Our models will help predict when, where and how to best implement these methods."

Using advanced decision-making methods, such as mixed-integer optimization and simulation, the project will provide a better understanding of which fuel treatment options would be most effective and when to implement them.
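As a rough illustration of what a mixed-integer formulation of this kind can look like, the sketch below chooses which land parcels to treat, as binary yes/no decisions, to maximize an assumed risk-reduction score within a budget. It is written against the open-source PuLP modeling library; the parcel names, scores, costs, budget, and crew constraint are all invented for illustration and are not the project's actual model.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# Tiny fuel-treatment planning MIP: pick which parcels to treat (binary
# decisions) to maximize an assumed risk-reduction score within a budget.
# All data and constraints here are invented for illustration only.

parcels = ["A", "B", "C", "D", "E"]
risk_reduction = {"A": 9, "B": 4, "C": 7, "D": 3, "E": 6}   # assumed benefit scores
treat_cost     = {"A": 5, "B": 2, "C": 4, "D": 1, "E": 3}   # assumed treatment costs
budget = 8

model = LpProblem("fuel_treatment_sketch", LpMaximize)
treat = {p: LpVariable(f"treat_{p}", cat=LpBinary) for p in parcels}

model += lpSum(risk_reduction[p] * treat[p] for p in parcels)        # objective: total risk reduction
model += lpSum(treat_cost[p] * treat[p] for p in parcels) <= budget  # stay within the budget
model += treat["A"] + treat["C"] <= 1   # hypothetical crew limit: not both A and C this season

model.solve()
print("parcels to treat:", [p for p in parcels if treat[p].value() > 0.5])
```

Real fuel-treatment models layer fire-spread behavior, weather scenarios, and multi-year timing decisions on top of this kind of yes/no structure, which is where the project's optimization and simulation methods come in.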

In addition, the project will use historical data from the Texas A&M Forest Service to calibrate and validate the developed mathematical models.

The project began Sept. 1, 2020, and is expected to last three years.

UD's new approach to artificial intelligence builds in uncertainty

Smarter models, smarter choices

They call it artificial intelligence -- not because the intelligence is somehow fake. It's real intelligence, but it's still made by humans. That means AI -- a power tool that can add speed, efficiency, insight, and accuracy to a researcher's work -- has many limitations.

It's only as good as the methods and data it has been given. On its own, it doesn't know whether information is missing, how much weight to give different kinds of information, or whether the data it draws on are incorrect or corrupted. It can't deal precisely with uncertainty or random events -- unless it learns how. Relying exclusively on data, as machine-learning models usually do, it does not leverage the knowledge experts have accumulated over years or the physical models underpinning physical and chemical phenomena. And it has been hard to teach a computer to organize and integrate information from widely different sources.

Now researchers at the University of Delaware and the University of Massachusetts-Amherst have published details of a new approach to artificial intelligence that builds uncertainty, error, physical laws, expert knowledge, and missing data into its calculations and leads ultimately to much more trustworthy models. The new method provides guarantees typically lacking from AI models, showing how valuable -- or not -- the model can be for achieving the desired result.

Joshua Lansford, a doctoral student in UD's Department of Chemical and Biomolecular Engineering, and Prof. Dion Vlachos, director of UD's Catalysis Center for Energy Innovation, are co-authors on the paper published Oct. 14 in the journal Science Advances. Also contributing were Jinchao Feng and Markos Katsoulakis of the Department of Mathematics and Statistics at the University of Massachusetts-Amherst.

The new mathematical framework could produce greater efficiency, precision, and innovation for computer models used in many fields of research. Such models provide powerful ways to analyze data, study materials and complex interactions, and tweak variables virtually instead of in the lab.

"Traditionally in physical modelings, we build a model first using only our physical intuition and expert knowledge about the system," Lansford said. "Then after that, we measure uncertainty in predictions due to error in underlying variables, often relying on brute-force methods, where we sample, then run the model and see what happens."

Effective, accurate models save time and resources and point researchers to more efficient methods, new materials, greater precision, and innovative approaches they might not otherwise consider.

The paper describes how the new mathematical framework works in a chemical reaction known as the oxygen reduction reaction, but it is applicable to many kinds of modeling, Lansford said.

"The chemistries and materials we need to make things faster or even make them possible -- like fuel cells -- are highly complex," he said. "We need precision... And if you want to make a more active catalyst, you need to have bounds on your prediction error. By intelligently deciding where to put your efforts, you can tighten the area to explore.

"Uncertainty is accounted for in the design of our model," Lansford said. "Now it is no longer a deterministic model. It is a probabilistic one."

With these new mathematical developments in place, the model itself identifies what data are needed to reduce model error, he said. Then a higher level of theory can be used to produce more accurate data or more data can be generated, leading to even smaller error boundaries on the predictions and shrinking the area to explore.

"Those calculations are time-consuming to generate, so we're often dealing with small datasets -- 10-15 data points. That's where the need comes in to apportion error."

That's still not a money-back guarantee that using a specific substance or approach will deliver precisely the product desired. But it is much closer to a guarantee than you could get before.

This new method of model design could greatly enhance work in renewable energy, battery technology, climate change mitigation, drug discovery, astronomy, economics, physics, chemistry, and biology, to name just a few examples.

Artificial intelligence doesn't mean human expertise is no longer needed. Quite the opposite.

The expert knowledge that emerges from the laboratory and the rigors of scientific inquiry is essential, foundational material for any computational model.