The Andromeda galaxy, our nearest major galactic neighbor, has a fascinating history that can be uncovered through galactic archaeology.

Researchers from the University of Hertfordshire in the UK have uncovered fascinating information about the Andromeda galaxy, the nearest major galaxy to our own. After analyzing the elemental abundances in Andromeda, they concluded that the galaxy's formation was more intense and powerful than that of our own Milky Way. The research conducted by Professor Kobayashi sheds new light on the nature and impact of a merger of two gas-rich galaxies, which most likely triggered a significant amount of star formation in Andromeda between 2 and 4.5 billion years ago.

A team of international astrophysicists, led by Professor Chiaki Kobayashi, has used state-of-the-art supercomputer modeling to study the history of the galaxy through galactic archaeology. This approach involves examining the chemical composition of stars and the development of their host galaxy to reconstruct its past. The study focuses on the elemental abundances in Andromeda, particularly those measured in planetary nebulae and red-giant branch stars.

The analysis reveals that Andromeda's formation was more dramatic and forceful than that of our own Milky Way. After an intense initial burst of star formation that created the galaxy, a second population of stars was produced between 2 billion and 4.5 billion years ago, most likely caused by what scientists call a "wet merger": a merging of two gas-rich galaxies that instigates a large amount of star formation.

Scientists have long believed that Andromeda experienced a merger of two galaxies, based on the position and motion of individual stars in the galaxy. Professor Kobayashi's research sheds new light on the nature and impact of such a merger using the chemical composition of stars. It explains how stars and elements were formed throughout the history of Andromeda.

Professor Kobayashi, who is an astrophysics professor at the University of Hertfordshire’s Centre for Astrophysics Research, has explained that by analyzing the chemical abundances of stars of different ages in Andromeda, we can better understand the origins and history of this galaxy. According to her, Andromeda is a spiral disc galaxy similar in many ways to our own Milky Way. However, her new research confirms that Andromeda's history was significantly more intense and dramatic, with bursts of activity that formed stars in abundance and two distinct eras of star formation.

Professor Kobayashi’s theoretical model predicts that there are two distinct chemical compositions of stars in the two disc components of Andromeda. One composition has ten times more oxygen than iron, while the other has a similar amount of oxygen and iron. This prediction has been confirmed by spectroscopic observations of planetary nebulae and also by observations of red-giant stars with the James Webb Space Telescope (JWST).
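Astronomers usually express such abundance comparisons with a logarithmic bracket notation that quotes a star's oxygen-to-iron ratio relative to the Sun's: [O/Fe] = log10(N_O / N_Fe)_star - log10(N_O / N_Fe)_Sun. Read this way, a population with ten times the solar oxygen-to-iron ratio would sit at [O/Fe] of roughly +1, while one with a roughly solar ratio would sit at [O/Fe] of roughly 0.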

The new study is an extension of Professor Kobayashi’s ongoing, ground-breaking research into the origin of elements in the Universe. According to her, oxygen is one of the alpha elements produced by massive stars, along with neon, magnesium, silicon, sulfur, argon, and calcium. While oxygen and argon have been measured with planetary nebulae, JWST is required to measure other elements, including iron, because Andromeda is so far away. She believes that in the coming years, JWST and large ground-based telescopes will keep observing Andromeda, which will give further weight to the new findings.

The latest research shows that the number of days considered 'hail-prone' has decreased over much of the country but increased over the southwest and southeast, where there are large population centres. Photo: Getty Images/Andrew Merry

The probability of hail occurring in Australia has changed significantly in the past 40 years

Researchers at UNSW Sydney and the Bureau of Meteorology have found that the number of days considered 'hail-prone' has decreased across much of Australia but has increased by up to approximately 40 percent in some heavily populated areas. The atmospheric conditions required for a hailstorm to form include instability, sufficient moisture, and wind shear. These findings are important for the agricultural, insurance, and city planning sectors in building resilience against future hail events and protecting densely populated areas from damage.

Hailstorms occur when the atmosphere has all the necessary components for hail formation. Dr. Tim Raupach, a researcher in atmospheric science at the UNSW Climate Change Research Centre, explains that hailstorms are difficult to measure and model, which makes it hard to understand how they have changed over time or how they are expected to change in the future. To gain a better understanding of the frequency of hail events, the researchers used historical estimates of atmospheric conditions as a "proxy" for hail occurrence over the past forty years to create a continental map of how hail hazard frequency has changed across Australia. The study is the first continental-scale analysis of hail hazard frequency trends for Australia. The team, which included scientists from the Bureau of Meteorology, hopes that this research will help improve our understanding of hail events, which is important for the insurance industry as well as the agricultural and city planning sectors.

Not all thunderstorms are capable of producing hail. Hailstorms require certain atmospheric conditions to develop. One of the essential factors is atmospheric instability, which means that there is a tendency for updrafts to form. Updrafts occur when there is warm air near the ground and cooler air higher up. When a parcel of warm air rises into the cooler air above, it draws more air up behind it, forming an updraft, as Dr Raupach explains.

Another crucial factor is the presence of sufficient moisture in the updraft to create supercooled liquid water and ice, which swirl around in the upper part of the storm. Additionally, hailstones must grow large enough to survive melting as they fall, so that they reach the ground as lumps of ice, as Dr Raupach explains.

Lastly, wind shear, the change in the wind's properties with height, enhances hail formation. Dr Raupach explains that wind shear is the wind changing direction or speed as you go higher in the atmosphere. When there is a lot of wind shear, the storm tends to be more severe and more prone to forming hail.

When all these factors are present, the atmospheric conditions become hail-prone.

Researchers have developed a method to estimate hail-prone days in Australia over the past four decades. They created a "hail proxy" by combining the necessary ingredients for hail formation and applied it to 40 years of reanalysis data. Reanalysis products combine observations with supercomputing weather modeling to estimate the past atmosphere's state. Using this method, the researchers were able to produce a map of how the number of hail-prone days per year has changed across the entire continent, with a resolution of about 30 kilometers per pixel.
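To illustrate how a proxy of this kind can be applied to gridded reanalysis data, the sketch below flags a grid cell as hail-prone on a given day when instability, wind shear, and low-level moisture all exceed chosen thresholds. The variable choices and threshold values are illustrative placeholders, not the calibrated proxy used in the study.

```python
import numpy as np

def hail_prone_days(cape, shear, moisture,
                    cape_min=1500.0, shear_min=10.0, moisture_min=0.010):
    """Count hail-prone days per grid cell from daily reanalysis fields.

    cape     : (days, lat, lon) convective available potential energy, J/kg
    shear    : (days, lat, lon) bulk wind shear between two levels, m/s
    moisture : (days, lat, lon) low-level water vapour mixing ratio, kg/kg

    The thresholds here are illustrative, not those of the published proxy.
    """
    prone = (cape >= cape_min) & (shear >= shear_min) & (moisture >= moisture_min)
    return prone.sum(axis=0)  # number of hail-prone days at each grid cell

# Toy example: one year of daily fields on a coarse grid. Fitting a trend to
# such yearly counts at every pixel yields a map of how hail-prone-day
# frequency has changed over the 40-year reanalysis record.
rng = np.random.default_rng(0)
cape = rng.gamma(2.0, 600.0, size=(365, 50, 60))
shear = rng.gamma(2.0, 6.0, size=(365, 50, 60))
moisture = rng.gamma(2.0, 0.004, size=(365, 50, 60))
print(hail_prone_days(cape, shear, moisture).shape)  # (50, 60) map
```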

The hail proxy enabled the researchers to estimate atmospheric conditions on a grid of points across the country, instead of relying on spotty surface records. The Bureau of Meteorology's long-term weather radar archive was used to compare radar observations with the reanalysis hail proxy. Radar observations were collected from 20 Bureau radar sites across the country, with each site having between 12 and 24 years of records. The radar results corroborated the pattern of results seen through the statistical analysis of historic estimates.

According to Dr. Raupach, there has been a decrease in the number of hail-prone days in most parts of Australia, except for the southeast and southwest regions where large population centers are located. The annual number of hail-prone days has increased by approximately 40% around Sydney and Perth, which works out to roughly a 10% increase per decade in the number of days with conditions that can produce hailstorms. These changes are relative: not every hail-prone day results in hail, but an increase in the number of hail-prone days increases the likelihood that hail will occur.

Although the exact reasons behind the changes in hail patterns are still unknown, the research team has taken into account the possible influence of climate change. Dr Raupach explains that while it is not a climate attribution study, the team considered how hailstorms may behave in a warmer environment.

There is a general expectation that climate change may result in less frequent surface hail due to increased melting. The warmer atmosphere causes more hail to melt before it hits the ground. However, a warmer atmosphere would be more unstable, which means there may be larger hailstones. Large hailstones are more likely to survive increased melting, so hail that does occur could be larger, and therefore more severe.

The team discovered that the changes in hail-prone day patterns are primarily driven by changes in extreme atmospheric instability, which are very complex and regionally dependent. In regions with increased instability, there might be more generation of hail and larger hailstones, which may survive more melting. Conversely, in areas where instability decreases, there may be a dampening effect.

The links between climate change and hailstorms are multifaceted, and more work needs to be done to understand how these patterns will continue to change under warming conditions.

The findings of this study are crucial in our understanding of hail risks. According to Dr. Raupach, the agricultural and insurance industries, as well as city planning, can benefit from this information. Hailstorms can destroy crops and cause significant damage that leads to insured losses, making hail a primary concern for the insurance industry in Australia. The research underscores the importance of building resilient infrastructure that can withstand the potential increase in hail hazards in the future. Dr. Raupach aims to extend his research by using supercomputing and climate models to predict future trends in hailstorms and help plan for the impact of such changes on agriculture, insurance, and densely populated areas.

Representation of the organic molecules studied, surrounded by water molecules (image: Tárcius Nascimento Ramos)

Alternative method reduces supercomputer simulation time for the absorption spectrum from days to hours

The absorption of light by molecules has various applications in microscopy, medicine, and data storage. A Brazilian physicist has proposed an alternative method that reduces the time required for supercomputer simulation of the absorption spectrum from two days to a few hours. The study found the semi-empirical INDO/S method to be the most suitable for predicting the one- and two-photon absorption spectra of large molecules in solution, which should enable novel compounds to be developed more efficiently.

Absorption spectroscopy is an analytical chemistry tool that can determine the presence of a particular substance in a sample by measuring the intensity of the light absorbed as a function of wavelength. Measuring the absorbance of an atom or molecule can provide important information about electronic structure, quantum state, sample concentration, phase changes, or composition changes. It can also help determine the interaction of molecules with each other, and their potential technological applications.
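The quantity usually recorded in such measurements is the absorbance, which the Beer-Lambert law (a textbook relation, not a result of this study) ties to the properties of the sample: A(λ) = log10(I0 / I) = ε(λ) · c · l, where I0 and I are the incident and transmitted light intensities, ε(λ) is the molar absorption coefficient at wavelength λ, c is the concentration of the absorbing species, and l is the path length of light through the sample.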

Molecules that have a high probability of simultaneously absorbing two photons of low-energy light are highly useful in various fields. They can be used as molecular probes for high-resolution microscopy, as a substrate for data storage in dense three-dimensional structures, or as vectors in medicinal treatments.

Direct experimentation to study the phenomenon can be challenging, and computer simulation is often used alongside spectroscopic characterization. Simulations provide a microscopic view that is difficult to obtain in experiments. However, simulating relatively large molecules can take several days of processing by supercomputers or even months by conventional computers. To address this issue, physicist Tárcius Nascimento Ramos and his team have proposed an alternative calculation method in an article published in The Journal of Chemical Physics.

Ramos explains that they evaluated the performance of a semi-empirical method that was widely used in past decades but has been neglected in recent years because of its approximate nature. Using this method, they were able to reduce the calculation time to just four hours on a conventional computer. This low computing cost allowed them to consider a large statistical sample for simulations of molecules in solution, which isn't feasible with the currently dominant method, density functional theory (DFT). DFT is a mathematical tool used in quantum mechanics to describe the electronic properties of complex systems without having to investigate the individual wave functions of each electron.
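A common way to carry out this kind of statistical sampling is to compute an excitation spectrum for many solute-plus-solvent configurations with a fast electronic-structure method and then average the broadened spectra. The sketch below illustrates that averaging step only; the `cheap_excitations` routine is a stand-in for a real semi-empirical calculation such as INDO/S, and the data are synthetic.

```python
import numpy as np

def broadened_spectrum(energies, strengths, grid, width=0.1):
    """Sum Gaussian lines centred at each excitation energy (eV)."""
    spectrum = np.zeros_like(grid)
    for e, f in zip(energies, strengths):
        spectrum += f * np.exp(-((grid - e) ** 2) / (2.0 * width ** 2))
    return spectrum

def averaged_spectrum(snapshots, cheap_excitations, grid):
    """Average one broadened spectrum per solute/solvent snapshot.

    `cheap_excitations(snapshot)` stands in for a fast calculation that
    returns (excitation energies, oscillator strengths); it is a
    placeholder, not a real INDO/S interface.
    """
    spectra = [broadened_spectrum(*cheap_excitations(s), grid) for s in snapshots]
    return np.mean(spectra, axis=0)

# Illustrative use with fabricated numbers: 100 snapshots, 3 excited states each.
rng = np.random.default_rng(1)
fake = lambda s: (3.5 + 0.2 * rng.standard_normal(3), np.abs(rng.standard_normal(3)))
grid = np.linspace(2.0, 5.0, 600)            # photon energy grid in eV
spectrum = averaged_spectrum(range(100), fake, grid)
```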

"The alternative method we used was INDO/S [intermediate neglect of differential overlap with spectroscopic parameterization]. It's based on the wave function of the molecular system but resolves approximately. Parts of the complex and computationally costly calculations are replaced by tabulated values obtained by adjusting experimental spectroscopic data. This makes the method highly efficient for theoretical studies of large molecular compounds," Ramos explained.

The practicality of the method is evident from the molecule investigated in this study, a stilbene derivative containing over 200 carbon, oxygen, and hydrogen atoms. Such large and flexible molecules pose a challenge because their electronic properties change when their shape changes, making conventional simulations time-consuming and expensive.

At the end of the study, the researchers characterized the one- and two-photon absorption spectra for this class of molecules, bridging the experimental gap. The most suitable method for predicting the absorption spectra of large molecules in solution was found to be the often neglected semi-empirical method. This discovery can pave the way for molecular engineers to develop novel compounds more efficiently for various applications.

It is worth noting the difference between one- and two-photon absorption. Molecules absorb photons only when they can assume excited states compatible with the energy of the photons. The selection rules for one-photon absorption differ from those for two-photon absorption, and the latter's non-linear optical nature gives it a high spatial resolution of excitation that makes it better suited to refined applications.
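That spatial selectivity follows from how the two processes scale with light intensity, a general property of non-linear optics rather than a finding of this particular study: the one-photon absorption rate grows linearly with intensity (R ∝ I), while the two-photon rate grows with its square (R ∝ I²). Because I² falls off much more steeply away from the focus of a laser beam than I does, two-photon excitation is effectively confined to a tiny volume around the focal point, which is what gives it its high spatial resolution.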

"Microscope imaging with two-photon absorption has far higher resolution and can be used to characterize deep tissue with less damage to the surrounding structures. In the case of data storage, the high resolution means 3D structures can be created with precision and plenty of detail, so that points inside materials can be encoded with high data density per volume," Ramos explained.

Computer modeling of two-photon absorption by organic molecules in solution was the subject of Ramos's Ph.D. research. The JCP article represents another step forward in this investigation.

Mika Gustafsson and David Martínez hope that AI-based models could eventually be used in precision medicine to develop treatments and preventive strategies tailored to the individual. Thor Balkhed

Artificial intelligence is paving the way for precision medicine

Artificial Intelligence (AI) models can accurately estimate a person's age and determine whether or not they have a history of smoking. Researchers at Linköping University have developed an AI-based method that can be used to address various medical and biological questions. The method can identify both previously known epigenetic markers and new ones associated with particular conditions. The models compress complex biological data and extract its most relevant characteristics and patterns. The ultimate objective is to create an interpretable AI model that can help explain why a person does or does not become ill.

Epigenetics refers to the regulation of gene activity, which can be compared to a power switch that turns genes on or off without altering the genes themselves. This process can be influenced by several factors, such as smoking, dietary habits, and environmental pollution. To lay the groundwork for personalized treatments and preventive strategies, researchers at Linköping University (LiU) have trained numerous AI neural network models using epigenetic data from over 75,000 human samples. These models are of the autoencoder type, which self-organizes the data and identifies patterns of interrelation in the vast amount of information.
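As a rough illustration of the kind of model involved (a minimal sketch, not the architecture used by the LiU group), an autoencoder compresses each sample's epigenetic profile into a small latent vector and is trained to reconstruct the original profile from it:

```python
import torch
from torch import nn

class Autoencoder(nn.Module):
    """Minimal autoencoder sketch for methylation-style data.

    Layer sizes and dimensions are illustrative placeholders.
    """
    def __init__(self, n_sites=5000, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_sites, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, n_sites), nn.Sigmoid(),  # methylation values lie in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)            # compressed representation
        return self.decoder(z), z      # reconstruction plus latent vector

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 5000)               # stand-in batch of methylation profiles
reconstruction, latent = model(x)
loss = nn.functional.mse_loss(reconstruction, x)
loss.backward()
optimizer.step()
```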

Research conducted by LiU scientists has revealed that smoking leaves permanent traces on the DNA even after a person quits smoking. The researchers developed a model of the effects of smoking on the body and compared it with existing models, which are based on specific epigenetic changes that occur in the lungs as a result of smoking. The new model can detect whether someone is a current, former, or non-smoker. Additionally, other models that use epigenetic markers can estimate an individual's chronological age or group individuals according to their health status.

The researchers at Linköping University trained an autoencoder and used its output to classify individuals by age and smoking status, and to diagnose the disease systemic lupus erythematosus (SLE). While existing models depend on hand-selected epigenetic markers, the autoencoders developed by the researchers performed equally well or better. The researchers also discovered that their models could identify new markers associated with the condition being examined, such as markers for respiratory diseases and DNA damage. The autoencoder models were designed to compress complex biological data into a representation of the most relevant characteristics and patterns in the data. The researchers let the data speak for itself: the autoencoder self-organized the data in a way that mirrors how it is organized in the body. Using the most important characteristics found by the autoencoder, the researchers can build models to classify many environment-related, individual-specific factors for which there is not enough training data to train complex AI models from scratch.
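Once such an autoencoder is trained, its compressed representation can feed much simpler downstream classifiers that need far less labelled data. The sketch below shows that second step with fabricated latent features and labels; it is not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Fabricated example: latent features from a trained autoencoder plus a small
# set of hypothetical smoking-status labels (0 = never smoked, 1 = has smoked).
rng = np.random.default_rng(0)
latent_features = rng.standard_normal((300, 32))   # 300 samples, 32 latent dimensions
labels = rng.integers(0, 2, size=300)

classifier = LogisticRegression(max_iter=1000)
scores = cross_val_score(classifier, latent_features, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```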

It is sometimes difficult to understand how certain types of AI work. They are like black boxes that provide answers, but it is unclear how they arrived at those answers. However, Mika Gustafsson and his team are working on creating interpretable AI models that let researchers look inside the black box and better understand how the AI works. This is important because it helps us understand why certain conditions and diseases occur, not just whether someone is affected or not. The research was funded by several organizations, including the Swedish Research Council, the Wallenberg AI, Autonomous Systems and Software Program (WASP), and the SciLifeLab & Wallenberg National Program for Data-Driven Life Science (DDLS).

Astronomers use a special technique to find stellar streams. They reverse the light and dark tones of images, similar to negative images, but stretch them to highlight the faint streams. Color images of nearby galaxies are scaled and superposed to emphasize the visible disk. These galaxies are surrounded by massive halos of hot gas containing sporadic stars, which are seen as the shadowy areas around each galaxy. NASA's upcoming Nancy Grace Roman Space Telescope is expected to improve these observations by resolving individual stars, allowing for a better understanding of each stream's stellar populations and the ability to spot stellar streams of various sizes in more galaxies. Credit: Carlin et al. (2016), based on images from Martínez-Delgado et al. (2008, 2010)

NASA's Roman mission prepares to handle a massive amount of data in the future

The Nancy Grace Roman Space Telescope (Roman) team is preparing for the deluge of data the mission will return by creating simulations, scouting the skies with other telescopes, calibrating Roman’s components, and more. Simulations will be used to test algorithms, estimate Roman’s scientific return, and fine-tune observing strategies so that the most can be learned about the universe. Roman will also identify interesting targets that observatories such as NASA’s James Webb Space Telescope can zoom in on for more detailed studies.

As part of a mission to uncover the mysteries of dark energy, scientists from around the world will work together to maximize the potential of the Roman telescope, which is expected to launch by May 2027. To ensure that scientists are equipped with the necessary tools, various teams and individuals will contribute their efforts to the cause. Julie McEnery, the senior project scientist at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, said that they are laying a foundation by harnessing the science community at large, with the goal of performing powerful scientific research right from the start. Simulations play a vital role in this preparation phase: scientists can use them to test algorithms, estimate Roman’s scientific returns, and fine-tune observation strategies.

The teams will sprinkle different cosmic phenomena through a simulated dataset and then run machine learning algorithms to see how well they can automatically find the phenomena. Given Roman’s enormous data collection rate, identifying underlying patterns quickly and efficiently will be crucial. During its five-year primary mission, the Roman telescope is expected to amass 20,000 terabytes (20 petabytes) of observations containing trillions of individual measurements of stars and galaxies.
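A common way to test such pipelines is an injection-recovery exercise: synthetic signals are "sprinkled" into simulated data and a detection algorithm is scored on how many it recovers. The sketch below does this for idealized brightening events in noisy light curves; it is purely illustrative and not part of the Roman software.

```python
import numpy as np

rng = np.random.default_rng(42)
n_curves, n_points = 1000, 200
injected = rng.random(n_curves) < 0.1            # ~10% of light curves get a signal

light_curves = rng.normal(0.0, 1.0, size=(n_curves, n_points))
for i in np.where(injected)[0]:
    start = rng.integers(0, n_points - 20)
    light_curves[i, start:start + 20] += 5.0     # inject a simple brightening "bump"

# Toy detector: flag a curve if its 20-point running mean exceeds a threshold.
kernel = np.ones(20) / 20.0
smoothed = np.array([np.convolve(c, kernel, mode="valid") for c in light_curves])
detected = smoothed.max(axis=1) > 2.0

recall = (detected & injected).sum() / injected.sum()
false_positives = (detected & ~injected).sum() / (~injected).sum()
print(f"recall: {recall:.2f}, false-positive rate: {false_positives:.2f}")
```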

Preparing for the launch of the Roman Space Telescope is a complex process, as every observation made by the telescope will be used by multiple teams for different scientific purposes. Scientists will carry out preliminary observations using other telescopes such as the Hubble Space Telescope, the Keck Observatory, and PRIME. These observations will help to optimize Roman’s observations and better understand the data the mission will deliver.

Astronomers will explore ways to combine data from different observatories and use multiple telescopes in tandem. For instance, combining observations from PRIME and Roman would help astronomers learn more about objects found via warped space-time. Roman scientists will also use archived Hubble data to learn about the history of cosmic objects and identify interesting targets that telescopes such as the James Webb Space Telescope can study in detail.

Planning for each Roman science case will take many teams working in parallel. Scientists will need to consider all the things needed to study a particular object, such as algorithms for dim objects, ways to measure star positions precisely, understanding detector effects, and developing effective strategies to image stellar streams.

One team is developing processing and analysis software for Roman’s Coronagraph Instrument, which will demonstrate several cutting-edge technologies that could help astronomers directly image planets beyond our solar system. They will simulate different objects and planetary systems the Coronagraph could reveal, from dusty disks surrounding stars to old, cold worlds similar to Jupiter.

The mission’s science centers are getting ready to manage Roman’s data pipeline and establish systems for planning and executing observations. They will convene a survey definition team to determine Roman’s optimal observation plans in detail based on all the preparatory information generated by scientists and the interests of the broader astronomical community.

The team is excited to set the stage for Roman and ensure that each of its future observations will contribute to a wealth of scientific discoveries.