Figure 1. Example raw (top row) and convolved, noisy (bottom row) channel maps in a disk with a planet present. The planet (circled in white) is visible as a kink in the right column. The opposite velocity channel is shown in the left column, and the systemic channel is shown in the middle column. The beam size is indicated in the bottom middle image (solid white circle). This disk is one of the smallest and farthest simulated and is observed with some of the worst spatial resolutions, which is why the beam is so large.

Terry deploys ML on JWST data for discovering exoplanets

New research from the University of Georgia demonstrates that machine learning can be used to find exoplanets – planets outside our solar system – a result that could reshape how scientists detect and identify new planets very far from Earth.

“One of the novel things about this is analyzing environments where planets are still forming,” said Jason Terry, a doctoral student in the UGA Franklin College of Arts and Sciences department of physics and astronomy and lead author on the study. “Machine learning has rarely been applied to the type of data we’re using before, specifically looking at systems that are still actively forming planets.” 

The first exoplanet was found in 1992, and though more than 5,000 are now known, those have been among the easiest for scientists to find. Exoplanets at the formation stage are difficult to see for two primary reasons: they are far away, often hundreds of light-years from Earth, and the discs where they form are thicker than the distance from Earth to the sun. Data suggest these planets tend to sit in the middle of those discs, leaving a signature in the dust and gas they kick up.

The research showed that artificial intelligence can help scientists overcome these difficulties.

“This is a very exciting proof of concept,” said Cassandra Hall, assistant professor of astrophysics, principal investigator of the Exoplanet and Planet Formation Research Group, and co-author of the study. “The power here is that we used exclusively synthetic telescope data generated by computer simulations to train this AI, and then applied it to real telescope data. This has never been done before in our field, and paves the way for a deluge of discoveries as James Webb Telescope data rolls in.”
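
As Hall describes, the model is trained entirely on synthetic, simulated observations and only then applied to real telescope data. A minimal sketch of that sim-to-real workflow, assuming a small PyTorch CNN and hypothetical file names (synthetic_maps.npy, labels.npy, real_maps.npy), might look like the following; it illustrates the idea rather than the study's actual architecture or data:

```python
# Minimal sim-to-real sketch: train a small CNN on simulated channel maps,
# then score real observations. File names and network size are illustrative.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic training set: (N, H, W) channel maps, labels 1 = planet, 0 = no planet
maps = torch.tensor(np.load("synthetic_maps.npy"), dtype=torch.float32).unsqueeze(1)
labels = torch.tensor(np.load("labels.npy"), dtype=torch.float32)
loader = DataLoader(TensorDataset(maps, labels), batch_size=32, shuffle=True)

model = nn.Sequential(                      # small CNN classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                       # logit for "planet present"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):                     # train on synthetic data only
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)
        loss.backward()
        optimizer.step()

# Apply the synthetic-trained model to real observations
real = torch.tensor(np.load("real_maps.npy"), dtype=torch.float32).unsqueeze(1)
with torch.no_grad():
    planet_probability = torch.sigmoid(model(real).squeeze(1))  # one score per map
print(planet_probability)
```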

The James Webb Space Telescope, launched by NASA in 2021, has inaugurated a new level of infrared astronomy, bringing stunning new images and reams of data for scientists to analyze. It’s just the latest iteration of the agency’s quest to find exoplanets, which are scattered unevenly across the galaxy. The Nancy Grace Roman Space Telescope, a 2.4-meter survey telescope scheduled to launch in 2027 that will study dark energy and exoplanets, will be the next major expansion in capability – and in the volume of information and data to comb through – in the search for life elsewhere in the universe.

The Webb telescope gives scientists the ability to observe exoplanetary systems at exceptional brightness and resolution, and the forming environments themselves are a subject of great interest because they determine the resulting planetary system.

“The potential for good data is exploding, so it’s a very exciting time for the field,” Terry said.

New analytical tools are essential 

Next-generation analytical tools are urgently needed to meet this high-quality data, so scientists can spend more time on theoretical interpretations rather than meticulously combing through the data in search of tiny signatures.

Figure 2. Example raw (top row) and convolved, noisy (bottom row) channel maps in a disk without a planet present. The beam size is indicated in the bottom middle image.

“In a sense, we’ve sort of just made a better person,” Terry said. “To a large extent the way we analyze this data is you have dozens, hundreds of images for a specific disc and you just look through and ask ‘is that a wiggle?’ then run a dozen simulations to see if that’s a wiggle and …  it’s easy to overlook them – they’re really tiny, and it depends on the cleaning, and so this method is one, really fast, and two, its accuracy gets planets that humans would miss.”

Terry says this is what machine learning can already accomplish: it augments human capacity, saves time and money, and helps efficiently guide scientific time, investments, and new proposals.

“There remains, within science and particularly astronomy in general, skepticism about machine learning and of AI, a valid criticism of it being this black box – where you have hundreds of millions of parameters and somehow you get out an answer. But we think we’ve demonstrated pretty strongly in this work that machine learning is up to the task. You can argue about interpretation. But in this case, we have very concrete results that demonstrate the power of this method.”

The research team’s work is designed to lay a concrete foundation for future applications to observational data, demonstrating the method’s effectiveness using simulated observations.

China could have a great opportunity to solidify its leading role in the renewable energy market, but this requires commitment to phasing out coal. Photo: Chris LeBoutillier / Unsplash

German scientists’ simulation shows a global coal exit can happen only with stronger policies and swift action by China

Current climate policies including efforts like the Powering Past Coal Alliance will not add up to a global coal exit, a new study shows. Countries phasing coal out of the electricity sector need to broaden their policy strategy, or else they risk pushing the excess coal supply into other industries at home, like steel production. The scientists find that China has an opportunity to dominate the renewable energy technology market if it begins phasing down coal immediately. Otherwise, it could dangerously delay the renewable energy breakthrough worldwide.

“It’s really a make-or-break moment,” says Stephen Bi from the Potsdam Institute for Climate Impact Research (PIK) and Potsdam University in Potsdam, Germany, lead author of the study. “Our computer simulation of climate economics and policy making indicates that current policies lead the world to less than a 5 percent likelihood of phasing out coal by mid-century. This would leave minimal chances of reaching net-zero emissions by 2050 and limiting disastrous climate risks.”

“The most shocking result was that even though most countries decide to stop burning coal for electricity during the simulation, this has almost zero impact on total future coal use,” says Bi. “We then dug deeper into this perplexing result to identify what policymakers can do to actually achieve the coal exit.”

Carbon pricing and coal mining phase-out would be effective policies

Investigating the Powering Past Coal Alliance, launched at the world climate summit COP23 in 2017, the scientists sought to understand whether these countries’ efforts to cut coal would make it easier or harder for other countries to follow suit. That is, the coalition may grow as member states work to modernize their electricity sectors, but it may also lead to a rebound in coal use globally. The latter effect, often referred to as ‘leakage’, can arise through market effects: if demand decreases in some places, so do prices, which in turn can increase demand elsewhere.
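
The market mechanism behind leakage can be made concrete with a toy calculation; every number below is made up for illustration and none comes from the study:

```python
# Toy arithmetic for coal "leakage": a demand cut lowers the world price,
# and the lower price raises demand elsewhere. All numbers are illustrative.
world_demand_mt    = 8000.0  # global coal demand, Mt/yr (illustrative)
outside_demand_mt  = 7000.0  # demand outside the coalition, Mt/yr (illustrative)
coalition_cut_mt   = 400.0   # cut by coalition power sectors, Mt/yr (illustrative)
price_response     = 0.8     # assumed % price drop per % drop in world demand
outside_elasticity = 0.3     # assumed % rise in outside demand per % price drop

demand_drop_pct  = coalition_cut_mt / world_demand_mt * 100      # 5.0 %
price_drop_pct   = price_response * demand_drop_pct              # 4.0 %
rebound_pct      = outside_elasticity * price_drop_pct           # 1.2 %
rebound_mt       = outside_demand_mt * rebound_pct / 100         # 84 Mt
leakage_fraction = rebound_mt / coalition_cut_mt                 # ~0.21
print(f"~{100 * leakage_fraction:.0f}% of the coalition's cut reappears elsewhere")
```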

Interestingly, the scientists’ supercomputer simulation shows that the most concerning leakage effect, in this case, may actually arise within the Alliance itself rather than through international coal markets. Although the Powering Past Coal Alliance is expected to grow, its pledge is limited to the electricity sector. This means that countries that join can actually increase their coal use in steel, cement, and chemicals production, greatly hindering the potential of this initiative.

“The greatest risk to the coal exit movement may actually come from free-riding sectors in coalition members. Unregulated industries can take advantage of falling coal prices at home and use more coal than they otherwise would have,” says co-author Nico Bauer, also from PIK.

The scientists conclude that additional strong policies are needed to avoid this effect. “The coal exit debate has to look beyond the power sector and also include heavy industry. Carbon pricing would be the most efficient instrument to close loopholes in domestic regulations, while restrictions on coal mining and exports would go the furthest to deter free-riding abroad,” continues Bauer.

“A golden opportunity for China” – if it acts quickly

“China plays a special role since it produces and consumes more than half of all coal globally. The Chinese government must act swiftly to curtail the coal-driven Covid recovery,” says Bi. “The current coal plans jeopardize China’s recent promise to peak domestic emissions before 2030 and to achieve net-zero emissions by 2060. The computer simulation gives China roughly fifty-fifty odds of joining the Alliance, and it only falls on the right side of that line if it stops building coal plants by 2025.”

Further, the simulation shows that the Alliance only manages to boost solar and wind energy expansion if China decides to phase out coal. China would thus have “a golden opportunity to solidify its leading role in the renewable energy market and unleash sustainable development opportunities worldwide, but this requires a commitment to phasing out coal,” explains Bi. “If not, then it becomes less clear how we’ll achieve sufficient diffusion of renewables worldwide. China’s actions today can position it to either lead or impede the global energy transition.”

Innovative first real-world policy-making supercomputer simulation

These insights are substantially more robust than those of most previous analyses because the scientists used a data-driven approach for simulating real-world policy making, called Dynamic Policy Evaluation, for the first time. “Scientific analysis of future emissions is subject to large uncertainties, not least regarding policies. We were able to determine that coal-exit commitments often depend on certain domestic pre-conditions, which enabled us to remove some of the uncertainty on their emission impacts. Our new approach is thus the first to coherently simulate policy adoption in future scenarios in a way that is also in line with historical evidence,” says co-author Jessica Jewell from the Chalmers University of Technology.
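
The “less than a 5 percent likelihood” figure quoted earlier is the kind of number that falls out of such a scenario ensemble: run the simulation many times with uncertain policy adoption and count how often a coal exit happens. The toy sketch below shows only that counting idea; it is not the integrated assessment model used in the study, and all parameters are invented:

```python
# Toy ensemble: probability of a coal exit by 2050 estimated as the fraction of
# stochastic scenario runs that achieve it. Purely illustrative; the model,
# threshold, and parameters are invented and are not those used in the study.
import numpy as np

rng = np.random.default_rng(42)

def coal_exit_by_2050(policy_ambition: float) -> bool:
    """One stochastic scenario run: noisy policy diffusion vs. an exit threshold."""
    realized_ambition = rng.normal(loc=policy_ambition, scale=0.15)
    return realized_ambition > 0.75      # arbitrary level needed for a full exit

current_policies = 0.5                   # assumed ambition of today's policies
runs = [coal_exit_by_2050(current_policies) for _ in range(10_000)]
print(f"estimated likelihood of a coal exit by 2050: {np.mean(runs):.1%}")  # ~5%
```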

“The G20 has initiated the phase-out of international public finance for coal projects. We are now assessing how much political momentum this can potentially impart to the Powering Past Coal Alliance,” says PIK Director Ottmar Edenhofer. “Things are therefore looking somewhat more promising, but we must account for negative feedback alongside the positive for a balanced assessment of policy diffusion in our multipolar world. What remains clear is that governments must take a much more active approach to phasing out coal if they want to stay true to their climate promises.”

NASA/Chris Gunn

High-gain antenna for Roman mission clears environmental tests to downlink the highest data volume of any NASA astrophysics mission

Engineers at NASA's Goddard Space Flight Center in Greenbelt, Maryland, have finished testing the high-gain antenna for the Nancy Grace Roman Space Telescope. When it launches by May 2027, this NASA observatory will help unravel the secrets of dark energy and dark matter, search for and image exoplanets, and explore many topics in infrared astrophysics. Pictured above in a test chamber, the antenna will provide the primary communication link between the Roman spacecraft and the ground. It will downlink the highest data volume of any NASA astrophysics mission so far.

The antenna reflector is made of a carbon composite material that weighs very little but will still withstand the spacecraft’s wide temperature fluctuations. The dish spans 5.6 feet (1.7 meters) in diameter, standing about as tall as a refrigerator, yet only weighs 24 pounds (10.9 kilograms). Its large size will help Roman send radio signals across a million miles of intervening space to Earth. At one frequency, the dual-band antenna will receive commands and send back information about the spacecraft’s health and location. It will use another frequency to transmit a deluge of data at up to 500 megabits per second to ground stations in New Mexico, Australia, and Japan. These locations are spread out so the Roman team will consistently be able to communicate with the spacecraft.
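
For a sense of what 500 megabits per second means in practice, here is a back-of-the-envelope calculation; the daily contact time is an assumption for illustration, not a mission figure:

```python
# Back-of-the-envelope data volume through the high-rate downlink.
# The hours of ground-station contact per day are assumed, not a mission spec.
rate_bps = 500e6                 # 500 megabits per second (quoted maximum)
contact_hours_per_day = 8        # assumed daily contact time (illustrative)

bits_per_day = rate_bps * contact_hours_per_day * 3600
terabytes_per_day = bits_per_day / 8 / 1e12
print(f"~{terabytes_per_day:.1f} TB per day at full rate")   # ~1.8 TB/day
```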

Producing this antenna was a coordinated effort between the government and the commercial sector. NASA was responsible for the radio frequency design and fabrication of the feed assemblies. A commercial partner, Applied Aerospace Structures Corporation (AASC) in Stockton, California, was contracted for the final flight mechanical design and fabrication of the composite reflector and strut assembly. The completed antenna was delivered to NASA in December. Engineers at AASC and Goddard have extensively tested it to confirm it will operate as expected in the extreme environment of space, where it will experience a temperature range of minus 26 to 284 degrees Fahrenheit (minus 32 to 140 degrees Celsius). The team also put the antenna through vibrational testing to make sure it will withstand the spacecraft’s launch. Engineers measured the antenna’s performance in a radio-frequency anechoic test chamber, shown in the photo above. Every surface in the test chamber is covered in pyramidal foam pieces that minimize interfering reflections during testing. Next, the team will attach the antenna to the articulating boom assembly, and then electrically integrate it with Roman’s Radio Frequency Communications System.

Part of the set-up for creating medium-density amorphous ice: ordinary ice and steel balls in a jar (not amorphous ice). Credit: Christoph Salzmann

UK scientists create a new form of amorphous ice and model it at the atomic scale in a supercomputer

A collaboration between scientists at Cambridge and UCL has led to the discovery of a new form of ice that more closely resembles liquid water than any other and may hold the key to understanding this most famous of liquids.

The new form of ice is amorphous. Unlike ordinary crystalline ice where the molecules arrange themselves in a regular pattern, in amorphous ice, the molecules are in a disorganized form that resembles a liquid.

The team created a new form of amorphous ice in an experiment and achieved an atomic-scale model of it in a supercomputer simulation. The experiments used a technique called ball-milling, which grinds crystalline ice into small particles using metal balls in a steel jar. Ball-milling is regularly used to make amorphous materials, but it had never been applied to ice.

The team found that ball-milling created an amorphous form of ice that, unlike all other known ices, had a density similar to that of liquid water and resembled liquid water in solid form. They named the new ice medium-density amorphous ice (MDA).

To understand the process at the molecular scale, the team employed supercomputer simulation. By mimicking the ball-milling procedure through repeated random shearing of crystalline ice, the team successfully created a supercomputer model of MDA.
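
A heavily simplified sketch of the “repeated random shearing” idea is shown below, applied to a generic box of particle coordinates with NumPy; it only illustrates the shearing loop and is not the molecular-dynamics workflow the team actually used:

```python
# Minimal sketch: repeatedly apply small random shear deformations to a periodic
# box of particle positions. This only illustrates the shearing idea; the actual
# study used full molecular simulations of water, which this does not reproduce.
import numpy as np

rng = np.random.default_rng(0)
n_particles, box = 1000, 1.0
pos = rng.random((n_particles, 3)) * box          # stand-in for ice coordinates

def random_shear(strain=0.05):
    """Shear matrix with one small random off-diagonal strain component."""
    s = np.eye(3)
    i, j = rng.choice(3, size=2, replace=False)   # pick a random shear plane
    s[i, j] = rng.uniform(-strain, strain)
    return s

for step in range(500):                           # "ball-milling" as repeated shear events
    pos = pos @ random_shear().T
    pos %= box                                    # wrap back into the periodic box

# In a real workflow, each shear event would be followed by an energy minimisation
# or short dynamics run before analysing the resulting amorphous structure.
print(pos.mean(axis=0))
```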

“Our discovery of MDA raises many questions on the very nature of liquid water and so understanding MDA’s precise atomic structure is very important,” said co-author Dr. Michael Davies, who carried out the computational modeling. “We found remarkable similarities between MDA and liquid water.”

A happy medium

Amorphous ices have been suggested to be models for liquid water. Until now, there have been two main types of amorphous ice: high-density and low-density amorphous ice.

As the names suggest, there is a large density gap between them. This density gap, combined with the fact that the density of liquid water lies in the middle, has been a cornerstone of our understanding of liquid water. It has led in part to the suggestion that water consists of two liquids: one high- and one low-density liquid.
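
Written out, the density ordering looks like this (the numbers are approximate literature values at ambient pressure, not figures quoted in this article), with MDA reported to fall inside the gap:

```latex
\[
\underbrace{\rho_{\mathrm{LDA}}}_{\approx\,0.94\ \mathrm{g\,cm^{-3}}}
\;<\;
\underbrace{\rho_{\mathrm{water}}}_{\approx\,1.00\ \mathrm{g\,cm^{-3}}}
\;<\;
\underbrace{\rho_{\mathrm{HDA}}}_{\approx\,1.13\ \mathrm{g\,cm^{-3}}}
\]
```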

Senior author Professor Christoph Salzmann said: “The accepted wisdom has been that no ice exists within that density gap. Our study shows that the density of MDA is precisely within this density gap and this finding may have far-reaching consequences for our understanding of liquid water and its many anomalies.”

A high-energy geophysical material

The discovery of MDA raises the question: where might it exist in nature? Shear forces were found to be vital to creating MDA in this study. The team suggests ordinary ice could undergo similar shear forces in the icy moons of gas giants such as Jupiter, due to the tidal forces those planets exert.

Moreover, MDA displays one remarkable property not found in other forms of ice. Using calorimetry, the team discovered that when MDA recrystallizes to ordinary ice, it releases an extraordinary amount of heat. The heat released by the recrystallization of MDA could play a role in activating tectonic motions. More broadly, this discovery shows water can be a high-energy geophysical material.

Professor Angelos Michaelides, the lead author from Cambridge's Yusuf Hamied Department of Chemistry, said: “Amorphous ice in general is said to be the most abundant form of water in the universe. The race is now on to understand how much of it is MDA and how geophysically active MDA is.”

Image sequence of projectiles being fired through three different steel plates. The pictures on the right show the exit holes in the various plates.

NTNU's SIMLab supercomputing shows how to create buildings that can withstand the most extreme stress loads

In an explosion, fragments and debris can be ejected at great speed and strike the surroundings. Then comes the shock wave. It’s a scary combination.

Combined ballistic impacts pose a major challenge for engineers who build structures that must withstand extreme stresses. The combination of blast pressure and high-speed impact increases the chance of greater damage. Benjamin Stavnar Elveli, a Ph.D. candidate at the Norwegian University of Science and Technology (NTNU), describes it as the scariest stress there is.

“These combined impacts work in the same way as shrapnel bombs,” he says.

Infrastructure shift from massive and military to light and civilian

In the past, protective structures have involved massive concrete military buildings. In recent decades, new threats have emerged, and the need to protect civilian buildings and structures in urban areas has increased.

This has fuelled interest in lighter, thin-walled solutions that can withstand large deformations without cracking and collapsing.

The regulations have not followed this same development. No standards address this type of load yet, and research in the field is very limited.

Elveli has investigated how different types of thin steel plates behave when exposed to such extreme stress loads. His work can help to establish guidelines for how resistant, lightweight structures should be designed.

Initial projectiles do the most damage

Whether they occur by accident or on purpose, explosions can cause massive damage. Debris and fragments – parts of buildings, cars, gravel, or stones – can be torn loose and flung outwards. When they hit, they act like projectiles.

Elveli says that any buildings, cars, or other objects in the vicinity would be exposed to a load that is more serious than if either stress load occurred alone. The damage is believed to be greatest when fragments hit first.

“That’s because the structure already has a defect or weakness from the projectile and then has to withstand the shock wave itself,” he says. “Most often, cracking and destruction start in the weak spots.”

Safer structures, safer society

Elveli's Ph.D. is based on more than 80 small-scale explosion tests on three different types of steel. By combining physical experiments with theory and mathematical modeling, he has recreated explosive loads in supercomputer simulations. The aim is to gain as much control as possible over how structures react to such loads.

The more scientists understand the actual physics of these loads, the more accurate, safe, and sustainable solutions the construction engineers of the future can deliver.

The danger of overestimating the strength

A shock wave can last for several milliseconds and cause great destruction over a large area. A fragment moves even faster and produces concentrated damage. Simulating this combined effect means that you have to describe two completely different phenomena in one and the same model. It's complicated.
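
To make the “two phenomena in one model” point concrete: the blast side is often idealized with a Friedlander-type pressure pulse, a standard textbook form (not necessarily the loading description used in this work). A sketch with made-up parameters:

```python
# Idealized Friedlander blast overpressure: p(t) = p0 * (1 - t/td) * exp(-b*t/td).
# Peak pressure, duration, and decay constant are made-up illustrative values.
import numpy as np

p0 = 300e3      # peak overpressure [Pa] (illustrative)
td = 3e-3       # positive-phase duration [s], i.e. "several milliseconds"
b  = 1.5        # decay constant (illustrative)

t = np.linspace(0.0, td, 200)
p = p0 * (1 - t / td) * np.exp(-b * t / td)

impulse = np.trapz(p, t)    # specific impulse of the positive phase [Pa*s]
print(f"peak {p0/1e3:.0f} kPa, duration {td*1e3:.0f} ms, impulse {impulse:.0f} Pa*s")

# A fragment impact, by contrast, is over in microseconds and is highly local:
# in a combined simulation it typically appears as a pre-damaged region of the
# plate before this millisecond-scale pressure history is applied.
```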

“Often you’ll end up with some sort of trade-off. In order to capture the local weaknesses that occur during the explosion, we need to determine how accurate the descriptions of the impact of the fragments should be. If we don’t achieve full control of this, the model could overestimate the strength of the building to withstand the stress,” says Elveli.

Need solutions that can be trusted

Overestimating strength can have fatal consequences. The solutions that construction engineers deliver have to be dependable. A large part of Elveli’s doctoral work has been to investigate how accurate the models need to be to ensure reliable buildings.

A common approach has been to assume that the fragments hit before the shock wave arrives. The physical experiments then have to be divided into two separate sequences that follow each other. Often such studies use a simplified approach, where the test pieces have holes milled out by a machine to mimic damage from real fragments.

Overestimating resilience

Elveli has compared the behavior of machined plates with plates hit with real projectiles. Real projectiles created small petal-like cracks and deformation around the points of impact, whereas the pre-formed defects had perfectly even edges.

The explosion tests showed that the destruction started in the cracks and spread outwards. The researcher thus shows that the simplified approach has weaknesses.

“Idealized defects, like in the machined plates, are easier to test and simulate. But because they lack the deformations and damage that occur in real explosions, there’s a risk of exaggerating the strength of the materials in these models,” he says.

Great need for supercomputer simulations

Understanding the need to develop accurate supercomputer simulations is easy enough. Researchers who work with strength calculations cannot blow up actual buildings to test their resilience.

Elveli has put a lot of work into designing controlled and reliable small-scale explosion tests. He believes that his research will be useful for other researchers in the military and civilian arenas. For industrial use, precise and reliable simulations are currently expensive and time-consuming.

The many tests have produced large amounts of data that may interest the research and development departments of large companies. Elveli’s work makes it possible to simulate how structures behave when they are bent, stretched, or otherwise deformed.

In total, he carried out 110 tests, of which 82 were explosion experiments. High-speed cameras filming at 37,000 frames per second captured the details as the steel plates were damaged. Elveli obtained his doctorate at NTNU’s SIMLab/Department of Structural Engineering.