Schematic showing the expected future (2081-2100) warming pattern and the corresponding changes in winds and ocean heat transport for an SSP5-8.5 greenhouse gas emission scenario / Figure credit: Sahil Sharma

Korea's Aleph supercomputer uncovers the mechanisms that cause uneven warming in the Indian Ocean

An international team of climate scientists uncovers the physical mechanisms that can cause uneven future warming in the Indian Ocean and corresponding shifts in monsoon precipitation. 

Even though global warming is a global phenomenon, some areas warm faster than others in response to increasing greenhouse gas concentrations. The resulting pattern of temperature differences causes large-scale changes in winds and weather systems, impacting societies and ecosystems. Previous work has mainly focused on the well-known Arctic amplification pattern and on the projected east-west temperature difference in the equatorial Pacific, which in turn can affect regions far outside the tropics. Little attention has been paid so far to the mechanisms that cause uneven warming in the Indian Ocean and the related impacts on wind and rainfall in adjacent land areas.

By analyzing data from one of the most extensive future climate change simulations performed to date (the ICCP/CESM Large-Ensemble simulation of the Community Earth System Model, version 2), the team of scientists from South Korea and Japan was able to pinpoint why over the next decades the tropical eastern Indian Ocean is expected to warm less than the Arabian Sea and the Southeastern Indian Ocean (Figure).

A key area identified by the researchers to explain the uneven Indian Ocean warming is the region west of Indonesia, where under present-day conditions colder deep waters occasionally upwell to the surface. This connection between surface and deeper ocean waters acts as a thermostat, which explains the weaker future warming signal there relative to other Indian Ocean areas. In the tropics, air rises over warmer areas and tends to sink over colder regions. The reduced eastern equatorial Indian Ocean warming is therefore accompanied by higher-than-normal sea level pressure and winds that blow toward the Arabian Sea.

“Changes in the winds automatically influence the ocean circulation. In our case, stronger future winds blowing from Indonesia towards the Arabian Peninsula push more tropical waters toward the Arabian Sea. This leads to accelerated warming of the ocean there,” says Sahil Sharma, a Ph.D. student at the IBS Center for Climate Physics (ICCP) and Pusan National University, South Korea, and first author of the study.

“Enhanced future warming in the Arabian Sea will further reduce atmospheric surface pressure and generate more rainfall, including in adjacent regions. Climate models show on average a 50% intensification of mean rainfall over the southern Arabian Peninsula and parts of western India by the year 2100 if CO2 emissions are not cut drastically,” says Prof. Kyung-Ja Ha from the IBS Center for Climate Physics and Pusan National University, corresponding author of the study.

The authors further highlight the southeastern Indian Ocean as a regional global warming hotspot. In this region, future surface warming induced by greenhouse gases can be amplified further by a reduction in cloud cover and more sunshine reaching the ocean surface.

“The Indian Ocean is a peculiar region. Compared to other ocean basins, its currents are less efficient in transporting the excess heat from global warming to higher latitudes, because the northern transport route is blocked by land. This means that we find a very different temperature response pattern in the Indian Ocean, which emerges from a combination of oceanic and atmospheric processes, including winds and clouds,” says Dr. Keith Rodgers from the IBS Center for Climate Physics, co-author of the study.

This study was made possible by a large climate model experiment previously conducted on the ICCP/IBS supercomputer Aleph. Rather than running a climate model only once into the future for a given greenhouse gas forcing, the researchers ran 100 simulations into the future, each representing a different realization of the internal variability in the climate system.

This new modeling resource has been instrumental in identifying the complex interplay between the ocean and atmosphere which is responsible for the Indian Ocean warming pattern. This understanding will be useful for informing fisheries and other marine resource management concerns.

The research team will further explore how climate change will impact other regional processes in the Indian Ocean area, including marine ecosystems and sea levels.

NASA's Solar Dynamics Observatory captured this image of a solar flare on Oct. 2, 2014. The solar flare is the bright flash of light at top. A burst of solar material erupting out into space can be seen just to the right of it. Credits: NASA/SDO

NASA-built supercomputer model DAGGER makes predictions with AI to give time to prepare for solar storms

Like a tornado siren for life-threatening storms in America’s heartland, a new supercomputer model that combines artificial intelligence (AI) and NASA satellite data could sound the alarm for dangerous space weather.

The model uses AI to analyze spacecraft measurements of the solar wind (an unrelenting stream of material from the Sun) and predict where an impending solar storm will strike, anywhere on Earth, with 30 minutes of warning. This could provide just enough time to prepare for these storms and prevent severe impacts on power grids and other critical infrastructure.

The Sun constantly sheds solar material into space – both in a steady flow known as the “solar wind,” and in shorter, more energetic bursts from solar eruptions. When this solar material strikes Earth’s magnetic environment (its “magnetosphere”), it sometimes creates so-called geomagnetic storms. The impacts of these magnetic storms can range from mild to extreme, but in a world increasingly dependent on technology, their effects are growing ever more disruptive.

For example, a destructive solar storm in 1989 caused electrical blackouts across Quebec for 12 hours, plunging millions of Canadians into the dark and closing schools and businesses. The most intense solar storm on record, the Carrington Event in 1859, sparked fires at telegraph stations and prevented messages from being sent. If the Carrington Event happened today, it would have even more severe impacts, such as widespread electrical disruptions, persistent blackouts, and interruptions to global communications. Such technological chaos could cripple economies and endanger the safety and livelihoods of people worldwide.

In addition, the risk of geomagnetic storms, and of their devastating effects on our society, is presently increasing as we approach the next “solar maximum” – a peak in the Sun’s 11-year activity cycle – which is expected to arrive sometime in 2025.

To help prepare, an international team of researchers at the Frontier Development Lab – a public-private partnership that includes NASA, the U.S. Geological Survey, and the U.S. Department of Energy – has been using artificial intelligence (AI) to look for connections between the solar wind and the geomagnetic disruptions, or perturbations, that wreak havoc on our technology. The researchers applied an AI method called “deep learning,” which trains computers to recognize patterns based on previous examples. They used this type of AI to identify relationships between solar wind measurements from heliophysics missions (including ACE, Wind, IMP-8, and Geotail) and geomagnetic perturbations observed at ground stations across the planet.
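
The approach described above amounts to supervised learning: windows of recent solar-wind measurements are the inputs, and the ground-level magnetic perturbation some minutes later is the target. The toy sketch below illustrates only that framing, on synthetic data, with an ordinary least-squares fit standing in for the deep network; all feature names, window sizes, and numbers are illustrative assumptions, not the actual DAGGER pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "solar wind" features: imagine 3 quantities (speed, density,
# magnetic field Bz) sampled once per minute over a 30-minute window.
n_samples, window, n_features = 500, 30, 3
X = rng.normal(size=(n_samples, window * n_features))

# Synthetic target: the ground magnetic perturbation 30 minutes later,
# constructed here as a noisy linear response purely for demonstration.
true_w = rng.normal(size=X.shape[1])
y = X @ true_w + 0.1 * rng.normal(size=n_samples)

# Least-squares regression stands in for the deep-learning model:
# it learns a mapping from solar-wind history to future perturbation.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE: {rmse:.3f}")
```

A real model of this kind would be trained on years of multi-mission measurements and evaluated on held-out storms, as the DAGGER team did with the 2011 and 2015 events.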

DAGGER’s developers compared the model’s predictions to measurements made during solar storms in August 2011 and March 2015. At the top, colored dots show measurements made during the 2011 storm. Colors indicate the intensity of geomagnetic perturbations that can induce currents in electric grids, with orange and red indicating the strongest effects. DAGGER’s 30-minute forecast for that same time (bottom) shows the most intense perturbations in approximately the same locations around Earth’s north pole. Credits: V. Upendran et al.

From this, they developed a supercomputer model called DAGGER (formally, Deep Learning Geomagnetic Perturbation) that can quickly and accurately predict geomagnetic disturbances worldwide, 30 minutes before they occur. According to the team, the model can produce predictions in less than a second, and the predictions update every minute.

The DAGGER team tested the model against two geomagnetic storms that happened in August 2011 and March 2015. In each case, DAGGER was able to quickly and accurately forecast the storm’s impacts around the world.

Previous prediction models have used AI to produce local geomagnetic forecasts for specific locations on Earth. Other models that didn’t use AI have provided global predictions that weren’t very timely. DAGGER is the first model to combine the swift analysis of AI with real measurements from space and across Earth to generate frequently updated predictions that are both prompt and precise for sites worldwide.

“With this AI, it is now possible to make rapid and accurate global predictions and inform decisions in the event of a solar storm, thereby minimizing – or even preventing – devastation to modern society,” said Vishal Upendran of the Inter-University Center for Astronomy and Astrophysics in India, who is the lead author of a paper about the DAGGER model published in the journal Space Weather.

The supercomputer code in the DAGGER model is open source, and according to Upendran, it could be adopted, with help, by power grid operators, satellite controllers, telecommunications companies, and others to apply the predictions for their specific needs. Such warnings could give them time to take action to protect their assets and infrastructure from an impending solar storm, such as temporarily taking sensitive systems offline or moving satellites to different orbits to minimize damage.

With models like DAGGER, there could one day be solar storm sirens that sound an alarm in power stations and satellite control centers around the world, just as tornado sirens wail in advance of threatening terrestrial weather in towns and cities across America.

Researchers envision a future where artificial intelligence is used to craft better survey questions, refining them to be more accessible and representative, and even to simulate populations that are difficult to reach. CREDIT: Illustration by Nate Edwards

Can AI predict how you'll vote in the next election?

A BYU study shows that artificial intelligence can respond to complex survey questions like a real human.

Artificial intelligence technologies like ChatGPT are seemingly doing everything these days: writing code, composing music, and even creating images so realistic you'll think they were taken by professional photographers. Add thinking and responding like a human to the conga line of capabilities. A recent study from BYU shows that artificial intelligence can respond to complex survey questions much like a real human.

To determine the possibility of using artificial intelligence as a substitute for human respondents in survey-style research, a team of political science and computer science professors and graduate students at BYU tested the accuracy of a GPT-3 language model conditioned to mimic the complicated relationships between human ideas, attitudes, and the sociocultural contexts of subpopulations.

In one experiment, the researchers created artificial personas by assigning the AI certain characteristics like race, age, ideology, and religiosity, and then tested whether the artificial personas would vote the same way humans did in the 2012, 2016, and 2020 U.S. presidential elections. Using the American National Election Studies (ANES) as their comparative human database, they found a high correspondence between how the AI and humans voted.
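
Conceptually, conditioning a language model on a persona means prefixing the survey question with a first-person backstory built from demographic attributes. The sketch below shows one hypothetical way such a prompt could be assembled; the exact wording, attribute set, and prompt format used in the BYU study are not reproduced here.

```python
# Hypothetical persona-prompt builder for a GPT-3-style model.
# The attribute names and phrasing are illustrative assumptions.

def build_persona_prompt(persona: dict, question: str) -> str:
    """Render persona attributes as a first-person backstory, then ask the question."""
    lines = [f"My {attr} is {value}." for attr, value in persona.items()]
    lines.append(question)
    return "\n".join(lines)

persona = {
    "race": "white",
    "age": "54",
    "ideology": "conservative",
    "religious attendance": "weekly",
}
prompt = build_persona_prompt(
    persona, "In the 2016 presidential election, I voted for"
)
print(prompt)
```

The completion the model produces for the trailing, open-ended sentence can then be compared against how real ANES respondents with matching attributes actually voted.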

"I was absolutely surprised to see how accurately it matched up," said David Wingate, BYU computer science professor, and co-author of the study. "It's especially interesting because the model wasn't trained to do political science – it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted."

In another experiment, they conditioned artificial personas to offer responses from a list of options in an interview-style survey, again using the ANES as their human sample. They found a high similarity between nuanced patterns in human and AI responses.

This innovation holds exciting prospects for researchers, marketers, and pollsters. Researchers envision a future where artificial intelligence is used to craft better survey questions, refining them to be more accessible and representative, and even to simulate populations that are difficult to reach. The technology could also be used to test surveys, slogans, and taglines as a precursor to focus groups.

"We're learning that AI can help us understand people better," said BYU political science professor Ethan Busby. "It's not replacing humans, but it is helping us more effectively study people. It's about augmenting our ability rather than replacing it. It can help us be more efficient in our work with people by allowing us to pre-test our surveys and our messaging."

And while the expansive possibilities of large language models are intriguing, the rise of artificial intelligence poses a host of questions – how much does AI really know? Which populations will benefit from this technology and which will be negatively impacted? And how can we protect ourselves from scammers and fraudsters who will manipulate AI to create more sophisticated phishing scams?

While much of that is still to be determined, the study lays out a set of criteria that future researchers can use to determine how accurate an AI model is for different subject areas.

"We're going to see positive benefits because it's going to unlock new capabilities," said Wingate, noting that AI can help people in many different jobs be more efficient. "We're also going to see negative things happen because sometimes computer models are inaccurate and sometimes they're biased. It will continue to churn society."

Busby says surveying artificial personas shouldn't replace the need to survey real people and that academics and other experts need to come together to define the ethical boundaries of artificial intelligence surveying in research related to social science.

A new study, led by David Ouyang, MD, a cardiologist in the Smidt Heart Institute and a researcher in the Division of Artificial Intelligence in Medicine, found individuals with spherical hearts were 31% more likely to develop atrial fibrillation and 24% more likely to develop cardiomyopathy, a type of heart muscle disease.

The shape of your heart matters

New research from the Smidt Heart Institute finds that your heart’s shape and specific genetic markers may predict the risk for developing atrial fibrillation and heart muscle disease

Curious to know if you’re at risk for two common heart conditions? Your doctor may want to check the shape of your heart. 

Investigators from the Smidt Heart Institute at Cedars-Sinai have discovered that patients who have round hearts shaped like baseballs are more likely to develop future heart failure and atrial fibrillation than patients who have longer hearts shaped like the traditional Valentine heart. 

The study, published in Med, Cell Press’s new peer-reviewed medical journal, used deep learning and advanced imaging analysis to study the genetics of heart structure. The results were telling.

“We found that individuals with spherical hearts were 31% more likely to develop atrial fibrillation and 24% more likely to develop cardiomyopathy, a type of heart muscle disease,” said David Ouyang, MD, a cardiologist in the Smidt Heart Institute and a researcher in the Division of Artificial Intelligence in Medicine.

The risk was identified after investigators analyzed cardiac MRI images from 38,897 healthy individuals from the UK Biobank. Using this same database, researchers then used computational models to identify genetic markers of the heart that are associated with these cardiac conditions.
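
One simple way to quantify how “round” a heart chamber looks in imaging is a sphericity index, for example the ratio of the measured chamber volume to the volume of a sphere whose diameter equals the chamber’s long axis. The study derived its measures from deep-learning analysis of MRI; the convention and numbers below are only an illustrative sketch and may differ from the paper’s exact definition.

```python
import math

def sphericity_index(volume_ml: float, long_axis_cm: float) -> float:
    """Ratio of measured chamber volume to the volume of a sphere
    whose diameter equals the chamber's long axis (1 cm^3 == 1 ml)."""
    sphere_volume = (math.pi / 6.0) * long_axis_cm ** 3
    return volume_ml / sphere_volume

# A perfectly spherical chamber scores about 1.0; an elongated,
# Valentine-shaped chamber of the same volume scores lower.
round_heart = sphericity_index(volume_ml=120.0, long_axis_cm=6.1)
long_heart = sphericity_index(volume_ml=120.0, long_axis_cm=8.5)
print(f"round: {round_heart:.2f}, elongated: {long_heart:.2f}")
```

In a biobank-scale analysis, an index like this would be computed automatically for tens of thousands of scans and then correlated with outcomes and genetic variants.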

“By looking at the genetics of sphericity, we found four genes associated with cardiomyopathy: PLN, ANGPT1, PDZRN3, and HLA DR/DQ,” said Ouyang. “The first three of these genes were also associated with a greater risk of developing atrial fibrillation.”

Atrial fibrillation, the most common type of abnormal heart rhythm disorder, greatly increases a person’s risk of having a stroke. The condition is rising in prevalence and is projected to affect 12.1 million people in the U.S. by 2030. 

Cardiomyopathy is a type of heart muscle disease that makes it harder for the heart to pump blood to the rest of the body and can eventually lead to heart failure. The main types of cardiomyopathies—dilated, hypertrophic, arrhythmogenic, and restrictive—affect as many as 1 of every 500 adults. 

Cedars-Sinai cardiologists say the shape of one’s heart changes over the years, typically becoming rounder over time and especially after a major cardiac event like a heart attack.

“A change in the heart’s shape may be a first sign of disease,” said Christine M. Albert, MD, MPH, chair of the Department of Cardiology at the Smidt Heart Institute and a study author. “Understanding how a heart changes when faced with illness—coupled with now having more reliable and intuitive imaging to support this knowledge—is a critical step in prevention for two life-altering diseases.” 

Ouyang says the findings provide more clarity on the potential use of cardiac imaging to diagnose more effectively—and prevent—many conditions. He also emphasized the need for additional studies. 

“Large biobanks with cardiac imaging data now offer an opportunity to analyze and define variation in cardiac structure and function that was not possible using traditional approaches,” said Ouyang. “Deep learning and computer vision also allow for faster as well as more comprehensive cardiac measures that may help to identify genetic variations affecting a heart—up to years or even decades before any obvious heart disease develops.”  

Experimental setup depicting the ferrimagnetic insulator yttrium iron garnet (YIG) wafer with nanomagnetic strips © LMGN EPFL

Swiss magnonics researchers demo spin waves for a new supercomputing architecture

EPFL researchers have sent and stored data using charge-free magnetic waves, rather than traditional electron flows. The discovery could solve the dilemma of energy-hungry computing technology in the age of big data.

Like electronics or photonics, magnonics is an engineering subfield that aims to advance information technologies when it comes to speed, device architecture, and energy consumption. A magnon corresponds to the specific amount of energy required to change the magnetization of a material via a collective excitation called a spin wave. 

Because they interact with magnetic fields, magnons can be used to encode and transport data without electron flows, which involve energy loss through heating (known as Joule heating) of the conductor used. As Dirk Grundler, head of the Lab of Nanoscale Magnetic Materials and Magnonics (LMGN) in the School of Engineering explains, energy losses are an increasingly serious barrier to electronics as data speeds and storage demands soar.

“With the advent of AI, the use of computing technology has increased so much that energy consumption threatens its development,” Grundler says. “A major issue is traditional computing architecture, which separates processors and memory. The signal conversions involved in moving data between different components slow down computation and waste energy.”

This inefficiency, known as the memory wall or Von Neumann bottleneck, has had researchers searching for new supercomputing architectures that can better support the demands of big data. And now, Grundler believes his lab might have stumbled on such a “holy grail.”

While doing other experiments on a commercial wafer of the ferrimagnetic insulator yttrium iron garnet (YIG) with nanomagnetic strips on its surface, LMGN Ph.D. student Korbinian Baumgaertl was inspired to develop precisely engineered YIG-nanomagnet devices. With the Center of MicroNanoTechnology‘s support, Baumgaertl was able to excite spin waves in the YIG at specific gigahertz frequencies using radiofrequency signals, and – crucially – to reverse the magnetization of the surface nanomagnets.

“The two possible orientations of these nanomagnets represent magnetic states 0 and 1, which allows digital information to be encoded and stored,” Grundler explains.

A route to in-memory computation

The scientists made their discovery using a conventional vector network analyzer, which sent a spin wave through the YIG-nanomagnet device. Nanomagnet reversal happened only when the spin wave hit a certain amplitude, and could then be used to write and read data.
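
The writing mechanism described above is essentially a threshold operation: a spin wave below the critical amplitude passes through (and can be used to read) without disturbing the nanomagnets, while a wave above it flips a magnet into state 1. A toy model of that behavior, with arbitrary illustrative numbers rather than measured values:

```python
# Toy model of amplitude-threshold writing in a spin-wave device.
# The threshold and pulse amplitudes are illustrative, not measured.

WRITE_THRESHOLD = 0.7  # hypothetical critical spin-wave amplitude

def apply_spin_wave(state: int, amplitude: float) -> int:
    """Return the nanomagnet state after a spin-wave pulse:
    sub-threshold pulses leave it unchanged (a nondestructive read),
    supra-threshold pulses write it to state 1."""
    return 1 if amplitude >= WRITE_THRESHOLD else state

states = [0, 0, 0]
pulses = [0.3, 0.9, 0.5]  # below, above, below the threshold
states = [apply_spin_wave(s, a) for s, a in zip(states, pulses)]
print(states)  # only the magnet hit by the strong pulse flips
```

The same device thus supports both operations: weak waves probe the stored bits, strong waves overwrite them, which is what enables processing and nonvolatile storage in one system.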

“We can now show that the same waves we use for data processing can be used to switch the magnetic nanostructures so that we also have nonvolatile magnetic storage within the very same system,” Grundler explains, adding that “nonvolatile” refers to the stable storage of data over long periods without additional energy consumption.

It’s this ability to process and store data in the same place that gives the technique its potential to change the current computing architecture paradigm by putting an end to the energy-inefficient separation of processors and memory storage, and achieving what is known as in-memory computation.

Optimization on the horizon

Baumgaertl, Grundler, and the LMGN team are already working on optimizing their approach.

“Now that we have shown that spin waves write data by switching the nanomagnets from states 0 to 1, we need to work on a process to switch them back again – this is known as toggle switching,” Grundler says.

He also notes that theoretically, the magnonics approach could process data in the terahertz range of the electromagnetic spectrum (for comparison, current computers function in the slower gigahertz range). However, they still need to demonstrate this experimentally.

“The promise of this technology for more sustainable computing is huge. With this publication, we are hoping to reinforce interest in wave-based computation, and attract more young researchers to the growing field of magnonics.”