UArizona geosciences prof Tierney's maps shed light on our climate future

About 56 million years ago, volcanoes quickly dumped massive amounts of carbon dioxide into the atmosphere, heating the Earth rapidly.

This time period – called the Paleocene-Eocene Thermal Maximum, or PETM – is often used as a historic parallel for our own future under climate change, since humans have also rapidly poured carbon dioxide into the atmosphere over the last 250 years.

A new study led by University of Arizona researchers includes temperature and rainfall maps of Earth during the PETM, helping to clarify what conditions were like during that period and how sensitive the climate was to soaring levels of carbon dioxide.

The team, led by UArizona geosciences professor Jessica Tierney, combined previously published temperature data and climate models to confirm that the PETM is, in fact, a good indicator of what might happen to the climate under future carbon dioxide level projections.

"The PETM is not a perfect analog for our future, but we were somewhat surprised to find that yes, the climate changes we reconstructed share a lot of similarities with future predictions as outlined in the latest IPCC (Intergovernmental Panel on Climate Change) AR6 report," Tierney said.

Both the PETM and our projected future are characterized by faster warming at the poles than across the rest of the globe – a phenomenon called polar amplification – as well as stronger monsoons, more intense winter storms, and less rainfall at the edges of the tropics. The researchers also found that as more carbon dioxide is pumped into the air, the climate becomes more sensitive to it than previous studies predicted.

"Overall, our work helps us to understand our future under climate change better," Tierney said. "It gives some confirmation that the basics of climate change – such as polar amplification, more intense monsoons, and winter storms – are features of high greenhouse gas climates both past and future." 

Tierney and her team built their maps of the PETM by combining what's called proxy temperature data with climate models. Paleoclimatologists like Tierney can deduce temperatures from the past by chemically analyzing certain types of fossils from a given time period. That proxy temperature data, combined with modern climate modeling technology, allowed Tierney and her collaborators to create global temperature maps of the PETM.

The climate models used by the researchers to create the maps of the past are typically used to make future climate predictions – including those in the IPCC assessment reports. Tierney and her team instead used them to generate simulations of what Earth looked like 56 million years ago.

"We moved the continents around to match the PETM and then we ran some simulations at a bunch of different levels of carbon dioxide, anywhere from three to 11 times today's levels – or from 850 parts per million to a really high value of 3,000 parts per million – because those are all possible levels of carbon dioxide that could have occurred in the PETM," Tierney said. "For context, carbon dioxide in our atmosphere today is about 420 parts per million and it was about 280 parts per million before the Industrial Revolution. By adding in the geological evidence, we narrowed down simulations to the ones that best matched that evidence."

Tierney and her team have used this method in past studies to reconstruct the climate in more recent time periods.

The new study also more precisely estimates how much the globe warmed during the PETM. Previous studies suggested the PETM was 4 to 5 degrees Celsius warmer than the time period right before it. Tierney's research, however, puts that number at 5.6 degrees Celsius, suggesting the climate is more sensitive to increases in carbon dioxide than previously thought.

Climate sensitivity is how much the planet warms per doubling of carbon dioxide.

"Nailing this number down really matters, because if climate sensitivity is high, then we'll see more warming by the end of the century than if it's lower," Tierney said. "The IPCC AR6 predictions span 2 to 5 degrees Celsius per doubling of carbon dioxide. In this study, we quantify that sensitivity during the PETM and found that the sensitivity is between 5.7 to 7.4 degrees Celsius per doubling, which is much higher."

Ultimately, this means that under higher levels of carbon dioxide than we have today, the planet will get more sensitive to carbon dioxide, which, according to Tierney, "is something that's important for thinking about longer-term climate change, beyond the end of the century."

Harvard Medical School research indicates that, compared with human clinicians, image heat maps underperform and require further refinement

Artificial intelligence models that interpret medical images hold the promise to enhance clinicians’ ability to make accurate and timely diagnoses, while also lessening workload by allowing busy physicians to focus on critical cases and delegate rote tasks to AI.

But AI models that lack transparency about how and why a diagnosis is made can be problematic. This opaque reasoning — also known as “black box” AI — can diminish clinician trust in the reliability of the AI tool and thus discourage its use. This lack of transparency could also mislead clinicians into overtrusting the tool’s interpretation.

In the realm of medical imaging, one way to create more understandable AI models and to demystify AI decision-making has been saliency assessments — an approach that uses heat maps to pinpoint whether the tool is correctly focusing only on the relevant pieces of a given image or homing in on irrelevant parts of it.

Heat maps work by highlighting areas on an image that influenced the AI model’s interpretation. This could help human physicians see whether the AI model focuses on the same areas as they do or is mistakenly focusing on irrelevant spots on an image.
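For readers curious how such a heat map is produced, the sketch below computes one of the simplest kinds of saliency map – the gradient of a model's output score with respect to each input pixel – using a tiny stand-in network rather than a real chest X-ray classifier. The study itself evaluated seven more elaborate saliency methods, so treat this only as a flavor of the general idea.

```python
# Minimal gradient-based saliency map in PyTorch. The toy CNN and random image
# are hypothetical stand-ins for a real X-ray classifier and a real radiograph.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                  # two outputs, e.g. "effusion" vs. "no effusion"
)
model.eval()

image = torch.rand(1, 1, 224, 224, requires_grad=True)   # stand-in chest X-ray

score = model(image)[0, 0]            # the model's score for the class of interest
score.backward()                      # gradient of that score with respect to every pixel

# Pixels whose small changes most affect the score get the largest values;
# rendered as a heat map over the image, these are the regions the model "used".
saliency = image.grad.abs().squeeze()
print(saliency.shape)                 # torch.Size([224, 224])
```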

But a new study shows that for all their promise, saliency heat maps may not yet be ready for prime time.

The analysis, led by Harvard Medical School investigator Pranav Rajpurkar, Matthew Lungren of Stanford, and Adriel Saporta of New York University, quantified the validity of seven widely used saliency methods to determine how reliably and accurately they could identify pathologies associated with 10 conditions commonly diagnosed on X-rays, such as lung lesions, pleural effusion, edema, or enlarged heart structures. To ascertain performance, the researchers compared the tools against human expert judgment.
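One common way to quantify this kind of agreement is to threshold the heat map into a region and compare it with the region an expert marked, for example using intersection-over-union. The sketch below illustrates that idea with random stand-in data; it is not necessarily the exact metric or protocol the study used.

```python
# Illustrative localization score: overlap between a thresholded saliency map
# and a hypothetical expert annotation, measured as intersection-over-union (IoU).
import numpy as np

height, width = 224, 224
saliency = np.random.rand(height, width)             # stand-in heat map from an AI model

expert_mask = np.zeros((height, width), dtype=bool)  # hypothetical radiologist annotation
expert_mask[60:120, 80:160] = True                    # region the expert marked as pathological

# Keep only the most salient 5 percent of pixels, then compare the two regions.
model_mask = saliency >= np.quantile(saliency, 0.95)
intersection = np.logical_and(model_mask, expert_mask).sum()
union = np.logical_or(model_mask, expert_mask).sum()

print(f"IoU = {intersection / union:.3f}")            # 1.0 would mean perfect agreement
```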

In the final analysis, tools using saliency-based heat maps consistently underperformed in image assessment and in their ability to spot pathological lesions, compared with human radiologists.

The work represents the first comparative analysis between saliency maps and human expert performance in the evaluation of multiple X-ray pathologies. The study also offers a granular understanding of whether and how certain pathological characteristics on an image might affect AI tool performance.

The saliency-map feature is already used as a quality assurance tool by clinical practices that employ AI for computer-aided detection, such as reading chest X-rays. But in light of the new study findings, this feature should be applied with caution and a healthy dose of skepticism, the researchers said.

“Our analysis shows that saliency maps are not yet reliable enough to validate individual clinical decisions made by an AI model,” said Rajpurkar, who is an assistant professor of biomedical informatics at HMS. “We identified important limitations that raise serious safety concerns for use in current practice.”

The researchers caution that because of the important limitations identified in the study, saliency-based heat maps should be further refined before they are widely adopted in clinical AI models.

The team’s full codebase, data, and analysis are open and available to all interested in studying this important aspect of clinical machine learning in medical imaging applications.

Co-authors included Xiaotong Gui, Ashwin Agrawal, Anuj Pareek, Jayne Seekins, Francis Blankenberg, and Andrew Ng, all from Stanford University; Steven Truong and Chanh Nguyen, of VinBrain, Vietnam; and Van-Doan Ngo, of Vinmec International Hospital, Vietnam.

Scottish researchers propose a roadmap to understand whether AI models and the human brain process things the same way

Deep Neural Networks, part of the broader family of machine learning methods, have become increasingly influential in everyday real-world applications such as automated face recognition systems and self-driving cars.

Researchers use Deep Neural Networks, or DNNs, to model the processing of information, and to investigate how this information processing matches that of humans.

While DNNs have become an increasingly popular tool for modeling the computations the brain performs – particularly for visually recognizing real-world “things” – the ways in which DNNs carry out this recognition can be very different from the brain’s.

New research, published in the journal Trends in Cognitive Sciences and led by the University of Glasgow’s School of Psychology and Neuroscience, presents a new approach to understanding whether the human brain and its DNN models recognize things in the same way, using similar steps of computations.

Currently, Deep Neural Network technology is used in applications such as face recognition, and while it is successful in these areas, scientists still do not fully understand how these networks process information.

This opinion article outlines a new approach to deepen this understanding of how the process works: first, researchers must show that both the brain and the DNN recognize the same things – such as a face – using the same face features; and, second, that the brain and the DNN process these features in the same way, with the same steps of computation.
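A widely used starting point for the first of those two steps is representational similarity analysis, which asks whether pairs of stimuli the brain treats as similar are also treated as similar by a DNN layer. The sketch below illustrates that idea with random stand-in data; it is an assumption-laden toy, not the specific framework the authors propose.

```python
# Toy representational similarity analysis (RSA): do brain and DNN responses
# order pairs of stimuli the same way? All data here are random stand-ins.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 20                                        # e.g. 20 face images shown to both systems

brain_responses = rng.normal(size=(n_stimuli, 100))   # hypothetical 100-channel brain recordings
dnn_activations = rng.normal(size=(n_stimuli, 512))   # hypothetical activations from one DNN layer

# Representational dissimilarity matrices: how differently each pair of stimuli
# is represented within each system (condensed to a vector of pairwise distances).
brain_rdm = pdist(brain_responses, metric="correlation")
dnn_rdm = pdist(dnn_activations, metric="correlation")

# If the two systems rely on similar features, stimuli that look alike to the
# brain should also look alike to the network, so the two RDMs should correlate.
rho, _ = spearmanr(brain_rdm, dnn_rdm)
print(f"brain-DNN representational similarity: rho = {rho:.2f}")
```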

Because a key challenge in developing accurate AI is understanding whether machine learning processes information in the way humans do, the researchers hope this new work is another step toward more accurate and reliable AI technology that processes information more as our brains do.

Prof Philippe Schyns, Dean of Research Technology at the University of Glasgow, said: “Having a better understanding of whether the human brain and its DNN models recognize things the same way would allow for more accurate real-world applications using DNNs.

“If we have a greater understanding of the mechanisms of recognition in human brains, we can then transfer that knowledge to DNNs, which in turn will help improve the way DNNs are used in applications such as facial recognition, where they are currently not always accurate.

“Creating human-like AI is about more than mimicking human behavior – technology must also be able to process information, or ‘think’, like or better than humans if it is to be fully relied upon. We want to make sure AI models use the same process to recognize things as a human would, so we don’t just have the illusion that the system is working.”

The study, ‘Degrees of Algorithmic Equivalence between the Brain and its DNN Models,’ is published in Trends in Cognitive Sciences. The work is funded by Wellcome and the Engineering and Physical Sciences Research Council.