Washington University in St. Louis prof finds pandemic air quality due to weather, not just lockdowns

Research shows meteorology plays an outsized role over the short-term

Headlines proclaiming that Covid lockdowns drastically reduced pollution were mostly referring to nitrogen dioxide (NO2), a reactive gas emitted by burning fuel. Less was understood about how lockdowns affected PM2.5, tiny particulate matter that can penetrate a person's lungs, leading to a host of health problems, including increased risk of heart attack and cancer.

"PM 2.5 is the leading global environmental determinant of longevity. It is a key pollutant of concern for health," said Randall Martin, the Raymond R. Tucker Distinguished Professor in the Department of Energy, Environmental & Chemical Engineering in the McKelvey School of Engineering at Washington University in St. Louis.

New research from Martin's lab, in collaboration with the Goddard Space Flight Center, California Institute of Technology, and Dalhousie University in Nova Scotia, mapped PM 2.5 levels across China, Europe, and North America. Using satellite data, ground-based monitoring, and an innovative supercomputer modeling system, researchers found mostly slight changes in PM 2.5 -- with one exception.

The majority of changes they found were not driven by lockdown, but by the natural variability of meteorology. Their results were published on June 23 in the journal Science Advances.

The meteorological effects that we experience every day also affect PM2.5 variability, Martin said.

"The shorter time period, the more susceptible PM 2.5 is to meteorology," he said.

During the pandemic, among the images of overflowing ICUs and empty grocery store shelves, there were some photographic bright spots: before and after pictures accompanied articles proclaiming air quality improved because people were staying home.

The visuals were striking -- both on the ground, where blue skies shone over LA highways, and from space, where data from NASA satellites showed a clear reduction in atmospheric nitrogen dioxide.

"People automatically started wondering, 'What's the picture for PM 2.5?'" said Melanie Hammer, a visiting research associate in Martin's lab. That was the obvious question not just because particulate matter often comes from the same sources as NO2, but because NO2 can form PM 2.5.

"NO2 is considered a secondary source of PM 2.5," Hammer said. When it's emitted, NO2 interacts with other chemicals in the atmosphere and can form PM2.5. A few early studies looked at data gathered from ground monitoring sites, which test the surrounding air, but those sites are few and far between and cannot piece together a bigger picture on their own.

Only a fraction of the world's population lives in countries that have more than three PM2.5 monitors per million people. Most of the population lives in areas with no monitoring at all.

"We decided to look again, using a more complete picture from satellite images," Hammer said. The images, provided by NASA, contain data for columns of atmosphere spanning the ground to the edge of space. These data, referred to as aerosol optical depth, are related to surface PM2.5 concentrations using the chemical transport model GEOS-Chem, which simulates the composition of the atmosphere; the reactions and relationships among its different constituents; and the way they move through the air.
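The satellite-to-surface step can be illustrated with a minimal sketch: a chemical transport model such as GEOS-Chem supplies, for each grid cell, a simulated ratio of surface PM2.5 to total-column aerosol optical depth, and that ratio scales the satellite measurement down to a surface estimate. The function name and all numbers below are invented for illustration.

```python
# Illustrative sketch of the satellite-to-surface conversion described above.
# GEOS-Chem supplies, per grid cell, a simulated ratio of surface PM2.5 to
# column aerosol optical depth (AOD); that ratio scales the satellite AOD
# down to a surface estimate. All values here are made up.

def estimate_surface_pm25(aod, eta):
    """Estimate surface PM2.5 (ug/m^3) from satellite AOD (unitless),
    given a model-derived ratio eta = simulated PM2.5 / simulated AOD."""
    return eta * aod

# Hypothetical grid cells: (satellite AOD, model ratio eta in ug/m^3 per AOD unit)
cells = [(0.15, 60.0), (0.45, 80.0), (0.90, 70.0)]
for aod, eta in cells:
    print(f"AOD={aod:.2f} -> PM2.5 ~ {estimate_surface_pm25(aod, eta):.1f} ug/m^3")
```

Because eta varies with aerosol type, humidity, and the vertical distribution of particles, the model-derived ratio is computed cell by cell rather than applied globally.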

Researchers focused on three regions that do have extensive ground monitoring systems in place -- North America, Europe, and China -- and compared monthly estimates of PM2.5 from January to April in 2018, 2019, and 2020.

When they compared PM 2.5 levels over the three years during the months that coincided with each region's lockdown phases, there weren't many clear signals over North America or Europe.

"We found the most clearly detectable signal was a significant reduction over the North China Plain, where the most strict lockdowns were concentrated," she said.

To figure out whether lockdown was responsible for that signal, and several smaller ones dotted around the areas surveyed, the team ran "sensitivity simulations" using GEOS-Chem, changing parameters to see which scenario most closely matched reality.

They simulated a scenario where emissions were held constant and meteorology was solely responsible for year-over-year changes in PM 2.5.

"We found that that explained a large part of the differences we were seeing," Hammer said. They also ran a simulation in which they reduced transportation-related emissions and other man-made sources of NO2, mirroring lockdown when fewer people were driving and fewer industrial sites were operational.

"On its own, that actually didn't really explain much at all," Hammer said. But combining the two, "That's when the signal over the North China Plain stood out."
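The attribution logic behind these sensitivity simulations can be sketched as a simple difference decomposition. All values below are invented; the actual study compared full GEOS-Chem runs with each factor varied in turn.

```python
# Toy decomposition mirroring the sensitivity simulations described above.
# Each "simulation" value stands in for a full GEOS-Chem run; numbers invented.

pm25_2019 = 52.0       # hypothetical regional mean PM2.5, ug/m^3
pm25_2020 = 39.0       # hypothetical 2020 value (new meteorology + lockdown emissions)

pm25_met_only = 44.0   # simulation: 2020 meteorology, emissions held at 2019 levels
pm25_emis_only = 49.0  # simulation: 2019 meteorology, lockdown-reduced emissions

total_change = pm25_2020 - pm25_2019                       # -13.0
met_contrib = pm25_met_only - pm25_2019                    # -8.0: meteorology alone
emis_contrib = pm25_emis_only - pm25_2019                  # -3.0: emissions alone
interaction = total_change - (met_contrib + emis_contrib)  # what neither explains alone

print(f"meteorology: {met_contrib:+.1f}, emissions: {emis_contrib:+.1f}, "
      f"interaction: {interaction:+.1f} ug/m^3")
```

In this toy example meteorology accounts for most of the drop, emissions alone for little, and only their combination reproduces the full change -- the pattern the team reported for the North China Plain.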

Hammer suspects that the change in PM 2.5 levels over the North China Plain was so striking because of how polluted it tends to be in "normal" times. "You're probably more likely to see a larger reduction in a region that has higher concentrations to begin with."

In a way, that highlights a relevant point that may not at first be intuitive: Average PM 2.5 levels have been dropping steadily in North America and Europe. "It's just harder to perturb really low concentrations," Hammer said.

But it also underscores the complex relationship between NO2 and PM 2.5. Although NO2 does interact with other atmospheric chemicals to form PM 2.5, the two do not have a linear relationship; twice as much NO2 in the atmosphere does not necessarily lead to twice as much PM 2.5.

Hammer said, intuitively, she did expect to see more of a reduction in PM 2.5 levels. "It was kind of a surprise that meteorology played such a dominant role.

"Turns out, it's a pretty complex relationship and it doesn't always behave how you would expect."

Memorial Sloan Kettering computational biologist develops machine learning tool to show how cancers evolve in real time

From amoebas to zebras, all living things evolve. They change over time as pressures from the environment cause individuals with certain traits to become more common in a population while those with other traits become less common.

Cancer is no different. Within a growing tumor, cancer cells with the best ability to compete for resources and withstand environmental stressors will come to dominate in frequency. It's "survival of the fittest" on a microscopic scale.

But fitness -- how well suited any particular individual is to its environment -- isn't set in stone; it can change when the environment changes. The cancer cells that might do best in an environment saturated with chemotherapy drugs are likely to be different than the ones that will thrive in an environment without those drugs. So, predicting how tumors will evolve over time, especially in response to treatment, is a major challenge for scientists.

A new study by researchers at Memorial Sloan Kettering in collaboration with researchers at the University of British Columbia/BC Cancer in Canada suggests that one day it may be possible to make those predictions. The study was led by MSK computational biologist Sohrab Shah and BC Cancer breast cancer researcher Samuel Aparicio. The scientists showed that a machine-learning approach, built using principles of population genetics that describe how populations change over time, could accurately predict how human breast cancer tumors will evolve.

"Population genetic models of evolution match up nicely to cancer, but for a number of practical reasons it's been a challenge to apply these to the evolution of real human cancers," says Dr. Shah, Chief of Computational Oncology at MSK. "In this study, we show it's possible to overcome some of those barriers."

Ultimately, the approach could provide a means to predict whether a patient's tumor is likely to stop responding to a particular treatment and identify the cells that are likely to be responsible for relapse. This could mean highly tailored treatments, delivered at the optimal time, to produce better outcomes for people with cancer.

A Trifecta of Innovations

Three separate innovations came together to make these findings possible. The first was using realistic cancer models called patient-derived xenografts, which are human cancers that have been removed from patients and transplanted into mice. The scientists analyzed these tumor models repeatedly over extended timeframes of up to three years, exploring the effects of platinum-based chemotherapy treatment and treatment withdrawal.

"Historically, the field has focused on the evolutionary history of cancer from a single snapshot," Dr. Shah says. "That approach is inherently error-prone. By taking many snapshots over time, we can obtain a much clearer picture."

The second key innovation was applying single-cell sequencing technology to document the genetic makeup of thousands of individual cancer cells in the tumor at the same time. A previously developed platform allowed the team to perform these operations in an efficient and automated fashion.

The final component was a machine-learning tool, dubbed fitClone, developed in collaboration with UBC statistics professor Alexandre Bouchard-Côté, which applies the mathematics of population genetics to cancer cells in the tumor. These equations describe how a population will evolve given certain starting frequencies of individuals with different fitness within that population.

With these innovations in place, the scientists were able to create a model of how individual cells and their offspring, or clones, will behave. When the team conducted experiments to measure evolution, they found close agreement between these data and their model.

"The beauty of this model is it can be run forwards to predict which clones are likely to expand and which clones are likely to get outcompeted," Dr. Shah says.

In other words, how cancer will evolve is predictable.

A Foundation for the Future

The particular types of genetic changes the team looked at are called copy number changes. These are differences in the number of particular DNA segments in cancer cells. Up until now, the significance of these sorts of changes hasn't been clear, and researchers have had doubts about their importance in cancer progression.

"Our results show that copy number changes have a measurable impact on fitness," Dr. Shah says.

For example, the scientists found that, in their mouse models, treatment of tumors with platinum chemotherapy led to the eventual emergence of drug-resistant tumor cells -- similar to what happens in patients undergoing treatment. These drug-resistant cells had distinct copy number variants.

The team wondered: What would happen to the tumor if they stopped treatment? Turns out the cells that took over the tumor in the presence of chemotherapy declined or disappeared when the chemotherapy was taken away; the drug-resistant cells were outmatched by the original drug-sensitive cells. This behavior indicates that drug resistance has an evolutionary cost. In other words, the traits that are good for resisting drugs aren't necessarily the best for thriving in an environment without those drugs.
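The dynamics described here -- a resistant clone expanding under treatment and receding after withdrawal -- can be sketched with a toy Wright-Fisher simulation with environment-dependent selection. This is illustrative only: fitClone itself fits a diffusion model of these dynamics to single-cell time-series data, and all fitness values and population sizes below are invented.

```python
import random

def wright_fisher_step(freq, fitness_res, n=1000, rng=random):
    """One Wright-Fisher generation for a two-clone population.
    freq: frequency of the resistant clone; fitness_res: its fitness
    relative to the sensitive clone (whose fitness is 1.0)."""
    mean_fitness = freq * fitness_res + (1 - freq) * 1.0
    p = freq * fitness_res / mean_fitness          # deterministic selection
    k = sum(rng.random() < p for _ in range(n))    # binomial sampling = drift
    return k / n

rng = random.Random(0)
freq = 0.05                           # resistant clone starts rare
for _ in range(30):                   # drug on: resistance is favored
    freq = wright_fisher_step(freq, fitness_res=1.2, rng=rng)
on_drug = freq
for _ in range(30):                   # drug off: resistance carries a cost
    freq = wright_fisher_step(freq, fitness_res=0.8, rng=rng)
off_drug = freq
print(f"resistant clone frequency on drug: {on_drug:.2f}, "
      f"after withdrawal: {off_drug:.2f}")
```

Flipping the fitness coefficient when the environment changes is enough to make the resistant clone surge and then collapse, which is the qualitative behavior the team observed when chemotherapy was withdrawn.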

Ultimately, Dr. Shah says, the goal is to one day be able to use this approach on blood samples to identify the particular clones in a person's tumor, predict how they are likely to evolve, and tailor medicines accordingly.

"This study is an important conceptual advance," Dr. Shah says. "It demonstrates that the fitness trajectories of cancer cells are predictable and reproducible."

NIBIB-funded modelers use supercomputer simulations for designing precise genetic programs

A change of instructions in a computer program directs the computer to execute a different command. Similarly, synthetic biologists are learning the rules for how to direct the activities of human cells. 

“Cells are intricate machines that have evolved many interacting circuits—sets of genes that coordinate functions like migration, metabolism, and cell division,” explains David Rampulla, Ph.D., director of the Division of Discovery Science and Technology at the National Institute of Biomedical Imaging and Bioengineering. “Synthetic biologists aim to build genetic circuits that provide cells with new functions, which in the future could be used to monitor and treat diseases.” 

A challenge in the field, however, is that making a circuit that operates as intended often takes many iterations of trial and error. Now, NIBIB-funded synthetic biologists and computational modelers have teamed up to use supercomputer simulations to circumvent the laborious process of redesigning and retesting each genetic circuit.

One of the team members is engineer Josh Leonard, Ph.D., associate professor of chemical and biological engineering in the McCormick School of Engineering at Northwestern University. The computational modeler is Neda Bagheri, Ph.D., associate professor of biology and chemical engineering and a Washington Research Foundation Investigator at the University of Washington, Seattle. Together with their respective research teams, they are working to build genetic programs more quickly and reliably and to develop tools that help other researchers do the same.

“Engineering a cell comes down to building a piece of DNA, which encodes a set of genes whose products interact in a way that we call a circuit or program. When that DNA is introduced into a cell, the cell is instructed to perform the desired function. Normally, it is difficult to know whether a circuit will work until we test it,” says Leonard. “This exceptional collaboration aspires to build genetic programs that do what we want them to do the first time, every time because the computational models effectively take care of the trial and error for us in advance.” 

The team used customized simulations to analyze dozens of genetic circuits with different types of functions, such as turning genes on or off in response to various input signals. The most promising constructs were built and tested to see if they functioned as predicted. The research team was a bit stunned to find that nearly all of the circuits designed in this way closely matched model predictions.

“In my experience, almost nothing works like that in science; nothing works the first time,” said Leonard. He explained that the usual process is to test lots of options, study the results, and debug to eventually identify a design that works.

Accelerated testing in computational models allowed the team to build and test operational circuits that performed increasingly complex tasks -- for example, evaluating multiple environmental features, such as the overproduction of a harmful protein or metabolite, and determining whether to deliver a therapeutic payload.
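A circuit of the kind just described -- deliver a payload only when two disease markers are simultaneously elevated -- can be sketched as a two-input AND gate built from Hill activation functions, a standard way to model promoter responses. The marker names, thresholds, and parameters below are hypothetical, not taken from the study.

```python
def hill(x, k=1.0, n=2.0):
    """Hill activation: fraction of maximal promoter activity at input level x."""
    return x**n / (k**n + x**n)

def payload_output(marker_a, marker_b, threshold=0.25):
    """Toy two-input AND circuit: the payload gene fires only when both
    disease markers are elevated. Multiplying the two Hill terms gives
    AND-like logic; all names and numbers here are illustrative."""
    activity = hill(marker_a) * hill(marker_b)
    return activity, activity > threshold

for a, b in [(0.2, 0.2), (3.0, 0.2), (3.0, 3.0)]:
    act, fire = payload_output(a, b)
    print(f"markers=({a}, {b}) -> activity={act:.2f}, deliver payload: {fire}")
```

Simulating a design like this before building it lets a modeler tune the Hill parameters and threshold so the circuit stays off unless both inputs are high -- the kind of pre-screening the team used in place of bench trial and error.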

“We’re now at a point where we have both a reliable set of tools and a formal design process for constructing gene regulatory functions,” said Joseph Muldoon, a recent doctoral student and the first author of the study. “Next it will be exciting to see whether these capabilities help address the unmet needs in biomedicine that motivated this work.”

The work is a significant step toward the efficient design of a genetic “toolkit” that can be used by the broader biomedical engineering community to meet various application-specific needs. The team believes the new tools combined with computational modeling will enable bioengineers to build customized cellular functions for applications ranging from fundamental research to new medical treatments.