University of Montana's supercomputer models convey melting glaciers will produce new salmon habitat

For decades, climate change has had detrimental impacts on Pacific salmon populations. Spawning streams are overheating and droughts are drying up salmon habitats entirely, disrupting food webs from the Rocky Mountains and Coast Ranges to the Pacific Ocean.

But in a new study involving researchers from the University of Montana’s Flathead Lake Biological Station, scientists discovered warming trends may offer one silver lining, if only for a while: The retreat of glaciers in the Pacific mountains of western North America could potentially produce more than 6,000 kilometers of new Pacific salmon habitat by the year 2100.

“Climate change alters the shape and dynamics of stream ecosystems,” said Diane Whited, an FLBS scientist whose role in the study focused on spatial modeling of potentially accessible stream habitat once glaciers have receded. “This information is crucial for managing the future of salmon habitat and productivity.”

Researchers modeled glacial retreat under different climate change scenarios. To accomplish this, they used supercomputer models to peel back the ice of 46,000 glaciers between southern British Columbia and south-central Alaska to look at how much potential salmon habitat would be created when the underlying bedrock is exposed and new streams flow over the landscape.

According to the team, desirable stream habitat for salmon is connected to the ocean, maintains a low gradient of 10% or less, and has retreating glaciers at its headwaters. Of the glaciers examined, the researchers found that 315 could meet those requirements.
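As a rough illustration of how such screening criteria can be applied in code, here is a minimal sketch; the stream data model, field names, and example records below are hypothetical stand-ins, not the study's actual dataset or methods.

```python
# Hypothetical sketch of the habitat-screening logic described above; the
# stream records, field names, and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class StreamSegment:
    length_km: float
    gradient_pct: float        # channel slope, in percent
    connected_to_ocean: bool   # reachable by migrating salmon
    glacier_at_headwaters: bool

def is_potential_salmon_habitat(s: StreamSegment) -> bool:
    """Apply the three criteria named in the study: ocean connectivity,
    a gradient of 10% or less, and a retreating glacier at the headwaters."""
    return s.connected_to_ocean and s.gradient_pct <= 10.0 and s.glacier_at_headwaters

segments = [
    StreamSegment(4.2, 3.5, True, True),
    StreamSegment(8.0, 14.0, True, True),   # too steep
    StreamSegment(2.5, 6.0, False, True),   # not connected to the ocean
]
habitat_km = sum(s.length_km for s in segments if is_potential_salmon_habitat(s))
print(f"Potential new habitat: {habitat_km:.1f} km")
```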

Under a moderate climate scenario, the loss or reduction of those glaciers may reveal around 6,150 kilometers of potential new salmon habitat throughout the Pacific mountains of western North America by the year 2100 – a distance nearly equal to the length of the Mississippi River.

The researchers caution that while the newly created habitat may be a ray of light for salmon in some locations, overall climate change poses grave challenges for salmon populations. Additionally, if current warming trends continue, the newly emerging salmon habitats would eventually overheat and disappear, just as current salmon habitats are doing today.

“On one hand, this amount of new salmon habitat will provide local opportunities for some salmon populations,” said Simon Fraser University spatial analyst Kara Pitman, the lead author of the study. “On the other hand, climate change and other human impacts continue to threaten salmon survival via warming rivers, changes in stream flows, and poor ocean conditions.”

UK's low-cost AI soil sensors could help farmers curb fertilizer use

The technology could help growers work out the best time to use fertilizer on their crops and how much is needed, taking into account factors such as the weather and the condition of the soil. This would reduce the expensive and environmentally damaging effects of overfertilizing soil, which releases the greenhouse gas nitrous oxide and can pollute soil and waterways. 

Overfertilization has so far rendered 12 percent of once-arable land worldwide unusable, and the use of nitrogen-based fertilizer has risen by 600 percent in the last 50 years. However, it is difficult for crop growers to tailor their fertilizer use precisely: too much and they risk environmental damage and wasted money; too little and they risk poor crop yields. The researchers behind this new sensing technology say it could provide benefits for both the environment and growers.

The sensor, named chemically functionalized paper-based electrical gas sensor (chemPEGS), measures levels of ammonium in soil – the compound that soil bacteria convert into nitrites and nitrates. Using a type of artificial intelligence called machine learning, it combines this measurement with weather data, time since fertilization, pH, and soil conductivity. From these data it estimates how much total nitrogen the soil has now and how much it will have up to 12 days in the future, and from that predicts the optimum time for fertilization.
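As a rough sketch of that prediction step, the example below trains a regressor on sensor-plus-weather features to forecast soil nitrogen. The actual chemPEGS model, feature set, and training data are not described here, so the feature names, regressor choice, and synthetic data are illustrative assumptions only.

```python
# Illustrative sketch only: the real chemPEGS model, features, and training
# data are not specified here, so everything below is a hypothetical stand-in.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical features: ammonium reading, soil pH, conductivity,
# days since fertilization, mean temperature, rainfall.
X = rng.uniform(
    low=[0.0, 5.5, 0.1, 0.0, 5.0, 0.0],
    high=[50.0, 8.0, 2.0, 30.0, 25.0, 40.0],
    size=(500, 6),
)
# Synthetic target: total soil nitrogen ~12 days ahead (arbitrary nonlinear mix).
y = 0.8 * X[:, 0] - 3.0 * np.abs(X[:, 1] - 6.5) + 0.2 * X[:, 4] + rng.normal(0, 2, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# Predict nitrogen for one new field reading, to inform fertilization timing.
reading = np.array([[12.0, 6.8, 0.9, 10.0, 14.0, 5.0]])
predicted_nitrogen = model.predict(reading)[0]
print(f"Predicted soil nitrogen in ~12 days: {predicted_nitrogen:.1f} (arbitrary units)")
```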

The research study identifies how this new low-cost solution could help growers maximize crop yields with minimal fertilization, particularly for fertilizer-thirsty crops like wheat. The technology could simultaneously reduce growers’ expenses and the environmental harm from nitrogen-based fertilizers – the most widely used fertilizer type.

Lead researcher Dr. Max Grell, who co-developed the technology at Imperial College London’s Department of Bioengineering in the UK, said: “It’s difficult to overstate the problem of overfertilization, both environmentally and economically. Yields and resulting income are down year by year, and growers don’t currently have the tools they need to combat this.

“Our technology could help to tackle this problem by empowering growers to know how much ammonia and nitrate are currently in soil and to predict how much there will be in the future based on weather conditions. This could let them fine-tune fertilization to the specific needs of the soil and crops.” 

Nitrogen pollution 

Excess nitrogen fertilizer releases nitrous oxide into the air, a greenhouse gas around 300 times more potent than carbon dioxide that contributes to the climate crisis. Excess fertilizer can also be washed by rain into waterways, where it fuels algal blooms that deprive aquatic life of oxygen and reduce biodiversity.

However, it remains difficult to precisely tailor levels of fertilization to soil and crop needs. Testing is rare and current ways to measure soil nitrogen involve sending soil samples to laboratories – a lengthy and expensive process whose results are of limited use by the time they reach the grower. 

This new low-cost approach could expedite the process of testing the soil. While chemPEGS only measures ammonium, the machine learning component allows it to predict current levels of nitrate and future levels of nitrate and ammonium in the soil. 

Senior author and principal investigator Dr. Firat Guder, from Imperial’s Department of Bioengineering, said: “Much of our food comes from soil - a non-renewable resource which we’ll lose if we don’t look after it. This, combined with nitrogen pollution from agriculture, presents a conundrum for the planet – one that we hope to help tackle with precision agriculture. 

“Our sensing technology can measure and predict soil nitrogen with enough accuracy to forecast the impact of weather on fertilization planning, and tune timing for crop requirements, which we hope will help to reduce overfertilization while improving crop yields and profits for growers." 

The researchers expect chemPEGS and associated AI technology, which are currently in the prototype stage, to be available for commercialization in three to five years with more testing and manufacturing standardization. 

Novel MD simulation method can accelerate COVID-19 drug discovery

The technique was used by researchers affiliated with institutions in Brazil, Germany, and Finland to study the SARS-CoV-2 main protease (Mpro), a key driver of the virus’s reproductive cycle.

A study by researchers affiliated with institutions in Brazil, Germany, and Finland proposes a new standard for a supercomputer simulation that promises to accelerate the search for novel bioactive compounds against the virus that causes COVID-19. The researchers used the procedure to analyze a key protein in the reproductive cycle of SARS-CoV-2, which has been a major focus of attention for scientists and the pharmaceutical industry as a target for antiviral drugs. They estimate that the method they have developed can cut the time taken in the initial stage of basic research from two or three years to under a year. 

An article reporting the results of the study is published in the Journal of Biomolecular Structure and Dynamics. The authors explain that the main protease of SARS-CoV-2 (Mpro) is a cysteine protease: its active site contains a cysteine residue that must be intact and reactive for the enzyme to function and the virus to remain viable. Proteases are enzymes that cleave the peptide bonds between amino acids in proteins, breaking polyproteins (long protein chains) down into smaller proteins, which in this virus build the machinery that replicates the RNA encoding key structures such as the spike (the component that enables it to invade and infect human cells) and the viral envelope (the outer layer that protects its genetic material).

Mpro is considered a target for drugs against COVID-19 because inhibiting the cleavage of key proteins could block viral invasion and replication. One potential strategy entails synthesizing chemical compounds designed to bind to specific parts of such proteins to inactivate them.

Drug discovery and development takes about 20 years on average. The search for active compounds is the initial stage of drug discovery and typically takes two to three years; the study just published proposes a method that could shorten that stage to under a year in the case of novel bioactive compounds against SARS-CoV-2.

“The pre-clinical phase of the development of the vaccines the world is now administering was accelerated by the use of published information about SARS-CoV-1, from which SARS-CoV-2 evolved. We can’t benefit in this way while doing basic research to develop a drug because we lack the requisite base information,” said Glaucio Monteiro Ferreira, first author of the article. 

Ferreira is a professor in the Clinical and Toxicological Analysis Department at the University of São Paulo’s School of Pharmaceutical Sciences (FCF-USP) in Brazil and conducted part of the research while he was a postdoctoral fellow at Tübingen University Hospital in Germany, with FAPESP’s support (16/12899-6 and 19/24112-9).  

Another reason for the long lead time in drug development is the complexity of researching how best to administer the substance (orally or by injection, for example). “If it is a medication to be ingested, we have to make sure it will pass through all the barriers in the body to reach the site where it should act. These details explain why drugs take longer to develop than vaccines,” he said. 

Dimers

Ferreira used advanced bioinformatics and structural biology techniques to investigate Mpro (also called 3CLpro), which needs cysteine as a substrate or “food” to perform its function. Previous research had shown that many key proteins in SARS-CoV-2 are monomers, meaning they have a single chain of amino acids. “However, we know this protease is a dimer, with two protein chains. This complicates drug discovery because you have to find a compound that can prevent the formation of both chains,” he said.

Promising results had been achieved using covalent cysteine-binding inhibitors in previous research. In this study, the aim was to find a means of inhibiting cysteine itself to prevent Mpro from “feeding” on it and block viral replication. 

Computer simulations were run to look for a compound that prevented the formation of the two chains, using molecular dynamics to analyze the physical motion of atoms in a simulated viral attack on human cells. From simulations with one and two ligands bound to the protein, Ferreira and his group found that X-ray diffraction analysis of the monomers captures the structure only after cysteine has been consumed and the protein cleaved – an approach that misses the phase of interest, namely the blocking of this step in viral replication.
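The molecular dynamics technique mentioned above comes down to integrating Newton's equations of motion for every atom over many small time steps. The toy example below shows that basic loop for a two-particle harmonic bond using the velocity Verlet integrator; it is a minimal sketch, not the force fields, software, or systems used in the study.

```python
# Toy molecular-dynamics loop (velocity Verlet) for two particles joined by a
# harmonic bond; a stand-in for the far larger simulations used in the study.
import numpy as np

def harmonic_forces(pos, k=100.0, r0=1.0):
    """Forces on two particles connected by a spring with rest length r0."""
    d = pos[1] - pos[0]
    r = np.linalg.norm(d)
    f = -k * (r - r0) * d / r          # force on particle 1
    return np.array([-f, f])           # Newton's third law for particle 0

def velocity_verlet(pos, vel, mass, dt, steps):
    forces = harmonic_forces(pos)
    for _ in range(steps):
        pos = pos + vel * dt + 0.5 * forces / mass * dt**2
        new_forces = harmonic_forces(pos)
        vel = vel + 0.5 * (forces + new_forces) / mass * dt
        forces = new_forces
    return pos, vel

pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])   # slightly stretched bond
vel = np.zeros((2, 3))
pos, vel = velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=1000)
print("Final separation:", np.linalg.norm(pos[1] - pos[0]))
```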

They then simulated the use of two inhibitors that affect the action of Mpro – covalently bound ligands N1 and N3 – and found the former to be more effective in that it did not permit electric charge donation to cysteine, “starving” the enzyme as a result. 

“Our simulations led us more quickly to this inhibitor, which really can block the action of the enzyme, indicating the compound’s potential to become a powerful drug,” Ferreira said. Coincidentally, Pfizer recently announced an initiative to find a COVID-19 drug targeting Mpro, although Ferreira’s research began well before that announcement.

His collaborators were Thales Kronenberger and Antti Poso at Tübingen University Hospital (Germany); Arun Kumar Tonduru at the University of Eastern Finland; and Rosario Dominguez Crespo Hirata and Mario Hiroyuki Hirata at the University of São Paulo (USP) in Brazil. The investigation was performed during Ferreira’s postdoctoral research at the Molecular Biology Laboratory for Diagnosis and Pharmacogenomics (LBMAD) under the supervision of Professor Hirata.

OSU research enables a key step toward personalized medicine by modeling biological systems

A new study by the Oregon State University College of Engineering shows that machine learning techniques can offer powerful new tools for advancing personalized medicine, care that optimizes outcomes for individual patients based on unique aspects of their biology and disease features.

The research with machine learning, a branch of artificial intelligence in which computer systems use algorithms and statistical models to look for trends in data, tackles long-unsolvable problems in biological systems at the cellular level, said Oregon State’s Brian D. Wood, who conducted the study with then OSU Ph.D. student Ehsan Taghizadeh and Helen M. Byrne of the University of Oxford.

“Those systems tend to have high complexity – first because of the vast number of individual cells and second, because of the highly nonlinear way in which cells can behave,” said Wood, a professor of environmental engineering. “Nonlinear systems present a challenge for upscaling methods, which is the primary means by which researchers can accurately model biological systems at the larger scales that are often the most relevant.”

A linear system in science or mathematics means any change to the system’s input results in a proportional change to the output; a linear equation, for example, might describe a slope that gains 2 feet vertically for every foot of horizontal distance.

Nonlinear systems don’t work that way, and many of the world’s systems, including biological ones, are nonlinear.

The new research, funded in part by the U.S. Department of Energy and published in the Journal of Computational Physics, is one of the first examples of using machine learning to address issues with modeling nonlinear systems and understanding complex processes that might occur in human tissues, Wood said.

“The advent of machine learning has given us a new tool in our arsenal to solve problems we could not solve before,” he explained. “While the tools themselves are not necessarily new, the particular applications we have are very different. We are beginning to apply machine learning in a more constrained way, and this is allowing us to solve physical problems we had no way of solving before.”

In modeling cellular activity within an organ, it is not possible to individually model each cell in that organ – a cubic centimeter of tissue may contain a billion cells – so researchers rely on what’s known as upscaling.

Upscaling seeks to decrease the data required to analyze or model a particular biological process while maintaining the fidelity – the degree to which a model accurately reproduces something – of the core biology, chemistry, and physics occurring at the cellular level.  

Biological systems, Wood notes, resist traditional upscaling techniques, and that’s where machine learning methods come in.

By reducing the information load for a very complicated system at the cellular level, researchers can better analyze and model the impact or response of those cells with high fidelity without having to model each one. Wood describes it as “simplifying a computational problem that has tens of millions of data points by reducing it to thousands of data points.”

The new approach could pave the way to potential patient treatments based on numerical model outcomes. In this study, researchers were able to employ machine learning and develop a novel method to resolve classic nonlinear problems in biological and chemical systems.

“Our work capitalizes on what is called deep neural networks to upscale the nonlinear processes found in transport and reactions within tissues,” Wood said.
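As a rough, self-contained illustration of that idea (not the authors' actual networks, data, or tissue models), the sketch below trains a small neural network on synthetic "fine-scale" samples of a nonlinear reaction rate, producing a cheap surrogate that a coarse-scale simulation could query instead of resolving every cell.

```python
# Illustrative surrogate-upscaling sketch: a small neural network learns an
# effective nonlinear reaction rate from synthetic fine-scale samples.
# The network, data, and rate law are placeholders, not the study's models.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic fine-scale data: concentration -> nonlinear (saturating) rate.
c = torch.rand(2000, 1) * 5.0
rate = 2.0 * c / (0.5 + c) + 0.05 * torch.randn_like(c)

surrogate = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(c), rate)
    loss.backward()
    optimizer.step()

# The trained surrogate can now stand in for the expensive cell-level model
# inside a coarse (upscaled) tissue simulation.
test_c = torch.tensor([[0.25], [1.0], [4.0]])
print(surrogate(test_c).detach().squeeze())
```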

Wood is collaborating on another research project employing machine learning techniques to model blood flow through the body.

“The promises of individualized medicine are rapidly becoming a reality,” he said. “Multiple disciplines – such as molecular biology, applied mathematics, and continuum mechanics – are being combined in new ways to make this possible. One of the key components of this will certainly be the continuing advances in machine learning methods.”

UK scientists solve the grass leaf conundrum

Grass is cut regularly by our mowers and grazed on by cows and sheep, yet it continues to grow back. The secret to its remarkable regenerative powers lies in part in the shape of its leaves, but how that shape arises has been a topic of longstanding debate.

The debate is relevant to our staple crops wheat, rice, and maize because they are members of the grass family with the same type of leaf.

The mystery of grass leaf formation has now been unraveled by a John Innes Centre team using the latest computational modeling and developmental genetic techniques. 

The John Innes Centre (JIC), located in Norwich, Norfolk, England, is an independent center for research and training in plant and microbial science founded in 1910.

One of the corresponding authors, Professor Enrico Coen, said of the findings, which appear in Science: “The grass leaf has been a conundrum. By formulating and testing different models for its evolution and development we’ve shown that current theories are likely incorrect and that a discarded idea proposed in the 19th century is much nearer the mark.”

Flowering plants can be categorized into monocots and eudicots. Monocots, which include the grass family, have leaves that encircle the stem at their base and have parallel veins throughout.  Eudicots, which include brassicas, legumes, and most common garden shrubs and trees, have leaves that are held away from the stem by stalks, termed petioles, and typically have broad laminas with net-like veins.

In grasses, the base of the leaf forms a tube-like structure, called the sheath. The sheath allows the plant to increase in height while keeping its growing tip close to the ground, protecting it from the blades of lawnmowers or incisors of herbivores.

In the 19th century, botanists proposed that the grass sheath was equivalent to the petiole of eudicot leaves. But this view was challenged in the 20th century, when plant anatomists noted that petioles have parallel veins, similar to the grass leaf, and concluded that the entire grass leaf (except for a tiny region at its tip) was derived from the petiole.

Using recent advances in computational modeling and developmental genetics, the team revisited the problem of grass development.  They modeled different hypotheses for how grass leaves grow and tested the predictions of each model against experimental results.  To their surprise, they found that the model based on the 19th-century idea of sheath-petiole equivalence was much more strongly supported than the current view.

This mirrors findings in animal development, where a discarded theory – that the ‘underbelly’ side of insects corresponds to the back of vertebrates like us – was vindicated in the light of fresh developmental genetic research.

The grass study shows how simple modulations of growth rules, based on a common pattern of gene activities, can generate a remarkable diversity of different leaf shapes, without which our gardens and dining tables would be much poorer.

The study, “Evolution of the grass leaf by primordium extension and petiole-lamina remodelling,” is published in the journal Science, DOI: 10.1126/science.abf9407.