Webb telescope captures its first image of an exoplanet

For the first time, astronomers have used NASA’s James Webb Space Telescope to take a direct image of a planet outside our solar system. The exoplanet is a gas giant, meaning it has no rocky surface and could not be habitable. This image shows the exoplanet HIP 65426 b in different bands of infrared light as seen by Webb: purple shows the NIRCam instrument’s view at 3.00 micrometers, blue shows the NIRCam instrument’s view at 4.44 micrometers, yellow shows the MIRI instrument’s view at 11.4 micrometers, and red shows the MIRI instrument’s view at 15.5 micrometers. These images look different because of the ways the different Webb instruments capture light. A set of masks within each instrument, called a coronagraph, blocks out the host star’s light so that the planet can be seen. The small white star in each image marks the location of the host star HIP 65426, which has been subtracted using the coronagraphs and image processing. The bar shapes in the NIRCam images are artifacts of the telescope’s optics, not objects in the scene. Credit: NASA/ESA/CSA, A. Carter (UCSC), the ERS 1386 team, and A. Pagan (STScI).
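
The caption’s color scheme is, in effect, a small lookup table from instrument and wavelength to display color. A minimal sketch of that mapping (the instruments and wavelengths come from the caption; the record layout itself is just illustrative structure):

```python
# The four-band color composite described in the caption, as a simple mapping.
# Wavelengths are from the caption; the data layout is illustrative only.
BANDS = [
    {"instrument": "NIRCam", "wavelength_um": 3.00, "display_color": "purple"},
    {"instrument": "NIRCam", "wavelength_um": 4.44, "display_color": "blue"},
    {"instrument": "MIRI",   "wavelength_um": 11.4, "display_color": "yellow"},
    {"instrument": "MIRI",   "wavelength_um": 15.5, "display_color": "red"},
]

for band in BANDS:
    print(f"{band['instrument']} at {band['wavelength_um']} µm -> shown in {band['display_color']}")
```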

The image, as seen through four different light filters, shows how Webb’s powerful infrared gaze can easily capture worlds beyond our solar system, pointing the way to future observations that will reveal more information than ever before about exoplanets.

“This is a transformative moment, not only for Webb but also for astronomy generally,” said Sasha Hinkley, associate professor of physics and astronomy at the University of Exeter in the United Kingdom, who led these observations with a large international collaboration. Webb is an international mission led by NASA in collaboration with its partners, ESA (European Space Agency) and CSA (Canadian Space Agency).

The exoplanet in Webb’s image, called HIP 65426 b, is about six to 12 times the mass of Jupiter, and these observations could help narrow that down even further. It is young as planets go — about 15 to 20 million years old, compared to our 4.5-billion-year-old Earth.

Astronomers discovered the planet in 2017 using the SPHERE instrument on the European Southern Observatory’s Very Large Telescope in Chile and took images of it using short infrared wavelengths of light. Webb’s view, at longer infrared wavelengths, reveals new details that ground-based telescopes would not be able to detect because of the intrinsic infrared glow of Earth’s atmosphere.

Researchers have been analyzing the data from these observations and are preparing a paper. But Webb’s first capture of an exoplanet already hints at future possibilities for studying distant worlds.

Since HIP 65426 b is about 100 times farther from its host star than Earth is from the Sun, Webb can easily separate the planet from the star in the image.
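
A back-of-the-envelope calculation shows why that separation is comfortable for Webb. The roughly 100 au figure is from the article; the distance to the HIP 65426 system (taken here as roughly 110 parsecs) is an outside assumption, not a figure stated in the text:

```python
# Angular separation of HIP 65426 b from its star, via the small-angle
# definition of the parsec: theta[arcsec] = separation[au] / distance[pc].
separation_au = 100.0  # ~100 au projected separation (from the article)
distance_pc = 110.0    # assumed distance to HIP 65426; not stated in the article

theta_arcsec = separation_au / distance_pc
print(f"Angular separation: {theta_arcsec:.2f} arcsec")  # ~0.9 arcsec, wide for Webb
```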

Webb’s Near-Infrared Camera (NIRCam) and Mid-Infrared Instrument (MIRI) are both equipped with coronagraphs, which are sets of tiny masks that block out starlight, enabling Webb to take direct images of certain exoplanets like this one. NASA’s Nancy Grace Roman Space Telescope, slated to launch later this decade, will demonstrate an even more advanced coronagraph.

“It was really impressive how well the Webb coronagraphs worked to suppress the light of the host star,” Hinkley said.

Taking direct images of exoplanets is challenging because stars are so much brighter than planets. HIP 65426 b is more than 10,000 times fainter than its host star in the near-infrared, and a few thousand times fainter in the mid-infrared.
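
Converted into the magnitude scale astronomers usually quote, those contrast ratios look like this (the 3,000:1 mid-infrared figure is a stand-in for the article’s “a few thousand”):

```python
import math

# Convert the quoted star-planet flux ratios into magnitude differences
# using Pogson's relation: delta_m = 2.5 * log10(flux_ratio).
for band, flux_ratio in [("near-infrared", 10_000), ("mid-infrared", 3_000)]:
    delta_mag = 2.5 * math.log10(flux_ratio)
    print(f"{band}: planet is ~{delta_mag:.1f} magnitudes fainter than its star")
# near-infrared: ~10.0 mag; mid-infrared: ~8.7 mag
```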

In each filtered image, the planet appears as a slightly differently shaped blob of light. That is because of the particulars of Webb’s optical system and how it translates light through the different instruments.

“Obtaining this image felt like digging for space treasure,” said Aarynn Carter, a postdoctoral researcher at the University of California, Santa Cruz, who led the analysis of the images. “At first all I could see was light from the star, but with careful image processing I was able to remove that light and uncover the planet.”

While this is not the first direct image of an exoplanet taken from space – the Hubble Space Telescope has captured direct exoplanet images previously – HIP 65426 b points the way forward for Webb’s exoplanet exploration.

“I think what’s most exciting is that we’ve only just begun,” Carter said. “There are many more images of exoplanets to come that will shape our overall understanding of their physics, chemistry, and formation. We may even discover previously unknown planets, too.”

Swedish researchers use supercomputer simulations to show that stable molecules can become reactive with light

Researchers at Linköping University have used supercomputer simulations to show that stable aromatic molecules can become reactive after absorbing light. The results, published in the Journal of Organic Chemistry under the title “Modulating the photocyclization reactivity of diarylethenes through changes in the excited-state aromaticity of the π-linker”, may have long-term applications in areas such as the storage of solar energy, pharmacology, and molecular machines.

“Everyone knows that petrol smells nice. This is because it contains the aromatic molecule benzene. And aromatic molecules don’t just smell nice: they have many useful chemical properties. Our discovery means that we can add more properties”, says Bo Durbeej, professor of computational physics at Linköping University.

In normal organic chemistry, heat can be used to start reactions. However, an aromatic molecule is a stable hydrocarbon, and it is difficult to initiate reactions between such molecules and others simply by heating. This is because the molecule is already in an optimal energy state. In contrast, a reaction in which an aromatic molecule is formed takes place extremely readily. 

Researchers at Linköping University have now used supercomputer simulations to show that it is possible to activate aromatic molecules using light. Reactions of this type are known as photochemical reactions.

“It is possible to add more energy using light than using heat. In this case, light can help an aromatic molecule to become antiaromatic, and thus highly reactive. This is a new way to control photochemical reactions using the aromaticity of the molecules”, says Bo Durbeej.

The result was important enough to be highlighted on the cover of the Journal of Organic Chemistry when it was published. In the long term, it has possible applications in many areas. Bo Durbeej’s research group focuses on applications in the storage of solar energy, but he sees potential also in molecular machines, molecular synthesis, and photopharmacology. In the latter application, it may be possible to use light to selectively activate drugs with aromatic groups at a location in the body where the pharmacological effect is wanted.

“In some cases, it’s not possible to supply heat without harming surrounding structures, such as body tissue. It should, however, be possible to supply light”, says Bo Durbeej.

The researchers tested the hypothesis that it was the loss of aromaticity that led to the increased reactivity by examining the opposite relationship in the simulations. In this case, they started with an unstable antiaromatic molecule and simulated subjecting it to light irradiation. This led to the formation of an aromatic compound, and the researchers saw, as expected, that the reactivity was lost.

“Our discovery extends the concept of ‘aromaticity’, and we have shown that we can use this concept in organic photochemistry”, says Bo Durbeej.

UMaine deploys AI in its wireless network to better monitor Maine’s forests

Monitoring and measuring forest ecosystems is a complex challenge: the combination of software, collection systems, and computing environments involved requires ever-increasing amounts of energy to run. The University of Maine’s Wireless Sensor Networks (WiSe-Net) laboratory has developed a novel method of using artificial intelligence and machine learning to make monitoring soil moisture more energy- and cost-efficient — one that could be used to make measuring more efficient across the broad forest ecosystems of Maine and beyond.

Soil moisture is an important variable in forested and agricultural ecosystems alike, particularly given the drought conditions of recent Maine summers. Despite robust soil moisture monitoring networks and large, freely available databases, the cost of commercial soil moisture sensors and the power they consume can be prohibitive for researchers, foresters, farmers, and others tracking the health of the land.

Along with researchers at the University of New Hampshire and the University of Vermont, UMaine’s WiSe-Net designed a wireless sensor network that uses artificial intelligence to learn how to be more power efficient in monitoring soil moisture and processing the data. The research was funded by a grant from the National Science Foundation.

“AI can learn from the environment, predict the wireless link quality and incoming solar energy to efficiently use limited energy and make a robust low-cost network run longer and more reliably,” says Ali Abedi, principal investigator of the recent study and professor of electrical and computer engineering at the University of Maine.

The software learns over time how to make the best use of available network resources, which helps produce power-efficient systems at a lower cost for large-scale monitoring compared to the existing industry standards.

WiSe-Net also collaborated with Aaron Weiskittel, director of the Center for Research on Sustainable Forests, to ensure that all hardware and software research is informed by the science and tailored to researchers’ needs.

“Soil moisture is a primary driver of tree growth, but it changes rapidly, both daily as well as seasonally,” Weiskittel says. “We have lacked the ability to monitor effectively at scale. Historically, we used expensive sensors that collected at fixed intervals — every minute, for example — but were not very reliable. A cheaper and more robust sensor with wireless capabilities like this really opens the door for future applications for researchers and practitioners alike.”

Although the system designed by the researchers focuses on soil moisture, the same methodology could be extended to other types of sensors, such as ambient temperature and snow depth, and the networks could be scaled up with more sensor nodes.

“Real-time monitoring of different variables requires different sampling rates and power levels. An AI agent can learn these and adjust the data collection and transmission frequency accordingly rather than sampling and sending every single data point, which is not as efficient,” Abedi says. 
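
The article does not publish WiSe-Net’s algorithm, but the behavior Abedi describes (learning sampling rates and transmission schedules from experience) can be sketched as a simple reinforcement-style policy. Everything below, from the epsilon-greedy scheme to the candidate intervals and the reward shaping, is an illustrative assumption, not the lab’s implementation:

```python
import random

# A minimal sketch of the adaptive-sampling idea described above. WiSe-Net's
# actual algorithm is not published in this article; the epsilon-greedy policy,
# candidate intervals, and reward shaping below are illustrative assumptions.

SAMPLE_INTERVALS = [60, 300, 900]  # candidate sampling periods, in seconds

class AdaptiveSampler:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon                           # exploration rate
        self.value = {i: 0.0 for i in SAMPLE_INTERVALS}  # running reward per interval
        self.count = {i: 0 for i in SAMPLE_INTERVALS}

    def choose_interval(self):
        # Explore occasionally; otherwise exploit the best-known interval.
        if random.random() < self.epsilon:
            return random.choice(SAMPLE_INTERVALS)
        return max(self.value, key=self.value.get)

    def update(self, interval, info_gain, energy_cost):
        # Reward information captured per unit of energy spent, so the node
        # samples less often when the soil signal is static or energy is scarce.
        reward = info_gain - energy_cost
        self.count[interval] += 1
        self.value[interval] += (reward - self.value[interval]) / self.count[interval]

sampler = AdaptiveSampler()
interval = sampler.choose_interval()  # seconds until the next soil reading
# ...take a reading, estimate its novelty and the radio/energy cost...
sampler.update(interval, info_gain=0.8, energy_cost=0.3)
```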

Drexel's supercomputer model could help project severity of next COVID variant

As public health officials around the world contend with the latest surge of the COVID-19 pandemic, researchers at Drexel University have created a supercomputer model that could help them be better prepared for the next one. Using machine learning algorithms, trained to identify correlations between changes in the genetic sequence of the COVID-19 virus and upticks in transmission, hospitalizations, and deaths, the model can provide an early warning about the severity of new variants.

More than two years into the pandemic, scientists and public health officials are doing their best to predict how mutations of the SARS-CoV-2 virus are likely to make it more transmissible, evasive to the immune system, and likely to cause severe infections. But collecting and analyzing the genetic data to identify new variants — and linking it to the specific patients who have been sickened by it — is still an arduous process.

Because of this, most public health projections about new “variants of concern” — as the World Health Organization categorizes them — are based on surveillance testing and observation of the regions where they are already spreading.

“The speed with which new variants, like Omicron, have made their way around the globe means that by the time public health officials have a good handle on how vulnerable their population might be, the virus has already arrived,” said Bahrad A. Sokhansanj, Ph.D., an assistant research professor in Drexel’s College of Engineering who led the development of the computer model. “We’re trying to give them an early warning system – like advanced weather modeling for meteorologists – so they can quickly predict how dangerous a new variant is likely to be — and prepare accordingly.”

The Drexel model, which was recently published in the journal Computers in Biology and Medicine, is driven by targeted analysis of the genetic sequence of the virus’s spike protein — the part of the virus that allows it to evade the immune system and infect healthy cells, and also the part known to have mutated most frequently throughout the pandemic — combined with a mixed-effects machine learning analysis of factors such as the age, sex, and geographic location of COVID patients.

Learning to Find Patterns

The research team used a newly developed machine learning algorithm, called GPBoost, based on methods commonly used by large companies to analyze sales data. Via a textual analysis, the program can quickly home in on the areas of the genetic sequence that are most likely to be linked to changes in the severity of the variant.

It layers these patterns with those gleaned from a separate perusal of patient metadata (age and sex) and medical outcomes (mild cases, hospitalizations, deaths). The algorithm also accounts for, and attempts to remove, biases due to how different countries collect data. This training process not only allows the program to validate the predictions it has already made about the existing variants, but it also prepares the model to make projections when it comes across new mutations in the spike protein. It shows these projections as a range of severity – from mild cases to hospitalizations and deaths – depending on the age or sex of a patient.
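
GPBoost is an open-source library that combines gradient tree boosting with mixed (random) effects, which matches the description above. Below is a minimal sketch of how such a model could be set up; the features, the severity encoding, the hyperparameters, and the use of reporting country as the grouping variable are illustrative assumptions rather than the paper’s actual pipeline:

```python
import numpy as np
import gpboost as gpb  # GPBoost: tree boosting combined with mixed (random) effects

# Illustrative sketch only. Assumed here: numeric features derived from the
# spike-protein sequence plus patient age/sex, a numeric severity score as the
# label, and the reporting country as a grouped random effect that absorbs
# country-level data-collection biases.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 8))             # stand-in sequence + age/sex features
country = rng.integers(0, 20, size=n)   # grouping variable for the random effect
y = 0.5 * X[:, 0] + rng.normal(size=n)  # stand-in severity outcome

gp_model = gpb.GPModel(group_data=country, likelihood="gaussian")
train_set = gpb.Dataset(X, y)
params = {"objective": "regression_l2", "learning_rate": 0.05,
          "max_depth": 4, "verbose": 0}
bst = gpb.train(params=params, train_set=train_set,
                gp_model=gp_model, num_boost_round=200)

# Predicting for new sequences: the tree ensemble's contribution and the learned
# country effect come back separately, which keeps the geographic bias explainable.
pred = bst.predict(data=X[:5], group_data_pred=country[:5], pred_latent=True)
severity_estimate = pred["fixed_effect"] + pred["random_effect_mean"]
```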

“When we get a sequence, we can make a prediction about the risk of severe disease from a variant before labs run experiments with animal models or cell culture, or before enough people get sick that you can collect epidemiological data. In other words, our model is more like an early warning system for emerging variants,” Sokhansanj said.

Genetic and patient data from the GISAID database – the largest compendium of information on people who have been infected with the coronavirus – were used to train the algorithm. Once the algorithms were primed, the team used them to make projections about the Omicron subvariants that followed BA.1 and BA.2.

“We show that future Omicron subvariants are likelier to cause more severe disease,” Sokhansanj said. “Of course, in the real world, that increased disease severity will be mitigated by prior infection by the previous Omicron variants – this factor is also reflected in the modeling.”

Keeping Up with COVID

Drexel’s targeted approach to predictive modeling of COVID-19 is a crucial development because the massive amount of genetic sequencing data being collected has strained standard analysis methods, which can no longer extract useful information quickly enough to keep up with the virus’s new mutations.

“The amount of spike protein mutations has already been quite substantial and it will likely continue because the virus is encountering hosts that have never been infected before,” said Gail Rosen, Ph.D., a professor in the College of Engineering, who heads Drexel’s Ecological and Evolutionary Signal-processing and Informatics Laboratory.

“Some estimates suggest that SARS-CoV-2 has only ‘explored’ as little as 30-40% of the potential space for spike mutations,” she said. “When you consider that each mutation could impact key virus properties, like virulence and immune evasion, it seems vital to be able to quickly identify these variations and understand what they mean for those who are vulnerable to infection.”

Rosen’s lab has been at the forefront of using algorithms to cut through the noise of genetic sequencing data and identify patterns that are likely to be significant. Early in the pandemic, the group was able to track the geographic evolution of new SARS-CoV-2 variants by developing a method for quickly identifying and labeling its mutations. Her team has continued to leverage this process to better understand the patterns of the pandemic.

Vision Among Variables

Up until now, scientists have predominantly used genetic sequencing to better identify mutations alongside lab experiments and epidemiological studies. There has been little success in linking specific genetic sequence variations to the virulence of new variants. The Drexel researchers believe this is due to progressive changes in vaccination and immunity over time, as well as variations in how data is reported in different countries.

“We know that each successive COVID-19 variant thus far has resulted in slightly milder infections because of increases in vaccination, immunity, and health care providers having a better understanding of how to treat infections. But what we have discovered through our mixed effects analysis is that this trend does not necessarily hold for each country. This is why our model considers geographic location as one of the variables taken into consideration by the machine learning algorithm,” Sokhansanj said.

While disparities and inconsistencies in patient and public health data have been a challenge for public health officials throughout the pandemic, the Drexel model can account for this and explain how it affected the algorithm’s projections.

“One of our key goals was making sure that the model is explainable, that is, we can tell why it's making the predictions that it's making,” Sokhansanj said. “You really want a model that allows you to look under the hood to see, for example, the reasons why its predictions may or may not agree with what biologists understand from lab experiments — to ensure the predictions are built on the right structure.”

A Better View

The team notes that advances like this underscore the need to provide more public health resources to vulnerable areas of the world — not only for treatment and vaccination but also for collecting public health data, including sequencing emerging variants.

The researchers are currently using the model to more rigorously analyze the current group of emerging variants that will become dominant after Omicron BA.4 and BA.5.

“The virus can and will continue to surprise us,” Sokhansanj said. “We urgently need to expand our global capacity to sequence variants, so that we can analyze the sequences of potentially dangerous variants as soon as they show up — before they become a worldwide problem.”

Swedish biologists develop algorithm that uncovers the secrets of cell factories

Drug molecules and biofuels can be made to order by living cell factories, where biological enzymes do the job. Now researchers at Chalmers University of Technology have developed a supercomputer model that can predict how fast enzymes work, making it possible to find the most efficient living factories, as well as to study difficult diseases.

Enzymes are proteins found in all living cells. Their job is to act as catalysts that increase the rate of specific chemical reactions that take place in the cells. The enzymes thus play a crucial role in making life on earth work and can be compared to nature's small factories. They are also used in detergents, and to manufacture, among other things, sweeteners, dyes, and medicines. The potential uses are almost endless but are hindered by the fact that it is expensive and time-consuming to study the enzymes.

“To study every natural enzyme with experiments in a laboratory would be impossible, they are simply too many. But with our algorithm, we can predict which enzymes are most promising just by looking at the sequence of amino acids they are made up of”, says Eduard Kerkhoven, a researcher in systems biology at the Chalmers University of Technology and the study's lead author.

Only the most promising enzymes need to be tested
The enzyme turnover number, or kcat value, describes how fast and efficiently an enzyme works and is essential for understanding a cell’s metabolism. In the new study, Chalmers researchers have developed a computer model that can quickly calculate the kcat value. The only information needed is the order of the amino acids that build up the enzyme – something that is often widely available in open databases. After the model makes the first selection, only the most promising enzymes need to be tested in the lab.
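
The article does not detail the model’s architecture, so the following is only a toy illustration of the sequence-in, kcat-out idea: amino-acid composition features, a random-forest regressor, and the tiny made-up training set are all stand-ins for whatever the Chalmers model actually uses:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy illustration of predicting kcat directly from an amino-acid sequence.
# Features, regressor, and training data are illustrative assumptions.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_features(sequence: str) -> np.ndarray:
    """Fraction of each of the 20 standard amino acids in the sequence."""
    seq = sequence.upper()
    return np.array([seq.count(aa) / max(len(seq), 1) for aa in AMINO_ACIDS])

# Hypothetical training data: enzyme sequences with measured log10(kcat) values.
train_sequences = ["MKTAYIAKQR", "GAVLIMCFYW", "STNQDEKRHP"]  # toy sequences
train_log_kcat = [1.2, -0.3, 0.7]                            # toy labels, log10(s^-1)

X = np.array([composition_features(s) for s in train_sequences])
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, train_log_kcat)

# Predict kcat for a new enzyme straight from its amino-acid sequence.
new_enzyme = "MSTAVLENPGLGRKLSA"
predicted_log_kcat = model.predict([composition_features(new_enzyme)])[0]
print(f"Predicted kcat: about {10 ** predicted_log_kcat:.2f} per second")
```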

Given the number of naturally occurring enzymes, the researchers believe that the new calculation model may be of great importance.

“We see many possible biotechnological applications. As an example, biofuels can be produced when enzymes break down biomass in a sustainable manufacturing process. The algorithm can also be used to study diseases in the metabolism, where mutations can lead to defects in how enzymes in the human body work”, says Eduard Kerkhoven.

More knowledge of enzyme production
Other possible applications include more efficient production of compounds that are made by natural organisms rather than through industrial processes. Penicillin extracted from a mold is one such example, as are the cancer drug taxol from yew and the sweetener stevia. Such compounds are typically produced in low amounts by their natural hosts.

“The development and manufacture of new natural products can be greatly helped by knowledge of which enzymes can be used”, says Eduard Kerkhoven.

The calculation model can also point out the changes in kcat value that occur if enzymes mutate, and identify unwanted amino acids that can have a major impact on an enzyme’s efficiency. It can also predict whether the enzymes produce more than one “product”.
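
Reusing the toy model from the sketch above (AMINO_ACIDS, composition_features, model, and new_enzyme), a brute-force scan of single-point mutations illustrates how predicted mutation effects might be ranked; the scoring scheme is again an illustrative assumption:

```python
# Scan every single-point mutation of the enzyme and rank the substitutions
# whose predicted effect on log10(kcat) is most damaging. Requires the names
# defined in the previous sketch.

def mutation_effects(model, sequence):
    baseline = model.predict([composition_features(sequence)])[0]
    effects = []
    for pos, original in enumerate(sequence):
        for substitute in AMINO_ACIDS:
            if substitute == original:
                continue
            mutant = sequence[:pos] + substitute + sequence[pos + 1:]
            delta = model.predict([composition_features(mutant)])[0] - baseline
            effects.append((pos, original, substitute, delta))
    return sorted(effects, key=lambda e: e[3])  # largest predicted drops first

for pos, orig, sub, delta in mutation_effects(model, new_enzyme)[:5]:
    print(f"{orig}{pos + 1}{sub}: predicted change in log10(kcat) = {delta:+.2f}")
```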

“We can reveal if the enzymes have any ‘moonlighting’ activities and produce metabolites that are not desirable. It is useful in industries where you often want to manufacture a single pure product.”

The researchers tested their model by using 3 million kcat values to simulate metabolism in more than 300 types of yeasts. They created computer models of how fast the yeasts could grow or produce certain products, like ethanol. When compared with measured, pre-existing knowledge, the researchers concluded that models with predicted kcat values could accurately simulate metabolism.