An increasingly warm and ice-free Arctic Ocean has, in recent decades, led to more moisture in higher latitudes. This moisture is transported south by cyclonic weather systems where it precipitates as snow, influencing the global hydrological cycle and many terrestrial systems that depend on it (Illustration: Tomonori Sato).

Japanese environmental scientist Sato develops a tagged moisture transport model that predicts more snowfall

A new model shows that water evaporating from the Arctic Ocean, driven by a warming climate, is transported south and can lead to increased snowfall in northern Eurasia in late autumn and early winter. This information will allow for more accurate predictions of severe weather events.

Rising air temperatures due to global warming melt glaciers and polar ice caps. Seemingly paradoxically, snow cover in some areas of northern Eurasia has increased over the past decades. The paradox dissolves once snow is recognized as a form of water: global warming increases the quantity of moisture in the atmosphere, and thus the quantity and likelihood of rain and snow. Understanding where exactly this moisture comes from, how it is produced, and how it is transported south is essential for better predictions of extreme weather and of how the climate will evolve.

Hokkaido University environmental scientist Tomonori Sato and his team developed a new tagged moisture transport model that relies on the “Japanese 55-year reanalysis dataset”, a painstaking reanalysis of worldwide historical weather data over the past 55 years. The group used this material to keep their model calibrated over much longer distances than hitherto possible and were thus able to shed light on the mechanism of moisture transport in particular over the vast landmasses of Siberia. 

A standard technique for analyzing moisture transport is the "tagged moisture transport model": a supercomputer-based simulation that tracks where parcels of atmospheric moisture form, how they move around, and where local climatic conditions make them precipitate. However, such models become increasingly inaccurate as the distance from the ocean grows, which makes quantitative predictions difficult in particular. As a result, these methods have not been able to satisfactorily explain the snowfall in northern Eurasia.
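The tagging idea can be illustrated with a deliberately simplified one-dimensional sketch. Everything below (the grid, the wind, the rates) is hypothetical and is not the authors' model: moisture evaporated over "ocean" cells is labeled, advected downwind, and rained or snowed out, so that each unit of precipitation can be traced back to its source region.

```python
# Toy 1-D tagged moisture transport (illustrative only, not the authors' model).
# Moisture evaporated over the northernmost "ocean" cells is tagged; as the
# wind shifts each air column one cell "south" per step, precipitation in
# every cell is split into a tagged (ocean-sourced) and an untagged share.

def run_tagged_transport(n_cells=10, n_steps=20, evap=1.0, precip_frac=0.2):
    total = [5.0] * n_cells       # background (untagged) column moisture
    tagged = [0.0] * n_cells      # moisture evaporated over the "ocean" cells
    attributed = [0.0] * n_cells  # precipitation traced back to the ocean

    for _ in range(n_steps):
        # 1) Evaporation over the two northernmost cells adds tagged moisture.
        for i in (0, 1):
            total[i] += evap
            tagged[i] += evap
        # 2) Advection: the wind shifts every column one cell "south".
        total = [0.0] + total[:-1]
        tagged = [0.0] + tagged[:-1]
        # 3) Precipitation removes a fixed fraction of each column; the
        #    tagged share of that fraction is credited to the ocean source.
        for i in range(n_cells):
            p = precip_frac * total[i]
            share = tagged[i] / total[i] if total[i] > 0 else 0.0
            attributed[i] += p * share
            total[i] -= p
            tagged[i] -= p * share
    return attributed

attributed = run_tagged_transport()
# Cells far to the "south" now carry snowfall attributed to ocean evaporation.
```

Real tagged transport models do this in three dimensions on reanalysis winds, which is why calibration against long-distance observations matters so much.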

The results of the study, published in the journal npj Climate and Atmospheric Science, show that water evaporation from the Arctic Ocean has increased over the past four decades, with the biggest changes occurring between October and December over the Barents and Kara Seas north of western Siberia and over the Chukchi and East Siberian Seas north of eastern Siberia. At this time of year, the Arctic Ocean is still warm and the ice-free area is still large. Importantly, these regions coincide with the areas where sea ice retreat has been strongest over the time frame of the study. The quantitative model also shows that evaporation and snowfall peak during particular weather events, such as cyclonic systems that take up unusually large quantities of moisture and transport them south into Siberia, providing detailed mechanistic insight into the weather dynamics of the region.

With the Arctic Ocean roughly twice as sensitive to warming as the global average, evaporation and the resulting changes to the hydrological cycle over northern Eurasia will become even more pronounced in the years to come. The researchers note that since snow cover often delays the downstream effects of the abnormal weather events that cause it, “knowledge of the precursor signal stored as a snow cover anomaly is expected to help improve seasonal predictions of abnormal weather, e.g., the potential for heatwaves that enhance the risk of fire in boreal forests.” The study therefore provides a key element for understanding the mechanism of this weather system, and of others influenced by it, and thus for making better predictions of severe events that could harm people and infrastructure.

This study was supported by the Japan Society for the Promotion of Science KAKENHI (JP19H05668); the Arctic Challenge for Sustainability II (ArCSII) project (JPMXD1420318865); and the Japan Science and Technology Agency (JST) SICORP (JPMJSC1902).

The Zwicky Transient Facility scans the sky using a state-of-the-art wide-field camera mounted on the Samuel Oschin telescope at the Palomar Observatory in Southern California. Image credit: Palomar Observatory/Caltech

Fremling's SNIascore identifies 1000 supernovae

Today’s astronomical facilities scan the night sky deeper and faster than ever before, and identifying and classifying the known and potentially interesting cosmic events they detect has become impossible for any single astronomer, or even a team of them. Increasingly, therefore, astronomers train computers to do the work for them. Astronomers from the Zwicky Transient Facility (ZTF) collaboration at Caltech have announced that their machine-learning algorithm has now classified and reported 1000 supernovae completely autonomously.

“We needed a helping hand and we knew that once we train our computers to do the job, they would take a big load off our backs”, says Christoffer Fremling, a staff astronomer at Caltech and the mastermind behind the new algorithm, dubbed SNIascore. “SNIascore classified its first supernova in April 2021 and a year and a half later we are hitting a nice milestone of 1000 supernovae without any human involvement.” 


Many of the current and most exciting scientific questions that astronomers are trying to answer require them to collect large samples of different cosmic events. As a result, modern astronomical observatories have become relentless data-generating machines that throw tens of thousands of alerts and images at astronomers every night. This is particularly true in the field of time-domain astronomy, in which researchers look for fast-changing objects, or transients, such as exploding and dying stars known as supernovae, black holes eating orbiting stars, asteroids, and more.

“The traditional notion of an astronomer sitting at the observatory and sieving through telescope images carries a lot of romanticism but is drifting away from reality,” says Matthew Graham, the ZTF project scientist at Caltech.

Apart from freeing up time for astronomers to pursue other science questions, the machine-learning algorithm is much faster at classifying potential supernova candidates and sharing the results with the astronomical community. With SNIascore, the process shrinks from two to three days to about 10 minutes, essentially near real-time. Such early identification of cosmic explosions is often critical to studying their physics.

“SNIascore sits on top of other underlying machine learning algorithms and layers that we have developed for ZTF, and it demonstrates well how machine learning applications are coming of age in near real-time astronomy,” says Ashish Mahabal, a computational scientist at Caltech’s Center for Data-Driven Discovery who leads machine learning activities for ZTF.

For now, SNIascore can only classify what are known as Type Ia supernovae, the “standard candles” astronomers use to measure the universe's expansion rate. These are dying stars that go off in thermonuclear explosions of consistent strength. Fremling and colleagues are working hard to extend the algorithm to classify other types of supernovae in the near future.

SNIascore is currently adapted to work with the SEDM spectrograph (Spectral Energy Distribution Machine), housed in a dome just a few hundred feet away from the ZTF camera at the Palomar Observatory. ZTF scans the sky continuously and sends hundreds of thousands of alerts about potential cosmic transients to astronomers around the world every night. The SEDM spectrograph is triggered to follow up on the most promising ones, producing a spectrum of the cosmic event that records the intensity of the light at various frequencies. This spectrum is what can tell astronomers definitively what kind of event is being observed. Using clever machine learning techniques, Fremling's team has trained SNIascore to read the SEDM spectra remarkably well.
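To illustrate how a spectrum can drive classification, here is a toy nearest-template matcher. SNIascore itself is a machine-learning model trained on real SEDM spectra; the matcher, the five-bin "spectra," and the labels below are all invented for this sketch.

```python
import math

# Toy spectrum classifier: compare an observed spectrum to labeled templates
# by cosine similarity and return the best-matching label. Illustrative only;
# SNIascore uses a trained machine-learning model, not template matching.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(spectrum, templates):
    """Return the label of the template most similar to `spectrum`."""
    return max(templates, key=lambda label: cosine_similarity(spectrum, templates[label]))

# Made-up 5-bin "spectra": the deep dip in bin 2 stands in for the silicon
# absorption feature that marks a Type Ia supernova.
templates = {
    "SN Ia": [1.0, 0.9, 0.2, 0.8, 1.0],
    "SN II": [1.0, 0.8, 0.9, 0.7, 0.6],
}
observed = [0.9, 0.95, 0.25, 0.75, 1.0]
print(classify(observed, templates))  # → SN Ia
```

The real classifier must also cope with noise, redshift, and host-galaxy contamination, which is why a trained model outperforms simple matching.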

“SNIascore is incredibly accurate. After 1000 supernovae, we have seen how the algorithm performs in the ‘real world’ and we have had no clear misclassification since launching back in April 2021. This gives us the confidence to go ahead and implement the same algorithm with other observing facilities,” added Fremling.

He and colleagues are currently adapting SNIascore to work with the upcoming SEDMv2 spectrograph mounted on the 2.1-meter telescope at Kitt Peak National Observatory in Arizona. SEDMv2 will be an advanced version of SEDM and will allow fainter supernovae to be detected and classified. Currently, SNIascore classifies on average two supernovae every night; with SEDMv2, this number could potentially double.

The advantages of SNIascore go beyond quickly and reliably building large datasets of supernovae. Astronomers looking for other transient events can now quickly rule out candidates that SNIascore classifies as supernovae, so that no telescope time is wasted following them up when the search is for other types of cosmic explosions.

Other efforts to classify transient events also use machine learning, but they rely only on the so-called “light curve” of the event, the amount of light the telescope sees as it evolves over time. SNIascore has the advantage of being trained on, and using, spectroscopic information, the only robust way to confirm the nature of most transients. The algorithm is open source, and other groups can adapt it to their own telescope facilities.

“The most challenging part in implementing SNIascore was to train the algorithm. It required that humans carefully check images and build an impeccable training dataset. After 1000 automatically classified supernovae, looking back I think it was entirely worth the effort,” says Fremling.

SNIascore was developed as part of ZTF’s Bright Transient Survey (BTS), currently the largest supernova survey available to the astronomical community. The entire BTS dataset contains close to 7000 supernovae, 90% of which were discovered and classified by ZTF; the remaining 10% were contributed by other groups and facilities.

“Our ambition is to continue to grow the BTS dataset with the help of SNIascore, building the most comprehensive sample of supernovae that astronomers can use to answer fundamental questions in cosmology, such as how fast the universe is expanding, and potentially to map the dark matter distribution and the large-scale structure of the universe,” added Fremling.

Northeastern University mechanical engineers make a new form of silicon that could revolutionize semiconductor industry

Ph.D. student Jianlin Li works on the catalyst-free etching of sub-5 nm silicon nanowires in the Egan Research Center. Photo by Matthew Modoono/Northeastern University

After a 10-year research study that started by accident and was met with skepticism, a team of Northeastern University mechanical engineers was able to synthesize highly dense, ultra-narrow silicon nanowires that could revolutionize the semiconductor industry.

Yung Joon Jung, Northeastern professor of mechanical and industrial engineering, says it might have been his favorite research project.

“Everything is new, and it required a lot of perseverance,” says Jung, who specializes in engineering and application of nanostructure systems and previously studied carbon nanotubes.

Jung and his collaborators, including another Northeastern professor of mechanical engineering, Moneesh Upmanyu, have achieved a major advancement in nanowire synthesis by discovering a new, highly dense form of silicon and mastering a new, scalable catalyst-free etching process to produce ultra-small silicon nanowires of two to five nanometers in diameter.

About 10 years ago, students brought Jung's attention to an unusual result of an experiment they were conducting using silicon wafers. The material he saw under an electron microscope was different from the one they intended to produce, Jung says.

He decided to find out more about this substance and discovered that it was silicon with "a very, very tiny" wire-like nanostructure, Jung says. They were able to reproduce the new material, he says, but when they tried to improve the synthesis process the nanowires didn't grow.

The scientist and his team had to rewind and study, from the beginning, the synthesis mechanism and the material's atomic-scale structure and properties. Jung, an experimentalist, decided to enlist Upmanyu, who uses theory, supercomputer modeling, and simulation to understand materials and explain experiments.

"I always need help from Moneesh to understand what is happening," Jung says.

In a research paper, Mechanical and Industrial Engineering Professors Moneesh Upmanyu, left, and Yung Joon Jung describe a novel process of producing highly dense and vertically aligned ultra-narrow silicon nanowires via a catalyst-free chemical vapor etching process. Photo by Matthew Modoono/Northeastern University

The scientists thought that maybe the substance resulting from silicon wafers during synthesis was not silicon at all. The material had a highly compressed structure, reduced by 10% to 20% compared to regular silicon, which normally is not stable in such a compressed state, Upmanyu says.

Some of their colleagues and research reviewers agreed. "They would say, 'This shouldn't be silicon' or 'This shouldn't occur with silicon,'" Jung says.

Through the computational analysis and modeling, Upmanyu was able to show that, despite unusual properties, the new material was a form of silicon with a very thin layer of oxide on top, which probably helps sustain the compression, he says.

"This material is very promising," he says. "That compression, I feel, is at the heart of all the interesting properties you see."

One of the reasons silicon is widely used as a semiconductor in microelectronics such as computer chips, integrated circuits, transistors, silicon diodes, and liquid crystal displays is that it is cheap and abundant, Upmanyu says. It is the second most abundant element in the Earth's crust after oxygen, but it does not occur in its pure, uncombined state in nature. It can be found in sand, quartz, flint, granite, mica, and clay, among other rocks and minerals.

In the 1970s, the thriving silicon computer chip industry even gave a new name to the southern region of the San Francisco Bay—"Silicon Valley"— which was popularized by Don Hoefler, an Electronic News Magazine reporter.

However, traditional silicon cannot withstand high temperatures and is therefore limited to lower-power applications. It has a bandgap of 1.11 electronvolts (eV); the bandgap determines the energy needed to make the electrons in a semiconducting material conduct electricity when stimulated by an external source.

The new material has an ultra-wide bandgap of 4.16 eV—a world record, Jung says. The ultra-wide bandgap implies that the material needs larger stimuli to conduct electricity, but can operate at high power, high temperature, and high frequencies. Silicon nanowires produced from this new material will be suitable for power electronics, transistors, diodes, and LED devices, Jung says.
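These bandgap values translate directly into photon wavelengths via the standard relation E = hc/λ, i.e. λ(nm) ≈ 1239.84 / E(eV). A quick calculation shows why conventional silicon responds to near-infrared light while the new material operates in the ultraviolet:

```python
# Convert a bandgap energy to the corresponding photon wavelength,
# using lambda (nm) = h*c / E, with h*c ≈ 1239.84 eV·nm.
EV_NM = 1239.84  # h*c expressed in eV·nm

def bandgap_wavelength_nm(e_gap_ev):
    """Wavelength (nm) of a photon whose energy equals the bandgap."""
    return EV_NM / e_gap_ev

print(round(bandgap_wavelength_nm(1.11)))  # conventional silicon → 1117 nm (near-infrared)
print(round(bandgap_wavelength_nm(4.16)))  # the new material    →  298 nm (ultraviolet)
```

This is consistent with the blue-to-ultraviolet light emission and the UV solar-cell applications described below.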

Unlike regular silicon, the new material is highly resistant to oxidation. It is also photoluminescent, able to emit blue and purple light, which can be used for ultraviolet lighting and in blue light-emitting diodes.

Jung and his research team have also created a new method of producing silicon nanowires, called chemical vapor etching, which removes material instead of growing crystals. As a result, they can make nanowires that are 10 to 20 times smaller than the silicon nanowires currently used commercially.

Previously known nanowire synthesis processes use catalyst particles to grow silicon crystals.

"The catalyst-free aspect cannot be overstated, as it eliminates the need to remove the catalyst after synthesis, which invariably degrades the functional properties of the nanowires," Jung says.

Sometimes, catalyst particles become part of the nanowire surface, he says, and their removal is almost impossible.

At this point, scientists can reproduce nanowires with controlled lengths of up to 100 microns.

"I feel a broad impact going forward," Upmanyu says. "This chemical vapor etching method that he [Jung] has pioneered, is going to be useful for a host of other materials … You can think of not just electronic applications, but any application where you want to have a small-size dimension of material made. … It is very powerful."

He says that the new silicon material should be very attractive to the semiconductor industry. It can be used in military radios and radars, and in photovoltaics such as solar cells. The bandgap of regular silicon does not allow it to process ultraviolet light and use it to generate electricity, Upmanyu says.

"So, if you have a wide-bandgap material, which is cheap, abundant, like silicon, now you can have very high-efficiency solar cells," he says.

It can even be used for harvesting solar energy underwater. Water absorbs the red and infrared spectrum, Upmanyu says, so solar cells that can harvest blue and ultraviolet light become crucial.

The new silicon nanowires can improve lithium-ion batteries, Jung says. Adding small amounts of select elements such as phosphorus or nitrogen (a technique called doping) can lead to other interesting properties and enable other applications, Upmanyu says.

He believes that various interesting quantum phenomena can be manipulated in these silicon nanowires due to their very small size, making the material promising for quantum information processing and perhaps even quantum computing.

Several other engineering institutions around the globe contributed to this research.

The research is not over. Scientists are still interested in understanding better all the chemistry behind the process and figuring out why the compression of this form of silicon is so stable. They want to optimize the etching process to produce a smoother surface and further scale it up for industrial application.

"You want to be able to understand the process so that you can manipulate it to what you want to do," Upmanyu says.

They will also be looking for collaborators interested in making devices based on this new silicon material.

"You want a new form of something you made to be adopted as widely as possible. I think commercialization and device integration is the key here," Upmanyu says.