Chinese researchers use CFD to simulate SARS-CoV-2 transmission and infection on airline flights

A study published in Indoor Air simulated the transmission of SARS-CoV-2, the virus that causes COVID-19, on a flight from London to Hanoi and another flight from Singapore to Hangzhou. Image credit: Dr. Lai

Simulating the dispersion of differently sized droplets generated when an infected person coughs, talks, and breathes in an airline cabin, the researchers found that the SARS-CoV-2 virus carried in those droplets was transported by the cabin air distribution and inhaled by other passengers.

The scientists counted the number of viral copies inhaled by each passenger and used that dose to determine whether the passenger would become infected. Their method correctly predicted 84% of the infected/uninfected cases on the first flight. The team also found that wearing masks and reducing conversation frequency between passengers could help reduce the risk of exposure on the second flight.
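
The paper’s dose-based criterion is not reproduced here, but the general idea can be sketched with a standard exponential dose-response curve. In the hypothetical Python snippet below, the function names, the parameter k, and the decision threshold are illustrative assumptions, not values from the study:

    import math

    def infection_probability(inhaled_copies, k=410.0):
        """Exponential dose-response: P = 1 - exp(-dose / k).

        `inhaled_copies` is the viral dose a passenger inhales over the
        flight; `k` is an illustrative dose-response parameter (the dose
        giving roughly 63% infection probability), not a study value.
        """
        return 1.0 - math.exp(-inhaled_copies / k)

    def classify(inhaled_copies, threshold=0.5):
        """Label a passenger infected if the predicted probability
        crosses a chosen decision threshold."""
        return infection_probability(inhaled_copies) >= threshold

    # Comparing predicted labels against observed outcomes for every
    # passenger yields an accuracy figure analogous to the reported 84%.
    print([classify(dose) for dose in (1200.0, 35.0, 510.0, 8.0)])
    # -> [True, False, True, False]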

“We are very pleased to see that our model validated by experimental data can achieve such a high accuracy in predicting COVID-19 transmission in airliner cabins,” said corresponding author Dayi Lai, Ph.D., Associate Professor and Associate Head of the Department of Architecture, School of Design of Shanghai Jiao Tong University, in China. “Also, it’s important to know that wearing masks makes a significant impact on reducing the transmission.”

NIH funds CUNY SPH, West Point AI center for precision nutrition, health

The center will develop and use new computational, data science, and tech approaches to advance precision nutrition, improve health and reduce chronic diseases

The National Institutes of Health (NIH) has awarded the CUNY Graduate School of Public Health and Health Policy (CUNY SPH) and the United States Military Academy at West Point an estimated $8.1 million over five years, pending available funds, to establish the world’s first artificial intelligence (AI) and computational modeling center for precision nutrition and health. Precision nutrition is an emerging area aimed at better tailoring diets to different people’s characteristics and circumstances to achieve better health outcomes. This award is part of the Nutrition for Precision Health, powered by the All of Us Research Program (NPH) initiative, a $170 million NIH-wide effort, and the first independent study that will recruit a diverse pool of participants from All of Us to inform more personalized nutrition recommendations. The NPH and the center are part of the NIH’s Common Fund, a special program aimed at catalyzing multiple biomedical disciplines.

The center will develop state-of-the-art AI, machine learning (ML), Big Data methods, and other data science approaches to better understand and improve diet and nutrition. This will include new ways to better understand how individuals have different dietary needs and avoid potential biases and disparities that may result from various nutrition recommendations. The center will be co-led by two world-renowned AI and computational modeling experts, Bruce Y. Lee, MD, MBA, professor of health policy and management at CUNY SPH and executive director of PHICOR (Public Health Informatics, Computational, and Operations Research) and Diana M. Thomas, Ph.D., professor and research chair of mathematics at West Point. 

“As the nation’s leading urban public university, CUNY is proud to help drive cutting-edge research, in partnership with West Point, that aims to bring more equity to nutrition and health approaches,” said CUNY Chancellor Félix V. Matos Rodríguez. “The Nutrition for Precision Health initiative will leverage the expertise of CUNY SPH as well as the University’s great diversity, reach, and dedication to social justice. With their renowned work in artificial intelligence and computational modeling, Drs. Lee and Thomas are the ideal scholars to lead this ambitious new center.”

“Our society is at a key inflection point,” says Lee. “We now have much more data and technology available to guide diet and nutrition in ways that have not been previously done. This could greatly improve the health of people around the world; however, if not done correctly, it could worsen health outcomes and deepen health disparities.”

“This is the first time that leading experts in data science, statistics, and systems modeling will collaborate with the top nutrition clinical and nutrition research centers in the U.S.,” says Thomas. “The effort is unique and extremely timely as we can combine new AI approaches with unprecedented levels of computing power to develop algorithms for personalized nutrition guidance.”

The center will include multiple major projects, including one led by AI expert Samantha Kleinberg, Ph.D., associate professor at Stevens Institute of Technology, and another led by social network expert Kayla de la Haye, Ph.D., associate professor at the University of Southern California. “AI and ML have helped advance many areas of health research, but we haven’t seen the benefits in nutrition because we’ve lacked the key ingredient: large scale, high quality, data on diverse populations,” says Kleinberg. “This award grants us the opportunity to finally go beyond correlations to learn not just what factors are related to health outcomes but how diet causes them and for whom.”

The center will develop data science approaches and technology to address the whole complex system of factors that affect nutrition and health, ranging from genetics to metabolism to behavior to a person’s social and physical environment. “Studies have shown that diet and nutrition can be affected by the people and things around you and your circumstances,” says de la Haye.

The connection between CUNY SPH and West Point is unique. For the first time, the two public academic institutions will pool their technologies and resources to reduce health inequities and advance precision nutrition and health.

RIT scientists confirm a highly eccentric black hole merger for the first time

For the first time, scientists believe they have detected a merger of two black holes with eccentric orbits. According to researchers from Rochester Institute of Technology’s Center for Computational Relativity and Gravitation (CCRG) and the University of Florida, this can help explain how some of the black hole mergers detected by the LIGO Scientific Collaboration and the Virgo Collaboration are much heavier than previously thought possible.

Artist’s impression of binary black holes about to collide. Image credit: Mark Myers, ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav)

Eccentric orbits are a sign that black holes could be repeatedly gobbling up others during chance encounters in areas densely populated with black holes such as galactic nuclei. The scientists studied the most massive gravitational wave binary observed to date, GW190521, to determine if the merger had eccentric orbits.

“The estimated masses of the black holes are more than 70 times the mass of our sun each, placing them well above the maximum mass currently predicted by stellar evolution theory,” said Carlos Lousto, a professor in the School of Mathematical Sciences and a member of the CCRG. “This makes an interesting case to study as a second-generation binary black hole system and opens up new possibilities for formation scenarios of black holes in dense star clusters.”

A team of RIT researchers including Lousto, Research Associate James Healy, Jacob Lange ’20 Ph.D. (astrophysical sciences and technology), Professor and CCRG Director Manuela Campanelli, and Associate Professor Richard O’Shaughnessy, together with collaborators from the University of Florida, took a fresh look at the data to see whether the black holes had highly eccentric orbits before they merged. They found that the merger is best explained by a high-eccentricity, precessing model. Reaching this conclusion required hundreds of new full numerical simulations on local and national laboratory supercomputers, which took nearly a year to complete.

“This represents a major advancement in our understanding of how black holes merge,” said Campanelli. “Through our sophisticated supercomputer simulations and the wealth of new data provided by LIGO and Virgo’s rapidly advancing detectors, we are making new discoveries about the universe at astonishing rates.”

An extension of this analysis by the same RIT and UFL team used a possible electromagnetic counterpart observed by the Zwicky Transient Facility to independently compute the cosmological Hubble constant, treating GW190521 as an eccentric binary black hole merger. They found excellent agreement with the expected values and recently published the work in the Astrophysical Journal.
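
For context, the standard-siren idea behind such a measurement is that the gravitational-wave signal yields a luminosity distance while the electromagnetic counterpart supplies a redshift; at low redshift the two combine roughly as in the textbook relation below, a simplification rather than the team’s full cosmological analysis:

    % Textbook low-redshift standard-siren relation (illustrative only):
    %   d_L = luminosity distance inferred from the gravitational-wave signal
    %   z   = redshift of the candidate electromagnetic counterpart (ZTF)
    H_0 \approx \frac{c\, z}{d_L}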

Canada's new simulations can improve avalanche forecasting

Supercomputer simulations of snow cover can accurately forecast avalanche hazards, according to a new international study involving researchers from Simon Fraser University.

Currently, avalanche forecasts in Canada are made by experienced professionals who rely on data from local weather stations and on-the-ground observations from ski and backcountry ski operators, avalanche control workers for transportation and industry, and volunteers who manually test the snowpack.

But simulated snow cover models developed by a team of researchers can detect and track weak layers of snow and identify avalanche hazards in a completely different way—and can provide forecasters with another reliable tool when local data is insufficient or not available, according to a new study published in the journal Cold Regions Science and Technology.

“As far as natural hazards go, avalanches are still one of the leading causes of fatalities in Canada,” says Simon Horton, a post-doctoral fellow with the SFU Centre for Natural Hazards Research and a forecaster with Avalanche Canada.

“We’ve had these complex models that simulate the layers in the snowpack for a few decades now and they’re getting more and more accurate, but it’s been difficult to find out how to apply that to actual decision-making and improving safety.”

Researchers took 16 years’ worth of daily meteorological, snow cover, and avalanche data from two sites in Canada (Whistler and Rogers Pass, both in British Columbia) and Weissfluhjoch in Davos, Switzerland, and ran supercomputer simulations that could classify different avalanche situations.

The simulations could determine avalanche risk, for either natural or artificial release, for problem types such as new snow, wind slab, persistent weak layers, and wet snow conditions. 
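
The study’s actual decision rules are built on full snow-cover simulations, but a toy sketch conveys the flavor of mapping simulated layers to the problem types named above. Every field name and threshold in this hypothetical Python example is an illustrative assumption, not part of the published classifier:

    from dataclasses import dataclass

    @dataclass
    class SnowLayer:
        """One simulated snowpack layer (illustrative fields only)."""
        age_days: float         # time since the layer was deposited
        stability_index: float  # lower = weaker (hypothetical scale)
        is_wet: bool
        wind_deposited: bool

    def classify_problem(layers):
        """Map a simulated snow profile to an avalanche problem type.

        A toy rule set for illustration; the study derives problem types
        from full simulated snow-cover output, not rules like these.
        """
        weak = [l for l in layers if l.stability_index < 1.0]
        if any(l.is_wet for l in weak):
            return "wet snow"
        if any(l.age_days > 7 for l in weak):
            return "persistent weak layer"
        if any(l.wind_deposited for l in weak):
            return "wind slab"
        if any(l.age_days <= 1 for l in weak):
            return "new snow"
        return "no notable problem"

    profile = [SnowLayer(0.5, 0.8, False, False), SnowLayer(12, 0.6, False, False)]
    print(classify_problem(profile))  # -> "persistent weak layer"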

“In the avalanche forecasting world, describing avalanche problems – the common scenarios that you might expect to find – is a good way for forecasters to describe avalanche hazard and communicate it to the public, so they know what kind of conditions to expect when they head out,” says Horton. “So that information is already available, except those assessments are all done by experts based on what they know from available field observations. In a lot of situations, there’s a fair bit of uncertainty about the human assessment of what these types of avalanche problems will be.

“That’s where having more automated tools that can help predict potential hazards can help forecasters better prepare an accurate, precise forecast.”

The results of the study showed the modeling was consistent with the real observed frequencies of avalanches over those 16 years and that the approach has the potential to support avalanche forecasting in the future.

Researchers also believe the modeling might be useful to study the future impacts of climate change on snow instability.

RIKEN scientists achieve key elements for fault-tolerant quantum computation in silicon spin qubits

Researchers from RIKEN and QuTech have achieved a key milestone toward the development of a fault-tolerant quantum computer. They demonstrated a two-qubit gate fidelity of 99.5 percent, higher than the 99 percent considered to be the threshold for building fault-tolerant computers, using electron spin qubits in silicon. Such qubits are promising for large-scale quantum computers because the nanofabrication technology for building them already exists.

The world is currently in a race to develop large-scale quantum computers that could vastly outperform classical computers in certain areas. These efforts have been hindered by many factors, in particular the problem of decoherence, or noise generated in the qubits. The problem becomes more serious as the number of qubits grows, hampering efforts to scale up. It is believed that a two-qubit gate fidelity of at least 99 percent is required to implement the surface code for error correction in a large-scale computer useful for practical applications. This has been achieved in certain types of computers, using qubits based on superconducting circuits, trapped ions, and nitrogen-vacancy centers in diamond, but these platforms are hard to scale up to the millions of qubits required for practical, error-corrected quantum computation.
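
For context, the roughly 99 percent figure reflects the behavior of the surface code: once the physical error rate p of the gates drops below the code’s threshold p_th of about 1 percent, the logical error rate p_L can be suppressed by increasing the code distance d. The standard textbook scaling, not a result from the RIKEN paper, is:

    % Surface-code error suppression below threshold (A is a constant of order one):
    p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{(d+1)/2}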

For the current work, the group experimented with a controlled-NOT (CNOT) gate in a quantum dot structure fabricated by nanofabrication on a strained silicon/silicon-germanium quantum well substrate. In previous experiments, gate fidelity had been limited by slow gate speed. To improve the gate speed, the researchers carefully designed the device and tuned its operating condition, via voltages applied to the gate electrodes, to combine an established fast single-spin rotation technique using micromagnets with a large two-qubit coupling. This allowed them to increase the gate speed by a factor of 10 compared with previous work. Interestingly, it had previously been believed that increasing gate speed would always lead to better fidelity, but the team found there was a limit beyond which further increases in speed made the fidelity worse.

Through the work, they discovered that a property called the Rabi frequency—a marker of how the qubits change states in response to an oscillating field—is key to the performance of the system, and they found a range of frequencies for which the single-qubit gate fidelity was 99.8 percent and the two-qubit gate fidelity was 99.5 percent, clearing the required threshold.
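
As background, the Rabi frequency sets how quickly a resonantly driven spin flips between its two states. In the idealized textbook picture, which is a simplification and not the paper’s fitted noise model, the flip probability and the duration of a full spin flip (a pi pulse) are:

    % Idealized resonant Rabi oscillation (textbook form):
    P_{\uparrow \to \downarrow}(t) = \sin^{2}\!\left(\pi f_{\mathrm{Rabi}}\, t\right),
    \qquad t_{\pi} = \frac{1}{2\, f_{\mathrm{Rabi}}}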

Through this, they demonstrated universal operations: the basic single-qubit and two-qubit operations from which any quantum computation can be built could all be performed with gate fidelities above the error-correction threshold.

To test the capability of the new system, the researchers implemented a two-qubit Deutsch-Jozsa algorithm and the Grover search algorithm. Both algorithms output correct results with high fidelities of 96 to 97 percent, demonstrating that silicon quantum computers can perform quantum calculations with high accuracy.
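
The two-qubit Deutsch-Jozsa algorithm is small enough to sketch as idealized matrix algebra. The NumPy toy below shows only the logical circuit, one input qubit plus one ancilla and a single oracle query, not the device-level pulse sequences used in the experiment:

    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    I = np.eye(2)

    def oracle(f):
        """Oracle U_f |x, y> = |x, y XOR f(x)> for a one-bit function f."""
        U = np.zeros((4, 4))
        for x in (0, 1):
            for y in (0, 1):
                U[2 * x + (y ^ f(x)), 2 * x + y] = 1
        return U

    def deutsch_jozsa(f):
        """Return 'constant' or 'balanced' for a one-bit function f."""
        state = np.kron([1, 0], [0, 1]).astype(float)  # start in |0>|1>
        state = np.kron(H, H) @ state                  # Hadamards on both qubits
        state = oracle(f) @ state                      # single oracle query
        state = np.kron(H, I) @ state                  # Hadamard on the input qubit
        p_one = state[2] ** 2 + state[3] ** 2          # P(input qubit reads 1)
        return "balanced" if p_one > 0.5 else "constant"

    print(deutsch_jozsa(lambda x: 0))  # constant function -> "constant"
    print(deutsch_jozsa(lambda x: x))  # balanced function -> "balanced"

Measuring the input qubit as 0 flags a constant function and 1 a balanced one after a single oracle call; on the actual device, each gate in such a circuit is realized with the calibrated single- and two-qubit operations described above.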

Akito Noiri, the first author of the study, says, “We are very happy to have achieved a high-fidelity universal quantum gate set, one of the key challenges for silicon quantum computers.”

Seigo Tarucha, leader of the research group, said, “The presented result makes spin qubits, for the first time, competitive against superconducting circuits and ion traps in terms of universal quantum control performance. This study demonstrates that silicon quantum computers are promising candidates, along with superconducting circuits and ion traps, for research and development toward the realization of large-scale quantum computers.”