German, Chinese team develops a new approach for efficient data processing

A new approach to IT technologies: The use of spin waves as information carriers may one day allow supercomputers to compute more efficiently and reliably. This is the conclusion reached by a team of researchers from Martin Luther University Halle-Wittenberg (MLU) and Lanzhou University in China, based on simulations of spin waves in structured materials. The study was published in the academic journal "npj Computational Materials".

Logic operations, including addition, multiplication, and subtraction, form the basis of all computing applications in devices such as smartphones and computers. "Without these operations, we would be unable to make use of the stored data," explains Professor Jamal Berakdar from the Institute of Physics at MLU. Until now, logic operations have been carried out with the help of charge currents. However, this technology has some disadvantages: "Moving charges react sensitively to external electric or magnetic fields. There is also an increase in energy dissipation - the smaller the devices become, the more the material heats up," says Berakdar. Researchers have therefore been looking for alternative approaches. One of them, magnonics, attempts to utilize so-called magnons: waves that can be generated in magnetic materials and are therefore also called spin waves. They are created through the oscillation of magnetization and can be used as signals in data processing. Unlike charge-current-based logic operations, magnon-based information processing does not require electrons to travel, so less energy is consumed.

Figure: (a) Twisted magnon beams propagating along a cylindrical antiferromagnetic (AFM) waveguide with two magnetic sublattices A (up arrows) and B (down arrows). (b) Degenerate right- and left-handed spin-wave modes, $\left|L\right\rangle$ and $\left|R\right\rangle$, dominated by sublattices A and B, respectively. The small magnetization (green arrows) is defined as $\mathbf{m} = (\mathbf{S}_A + \mathbf{S}_B)/2S$.
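The idea of computing with waves rather than moving charges can be illustrated with a toy interference model. This is a minimal sketch, not the team's antiferromagnetic simulation: it assumes each bit is encoded in the phase (0 or pi) of a simple sine wave, and reads the output from whether the two waves interfere constructively or destructively.

```python
import numpy as np

def interfere(bit_a, bit_b, n_samples=1000):
    """Encode each bit in the phase (0 or pi) of a unit wave and superpose."""
    t = np.linspace(0.0, 2.0 * np.pi, n_samples)
    wave_a = np.sin(t + bit_a * np.pi)
    wave_b = np.sin(t + bit_b * np.pi)
    combined = wave_a + wave_b
    # Constructive interference (equal phases) doubles the amplitude;
    # destructive interference (opposite phases) cancels it.
    return int(np.max(np.abs(combined)) > 1.0)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a}, {b} -> {interfere(a, b)}")
```

Read as a truth table, this phase-encoding scheme realizes an XNOR gate: the output is 1 exactly when the two input phases agree. Real magnonic logic exploits the same wave nature of spin excitations, but in nanostructured magnetic waveguides rather than abstract sine waves.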

"Until now, research on magnons and spintronics has generally focused on conventional ferromagnets," says Berakdar. However, the team from Germany and China found that nanostructured antiferromagnetic wires are particularly well suited for logical operations. "Antiferromagnets differ fundamentally from ferromagnets. At the microscopic level, they are magnetically ordered, but in such a way that the material as a whole has no magnetization. Therefore, antiferromagnets respond only weakly to electric or magnetic fields," explains Berakdar. The simulations carried out by the physicists show that there are special magnons in nanostructured antiferromagnetic wires that can be used to perform logic operations quickly, reliably, and simultaneously without consuming a lot of energy.

Machine learning uncovers genes of importance in agriculture, medicine

Approach using evolutionary principles identifies genes that enable plants to grow more with less fertilizer

Machine learning can pinpoint “genes of importance” that help crops to grow with less fertilizer. It can also predict additional traits in plants and disease outcomes in animals, illustrating its applications beyond agriculture.

Using genomic data to predict outcomes in agriculture and medicine is both a promise and a challenge for systems biology. Researchers have been working to determine how best to use the vast amount of genomic data available to predict how organisms respond to changes in nutrition, toxins, and pathogen exposure, which in turn would inform crop improvement, disease prognosis, epidemiology, and public health. However, accurately predicting such complex outcomes from genome-scale information remains a significant challenge.

Image: Corn (maize) growing in the NYU Rose Sohn Zegar Greenhouse on the roof of the NYU Center for Genomics & Systems Biology.

In the study, NYU researchers and collaborators in the U.S. and Taiwan tackled this challenge using machine learning, a type of artificial intelligence used to detect patterns in data.

“We show that focusing on genes whose expression patterns are evolutionarily conserved across species enhances our ability to learn and predict ‘genes of importance’ to growth performance for staple crops, as well as disease outcomes in animals,” explained Gloria Coruzzi, Carroll & Milton Petrie Professor in NYU’s Department of Biology and Center for Genomics and Systems Biology and the paper’s senior author.

“Our approach exploits the natural variation of genome-wide expression and related phenotypes within or across species,” added Chia-Yi Cheng of NYU’s Center for Genomics and Systems Biology and National Taiwan University, the lead author of this study. “We show that paring down our genomic input to genes whose expression patterns are conserved within and across species is a biologically principled way to reduce dimensionality of the genomic data, which significantly improves the ability of our machine learning models to identify which genes are important to a trait.”

As a proof of concept, the researchers demonstrated that focusing on genes whose responsiveness to nitrogen is evolutionarily conserved between two diverse plant species—Arabidopsis, a small flowering plant widely used as a model organism in plant biology, and varieties of corn, America’s largest crop—significantly improved the ability of machine learning models to predict genes of importance for how efficiently plants use nitrogen. Nitrogen is a crucial nutrient for plants and the main component of fertilizer; crops that use nitrogen more efficiently grow better and require less fertilizer, which has economic and environmental benefits.
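The core idea—filter the gene set down to evolutionarily conserved candidates before ranking genes against a trait—can be sketched with synthetic data. This is a simplified stand-in for the study's pipeline: the expression matrix, the trait, and the "conserved" gene list are all made up here, and absolute correlation substitutes for the feature importances of a trained machine learning model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix: 500 plants x 50 genes; the growth trait
# depends only on genes 0-4 (plus a little measurement noise).
X = rng.normal(size=(500, 50))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=500)

# Hypothetical "conservation" filter: keep only genes whose expression
# response is assumed conserved across species (here taken as genes 0-9).
# This is the biologically principled dimensionality reduction step.
conserved = np.arange(10)

# Rank the retained genes by absolute correlation with the trait -- a
# simple stand-in for a trained model's feature importances.
scores = np.array([abs(np.corrcoef(X[:, g], y)[0, 1]) for g in conserved])
top_genes = sorted(conserved[np.argsort(scores)[::-1]][:5].tolist())
print(top_genes)
```

Because the filter discards 40 of the 50 genes before ranking, the model has far fewer spurious features to be misled by—the same rationale the authors give for conditioning their models on conserved genes.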

The researchers conducted experiments that validated eight master transcription factors as genes of importance to nitrogen use efficiency. They showed that altering the expression of these genes in Arabidopsis or corn could increase plant growth in low-nitrogen soils, which they tested both in the lab at NYU and in cornfields at the University of Illinois.

“Now that we can more accurately predict which corn hybrids are better at using nitrogen fertilizer in the field, we can rapidly improve this trait. Increasing nitrogen use efficiency in corn and other crops offers three key benefits: lowering farmers’ costs, reducing environmental pollution, and mitigating greenhouse gas emissions from agriculture,” said study author Stephen Moose, Alexander Professor of Crop Sciences at the University of Illinois at Urbana-Champaign.

Moreover, the researchers proved that this evolutionarily informed machine learning approach can be applied to other traits and species by predicting additional traits in plants, including biomass and yield in both Arabidopsis and corn. They also showed that this approach can predict genes of importance to drought resistance in another staple crop, rice, as well as disease outcomes in animals through studying mouse models.

“Because we showed that our evolutionarily informed pipeline can also be applied in animals, this underlines its potential to uncover genes of importance for any physiological or clinical traits of interest across biology, agriculture, or medicine,” said Coruzzi.

“Many key traits of agronomic or clinical importance are genetically complex and hence it’s difficult to pin down their control and inheritance. Our success proves that big data and systems level thinking can make these notoriously difficult challenges tractable,” said study author Ying Li, faculty in the Department of Horticulture and Landscape Architecture at Purdue University.

NYU Langone doctors use machine learning tool that improves accuracy of breast cancer imaging

A computer program trained to see patterns among thousands of breast ultrasound images can aid physicians in accurately diagnosing breast cancer, a new study shows.

When tested separately on 44,755 already completed ultrasound exams, the artificial intelligence (AI) tool improved radiologists' ability to correctly identify the disease by 37 percent and reduced the number of tissue samples, or biopsies, needed to confirm suspect tumors by 27 percent.

Led by researchers from the Department of Radiology at NYU Langone Health and its Laura and Isaac Perlmutter Cancer Center, the team's AI analysis is believed to be the largest of its kind, involving 288,767 separate ultrasound exams taken from 143,203 women treated at NYU Langone hospitals in New York City between 2012 and 2018.

"Our study demonstrates how artificial intelligence can help radiologists reading breast ultrasound exams flag only those that show real signs of breast cancer, and avoid verification by biopsy in cases that turn out to be benign," says study senior investigator Krzysztof Geras, Ph.D.

Ultrasound exams pass high-frequency sound waves through tissue to construct real-time images of the breast or other organs. Although not generally used as a breast cancer screening tool, ultrasound has served as an alternative to mammography or as a follow-up diagnostic test for many women, says Geras, an assistant professor in the Department of Radiology at NYU Grossman School of Medicine and a member of the Perlmutter Cancer Center.

Ultrasound is cheaper, more widely available in community clinics, and does not involve exposure to radiation, the researchers say. Moreover, ultrasound is better than mammography for penetrating dense breast tissue and distinguishing packed but healthy cells from compact tumors.

However, the technology has also been found to result in too many false diagnoses of breast cancer, producing anxiety and unnecessary procedures for women. Some studies have shown that a majority of breast ultrasound exams indicating signs of cancer turn out to be noncancerous after biopsy.

"If our efforts to use machine learning as a triaging tool for ultrasound studies prove successful, ultrasound could become a more effective tool in breast cancer screening, especially as an alternative to mammography, and for those with dense breast tissue," says study co-investigator and radiologist Linda Moy, MD. "Its future impact on improving women's breast health could be profound," adds Moy, a professor at NYU Grossman School of Medicine and a member of the Perlmutter Cancer Center.

Geras cautions that while the initial results are promising, his team looked only at past exams in this analysis; clinical trials of the tool with current patients under real-world conditions are needed before it can be routinely deployed. He also plans to refine the AI software to include additional patient information, such as the added risk from a family history or a genetic mutation tied to breast cancer, which was not included in the latest analysis.

For the study, more than half of the ultrasound exams in the dataset were used to train the computer program. Ten radiologists then each reviewed a separate set of 663 breast exams, achieving an average diagnostic accuracy of 92 percent. When aided by the AI model, their average accuracy in diagnosing breast cancer improved to 96 percent. All diagnoses were checked against tissue biopsy results.
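Why does pairing a radiologist with a model raise accuracy at all? One common explanation is that the two make partly independent errors, so combining their suspicion scores cancels some of each reader's noise. The toy simulation below illustrates only that statistical effect; the ground truth, error model, and simple score averaging are assumptions for illustration, not NYU Langone's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_exams = 20000

# Synthetic ground truth: 1 = malignant, 0 = benign.
truth = rng.integers(0, 2, size=n_exams)

# Suspicion scores: the radiologist and the model make independent
# errors, so averaging the two scores cancels part of each one's noise.
radiologist = truth + rng.normal(scale=0.6, size=n_exams)
ai_model = truth + rng.normal(scale=0.6, size=n_exams)
fused = (radiologist + ai_model) / 2

def accuracy(scores):
    """Fraction of exams classified correctly at a 0.5 threshold."""
    return float(np.mean((scores > 0.5) == truth))

print(f"radiologist alone: {accuracy(radiologist):.3f}")
print(f"radiologist + AI:  {accuracy(fused):.3f}")
```

In this setup the averaged score reliably beats either reader alone, because the noise standard deviation of the mean of two independent scores shrinks by a factor of sqrt(2). Real reader studies are far more subtle, but the direction of the effect matches the 92-to-96 percent improvement reported above.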

The latest statistics from the American Cancer Society estimate that one in eight women (13 percent) in the U.S. will be diagnosed with breast cancer over their lifetime, with more than 300,000 positive diagnoses in 2021 alone.