Left: An antiferromagnet can function as “parallel electrical circuits” carrying Néel spin currents. Right: A tunnel junction based on the antiferromagnets hosting Néel spin currents can be regarded as “electrical circuits” with the two ferromagnetic tunnel junctions connected in parallel. (Image by SHAO Dingfu)

Chinese physicists discover 'parallel circuits' of spin currents in antiferromagnets

A group of physicists at Hefei Institutes of Physical Science (HFIPS) of the Chinese Academy of Sciences (CAS) has revealed a secret of antiferromagnets that could accelerate spintronics, a next-generation data storage and processing technology aimed at overcoming the bottlenecks of modern digital electronics.

Spintronics is a vigorously developing field that employs the spin of electrons within magnetic materials to encode information. Spin-polarized electric currents play a central role in spintronics because they can manipulate and detect magnetic moment directions for writing and reading 1s and 0s. Currently, most spintronic devices are based on ferromagnets, whose net magnetizations can efficiently spin-polarize electric currents. Antiferromagnets, in which opposite magnetic moments alternate, are less investigated but may promise even faster and smaller spintronic devices. However, antiferromagnets have zero net magnetization and are therefore commonly believed to carry only spin-neutral currents that are useless for spintronics. Because antiferromagnets consist of two antiparallel magnetic sublattices, their transport properties are assumed to be "averaged out" over the sublattices, making them spin-independent.

Prof. SHAO Ding-Fu, who led the team, took a different view. He envisioned that collinear antiferromagnets can function as "electrical circuits" with the two magnetic sublattices connected in parallel. With this simple, intuitive picture in mind, Prof. SHAO and his collaborators theoretically predicted that the magnetic sublattices can polarize the electric current locally, resulting in staggered spin currents hidden within the globally spin-neutral current.
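One way to picture this (a schematic illustration of the parallel-circuit analogy, with made-up conductance symbols rather than anything computed in the study) is to give each magnetic sublattice its own spin-dependent conductance:

```latex
% Two sublattices A and B act as parallel channels. By symmetry, what sublattice
% A does to spin-up electrons, sublattice B does to spin-down electrons:
%   G_A^\uparrow = G_B^\downarrow = G_1, \qquad G_A^\downarrow = G_B^\uparrow = G_2 .
\begin{align*}
  J^{\uparrow}   &= (G_A^{\uparrow} + G_B^{\uparrow})\,V = (G_1 + G_2)\,V, &
  J^{\downarrow} &= (G_A^{\downarrow} + G_B^{\downarrow})\,V = (G_1 + G_2)\,V, \\
  J_s^{\text{total}} &= J^{\uparrow} - J^{\downarrow} = 0, &
  J_s^{A} = -J_s^{B} &= (G_1 - G_2)\,V \neq 0 .
\end{align*}
```

In this toy picture the total current is spin-neutral, yet each sublattice carries an equal and opposite spin current, the staggered "Néel spin current."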

He dubbed these staggered spin currents "Néel spin currents" after Louis Néel, the Nobel laureate who was awarded the prize for his fundamental work and discoveries concerning antiferromagnetism.

Néel spin currents are an intrinsic feature of antiferromagnets that had not previously been recognized. They can generate useful spin-dependent effects that were previously considered incompatible with antiferromagnets, such as spin-transfer torque and tunneling magnetoresistance in antiferromagnetic tunnel junctions, which are crucial for the electrical writing and reading of information in antiferromagnetic spintronics.
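The tunnel-junction picture from the image caption can be made concrete in the same spirit (a hedged, Julliere-style two-channel sketch, not the first-principles result of the study): treat the junction as two sublattice channels in parallel, each behaving like a ferromagnetic tunnel junction.

```latex
% Each sublattice channel has conductance g_P when the Néel vectors of the two
% electrodes are parallel and g_AP when they are antiparallel. With the two
% channels in parallel:
\begin{align*}
  G_{\text{P}}  &= 2\,g_{\text{P}}, \qquad G_{\text{AP}} = 2\,g_{\text{AP}}, \\
  \text{TMR}    &= \frac{G_{\text{P}} - G_{\text{AP}}}{G_{\text{AP}}}
                 = \frac{g_{\text{P}} - g_{\text{AP}}}{g_{\text{AP}}} ,
\end{align*}
```

so reversing the Néel vector of one electrode switches both channels at once, and the junction shows a full magnetoresistance even though neither electrode carries a net magnetization.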

"Our work uncovered a previously unexplored potential of antiferromagnets, and offered a straightforward solution to achieve efficient reading and writing for antiferromagnetic spintronics," said Prof. SHAO Ding-Fu.

The experimental hall where NA61/SHINE is located (Image: CERN)

New measurements from the SHINE experiment help physicists work out the content of neutrino beams used in experiments in the US

At the time of the Big Bang, 13.8 billion years ago, every particle of matter is thought to have been produced together with an antimatter equivalent of opposite electrical charge. But in the present-day Universe, there is much more matter than antimatter. Why this is the case is one of physics’ greatest questions. 

The answer may lie, at least partly, in particles called neutrinos, which lack electrical charge, are almost massless, and change their identity – or “oscillate” – from one of three types to another as they travel through space. If neutrinos oscillated differently from their antimatter equivalents, antineutrinos, they could help explain the matter–antimatter imbalance in the Universe.
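For reference, in the simplified two-flavor picture (a standard textbook formula, not a result from the experiments discussed here), the probability that a neutrino produced as flavor α is detected as flavor β after travelling a distance L with energy E is

```latex
P(\nu_\alpha \to \nu_\beta) \;=\; \sin^2(2\theta)\,
  \sin^2\!\left( \frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right),
```

where θ is the mixing angle and Δm² the mass-squared splitting. A difference between this probability for neutrinos and the corresponding probability for antineutrinos is the kind of CP-violating signal that long-baseline experiments are looking for.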

Experiments across the world, such as the NOvA experiment in the US, are investigating this possibility, as will next-generation experiments such as DUNE. In these long-baseline neutrino-oscillation experiments, a beam of neutrinos is measured after it has traveled a long distance – the long baseline. The experiment is then run with a beam of antineutrinos, and the outcome is compared with that of the neutrino beam to see whether the two kinds of particle oscillate in the same way.

This comparison depends on an estimation of the number of neutrinos in the neutrino and antineutrino beams before they travel. These beams are produced by firing beams of protons onto fixed targets. The interactions with the target create other hadrons, which are focused using magnetic “horns” and directed into long tunnels in which they transform into neutrinos and other particles. But in this multi-step process, it isn’t easy to work out the particle content of the resulting beams – including the number of neutrinos they contain – which depends directly on the proton–target interactions.

Enter the NA61 experiment at CERN, also known as SHINE. Using high-energy proton beams from the Super Proton Synchrotron and appropriate targets, the experiment can recreate the relevant proton–target interactions. NA61/SHINE has previously made measurements of electrically charged hadrons that are produced in the interactions and yield neutrinos. These measurements helped improve estimations of the content of neutrino beams used at existing long-baseline experiments.

The NA61/SHINE collaboration has now released new hadron measurements that will help improve these estimations further. This time around, using a proton beam with an energy of 120 GeV and a carbon target, the collaboration measured three kinds of electrically neutral hadrons that decay into neutrino-yielding charged hadrons.

This 120-GeV proton–carbon interaction is used to produce NOvA’s neutrino beam, and it will probably also be used to create DUNE’s beam. Estimations of the numbers of the different neutrino-yielding neutral hadrons that the interaction produces rely on supercomputer simulations, the output of which varies significantly depending on the underlying physics details.

“Up to now, simulations for neutrino experiments that use this interaction have relied on uncertain extrapolations from older measurements with different energies and target nuclei. This new direct measurement of particle production from 120-GeV protons on carbon reduces the need for these extrapolations,” explains NA61/SHINE deputy spokesperson Eric Zimmerman.

The supercomputer program was developed at the State University of Campinas to include more vegetation diversity in the analysis of climate change impacts  CREDIT Tiago Latesta/Projeto Brasil das Águas

Brazilian-built algorithm CAETÊ aims to project the future of the Amazon Rainforest by predicting changes in carbon capture

The supercomputer program was developed at the State University of Campinas to include more vegetation diversity in the analysis of climate change impacts. 

A group of researchers at the State University of Campinas (UNICAMP), in São Paulo state, Brazil, has developed an algorithm that projects the future of vegetation in the Amazon, presenting scenarios for the transformation of the forest driven by climate change.

One of the results shows that a drier climate in the region, with a 50% drop in precipitation, could increase diversity but lower the level of carbon storage. Storage of carbon dioxide (CO2) in roots would increase, but absorption of CO2 in leaves, stems, and trunks, which have more storage capacity, would decrease. Taking different situations into account, the scientists calculate that carbon absorption could drop by between 57.48% and 57.75% compared with regular climate conditions.

The algorithm, which is the first of its kind designed exclusively for Brazil, is called CAETÊ, which means “virgin forest” in Tupi-Guarani and is an acronym for CArbon and Ecosystem functional Trait Evaluation model. Its first results are described in an article published in the journal Ecological Modelling.

CAETÊ simulates natural phenomena using mathematical equations fed with environmental data such as rainfall, solar radiation, and CO2 levels. It predicts photosynthesis rates under specific conditions, for example, or says which plant parts will store more carbon (roots, leaves, stems, or trunks), calculating carbon storage capacity in a given area and the point at which native vegetation can no longer recover.
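As a rough illustration of how such a trait-based model steps through a scenario (a hypothetical toy sketch in Python; the trait names, parameter values, and equations are invented for illustration and are not taken from CAETÊ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample many plant "life strategies": each is a random combination of traits,
# rather than one of a handful of fixed plant functional types.
n_strategies = 1000
light_use_eff = rng.uniform(0.01, 0.03, n_strategies)      # gC per MJ (illustrative)
allocation = rng.dirichlet([2.0, 2.0, 4.0], n_strategies)   # split to leaves/roots/wood

def annual_carbon_uptake(par, precip_factor, lue):
    """Toy light-use-efficiency photosynthesis, scaled down when rainfall drops."""
    water_limitation = min(1.0, precip_factor)   # crude water-stress term
    return lue * par * water_limitation          # gC per m^2 per year (illustrative)

par = 5000.0  # annual photosynthetically active radiation, MJ per m^2 (illustrative)

for scenario, precip_factor in [("baseline", 1.0), ("-50% precipitation", 0.5)]:
    gpp = np.array([annual_carbon_uptake(par, precip_factor, lue) for lue in light_use_eff])
    pools = gpp[:, None] * allocation            # carbon sent to leaves, roots, wood
    print(scenario,
          "| mean uptake:", round(gpp.mean(), 1),
          "| mean leaf/root/wood pools:", pools.mean(axis=0).round(1))
```

A real model of this kind would, in addition, track water, nutrient, and CO2 limitations explicitly and evolve the carbon pools through time, but the basic loop over many trait combinations is the point of the illustration.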

“The main finding of the study was that including diversity in vegetation models improves their ability to project ecosystem responses to climate change and enhances their credibility. A second point, which was unexpected, was that when a 50% drop in precipitation was applied, plant strategy diversity increased but carbon removal from the atmosphere decreased. This can have a different impact on climate change mitigation. In this case, the increase in diversity isn’t necessarily a good thing,” said Bianca Fazio Rius, first author of the article and a Ph.D. candidate at UNICAMP’s Institute of Biology (IB).

Rius is supported by FAPESP, which also funded the study via a scholarship awarded to João Paulo Darela Filho, and via AmazonFACE, a research program involving field experiments to find out how the rise in atmospheric carbon dioxide affects the Amazon Rainforest, especially its biodiversity and ecosystem services. FACE stands for Free-Air Carbon Dioxide Enrichment (more at amazonface.unicamp.br/#). 

Rius is a member of the team at the Terrestrial System Science Laboratory, headed by Professor David Montenegro Lapola, the last author of the article.

“CAETÊ accurately represents the huge biological diversity of the world’s largest tropical forest, while at the same time stimulating field data collection, which is still necessary for this kind of model,” Lapola told Agência FAPESP.

Lapola was one of the Brazilians who, with 34 other scientists affiliated with institutions in Brazil and abroad, signed an article featured on the cover of Science early this year, showing that 38% of the Amazon’s current area suffers degradation due to fire, illegal logging, edge effects (fragmentation due to changes in habitats adjacent to deforested areas) and extreme drought. As a result, carbon emissions deriving from the gradual loss of vegetation are equivalent to or even greater than emissions due to deforestation.

Pros and cons

Vegetation models are widely used to analyze the carbon balance in the Amazon under projected future climate conditions. Previous research showed that the Amazon’s average temperature has risen 1 °C in the last 40 years and that rainfall has decreased by 36% in some areas. CO2 storage capacity has also fallen owing to deforestation, vegetation degradation, and global warming.

Moreover, according to a report published on May 17 by the World Meteorological Organization (WMO), global temperatures are likely to surge to record levels in the next five years because of greenhouse gas emissions and El Niño, and rainfall is set to decrease in the Amazon.

However, most existing algorithms are based on a small number of plant functional types (PFTs), which modelers adopt to represent broad groupings of plant species that share similar characteristics and ecosystem functions. They include dynamic global vegetation models (DGVMs), which simulate changes in vegetation and the associated biogeochemical and hydrological cycles in response to climate change (e.g. Jena Diversity, or JeDi). As a result, diversity is underrepresented, and the combination of traits found in model ecosystems is far simpler than the complexity of the world’s largest tropical forest warrants, leading to scenarios that are limited or that overestimate the impact of environmental change. If these are the cons, the pros include not depending on the logistics and major investments required by large-scale field experiments.
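One minimal way to see why collapsing diverse traits into a few PFT averages can bias projections (an illustrative Python sketch with made-up numbers, not output from CAETÊ or any DGVM): for a saturating process such as photosynthesis, the response of the average plant is not the average response of diverse plants.

```python
import numpy as np

rng = np.random.default_rng(1)

def photosynthesis(vcmax):
    """Toy saturating response of carbon uptake to a leaf trait (illustrative units)."""
    return 100.0 * vcmax / (vcmax + 50.0)

# A diverse community: the trait varies widely across individuals/strategies.
vcmax_community = rng.lognormal(mean=np.log(50.0), sigma=0.6, size=10_000)

diverse_mean   = photosynthesis(vcmax_community).mean()   # average of the responses
pft_style_mean = photosynthesis(vcmax_community.mean())   # response of the average trait

print(f"trait-diverse estimate: {diverse_mean:.1f}")
print(f"single-PFT estimate:    {pft_style_mean:.1f}")
# For a concave (saturating) response, the single-PFT estimate overshoots,
# which is one reason low-diversity models can misjudge climate impacts.
```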

Tipping point

According to Rius, the study did not focus on species. “We used the idea that every individual, even individuals in the same species, can be considered a type of strategy for dealing with the environment. Computationally created strategies don’t necessarily belong to any particular species,” she said.

For a plant or any living being, she explained, a strategy represents a set of traits that determine how it responds to or affects the environment. A plant that adapts root depth to access water depending on the height of the water table could be a good example. Strategies profoundly influence the ability to survive and reproduce, and they are associated with ecosystem services such as carbon storage or the production of moisture for precipitation.

“As the climate becomes drier, we’re seeing a change in types of life strategy in the Amazon. Strategies increasingly resemble those of the Cerrado [Brazil’s savanna-like biome]. It’s as if the Cerrado had begun to penetrate the forest. Other researchers have noted this as well,” Rius said.

The study using CAETÊ provided more evidence that the inclusion of variability and diversity can have implications for modeling the Amazon’s tipping point, beyond which natural vegetation will no longer be able to recover, the scientists explained. One of the first articles to address this topic was signed by Thomas Lovejoy (1941-2021), the biologist who coined the term “biological diversity”, and Carlos Nobre, Co-Chair of the Science Panel for the Amazon. The paper highlighted the importance of the forest’s water cycle not just for Brazil but for all of South America and other regions.

Through evapotranspiration, the forest supplies, throughout the year, the moisture that contributes, for example, to rainfall in parts of the La Plata River basin, especially in southern Paraguay, southern Brazil, Uruguay, and eastern Argentina.

More diversity

The development of CAETÊ began in 2015. It was based on the potential vegetation model CPTEC-PVM2, developed by Lapola and Nobre with Marcos D. Oyama.

“Most vegetation models represent the Amazon with two or three types of strategy. We set out to include more diversity. We’ll continue to develop our model because good models are never finished,” Rius said.

With this in mind, Bárbara Cardeli, a Ph.D. candidate at IB-UNICAMP, has joined the group and is working to add a module to the model that will calculate ecosystem services.

“This tool will be easy to use and will show whether specific ecosystem services are assured via processes such as how plant strategies allocate carbon. We want to include numerical values for the provision of these services,” Cardeli said.

The researchers envisage CAETÊ as supplying data-based input for decision-making and the formulation of public policy for the carbon market. At the 2021 UN Climate Change Conference (COP26), Brazil announced a commitment to halve its carbon emissions from the 2005 level by 2030 and achieve carbon neutrality by 2050.

Alberto Marino, Ph.D.

OU researchers demo secure information transfer using spatial correlations in quantum entangled beams of light

Researchers at the University of Oklahoma led a study recently published in Science Advances that proves the principle of using spatial correlations in quantum entangled beams of light to encode information and enable its secure transmission.

Light can be used to encode information for high-data-rate transmission, long-distance communication, and more. But for secure communication, encoding large amounts of information in light poses additional challenges for ensuring the privacy and integrity of the data being transferred.

[Fig. 1: Experimental setup for encoding information in the distribution of the spatial correlations of twin beams. Four-wave mixing (FWM) in a hot 85Rb vapor cell generates quantum-correlated probe and conjugate beams; a spatial light modulator (SLM) imprints a phase pattern on the pump, and an EMCCD camera records the twin beams’ momentum distributions in the far field.]

Alberto Marino, the Ted S. Webb Presidential Professor in the Homer L. Dodge College of Arts, led the research with OU doctoral student and the study’s first author Gaurav Nirala and co-authors Siva T. Pradyumna and Ashok Kumar. Marino also holds positions with OU’s Center for Quantum Research and Technology and with the Quantum Science Center at Oak Ridge National Laboratory.

“The idea behind the project is to be able to use the spatial properties of the light to encode large amounts of information, just like how an image contains information. However, to be able to do so in a way that is compatible with quantum networks for secure information transfer. When you consider an image, it can be constructed by combining basic spatial patterns known as modes, and depending on how you combine these modes, you can change the image or encoded information,” Marino said.

“What we’re doing here that is new and different is that we’re not just using those modes to encode information; we’re using the correlations between them,” he added. “We’re using the additional information on how those modes are linked to encode the information.”

The researchers used two entangled beams of light, meaning that the light waves are interconnected with correlations stronger than those achievable with classical light, and that they remain interconnected regardless of the distance between them.

[Fig. 2: Information encoding in the distribution of the spatial correlations of twin beams. Target images are converted into computer-generated holograms (CGHs) that structure the pump; the measured spatial cross-correlations between the probe and conjugate intensity fluctuations reveal the encoded information, in good agreement with simulations.]

“The advantage of the approach we introduce is that you’re not able to recover the encoded information unless you perform joint measurements of the two entangled beams,” Marino said. “This has applications such as secure communication, given that if you were to measure each beam by itself, you would not be able to extract any information. You have to obtain the shared information between both of the beams and combine it in the right way to extract the encoded information.”

Through a series of images and correlation measurements, the researchers demonstrated that they could successfully encode information in these quantum-entangled beams of light. Only when the two beams were combined in the intended way did the encoded information resolve into recognizable images.
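As a rough sketch of the kind of joint analysis this implies (hypothetical Python; the array shapes, the FFT-based correlation estimator, and the stand-in data below are assumptions for illustration, not the authors’ actual pipeline), the encoded pattern lives in the cross-correlation of the intensity fluctuations of the two beams:

```python
import numpy as np

def spatial_cross_correlation(probe_stack, conjugate_stack):
    """Spatial cross-correlation of intensity *fluctuations* between two beams.

    probe_stack, conjugate_stack: arrays of shape (n_frames, ny, nx) holding
    camera images of the probe and conjugate beams. The returned 2D map only
    shows structure when the two beams are analyzed jointly.
    """
    dp = probe_stack - probe_stack.mean(axis=0)       # fluctuations about the mean frame
    dc = conjugate_stack - conjugate_stack.mean(axis=0)
    # Cross-correlate each pair of frames via FFT, then average over frames.
    corr = np.fft.ifft2(np.fft.fft2(dp) * np.conj(np.fft.fft2(dc))).real
    return np.fft.fftshift(corr.mean(axis=0))

# Stand-in data: shared noise makes the two "beams" correlated frame by frame.
rng = np.random.default_rng(0)
shared = rng.normal(size=(500, 64, 64))
probe = shared + 0.5 * rng.normal(size=shared.shape)
conjugate = shared + 0.5 * rng.normal(size=shared.shape)

corr_map = spatial_cross_correlation(probe, conjugate)
print(corr_map.shape, corr_map.max() > corr_map.mean())   # (64, 64) True
```

In the experiment, each beam analyzed on its own (its auto-correlation) shows only a featureless, localized peak, while the joint cross-correlation of the twin beams reveals the encoded pattern, mirroring Figs. 2 and 3.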

“The experimental result describes how one can transfer spatial patterns from one optical field to two new optical fields generated using a quantum mechanical process called four-wave mixing,” said Nirala. “The encoded spatial pattern can be retrieved solely by joint measurements of the generated fields. One interesting aspect of this experiment is that it offers a novel method of encoding information in light by modifying the correlations between various spatial modes without affecting time correlations.”

[Fig. 3: Spatial auto-correlations of the probe and conjugate fields. Each beam on its own shows only a localized auto-correlation peak, so the encoded information can be read out only through joint measurements of the two beams.]

“What this could enable, in principle, is the ability to securely encode and transmit a lot of information using the spatial properties of the light, just like how an image contains a lot more information than just turning the light on and off,” Marino said. “Using the spatial correlations is a new approach to encode information.”

“Information encoding in the spatial correlations of entangled twin beams” was published in Science Advances on June 2, 2023.

Computer vision AI has been trained to identify specific objects, places, animals, even people. And it has become extremely popular—so popular, in fact, that its computational techniques have been applied to all sorts of other AI platforms. The result: a kind of digital dark matter that can cloud users’ interpretations without their ever knowing it. AI-generated image: ©V2 Ilugram - stock.adobe.com

Koo’s computational correction can interpret AI's DNA analyses more accurately

Scientists using artificial intelligence technology may be inviting unwanted noise into their genome analyses. Now, CSHL researchers have created a computational correction that will allow them to see through the fog and find genuine DNA features that could signal breakthroughs in health and medicine.

Cold Spring Harbor Laboratory (CSHL) Assistant Professor Peter Koo has found that scientists using popular computational tools to interpret AI predictions pick up too much “noise,” or extra information, when analyzing DNA. And he’s found a way to fix this. Now, with just a couple of new lines of code, scientists can get more reliable explanations out of powerful AIs known as deep neural networks. That means they can continue chasing down genuine DNA features. Those features might just signal the next breakthrough in health and medicine. But scientists won’t see the signals if they’re drowned out by too much noise.

So, what causes the meddlesome noise? It comes from a mysterious, invisible source, a kind of digital “dark matter.” Physicists and astronomers believe most of the universe is filled with dark matter, a material that exerts gravitational effects but that no one has yet seen. Similarly, Koo and his team discovered that the data AI is trained on lacks critical information, leading to significant blind spots. Even worse, those blind spots get factored in when AI predictions of DNA function are interpreted.

Koo says: “The deep neural network is incorporating this random behavior because it learns a function everywhere. But DNA is only in a small subspace of that. And it introduces a lot of noise. And so we show that this problem actually does introduce a lot of noise across a wide variety of prominent AI models.”

Digital dark matter is a result of scientists borrowing computational techniques from computer vision AI. Image data comes as pixels whose values are continuous and can vary freely, but DNA data is confined to combinations of four nucleotide letters: A, C, G, T. In other words, we’re feeding the AI’s interpretation tools an input they don’t know how to handle correctly.

By applying Koo’s computational correction, scientists can interpret AI’s DNA analyses more accurately. 
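The press release does not spell out the code, but a correction of this general kind can be sketched as follows (an illustrative Python sketch: it removes, at each sequence position, the component of a gradient-based attribution map that points off the one-hot “simplex” on which real DNA data lives; the function name and details here are assumptions, not necessarily Koo’s exact implementation):

```python
import numpy as np

def correct_attribution(grad):
    """Nudge a gradient-based attribution map back toward the data manifold.

    grad: array of shape (sequence_length, 4) -- one value per position per
          nucleotide (A, C, G, T), e.g. input gradients or saliency scores.
    Subtracting the per-position mean across the four nucleotides removes the
    direction the model was never constrained by (off the one-hot simplex),
    which otherwise shows up as spurious "noise" in the interpretation.
    """
    return grad - grad.mean(axis=-1, keepdims=True)

# Example with random numbers standing in for a model's input gradients:
rng = np.random.default_rng(0)
raw = rng.normal(size=(200, 4))
clean = correct_attribution(raw)
print(np.allclose(clean.sum(axis=-1), 0.0))  # True: off-simplex component removed
```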

Koo says: “We end up seeing sites that become much more crisp and clean, and there is less spurious noise in other regions. One-off nucleotides that are deemed very important all of a sudden disappear.”

Koo believes noise disturbance affects more than AI-powered DNA analyzers. He thinks it’s a widespread affliction among computational processes involving similar types of data. Remember, dark matter is everywhere. Thankfully, Koo’s new tool can help bring scientists out of the darkness and into the light.