Artist’s impression of GRB 211211A. Credit: Soheb Mandhai (@TheAstroPhoenix)

British physicist Professor Matt Nicholl models the extra emission from kilonovae, the main factories of gold in the Universe

A highly unusual blast of high-energy light from a nearby galaxy has been linked by scientists to a neutron star merger.

The event, detected in December 2021 by NASA’s Neil Gehrels Swift Observatory and the Fermi Gamma-ray Space Telescope, was a gamma-ray burst – an immensely energetic explosion that can last from a few milliseconds to several hours.

This gamma-ray burst, identified as GRB 211211A, lasted about a minute – a relatively lengthy explosion, which would usually signal the collapse of a massive star into a supernova. But this event contained an excess of infrared light and was much fainter and faster-fading than a classical supernova, hinting that something different was going on.

In a new study, an international team of scientists showed that the infrared light detected in the burst came from a kilonova – a rare event thought to occur when two neutron stars, or a neutron star and a black hole, collide, producing heavy elements such as gold and platinum. Until now, kilonovae have only been associated with gamma-ray bursts lasting less than two seconds.

The work was led by Jillian Rastinejad at Northwestern University in the US along with physicists from the University of Birmingham and the University of Leicester in the UK, and Radboud University in The Netherlands.

Dr. Matt Nicholl, an Associate Professor at the University of Birmingham, modeled the kilonova emission. “We found that this one event produced about 1,000 times the mass of the Earth in very heavy elements. This supports the idea that these kilonovae are the main factories of gold in the Universe,” he said.
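
For scale, here is a quick back-of-the-envelope conversion (a sketch in Python using standard mass constants; the 1,000-Earth-mass figure is simply the one quoted above) into the units astronomers usually work with:

```python
# Rough unit conversion for the quoted heavy-element yield.
# The constants are standard values; the 1,000 Earth-mass figure
# comes from the quote above, not from any new calculation.
M_EARTH_KG = 5.972e24  # mass of the Earth in kilograms
M_SUN_KG = 1.989e30    # mass of the Sun in kilograms

yield_kg = 1_000 * M_EARTH_KG
yield_msun = yield_kg / M_SUN_KG

print(f"Heavy-element yield: {yield_kg:.2e} kg ≈ {yield_msun:.3f} solar masses")
# -> about 0.003 solar masses of freshly forged heavy elements
```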

Although up to 10 percent of long gamma-ray bursts are suspected to be caused by the merger of two neutron stars, or of a neutron star and a black hole, no firm evidence – in the form of a kilonova – had previously been identified.

Dr. Gavin Lamb, a post-doctoral researcher at the University of Leicester, explained: “A gamma-ray burst is followed by an afterglow that can last several days. These afterglows behave in a very characteristic manner, and by modeling them we can expose any extra emission components, such as a supernova or a kilonova.”
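
As a rough illustration of the method Dr. Lamb describes (a toy sketch with synthetic data, not the team’s analysis code), one can fit the characteristic power-law fading of an afterglow and subtract it, so that any extra component stands out:

```python
import numpy as np

# Toy afterglow decomposition: GRB afterglows fade roughly as a power law
# F(t) ~ t**(-alpha). Fit that baseline at early times, subtract it from
# the observed light curve, and any excess (e.g. a kilonova) is exposed.
# All numbers below are synthetic and purely illustrative.
rng = np.random.default_rng(0)
t = np.logspace(-1, 1, 40)  # days since the burst

afterglow = 2.0 * t**-1.1                                     # power-law decay
kilonova = 0.5 * np.exp(-0.5 * (np.log(t) - np.log(2.0))**2)  # bump near ~2 days
flux = afterglow + kilonova + rng.normal(0, 0.01, t.size)     # "observed" light curve

# Fit the power law in log-log space using only early times,
# where the afterglow dominates.
early = t < 0.5
slope, intercept = np.polyfit(np.log(t[early]), np.log(flux[early]), 1)
baseline = np.exp(intercept) * t**slope

excess = flux - baseline  # residual ≈ the kilonova component
print(f"fitted decay index: {slope:.2f} (injected: -1.10)")
print(f"peak excess flux:   {excess.max():.2f} (injected kilonova peak: 0.50)")
```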

The kilonova generated by GRB 211211A is the closest yet discovered without gravitational waves, and it has exciting implications for the upcoming gravitational-wave observing run starting in 2023. Because the burst’s host galaxy lies only about one billion light-years away, scientists were able to study the properties of the merger in unprecedented detail.

A related paper from the same collaboration in Nature Astronomy, led by Dr. Benjamin Gompertz, Assistant Professor at the University of Birmingham, describes some of these properties.

In particular, the team identified how the jet of high-energy electrons that caused the gamma-ray burst, traveling at almost the speed of light, changed with time. The cooling of this jet was shown to be responsible for the long-lasting GRB emission.

In the paper, the team also described how close observation of GRB 211211A can offer fascinating insights into other gamma-ray bursts that previously appeared not to fit with standard interpretations.

Dr. Gompertz said: “This was a remarkable GRB. We don’t expect mergers to last more than about two seconds. Somehow, this one powered a jet for almost a full minute. It’s possible the behavior could be explained by a long-lasting neutron star, but we can’t rule out that what we saw was a neutron star being ripped apart by a black hole.

“Studying more of these events will help us determine which is the right answer and the detailed information we gained from GRB 211211A will be invaluable for this interpretation.”

The work was funded by the European Research Council under the KilonovaRank project, which harnesses the power of Big Data in investigating large cosmic events.

A magnetic vortex, known as a skyrmion (grey dot), being displaced into the corners of a triangular field by electrical currents, where it bounces off the sides. The potentials shown in red are sufficient for carrying out Boolean logic operations.

German physicists demo prototype of energy-efficient computing with tiny magnetic vortices

A large percentage of the energy consumed today takes the form of electrical power for processing and storing data and for running the relevant terminal equipment and devices, and this share is predicted to increase even further in the future. Innovative concepts, such as neuromorphic computing, employ energy-saving approaches to solve this problem. In a joint project undertaken by experimental and theoretical physicists at Johannes Gutenberg University Mainz (JGU), with the funding of an ERC Synergy Grant, one such approach, known as Brownian reservoir computing, has now been realized.

Brownian computing uses ambient thermal energy

Brownian reservoir computing is a combination of two unconventional computing methods. Brownian computing exploits the fact that computing processes typically run at room temperature, which makes it possible to harness the surrounding thermal energy and thus cut electricity consumption. The thermal energy used in the computing system is essentially the random movement of particles, known as Brownian motion, which gives this computing method its name.
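
As a toy illustration of the physics (not the Mainz device itself), Brownian motion can be simulated as a sequence of random thermal kicks; the point is that this motion happens for free, with no drive energy spent:

```python
import numpy as np

# Toy Brownian motion: a particle takes random thermal kicks in 2D.
# No external drive is applied, yet the particle steadily explores its
# surroundings -- the "free" thermal resource Brownian computing taps.
rng = np.random.default_rng(42)

kicks = rng.normal(0.0, 1.0, size=(10_000, 2))  # thermal kicks (arbitrary units)
path = kicks.cumsum(axis=0)                     # resulting trajectory

# Diffusive fingerprint: the typical squared displacement grows
# roughly linearly with the number of steps taken.
for n in (100, 1_000, 10_000):
    msd = (path[:n] ** 2).sum(axis=1).mean()
    print(f"steps = {n:>6}:  mean squared displacement ≈ {msd:,.0f}")
```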

Reservoir computing is ideal for exceptionally efficient data processing

Reservoir computing utilizes the complex response of a physical system to external stimuli, resulting in an extremely resource-efficient way of processing data. Most of the computation is performed by the system itself, which does not require additional energy. Furthermore, this type of reservoir computer can easily be customized to perform various tasks as there is no need to adjust the solid-state system to suit specific requirements.
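
A minimal software sketch of this idea (an echo-state-style toy in Python, not the skyrmion hardware): a fixed, randomly wired dynamical system does the heavy lifting, and only a simple linear readout is trained on top of it:

```python
import numpy as np

# Minimal echo-state-style reservoir. The recurrent network is random and
# fixed (never trained); only a linear readout is fitted. Task: XOR of the
# current and previous input bit -- a standard nonlinear, memory-dependent
# benchmark for reservoir computing.
rng = np.random.default_rng(1)

N = 100                           # reservoir size
w_in = rng.uniform(-0.5, 0.5, N)  # fixed input weights
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stable dynamics

u = rng.integers(0, 2, 2_000)     # random input bit stream
y = u ^ np.roll(u, 1)             # target: XOR(u[t], u[t-1])

# Drive the reservoir and record its states.
x = np.zeros(N)
states = np.empty((len(u), N))
for t, bit in enumerate(u):
    x = np.tanh(W @ x + w_in * bit)
    states[t] = x

# Train only the linear readout (ridge regression), then evaluate.
train = slice(100, 1_500)         # skip the initial transient
A = states[train]
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(N), A.T @ y[train])

pred = (states[1_500:] @ w_out > 0.5).astype(int)
print("test accuracy:", (pred == y[1_500:]).mean())
```

Because only the readout layer is fitted, the same reservoir can in principle be retargeted to a different logic task simply by retraining w_out, which mirrors the easy customization described above.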

A team headed by Professor Mathias Kläui of the Institute of Physics at Mainz University, supported by Professor Johan Mentink of Radboud University Nijmegen in the Netherlands, has now succeeded in developing a prototype that combines these two computing methods. This prototype is able to perform Boolean logic operations, which can be used as standard tests for the validation of reservoir computing.

The solid-state system selected in this instance consists of metallic thin films exhibiting magnetic skyrmions. These magnetic vortices behave like particles and can be driven by electrical currents. The behavior of skyrmions is influenced not only by the applied current but also by their own Brownian motion. This Brownian motion of skyrmions can result in significantly increased energy savings as the system is automatically reset after each operation and prepared for the next computation.
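
The interplay of current drive and thermal motion can be caricatured in a few lines (all parameters are invented for illustration; this is not a model of the actual device):

```python
import numpy as np

# Cartoon of the skyrmion behavior described above: an overdamped
# particle drifts while a current is applied and, once the current is
# switched off, thermal kicks let it relax back toward its rest position.
# That relaxation is the energy-saving "automatic reset". All parameters
# here are invented for illustration.
rng = np.random.default_rng(7)

def final_position(current, steps=5_000, k=0.002, mobility=0.01, noise=0.03):
    """Overdamped particle in a weak restoring potential (stiffness k),
    pushed by `current` and jostled by thermal noise."""
    x = 0.0
    for _ in range(steps):
        x += mobility * current - k * x + noise * rng.normal()
    return x

print(f"current on : x ≈ {final_position(current=1.0):+.2f}  (displaced)")
print(f"current off: x ≈ {final_position(current=0.0):+.2f}  (thermally re-centered)")
```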

First prototype was developed in Mainz

Although there have been many theoretical concepts for skyrmion-based reservoir computing in recent years, the researchers in Mainz succeeded in developing the first functional prototype only when they combined these concepts with the principle of Brownian computing. “The prototype is easy to produce from a lithographic point of view and can theoretically be reduced to a size of just nanometers,” said experimental physicist Klaus Raab. “We owe our success to the excellent collaboration between the experimental and theoretical physicists here at Mainz University,” emphasized theoretical physicist Maarten Brems.

Project coordinator Professor Mathias Kläui added: “I’m delighted that the funding provided through a Synergy Grant from the European Research Council enabled us to collaborate with outstanding colleagues in the Department of Theoretical Physics in Nijmegen, and it was this collaboration that resulted in our achievement. I see great potential in unconventional computing, a field that also receives extensive support here at Mainz through funding from the Carl Zeiss Foundation for the Emergent Algorithmic Intelligence Center.”

L-R: Amir Livne, Dr. Gil Shamai and Prof. Ron Kimmel

Technion-developed deep-learning system reads breast cancer scans better than a human

One in nine women in the developed world will be diagnosed with breast cancer at some point in her life. The prevalence of breast cancer is increasing, an effect caused in part by the modern lifestyle and increased lifespans. Thankfully, treatments are becoming more efficient and personalized. However, what isn’t increasing – and is in fact decreasing – is the number of pathologists, the doctors who specialize in examining body tissues to provide the specific diagnosis necessary for personalized medicine. A team of researchers at the Technion – Israel Institute of Technology has therefore set out to turn computers into effective pathologists’ assistants, simplifying and improving the human doctor’s work.

The specific task that Dr. Gil Shamai and Amir Livne from the lab of Professor Ron Kimmel from the Henry and Marilyn Taub Faculty of Computer Science at the Technion set out to achieve lies within the realm of immunotherapy. Immunotherapy has been gaining prominence in recent years as an effective, sometimes even game-changing, treatment for several types of cancer. The basis of this form of therapy is encouraging the body’s own immune system to attack the tumor. However, such therapy needs to be personalized as the correct medication must be administered to the patients who stand to benefit from it based on the specific characteristics of the tumor.

Multiple natural mechanisms prevent our immune systems from attacking our own bodies, and cancer tumors often exploit these mechanisms to evade the immune system. One such mechanism involves the PD-L1 protein: some tumors display it, and it acts as a sort of password, erroneously convincing the immune system that the cancer should not be attacked. Immunotherapy targeting PD-L1 can persuade the immune system to ignore this particular password, but it is, of course, only effective when the tumor actually expresses PD-L1.

It is a pathologist’s task to determine whether a patient’s tumor expresses PD-L1. Expensive chemical markers are used to stain a biopsy taken from the tumor in order to obtain the answer. The process is non-trivial, time-consuming, and at times inconsistent. Dr. Shamai and his team took a different approach. In recent years, it has become an FDA-approved practice for biopsies to be scanned so they can be used for digital pathological analysis. Amir Livne, Dr. Shamai, and Prof. Kimmel decided to see if a neural network could use these scans to make the diagnosis without requiring additional processes. “They told us it couldn’t be done,” the team said, “so of course, we had to prove them wrong.”

Neural networks are trained in a manner similar to how children learn: they are presented with multiple tagged examples. A child is shown many dogs and various other things, and from these examples forms an idea of what “dog” is. The neural network Prof. Kimmel’s team developed was presented with digital biopsy images from 3,376 patients that were tagged as either expressing or not expressing PD-L1. After preliminary validation, it was asked to determine whether additional clinical trial biopsy images from 275 patients were positive or negative for PD-L1. It performed better than expected: for 70% of the patients, it was able to confidently and correctly determine the answer. For the remaining 30% of the patients, the program could not find the visual patterns that would enable it to decide one way or the other. Interestingly, in the cases where artificial intelligence (AI) disagreed with the human pathologist’s determination, a second test proved the AI to be right.
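
The resulting decision rule can be sketched schematically (the probability thresholds and function names here are illustrative assumptions, not the Technion model):

```python
# Schematic classify-or-abstain rule like the one described above: the
# model emits a probability that a biopsy scan expresses PD-L1 and only
# commits to an answer when it is confident enough. The thresholds are
# illustrative assumptions, not the values used by the Technion system.
POSITIVE_THRESHOLD = 0.9   # confident "expresses PD-L1"
NEGATIVE_THRESHOLD = 0.1   # confident "does not express PD-L1"

def triage(pd_l1_probability: float) -> str:
    """Map a model probability to a decision or an abstention."""
    if pd_l1_probability >= POSITIVE_THRESHOLD:
        return "PD-L1 positive"
    if pd_l1_probability <= NEGATIVE_THRESHOLD:
        return "PD-L1 negative"
    return "undetermined -- refer to chemical staining"

for p in (0.97, 0.55, 0.04):
    print(f"model probability {p:.2f} -> {triage(p)}")
```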

“This is a momentous achievement,” Prof. Kimmel explained. “The variations that the computer found – they are not distinguishable to the human eye. Cells arrange themselves differently if they present PD-L1 or not, but the differences are so small that even a trained pathologist can’t confidently identify them. Now our neural network can.”

This achievement is the work of a team comprising Dr. Gil Shamai and graduate student Amir Livne, who developed the technology and designed the experiments; Dr. António Polónia from the Institute of Molecular Pathology and Immunology of the University of Porto, Portugal; and Professor Edmond Sabo and Dr. Alexandra Cretu from Carmel Medical Center in Haifa, Israel, the expert pathologists who conducted the research. The team was supported by Professor Gil Bar-Sela, head of the oncology and hematology division at Haemek Medical Center in Afula, Israel.

“It’s an amazing opportunity to bring together artificial intelligence and medicine,” Dr. Shamai said. “I love mathematics, I love developing algorithms. Being able to use my skills to help people, to advance medicine – it’s more than I expected when I started out as a computer science student.” He is now leading a team of 15 researchers, who are taking this project to the next level.

“We expect AI to become a powerful tool in doctors’ hands,” shared Prof. Kimmel. “AI can assist in making or verifying a diagnosis, it can help match the treatment to the individual patient, it can offer a prognosis. I do not think it can, or should, replace the human doctor. But it can make some elements of doctors’ work simpler, faster, and more precise.”