Swansea University physicists develop new method for putting quantum correlations to the test

Quantum computers run their algorithms on large quantum systems made of many parts, called qubits, by creating quantum correlations across all of them. It is important to verify that the actual computation procedures produce quantum correlations of the desired quality.

However, carrying out these checks is resource-intensive, as the number of tests required grows exponentially with the number of qubits involved.

Researchers from the College of Science, working with colleagues from Spain and Germany, have now proposed a new technique that helps to overcome this problem by significantly reducing the number of measurements while increasing resilience to noise.

Their method offers a solution to the problem of certifying correlations in large systems and is explained in a new paper which has just been published in PRX Quantum, a prestigious journal from the American Physical Society.

Research fellow Dr. Farid Shahandeh, the lead scientist of this research, said: “To achieve this we combine two processes. Firstly, consider a juicer: it extracts the essence of the fruit by squeezing it into a small space. Similarly, in many cases, quantum correlations in large systems can be concentrated in smaller parts of the system. The ‘squeezing’ is done by measurements on the rest of the system, a step called the localization process.

“Suppose the juicer directly converts the fruit into juice boxes without any labels. We don’t know what is inside: it could be apple juice, orange juice, or just water. One way to tell would be to open the box and taste it. The quantum analogue of this is to measure a suitable quantity that tells us whether quantum correlations exist within a system or not.

“This process is called witnessing and we call the combination of the two approaches conditional witnessing.”
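To make the idea concrete, here is a minimal numerical sketch of conditional witnessing on a toy example. The three-qubit GHZ state, the single-qubit measurement, and the fidelity-based witness below are illustrative assumptions, not the constructions from the paper: measuring the third qubit localizes the correlations onto the first two qubits, and a standard witness W = I/2 - |Φ+⟩⟨Φ+| then certifies their entanglement whenever its expectation value is negative.

```python
import numpy as np

# Toy 3-qubit GHZ state (|000> + |111>)/sqrt(2).
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz)

# Localization: measure qubit 3 in the X basis and keep the "+" outcome.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
proj = np.kron(np.eye(4), np.outer(plus, plus))   # I (x) I (x) |+><+|
rho_post = proj @ rho @ proj
rho_post /= np.trace(rho_post)                    # renormalize on this outcome

# Discard the measured qubit: partial trace over qubit 3.
rho12 = rho_post.reshape(2, 2, 2, 2, 2, 2).trace(axis1=2, axis2=5).reshape(4, 4)

# Witnessing: W = I/2 - |Phi+><Phi+| flags entanglement when <W> < 0.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
witness = np.eye(4) / 2 - np.outer(phi_plus, phi_plus)
print("Witness value:", np.trace(witness @ rho12))  # -0.5 < 0: entangled
```

The point of the combination is economy: one localizing measurement plus one small witness replaces full tomography of the global state, and it is this kind of reduction that matters as qubits are added.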

In their research, the physicists prove their method is efficient and generically tolerates higher levels of noise in experiments. To demonstrate its efficiency, they also compared their approach with previous techniques on a class of quantum processors that use trapped ions.

Dr. Shahandeh, the recipient of a Royal Commission for the Exhibition of 1851 research fellowship, added: “This is of crucial importance in current technology where the addition of each qubit unavoidably amplifies the complexity of quantum states and experimental imperfections.”

Paper: Andrea Rodriguez-Blanco, Alejandro Bermudez, Markus Müller, and Farid Shahandeh, “Efficient and robust certification of genuine multipartite entanglement in noisy quantum error correction circuits,” PRX Quantum.

UHN develops deep learning model to maximize lifespan after liver transplant

In Toronto, Canada, researchers from University Health Network have developed and validated an innovative deep learning model to predict a patient's long-term outcome after receiving a liver transplant.

The first of its kind in the field of transplantation, this model is the result of a collaboration between the Ajmera Transplant Centre and the Peter Munk Cardiac Centre. The study, published in The Lancet Digital Health, suggests it could significantly improve long-term survival and quality of life for liver transplant recipients.

"Historically, we have seen good advances in one-year post-transplant outcomes, but survival in the longer term hasn't significantly improved in the past decades," explains Dr. Mamatha Bhat, a hepatologist with the Ajmera Transplant Centre at UHN and co-senior author of the study.

"This model can guide physicians and help anticipate when and how complications may arise. It can really be paradigm-changing in how we support liver transplant recipients in personalizing their care and helping them live better and longer."

For liver transplant recipients, long-term survival beyond one year is significantly compromised by an increased risk of cancer, cardiovascular mortality, infection, and graft failure. Clinical tools to identify patients at risk of these complications are limited.

This model will help clinicians enhance care after liver transplantation, using machine learning to identify potential risks when formulating patient-specific treatment plans.

The study results show the model is more than 80% accurate in predicting complications for liver transplant recipients at any point post-transplantation, drawing on each patient's medical history and on patterns learned from millions of data points.

"Deep Learning enables timely processing of large-scale datasets, finding patterns and signals that can aid clinicians in better predicting the clinical outcomes and creating specific treatment recommendations," says Dr. Bo Wang, AI Lead at the Peter Munk Cardiac Centre, CIFAR AI Chair at the Vector Institute and co-senior author of this study.

The model was trained on the Scientific Registry of Transplant Recipients (SRTR), a national medical database in the United States containing data from over 42,000 liver transplant recipients. It was then validated on a local dataset of over 3,200 cases from UHN's Ajmera Transplant Centre.
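Methodologically, this is the standard develop-then-externally-validate pattern for clinical models. The sketch below illustrates only that workflow; the synthetic features, labels, and small network are assumptions for the example, since the article does not specify the model's architecture or inputs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def synthetic_cohort(n):
    """Stand-in for tabular recipient features and a binary long-term
    complication label; purely synthetic, for illustration only."""
    X = rng.normal(size=(n, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)
    return X, y

# "Development" cohort standing in for SRTR, "external" cohort for UHN.
X_dev, y_dev = synthetic_cohort(42_000)
X_ext, y_ext = synthetic_cohort(3_200)

# Fit a small neural network on the development cohort only...
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=50, random_state=0)
model.fit(X_dev, y_dev)

# ...then report discrimination on the never-before-seen external cohort.
auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"External-validation AUROC: {auc:.3f}")
```

Evaluating on a cohort the model never saw during development, the role the UHN dataset plays in the study, is what gives a headline accuracy figure its credibility.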

The research team now plans to share this model with clinicians so it can be used around the world. Work is also under way to determine the best format to streamline its use, whether as standalone software or a mobile application.

City, University of London consortium wins highly competitive ESA funding

With funding from the European Space Agency (ESA), City, University of London will spearhead the development of next-generation AI-based guidance, navigation, and control (GNC) systems for space exploration.

A research consortium led by City's Professor of Robotics and Autonomous Systems, Professor Nabil Aouf, and including ENAC (France), Numalis (France), and Spineworks (Portugal), beat off stiff competition from other consortia to win the ESA funding call.

This consortium enjoys the distinction of being the only one led by an academic institution.

Professor Aouf and his colleagues have set out to identify mathematical approaches to support the design and verification of next-generation AI-based GNC architectures and functions.

Their study will establish a formal link between AI-based machine learning (ML) and control-theoretic reasoning and optimization within a challenging space GNC scenario.

It will also propose solutions inspired by robust control theory and other formal methods to provide a level of validation for AI-based ML techniques. In addition, it will develop explainability mechanisms that open up these black-box ML schemes for GNC perception, raising the level of trust needed for space engineers to adopt them.
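The article does not specify which formal methods the consortium will use. One widely used technique for giving worst-case guarantees about a trained network is interval bound propagation, sketched below on a toy, randomly weighted network; the network, input, and noise bound are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny two-layer ReLU network standing in for an ML-based GNC component.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

def affine_bounds(W, b, lo, hi):
    """Propagate the input box [lo, hi] exactly through y = W @ x + b."""
    center, radius = (lo + hi) / 2, (hi - lo) / 2
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius
    return y_center - y_radius, y_center + y_radius

# Nominal sensor reading and a bounded perturbation (e.g. sensor noise).
x = np.array([0.5, -0.2, 0.1, 0.9])
eps = 0.01
lo, hi = x - eps, x + eps

lo, hi = affine_bounds(W1, b1, lo, hi)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU is monotone
lo, hi = affine_bounds(W2, b2, lo, hi)

# An interval guaranteed to contain the output for EVERY input in the box.
print("Certified output bounds:", list(zip(lo, hi)))
```

A certificate of this form, stating that no input within the noise bound can push the output outside the computed envelope, is the kind of statement an engineer can check against mission requirements.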

The two other consortia winning ESA funding in this highly competitive call, both industry-led, are headed by SENER (Spain), with Technical University Eindhoven (Netherlands) and the University of Stuttgart (Germany), and by DEIMOS (Portugal), with INESC-ID (Portugal), TASC Ltd (UK), and Lund University (Sweden).

Professor Aouf, who is also a Director of City's London Space Innovation Centre, said: "Planetary space missions present several technological challenges, well highlighted by the difficulties traditionally encountered in Moon landings and, lately, in Mars landings. The landing process encompasses multiple phases; in the last, the space lander closely approaches the terrain surface and touches down within hundreds of meters of a target before coming to a complete stop.

"A question that has to be addressed with the emergence of new AI tools like deep neural networks, which show a good performance for some terrestrial autonomous systems applications, is whether they are capable of fulfilling the stringent requirements of spaceborne missions. AI-based on machine learning algorithms are often used for systems that are difficult to physically model by mathematical linear or nonlinear differential equations."