Japanese researcher shows quantum search algorithm offers hope for radically enhancing wireless networks

A proposed method of profoundly enhancing the energy efficiency of wireless networks unfortunately counts among the computationally hardest problems to solve. But a computer scientist has for the first time demonstrated that the quantum search algorithm can solve this problem faster than a classical computer.

A novel wireless communications technique that exploits the on-off status of parts of the telecommunications medium itself, such as antennas and subcarriers, could deliver super-charged energy efficiency gains, but its optimization counts among the computationally hardest classes of problems to solve. However, a computer scientist at Yokohama National University in Yokohama, Japan, has for the first time demonstrated that the quantum search algorithm can solve this problem faster than a classical computer in terms of query complexity.

Figure: The search space of the index selection problem undergoes a combinatorial explosion when any K elements are selected from M elements and assigned log2(Q) bits of information. Even for a simple case such as (M, K, Q) = (16, 8, 64), there are about 6.94 * 10^173 candidates, making the problem extremely hard to solve.
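The scale of that combinatorial explosion is easy to reproduce in a few lines of Python. This is a back-of-the-envelope sketch, on the assumption that the quoted count corresponds to choosing Q codewords out of the C(M, K) possible on/off activation patterns:

```python
from math import comb

M, K, Q = 16, 8, 64            # example parameters from the text
patterns = comb(M, K)          # on/off activation patterns: C(16, 8)
codebooks = comb(patterns, Q)  # ways to select Q codewords from those patterns

print(patterns)                # 12870
print(f"{codebooks:.2e}")      # roughly 6.94e+173, matching the figure above
```

Even for these modest parameters, exhaustively checking every candidate codebook is hopeless, which is what makes the index selection problem so hard.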

The arrival of 5G wireless networks offers a great boost in bandwidth and higher data rates, potentially enabling a wide range of new mobile data applications such as self-driving cars and the internet of things (IoT). At the same time, this explosion in traffic demands better techniques for efficiently using both the radio spectrum that carries all this information and the energy required to power the system.

One innovative method for improving energy efficiency that has been attracting a great deal of attention in wireless communications circles in recent years is known as index modulation.

The technique’s name echoes the terms frequency modulation (FM) or amplitude modulation (AM) used to describe how information such as voices or music was transmitted through space via radio waves for much of the 20th Century. At the sending end, either the frequency or the amplitude of the ‘carrier’ radio wave was instantaneously modified (‘modulated’) by the transmitter to impress information on that wave, similar to how telegraph operators in the 19th Century impressed information in the form of Morse code upon an electric current running through telegraph wires. At the receiving end, the decoding, or ‘demodulation’ of that carrier wave extracted the information embedded in its form, producing sounds that could then be heard by human ears. A third way to modulate a carrier wave beyond altering a radio wave’s frequency or amplitude is by altering its phase.

Index modulation (IM) offers a fourth way, or one could say a fourth dimension, of impressing information, this time by exploiting the on or off status of a system’s indices. The word index in this case is simply a catch-all term for the infrastructural and operational building blocks of the communications system, such as transmission antennas, subcarriers, time slots, radiofrequency mirrors, LEDs, and even relays and spreading codes. Switching these various elements on or off potentially adds another layer of information to a transmission, this time in the form of binary digits, or bits.

And by turning parts of the system off even as they convey information, the sparseness of the transmitted sequence of symbols reduces computational complexity. This also substantially reduces the energy required to transfer a given amount of data.

“It’s a very elegant concept, using the activation pattern of the building blocks of the communications system itself to impart information, and leading to a reduction in the complexity of the hardware,” said Associate Professor Naoki Ishikawa, the author of the paper.

But this radical improvement comes with an additional—and substantial—challenge.

IM requires optimization to determine which indices should be used and when to convey this binary information, and this particular type of optimization happens to be computationally very difficult.

“This optimization problem is what computational complexity theorists term ‘NP-hard’, one of the hardest classes of problems there is. It leads to what we call a combinatorial explosion,” he added. “So I’ve named this monster of a mathematical challenge the ‘index selection problem’.”

To address the index selection problem, Ishikawa used an algorithm for quantum computing called Grover Adaptive Search (GAS), also known as the quantum search algorithm. Quantum computing may one day offer the ability to perform many types of computations much faster than classical computers.
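GAS builds on Grover's search, which finds a marked item among N candidates with on the order of the square root of N oracle queries, instead of up to N classical checks. As a rough illustration of where that advantage comes from (a toy classical simulation of plain Grover search, not the actual GAS procedure from the paper):

```python
import math

def grover_search(n_items, marked):
    """Toy state-vector simulation of Grover's search for one marked item."""
    amp = [1.0 / math.sqrt(n_items)] * n_items            # uniform superposition
    iters = int(round(math.pi / 4 * math.sqrt(n_items)))  # ~(pi/4)*sqrt(N) queries
    for _ in range(iters):
        amp[marked] = -amp[marked]                        # oracle: flip marked amplitude
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]                 # inversion about the mean
    probs = [a * a for a in amp]                          # measurement probabilities
    return probs.index(max(probs)), iters

best, queries = grover_search(64, marked=42)
print(best, queries)  # 42 6
```

Here the marked item among 64 candidates is found with only 6 oracle queries, where a classical search could need up to 64, which is the quadratic advantage GAS exploits.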

In the paper, Ishikawa demonstrated for the first time that in principle GAS can solve the index selection problem faster than a classical computer in terms of query complexity.

“This shows that index modulation is compatible with quantum computers because it represents information on and off, resulting in binary variables typically used in quantum computation,” he said.

The use of GAS to solve the index modulation problem remains something of a proof of concept, as fault-tolerant, large-scale quantum supercomputers are years away from being realized. There remain many challenges for industrial applications of existing quantum computers due to their non-negligible noise drowning out many signals. In addition, GAS can provide a quadratic speedup, but the problem of exponential complexity is still unresolved and requires long-term study.

Quadratic speedup occurs when a quantum computer solves a problem through N queries where a classical computer would need, for example, N * N = N^2 queries. Exponential speedup occurs when a quantum computer solves a problem through N queries where a classical computer would take 2^N queries. So if N is large, the difference in query complexity becomes larger still.
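Putting hypothetical numbers on that difference (illustrative values only, not figures from the paper):

```python
N = 30                          # hypothetical number of quantum queries
quadratic_classical = N * N     # classical cost if the speedup is quadratic
exponential_classical = 2 ** N  # classical cost if the speedup is exponential

print(quadratic_classical)      # 900
print(exponential_classical)    # 1073741824
```

Under a quadratic speedup the classical machine needs 900 queries; under an exponential one it needs over a billion, and the gap widens explosively as N grows.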

Nevertheless, the demonstration of quantum speedup achieved by GAS has the potential to solve many other problems in society, not just the index selection problem.

South African researchers find giving a big penalty to an algorithm for false negatives results in higher precision

Anyone waiting for the results of a medical test knows the anxious question: ‘Will my life change completely when I know?’ And the relief if you test negative.

Nowadays, Artificial Intelligence (AI) is deployed more and more to predict life-threatening diseases. But there remains a big challenge in getting the Machine Learning (ML) algorithms precise enough. Specifically, getting the algorithms to correctly diagnose if someone is sick.

Machine Learning (ML) is the branch of AI where algorithms learn from datasets and get smarter in the process.

“Let’s say there is a dataset about a serious disease. The dataset has 90 people who do not have the disease. But 10 of the people do have the disease,” says Dr Ibomoiye Domor Mienye. Mienye is a post-doctoral AI researcher at the University of Johannesburg (UJ), South Africa.

“As an example, an ML algorithm says that the 90 do not have the disease. That is correct so far. But it fails to diagnose the 10 that do have the disease. The algorithm is still regarded as 90% accurate,” he says.

Telling sick people that they are healthy can happen when a human doctor sees a patient. It also happens when Artificial Intelligence (AI) learns to diagnose disease. But giving a big penalty to an algorithm for false negatives results in much better precision, UJ researchers find. The research appears in Informatics in Medicine Unlocked, at https://doi.org/10.1016/j.imu.2021.100690. Credit: Graphic design by Therese van Wyk, University of Johannesburg, based on Pixabay images.

This is because accuracy has been defined in this way. But for health outcomes, it may be urgent to diagnose the 10 people with the disease and get them into treatment. That may be more important than complete accuracy about the 90 who do not have the condition, he adds.
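Mienye's 90/10 example can be checked in a few lines of Python (a toy illustration, not code from the study): a model that declares everyone healthy scores 90% accuracy yet catches none of the sick patients.

```python
y_true = [0] * 90 + [1] * 10   # 90 healthy people, 10 with the disease
y_pred = [0] * 100             # the algorithm declares everyone healthy

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
found_sick = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
recall = found_sick / sum(y_true)  # fraction of truly sick patients detected

print(accuracy)  # 0.9
print(recall)    # 0.0
```

The 90% accuracy figure hides a recall of zero, which is exactly the failure mode the cost-sensitive approach targets.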

Penalties against AI

In a research study published in Informatics in Medicine Unlocked, Mienye and Prof Yanxia Sun show how ML algorithms can be improved significantly for medical purposes. They used logistic regression, decision tree, XGBoost, and random forest algorithms.

These are supervised binary classification algorithms. That means they only learn from the ‘yes/no’ datasets provided to them.

Dr Mienye and Prof Sun are both from the Department of Electrical and Electronic Engineering Science at UJ.

The researchers built cost sensitivity into each of the algorithms.

This means the algorithm gets a much bigger penalty for telling a sick person in the dataset that they are healthy, than the other way round. In medical terms, the algorithms get bigger penalties for false negatives than for false positives.
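Each of the study's four algorithms has its own cost-sensitive variant; as a generic sketch of the underlying idea (the 10:1 penalty ratio below is our illustrative choice, not a figure from the paper), a weighted cross-entropy loss can charge far more for a missed diagnosis than for a false alarm:

```python
import math

COST_FN, COST_FP = 10.0, 1.0   # hypothetical penalties: a missed sick patient costs 10x

def cost_sensitive_log_loss(y_true, p_pred):
    """Cross-entropy loss where errors on the sick class (label 1) are up-weighted."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        if y == 1:
            total -= COST_FN * math.log(p)        # heavy penalty for false negatives
        else:
            total -= COST_FP * math.log(1.0 - p)  # ordinary penalty for false positives
    return total / len(y_true)

# The same mediocre predictions hurt far more under the cost-sensitive loss,
# pushing a learner to prioritise catching the one sick patient:
y = [0, 0, 0, 1]
p = [0.1, 0.1, 0.1, 0.2]
plain = sum(-math.log(pp) if yy else -math.log(1 - pp)
            for yy, pp in zip(y, p)) / len(y)
weighted = cost_sensitive_log_loss(y, p)
```

A model trained to minimise the weighted loss is steered toward recall on the disease class, mirroring the penalties the researchers built into their algorithms.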

Disease datasets AI learns from

Dr Mienye and Prof Sun used public learning datasets for diabetes, breast cancer, cervical cancer (858 records) and chronic kidney disease (400 records).

The datasets come from large hospitals or healthcare programs. In these binary datasets, people are classified as either having a disease, or not having it at all.

The algorithms they used are binary also. These can say “yes the person has the disease” or “no they don’t have it.” They tested all the algorithms on each dataset, both without and with the cost-sensitivity.

Significantly improved precision and recall

The results make it clear that the penalties work as intended in these datasets.

For chronic kidney disease for example, the Random Forest algorithm had precision at 0.972 and recall at 0.946, out of a perfect 1.000.

After the cost-sensitivity was added, the algorithm improved significantly to precision at 0.990 and recall at a perfect 1.000.

For CKD, the three other algorithms’ recall improved from high scores to a perfect 1.000.

Precision at 1.000 means the algorithm did not predict a single false positive across the entire dataset. Recall at 1.000 means the algorithm did not produce a single false negative across the entire dataset.
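In other words (a generic illustration of the standard definitions, not code from the study; the error counts below are chosen to roughly echo the Random Forest scores quoted above, not taken from its confusion matrix):

```python
def precision_recall(tp, fp, fn):
    """tp: true positives, fp: false positives, fn: false negatives."""
    precision = tp / (tp + fp)  # of those flagged sick, how many truly are
    recall = tp / (tp + fn)     # of those truly sick, how many were flagged
    return precision, recall

print(precision_recall(tp=35, fp=1, fn=2))  # imperfect scores, roughly (0.972, 0.946)
print(precision_recall(tp=37, fp=0, fn=0))  # perfect scores: (1.0, 1.0)
```

Perfect precision and recall are only possible when both error counts drop to zero, which is why the post-penalty scores of 1.000 are so striking.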

With the other datasets, the results were different for different algorithms.

For cervical cancer, the cost-sensitive Random Forest and XGBoost algorithms improved from high scores to perfect precision and recall. However, the Logistic Regression and Decision Tree algorithms improved to much higher scores but did not reach 1.000.

The precision problem

In general, algorithms have been more accurate at saying people do not have a disease, than identifying the ones who are sick, says Mienye. This is an ongoing challenge in healthcare AI.

The reason is the way the algorithms learn. The algorithms learn from datasets that come from large hospitals or state healthcare programs.

But most of the people in those datasets do not have the conditions they are being tested for, says Mienye.

“At a large hospital, a person comes in to get tested for chronic kidney disease (CKD). Their doctor sent them there because some of their symptoms are CKD symptoms. The doctor would like to rule out CKD. Turns out, the person does not have CKD.

“This happens with lots of people. The dataset ends up with more people who do not have CKD, than people who do. We call this an imbalanced dataset.”

When an algorithm starts learning from the dataset, it learns far less about CKD than it should, and isn’t accurate enough in diagnosing ill patients – unless the algorithm is adjusted for the imbalance.

AI on the other side of a boat ride

Mienye grew up in a village near the Atlantic Ocean that is not accessible by road.

“You have to use a speedboat from the nearest town to get there. The boat ride takes two to three hours,” he says.

The nearest clinic is in the bigger town, on the other side of the boat ride.

The deep rural setting of his home village inspired him to see how AI can help people with little or no access to healthcare.

An old lady from his village is a good example of how more advanced AI algorithms may assist in future, he says. A cost-sensitive multiclass ML algorithm could assess the measured data for her blood pressure, sodium levels, blood sugar and more.

If her data is recorded correctly on a computer, and the algorithm learns from a multiclass dataset, that future AI could tell clinic staff which stage of chronic kidney disease she is at.

This village scenario is in the future, however.

Meanwhile, the study’s four algorithms with cost sensitivity are far more precise at diagnosing disease in their numerical datasets.

And they learn quickly, using the ordinary computer that one could expect to find in a remote town.

Innovative chip built by UCPH physicists resolves quantum headache

Quantum physicists at the University of Copenhagen are reporting an international achievement for Denmark in the field of quantum technology. By simultaneously operating multiple spin qubits on the same quantum chip, they surmounted a key obstacle on the road to the supercomputer of the future. The result bodes well for the use of semiconductor materials as a platform for solid-state quantum computers.

One of the engineering headaches in the global marathon towards a large functional quantum computer is the control of many basic memory devices – qubits – simultaneously. This is because the control of one qubit is typically negatively affected by simultaneous control pulses applied to another qubit. Now, a pair of young quantum physicists at the University of Copenhagen’s Niels Bohr Institute – Ph.D. student (now postdoc) Federico Fedele, 29, and Asst. Prof. Anasua Chatterjee, 32, working in the group of Assoc. Prof. Ferdinand Kuemmeth – have managed to overcome this obstacle.

Photo: Federico Fedele, Anasua Chatterjee and Ferdinand Kuemmeth.

Global qubit research is based on various technologies. While researchers have come far with quantum processors based on superconductor technology, the UCPH research group is betting on semiconductor qubits – known as spin qubits.

“Broadly speaking, they consist of electron spins trapped in semiconducting nanostructures called quantum dots, such that individual spin states can be controlled and entangled with each other”, explains Federico Fedele. 

Spin qubits have the advantage of maintaining their quantum states for a long time. This potentially allows them to perform faster and more flawless computations than other platform types. And, they are so minuscule that far more of them can be squeezed onto a chip than with other qubit approaches. The more qubits, the greater a computer’s processing power. The UCPH team has extended the state of the art by fabricating and operating four qubits in a 2x2 array on a single chip.

The circuitry is 'the name of the game'

Thus far, the greatest focus of quantum technology has been on producing better and better qubits. Now it's about getting them to communicate with each other, explains Anasua Chatterjee:

"Now that we have some pretty good qubits, the name of the game is connecting them in circuits which can operate numerous qubits, while also being complex enough to be able to correct quantum calculation errors. Thus far, research in spin qubits has gotten to the point where circuits contain arrays of 2x2 or 3x3 qubits. The problem is that their qubits are only dealt with one at a time."

It is here that the young quantum physicists’ quantum circuit, made from the semiconducting substance gallium arsenide and no larger than the size of a bacterium, makes all the difference:

"The new and truly significant thing about our chip is that we can simultaneously operate and measure all qubits. This has never been demonstrated before with spin qubits – nor with many other types of qubits," says Chatterjee, who is one of two lead authors of the study, which has recently been published in the journal PRX Quantum.

Being able to operate and measure simultaneously is essential for performing quantum calculations. Indeed, measuring a qubit at the end of a calculation – that is, stopping the system to get a result – collapses its fragile quantum state. Measurement must therefore be synchronous, so that the quantum states of all qubits are shut down simultaneously. If qubits were instead measured one by one, the slightest ambient noise could alter the quantum information in the system.

About the chip 

The four spin qubits in the chip are made of the semiconducting material gallium arsenide. Situated between the four qubits is a larger quantum dot that connects the four qubits, and which the researchers can use to tune all of the qubits simultaneously. The illustration shows the size difference between spin qubits and superconducting qubits.

Milestone

The realization of the new circuit is a milestone on the long road to a semiconducting quantum computer.

“To get more powerful quantum processors, we have to not only increase the number of qubits but also the number of simultaneous operations, which is exactly what we did,” states Professor Kuemmeth, who directed the research.

At the moment, one of the main challenges is that the chip's 48 control electrodes need to be tuned manually, and kept tuned continuously despite environmental drift, which is a tedious task for a human. That's why his research team is now looking into how optimization algorithms and machine learning could be used to automate tuning. To allow the fabrication of even larger qubit arrays, the researchers have begun working with industrial partners to fabricate the next generation of quantum chips. Overall, the synergistic efforts from computer science, microelectronics engineering, and quantum physics may then lead spin qubits to the next milestones.