UMD's quantum supercomputer experiments show that combining imperfect pieces does not have to mean compounding their error rates

Pobody’s nerfect—not even the indifferent, calculating bits that are the foundation of computers. But JQI Fellow Christopher Monroe’s group, together with colleagues from Duke University, has made progress toward ensuring we can trust the results of quantum supercomputers even when they are built from pieces that sometimes fail. They have shown in an experiment, for the first time, that an assembly of quantum computing pieces can be better than the worst parts used to make it, a landmark step toward reliable, practical quantum supercomputers.

A chip containing an ion trap that researchers use to capture and control atomic ion qubits (quantum bits). (Credit: Kai Hudek/JQI)

In their experiment, the researchers combined several qubits—the quantum version of bits—so that they functioned together as a single unit called a logical qubit. They created the logical qubit based on a quantum error correction code so that, unlike for the individual physical qubits, errors can be easily detected and corrected, and they made it fault-tolerant—capable of containing errors to minimize their negative effects.

“Qubits composed of identical atomic ions are natively very clean by themselves,” says Monroe, who is also a Fellow of the Joint Center for Quantum Information and Computer Science and a College Park Professor in the Department of Physics at the University of Maryland. “However, at some point, when many qubits and operations are required, errors must be reduced further, and it is simpler to add more qubits and encode information differently. The beauty of error correction codes for atomic ions is they can be very efficient and can be flexibly switched on through software controls.”

This is the first time a logical qubit has been shown to be more reliable than the most error-prone step required to make it. The team was able to successfully put the logical qubit into its starting state and measure it 99.4% of the time, despite relying on six quantum operations that are individually expected to work only about 98.9% of the time.

That might not sound like a big difference, but it’s a crucial step in the quest to build much larger quantum computers. If the six quantum operations were assembly line workers, each focused on one task, the assembly line would only produce the correct initial state 93.6% of the time (98.9% multiplied by itself six times), an error rate roughly ten times worse than the one measured in the experiment. The improvement arises because the imperfect pieces work together to keep quantum errors from compounding and ruining the result, much as watchful workers catch each other's mistakes.
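For readers who want to check the arithmetic, here is a minimal Python sketch of that back-of-the-envelope comparison (illustrative only; it is not the experiment's analysis code, and the 0.6% figure used below is simply the measured error quoted later in this article):

    # Back-of-the-envelope comparison (illustrative; not the experiment's analysis code).
    # If six operations each succeed 98.9% of the time and their errors simply
    # compounded, the chance of preparing the right state would be 0.989**6.
    per_op_success = 0.989
    naive_success = per_op_success ** 6
    print(f"Naively compounded success rate: {naive_success:.1%}")      # ~93.6%
    print(f"Naively compounded error rate:   {1 - naive_success:.1%}")  # ~6.4%

    # The logical qubit was instead prepared and measured correctly 99.4% of the
    # time, an error rate of about 0.6% -- roughly ten times smaller.
    measured_error = 0.006
    print(f"Naive error / measured error: {(1 - naive_success) / measured_error:.1f}x")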

The results were achieved using Monroe’s ion-trap system at UMD, which uses up to 32 individual charged atoms—ions—that are cooled with lasers and suspended over electrodes on a chip. The researchers then use each ion as a qubit by manipulating it with lasers.

“We have 32 laser beams,” says Monroe. “And the atoms are like ducks in a row; each with its own fully controllable laser beam. I think of it like the atoms form a linear string and we're plucking it like a guitar string. We're plucking it with lasers that we turn on and off in a programmable way. And that's the computer; that's our central processing unit.”

By successfully creating a fault-tolerant logical qubit with this system, the researchers have shown that careful, creative designs have the potential to unshackle quantum computing from the inevitable errors of current hardware. Fault-tolerant logical qubits are a way to circumvent the errors in modern qubits and could be the foundation of quantum computers that are both reliable and large enough for practical uses.

Correcting Errors and Tolerating Faults

Developing fault-tolerant qubits capable of error correction is important because Murphy’s law is relentless: No matter how well you build a machine, something eventually goes wrong. In a computer, any bit or qubit has some chance of occasionally failing at its job. And the many qubits involved in a practical quantum computer mean there are many opportunities for errors to creep in.

Fortunately, engineers can design a computer so that its pieces work together to catch errors—like keeping important information backed up to an extra hard drive or having a second person read your important email to catch typos before you send it. In either case, both the people or both the drives have to mess up for a mistake to survive. While it takes more work to finish the task, the redundancy helps ensure the final quality.

Some prevalent technologies, like cell phones and high-speed modems, currently use error correction to help ensure the quality of transmissions and avoid other inconveniences. Error correction using simple redundancy can decrease the chance of an uncaught error as long as your procedure isn’t wrong more often than it’s right—for example, sending or storing data in triplicate and trusting the majority vote can drop the chance of an error from one in a hundred to less than one in a thousand.
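The jump from one in a hundred to less than one in a thousand follows from simple probability: with three copies and a majority vote, at least two copies must fail at once for an error to go uncaught. A minimal sketch of that calculation (a generic illustration, not tied to any particular device):

    # Classical triple-redundancy with majority vote (generic illustration).
    # An uncaught error requires at least two of the three copies to fail.
    p = 0.01  # one-in-a-hundred chance that any single copy is wrong
    uncaught = 3 * p**2 * (1 - p) + p**3   # exactly two fail, or all three fail
    print(f"Per-copy error rate:      {p}")
    print(f"Majority-vote error rate: {uncaught:.6f}")  # ~0.0003, below one in a thousand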

So while perfection may never be in reach, error correction can make a computer’s performance as good as required, as long as you can afford the price of using extra resources. Researchers plan to use quantum error correction to similarly complement their efforts to make better qubits and allow them to build quantum computers without having to conquer all the errors that quantum devices suffer from.

“What's amazing about fault tolerance, is it's a recipe for how to take small unreliable parts and turn them into a very reliable device,” says Kenneth Brown, a professor of electrical and computer engineering at Duke and a co-author. “And fault-tolerant quantum error correction will enable us to make very reliable quantum computers from faulty quantum parts.”

But quantum error correction has unique challenges—qubits are more complex than traditional bits and can go wrong in more ways. You can’t just copy a qubit, or even simply check its value in the middle of a calculation. The whole reason qubits are advantageous is that they can exist in a quantum superposition of multiple states and can become quantum-mechanically entangled with each other. To copy a qubit you have to know exactly what information it’s currently storing—in physical terms you have to measure it. And a measurement puts it into a single well-defined quantum state, destroying any superposition or entanglement that the quantum calculation is built on. 

So for quantum error correction, you must correct mistakes in bits that you aren’t allowed to copy or even look at too closely. It’s like proofreading while blindfolded. In the mid-1990s, researchers started proposing ways to do this using the subtleties of quantum mechanics, but quantum computers are just reaching the point where they can put the theories to the test.

The key idea is to make a logical qubit out of redundant physical qubits in a way that can check if the qubits agree on certain quantum mechanical facts without ever knowing the state of any of them individually.
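A genuine quantum code does this with parity-type measurements on extra qubits, but the flavor of the idea shows up even in a classical toy version: the checks below ask only whether neighboring bits agree, and that is enough to locate a single flipped bit without ever reading out the encoded value. This sketch is a simplified classical analogy, not the code used in the experiment.

    # Classical toy analogy: locate an error in a three-bit repetition code by
    # checking only whether neighboring bits AGREE, never the encoded value itself.
    def syndrome(bits):
        # Parity checks: do bits 0 and 1 agree? Do bits 1 and 2 agree?
        return (bits[0] ^ bits[1], bits[1] ^ bits[2])

    def correct(bits):
        # Each pattern of disagreements points to exactly one faulty bit.
        flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
        if flip is not None:
            bits[flip] ^= 1
        return bits

    encoded = [1, 1, 1]      # three redundant copies of one logical bit
    encoded[2] ^= 1          # a single bit flips
    print(correct(encoded))  # -> [1, 1, 1]; the fault was found from parities alone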

Can’t Improve on the Atom

There are many proposed quantum error correction codes to choose from, and some are more natural fits for a particular approach to creating a quantum supercomputer. Each way of making a quantum supercomputer has its own characteristic errors as well as unique strengths. So building a practical quantum supercomputer requires understanding and working with the particular errors and advantages that your approach brings to the table.

The ion trap-based quantum supercomputer that Monroe and colleagues work with has the advantage that their qubits are identical and very stable. Since the qubits are electrically charged ions, each qubit can communicate with all the others in the line through electrical nudges, giving them more freedom than systems that need a solid connection to immediate neighbors.

“They’re atoms of a particular element and isotope so they're perfectly replicable,” says Monroe. “And when you store coherence in the qubits and you leave them alone, it exists essentially forever. So the qubit when left alone is perfect. To make use of that qubit, we have to poke it with lasers, we have to do things to it, we have to hold on to the atom with electrodes in a vacuum chamber, all of those technical things have noise on them, and they can affect the qubit.”

For Monroe’s system, the biggest source of errors is entangling operations—the creation of quantum links between two qubits with laser pulses. Entangling operations are necessary parts of operating a quantum computer and of combining qubits into logical qubits. So while the team can’t hope to make their logical qubits store information more stably than the individual ion qubits, correcting the errors that occur when entangling qubits is a vital improvement.

The researchers selected the Bacon-Shor code as a good match for the advantages and weaknesses of their system. For this project, they only needed 15 of the 32 ions that their system can support, and two of the ions were not used as qubits but were only needed to get an even spacing between the other ions. For the code, they used nine qubits to redundantly encode a single logical qubit and four additional qubits to pick out locations where potential errors occurred. With that information, detected faults can, in theory, be corrected without measuring the state of any individual qubit, and so without compromising the “quantum-ness” of the logical qubit.
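As a rough sketch of why such a layout hangs together, the nine data qubits of the Bacon-Shor code are often drawn on a 3-by-3 grid, with X-type checks spanning pairs of adjacent columns and Z-type checks spanning pairs of adjacent rows (conventions for rows versus columns vary between papers, and this is not the experiment's control software). The short check below verifies that every such X-type and Z-type check overlaps on an even number of qubits, which is what allows them to commute and be measured compatibly:

    # Hedged sketch of one common presentation of the nine-qubit Bacon-Shor layout
    # (conventions vary; this is not the experiment's control software).
    from itertools import product

    grid = {(r, c): 3 * r + c for r, c in product(range(3), repeat=2)}

    # X-type stabilizers cover two adjacent columns; Z-type cover two adjacent rows.
    x_stabs = [{grid[r, c] for r in range(3) for c in (j, j + 1)} for j in range(2)]
    z_stabs = [{grid[r, c] for c in range(3) for r in (i, i + 1)} for i in range(2)]

    # An all-X product and an all-Z product commute exactly when they overlap on an
    # even number of qubits; here every overlap is a 2x2 block of four qubits.
    for xs, zs in product(x_stabs, z_stabs):
        assert len(xs & zs) % 2 == 0
    print("Every X-type check commutes with every Z-type check.")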

“The key part of quantum error correction is redundancy, which is why we needed nine qubits in order to get one logical qubit,” says JQI graduate student Laird Egan, who is the first author of the paper. “But that redundancy helps us look for errors and correct them because an error on a single qubit can be protected by the other eight.”

The team successfully used the Bacon-Shor code with the ion-trap system. The resulting logical qubit required six entangling operations—each with an expected error rate between 0.7% and 1.5%. But thanks to the careful design of the code, these errors did not combine into an even higher error rate when the entangling operations were used to prepare the logical qubit in its initial state.

The team only observed an error in the qubit's preparation and measurement 0.6% of the time, less than the lowest error expected for any of the individual entangling operations. The team was then able to move the logical qubit to a second state with an error of just 0.3%. The team also intentionally introduced errors and demonstrated that they could detect them.

“This is really a demonstration of quantum error correction improving performance of the underlying components for the first time,” says Egan. “And there's no reason that other platforms can't do the same thing as they scale up. It's really a proof of concept that quantum error correction works.”

As the team continues this line of work, they say they hope to achieve similar success in building even more challenging quantum logical gates out of their qubits, performing complete cycles of error correction where the detected errors are actively corrected, and entangling multiple logical qubits together.

“Up until this paper, everyone's been focused on making one logical qubit,” says Egan. “And now that we’ve made one, we're like, ‘Single logical qubits work, so what can you do with two?’”

UMD researchers explain how recently discovered earthquake-like disturbances occur in living cells

Animal cells get their structural integrity from their cytoskeleton, a shapeshifting mesh of filaments inside a cell that helps the cell organize its structure and communicate with its environment. A few years ago, scientists noticed that parts of the cytoskeleton would occasionally rearrange very rapidly, causing an earthquake-like disturbance in that part of the cell. They named these disturbances cytoquakes, but no one understood how or why they happened.

This image shows a simulated model cytoskeleton (red, green and blue mesh) contained within a cell membrane depicted in light blue. (Credit: Haoran Ni/University of Maryland)

New supercomputer simulations developed by University of Maryland researchers reveal that these cytoquakes are caused by the slow buildup and sudden release of mechanical energy within the cell. The researchers believe the quakes may help the cell respond rapidly to signals from the outside environment, like chemicals produced by other cells or hormones in the bloodstream.

“Cytoquakes represent a sudden remodeling of a very important component of the cell, but the physics behind them really wasn’t known,” said Garegin Papoian, a co-author of the study who is the Monroe Martin Professor of Chemistry and Biochemistry with a joint appointment in the Institute for Physical Science and Technology at the University of Maryland. “We think these cytoquakes must be biologically important because the cytoskeleton is involved in so many functions within the cell. Understanding the physics behind them can provide insight into how cells work.”

The cytoskeleton is like an internal scaffolding within animal cells. It is made of a network of filaments that constantly grow, shrink, attach and detach from one another. In addition to providing structure to a cell, the filaments also serve as tracks for chemical signals to flow from one part of a cell to another.

Papoian and his colleagues hypothesized that the sudden rapid restructuring that happens in cytoquakes was the result of the cytoskeleton’s physical structure being particularly sensitive to its environment. He likens it to the sensitivity of a pile of sand compared with a brick. Both may be made of the same molecules, but the brick holds its structure, even under pressure, without collapsing. The pile of sand may hold its structure for a long while but then suddenly collapses into an avalanche of sliding sand.

To test the hypothesis, the team created a supercomputer simulation of a model cytoskeleton using pioneering active-matter simulation software that they developed, called MEDYAN, short for “mechanochemical dynamics of active networks.” The software applies the laws of chemistry and physics to determine how the molecules within the cytoskeleton interact and behave.

The study revealed that the filaments in a cytoskeleton arrange themselves a bit like a shape-shifting tensegrity structure. In the macroscopic world, a tensegrity structure is a kind of geometric toy or sculpture made of cables and floating rods under tension and compression that appears to defy gravity. Analyzing these cellular tensegrity structures helped Papoian and his colleagues understand tension release within the cytoskeleton. They found that tension applied to one area of the structure can build until it suddenly releases in another area. In other words, the cytoskeleton behaves more like a pile of sand than a brick.

The physical structure of the cytoskeleton allows tension to build between some of the filaments, like the tension between grains of sand in a sand pile or between two tectonic plates along a fault line. When some threshold is reached, the tension suddenly releases: the pile of sand collapses, an earthquake rumbles or a cytoquake occurs.
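That slow-drive, sudden-release behavior is the hallmark of so-called self-organized criticality, and the classic toy model of it is a sandpile. The sketch below is offered purely as an analogy to the picture described above; it is a textbook Bak-Tang-Wiesenfeld sandpile, not the MEDYAN cytoskeleton simulation:

    # Toy sandpile model (Bak-Tang-Wiesenfeld), used here only as an analogy for
    # slow buildup and sudden release; it is NOT the MEDYAN cytoskeleton simulation.
    import random

    N = 20
    height = [[0] * N for _ in range(N)]   # grains of "sand" at each site

    def relax():
        """Topple every over-threshold site; return the avalanche size."""
        size = 0
        unstable = True
        while unstable:
            unstable = False
            for r in range(N):
                for c in range(N):
                    if height[r][c] >= 4:            # threshold reached: release
                        height[r][c] -= 4
                        size += 1
                        unstable = True
                        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            if 0 <= r + dr < N and 0 <= c + dc < N:
                                height[r + dr][c + dc] += 1  # pass grains to neighbors
        return size

    # Drive the pile slowly, one grain at a time. Most grains do nothing;
    # occasionally a single grain triggers a large, quake-like avalanche.
    for step in range(5000):
        r, c = random.randrange(N), random.randrange(N)
        height[r][c] += 1
        size = relax()
        if size > 100:
            print(f"step {step}: avalanche of {size} topplings")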

“We postulate that the cytoquake mechanism poises the cell to react quickly to external signals from its environment compared to a system without this mechanism,” Papoian said.

For example, if a cell involved in repairing injuries must rush to the site of a wound, the cytoquake mechanism may respond to chemical signals from the injury site by jolting the cell into motion. When a cell migrates through the body, the leading edge may also use this mechanism to project or collapse protrusions as the cell probes its local neighborhood.

The team’s next step will be to expand on their simulation methods to include more parts of a cell such as a nucleus. They recently simulated the outer membrane of a cell and analyzed how the cytoskeleton pushes against this membrane to form finger-like protrusions.

“This work is showing us that we can use MEDYAN to model important components of a cell,” Papoian said. “Ideally, we would like to keep going and essentially build the fundamental model of a whole-cell at single-molecule resolution.”

CfA astronomers use supercomputer simulations modeling the effects of stellar wind on an exoplanet

Most stars, including the sun, generate magnetic activity that drives a fast-moving, ionized wind and also produces X-ray and ultraviolet emission (often referred to as XUV radiation). XUV radiation from a star can be absorbed in the upper atmosphere of an orbiting planet, where it is capable of heating the gas enough for it to escape from the planet's atmosphere. M-dwarf stars, the most common type of star by far, are smaller and cooler than the sun, and they can have very active magnetic fields. Their cool surface temperatures result in their habitable zones (HZ) being close to the star (the HZ is the range of distances within which an orbiting planet's surface water can remain liquid). Any rocky exoplanets that orbit an M-dwarf in its HZ, because they are close to the star, are especially vulnerable to the effects of photoevaporation, which can result in partial or even total removal of the atmosphere. Some theorists argue that planets with substantial hydrogen or helium envelopes might become more habitable if photoevaporation removes enough of the gas blanket.

An illustration of the TRAPPIST-1 system of seven planets around an M-dwarf star. The star has both strong UV and X-ray emission as well as an ionized wind that can evaporate the atmosphere of a planet orbiting nearby. Astronomers have completed simulations using the TRAPPIST-1 system parameters that reveal the complex possible consequences of a stellar wind on a planet's atmosphere. (Credit: NASA/CalTech-JPL)
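For a sense of scale, the photoevaporation described above is often estimated with a so-called energy-limited escape formula, in which some fraction of the intercepted XUV power lifts gas out of the planet's gravity well. The sketch below uses that textbook approximation with placeholder numbers (an Earth-sized planet and an assumed XUV flux); none of the values are taken from the study described in this article:

    # Energy-limited photoevaporation estimate (textbook approximation with
    # placeholder values; not taken from the study described in this article).
    # Mass-loss rate ~ efficiency * pi * F_xuv * R_p**3 / (G * M_p)
    import math

    G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
    R_p = 6.37e6             # planet radius, m (Earth-sized, assumed)
    M_p = 5.97e24            # planet mass, kg (Earth-like, assumed)
    F_xuv = 1.0              # XUV flux at the planet, W/m^2 (placeholder value)
    efficiency = 0.1         # fraction of XUV energy driving escape (assumed)

    mdot = efficiency * math.pi * F_xuv * R_p**3 / (G * M_p)   # kg/s
    per_gyr = mdot * 3.15e16 / M_p                             # fraction of planet mass per Gyr
    print(f"Estimated mass-loss rate: {mdot:.2e} kg/s")
    print(f"Roughly {per_gyr:.1%} of the planet's mass per billion years")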

The effects of XUV radiation on exoplanet atmospheres have been studied for almost twenty years, but the effects of the stellar wind on exoplanet atmospheres are only poorly understood. Harvard-Smithsonian Center for Astrophysics (CfA) astronomers Laura Harbach, Sofia Moschou, Jeremy Drake, Julian Alvarado-Gomez, and Federico Frascetti and their colleagues have completed supercomputer simulations modeling the effects of stellar wind on an exoplanet with a hydrogen-rich atmosphere orbiting close to an M-dwarf star. As an example, they use the exoplanet configuration in TRAPPIST-1, a cool M-dwarf star with a system of seven planets, six of which are close enough to the star to be in its HZ.

The simulations show that, depending on the details, the stellar wind can generate outflows from a planet's atmosphere. The team finds that both the star's and the planet's magnetic fields play significant roles in defining many of the details of the outflow, which could be observed and studied via atomic hydrogen lines in the ultraviolet. The simulation results indicate that planets around M-dwarf host stars are likely to display a diverse range of atmospheric properties, and that some of the physical conditions can vary over short timescales, making observational interpretation of sequential exoplanet transits more complex. The results highlight the need to use 3D supercomputer simulations that include magnetic effects to interpret observational results for planets around M-dwarf stars.