UCF demos photonic materials for light-based supercomputing

The materials being developed at the University of Central Florida could allow for faster photonic supercomputers that use less energy, and could also one day lead to quantum supercomputing.

The University of Central Florida researchers are developing new photonic materials that could one day help enable low-power, ultrafast, light-based supercomputing. Image credit: Adobe Stock

The unique materials, known as topological insulators, are like wires that have been turned inside out, where the current runs along the outside and the interior is insulated.

Topological insulators are important because they could be used in circuit designs that allow for more processing power to be crammed into a single space without generating heat, thus avoiding the overheating problem today’s smaller and smaller circuits face.

In their latest work, the researchers demonstrated a new approach to creating such materials based on a novel chained honeycomb lattice design.

The researchers laser-etched the chained honeycomb design onto a sample of silica, the material commonly used to make photonic circuits.

Nodes in the design allow the researchers to modulate the current without bending or stretching the photonic wires, an essential feature needed for controlling the flow of light and thus information in a circuit.

The new photonic material overcomes drawbacks of contemporary topological designs, which offered fewer features and less control, while supporting much longer propagation lengths for information packets by minimizing power losses.

The researchers envision that the new design approach introduced by the bimorphic topological insulators will lead to a departure from traditional modulation techniques, bringing the technology of light-based computing one step closer to reality.

Topological insulators could also one day lead to quantum computing as their features could be used to protect and harness fragile quantum information bits, thus allowing processing power hundreds of millions of times faster than today’s conventional computers.

The researchers confirmed their findings using advanced imaging techniques and numerical simulations.

“Bimorphic topological insulators introduce a new paradigm shift in the design of photonic circuitry by enabling secure transport of light packets with minimal losses,” says Georgios Pyrialakos, a postdoctoral researcher with UCF’s College of Optics and Photonics and the study’s lead author.

The next steps for the research include the incorporation of nonlinear materials into the lattice that could enable the active control of topological regions, thus creating custom pathways for light packets, says Demetrios Christodoulides, a professor in UCF’s College of Optics and Photonics and study co-author.

Rice chemists skew the odds to prevent cancer

Theory shows mutations have few easy paths to establish themselves in cells and initiate tumors

The path to cancer prevention is long and arduous for legions of researchers, but new work by Rice University scientists shows that there may be shortcuts.

Rice chemist Anatoly Kolomeisky, lead author and postdoctoral researcher Hamid Teimouri and research assistant Cade Spaulding are developing a theoretical framework to explain how cancers caused by more than one genetic mutation can be more easily identified and perhaps stopped. A new paper shows how to increase the odds of identifying cancer-causing mutations before tumors take hold. Authors are, from left, Cade Spaulding, Anatoly Kolomeisky and Hamid Teimouri.

Essentially, the framework does so by identifying and ignoring transition pathways that don’t contribute much to the fixation of mutations in a cell that goes on to establish a tumor.

A study in the Biophysical Journal describes their analysis of the effective energy landscapes of cellular transformation pathways implicated in a variety of cancers. The ability to limit the number of pathways to the few most likely to kick-start cancer could help to find ways to halt the process before it ever really starts.

“In some sense, cancer is a bad-luck story,” said Kolomeisky, a professor of chemistry and of chemical and biomolecular engineering. “We think we can decrease the probability of this bad luck by looking for low-probability collections of mutations that typically lead to cancer. Depending on the type of cancer, this can range between two mutations and 10.”

Calculating the effective energies that dictate interactions in biomolecular systems can predict how they behave. The theory is commonly used to predict how a protein will fold, based on the sequence of its constituent amino acids and how they interact.

The Rice team is applying the same principle to cancer initiation pathways that operate in cells but sometimes carry mutations missed by the body’s safeguards. When two or more of these mutations are fixed in a cell, they are carried forward as the cells divide and tumors grow.

By their calculations, the odds favor the most dominant pathways, those that carry mutations forward while expending the least amount of energy, Kolomeisky said.

“Instead of looking at all possible chemical reactions, we identify the few that we might need to look at,” he explained. “It seems to us that most tissues involved in the initiation of cancer are trying to be as homogenous as possible. The rule is a pathway that decreases heterogeneity is always going to be the fastest on the road to tumor formation.”

The huge number of possible pathways seems to make narrowing them down an intractable problem. “But it turned out that using our chemical intuition and building an effective free-energy landscape helped by allowing us to calculate where in the process a mutation is likely to become fixed in a cell,” Kolomeisky said.

The team simplified calculations by focusing initially on pathways involving only two mutations that, when fixed, initiate a tumor. Kolomeisky said mechanisms involving more mutations will complicate calculations, but the procedure remains the same.
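As an illustration of how such two-mutation simulations work in general, here is a minimal sketch of a Moran-style birth-death process with invented parameters. It is not the authors' algorithm, only the flavour of stochastic model their framework accelerates:

```python
import random

def moran_two_hit(n=50, u=0.01, max_steps=20_000, seed=1):
    """Toy Moran process: cells are type 0 (wild), 1 (one mutation),
    or 2 (double mutant). Each step a random cell divides, its daughter
    may gain the next mutation with probability u, and the daughter
    replaces another random cell. Returns the step at which the first
    double mutant appears, or None if none appears in time."""
    rng = random.Random(seed)
    pop = [0] * n
    for step in range(max_steps):
        child = pop[rng.randrange(n)]   # pick a dividing cell
        if child < 2 and rng.random() < u:
            child += 1                  # daughter gains the next mutation
        pop[rng.randrange(n)] = child   # daughter replaces a random cell
        if child == 2:
            return step
    return None

# Estimate how often the two-hit event occurs within the time window.
trials = 20
hits = sum(moran_two_hit(seed=s) is not None for s in range(trials))
print(f"double mutant arose in {hits}/{trials} simulated tissues")
```

Enumerating every pathway this way becomes expensive quickly, which is why restricting attention to the dominant, lowest-energy pathways matters.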

Much of the credit goes to Spaulding, who under Teimouri’s direction created the algorithms that greatly simplify the calculations. The visiting research assistant was 12 when he first met Kolomeisky to ask for guidance. Having graduated from a Houston high school two years early, he joined the Rice lab last year at 16 and will attend Trinity University in San Antonio this fall.

An algorithm developed at Rice identifies and ignores transition pathways that don’t contribute much to the fixation of mutations in a cell that goes on to establish a tumor. Illustration by Hamid Teimouri

“Cade has outstanding ability in computer programming and in implementing sophisticated algorithms despite his very young age,” Kolomeisky said. “He came up with the most efficient Monte Carlo simulations to test our theory, where the size of the system can involve up to a billion cells.”

Spaulding said the project brought together chemistry, physics, and biology in a way that meshes with his interests, along with his computer programming skills. “It was a good way to combine all of the branches of science and also programming, which is what I find most interesting,” he said.

The study follows a 2019 paper in which the Rice lab modeled stochastic (random) processes to learn why some cancerous cells overcome the body’s defenses and trigger spread of the disease.

But understanding how those cells become cancerous in the first place could help head them off at the pass, Kolomeisky said. “This has implications for personalized medicine,” he said. “If a tissue test can find mutations, our framework might tell you if you are likely to develop a tumor and whether you need to have more frequent checkups. I think this powerful framework can be a tool for prevention.”

Franco lab makes laser bursts that drive fastest-ever logic gates

Researchers at Rochester and Erlangen have taken a decisive step toward creating ultra-fast supercomputers.

A long-standing quest for science and technology has been to develop electronics and information processing that operate near the fastest timescales allowed by the laws of nature. Synchronized laser pulses (red and blue) generate a burst of real and virtual charge carriers in graphene that are absorbed by gold metal to produce a net current. “We clarified the role of virtual and real charge carriers in laser-induced currents, and that opened the way to the creation of ultrafast logic gates,” says Ignacio Franco, associate professor of chemistry and physics at Rochester. (University of Rochester illustration / Michael Osadciw)

A promising way to achieve this goal involves using laser light to guide the motion of electrons in the matter and then using this control to develop electronic circuit elements—a concept known as lightwave electronics.

Remarkably, lasers currently allow us to generate bursts of electricity on femtosecond timescales—that is, in a millionth of a billionth of a second. Yet our ability to process information in these ultrafast timescales has remained elusive.

Now, researchers at the University of Rochester and the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) have made a decisive step in this direction by demonstrating a logic gate—the building block of computation and information processing—that operates at femtosecond timescales. The feat was accomplished by harnessing and independently controlling, for the first time, the real and virtual charge carriers that compose these ultrafast bursts of electricity.

The researchers’ advances have opened the door to information processing at the petahertz limit, where one quadrillion computational operations can be processed per second. That is almost a million times faster than today’s computers operating with gigahertz clock rates, where 1 petahertz is 1 million gigahertz.
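The arithmetic behind those comparisons is simple enough to check directly:

```python
# Unit check for the quoted speedup: petahertz vs. gigahertz clock rates.
petahertz = 1e15     # operations per second at the petahertz limit
gigahertz = 1e9      # a typical CPU clock rate today
speedup = petahertz / gigahertz
print(speedup)       # 1 PHz = 1,000,000 GHz, roughly a million-fold

# A femtosecond-scale laser burst implies petahertz-scale switching:
femtosecond = 1e-15  # one millionth of a billionth of a second
print(1 / femtosecond)  # about 1e15 switching events per second, i.e. ~1 PHz
```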

“This is a great example of how fundamental science can lead to new technologies,” says Ignacio Franco, an associate professor of chemistry and physics at Rochester who, in collaboration with doctoral student Antonio José Garzón-Ramírez ’21 (Ph.D.), performed theoretical studies that led to this discovery.

Lasers generate ultrafast bursts of electricity

In recent years, scientists have learned how to exploit laser pulses that last a few femtoseconds to generate ultrafast bursts of electrical currents. This is done, for example, by illuminating tiny graphene-based wires connecting two gold metals. The ultrashort laser pulse sets in motion, or “excites,” the electrons in graphene and, importantly, sends them in a particular direction—thus generating a net electrical current.

Laser pulses can produce electricity far faster than any traditional method—and do so in the absence of applied voltage. Further, the direction and magnitude of the current can be controlled simply by varying the shape of the laser pulse (that is, by changing its phase).

The breakthrough: Harnessing real and virtual charge carriers

The research groups of Franco and FAU’s Peter Hommelhoff have been working for several years to turn light waves into ultrafast current pulses.

In trying to reconcile the experimental measurements at Erlangen with computational simulations at Rochester, the team had a realization: In gold-graphene-gold junctions, it is possible to generate two flavors—“real” and “virtual”—of the particles carrying the charges that compose these bursts of electricity.

  • “Real” charge carriers are electrons excited by the light that remain in directional motion even after the laser pulse is turned off.
  • “Virtual” charge carriers are electrons that are only set in net directional motion while the laser pulse is on. As such, they are elusive species that only live transiently during illumination.

Because the graphene is connected to gold, both real and virtual charge carriers are absorbed by the metal to produce a net current.

Strikingly, the team discovered that by changing the shape of the laser pulse, they could generate currents where only the real or the virtual charge carriers play a role. In other words, they not only generated two flavors of currents, but they also learned how to control them independently, a finding that drastically augments the elements of design in lightwave electronics.

Logic gates through lasers

Using this augmented control landscape, the team was able to experimentally demonstrate, for the first time, logic gates that operate on a femtosecond timescale.

Logic gates are the basic building blocks needed for computations. They control how incoming information, which takes the form of 0 or 1 (known as bits), is processed. Logic gates require two input signals and yield a logic output.

In the researchers’ experiment, the input signals are the shape or phase of two synchronized laser pulses, each one chosen to only generate a burst of real or virtual charge carriers. Depending on the laser phases used, these two contributions to the currents can either add up or cancel out. The net electrical signal can be assigned logical information 0 or 1, yielding an ultrafast logic gate.
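The add-or-cancel principle can be sketched as a toy model. Everything here is an assumption for illustration: the sign conventions, the threshold, and the resulting truth table are invented, not the experimental scheme:

```python
def burst_current(phase_real, phase_virtual):
    """Toy model: each laser phase input selects the sign of one current
    contribution (real-carrier and virtual-carrier bursts); the detector
    sees only their sum."""
    real = +1.0 if phase_real == 0 else -1.0       # real-carrier burst
    virtual = +1.0 if phase_virtual == 0 else -1.0  # virtual-carrier burst
    return real + virtual

def gate(bit_a, bit_b):
    """Encode bits as phases, then read the net current back out as a bit.
    Contributions either add (|I| = 2) or cancel (I = 0)."""
    net = burst_current(0 if bit_a else 1, 0 if bit_b else 1)
    return 1 if abs(net) > 1.0 else 0   # strong net current -> logical 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, gate(a, b))   # with these conventions: 1 when inputs agree
```

With these (invented) sign conventions the gate happens to behave like XNOR; choosing different phase-to-bit assignments yields other truth tables, which is the design freedom the augmented control provides.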

“It will probably be a very long time before this technique can be used in a computer chip, but at least we now know that lightwave electronics is practically possible,” says Tobias Boolakee. He led the experimental efforts as a Ph.D. student at FAU.

“Our results pave the way toward ultrafast electronics and information processing,” says Garzón-Ramírez ’21 (Ph.D.), now a postdoctoral researcher at McGill University.

“What is amazing about this logic gate,” Franco says, “is that the operations are performed not in gigahertz, like in regular computers, but in petahertz, which are one million times faster. This is because of the short laser pulses used that occur in a millionth of a billionth of a second.”

From fundamentals to applications

This new, potentially transformative technology arose from fundamental studies of how charge can be driven in nanoscale systems with lasers.

“Through fundamental theory and its connection with the experiments, we clarified the role of virtual and real charge carriers in laser-induced currents, which opened the way to creating ultrafast logic gates,” says Franco.

The study represents more than 15 years of research by Franco. In 2007, as a Ph.D. student at the University of Toronto, he devised a method to generate ultrafast electrical currents in molecular wires exposed to femtosecond laser pulses. This initial proposal was later implemented experimentally in 2013 and the detailed mechanism behind the experiments was explained by the Franco group in a 2018 study. Since then, there has been what Franco calls “explosive” experimental and theoretical growth in this area.

“This is an area where theory and experiments challenge each other and, in doing so, unveil new fundamental discoveries and promising technologies,” he says.

Stockholm University researcher finds glacier recovery from climate warming harder than expected

Ice shelves are floating extensions of glaciers. If Greenland’s second-largest ice shelf breaks up, it may not recover unless Earth’s future climate cools considerably. 

A team of scientists from Stockholm University and the University of California Irvine investigated whether the Petermann Ice Shelf in northern Greenland could recover from a future breakup due to climate change. They used a sophisticated supercomputer model to simulate the potential recovery of the ice shelf. A crack in Petermann Ice Shelf observed by an international team of scientists during the Oden expedition in 2019. These cracks can eventually grow across the entire ice shelf, leading to the release of large icebergs to the ocean and potentially breakup of the ice shelf. Photo: Martin Jakobsson

“Even if Earth’s climate stopped warming, it would be difficult to rebuild this ice shelf once it has fallen apart”, says Henning Åkesson, who led the study at Stockholm University.

“If Petermann’s ice shelf is lost, we would have to go back in time towards a cooler climate reminiscent of the period before the industrial revolution to regrow Petermann”, Åkesson says.

Ice shelves reduce mass loss from our polar ice sheets. These gatekeepers thereby limit sea-level rise caused by climate warming. “The rationale to avoid breakup of ice shelves in the first place should be clearer than ever”, Åkesson says.

Glaciers are rapidly melting

Petermann is one of Greenland’s few remaining ice shelves and is being watched by Argus-eyed scientists worldwide after Manhattan-sized icebergs broke off from the ice shelf in 2010 and 2012, causing Petermann to lose 40 percent of its floating ice shelf. Scientists are concerned that further breakup or even collapse of the ice shelf would speed up ice flow from the interior ice sheet. In 2018, a new crack in the middle of the ice shelf was discovered, which renewed worries about Petermann’s state of health.

Ice-sheet experts are concerned

While this study focused on northwestern Greenland’s largest glacier, another grave concern is that the larger ice shelves found in Antarctica could be difficult to build back as well, should they break up too.

“This is just the first step, but chances are that our findings are not unique for Petermann Glacier and Greenland,” Åkesson says. “If they are not, near-future warming of the polar oceans may push the ice shelves protecting Earth’s ice sheets into a new retreated high-discharge state which may be exceedingly difficult to recover from.”

The ice-sheet experts stress that we need to pin down exactly how ice shelves break up and how much more warming they now can withstand before they fall apart.

DTU Compute, DIKU create ML model that maps the potentials of proteins

The biotech industry is constantly searching for the perfect mutation, where properties from different proteins are synthetically combined to achieve the desired effect. It may be necessary to develop new medicaments or enzymes that prolong the shelf-life of yogurt, break down plastics in the wild, or make washing powder effective at low water temperatures. The illustration depicts an example of the shortest path between two proteins, considering the geometry of the mapping. By defining distances in this way, it is possible to reach biologically more precise and robust conclusions. Credit: W. Boomsma, N. S. Detlefsen, S. Hauberg

New research from DTU Compute and the Department of Computer Science at the University of Copenhagen (DIKU) could in the long term help the industry accelerate this process. The researchers show how a new way of using machine learning (ML) draws a map of proteins, making it possible to draw up a shortlist of the proteins worth examining more closely.

In recent years, researchers have started to use machine learning to form a picture of permitted mutations in proteins. The problem, however, is that you get different pictures depending on which method you use, and even training the same model several times can yield different answers about how the biology is related.

"In our work, we are looking at how to make this process more robust, and we are showing that you can extract significantly more biological information than you have previously been able to. This is an important step forward to be able to explore the mutation landscape in the hunt for proteins with special properties," says Postdoc Nicki Skafte Detlefsen from the Cognitive Systems Section at DTU Compute.

The map of the proteins
A protein is a chain of amino acids, and a mutation occurs when just one of these amino acids in the chain is replaced with another. As there are 20 natural amino acids, this means that the number of mutations increases so quickly that it is completely impossible to study them all. There are more possible mutations than there are atoms in the universe, even if you look at simple proteins. It is not possible to test everything experimentally, so you must be selective about which proteins you want to try to produce synthetically.
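To see why exhaustive testing is hopeless, consider an illustrative protein of 300 amino acids (the length is an assumption chosen for the example):

```python
AMINO_ACIDS = 20
length = 300                       # an illustrative protein length
sequences = AMINO_ACIDS ** length  # every possible sequence of that length
atoms_in_universe = 10 ** 80       # common order-of-magnitude estimate

print(f"possible sequences ~ 10^{len(str(sequences)) - 1}")
print(sequences > atoms_in_universe)   # the space dwarfs the universe

# Even the single-substitution neighbourhood is already sizeable:
single_mutants = length * (AMINO_ACIDS - 1)
print(single_mutants)   # 5700 single-point mutants of one 300-residue protein
```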

The researchers from DIKU and DTU Compute have used their ML model to generate a picture of how proteins are linked. By presenting the model with many examples of protein sequences, it learns to draw a map with a dot for each protein, so that closely related proteins are placed close to each other while distantly related proteins are placed far apart.

The ML model is based on mathematics and geometry developed to draw maps. Imagine that you must make a map of the globe. If you zoom in on Denmark, you can easily draw a map on a piece of paper that preserves the geography. But if you must draw the earth, mistakes will occur because you stretch the globe, so that the Arctic becomes a long country instead of a pole. So, on the map, the earth is distorted. For this reason, research in map-making has developed a lot of mathematics that describe the distortions and compensate for the distortions on the map.

This is exactly the theory that DIKU and DTU Compute have been able to expand to cover their Machine Learning model (deep learning) for proteins. Because they have mastered the distortion on the map, they can also compensate for it.

"It enables us to talk about what a sensible measure of distance between closely related proteins is, and then we can suddenly measure it. In this way, we can draw a path through the map of the proteins that tells us how we expect one protein to evolve into another – i.e., mutate – since they are all related through evolution. The ML model can thus measure a distance between the proteins and draw optimal paths between promising proteins," says Wouter Boomsma, Associate Professor in the Section for Machine Learning at DIKU.
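A drastically simplified sketch of the path-finding idea, assuming made-up 2-D map coordinates and a plain nearest-neighbour graph (the actual model computes distances in a learned latent space and corrects for its distortions):

```python
import heapq
import math
import random

rng = random.Random(0)
n, k = 30, 4
# Stand-in "map" coordinates for 30 proteins; in the real model these
# would be learned latent embeddings, not random points.
points = [(rng.random(), rng.random()) for _ in range(n)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Connect each protein to its k nearest neighbours, made symmetric.
edges = {i: set(sorted(range(n), key=lambda j: dist(points[i], points[j]))[1:k + 1])
         for i in range(n)}
for i in range(n):
    for j in list(edges[i]):
        edges[j].add(i)

def dijkstra(start):
    """Shortest map distances from `start` through the neighbour graph."""
    best, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > best[node]:
            continue
        for nb in edges[node]:
            nd = d + dist(points[node], points[nb])
            if nd < best.get(nb, math.inf):
                best[nb], prev[nb] = nd, node
                heapq.heappush(heap, (nd, nb))
    return best, prev

best, prev = dijkstra(0)
goal = max(best, key=best.get)   # the reachable protein farthest from 0
path, node = [goal], goal
while node != 0:
    node = prev[node]
    path.append(node)
path.reverse()
print(path)   # indices of intermediate "candidate proteins" along the route
```

The intermediate stops on such a path play the role of the candidate proteins mentioned above: plausible waypoints between two known proteins.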

The researchers have tested the model on data from numerous proteins found in nature whose structure is known, and they can see that distances on the map begin to correspond to the evolutionary development of the proteins, so that proteins that are evolutionarily close to each other are placed near each other.

"We are now able to put two proteins on the map and draw the curve between them. Along the path between the two proteins lie possible proteins with closely related properties. This is no guarantee, but it offers a hypothesis about which proteins the biotech industry ought to test when designing new proteins," says Søren Hauberg, professor in the Cognitive Systems Section at DTU Compute.

The unique collaboration between DTU Compute and DIKU was established through a new center for Machine Learning in Life Sciences (MLLS), which started last year with the support of the Novo Nordisk Foundation. In the center, researchers in artificial intelligence from both universities are working together to solve the fundamental problems in Machine Learning driven by important issues within the field of biology.

The developed protein maps are part of a large-scale project that spans from basic research to industrial applications, e.g. in collaboration with Novozymes and Novo Nordisk.

FACT BOX: Artificial intelligence, machine learning, and deep learning

When computer programs can do something 'smart', it is called artificial intelligence – or just AI. Artificial intelligence is thus a unified concept that covers several methods. One of the methods is Machine Learning, and the latest and most advanced use of Machine Learning is called Deep Learning.

Deep Learning is based on neural networks: mathematical models that, from a given dataset and without direct programming, can learn to find patterns in data. Because the model is built from data, it is called a data-driven model.

In unsupervised learning, the goal is to train a neural network to discover the underlying patterns in the data. This is typically done by attempting to compress the data: the least frequent trends are discarded, while the most important structure is kept, so the underlying patterns become visible.

Over many repetitions, the network learns which patterns in the data can be used to compress it.
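As a linear stand-in for this compression idea, principal component analysis keeps the data directions that carry the most structure and discards the rest; deep learning replaces the linear projection with a neural network. A minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: 200 samples that really live on a 2-D pattern
# embedded in 10 dimensions, plus a little noise.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 10))
data = latent @ mixing + 0.01 * rng.normal(size=(200, 10))

# "Compress": project onto the top-2 principal directions.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
codes = centered @ vt[:2].T        # compact 2-D representation
reconstruction = codes @ vt[:2]    # decompress back to 10-D

# The round trip discards only the rare noise directions; the dominant
# pattern survives, which is what "seeing the underlying patterns" means.
error = np.abs(reconstruction - centered).max()
print(error)   # small, on the order of the added noise
```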

Once the model has been trained, it is tested on unknown data, which then also can be compressed into a compact representation that can be interpreted to form scientific hypotheses or form the foundation for other Machine Learning models.