LATEST

Today’s quantum technologies are set to revolutionize information processing, communications, and sensor technology in the coming decades. The basic building blocks of future quantum processors include, for example, atoms, superconducting quantum electronic circuits, spins in diamond crystals, and photons. In recent years it has become clear that none of these quantum building blocks can meet all the requirements on its own, such as receiving, storing, processing, and transmitting quantum signals. A research group headed by Professors József Fortágh, Reinhold Kleiner and Dieter Kölle of the University of Tübingen Institute of Physics has succeeded in coupling magnetically trapped atoms on a chip to a superconducting microwave resonator. Linking these two building blocks is a significant step towards a hybrid quantum system of atoms and superconductors, which will enable the further development of quantum processors and quantum networks. The study has been published in Nature Communications.

Quantum states allow especially efficient algorithms that far outstrip the conventional options available to date. Quantum communication protocols enable, in principle, unhackable data exchange. Quantum sensors yield the most precise physical measurement data. “To apply these new technologies in everyday life, we have to develop fundamentally new hardware components,” Fortágh says. Instead of the conventional signals used in today’s technology – bits, which can only be a one or a zero – the new hardware will have to process far more complex entangled quantum states.

“We can only achieve full functionality via the combination of different quantum building blocks,” Fortágh explains. Superconducting circuits, for example, allow fast calculations, but they can store quantum information only on very short time scales. Neutral atoms hovering above a chip’s surface interact only weakly with their environment, which makes them ideal for quantum storage and as emitters of photons for signal transmission. For this reason, the researchers connected the two components into a hybrid in their latest study. The hybrid quantum system combines nature’s smallest quantum electronic building blocks – atoms – with artificial circuits – superconducting microwave resonators. “We use the functionality and advantages of both components,” says the study’s lead author, Dr. Helge Hattermann. “The combination of the two unequal quantum systems could enable us to create a real quantum processor with superconducting quantum gates, atomic quantum storage, and photonic qubits.” Qubits are the smallest unit of quantum signals, analogous to bits in conventional computing.

The new hybrid system for future quantum processors and their networks forms a parallel with today’s technology, which is also a hybrid, as a look at your computer hardware shows: Calculations are made by microelectronic circuits; information is stored on magnetic media, and data is carried through fiber-optic cables via the internet. “Future quantum computers and their networks will operate on this analogy – requiring a hybrid approach and interdisciplinary research and development for full functionality,” Fortágh says.

The role and risks of bots, such as automated Twitter accounts, in influencing public opinion and political elections continues to provoke intense international debate and controversy. An insightful new collection of articles focused on “Computational Propaganda and Political Big Data” examines how these bots work, approaches to better detect and control them, and how they may have impacted recent elections around the globe. The collection is published in a special issue of Big Data, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers, and is available free on the Big Data website.

Guest Editors Philip N. Howard, PhD and Gillian Bolsover, DPhil, from University of Oxford, U.K. have compiled a compelling series of articles that provide a broad range of perspectives and examples on the use of bots in social and online media to achieve massive dissemination of propaganda and political messages.

In the article entitled "Social Bots: Human-Like by Means of Human Control?" coauthors Christian Grimme, Mike Preuss, Lena Adam, and Heike Trautmann, University of Münster, Germany, offer a clear definition of a bot and a description of how one works, using Twitter as an example. The authors examine the current technical limitations of bots and how the increasing integration of humans and bots can expand their capabilities and control, leading to more meaningful interactions with other humans. This hybridization of humans and bots requires more sophisticated approaches for identifying political propaganda distributed with social bots.



The article "Detecting Bots on Russian Political Twitter," coauthored by Denis Stukal, Sergey Sanovich, Richard Bonneau, and Joshua Tucker from New York University, presents a novel method for detecting bots on Twitter. The authors demonstrate the use of this method, which is based on a set of classifiers, to study bot activity during an important period in Russian politics. They found that on most days, more than half of the tweets posted by accounts tweeting about Russian politics were produced by bots. They also present evidence that a main activity of these bots was spreading news stories and promoting certain media outlets.

Fabian Schäfer, Stefan Evert, and Philipp Heinrich, Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany, evaluated more than 540,000 tweets from before and after Japan's 2014 general election, and present their results on the identification and behavioral analysis of social bots in the article entitled "Japan's 2014 General Election: Political Bots, Right-Wing Internet Activism, and Prime Minister Shinzō Abe’s Hidden Nationalist Agenda." The researchers focused on the repeated tweeting of nearly identical messages and present both a case study demonstrating multiple patterns of bot activity and potential methods for automated identification of these patterns. They also provide insights into the purposes behind the use of social and political bots.
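One simple way to flag the repeated, nearly identical messages described above is to compare tweets by the overlap of their word shingles. The sketch below is a generic illustration with made-up tweets and an arbitrary similarity threshold; it is not the Erlangen group's actual pipeline.

```python
# Hypothetical near-duplicate detection via Jaccard similarity of word shingles.
# The example tweets and the 0.5 threshold are assumptions for illustration only.
def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles of a lowercased tweet."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

tweets = [
    "Vote for candidate X, the only one who can fix the economy!",
    "Vote for candidate X, the only one who can fix the economy",
    "Lovely weather in Osaka today",
]
sigs = [shingles(t) for t in tweets]
for i in range(len(tweets)):
    for j in range(i + 1, len(tweets)):
        if jaccard(sigs[i], sigs[j]) > 0.5:
            print(f"tweets {i} and {j} look like near-duplicates")
```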

“Big Data is proud to present the first collection of academic articles on a subject that is front and center these days,” says Big Data Editor-in-Chief Vasant Dhar, Professor at the Stern School of Business and the Center for Data Science at New York University. “While social media platforms have created a wonderful social space for sharing and publishing information, it is becoming clear that they also pose significant risks to societies, especially open democratic ones such as ours, for use as weapons by malicious actors. We need to be mindful of such risks and take action to mitigate them. One of the roles of this special issue is to present evidence and scientific analysis of computational propaganda on social media platforms. I’m very pleased with the set of articles in this special issue.”

Image: Pukao are large, cylindrical stones made from a volcanic rock known as 'red scoria.' Weighing multiple tons, they were placed on the heads of the moai during prehistoric times, consistent with Polynesian traditions of honoring ancestors. Credit: Carl Lipo

Analysis of giant stone hats found on Rapa Nui, Chile (Easter Island) provides evidence contrary to the widely held belief that the ancient civilization had a warrior culture. According to a new study conducted by a team of researchers, including a professor at Binghamton University, State University of New York, these stone hats suggest that the people of Rapa Nui were part of a supportive and inclusive community.

Carl Lipo, anthropology professor and director of the Environmental Studies Program at Binghamton University, and a team of researchers studied the monumental statues (moai) on Rapa Nui, and the previously unacknowledged giant stone hats (pukao) that were placed atop them. Pukao are large, cylindrical stones made from a volcanic rock known as 'red scoria.' Weighing multiple tons, they were placed on the heads of the moai during prehistoric times, consistent with the Polynesian traditions of honoring their ancestors. 

The researchers produced the first study analyzing the pukao and their significance, examining the 70 multi-ton giant hats scattered around the island, which have gradually eroded over time. Using photographs to build 3D computer models (photogrammetry), the researchers were able to study the pukao in greater detail and discovered that far more drawings are carved into the hats than was previously thought.

"With the building mitigating any sense of conflict, the moai construction and pukao placement were key parts to the success of the island," said Lipo. "In our analysis of the archaeological records, we see evidence that demonstrates the prehistoric communities repeatedly worked together to build monuments. The action of cooperation had a benefit to the community by enabling sharing of information and resources."

While Easter Island is famous, the archaeological record of the island is not well-documented, said Lipo. He believes that scientists can learn a great deal from the pukao by examining this new information.

"Every time we look at the archaeological record of the island, we are surprised by what we find. There is much more to be learned from this remarkable place -- important answers that shed light on the abilities of our ancestors, as well as potential ideas for contemporary society about what it takes to survive on a tiny and remote island," said Lipo.

Researchers in Brazil calculated the overall electronic structure of the vacancy region of a crystal lattice through the unprecedented use of a hybrid functional method, which yielded results compatible with experimental data.

A study conducted at the University of São Paulo's Physics Institute (IF-USP), Brazil, has resolved a longstanding controversy dogging the international community of researchers dedicated to investigating defects in graphene. The controversy is related to the calculation of the overall electronic structure of defects. This configuration, which comprises many variables, was described in different ways depending on the researcher and the model used. The solution, which is identical for all models and compatible with experimental findings, was obtained by Chilean Ana María Valencia García and her PhD supervisor, Marília Junqueira Caldas, Full Professor at IF-USP.

An article authored by both researchers has been published in the journal Physical Review B with the title "Single vacancy defect in graphene: Insights into its magnetic properties from theoretical modeling". The journal's editors chose one of the figures from the article for inclusion in the Kaleidoscope section, which promotes interest in the esthetics of physics by featuring images selected for their artistic appeal.

García received a PhD scholarship from Chile's National Scientific & Technological Research Commission (CONICYT), while Caldas was supported by the National Organic Electronics Institute (INEO), which is funded jointly by FAPESP and Brazil's National Council for Scientific & Technological Development (CNPq).

"There were divergences in the community regarding whether the vacancy formed by removing a single carbon atom from a graphene sheet's crystal lattice causes a weak or strong magnetic moment, and regarding the strength of the magnetic interaction between vacancies," Caldas said. The vacancy prompts the surrounding atoms to rearrange themselves into new combinations to accommodate the absence of an atom, forming electron clusters known as "floating orbitals" at the vacant site.

Three important variables are associated with the phenomenon: the electron density, i.e., how the electrons are distributed; the electron levels, i.e., the energy levels occupied by the electrons; and the magnetic moment, i.e., the strength with which the electrons couple to, and experience torque from, an external magnetic field.

First use of a hybrid method for graphene

Reflecting on the divergence, Caldas found it strange, given that its proponents are all excellent researchers affiliated with renowned international institutions. Studies conducted by the group revealed that the divergent values derived from the use of different simulation methods.

"There are two ways to calculate the overall electron structure of the vacancy region, both derived from quantum mechanics: the Hartree-Fock (HF) method, and density functional theory (DFT). In DFT the calculation is performed by making each electron interact with average electron density, which includes the electron in question. In HF the operator used excludes the electron and considers only its interaction with the others. HF produces more precise results for electron structure but the calculation is far more laborious," Caldas said.

"The two methods are often combined by means of hybrid functionals, which have been mentioned in the scientific literature since the end of the twentieth century. I worked with them myself some time ago in a study on polymers, but they had never been used in the case of graphene. What Ana María [Valencia García] and I did was discover the hybrid functional that best describes the material. Applied to several models using supercomputer simulation, our hybrid functional produced the same result for them all and this result matched the experimental data."

Besides resolving the controversy, which had lasted years, and having one of its images selected for esthetic value, another interesting aspect of this research is the problem that motivated it. "We came to it via the interest aroused by a material known as anthropogenic dark earth or ADE," Caldas explained. "ADE is a kind of very dark, fertile soil found in several parts of the world including the Amazon. It retains moisture even at high temperatures and remains fertile even under heavy rain. It's called anthropogenic because its composition derives from middens and cultivation by indigenous populations in the pre-Columbian period at least two millennia ago. This intriguing material was known to have resulted from multi-stacked layers of graphene nanoflakes. It was our interest in ADE that led us to study the phenomenon of vacancy in graphene sheets."

In conclusion, it should be noted that there are potential applications of vacancy in graphene sheets, since information can be encoded in the defect and not in the entire structure. Much more research will be needed before applications can be developed, however.

Artificial intelligence (AI) is giving researchers at the University of Waterloo new insights to help reduce wear-and-tear injuries and boost the productivity of skilled construction workers.

Studies using motion sensors and AI software have revealed expert bricklayers use previously unidentified techniques to limit the loads on their joints, knowledge that can now be passed on to apprentices in training programs.

"The people in skilled trades learn or acquire a kind of physical wisdom that they can't even articulate," said Carl Haas, a professor of civil and environmental engineering at Waterloo. "It's pretty amazing and pretty important."

Surprisingly, the research shows master masons don't follow the standard ergonomic rules taught to novices. Instead, they develop their own ways of working quickly and safely. 

Examples include more swinging than lifting of blocks and less bending of their backs.

"They're basically doing the work twice as fast with half the effort - and they're doing it with higher quality," said Haas, who leads the research with Eihab Abdel-Rahman, a systems design engineering professor at Waterloo. "It's really intriguing."

In their first study, the researchers analyzed data from bricklayers of various experience levels who wore sensor suits while building a wall with concrete blocks. The data showed experts put less stress on their bodies, but were able to do much more work.

A follow-up study was done to determine how master masons work so efficiently. It involved the use of sensors to record their movements and artificial intelligence programs to identify patterns in their body positions.
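As a purely hypothetical sketch of what such pattern identification can look like, the example below clusters made-up joint-angle measurements into recurring posture patterns with k-means. The sensor features, values, and cluster count are assumptions, not the Waterloo team's pipeline.

```python
# Hypothetical posture-pattern discovery: cluster joint-angle samples with k-means.
# Feature definitions and numbers are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Made-up feature vectors per sampled instant:
# [trunk flexion, trunk lateral bend, shoulder elevation, knee flexion] in degrees.
novice = rng.normal([45, 10, 60, 20], [8, 4, 10, 6], size=(500, 4))  # back does the lifting
expert = rng.normal([15, 5, 70, 45], [6, 3, 8, 8], size=(500, 4))    # knees and swing do more work
postures = np.vstack([novice, expert])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(postures)
for label, centroid in enumerate(kmeans.cluster_centers_):
    print(f"posture pattern {label}: mean joint angles = {np.round(centroid, 1)}")
```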

The researchers now plan to do more in-depth study of how the experts move on the job.

"Skilled masons work in ways we can show are safer, but we don't quite understand yet how they manage to do that," said Hass, who compares their skill to a professional golf swing. "Now we need to understand the dynamics."

Musculoskeletal injuries are a significant problem in bricklaying, causing many apprentices to drop out and many experienced workers to prematurely wear out.

As part of their work, the researchers are now developing a system that uses sensor suits to give trainees immediate feedback so they can modify their movements to reduce stress on their bodies.

"There is an unseen problem with craft workers who are just wearing out their bodies," he said. "It's not humane and it's not good for our economy for skilled tradespeople to be done when they're 50."

A molecular dynamics model showing a nanoparticle binding to the outer envelope of the human papillomavirus. Credit: Petr Kral

Viral infections kill millions of people worldwide every year, but currently available antiviral drugs are limited in that they mostly act against one or a small handful of related viruses. A few broad-spectrum drugs that prevent viral entry into healthy cells exist, but they usually need to be taken continuously to prevent infection, and resistance through viral mutation is a serious risk.

Now, an international group of researchers, including UIC professor of chemistry Petr Kral, has designed new antiviral nanoparticles that bind to a range of viruses, including herpes simplex virus, human papillomavirus, respiratory syncytial virus, dengue virus and lentiviruses. Unlike other broad-spectrum antivirals, which simply prevent viruses from infecting cells, the new nanoparticles destroy viruses.

The team's findings are reported in the journal Nature Materials.

The new nanoparticles mimic a cell-surface protein called heparan sulfate proteoglycan (HSPG). A significant portion of viruses, including HIV, enter and infect healthy cells by first binding to HSPGs on the cell surface. Existing drugs that mimic HSPG bind to the virus and prevent it from binding to cells, but the strength of the bond is relatively weak. These drugs also can't destroy viruses, and the viruses can become reactivated when the drug concentration is decreased.

Kral and his colleagues, including Lela Vukovic, assistant professor of chemistry at the University of Texas at El Paso and an author on the paper, sought to design a new anti-viral nanoparticle based on HSPG, but one that would bind more tightly to viral particles and destroy them at the same time.

In order to custom-design the anti-viral nanoparticles, Kral and Vukovic's groups worked hand-in-hand with experimentalists, virus experts and biochemists from Switzerland, Italy, France and the Czech Republic.

"We knew the general composition of the HSPG-binding viral domains the nanoparticles should bind to, and the structures of the nanoparticles, but we did not understand why different nanoparticles behave so differently in terms of both binding strength and preventing viral entry into cells," said Kral.

Through elaborate computer simulations, Kral and colleagues helped solve these issues and guided the experimentalists in tweaking the nanoparticle design so that it worked better.

The researchers used advanced computational modeling techniques to generate precise structures of various target viruses and nanoparticles down to the location of each atom. A deep understanding of the interactions between individual groups of atoms within the viruses and nanoparticles allowed the researchers to estimate the strength and permanence of potential bonds that could form between the two entities, and helped them predict how the bonds could change over time and eventually destroy the virus.
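To give a flavor of how interaction strength between groups of atoms can be estimated, the toy sketch below sums a single Lennard-Jones term over all atom pairs between two made-up coordinate sets. Real virus-nanoparticle models use full force fields and molecular dynamics; the coordinates and parameters here are assumptions for illustration only.

```python
# Toy estimate of non-bonded interaction energy between two groups of atoms.
# Coordinates and Lennard-Jones parameters are hypothetical.
import numpy as np

EPSILON = 0.2  # kcal/mol, assumed well depth
SIGMA = 3.5    # angstroms, assumed contact distance

def lj_interaction_energy(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Sum the Lennard-Jones energy over all atom pairs between two groups."""
    diffs = group_a[:, None, :] - group_b[None, :, :]
    r = np.linalg.norm(diffs, axis=-1)
    return float(np.sum(4 * EPSILON * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)))

rng = np.random.default_rng(2)
virus_site = rng.uniform(0.0, 10.0, size=(30, 3))            # atoms on a viral surface domain
nanoparticle_ligand = rng.uniform(12.0, 22.0, size=(20, 3))   # atoms on a nearby ligand

print(f"approximate interaction energy: {lj_interaction_energy(virus_site, nanoparticle_ligand):.4f} kcal/mol")
```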

The team's final "draft" of the antiviral nanoparticle could bind irreversibly to a range of viruses and caused lethal deformations to the viruses, but had no effect on healthy tissues or cells. In vitro experiments with the nanoparticles showed that they bound irreversibly to herpes simplex virus, human papillomavirus, respiratory syncytial virus, dengue virus and lentivirus.

"We were able to provide the data needed to the design team so that they could develop a prototype of what we hope will be a very effective and safe broad-spectrum anti-viral that can be used to save lives," said Kral.

Credit: Vikram Mulligan. This conceptual art "Illuminating the energy landscape" shows the power of computational design to explore and illuminate structured peptides across the vast energy landscape.

New approaches developed at the UW Institute for Protein Design offer advantages for researchers attempting to design midsize drug compounds.

New computational strategies reported this week in Science might help realize the promise of peptide-based drugs.

Macrocyclic peptides have sparked pharmaceutical industry interest, because they have certain physical and chemical properties that could become the basis of a new generation of medications. Peptides are similar to protein molecules, but differ in their smaller size, structure and functions.

Small peptides have the benefits of small molecule drugs, like aspirin, and large antibody therapies, like rituximab, with fewer drawbacks.  They are stable like small molecules and potent and selective like antibodies. 

An example of a macrocyclic peptide drug success story is cyclosporine, used as an immunosuppressant for organ transplants and some autoimmune disorders. 

Before the work described in the Science paper, there was no way to systematically design ordered peptide macrocycles like cyclosporine. 

Naturally occurring peptides that might serve as reliable starting points, or scaffolds, are few. Equally frustrating, they often fail to perform as expected when repurposed. Instead, researchers had resorted to screening large, randomly generated libraries of compounds in the hope of finding what they needed.

The methods covered in the report, "Comprehensive computational design of ordered peptide macrocycles," now solve these problems.

The lead authors are Parisa Hosseinzadeh, Gaurav Bhardwaj and Vikram Mulligan, of the University of Washington School of Medicine Department of Biochemistry and the UW Institute for Protein Design. The senior author is David Baker, professor of biochemistry and head of the institute. Baker is also a Howard Hughes Medical Institute investigator.

"In our paper," the researchers noted, "we describe computational strategies for designing peptides that adopt diverse shapes with very high accuracy and for providing comprehensive coverage of the structures that can be formed by short peptides."

A conceptual illustration of sampling many peptide structures and selecting the best through their energy landscapes, by Ahmad Hosseinzadeh and Khosro Khosravi.

They pointed out the advantages of this new computational approach:

First, they were able to design and compile a library of many new stable peptide scaffolds that can provide the basic platforms for drug candidate architecture.  Their methods also can be used to design additional custom peptides with arbitrary shapes on demand.

"We sampled the diverse landscape of shapes that peptides can form, as a guide for designing the next generation of drugs," the researchers said.

Key to controlling the geometry and chemistry of the molecules was the design of peptides containing both natural amino acids, called L-amino acids, and their mirror opposites, D-amino acids. (The L and D come from the Latin words for rotating to the left or the right, as some molecular structures can have left- or right-handedness, a property called chirality.)

The D-amino acids improved pharmacological properties by increasing resistance to the natural enzymes that break down peptides. Including D-amino acids in the designs also allows for a more diverse range of shapes.

Designing peptides takes intensive computing power, resulting in expensive calculations. The researchers credited a cadre of citizen-scientist volunteers who donated spare smartphone and computer time. The Hyak supercomputer at the University of Washington also ran some of the programs.

The researchers pointed to future directions for their computational peptide-design approaches. They hope to design peptides that can permeate cell membranes and act inside living cells.

They also plan to add new functionality to peptide structures by stabilizing binding motifs at protein-protein interfaces for basic science studies. For clinical applications, they anticipate using their methods and scaffolds to develop peptide-based drugs.

The work published this week in Science was supported by awards from the National Institutes of Health, the Washington Research Foundation, the American Cancer Society, a Pew Latin-American fellowship, and the Howard Hughes Medical Institute.   Facilities sponsored by the U.S. Department of Energy also were used.

Other researchers on the project were: Matthew D. Shortridge, Timothy W. Craven, Fatima Párdo-Avila, Stephen A. Rettie, David E. Kim, Daniel-Adriano Silva, Yehia M. Ibrahim, Ian K. Webb, John R. Cort, Joshua N. Adkins and Gabriele Varani.


Computer algorithms detected the spread of cancer to lymph nodes in women with breast cancer as well as or better than pathologists.

Digital imaging of tissue sample slides for pathology has become possible in recent years because of advances in slide scanning technology. Artificial intelligence, in which computers learn to do tasks that normally require human intelligence, has potential for making diagnoses. Using computer algorithms to analyze digital pathology slide images could potentially improve the accuracy and efficiency of pathologists.
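A common way such systems work, in rough outline, is to classify small image patches cut from whole-slide scans as tumour or normal and then aggregate the patch results. The minimal sketch below uses random tensors in place of real patches and a deliberately tiny network; it is only an illustration of the idea, not any of the algorithms evaluated in the challenge.

```python
# Minimal, hypothetical patch-classification sketch (random data stands in for slide patches).
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN that labels a 64x64 RGB patch as tumour (1) or normal (0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PatchClassifier()
patches = torch.randn(8, 3, 64, 64)        # stand-in batch of patches
labels = torch.randint(0, 2, (8,))         # stand-in annotations
loss = nn.CrossEntropyLoss()(model(patches), labels)
loss.backward()                            # one illustrative training step
print(f"toy loss: {loss.item():.3f}")
```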

 
Researchers competed in an international challenge in 2016 to produce computer algorithms to detect the spread of breast cancer by analyzing tissue slides of sentinel lymph nodes, the lymph node closest to a tumor and the first place it would spread. The performance of the algorithms was compared against the performance of a panel of 11 pathologists participating in a simulation exercise.

Authors: Babak Ehteshami Bejnordi, M.S., Radboud University Medical Center, Nijmegen, the Netherlands and coauthors

Results:

  • Some computer algorithms were better at detecting cancer spread than pathologists in an exercise that mimicked routine pathology workflow.
  • Some algorithms were as good as an expert pathologist interpreting images without any time constraints.

Study Limitations: The test data on which algorithms and pathologists were evaluated are not comparable to the mix of cases pathologists encounter in clinical practice.

Study Conclusions: Computer algorithms detected the spread of cancer to lymph nodes in women with breast cancer as well as or better than pathologists. Evaluation in a clinical setting is required to determine the benefit of using artificial intelligence in pathology to detect cancer.

The editorial, "Deep Learning Algorithms for Detection of Lymph Node Metastases From Breast Cancer," by Jeffrey Alan Golden, M.D., Brigham and Women's Hospital, Boston

The study, "Development and Validation of a Deep Learning System for Diabetic Retinopathy and Related Eye Diseases Using Retinal Images From Multiethnic Populations With Diabetes," by Tien Yin Wong, M.D., Ph.D., Singapore National Eye Center, Singapore, and coauthors.


Novel combination of security proof techniques and protocols helps solve the puzzle of high-speed quantum key distribution for secure communication

A quantum information scientist from the National University of Singapore (NUS) has developed efficient "toolboxes" comprising theoretical tools and protocols for quantifying the security of high-speed quantum communication. Assistant Professor Charles Lim is part of an international team of experimental and theoretical scientists from Duke University, Ohio State University and Oak Ridge National Laboratory that has recently achieved a significant breakthrough in high-rate quantum secure communication.

Quantum computers are powerful machines that could break today's most prevalent encryption technologies in minutes. Crucially, recent progress in quantum computing indicates that this threat is no longer merely theoretical: large-scale quantum computers are becoming a reality. If successfully built, these computers could be exploited to decrypt any organisation's trade secrets, confidential communication, and sensitive data, retrospectively or remotely.

Quantum key distribution (QKD) is an emerging quantum technology that enables the establishment of secret keys between two or more parties in an untrusted network. Importantly, unlike conventional encryption techniques, the security of QKD is mathematically unbreakable -- it is based solely on the established laws of nature. As such, messages and data encrypted using QKD keys are completely secure against any attacks on the communication channel. For this reason, QKD is widely seen as the solution that will completely resolve the security threats posed by future quantum computers.

Today, QKD technology is relatively mature and there are now several companies selling QKD systems. Very recently, researchers from China managed to distribute QKD keys to two ground stations located 1,200 kilometres apart. However, despite these major developments and advances, practical QKD systems still face some inherent limitations. One major limitation is the secret key throughput: current QKD systems are only able to transmit 10,000 to 100,000 secret bits per second. This limitation is largely due to the choice of quantum information basis; many QKD systems still use a low-dimensional information basis, such as the polarisation basis, to encode quantum information.

"Poor secret key rates arising from current QKD implementations have been a major bottleneck affecting the use of quantum secure communication on a wider scale. For practical applications, such systems need to be able to generate secret key rates in the order of megabits per second to meet today's digital communication requirements," said Asst Prof Lim, who is from the Department of Electrical and Computer Engineering at NUS Faculty of Engineering as well as Centre for Quantum Technologies at NUS.

In the study, the research team developed a QKD system based on time and phase bases, which allows more secret bits to be packed into a single photon. Notably, the team achieved two secret bits in a single photon, with a secret key rate of 26.2 megabits per second.

The findings of the study were published online in scientific journal Science Advances on 24 November 2017.

Time-bin encoding

Encoding quantum information in the time and phase bases is a promising approach that is highly robust against typical optical channel disturbances and yet scalable in the information dimension. In this approach, secret bits are encoded in the arrival time of single photons, while the complementary phase states -- for measuring information leakages -- are encoded in the relative phases of the time states. This encoding technique, in principle, could allow one to pack arbitrarily many bits into a single photon and generate extremely high secret key rates for QKD. However, implementing such high-dimensional systems is technically challenging and tools for quantifying the practical security of high-dimensional QKD are limited.
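The arithmetic behind that claim is simple to illustrate: with d time bins, each detected photon can carry up to log2(d) raw bits, and the secret key rate scales with the detection rate minus error-correction and privacy-amplification overheads. The numbers below (detection rate, overhead fraction) are assumptions chosen only to show the calculation, not the experiment's actual parameters.

```python
# Illustrative back-of-envelope key-rate arithmetic for d-dimensional time-bin QKD.
# Detection rate and overhead fraction are hypothetical.
import math

def secret_key_rate(time_bins: int, detections_per_s: float, overhead_fraction: float) -> float:
    """Raw bits per photon times detection rate, reduced by an assumed overhead."""
    raw_bits_per_photon = math.log2(time_bins)
    return raw_bits_per_photon * detections_per_s * (1 - overhead_fraction)

d = 4               # four time bins -> up to 2 bits per detected photon
detections = 20e6   # assumed detected photons per second
overhead = 0.3      # assumed fraction lost to reconciliation and privacy amplification

print(f"bits per photon: {math.log2(d):.1f}")
print(f"estimated secret key rate: {secret_key_rate(d, detections, overhead) / 1e6:.1f} Mbit/s")
```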

To overcome these problems for their QKD system, the researchers used a novel combination of security proof techniques developed by Asst Prof Lim and an interferometry technique by Professor Daniel Gauthier's research group from Duke University and Ohio State University. Asst Prof Lim was involved in the protocol design of the QKD system as well as proving the security of the protocol using quantum information theory.

"Our newly developed theoretical and experimental techniques have resolved some of the major challenges for high-dimensional QKD systems based on time-bin encoding, and can potentially be used for image and video encryption, as well as data transfer involving large encrypted databases. This will help pave the way for high-dimensional quantum information processing," added Asst Prof Lim, who is one of the co-corresponding authors of the study.

Next steps

Moving forward, the team will be exploring ways to generate more bits in a single photon using time-bin encoding. This will help advance the development of commercially viable QKD systems for ultra-high rate quantum secure communication.

  1. UCLA chemists synthesize narrow ribbons of graphene using only light, heat
  2. New machine learning algorithm recognizes distinct dolphin clicks in underwater recordings
  3. UB's Pokhriyal uses machine learning framework to create new maps for fighting extreme poverty
  4. SwRI’s Marchi models the massive collisions after moon formation remodeled early Earth
  5. Canadian swarm-based supercomputer simulation strategy proves significantly shorter
  6. German astrophysicist simulates the size of neutron stars on the brink of collapse
  7. Patkar’s new supercomputational framework shows shifting protein networks in breast cancer may alter gene function
  8. Army researchers join international team to defeat disinformation cyberattacks
  9. Johns Hopkins researcher Winslow builds new supercomputer model that sheds light on biological events leading to sudden cardiac death
  10. Italian scientist Baroni creates new method to supercompute heat transfer from optimally short MD simulations
  11. French researchers develop supercomputer models for better understanding of railway ballast
  12. Finnish researchers develop method to measure neutron star size using supercomputer modeling based on thermonuclear explosions
  13. Using machine learning algorithms, German team finds identifying the microstructure of stock useful in financial crisis
  14. Russian prof Sergeyev introduces methodology working with numerical infinities and infinitesimals; opens new horizons in a supercomputer called Infinity
  15. ASU, Chinese researchers develop human mobility prediction model that offers scalability, a more practical approach
  16. Marvell Technology buys rival chipmaker Cavium for $6 billion in a cash-and-stock deal
  17. WPI researchers use machine learning to detect when online news is a paid-for pack of lies
  18. Johns Hopkins researchers develop model estimating the odds of events that trigger sudden cardiac death
  19. Young Brazilian researcher creates supercomputer model to explain the origin of Earth's water
  20. With launch of new night sky survey, UW researchers ready for era of big data astronomy
