Northeastern researchers' breakthrough in quantum sensing provides new material for making qubits

For decades now, the world has become increasingly reliant on computers and sensors to do just about everything, and the technologies themselves are getting smaller, faster, and more efficient. Take your smartphone as an example: a pocket-sized piece of mostly aluminum, iron, and lithium that is millions of times more powerful than the computers that guided the Apollo 11 moon landing in 1969.

Advancements in quantum technologies, which harness the properties of quantum physics, promise to take things a step further and revolutionize virtually every industry and aspect of daily life. The result could be more powerful and energy-efficient devices. But getting there requires physicists to get creative about how they exploit the strange ways atoms interact with one another.

It turns out that atomic defects in certain solid crystals may be key to unleashing the potential of the quantum revolution, according to discoveries by Northeastern researchers. The defects are essentially irregularities in the way that atoms are arranged to form crystalline structures. Those irregularities could provide the physical conditions to host something called a quantum bit, or qubit for short—a foundational building block for quantum technologies, says Arun Bansil, university distinguished professor in the Department of Physics at Northeastern. 

Qubits are fundamentally different from classical computer bits, which are the most basic units of information in computing. But because both are realized in incredibly small pieces of matter, they are subject to the forces at work in the enigmatic and elusive world of the nanoscale.

Bansil and colleagues found that defects in a certain class of materials, specifically two-dimensional transition metal dichalcogenides, have atomic properties conducive to making qubits. Bansil says the findings amount to something of a breakthrough, particularly for quantum sensing, and may help accelerate the pace of technological change.

“If we can learn how to create qubits in this two-dimensional matrix, that is a big, big deal,” Bansil says. 

Transition metal dichalcogenides have a diverse range of quantum properties, making them especially attractive for scientific investigation, Bansil says. Researchers in the field have said that the unique materials have “virtually unlimited potential in various fields, including electronic, optoelectronic, sensing, and energy storage applications.”

Using advanced computational methods, Bansil and his colleagues sifted through hundreds of different material combinations to find those capable of hosting a qubit.

“When we looked at a lot of these materials, in the end, we found only a handful of viable defects—about a dozen or so,” Bansil says. “Both the material and type of defect are important here because in principle there are many types of defects that can be created in any material.”

The key finding of the study is that the so-called “antisite” defect in films of the two-dimensional transition metal dichalcogenides carries something called “spin” with it. Spin, a form of intrinsic angular momentum, is a fundamental property of electrons that is measured in one of two states: up or down, Bansil says.
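In the standard spin-qubit encoding (a generic sketch, not specific to the antisite defect studied here), the electron's two spin states along a chosen axis double as the two computational basis states:

```latex
% Spin-1/2 along z has two eigenstates with eigenvalues +/- hbar/2 ...
\hat{S}_z \lvert \uparrow \rangle = +\tfrac{\hbar}{2} \lvert \uparrow \rangle,
\qquad
\hat{S}_z \lvert \downarrow \rangle = -\tfrac{\hbar}{2} \lvert \downarrow \rangle,
% ... and the qubit encoding simply identifies them with 0 and 1:
\qquad
\lvert \uparrow \rangle \equiv \lvert 0 \rangle,
\quad
\lvert \downarrow \rangle \equiv \lvert 1 \rangle .
```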

To get a better sense of what a qubit is and how it can be applied to future computers and sensors, it’s important to understand how data is processed in existing “classical” computers. Classical computers use bits to perform computations. When you’re doing almost anything on a computer, you’re sending it a set of instructions that engages a central processing unit, or CPU. The CPU is made of circuitry that uses electrical signals to direct the whole computer to carry out program instructions that are stored in the system’s memory.

These signals communicate using information that’s encoded, or packaged, into bits. The information is represented numerically as one of two values, 0 or 1, which describe the state of a circuit as either on or off. All modern electronic devices operate through circuit components that send and receive information essentially by manipulating these 0s and 1s, Bansil says.

Qubits behave quite differently from existing bits, thanks to counterintuitive quantum mechanical properties. What makes a qubit different is that its value is fluid, meaning—and here’s where things get weird—it can be both 0 and 1 at the same time. This is because of something called superposition, a core principle of quantum mechanics which holds that a quantum system can exist in multiple states at once until it is measured.

Quantum information systems can instead use the probability that a qubit will be found in one state or the other when measured, or observed, to make calculations.
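To make superposition and measurement concrete, here is a minimal numerical sketch in plain NumPy, independent of any particular qubit hardware: a single-qubit state is just a two-entry vector of complex amplitudes, and the squared magnitude of each amplitude gives the probability of measuring 0 or 1 (the Born rule).

```python
import numpy as np

# Computational basis states |0> and |1> as vectors.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An equal superposition -- the "both 0 and 1 at once" state:
# |psi> = (|0> + |1>) / sqrt(2)
psi = (ket0 + ket1) / np.sqrt(2)

# Born rule: outcome probabilities are squared amplitude magnitudes.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -- a 50/50 chance of reading out 0 or 1

# Simulate repeated measurements: each collapses the qubit to a
# definite 0 or 1, with frequencies matching the probabilities.
rng = np.random.default_rng(seed=0)
outcomes = rng.choice([0, 1], size=1000, p=probs)
print(outcomes.mean())  # ~0.5
```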

“What is unique about a quantum bit is that it can essentially code two different states at the same time,” Bansil says. “You are in a position to actually in principle store a very large number of possibilities in a very small number of qubits simultaneously.”
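Bansil's point about density can be made concrete: the joint state of n qubits is described by 2^n complex amplitudes, so the size of the description doubles with every qubit added. A short NumPy sketch (illustrative only):

```python
import numpy as np

# A single qubit in an equal superposition of 0 and 1.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

def joint_state(qubits):
    """Combine single-qubit states into one joint state vector via
    the tensor (Kronecker) product: n qubits -> 2**n amplitudes."""
    state = np.array([1.0 + 0j])
    for q in qubits:
        state = np.kron(state, q)
    return state

for n in (1, 10, 20):
    print(n, joint_state([plus] * n).size)  # 2, 1024, 1048576
```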

The challenge for researchers has been finding qubits stable enough to use, given the difficulty of pinning down the precise atomic conditions under which they can be physically realized.

“The current qubits available—especially those involved in quantum computing—all operate at very low temperatures, making them incredibly fragile,” Bansil says. That’s why the discovery of transition metal dichalcogenides’ defects holds such promise, he adds. 

Quantum Brilliance named commercialization partner in $17.5M German quantum supercomputing research project

Funded by the Federal Ministry of Education and Research (BMBF), the Fraunhofer Institute for Applied Solid State Physics (Fraunhofer IAF) is to lead the development of a spin-photon, diamond-based quantum supercomputer demonstrator

Quantum Brilliance, a German-Australian manufacturer of innovative quantum supercomputing hardware, is the commercialization partner in a $17.5 million research project funded by the German government to develop a compact, scalable quantum supercomputer demonstrator with spin-photon qubits leveraging synthetic diamonds.

Led by the Fraunhofer Institute for Applied Solid State Physics IAF, the three-year project aims to develop a demonstrator that delivers low error rates and reliable operation at room temperature so it can be used adjacent to classical computer systems. Researchers believe the quantum processor will eventually be able to calculate the results of highly complex quantum chemical reactions, among other applications.

Funded by the Federal Ministry of Education and Research, the SPINNING (Spin-Photon-based Quantum Computer based on Diamond) project will include the participation of 28 experts from science and industry. As the commercialization partner, Quantum Brilliance will provide input on the economics of producing the system and on the broad range of practical applications that stand to benefit from its development.

"The aim of our work is, among other things, to ensure reliable operation of such an innovative quantum computer and to create a peripheral system to make computing power available to a broad group of users, for example via cloud computing," said Prof. Rüdiger Quay, Ph.D., project coordinator and managing director of Fraunhofer IAF.

To develop the quantum processor with spin qubits made of synthetic diamond, nitrogen atoms are implanted at specific sites in the diamond lattice to create nitrogen-vacancy (NV) centers. These act as computing nodes whose quantum states are linked by light, laying the foundation for later scaling. The first demonstrator model is planned to deliver up to 10 qubits; a later model is to offer 100 qubits or more while providing maximum connectivity and configurability.

"We are delighted to be part of this exciting BMBF-funded project led by Fraunhofer IAF. Quantum computing is one of the key industries of the future – with a potential that is second to none,” said Mark Mattingley-Scott, European Head of Quantum Brilliance. “With its research landscape, local industry, and the support of the public sector, Germany has the perfect conditions to be a leader in this promising industry.”

In addition to Quantum Brilliance, six universities, two non-profit research institutions, four industrial companies, and fourteen associated partners are working on the project under the direction of Fraunhofer IAF. All participants are highly active in the field of pre-competitive hardware, firmware, and software development and include:

  • Fraunhofer Institute for Applied Solid State Physics IAF (Coordinator)
  • Fraunhofer Institute for Integrated Systems and Device Technology IISB
  • Forschungszentrum Jülich GmbH
  • Karlsruhe Institute of Technology (KIT)
  • University of Konstanz
  • Heidelberg University
  • Technical University of Munich
  • University of Ulm
  • Diamond Materials GmbH, Freiburg im Breisgau
  • NVision Imaging Technologies GmbH, Ulm
  • Qinu GmbH, Karlsruhe
  • University of Stuttgart
  • Quantum Brilliance GmbH, Stuttgart
  • Swabian Instruments GmbH, Stuttgart

Korean research team develops core AI software, a deep learning compiler

ETRI has released system software that improves deep learning inference latency and has been adopted as a standard. The software improves compatibility between AI chips and applications while reducing development cost and time.

A Korean research team has unveiled a core technology that reduces the time and cost that small and medium-sized enterprises and startups invest in developing artificial intelligence (AI) chips. The new system software resolves the compatibility and scalability problems between hardware and software that have been obstacles in the past, and it is expected to accelerate AI chip development.

The Electronics and Telecommunications Research Institute (ETRI) has developed a core piece of AI software, the deep learning compiler NEST-C. It was released on GitHub along with ETRI's internally developed AI chip hardware so that developers can use it easily.

As AI technology develops, deep learning application services are expanding into various fields. And as the AI algorithms that implement these services become more complex, the need for faster and more efficient arithmetic processing has grown.

ETRI addressed this problem by defining a common intermediate representation (IR) suited to AI applications and applying it in the NEST compiler. By resolving the heterogeneity between AI applications and AI chips, the IR makes AI chip development easier. The technology has also been adopted as a standard by the Telecommunications Technology Association (TTA).
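NEST-C itself is published on GitHub; the snippet below is only a hypothetical sketch of what a common intermediate representation buys: the model becomes a hardware-neutral graph of generic operators that any front end can produce and any backend can consume. The class and operator names here are invented for illustration, not NEST-C's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One generic operator in the hardware-neutral graph."""
    op: str                                 # e.g. "conv2d", "relu"
    inputs: list = field(default_factory=list)
    attrs: dict = field(default_factory=dict)

@dataclass
class Graph:
    nodes: list = field(default_factory=list)

    def add(self, op, inputs=(), **attrs):
        node = Node(op, list(inputs), attrs)
        self.nodes.append(node)
        return node

# The same graph could come from TensorFlow, PyTorch, ONNX, etc.,
# and be handed unchanged to a CPU, GPU, or NPU backend.
g = Graph()
x = g.add("input", shape=(1, 3, 224, 224))
c = g.add("conv2d", [x], out_channels=16, kernel=3)
g.add("relu", [c])
print([n.op for n in g.nodes])  # ['input', 'conv2d', 'relu']
```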

Attention is shifting from CPUs and GPUs to neural processing units (NPUs), AI chips that specialize in AI computation. To run applications such as autonomous driving, the Internet of Things (IoT), and sensing, an AI chip optimized for each application must be designed.

At the same time, an optimized compiler must produce accurate execution code for each application to achieve the best performance. This is why the deep learning compiler matters: it is the core system software that guarantees the inference performance of deep learning models, acting as a bridge between the hardware and the application software.

In general, chip manufacturers develop this tool along with the AI chip and its system software for sale. Until now, SMEs and startups have had difficulty focusing their capabilities on chip design, because considerable time must be devoted to developing and optimizing the system software and applications. System software supplied by a large manufacturer is optimized for its own chips, and because it is developed privately, its wider applicability is limited.

In particular, a different compiler has had to be developed for each chip type and AI application, limiting interoperability and extensibility into new areas.

The new compiler shortens the time needed to develop and optimize applications, which in turn helps reduce the cost of producing and selling chips. It is compatible with NPUs as well as CPUs and GPUs.

Its advantage grows as it supports more types of AI applications and chips. In the past, as many compilers had to be developed as there were combinations of deep learning platform and chip type; now a single, highly versatile NEST compiler can take their place, as the sketch below illustrates.
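A hypothetical sketch of that "one compiler, many chips" idea: all chip-specific detail is confined to a backend that lowers a shared, hardware-neutral operator list, so supporting a new chip means adding one backend rather than writing a whole new compiler. The backend names and interfaces are illustrative, not NEST-C's real API.

```python
class CPUBackend:
    def emit(self, op):
        return f"cpu_kernel_{op}"       # e.g. call a BLAS/NN library

class NPUBackend:
    def emit(self, op):
        return f"npu_op_{op}"           # e.g. fuse/tile for the NPU

def compile_ops(ops, backend):
    """Lower each generic operator with the chosen backend."""
    return [backend.emit(op) for op in ops]

# One shared IR (here just an operator list), two different targets.
model = ["conv2d", "relu", "matmul", "softmax"]
print(compile_ops(model, CPUBackend()))
print(compile_ops(model, NPUBackend()))
```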

Alongside the open-source release of the NEST compiler, ETRI also published a reference model in which the compiler is applied to an AI chip ETRI developed internally, with the aim of revitalizing the industry ecosystem. This is the first time both the software and the hardware for AI chip development have been disclosed together.

The research team says the disclosure is significant because the NEST compiler can play a pivotal role in an AI chip ecosystem whose development is currently fragmented.

ETRI has also applied the NEST compiler to high-performance AI chips developed by a Korean AI chip startup. ETRI plans to expand the range of deep learning compilers it supports by collaborating with related companies, and it is promoting the commercialization of AI chip application services through specialized software companies. It also plans to contribute to new services by improving the performance and convenience required for developing AI application services.

Taeho Kim, assistant vice president of ETRI's AI SoC Research Division, said, “The open-source release of the standard deep learning compiler is intended to revitalize the Korean AI chip ecosystem. Technology cooperation is underway to apply the technology at various AI chip companies.”