Yale researchers build simulations of particles that more accurately model their collective behavior

If you take a bucket of water balloons and jostle one of them, the neighboring balloons will respond as well. This is a scaled-up example of how collections of cells and other deformable particle packings respond to forces. Modeling this phenomenon with supercomputer simulations can shed light on questions about how cancer cells invade healthy tissue or how leaves and flowers grow. But the behavior of cell aggregates is extremely complex, and fully capturing their structure and dynamics has proved tricky. 

A team of researchers in the lab of Corey O’Hern, professor of mechanical engineering & materials science, physics, and applied physics, has developed novel supercomputer simulations of deformable particles that more accurately model their collective behavior. The study was led by John Treado, a Ph.D. student, and postdoctoral researcher Dong Wang, both in the O’Hern lab. It was recently published in Physical Review Materials.

Cells, bubbles, droplets, and other small particles that make up soft solids (which include anything from mayonnaise and shaving cream to cells and tissues) are all highly deformable. They vary significantly in how they change shape and how they respond to forces.

“There is a strong connection between the response of the collection of particles to applied forces, particle shape, and deformability,” Treado said. “Particle deformability determines how they’re going to move because they’re compressed tightly with many neighbors who are squishing them on all sides.” 

Conventional computer models typically represent soft particles as spheres. When the spheres press against each other, the models represent the spheres’ deformations by having them overlap. This approach works to a certain extent, but crucial information about the particle shapes and interactions is lost or misrepresented. 
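A minimal sketch of this conventional soft-sphere approach, in which deformation is represented only implicitly by letting spheres overlap (the function and parameter names here are hypothetical, and the harmonic overlap repulsion is a common choice in such models, not necessarily the one the paper compares against):

```python
import numpy as np

def soft_sphere_force(pos_i, pos_j, radius_i, radius_j, k=1.0):
    """Harmonic repulsion between two overlapping 'soft' spheres.

    The spheres never actually change shape: their deformation is
    modeled by allowing overlap, with a force growing in proportion
    to the overlap depth.
    """
    rij = pos_j - pos_i
    dist = np.linalg.norm(rij)
    overlap = (radius_i + radius_j) - dist
    if overlap <= 0.0:
        return np.zeros_like(pos_i)          # not touching: no force
    # Push particle i directly away from j, proportional to overlap
    return -k * overlap * rij / dist

# Two unit-radius spheres with centers 1.5 apart overlap by 0.5,
# so particle i is pushed in the -x direction with magnitude k * 0.5.
f = soft_sphere_force(np.array([0.0, 0.0]), np.array([1.5, 0.0]), 1.0, 1.0)
```

Note that the only per-particle state here is a center and a radius, which is exactly why shape information is lost.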

The O’Hern team, though, developed a supercomputer model that can tune the particles from being floppy, with the ability to easily change shape, to being completely rigid. This model treats each particle as a ring of connected small spheres. In the simulation, forces are applied to the spherical beads, and the model tracks how the connected beads change positions and orientations. 
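The ring-of-beads idea can be sketched in a few lines of Python. This is a simplified illustration under stated assumptions, not the published model: it keeps only the perimeter springs between neighboring beads (the full deformable particle model also penalizes area and perimeter deviations), and all names are hypothetical. The spring stiffness plays the role of the tuning knob from floppy to rigid:

```python
import numpy as np

def ring_particle(n_beads=16, radius=1.0):
    """Initialize one deformable particle as a ring of beads on a circle."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_beads, endpoint=False)
    return radius * np.column_stack([np.cos(theta), np.sin(theta)])

def perimeter_forces(beads, l0, k=1.0):
    """Spring forces holding neighboring beads at rest length l0.

    Small k -> a floppy particle that easily changes shape;
    large k -> a nearly rigid ring.
    """
    forces = np.zeros_like(beads)
    n = len(beads)
    for i in range(n):
        j = (i + 1) % n                        # next bead around the ring
        bond = beads[j] - beads[i]
        length = np.linalg.norm(bond)
        f = k * (length - l0) * bond / length  # Hooke's law along the bond
        forces[i] += f
        forces[j] -= f
    return forces

beads = ring_particle()
l0 = np.linalg.norm(beads[1] - beads[0])  # rest length = initial bead spacing
f = perimeter_forces(beads, l0)
# At rest every bond sits exactly at l0, so all net forces vanish.
```

In a full simulation, contact forces between beads of different rings would be added, and the bead positions integrated forward in time.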

The researchers found that allowing for collective shape changes produced material responses that they wouldn’t have observed with fixed spherical shapes of the particles. The results underscore the importance of incorporating shape variability into models of tissues, foams, and other soft solids composed of deformable particles.

“We now need to extend the model to three dimensions, which more closely mimics the real world,” Wang said. “We can also apply the deformable particle model to active biological systems, which can form swarms, schools, and flocks.” 

Treado and Wang are also currently using this new supercomputer model to study how tumor cells invade adipose tissue in breast cancer. In most cancers, the tumor cells can change their shapes to crawl through dense tissue, reach blood vessels, and spread to other sites. 

“We are now seeking to determine the physical limits of tumor cells’ deformability, and the forces that they must exert to push through a dense tissue,” Treado said. Their work may lead to improvements in the ability to predict whether cancers will metastasize.

UArizona engineers demo a quantum tech advantage

Quantum computing and quantum sensing have the potential to be vastly more powerful than their classical counterparts. Not only could a fully realized quantum computer take just seconds to solve equations that would take a classical computer thousands of years, but it could have incalculable impacts on areas ranging from biomedical imaging to autonomous driving.

However, the technology isn't quite there yet.

In fact, despite widespread theories about the far-reaching impact of quantum technologies, very few researchers have been able to demonstrate, using the technology available now, that quantum methods have an advantage over their classical counterparts.

In a paper published today in the journal Physical Review X, University of Arizona researchers experimentally demonstrate just such a quantum advantage.

"Demonstrating a quantum advantage is a long-sought-after goal in the community, and very few experiments have been able to show it," said paper co-author Zheshen Zhang, assistant professor of materials science and engineering and principal investigator of the UArizona Quantum Information and Materials Group. "We are seeking to demonstrate how we can leverage the quantum technology that already exists to benefit real-world applications."

How (and When) Quantum Works

Quantum computing and other quantum processes rely on tiny, powerful units of information called qubits. The classical computers we use today work with units of information called bits, which exist as either 0s or 1s, but qubits are capable of existing in both states at the same time. This duality makes them both powerful and fragile. The delicate qubits are prone to collapse without warning, making a process called error correction – which addresses such problems as they happen – very important.
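The "both states at the same time" idea can be made concrete with a toy example. A qubit's state is a normalized two-component complex vector, and measurement probabilities follow from the squared amplitudes (the Born rule). This is a generic textbook illustration, not code from the experiment:

```python
import numpy as np

# Computational basis states as 2-component complex vectors
ket0 = np.array([1.0, 0.0], dtype=complex)   # the classical bit "0"
ket1 = np.array([0.0, 1.0], dtype=complex)   # the classical bit "1"

# An equal superposition: the qubit carries weight on both 0 and 1
psi = (ket0 + ket1) / np.sqrt(2.0)

# Measurement collapses the state; the outcome probabilities are the
# squared magnitudes of the overlaps with the basis states.
p0 = abs(np.vdot(ket0, psi)) ** 2   # probability of reading 0
p1 = abs(np.vdot(ket1, psi)) ** 2   # probability of reading 1
```

Here each measurement returns 0 or 1 with equal probability, and the superposition is destroyed in the process, which is the fragility the article describes.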

The quantum field is now in an era that John Preskill, a renowned physicist from the California Institute of Technology, termed "noisy intermediate-scale quantum," or NISQ. In the NISQ era, quantum computers can perform tasks that only require about 50 to a few hundred qubits, though with a significant amount of noise, or interference. Any more than that and the noisiness overpowers the usefulness, causing everything to collapse. It is widely believed that 10,000 to several million qubits would be needed to carry out practically useful quantum applications.

Imagine inventing a system that guarantees every meal you cook will turn out perfectly, and then giving that system to a group of children who don't have the right ingredients. It will be great in a few years, once the kids become adults and can buy what they need. But until then, the usefulness of the system is limited. Similarly, until researchers advance the field of error correction, which can reduce noise levels, quantum computations are limited to a small scale.

Entanglement Advantages

The experiment described in the paper used a mix of both classical and quantum techniques. Specifically, it used three sensors to classify the average amplitude and angle of radio frequency signals.

The sensors were equipped with another quantum resource called entanglement, which allows them to share information with one another and provides two major benefits: First, it improves the sensitivity of the sensors and reduces errors. Second, because they are entangled, the sensors evaluate global properties rather than gathering data about specific parts of a system. This is useful for applications that only need a binary answer; for example, in medical imaging, researchers don't need to know about every single cell in a tissue sample that isn't cancerous – just whether there's one cell that is cancerous. The same concept applies to detecting hazardous chemicals in drinking water.
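The "global properties" point can be illustrated in spirit with the simplest entangled state of two qubits, a Bell state. This toy example is not the paper's actual setup (the experiment used entangled radio-frequency photonic sensors), but it shows how entangled systems carry joint correlations rather than independent per-part values:

```python
import numpy as np

# Two-qubit Bell state (|00> + |11>) / sqrt(2), written in the
# basis ordering |00>, |01>, |10>, |11>
bell = np.array([1.0, 0.0, 0.0, 1.0], dtype=complex) / np.sqrt(2.0)

# Outcome probabilities for measuring both qubits (Born rule)
probs = np.abs(bell) ** 2
# Outcomes 01 and 10 never occur: the two qubits always agree.
# Reading one immediately tells you the other -- a property of the
# pair as a whole, with no classical single-qubit description.
```

Individually each qubit looks perfectly random; the information lives entirely in the correlation, which is what lets entangled sensors answer global yes/no questions efficiently.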

The experiment demonstrated that equipping the sensors with quantum entanglement gave them an advantage over classical sensors, reducing the likelihood of errors by a small but critical margin.

"This idea of using entanglement to improve sensors is not limited to a specific type of sensor, so it could be used for a range of different applications, as long as you have the equipment to entangle the sensors," said study co-author Quntao Zhuang, assistant professor of electrical and computer engineering and principal investigator of the Quantum Information Theory Group. "In theory, you could consider applications like lidar (light detection and ranging) for self-driving cars, for example."

Zhuang and Zhang developed the theory behind the experiment and described it in a 2019 Physical Review X paper. They co-authored the new paper with lead author Yi Xia, a doctoral student in the James C. Wyant College of Optical Sciences, and Wei Li, a postdoctoral researcher in materials science and engineering.

Qubit Classifiers

There are existing applications that use a mix of quantum and classical processing in the NISQ era, but they rely on preexisting classical datasets that must be converted and classified in the quantum realm. Imagine taking a series of photos of cats and dogs, then uploading the photos into a system that uses quantum methods to label the photos as either "cat" or "dog."

The team is tackling the labeling process from a different angle, by using quantum sensors to gather their own data in the first place. It's more like using a specialized quantum camera that labels the photos as either "dog" or "cat" as the photos are taken.

"A lot of algorithms consider data stored on a computer disk, and then convert that into a quantum system, which takes time and effort," Zhuang said. "Our system works on a different problem by evaluating physical processes that are happening in real-time."

The team is excited for future applications of their work at the intersection of quantum sensing and quantum computing. They even envision one day integrating their entire experimental setup onto a chip that could be dipped into a biomaterial or water sample to identify disease or harmful chemicals.

"We think it's a new paradigm for quantum computing, quantum machine learning and quantum sensors, because it really creates a bridge to interconnect all these different domains," Zhang said.

Closer hardware systems bring the future of artificial intelligence into view

Machine learning is the process by which computers adapt their responses without human intervention. This form of artificial intelligence (AI) is now common in everyday tools such as virtual assistants and is being developed for use in areas from medicine to agriculture. A challenge posed by the rapid expansion of machine learning is the high energy demand of complex computing processes. Researchers from The University of Tokyo have reported the first integration of a mobility-enhanced field-effect transistor (FET) and a ferroelectric capacitor (FE-cap) to bring the memory system into the proximity of a microprocessor and improve the efficiency of the data-intensive computing system. Their findings were presented at the 2021 Symposium on VLSI Technology.

Memory cells require both a memory component and an access transistor. In currently available examples, the access transistors are generally silicon metal-oxide-semiconductor FETs. While the memory elements can be formed in the 'back end of line' (BEOL) layers, the access transistors need to be formed in the 'front end of line' (FEOL) layers of the integrated circuit, which is not an efficient use of this space. Researchers from the Institute of Industrial Science at The University of Tokyo, Kobe Steel, Ltd., and Kobelco Research Institute, Inc. have now developed high-density, energy-efficient 3D embedded RAM for artificial intelligence applications.

In contrast, oxide semiconductors such as indium gallium zinc oxide (IGZO) can be included in BEOL layers because they can be processed at low temperatures. By incorporating both the access transistor and the memory into a single monolith in the BEOL, high-density, energy-efficient embedded memory can be achieved directly on a microprocessor.

The researchers used IGZO doped with tin (IGZTO) for both the oxide semiconductor FET and the ferroelectric capacitor to create 3D embedded memory.

"In light of the high mobility and excellent reliability of our previously reported IGZO FET, we developed a tin-doped IGZTO FET," explains study first author Jixuan Wu. "We then integrated the IGZTO FET with an FE-cap to introduce its scalable properties."

Both the drive current and the effective mobility of the IGZTO FET were twice those of the IGZO FET without tin. Because the mobility of the oxide semiconductor must be high enough to drive the FE-cap, introducing the tin ensures successful integration.

"The proximity achieved with our design will significantly reduce the distance that signals must travel, which will speed up learning and inference processes in AI computing, making them more energy-efficient," study author Masaharu Kobayashi explains. "We believe our findings provide another step towards hardware systems that can support future AI applications of higher complexity."