Japanese astrophysicists show how gas giants grow from dust into planets

Gas giant planets, such as Jupiter, can form rapidly by incorporating nearby icy bodies made from drifting pebbles born in the outer parts of young planetary systems, all in about 200,000 years. This finding has implications for understanding how habitable planets are created, not just in our solar system but in other planetary systems too.

Image: Result of a dust-to-planet simulation, showing the mass distribution of bodies from dust to planets at about 200,000 years. (Credit: Hiroshi Kobayashi)

Gas giants consist of a massive solid core surrounded by an even larger mass of hydrogen and helium. But even though these planets are quite common in the Universe, scientists still don’t fully understand how they form. Now, astrophysicists Hiroshi Kobayashi of Nagoya University and Hidekazu Tanaka of Tohoku University have developed supercomputer simulations that simultaneously track multiple types of celestial matter, giving a more comprehensive picture of how these colossal planets grow from specks of dust. Their findings were published in The Astrophysical Journal.

“We already know quite a bit about how planets are made,” says Kobayashi. “Dust lying within the far-reaching ‘protoplanetary disks’ surrounding newly formed stars collides and coagulates to make celestial bodies called planetesimals. These then amass together to form planets. Despite everything we know, the formation of gas giants, like Jupiter and Saturn, has long baffled scientists.”

This is a problem because gas giants play huge roles in the formation of potentially habitable planets within planetary systems.

For gas giants to form, they must first develop solid cores massive enough, about ten times the mass of Earth, to pull in the huge amounts of gas for which they are named. Scientists have long struggled to understand how these cores grow. The problem is two-fold. First, core growth through the simple amassing of nearby planetesimals would take longer than the several million years that the dust-containing protoplanetary disks survive. Second, the forming cores interact with the protoplanetary disk, causing them to migrate inward toward the central star, which makes gas accumulation impossible.

To tackle this problem, Kobayashi and Tanaka used state-of-the-art computer technologies to develop simulations that model how dust within the protoplanetary disk collides and grows to form the solid core necessary for gas accumulation. A major limitation of existing programs was that they could simulate planetesimal collisions or pebble collisions only separately. “The new program can handle celestial bodies of all sizes and simulate their evolution via collisions,” explains Kobayashi.

The simulations showed that pebbles from the outer parts of the protoplanetary disk drift inward and grow into icy planetesimals at about 10 astronomical units (au) from the central star. One astronomical unit is the mean distance between the Earth and the Sun; Jupiter and Saturn orbit at about 5.2 au and 9.5 au, respectively. The growth of pebbles into icy planetesimals swells their numbers in the region about 6 to 9 au from the central star. This drives high core growth rates, producing solid cores massive enough to accumulate gas and develop into gas giants in about 200,000 years.
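To see why 200,000 years is a plausible timescale for runaway growth, here is a deliberately crude back-of-envelope integration. It is not the authors' simulation: the exponential growth law, seed mass, and e-folding time below are invented solely to make the reported ten-Earth-mass threshold and timescale concrete.

```python
# Toy illustration (NOT the authors' model): how fast a core must grow to
# reach the ~10 Earth-mass threshold within roughly 200,000 years. The
# growth law dM/dt = M / tau and all numbers are assumptions for illustration.

M_SEED = 1e-6     # assumed seed mass of an icy planetesimal [Earth masses]
M_CORE = 10.0     # core mass needed to start rapid gas accretion [Earth masses]
TAU_YR = 1.2e4    # assumed e-folding growth time [years]
DT = 100.0        # integration step [years]

mass, t = M_SEED, 0.0
while mass < M_CORE:
    mass += (mass / TAU_YR) * DT   # runaway-like exponential growth
    t += DT

print(f"Core reaches {M_CORE} Earth masses after ~{t:,.0f} years")
# -> on the order of 2e5 years, consistent with the reported timescale
```

With these assumed numbers, the threshold is crossed on the order of 2 × 10^5 years; the real simulations instead track collisional growth across all body sizes, from dust to planets, rather than a single fixed growth rate.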

“We expect our research will help lead to the full elucidation of the origin of habitable planets, not only in the solar system but also in other planetary systems around stars,” says Kobayashi.

Edge processing research takes Surrey discovery closer to use in AI networks

Researchers at the University of Surrey have successfully demonstrated a proof of concept for using their multimodal transistor (MMT) in artificial neural networks, which mimic the human brain. This is an important step towards using thin-film transistors as artificial intelligence hardware and moves edge computing forward, processing data near where it is collected rather than relying solely on centralized computer chips, with the prospect of lower power needs and improved efficiency.

The MMT, first reported by Surrey researchers in 2020, overcomes long-standing challenges associated with transistors and can perform the same operations as more complex circuits. This latest research, published in the peer-reviewed journal Scientific Reports, uses mathematical modeling to prove the concept of using MMTs in artificial intelligence systems, which is a vital step towards manufacturing.

Using measured and simulated transistor data, the researchers show that well-designed multimodal transistors can operate robustly as rectified linear unit (ReLU)-type activations in artificial neural networks, achieving classification accuracy practically identical to that of pure ReLU implementations. They used both measured and simulated MMT data to train an artificial neural network to identify handwritten numbers and compared the results against the software's built-in ReLU. The results confirmed the potential of MMT devices for thin-film decision and classification circuits. The same approach could be used in more complex AI systems.
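The comparison methodology can be sketched in a few lines: train the same small network twice, once with an ideal ReLU and once with an activation shaped like a non-ideal device, then compare accuracies. Everything below is a hedged illustration: synthetic data stands in for the handwritten digits, and the saturating "device" curve is an invented stand-in for the measured MMT characteristics.

```python
# Sketch of the comparison methodology, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (a stand-in for the digit images used in the paper)
n = 500
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.array([0] * n + [1] * n)

def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    return (z > 0).astype(float)

# Hypothetical "device-like" activation: a ReLU that saturates at some output
# level, loosely mimicking a transistor driven into compliance. The saturation
# value is invented; the paper used measured/simulated MMT characteristics.
def device(z, sat=3.0):
    return np.clip(z, 0.0, sat)

def device_grad(z, sat=3.0):
    return ((z > 0) & (z < sat)).astype(float)

def train(act, act_grad, hidden=16, epochs=300, lr=0.2):
    """One-hidden-layer network trained by plain full-batch gradient descent."""
    W1 = rng.normal(0, 0.5, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        Z1 = X @ W1 + b1
        A1 = act(Z1)
        logits = (A1 @ W2 + b2).ravel()
        p = 1.0 / (1.0 + np.exp(-logits))        # sigmoid output
        d_logits = (p - y) / len(y)              # cross-entropy gradient
        dW2 = A1.T @ d_logits[:, None]
        db2 = d_logits.sum(keepdims=True)
        dZ1 = (d_logits[:, None] @ W2.T) * act_grad(Z1)
        W1 -= lr * (X.T @ dZ1); b1 -= lr * dZ1.sum(axis=0)
        W2 -= lr * dW2;         b2 -= lr * db2
    return ((p > 0.5).astype(int) == y).mean()   # training accuracy, illustrative

print("ideal ReLU accuracy :", train(relu, relu_grad))
print("device-like accuracy:", train(device, device_grad))
```

If the device curve preserves the essential ReLU behavior (zero below threshold, monotone above it), the two accuracies come out close, which is the kind of near-identical result the paper reports for well-designed MMTs.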

Unusually, the research was led by Surrey undergraduate Isin Pesch, who worked on the project during the final-year research module of her BEng (Hons) in Electronic Engineering with Nanotechnology. Covid meant she had to study remotely from her home in Turkey, but she still managed to spearhead the development, supported by an international research team that included collaborators at the University of Rennes in France and UCL in London.

Isin Pesch, the lead author of the paper, which was written before she graduated in July 2021, said: “There is a great need for technological improvements to support the growth of low-cost, large-area electronics, which have been shown to be useful in artificial intelligence applications. Thin-film transistors have a role to play in enabling high processing power with low resource use. We can now see that MMTs, a unique type of thin-film transistor invented at the University of Surrey, have the reliability and uniformity needed to fulfill this role.”

Dr. Radu Sporea, Senior Lecturer at the University of Surrey’s Advanced Technology Institute, said: “These findings are a reminder of how Surrey is a leader in AI research. Many of my colleagues focus on people-centred AI and how best to maximize the benefits for humans, including how to apply these new concepts ethically. Our research at the Advanced Technology Institute takes forward the physical implementation, as a stepping stone towards powerful yet affordable next-generation hardware. It’s fantastic that collaboration is producing such successes, with researchers involved at all levels, from undergraduates like Isin, who led this research, to seasoned experts.”

A major project brings together Finnish industry and research for quantum technology development

A new research project has been launched to accelerate the progress of Finnish quantum technology. The QuTI project, coordinated by VTT Technical Research Centre of Finland, will develop new components, manufacturing and testing solutions, and algorithms for the needs of quantum technology. The QuTI consortium, partly financed by Business Finland, consists of 12 partners and has a total budget of around EUR 10 million.

Quantum technology is developing into a broad industrial field. This quantum wave is driven by the unprecedented performance improvements and paradigm shifts that exploiting quantum phenomena can bring to computing, communication, and sensing applications. The Quantum Technologies Industrial (QuTI) ecosystem project, coordinated by VTT, brings together the expertise of Finnish industry and research organizations to find new quantum technology solutions.

The QuTI project covers the full value chain of the quantum industry from materials and hardware to software and system-level solutions. The project involves 12 organizations: the research partners are VTT, Aalto University, Tampere University, and CSC – IT Center for Science, and the industrial partners are Bluefors, Afore, Picosun, IQM Quantum Computers, Rockley Photonics, Quantastica, Saab, and Vexlum.

“Quantum technology is a multidisciplinary and rapidly advancing field. The QuTI consortium provides an ideal starting point for strengthening the international competitiveness of Finnish technology and industry in this fast-growing field,” says QuTI project’s coordinator, Professor Mika Prunnila from VTT.

The quantum computing, communication, and sensing devices to be developed in the QuTI project are largely based on expertise in microsystems, photonics, electronics, and cryogenics. The project develops customized software and algorithms hand in hand with the hardware, strengthening the Finnish quantum computing infrastructure. In addition, new tools will be created for quantum technology product development that will serve the needs of the QuTI project as well as the entire field of quantum technology.

The three-year QuTI project will be implemented as a jointly funded project, partly financed by Business Finland (EUR 5.6 million), with a total budget of about EUR 10 million.

“Quantum technology offers great opportunities for Finnish industry, and we want to be involved in supporting this development. We see that the QuTI project is in many ways a concrete starting point for the Finnish quantum ecosystem,” says Kari Leino, Ecosystem Lead at Business Finland.

Cleanrooms are a prerequisite for quantum technology research and business

Like computer microprocessors, quantum technology components must be fabricated in a cleanroom environment. The Micronova cleanroom facility in Espoo, Finland, operated jointly by VTT and Aalto University, enables applied research and small-scale commercial manufacturing of quantum microsystems for quantum computing, communication, and sensing. Micronova, part of the national Otanano research infrastructure, plays a significant role in both the QuTI project and quantum technology R&D in Finland. QuTI will also utilize the complementary cleanroom at Tampere University, which focuses on optoelectronics fabrication.

University of Waterloo researchers use AI to analyze tweets debating vaccination, climate change

Using artificial intelligence (AI), researchers have found that between 2007 and 2016 online sentiment around climate change was uniform, but the same was not true of vaccination.

Climate change and vaccination may share many of the same social and environmental elements, but that doesn’t mean the two debates divide along the same demographic lines.

A research team from the University of Waterloo and the University of Guelph trained a machine-learning algorithm to analyze a massive number of tweets about climate change and vaccination.

The researchers found that climate change sentiment was overwhelmingly pro, siding with the view that climate change is driven by human activity and requires action. There was also a significant amount of interaction between users with opposing sentiments about climate change.

However, within the timeframe of the dataset, vaccine sentiment was nowhere near as uniform. Only some 15 to 20 percent of users expressed a pro-vaccine sentiment, while around 70 percent expressed no strong sentiment. Perhaps more importantly, individuals and entire online communities with differing sentiments toward vaccination interacted far less than their counterparts in the climate change debate.

“It is an open question whether these differences in user sentiment and social media echo chambers concerning vaccines created the conditions for highly polarized vaccine sentiment when the COVID-19 vaccines began to roll out,” said Chris Bauch, professor of applied mathematics at the University of Waterloo. “If we were to do the same study today with data from the past two years, the results might be wildly different. Vaccination is a much hotter topic right now and appears to be much more polarized given the ongoing pandemic.”

The research goal was to learn how sentiments on climate change and vaccination may be related, how users form networks and share information, and how online sentiments relate to the way people act and make decisions in daily life.

“There’s been some work done on the polarization of opinions on Twitter and other social media,” said Madhur Anand, professor of environmental sciences at the University of Guelph. “Most other research looks at these as isolated issues, but we wanted to examine climate change and vaccination side by side. Both issues have social and environmental components, and there is a lot to learn from this research pairing.”

The dataset for the project was drawn from several sources, including some purchased from Twitter. In total, the analysis took in roughly 87 million tweets, posted between 2007 and 2016.

This means that the data precedes COVID-19 and offers a snapshot of vaccine sentiment in the years leading up to the pandemic.

The AI classified the millions of tweets as expressing pro, anti, or neutral sentiment on each issue and then sorted users into pro, anti, or neutral categories. It also analyzed the structure of online communities and the degree to which users with opposing sentiments interacted.
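The article does not say which model performed the labeling, but the pipeline it describes, labeling each tweet pro, anti, or neutral and then aggregating by user, can be sketched generically. The classifier choice and the toy training tweets below are assumptions for illustration only, not the study's actual model or data.

```python
# Generic sketch of three-way (pro / anti / neutral) tweet classification.
# The study's real classifier and 87-million-tweet corpus are not described
# here; these tiny labeled examples are invented purely for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_tweets = [
    "we must act on climate change now",      # pro
    "vaccines save lives, get your shot",     # pro
    "climate change is a hoax",               # anti
    "I will never vaccinate my kids",         # anti
    "interesting panel on vaccines today",    # neutral
    "reading a report about the climate",     # neutral
]
train_labels = ["pro", "pro", "anti", "anti", "neutral", "neutral"]

# Bag-of-words TF-IDF features feeding a multiclass logistic regression
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_tweets, train_labels)

print(clf.predict(["everyone should get vaccinated"]))
```

Once each tweet carries a label, a user's category follows from aggregating their tweets, and interaction patterns between the pro, anti, and neutral groups can be read off the retweet/reply network.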

“We expected to find that user sentiment and how users formed networks and communities to be more or less the same for both issues,” said Bauch. “But actually, we found that the way climate change discourse and vaccine discourse worked on Twitter were quite different.”

Anand, Bauch, and team members Justin Schonfeld, Edward Qian, Jason Sinn, and Jeffrey Cheng published their findings, “Debates about vaccines and climate change on social media networks: a study in contrasts,” in the journal Humanities and Social Sciences Communications.

QuTech takes important step in quantum computing with error correction

Researchers at QuTech, a collaboration between TU Delft, the oldest and largest Dutch public technical university, and TNO, the Netherlands Organisation for Applied Scientific Research, have reached a milestone in quantum error correction. They have integrated high-fidelity operations on encoded quantum data with a scalable scheme for repeated data stabilization.

Image: Artistic impression of a seven-transmon superconducting quantum processor similar to the one used in this work. (Credit: DiCarlo Lab and Marieke de Lorijn)

Physical quantum bits, or qubits, are vulnerable to errors. These errors arise from various sources, including quantum decoherence, crosstalk, and imperfect calibration. Fortunately, the theory of quantum error correction says it is possible to compute while simultaneously protecting quantum data from such errors.

“Two capabilities will distinguish an error-corrected quantum computer from present-day noisy intermediate-scale quantum (NISQ) processors”, says Prof Leonardo DiCarlo of QuTech. “First, it will process quantum information encoded in logical qubits rather than in physical qubits (each logical qubit consisting of many physical qubits). Second, it will use quantum parity checks interleaved with computation steps to identify and correct errors occurring in the physical qubits, safeguarding the encoded information as it is being processed.”

According to theory, the logical error rate can be exponentially suppressed provided that the incidence of physical errors is below a threshold and the circuits for logical operations and stabilization are fault-tolerant.
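The exponential suppression DiCarlo refers to is usually written as the standard scaling relation from the error correction literature (the relation below is textbook material, not a formula quoted in this article):

```latex
% Standard QEC scaling (textbook relation, not stated in the article):
% once the physical error rate p is below the threshold p_th, the logical
% error rate p_L of a distance-d code falls exponentially in d.
p_L \approx A \left(\frac{p}{p_{\mathrm{th}}}\right)^{\lfloor (d+1)/2 \rfloor}
```

Here p is the physical error rate, p_th the threshold, d the code distance, and A a constant of order one; using more physical qubits per logical qubit increases d and drives the logical error rate down exponentially.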

All the operations

The basic idea is that if you increase the redundancy and use more and more qubits to encode data, the net error rate goes down. The researchers at TU Delft, together with colleagues at TNO, have now taken a major step toward this goal, realizing a logical qubit consisting of seven physical qubits (superconducting transmons). “We show that we can do all the operations required for computation with the encoded information. This integration of high-fidelity logical operations with a scalable scheme for repeated stabilization is a key step in quantum error correction”, says Prof Barbara Terhal, also of QuTech.

First author and Ph.D. candidate Jorge Marques further explains: “Until now researchers have encoded and stabilized. We now show that we can compute as well. This is what a fault-tolerant computer must ultimately do: process and protect data from errors all at the same time. We do three types of logical-qubit operations: initializing the logical qubit in any state, transforming it with gates, and measuring it. We show that all operations can be done directly on encoded information. For each type, we observe higher performance for fault-tolerant variants over non-fault-tolerant variants.” Fault-tolerant operations are key to reducing the build-up of physical-qubit errors into logical-qubit errors.
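The article does not spell out the seven-qubit encoding, but a seven-transmon logical qubit commonly means four data qubits plus three ancillas measuring the parity checks of a small stabilizer code such as the distance-2 surface code. As an illustration of why such parity checks can be measured without destroying the encoded data, the sketch below writes that code down and verifies the required commutation relations; the specific layout is an assumption for illustration, not taken from the paper.

```python
# Illustration of stabilizer parity checks on four data qubits (an assumed
# distance-2 surface-code layout, not necessarily the paper's encoding).
# A parity check is non-destructive only if it commutes with the other
# checks and with the logical operators of the encoded qubit.

import numpy as np

N = 4  # data qubits

def pauli(xs=(), zs=()):
    """Binary symplectic vector (x|z) for a Pauli operator on N qubits."""
    v = np.zeros(2 * N, dtype=int)
    for q in xs: v[q] = 1       # X part
    for q in zs: v[N + q] = 1   # Z part
    return v

def commute(a, b):
    """True iff two Paulis commute (symplectic inner product is even)."""
    ax, az = a[:N], a[N:]
    bx, bz = b[:N], b[N:]
    return (ax @ bz + az @ bx) % 2 == 0

stabilizers = {
    "X0X1X2X3": pauli(xs=(0, 1, 2, 3)),  # weight-4 X parity check
    "Z0Z2":     pauli(zs=(0, 2)),        # weight-2 Z parity checks
    "Z1Z3":     pauli(zs=(1, 3)),
}
logical_X = pauli(xs=(0, 2))
logical_Z = pauli(zs=(0, 1))

for name, s in stabilizers.items():
    assert commute(s, logical_X) and commute(s, logical_Z), name
    for name2, s2 in stabilizers.items():
        assert commute(s, s2), (name, name2)

# The logical operators themselves must anticommute, like a bare qubit's X and Z:
assert not commute(logical_X, logical_Z)
print("All parity checks commute with each other and with the logical qubit.")
```

Because every check commutes with the logical operators, measuring the checks repeatedly flags physical errors (which flip check outcomes) without collapsing the encoded state, which is exactly the interleaving of stabilization and computation described above.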

Long term

DiCarlo emphasizes the multidisciplinary nature of the work: “This is a combined effort of experimental physics, theoretical physics from Barbara Terhal’s group, and also electronics developed with TNO and external collaborators. The project is mainly funded by IARPA and Intel Corporation.”

“Our grand goal is to show that as we increase encoding redundancy, the net error rate decreases exponentially”, DiCarlo concludes. “Our current focus is on 17 physical qubits and next up will be 49. All layers of our quantum computer’s architecture were designed to allow this scaling.”