University of Waterloo researchers use AI to analyze tweets debating vaccination, climate change

Using artificial intelligence (AI), researchers have found that between 2007 and 2016, online sentiments around climate change were uniform, but this was not the case with vaccination.

Climate change and vaccination might share many of the same social and environmental elements, but that doesn't mean the debates are divided along the same demographic lines.

A research team from the University of Waterloo and the University of Guelph trained a machine-learning algorithm to analyze a massive number of tweets about climate change and vaccination.

The researchers found that climate change sentiment was overwhelmingly on the pro side: most users believed that climate change is caused by human activity and requires action. There was also a significant amount of interaction between users with opposite sentiments about climate change.

However, within the timeframe of the dataset, vaccine sentiment was nowhere near as uniform. Only some 15 to 20 percent of users expressed a pro-vaccine sentiment, while around 70 percent expressed no strong sentiment. Perhaps more importantly, individuals and entire online communities with differing sentiments toward vaccination interacted much less than those in the climate change debate.

“It is an open question whether these differences in user sentiment and social media echo chambers concerning vaccines created the conditions for highly polarized vaccine sentiment when the COVID-19 vaccines began to roll out,” said Chris Bauch, professor of applied mathematics at the University of Waterloo. “If we were to do the same study today with data from the past two years, the results might be wildly different. Vaccination is a much hotter topic right now and appears to be much more polarized given the ongoing pandemic.”

The research goal was to learn how sentiments on climate change and vaccination may be related, how users form networks and share information, the relationship between online sentiments, and how people act and make decisions in daily life.

“There’s been some work done on the polarization of opinions on Twitter and other social media,” said Madhur Anand, professor of environmental sciences at the University of Guelph. “Most other research looks at these as isolated issues, but we wanted to look at these two issues of climate change and vaccination side-by-side. Both issues have social and environmental components, and there is a lot to learn in this research pairing.”

The dataset for the project was drawn from a few sources, including some that were purchased from Twitter. In total, the analysis takes into consideration roughly 87 million tweets. The time range for the tweets is between 2007 and 2016.

This means that the data precedes COVID-19 and offers a snapshot of vaccine sentiment in the years leading up to the pandemic.

The AI ranked the millions of tweets as expressing pro, anti, or neutral sentiment on the issues and then classified users into pro, anti, or neutral categories. It also analyzed the structure of online communities and the degree to which users with opposing sentiments interacted.
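The two-stage pipeline described above — score each tweet, then assign each user the dominant sentiment of their tweets — can be sketched in miniature. This is an illustrative stand-in, not the study's actual model: the real analysis used a trained machine-learning classifier, whereas the keyword weights, function names, and example tweets here are all assumptions made up for demonstration.

```python
from collections import Counter

# Hypothetical keyword weights standing in for a trained classifier.
WEIGHTS = {"hoax": -1.0, "denial": -1.0, "action": 1.0, "science": 1.0, "urgent": 1.0}

def tweet_sentiment(text):
    """Score a single tweet and map it to 'pro', 'anti', or 'neutral'."""
    score = sum(WEIGHTS.get(word, 0.0) for word in text.lower().split())
    if score > 0:
        return "pro"
    if score < 0:
        return "anti"
    return "neutral"

def classify_user(tweets):
    """A user is assigned the majority sentiment across their tweets."""
    counts = Counter(tweet_sentiment(t) for t in tweets)
    return counts.most_common(1)[0][0]

# Toy users with invented tweets.
users = {
    "alice": ["climate science demands urgent action", "support climate action now"],
    "bob": ["climate change is a hoax", "more hoax talk"],
    "carol": ["nice weather today"],
}
labels = {user: classify_user(tweets) for user, tweets in users.items()}
print(labels)  # → {'alice': 'pro', 'bob': 'anti', 'carol': 'neutral'}
```

With user labels in hand, the community-structure step amounts to building a follower or retweet graph and measuring how often edges cross between the pro and anti groups — the quantity that differed so sharply between the two debates.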

“We expected to find that user sentiment and how users formed networks and communities to be more or less the same for both issues,” said Bauch. “But actually, we found that the way climate change discourse and vaccine discourse worked on Twitter were quite different.”

Anand, Bauch, and team members Justin Schonfeld, Edward Qian, Jason Sinn, and Jeffrey Cheng published their findings, “Debates about vaccines and climate change on social media networks: a study in contrasts,” in the journal Humanities and Social Sciences Communications.

QuTech takes important step in quantum computing with error correction

Researchers at QuTech, a collaboration between TU Delft, the oldest and largest Dutch public technical university, and TNO, the Netherlands Organisation for Applied Scientific Research, have reached a milestone in quantum error correction. They have integrated high-fidelity operations on encoded quantum data with a scalable scheme for repeated data stabilization.

Artistic image of a seven-transmon superconducting quantum processor similar to the one used in this work. Credit: DiCarlo Lab and Marieke de Lorijn

Physical quantum bits, or qubits, are vulnerable to errors. These errors arise from various sources, including quantum decoherence, crosstalk, and imperfect calibration. Fortunately, the theory of quantum error correction shows that it is possible to compute while simultaneously protecting quantum data from such errors.

“Two capabilities will distinguish an error-corrected quantum computer from present-day noisy intermediate-scale quantum (NISQ) processors,” says Prof. Leonardo DiCarlo of QuTech. “First, it will process quantum information encoded in logical qubits rather than in physical qubits (each logical qubit consisting of many physical qubits). Second, it will use quantum parity checks interleaved with computation steps to identify and correct errors occurring in the physical qubits, safeguarding the encoded information as it is being processed.”

According to theory, the logical error rate can be exponentially suppressed provided that the incidence of physical errors is below a threshold and the circuits for logical operations and stabilization are fault-tolerant.

All the operations

The basic idea, then, is that if you increase the redundancy and use more and more qubits to encode data, the net error rate goes down. The researchers at TU Delft, together with colleagues at TNO, have now realized a major step toward this goal, realizing a logical qubit consisting of seven physical qubits (superconducting transmons). “We show that we can do all the operations required for computation with the encoded information. This integration of high-fidelity logical operations with a scalable scheme for repeated stabilization is a key step in quantum error correction,” says Prof. Barbara Terhal, also of QuTech.
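The redundancy argument above can be made concrete with the standard textbook scaling relation for code-based error correction: below the threshold, the logical error rate falls roughly as a power of the ratio of physical error rate to threshold, with the exponent growing with the code distance. This is a generic illustration of that scaling, not a result from the QuTech paper — the threshold `p_th`, prefactor `A`, and the surface-code-style exponent are assumed example values.

```python
def logical_error_rate(p_phys, p_th=0.01, d=3, A=0.1):
    """Approximate below-threshold scaling of the logical error rate:

        p_L ~ A * (p_phys / p_th) ** ((d + 1) // 2)

    where d is the code distance. Constants are illustrative, not measured.
    """
    return A * (p_phys / p_th) ** ((d + 1) // 2)

# With physical errors at a tenth of the threshold, each increase in
# code distance suppresses the logical error rate by another factor of 10:
for d in (3, 5, 7):
    print(d, logical_error_rate(0.001, d=d))
```

This is why the team's roadmap of 7, then 17, then 49 physical qubits matters: each step supports a larger code distance, and, below threshold, every added layer of redundancy buys an exponential improvement in the logical error rate.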

First author and Ph.D. candidate Jorge Marques further explains: “Until now researchers have encoded and stabilized. We now show that we can compute as well. This is what a fault-tolerant computer must ultimately do: process and protect data from errors all at the same time. We do three types of logical-qubit operations: initializing the logical qubit in any state, transforming it with gates, and measuring it. We show that all operations can be done directly on encoded information. For each type, we observe higher performance for fault-tolerant variants over non-fault-tolerant variants.” Fault-tolerant operations are key to reducing the build-up of physical-qubit errors into logical-qubit errors.

Long term

DiCarlo emphasizes the multidisciplinary nature of the work: “This is a combined effort of experimental physics, theoretical physics from Barbara Terhal’s group, and also electronics developed with TNO and external collaborators. The project is mainly funded by IARPA and Intel Corporation.”

“Our grand goal is to show that as we increase encoding redundancy, the net error rate decreases exponentially”, DiCarlo concludes. “Our current focus is on 17 physical qubits and next up will be 49. All layers of our quantum computer’s architecture were designed to allow this scaling.”

Temperature Record 101: How We Know What We Know about Climate Change

2021 was tied for the sixth warmest year on NASA’s record, which stretches back more than a century. But what is a temperature record? GISTEMP, NASA’s global temperature analysis, takes in millions of observations from instruments on weather stations, ships, ocean buoys, and Antarctic research stations to determine how much warmer or cooler Earth is on average from year to year. Stretching back to 1880...
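The core of an analysis like GISTEMP is the anomaly method: each station is compared to its own long-term baseline mean, and the departures (anomalies) are then combined, so that stations with very different absolute climates can be averaged together. The sketch below shows that idea in its simplest form; the station names, temperatures, and plain (unweighted) average are invented for illustration — the real analysis grids the stations and area-weights the cells.

```python
# Hypothetical station baselines, e.g. each station's 1951-1980 mean in °C.
BASELINE = {"station_a": 14.2, "station_b": 2.5, "station_c": 25.0}

def global_anomaly(observations):
    """Average each station's departure from its own baseline.

    A real analysis would grid the anomalies and area-weight the cells;
    this unweighted mean just illustrates the anomaly method.
    """
    anomalies = [temp - BASELINE[station] for station, temp in observations.items()]
    return sum(anomalies) / len(anomalies)

# Invented observations for one year.
obs = {"station_a": 15.1, "station_b": 3.4, "station_c": 25.6}
print(round(global_anomaly(obs), 2))  # → 0.8
```

Working in anomalies rather than absolute temperatures is what lets a polar research station and a tropical buoy contribute to the same global average: each only reports how far it has drifted from its own normal.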
...
