Danish students develop Carbontracker to predict the carbon footprint of algorithms

On a daily basis, and perhaps without realizing it, most of us are in close contact with advanced AI methods known as deep learning. Deep learning algorithms churn whenever we use Siri or Alexa, when Netflix suggests movies and TV shows based on our viewing history, or when we communicate with a website's customer service chatbot.

However, this rapidly evolving technology, otherwise expected to serve as an effective weapon against climate change, has a downside that many people are unaware of: sky-high energy consumption. Artificial intelligence, and particularly the subfield of deep learning, appears likely to become a significant climate culprit should industry trends continue. In only six years -- from 2012 to 2018 -- the compute needed for deep learning grew 300,000%. Yet the energy consumption and carbon footprint associated with developing algorithms are rarely measured, despite numerous studies that clearly demonstrate the growing problem.

In response to the problem, two students at the University of Copenhagen's Department of Computer Science, Lasse F. Wolff Anthony and Benjamin Kanding, together with Assistant Professor Raghavendra Selvan, have developed a software program they call Carbontracker. The program can calculate and predict the energy consumption and CO2 emissions of training deep learning models.

"Developments in this field are going insanely fast and deep learning models are constantly becoming larger in scale and more advanced. Right now, there is exponential growth. And that means an increasing energy consumption that most people seem not to think about," according to Lasse F. Wolff Anthony. {module INSIDE STORY}

One training session = the annual energy consumption of 126 Danish homes

Deep learning training is the process during which a mathematical model learns to recognize patterns in large datasets. It is an energy-intensive process that takes place on specialized, power-hungry hardware running 24 hours a day.

"As datasets grow larger by the day, the problems that algorithms need to solve become more and more complex," states Benjamin Kanding.

One of the biggest deep learning models developed thus far is the advanced language model GPT-3. A single training session of GPT-3 is estimated to consume the equivalent of the annual energy consumption of 126 Danish homes and to emit the same amount of CO2 as 700,000 kilometers of driving.

"Within a few years, there will probably be several models that are many times larger," says Lasse F. Wolff Anthony.

Room for improvement

"Should the trend continue, artificial intelligence could end up being a significant contributor to climate change. Jamming the brakes on technological development is not the point. These developments offer fantastic opportunities for helping our climate. Instead, it is about becoming aware of the problem and thinking: How might we improve?" explains Benjamin Kanding.

The idea behind Carbontracker, which is a free program, is to provide the field with a foundation for reducing the climate impact of models. Among other things, the program gathers information on how much CO2 is emitted in producing energy in whichever region the deep learning training is taking place. Doing so makes it possible to convert energy consumption into CO2 emission predictions.
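For readers who want to see how the tool fits into practice, here is a minimal sketch of wrapping Carbontracker around a training loop. It follows the usage pattern shown in the project's public documentation; the number of epochs and the training step itself are placeholders:

    from carbontracker.tracker import CarbonTracker

    max_epochs = 10  # placeholder value
    tracker = CarbonTracker(epochs=max_epochs)

    for epoch in range(max_epochs):
        tracker.epoch_start()
        # ... one epoch of model training goes here ...
        tracker.epoch_end()

    # Flush pending measurements and print the final consumption report.
    tracker.stop()

After monitoring the first epochs, the tool extrapolates to predict the energy use and CO2 emissions of the full training run, letting practitioners react before the bulk of the energy is spent.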

Among their recommendations, the two computer science students suggest that deep learning practitioners look at when their model training takes place, since power is not equally green over a 24-hour period, as well as at what type of hardware and algorithms they deploy.

"It is possible to reduce climate impact significantly. For example, it is relevant if one opts to train their model in Estonia or Sweden, where the carbon footprint of a model training can be reduced by more than 60 times thanks to greener energy supplies. Algorithms also vary greatly in their energy efficiency. Some require less compute, and thereby less energy, to achieve similar results. If one can tune these types of parameters, things can change considerably," concludes Lasse F. Wolff Anthony.

Ultrapotent COVID-19 vaccine candidate designed via supercomputer

Preclinical data published in Cell show the nanoparticle vaccine spurs extremely high levels of protective antibodies in animal models

An innovative nanoparticle vaccine candidate for the pandemic coronavirus produces virus-neutralizing antibodies in mice at levels ten times greater than those seen in people who have recovered from COVID-19. Designed by scientists at the University of Washington School of Medicine in Seattle, the vaccine candidate has been transferred to two companies for clinical development.

Compared to vaccination with the soluble SARS-CoV-2 Spike protein, which is what many leading COVID-19 vaccine candidates are based on, the new nanoparticle vaccine produced ten times more neutralizing antibodies in mice, even at a six-fold lower vaccine dose. The data also show a strong B-cell response after immunization, which can be critical for immune memory and a durable vaccine effect. When administered to a single nonhuman primate, the nanoparticle vaccine produced neutralizing antibodies targeting multiple different sites on the Spike protein. Researchers say this may ensure protection against mutated strains of the virus, should they arise. The Spike protein is part of the coronavirus infectivity machinery.

[Image: Artist's depiction of an ultrapotent COVID-19 vaccine candidate in which 60 pieces of a coronavirus protein (red) decorate nanoparticles (blue and white). The vaccine candidate was designed using methods developed at the UW Medicine Institute for Protein Design. Credit: Ian Haydon / UW Medicine Institute for Protein Design]

The findings are published in Cell. The lead authors of this paper are Alexandra Walls, a research scientist in the laboratory of David Veesler, who is an associate professor of biochemistry at the UW School of Medicine; and Brooke Fiala, a research scientist in the laboratory of Neil King, who is an assistant professor of biochemistry at the UW School of Medicine.

The vaccine candidate was developed using structure-based vaccine design techniques invented at UW Medicine. It is a self-assembling protein nanoparticle that displays 60 copies of the SARS-CoV-2 Spike protein's receptor-binding domain in a highly immunogenic array. The molecular structure of the vaccine roughly mimics that of a virus, which may account for its enhanced ability to provoke an immune response.

"We hope that our nanoparticle platform may help fight this pandemic that is causing so much damage to our world," said King, inventor of the computational vaccine design technology at the Institute for Protein Design at UW Medicine. "The potency, stability, and manufacturability of this vaccine candidate differentiate it from many others under investigation."

Hundreds of candidate vaccines for COVID-19 are in development around the world. Many require large doses, complex manufacturing, and cold-chain shipping and storage. An ultrapotent vaccine that is safe, effective at low doses, simple to produce, and stable outside of a freezer could enable vaccination against COVID-19 on a global scale.

"I am delighted that our studies of antibody responses to coronaviruses led to the design of this promising vaccine candidate," said Veesler, who spearheaded the concept of a multivalent receptor-binding domain-based vaccine.

Where were Jupiter and Saturn born?

An additional planet between Saturn and Uranus was kicked out of the Solar System in its infancy

New work led by Carnegie's Matt Clement reveals the likely original locations of Saturn and Jupiter. These findings refine our understanding of the forces that determined our Solar System's unusual architecture, including the ejection of an additional planet between Saturn and Uranus, ensuring that only small, rocky planets, like Earth, formed inward of Jupiter.

In its youth, our Sun was surrounded by a rotating disk of gas and dust from which the planets were born. The orbits of early-formed planets were thought to be initially close-packed and circular, but gravitational interactions between the larger objects perturbed the arrangement and caused the baby giant planets to rapidly reshuffle, creating the configuration we see today.

The majority of the supercomputing for this project was performed at the OU Supercomputing Center; some was performed on Carnegie's Memex.

"We now know that there are thousands of planetary systems in our Milky Way galaxy alone," Clement said. "But it turns out that the arrangement of planets in our own Solar System is highly unusual, so we are using models to reverse engineer and replicate its formative processes. This is a bit like trying to figure out what happened in a car crash after the fact--how fast were the cars going, in what directions, and so on."

Clement and his co-authors--Carnegie's John Chambers, Sean Raymond of the University of Bordeaux, Nathan Kaib of the University of Oklahoma, Rogerio Deienno of the Southwest Research Institute, and André Izidoro of Rice University--performed 6,000 simulations of our Solar System's evolution, revealing an unexpected detail about Jupiter and Saturn's original relationship.

Jupiter in its infancy was thought to orbit the Sun three times for every two orbits that Saturn completed, a 3:2 resonance. But this arrangement cannot satisfactorily explain the configuration of the giant planets that we see today. The team's models showed that a ratio of two Jupiter orbits to one Saturnian orbit more consistently produced results that look like our familiar planetary architecture.
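For readers curious about what those ratios imply for spacing, Kepler's third law ties orbital period to distance: the period scales as the semi-major axis to the 3/2 power. A short sketch of that standard two-body arithmetic (the resonance ratios come from the article; the present-day distances are the familiar approximate values of 5.2 and 9.6 astronomical units):

    # Kepler's third law: P^2 is proportional to a^3, so a scales as P^(2/3).
    # Period ratios below are Saturn's orbital period divided by Jupiter's.
    for name, period_ratio in [("3:2 resonance", 3 / 2), ("2:1 resonance", 2 / 1)]:
        axis_ratio = period_ratio ** (2 / 3)
        print(f"{name}: Saturn orbits at {axis_ratio:.2f} x Jupiter's distance")

    # 3:2 gives about 1.31x and 2:1 about 1.59x, while today's ratio is roughly
    # 9.6 AU / 5.2 AU = 1.84x -- both giants have moved since any early resonance.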

"This indicates that while our Solar System is a bit of an oddball, it wasn't always the case," explained Clement, who is presenting the team's work at the American Astronomical Society's Division for Planetary Sciences virtual meeting. "What's more, now that we've established the effectiveness of this model, we can use it to help us look at the formation of the terrestrial planets, including our own, and to perhaps inform our ability to look for similar systems elsewhere that could have the potential to host life."

The model also showed that the positions of Uranus and Neptune were shaped by the mass of the Kuiper belt--an icy region on the Solar System's edges composed of dwarf planets and planetoids of which Pluto is the largest member--and by an ice giant planet that was kicked out in the Solar System's infancy.