Solving an Earth-sized Jigsaw Puzzle

Petascale simulations by Caltech and University of Texas researchers map the motions of the Earth’s mantle

Cross section showing the adaptively refined mesh, with a finest resolution of about 1 km, in the region from the New Hebrides to Tonga in the southwest Pacific. The refinement occurs both around plate boundaries and dynamically in response to the nonlinear rheology.

On parts of the sea floor, the motion of tectonic plates can be extremely fast. Nowhere is this better displayed than in the southwest Pacific, where one literally sees plates falling into the mantle in a process called trench rollback.

“It’s one of the most spectacular phenomena to happen within the planet’s interior,” exclaimed Michael Gurnis, Professor of Geophysics at the California Institute of Technology and Director of the Seismological Laboratory.

There, on a little microplate in the Kingdom of Tonga, near Australia, two islands converge at a rate of 25 centimeters per year, nearly 10 times faster than the speed of tectonic plates elsewhere.

“This phenomenon has eluded numerical simulation for years,” Gurnis continued. “So when it emerged in our simulations of the whole planet, to be honest, it was a surprise. It shouldn’t have exceeded our expectations, because that was one of our objectives. But when goals are so intractable for years, you wonder whether you’ll ever solve them.”

A canonical view of plate tectonics has existed in geophysics for several decades. But researchers have never been able to simulate the complex dynamics of the entire mantle, especially with sufficient resolution to explore the junctures where the jostling of plates causes earthquakes and volcanic eruptions.

The challenge in simulating plate tectonics is the wide range of time and length scales involved in tectonic motion. Models must capture the coarse structure of the mantle, thousand-kilometer slabs of slowly creeping rock, as well as its fine-grained features, especially the kilometer-scale boundary zones where plates slam against one another, build tension, and rupture.
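
The article does not spell out the governing equations, but mantle flow on these time scales is commonly modeled as an incompressible creeping (Stokes) flow with a viscosity that depends strongly on temperature and strain rate. Purely as an illustration (the symbols and the power-law form below follow textbook convention and are not taken from the team's paper), such a formulation looks like:

```latex
% Illustrative nonlinear Stokes system for mantle flow (textbook form, not the
% paper's exact model): momentum balance, incompressibility, and an effective
% viscosity that weakens with temperature T and strain rate \dot{\varepsilon}.
\begin{align}
  -\nabla \cdot \left[ \eta(T,\dot{\varepsilon})
      \left( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} \right) \right]
      + \nabla p &= \rho(T)\,\mathbf{g}, \\
  \nabla \cdot \mathbf{u} &= 0, \\
  \eta(T,\dot{\varepsilon}) &\propto
      \exp\!\left(\frac{E}{RT}\right)\,\dot{\varepsilon}^{\,(1-n)/n},
\end{align}
```

where u is the flow velocity, p the pressure, and an exponent n > 1 gives the nonlinear (power-law) rheology mentioned in the caption above. It is this sharp weakening of the viscosity near plate boundaries that demands kilometer-scale resolution there.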

Three years ago, researchers from Caltech and The University of Texas at Austin came together to create a computational tool that could model the Earth and answer the most pressing questions in geophysics: What controls the speed of plates? How do microplates interact? How much energy do the plates generate and how does it dissipate?

Using a new geodynamics software package they developed, the researchers have modeled plate motion with greater accuracy than ever before. Their findings were published in the Aug. 27, 2010 issue of Science and featured on the cover. The project is also a finalist for the Gordon Bell Prize (high performance computing's Oscar) at this year's SC10 conference.

Tectonic plate motion (arrows) and viscosity arising from the global mantle flow simulation. Plate boundaries, which can be seen as narrow red lines, are resolved using an adaptively refined mesh with 1 km local resolution. Shown are the Pacific and Australian tectonic plates and the New Hebrides and Tonga microplates.

These detailed simulations were enabled by the availability of petascale high performance computing systems that are part of the National Science Foundation's (NSF) TeraGrid, as well as systems at other government laboratories. Ranger, at the Texas Advanced Computing Center (TACC), and Jaguar, at Oak Ridge National Laboratory (ORNL), provided the computing power required to simulate the dynamics of the mantle faster and with greater resolution than was possible even a few years ago.

In 2007, NSF awarded the team a Petascale Applications (PetaApps) grant to study the mantle simulation problem. The PetaApps program was designed to bring together interdisciplinary teams of researchers with the goal of creating scalable algorithms to make use of emerging petascale supercomputers. This, in turn, would allow researchers to tackle scientific grand challenge problems.

Over the course of three years, the researchers engineered a suite of algorithms from “a clean sheet of paper” that address the difficulties of the global mantle dynamics problem and can use computing resources at the largest scale.

A core component of the software performs a process called adaptive mesh refinement (AMR), which focuses the simulation on the parts of the model that matter most. Higher resolution (and more computing power) is applied to these areas, while regions of less activity are resolved more coarsely.
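
As a rough illustration of the idea, and emphatically not the team's Rhea code, the sketch below refines a simple quadtree mesh toward an assumed plate boundary at x = 0.5, so cells pile up where the action is while the rest of the domain stays coarse:

```cpp
// Conceptual AMR sketch (not the project's code): keep splitting square cells
// that sit near an assumed plate boundary at x = 0.5, down to a prescribed
// minimum cell size, so resolution concentrates where it matters.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Cell {
    double x, y, size;  // lower-left corner and edge length of a square cell
};

// Distance from the cell to a hypothetical plate boundary along x = 0.5
// (zero if the cell touches it). A real code would use an error or activity
// indicator computed from the solution instead.
double distance_to_boundary(const Cell &c) {
    if (c.x <= 0.5 && 0.5 <= c.x + c.size) return 0.0;
    return std::min(std::abs(c.x - 0.5), std::abs(c.x + c.size - 0.5));
}

int main() {
    std::vector<Cell> mesh = {{0.0, 0.0, 1.0}};  // one coarse cell covering the unit square
    const double min_size = 1.0 / 64.0;          // finest resolution allowed

    bool changed = true;
    while (changed) {
        changed = false;
        std::vector<Cell> next;
        for (const Cell &c : mesh) {
            // Refine cells within one cell-width of the boundary, until min_size.
            if (distance_to_boundary(c) < c.size && c.size > min_size) {
                const double h = 0.5 * c.size;   // split into four children
                for (int i = 0; i < 2; ++i)
                    for (int j = 0; j < 2; ++j)
                        next.push_back({c.x + i * h, c.y + j * h, h});
                changed = true;
            } else {
                next.push_back(c);
            }
        }
        mesh = next;
    }
    std::printf("adaptive mesh: %zu cells (a uniform 1/64 mesh would need 4096)\n",
                mesh.size());
    return 0;
}
```

In the team's software the refinement is driven by the solution itself, particularly the nonlinear rheology near plate boundaries noted in the first figure caption, and it runs in parallel across thousands of processors. The payoff is the same: fine cells only where the physics demands them.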

“By using adaptive mesh refinement to reduce the number of unknowns by three orders of magnitude, we can resolve both fine scale and large-scale features simultaneously,” said Omar Ghattas, Jackson Chair in Computational Geosciences in the Departments of Geological Sciences and Mechanical Engineering and in the Institute for Computational Engineering and Sciences at The University of Texas at Austin.

The researchers are quick to point out that the project is not simply a computational success. “Models are important, but this isn’t just a modeling exercise,” Gurnis said. “The models are intimately coupled with observations, and validated as well.”

The geophysics simulations scale to over 100,000 processing cores, and the AMR algorithms scale to over 200,000 cores, the limit of today's largest systems. A routine run on Ranger (the researchers' main resource) used 4,000 to 8,000 processors simultaneously for 15 hours. In total, the project has used more than 12 million processor hours on TeraGrid computing systems.

Earlier this year, the research team, which includes UT-Austin research scientists Carsten Burstedde, Georg Stadler, and Lucas Wilcox, carried out global geodynamic simulations of the Earth’s mantle on Ranger with 1 km resolution at plate boundaries. These simulations closely matched observational data and answered long-standing geophysics questions.

“It’s almost as if we jumped over a whole generation of computation in the sense of being able to reproduce all of the fine-scaled details, the deformations and earthquakes, which occur regionally, but that come out of a global model,” Gurnis said.

Other research groups have matched the motion of plates before. But it was in areas like the southwest Pacific, where the simulations replicated the dynamics of many microplates acting together, that the code proved its capabilities. Another important outcome was simulations that closely matched the deformation in Chile and depicted the subsurface forces at work in the powerful February 2010 earthquake.

The group’s simulations also overturned a long-held notion in geophysics that the bending of submerged plates dissipates most of the energy in the Earth. “We discovered that there is an enormous amount of dissipation where plates bend, but in terms of the total energy release, it’s actually a small factor,” Gurnis explained. “This means there is dissipation throughout almost the whole interior and not just in one place.”

Team members from the Mantle Convection PetaApps project with visualizations from the Rhea mantle convection code on TACC's Stallion tiled display. From left to right: Omar Ghattas (UT-Austin), Lucas Wilcox (UT-Austin), Carsten Burstedde (UT-Austin), Georg Stadler (UT-Austin), Michael Gurnis (Caltech). Team member not pictured: Laura Alisic (Caltech). Photo by Leigh Brodie.

The researchers believe the AMR algorithms could be valuable tools for a wide range of multiscale problems. For example, the team is applying their algorithms to simulate the dynamics of ice sheets in Antarctica, which, like the Earth’s mantle, are modeled as a slowly flowing nonlinear fluid.

“We’re finding that the methodology we’ve developed for the mantle convection project has implications and direct applications to this very important problem in climate science,” Ghattas said.

An important aspect of the PetaApps program is an interest in sharing successful software with the broader scientific community. Thus, the AMR library has been incorporated into the popular open source finite element code, deal.II. This will enable researchers to use robust, parallel AMR techniques seamlessly in applications that are characterized by a wide range of time and length scales, such as problems in astrophysics or materials processing.
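
For readers who want a feel for what that looks like, here is a minimal sketch, assuming a recent deal.II release built with MPI and its p4est mesh backend; the class and function names are deal.II's, but the refinement criterion is a made-up placeholder rather than anything from the project:

```cpp
// Minimal sketch of parallel adaptive refinement through deal.II's distributed
// triangulation (assumes deal.II built with MPI and p4est). The criterion used
// to flag cells here is a hypothetical stand-in for a real error indicator.
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

#include <cmath>

int main(int argc, char **argv)
{
  using namespace dealii;
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  // The mesh is partitioned across MPI ranks; p4est manages the distribution.
  parallel::distributed::Triangulation<2> triangulation(MPI_COMM_WORLD);
  GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(4);

  // Flag locally owned cells near an assumed feature at x = 0.5 for refinement.
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned() &&
        std::abs(cell->center()[0] - 0.5) < 0.05)
      cell->set_refine_flag();

  // The library refines the flagged cells and redistributes the mesh in parallel.
  triangulation.execute_coarsening_and_refinement();
  return 0;
}
```

Each MPI rank owns only its share of the cells; after the flagged cells are refined, the library rebalances the mesh across processes automatically, which is what makes this kind of adaptivity practical at large core counts.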

The impact of the project goes further. As a successful PetaApps project, the collaboration is a model for multi-disciplinary computational science research, and demonstrates the significant scientific impact that such collaborations can have.

“NSF created this program to challenge groups to gear up for the petascale era,” Ghattas said. “They envisioned teams would work out the scientific computing issues underlying grand challenge simulations requiring next generation supercomputers. I think this project is a good example of the sort of things that they had in mind.”