Geology professor Lijun Liu used computer simulations to study the origins of the Yellowstone supervolcano.  Photo by L. Brian Stauffer

Understanding the complex geological processes that form supervolcanoes could ultimately help geologists determine what triggers their eruptions. A new study using an advanced supercomputer model casts doubt on previously held theories about the Yellowstone supervolcano’s origins, adding to the mystery of Yellowstone’s formation.

“Our model covered the entire history of Yellowstone volcanic activities,” said Lijun Liu, a geology professor at the University of Illinois. Liu’s supercomputer model spanned the past 40 million years, beginning well before the earliest signs of Yellowstone’s volcanism.

Yellowstone is one of the largest remaining active supervolcanoes. True to its name, a supervolcano is capable of erupting on a much larger scale than an ordinary volcano. The origins of Yellowstone are still much debated. One of the most prevalent views is that the Yellowstone supervolcano was formed by a vertical column of hot rock rising from the top of the Earth’s core, known as a mantle plume.

“The majority of previous studies have relied on conceptual, idealized models, which are not physically and geologically accurate,” Liu said. Some recent studies reproduced key geophysical factors in a laboratory setting, including a rising plume and a sinking oceanic plate. However, these studies failed to account for the comprehensive set of geological variables that change over time, influencing the volcanic history.

“Our physical model is more sophisticated and realistic than previous studies, because we simultaneously consider many more relevant dynamic processes,” Liu said.

Using the supercomputer at the U. of I., one of the fastest in the world, Liu’s team created a model that replicated both the plate tectonic history of the surface and the geophysical image of the Earth’s interior. This study is the first to use a high-performance supercomputer to interpret the layers of complicated geophysical data underlying Yellowstone, Liu said.

The main goal of the study was to examine whether the initiation and subsequent development of the Yellowstone volcanic system was driven by a mantle plume. The simulated data showed that the plume was blocked from traveling upward toward the surface by ancient tectonic plates, meaning that the plume could not have played a significant role in forming Yellowstone, Liu said.

The researchers published their findings in the journal Geophysical Research Letters.

The researchers also examined many other factors that could have played a role in forming Yellowstone. These simulations discounted most of the other theories of Yellowstone’s origins, Liu said. As a result, formation of the Yellowstone volcanic system remains mysterious.

Supervolcanoes are hazardous natural phenomena that evoke public concern, partly because their formation is not well understood. While this area of research is still far from predicting eruptions, Liu said, improving the fundamental understanding of the underlying dynamics of supervolcano formation is key to many future applications of relevant geophysical knowledge.

“This research indicates that we need a multidisciplinary approach to understand complicated natural processes like Yellowstone,” Liu said. “I know people like simple models, but the Earth is not simple.”

An LED light (left) and an image of the same light produced using the Phase Stretch Transform-based algorithm.

A UCLA Engineering research group has made public the computer code for an algorithm that helps computers process images at high speeds and “see” them in ways that human eyes cannot. The researchers say the code could eventually be used in face, fingerprint and iris recognition for high-tech security, as well as in self-driving cars’ navigation systems or for inspecting industrial products.

The algorithm performs a mathematical operation that identifies objects’ edges and then detects and extracts their features. It also can enhance images and recognize objects’ textures.
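That operation can be sketched compactly: transform the image to the frequency domain, multiply its spectrum by a nonlinear phase kernel, transform back, and read edge information out of the phase of the result. The Python below is only a minimal illustration of that idea; the kernel shape and the `strength` and `warp` parameters are assumptions for demonstration, not the researchers' released implementation.

```python
import numpy as np

def phase_stretch_sketch(image, strength=0.5, warp=12.0):
    """Toy frequency-domain phase filter in the spirit of the algorithm.

    The image spectrum is multiplied by a nonlinear phase kernel and
    transformed back; the phase of the result tends to peak at edges.
    Kernel shape, `strength`, and `warp` are illustrative assumptions.
    """
    rows, cols = image.shape
    u = np.fft.fftfreq(rows)[:, None]            # vertical frequencies
    v = np.fft.fftfreq(cols)[None, :]            # horizontal frequencies
    r = np.sqrt(u**2 + v**2)                     # radial frequency
    kernel = warp * r * np.arctan(warp * r) - 0.5 * np.log1p((warp * r) ** 2)
    kernel = strength * kernel / kernel.max()    # normalize the peak phase
    spectrum = np.fft.fft2(image.astype(float))
    filtered = np.fft.ifft2(spectrum * np.exp(-1j * kernel))
    return np.angle(filtered)                    # phase map; edges stand out

# Example: synthetic bright square on a gray background.
test = np.full((128, 128), 0.2)
test[40:90, 40:90] = 1.0
phase = phase_stretch_sketch(test)
print("phase range:", float(phase.min()), "to", float(phase.max()))
```

In a full pipeline, the phase map would then be thresholded and cleaned up, for instance with morphological operations, to yield a binary edge map.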

The algorithm was developed by a group led by Bahram Jalali, a UCLA professor of electrical engineering and holder of the Northrop Grumman Chair in Optoelectronics, and senior researcher Mohammad Asghari.

It is available for free download on two open source platforms, GitHub and MATLAB File Exchange. Making it available as open source code will allow researchers to work together to study, use and improve the algorithm, and to freely modify and distribute it. It also will enable users to incorporate the technology into computer vision, pattern recognition and other image-processing applications.

The Phase Stretch Transform algorithm, as it is known, is a physics-inspired computational approach to processing images and information. The algorithm grew out of UCLA research on a technique called photonic time stretch, which has been used for ultrafast imaging and detecting cancer cells in blood.

The algorithm also helps computers see features of objects that aren’t visible using standard imaging techniques. For example, it might be used to detect an LED lamp’s internal structure, which — using conventional techniques — would be obscured by the brightness of its light, and it can see distant stars that would normally be invisible in astronomical images.

The research that led to the algorithm’s development was funded by the Office of Naval Research through the Optical Computing Multi-disciplinary University Research Initiative program.

Hiroshima University, the National Institute of Information and Communications Technology, and Panasonic Corporation announced the development of a terahertz (THz) transmitter capable of transmitting at a per-channel data rate of over ten gigabits per second on multiple channels at around 300 GHz. The aggregate multi-channel data rate exceeds one hundred gigabits per second. The transmitter was implemented as a silicon CMOS integrated circuit, which gives it a great advantage for commercialization and consumer use.

This technology could open a new frontier in wireless communication, with data rates ten times higher than current technology allows. Details of the technology were presented at the International Solid-State Circuits Conference (ISSCC) 2016, held from January 31 to February 4 in San Francisco, California.

The THz band is a new and vast frequency resource not currently exploited for wireless communications. Its frequencies are even higher than those used by the millimeter-wave wireless local area network (from 57 GHz to 66 GHz), and the available bandwidths are much wider. Since the speed of a wireless link is proportional to the bandwidth in use, THz is ideally suited to ultrahigh-speed communications. The research group has developed a transmitter that covers the frequency range from 275 GHz to 305 GHz. This frequency range is currently unallocated, and its future frequency allocation is to be discussed at the World Radiocommunication Conference (WRC) 2019 under the International Telecommunication Union Radiocommunication Sector (ITU-R).

Today, most wireless communication technologies use lower frequencies (5 GHz or below) with high-order digital modulation schemes, such as quadrature amplitude modulation (QAM), to enhance data rates within the limited bandwidths available. The research group has successfully demonstrated that QAM is feasible at 300 GHz with CMOS and that THz wireless technology could offer a serious boost in wireless communication speed.
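To see how bandwidth and modulation order combine into figures like these, a back-of-envelope calculation helps. The channel count, per-channel bandwidth, roll-off and QAM order below are assumptions chosen for illustration, not the parameters reported by the research group.

```python
from math import log2

def qam_channel_rate_gbps(bandwidth_ghz, qam_order, rolloff=0.25):
    """Gross bit rate of one single-carrier QAM channel.

    The symbol rate is bandwidth / (1 + rolloff); each symbol carries
    log2(qam_order) bits. Coding and framing overhead are ignored.
    """
    symbol_rate_gbaud = bandwidth_ghz / (1.0 + rolloff)
    return symbol_rate_gbaud * log2(qam_order)

channels = 6                                        # assumed channel count
per_channel = qam_channel_rate_gbps(bandwidth_ghz=5.0, qam_order=16)
print(f"per channel: {per_channel:.1f} Gbit/s")             # 16.0 Gbit/s
print(f"aggregate:   {channels * per_channel:.0f} Gbit/s")  # 96 Gbit/s
```

With assumptions like these, a handful of channels spread across the 275 GHz to 305 GHz range already approaches the aggregate rate reported above.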

"Now THz wireless technology is armed with very wide bandwidths and QAM-capability. The use of QAM was a key to achieving 100 gigabits per second at 300 GHz," said Prof. Minoru Fujishima, Graduate School of Advanced Sciences of Matter, Hiroshima University.

"Today, we usually talk about wireless data-rates in megabits per second or gigabits per second. But I foresee we'll soon be talking about terabits per second. That's what THz wireless technology offers. Such extreme speeds are currently confined in optical fibers. I want to bring fiber-optic speeds out into the air, and we have taken an important step toward that goal," he added.

The research group plans to further develop 300-GHz ultrahigh-speed wireless circuits.

"We plan to develop receiver circuits for the 300-GHz band as well as modulation and demodulation circuits that are suitable for ultrahigh-speed communications," said Prof. Fujishima.

The Ohio Supercomputer Center (OSC) soon will boost scientific and industrial discovery and innovation even more effectively across the state by offering researchers the services of a powerful new supercomputer, to be built later this year by Dell.

Chancellor John Carey today announced that the new system will increase the center’s total computing capacity by a factor of four and its storage capacity by a factor of three. The $9.7 million investment, which received approval from the State Controlling Board in January, is part of a $12 million appropriation included in the 2014-15 Ohio biennial capital budget. The remainder of the appropriation is targeted for ancillary systems and facilities upgrades to support the new supercomputer.

“From the time the Ohio Supercomputer Center was created in 1987, OSC has been charged with advancing research in academia and industry across the state through the provision of high performance computing services,” said Carey. “Deploying this new system at the center will give Ohio researchers a powerful new tool with which they can make amazing discoveries and innovative breakthroughs.”

OSC is a member of the Ohio Technology Consortium, the technology and information arm of the Ohio Department of Higher Education. The center currently offers computational services via three supercomputer clusters: the HP/Intel Ruby Cluster, the HP/Intel Oakley Cluster and the IBM/AMD Glenn Cluster. The Glenn Cluster and part or all of Oakley will be retired soon to make sufficient space and power available for the new supercomputer.

“This major acquisition will make an enormous positive impact on the work of our clients, both academic and industrial,” said David Hudak, Ph.D., interim executive director of OSC. “Our current systems are running near peak capacity most of the time. Ohio researchers are eager for this massive increase in computing power and storage space.”

Caption: This image shows a slice through the three-dimensional simulation volume after 105 days in the common envelope. In the orbital plane, the companion star and the red giant core circle around each other. Credit: Sebastian Ohlmann / HITS

HITS astrophysicists use new methods to simulate the common-envelope phase of binary stars, discovering dynamic irregularities that may help to explain how supernovae evolve

When we look at the night sky, we see stars as tiny points of light eking out a solitary existence at immense distances from Earth. But appearances are deceptive. More than half the stars we know of have a companion, a second nearby star that can have a major impact on its partner. The interplay within these so-called binary star systems is particularly intense when the two stars are going through a phase in which they are surrounded by a common envelope consisting of hydrogen and helium. Compared to the overall time stars take to evolve, this phase is extremely short, so astronomers have great difficulty observing and hence understanding it. This is where theoretical models based on highly compute-intensive simulations come in. Research into this phenomenon is relevant to understanding a number of stellar events such as supernovae.

Using new methods, astrophysicists Sebastian Ohlmann, Friedrich Roepke, Ruediger Pakmor, and Volker Springel of the Heidelberg Institute for Theoretical Studies (HITS) have now taken a step forward in modeling this phenomenon. As they report in The Astrophysical Journal Letters, the scientists have successfully used simulations to discover dynamic irregularities that occur during the common-envelope phase and are crucial for the subsequent existence of binary star systems. These so-called instabilities change the flow of matter inside the envelope, thus influencing the stars' distance from one another and determining, for example, whether a supernova will ensue and, if so, what kind it will be.

Sebastian Ohlmann, Physics of Stellar Objects group (Photo: HITS)

The article is the fruit of collaboration between two HITS research groups, the Physics of Stellar Objects (PSO) group and the Theoretical Astrophysics group (TAP). Prof. Volker Springel's Arepo code for hydrodynamic simulations was used and adapted for the modeling. It solves the equations on a moving mesh that follows the mass flow, and thus enhances the accuracy of the model.
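Arepo itself is a large production code, but the benefit of a mesh that follows the flow can be seen in a toy one-dimensional advection problem: on a fixed grid, a first-order upwind scheme smears a sharp profile through numerical diffusion, while cells that ride along with the flow transport it unchanged. The Python sketch below is only that illustration, under assumed grid and flow parameters, and is not a stand-in for Arepo's moving-mesh hydrodynamics.

```python
import numpy as np

# Toy 1D advection of a top-hat profile at constant speed, comparing a fixed
# Eulerian grid (first-order upwind) with cells that move with the flow.
n = 200                       # number of grid cells
dx = 1.0 / n                  # cell width on the periodic unit interval
velocity = 1.0                # constant advection speed
dt = 0.4 * dx / velocity      # Courant number 0.4 keeps upwind stable
steps = 1000

x = np.linspace(0.0, 1.0, n, endpoint=False)
profile = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)

# Fixed grid: the upwind update smears the discontinuity step by step.
fixed = profile.copy()
for _ in range(steps):
    fixed -= velocity * dt / dx * (fixed - np.roll(fixed, 1))

# Moving grid: cell positions translate with the flow; cell values are
# untouched, so the profile is advected without numerical diffusion.
moving_x = (x + velocity * dt * steps) % 1.0
moving = profile.copy()

print("fixed-grid peak after advection :", round(float(fixed.max()), 3))   # noticeably below 1
print("moving-grid peak after advection:", round(float(moving.max()), 3))  # exactly 1.0
```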

Two stars, one envelope

More than half the stars we know of have evolved in binary star systems. The energy for their luminosity comes from the nuclear fusion of hydrogen at the core of the stars. As soon as the hydrogen fueling the nuclear fusion is exhausted in the heavier of the two stars, the star core shrinks. At the same time, a highly extended stellar envelope evolves, consisting of hydrogen and helium. The star becomes a red giant.

As the envelope of the red giant goes on expanding, the companion star draws the envelope to itself via gravity, and part of the envelope flows towards it. In the course of this process, the two stars come closer to one another. Finally, the companion star may fall into the envelope of the red giant, and both stars are then surrounded by a common envelope. As the core of the red giant and the companion draw closer together, the gravity between them releases energy that passes into the common envelope. As a result, the envelope is ejected and mixes with interstellar matter in the galaxy, leaving behind a close binary star system consisting of the core of the giant and the companion star.
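The energy bookkeeping behind this ejection can be illustrated with elementary orbital mechanics: as the giant's core and the companion spiral closer, the orbit's binding energy becomes more negative, and the difference is available to unbind the envelope. The masses and separations in the sketch below are placeholder values chosen only to show the arithmetic, not results from the HITS simulations.

```python
# Rough estimate of the orbital energy released as two stellar cores spiral
# closer inside a common envelope. All input values are illustrative.
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
R_SUN = 6.957e8         # solar radius, m

def orbital_energy(m1, m2, separation):
    """Binding energy of a circular two-body orbit (negative by convention)."""
    return -G * m1 * m2 / (2.0 * separation)

m_core, m_companion = 0.4 * M_SUN, 1.0 * M_SUN        # assumed masses
a_initial, a_final = 50.0 * R_SUN, 5.0 * R_SUN        # assumed in-spiral

released = (orbital_energy(m_core, m_companion, a_initial)
            - orbital_energy(m_core, m_companion, a_final))
print(f"orbital energy deposited in the envelope: {released:.2e} J")  # ~1e40 J
```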

The path to stellar explosion

Sebastian Ohlmann of the PSO group explains why this common-envelope phase is important for our understanding of the way various star systems evolve: "Depending on what the system of the common envelope looks like initially, very different phenomena may ensue in the aftermath, such as thermonuclear supernovae." Ohlmann and colleagues are investigating the run-up to these stellar explosions, which are among the most luminous events in the universe and can light up a whole galaxy. But modeling the systems that can lead to such explosions is bedeviled by major uncertainty in the description of the common-envelope phase. One reason is that the core of the giant is anything between a thousand and ten thousand times smaller than the envelope, so spatial and temporal scale differences complicate the modeling process and make approximations necessary. The methodologically innovative simulations performed by the Heidelberg scientists are a first step toward a better understanding of this phase.
