DKFZ deploys StorageMAP to manage 27 petabytes of unstructured data

The German Cancer Research Center (DKFZ)'s Working Group Leader for Central Servers, Tobias Reber, highlights the value of Datadobi's StorageMAP software in addressing the center's data management challenges, praising its ease of deployment and its ability to provide a comprehensive view of DKFZ's data landscape.
 

uOttawa-built models put the age of the universe at 26.7 billion years

Our universe could be twice as old as current estimates, according to a new study that challenges the dominant cosmological model and sheds new light on the so-called “impossible early galaxy problem.”

“Our newly-devised model stretches the galaxy formation time by several billion years, making the universe 26.7 billion years old, and not 13.7 as previously estimated,” says author Rajendra Gupta, adjunct professor of physics in the Faculty of Science at the University of Ottawa.

For years, astronomers and physicists have calculated the age of our universe by measuring the time elapsed since the Big Bang and by studying the oldest stars based on the redshift of light coming from distant galaxies. In 2021, thanks to new techniques and advances in technology, the age of our universe was thus estimated at 13.797 billion years using the Lambda-CDM concordance model.
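
To make the standard approach concrete, below is a minimal sketch of the Lambda-CDM age calculation, integrating the Friedmann equation over redshift. The parameter values are illustrative Planck-like figures and the code is ours, not the study's:

```python
# Minimal sketch: age of a flat Lambda-CDM universe from the Friedmann
# equation, t0 = (1/H0) * integral_0^inf dz / [(1+z) * E(z)].
# Parameters are illustrative Planck-like values, not the study's own.
import numpy as np
from scipy.integrate import quad

H0 = 67.4               # Hubble constant, km/s/Mpc
Om, OL = 0.315, 0.685   # matter and dark-energy density parameters

def E(z):
    """Dimensionless expansion rate H(z)/H0 for a flat Lambda-CDM universe."""
    return np.sqrt(Om * (1 + z)**3 + OL)

integral, _ = quad(lambda z: 1.0 / ((1 + z) * E(z)), 0, np.inf)

# 1/H0 in gigayears: 1 / (1 km/s/Mpc) = 977.8 Gyr
t0_gyr = integral * 977.8 / H0
print(f"Lambda-CDM age: {t0_gyr:.2f} Gyr")  # ~13.8 Gyr
```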

However, many scientists have been puzzled by the existence of stars, such as the Methuselah star, that appear to be older than the estimated age of our universe, and by the discovery of early galaxies in an advanced state of evolution made possible by the James Webb Space Telescope. These galaxies, existing a mere 300 million years after the Big Bang, appear to have a level of maturity and mass typically associated with billions of years of cosmic evolution. Furthermore, they’re surprisingly small in size, adding another layer of mystery to the equation.

Fritz Zwicky’s tired light theory proposes that the redshift of light from distant galaxies is due to the gradual loss of energy by photons over vast cosmic distances. However, it was seen to conflict with observations. Yet Gupta found that “by allowing this theory to coexist with the expanding universe, it becomes possible to reinterpret the redshift as a hybrid phenomenon, rather than purely due to expansion.”
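
As a toy illustration of the hybrid idea, redshift factors from independent mechanisms compound multiplicatively, so attributing part of an observed redshift to photon energy loss shrinks the expansion-only component. The split below is hypothetical, not Gupta's actual parameterization:

```python
# Toy sketch: two redshift mechanisms acting in series compound as
# (1 + z_obs) = (1 + z_expansion) * (1 + z_tired_light).
# The 50/50 split below is hypothetical, not Gupta's fitted model.

def combined_redshift(z_expansion: float, z_tired_light: float) -> float:
    """Observed redshift when expansion and tired light act in series."""
    return (1 + z_expansion) * (1 + z_tired_light) - 1

# A galaxy observed near z = 10: if tired light contributes part of the
# shift, the expansion-only component is much smaller, implying more
# cosmic time for the galaxy to have formed.
print(combined_redshift(2.3, 2.3))  # ~9.9
```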

In addition to Zwicky’s tired light theory, Gupta introduces the idea of evolving “coupling constants,” as hypothesized by Paul Dirac. Coupling constants are fundamental physical constants that govern the interactions between particles. According to Dirac, these constants might have varied over time. By allowing them to evolve, the timeframe for forming early galaxies observed by the Webb telescope at high redshifts can be extended from a few hundred million years to several billion years. This provides a more feasible explanation for the advanced level of development and mass observed in these ancient galaxies.

Moreover, Gupta suggests that the traditional interpretation of the “cosmological constant,” which represents dark energy responsible for the universe's accelerating expansion, needs revision. Instead, he proposes a constant that accounts for the evolution of the coupling constants. This modification in the cosmological model helps address the puzzle of the small galaxy sizes observed in the early universe, bringing the model's predictions into better agreement with observations.

Volcano erupting near El Paso, La Palma, Spain. Credit: Andreas Weibel via Getty Images

Cambridge's simulations show the impacts of volcanic eruptions on climate are underestimated

Researchers have found that the cooling effect that volcanic eruptions have on Earth's surface temperature is likely underestimated by a factor of two, and potentially as much as a factor of four, in common climate projections.

While this effect is far from enough to offset the effects of global temperature rise caused by human activity, the researchers, led by the University of Cambridge, say that small-magnitude eruptions are responsible for as much as half of all the sulfur gases emitted into the upper atmosphere by volcanoes. 

The results suggest that improving the representation of volcanic eruptions of all magnitudes will in turn make climate projections more robust.

Where and when a volcano erupts is not something that humans can control, but volcanoes do play an important role in the global climate system. When volcanoes erupt, they can spew sulfur gases into the upper atmosphere, which form tiny particles called aerosols that reflect sunlight back into space. For very large eruptions, such as Mount Pinatubo in 1991, the volume of volcanic aerosols is so large that it can single-handedly cause global temperatures to drop.

However, these large eruptions only happen a handful of times per century – small-magnitude eruptions, by contrast, happen every year or two.

“Compared with the greenhouse gases emitted by human activity, the effect that volcanoes have on the global climate is relatively minor, but it’s important that we include them in climate models, to accurately assess temperature changes in the future,” said May Chim, a Ph.D. candidate in the Yusuf Hamied Department of Chemistry.

Standard climate projections, such as the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report, assume that explosive volcanic activity over 2015–2100 will be at the same level as the 1850–2014 period, and overlook the effects of small-magnitude eruptions.

“These projections mostly rely on ice cores to estimate how volcanoes might affect the climate, but smaller eruptions leave no detectable trace in ice-core records,” said Chim. “We wanted to make better use of satellite data to fill the gap and account for eruptions of all magnitudes.”

Using the latest ice-core and satellite records, Chim and her colleagues from the University of Exeter, the German Aerospace Center (DLR), the Ludwig-Maximilians University of Munich, Durham University, and the UK Met Office, generated 1000 different scenarios of future volcanic activity. They selected scenarios representing lower, median, and high levels of volcanic activity, and then performed climate simulations using the UK Earth System Model.
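
The sketch below illustrates the scenario-generation idea in miniature: eruption counts drawn from a Poisson process, sulfur yields from a heavy-tailed power law, and the resulting futures ranked by activity level. All distributions and parameters here are hypothetical stand-ins, not the team's ice-core- and satellite-derived statistics:

```python
# Toy Monte Carlo of future volcanic activity, 2015-2100. Hypothetical
# Poisson eruption counts and power-law SO2 yields stand in for the
# study's ice-core- and satellite-based statistics.
import numpy as np

rng = np.random.default_rng(42)

def generate_scenario(n_years=86, rate=1.5, so2_min=0.2, alpha=1.8):
    """One possible eruption history: annual stratospheric SO2 (Tg)."""
    yearly_so2 = np.zeros(n_years)
    for year in range(n_years):
        n = rng.poisson(rate)  # eruptions this year
        if n:
            # Classical Pareto with minimum so2_min: frequent small
            # eruptions, rare large ones.
            yearly_so2[year] = (so2_min * (rng.pareto(alpha, n) + 1)).sum()
    return yearly_so2

# Generate 1000 scenarios, then pick low, median, and high activity levels.
totals = np.array([generate_scenario().sum() for _ in range(1000)])
lo, med, hi = np.percentile(totals, [5, 50, 95])
print(f"Total SO2, 2015-2100 (Tg): low={lo:.0f} median={med:.0f} high={hi:.0f}")
```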

Their simulations show that the impacts of volcanic eruptions on climate, including global surface temperature, sea level, and sea ice extent, are underestimated because current climate projections largely underestimate the plausible future level of volcanic activity.

For the median future scenario, they found that the effect of volcanoes on the atmosphere, known as volcanic forcing, is being underestimated in climate projections by as much as 50%, due in large part to the effect of small-magnitude eruptions.

“We found that not only is volcanic forcing being underestimated, but small-magnitude eruptions are actually responsible for as much as half of the volcanic forcing,” said Chim. “These small-magnitude eruptions may not have a measurable effect individually, but collectively, their effect is significant.

“I was surprised to see just how important these small-magnitude eruptions are – we knew they had an effect, but we didn’t know it was so large.”

Although the cooling effect of volcanoes is being underestimated in climate projections, the researchers stress that it does not compare with human-generated carbon emissions.

“Volcanic aerosols in the upper atmosphere typically stay in the atmosphere for a year or two, whereas carbon dioxide stays in the atmosphere for much, much longer,” said Chim. “Even if we had a period of extraordinarily high volcanic activity, our simulations show that it wouldn’t be enough to stop global warming. It’s like a passing cloud on a hot, sunny day: the cooling effect is only temporary.”

The researchers say that fully accounting for the effect of volcanoes can help make climate projections more robust. They are now using their simulations to investigate whether future volcanic activity could threaten the recovery of the Antarctic ozone hole, and in turn, maintain a relatively high level of harmful ultraviolet radiation at the Earth’s surface.

The research was supported in part by the Croucher Foundation, The Cambridge Commonwealth, European & International Trust, the European Union, and the Natural Environment Research Council (NERC), part of UK Research and Innovation (UKRI).

Artificial intelligence brain. Credit: Andriy Onufriyenko via Getty Images

Cambridge builds new type of memory that could greatly reduce energy use, improve performance

Researchers have developed a new design for computer memory that could both greatly improve performance and reduce the energy demands of internet and communications technologies, which are predicted to consume nearly a third of global electricity within the next ten years. 

The researchers, led by the University of Cambridge, developed a device that processes data in a similar way to the synapses in the human brain. The device is based on hafnium oxide, a material already used in the semiconductor industry, and on tiny self-assembled barriers, which can be raised or lowered to allow electrons to pass.

This method of changing the electrical resistance in computer memory devices, and allowing information processing and memory to exist in the same place, could lead to the development of computer memory devices with far greater density, higher performance, and lower energy consumption.

Our data-hungry world has led to a ballooning of energy demands, making it ever more difficult to reduce carbon emissions. Within the next few years, artificial intelligence, internet usage, algorithms, and other data-driven technologies are expected to consume more than 30% of global electricity.  

“To a large extent, this explosion in energy demands is due to shortcomings of current computer memory technologies,” said Dr Markus Hellenbrand, from Cambridge’s Department of Materials Science and Metallurgy. “In conventional computing, there’s memory on one side and processing on the other, and data is shuffled back and forth between the two, which takes both energy and time.”

One potential solution to the problem of inefficient computer memory is a new type of technology known as resistive switching memory. Conventional memory devices are capable of two states: one or zero. A functioning resistive switching memory device, however, would be capable of a continuous range of states – computer memory devices based on this principle would be capable of far greater density and speed.

“A typical USB stick based on the continuous range would be able to hold between ten and 100 times more information, for example,” said Hellenbrand.
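
The density claim follows from simple information arithmetic: a cell that can reliably distinguish N resistance levels stores log2(N) bits. The state counts below are hypothetical, chosen only to illustrate the scaling:

```python
# Sketch: bits per memory cell as a function of distinguishable states.
# The state counts are hypothetical illustrations of the scaling argument.
import math

def bits_per_cell(n_states: int) -> float:
    """Information capacity of a cell resolving n_states resistance levels."""
    return math.log2(n_states)

print(bits_per_cell(2))     # 1.0  -- conventional binary cell
print(bits_per_cell(1024))  # 10.0 -- ~10x the per-cell density
# In practice, larger gains would also come from removing the
# memory-to-processor shuttling described above, not per-cell states alone.
```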

Hellenbrand and his colleagues developed a prototype device based on hafnium oxide, an insulating material that is already used in the semiconductor industry. The issue with using this material for resistive switching memory applications is known as the uniformity problem. At the atomic level, hafnium oxide has no structure, with the hafnium and oxygen atoms randomly mixed, making it challenging to use for memory applications.

However, the researchers found that by adding barium to thin films of hafnium oxide, some unusual structures started to form, perpendicular to the hafnium oxide plane, in the composite material.

These vertical barium-rich ‘bridges’ are highly structured and allow electrons to pass through, while the surrounding hafnium oxide remains unstructured. At the points where these bridges meet the device contacts, an energy barrier is created, which electrons can cross. The researchers were able to control the height of this barrier, which in turn changes the electrical resistance of the composite material.

“This allows multiple states to exist in the material, unlike conventional memory which has only two states,” said Hellenbrand.
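
One schematic way to picture how barrier height sets resistance is a Boltzmann factor for carriers crossing the barrier, under which conductance falls exponentially with barrier height. This is a generic textbook relation, not the paper's fitted transport model:

```python
# Schematic: conductance of a barrier-limited contact falls off as
# exp(-E_b / kT), so a continuously tunable barrier yields a continuum
# of resistance states. Generic relation, not the paper's fitted model.
import math

KT_EV = 0.0259  # thermal energy at room temperature, eV

def relative_conductance(barrier_ev: float) -> float:
    """Boltzmann factor for carriers surmounting a barrier of height E_b."""
    return math.exp(-barrier_ev / KT_EV)

# Raising the barrier by ~60 meV cuts conductance roughly tenfold.
for eb in (0.10, 0.16, 0.22):
    print(f"E_b = {eb:.2f} eV -> relative conductance {relative_conductance(eb):.2e}")
```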

Unlike other composite materials, which require expensive high-temperature manufacturing methods, these hafnium oxide composites self-assemble at low temperatures. The composite material showed high levels of performance and uniformity, making it highly promising for next-generation memory applications.

A patent on the technology has been filed by Cambridge Enterprise, the University’s commercialization arm.

“What’s really exciting about these materials is they can work like a synapse in the brain: they can store and process information in the same place, like our brains can, making them highly promising for the rapidly growing AI and machine learning fields,” said Hellenbrand.

The researchers are now working with industry to carry out larger feasibility studies on the materials, to understand more clearly how the high-performance structures form. Since hafnium oxide is a material already used in the semiconductor industry, the researchers say it would not be difficult to integrate it into existing manufacturing processes.

The research was supported in part by the U.S. National Science Foundation and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

Dr. Andreas Tittl

German physicist Tittl develops a metasurface that enables strong coupling effects between light and transition metal dichalcogenides

The interaction of light and matter on the nanoscale is a vital aspect of nanophotonics. Resonant nanosystems allow scientists to control and enhance electromagnetic energy at volumes smaller than the wavelength of the incident light. As well as allowing sunlight to be captured much more effectively, they also facilitate improved optical wave-guiding and emissions control. The strong coupling of light with electronic excitations in solid-state materials generates hybridized photonic and electronic states, so-called polaritons, which can exhibit interesting properties such as Bose-Einstein condensation and superfluidity.
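
The polariton picture can be summarized with the standard coupled-oscillator model, in which the photon and exciton energies hybridize into upper and lower branches split by the coupling strength. The numbers below are generic, not the WS2 metasurface's measured values:

```python
# Standard two-coupled-oscillator model of strong coupling: eigenvalues
# of [[E_photon, g], [g, E_exciton]] give the polariton branches.
# Energies below are generic examples, not measured WS2 values.
import math

def polariton_energies(e_photon: float, e_exciton: float, g: float):
    """Lower and upper polariton energies (eV) for coupling strength g."""
    mean = 0.5 * (e_photon + e_exciton)
    split = math.sqrt(g**2 + 0.25 * (e_photon - e_exciton)**2)
    return mean - split, mean + split

# At zero detuning the branches are separated by the Rabi splitting 2g;
# strong coupling requires 2g to exceed the system's loss rates.
lower, upper = polariton_energies(2.0, 2.0, 0.05)
print(upper - lower)  # 0.10 eV = 2g
```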

A new study presents progress in the coupling of light and matter on the nanoscale. Researchers led by Dr. Andreas Tittl, a physicist at Ludwig-Maximilians-Universität München (LMU) in Germany, have developed a metasurface that enables strong coupling effects between light and transition metal dichalcogenides (TMDCs). This novel platform is based on photonic bound states in the continuum, so-called BICs, in nanostructured tungsten disulfide (WS2). The simultaneous use of WS2 as the base material for the manufacture of metasurfaces with sharp resonances and as a coupling partner supporting the active material excitation opens up new possibilities for research into polaritonic applications.

An important breakthrough in this research is control of the coupling strength independently of losses within the material. Because the metasurface platform can integrate other TMDCs or excitonic materials without difficulty, it can furnish fundamental insights and practical device concepts for polaritonic applications. Moreover, the concept of the newly developed metasurface provides a foundation for applications in controllable low-threshold semiconductor lasers, photocatalytic enhancement, and quantum computing.