Pivotal technique harnesses cutting-edge AI capabilities to model and map the natural environment

UK scientists have developed a pioneering new technique that harnesses the cutting-edge capabilities of AI to model and map the natural environment in intricate detail. 

A team of experts, including Charlie Kirkwood from the University of Exeter, has created a sophisticated new approach to modeling the Earth’s natural features with greater detail and accuracy. 

The new technique can recognize intricate features and aspects of the terrain far beyond the capabilities of more traditional methods and use these to generate enhanced-quality environmental maps. 

Crucially, the new system could also pave the way to the discovery of relationships within the natural environment that may help tackle some of the greatest climate and environmental issues of the 21st century. 

The study is published in the academic journal Mathematical Geosciences, as part of a special issue on geostatistics and machine learning. 

Modeling and mapping the environment is a lengthy, time-consuming, and expensive process. Cost limits the number of observations that can be obtained, which means that creating comprehensive spatially-continuous maps depends upon filling in the gaps between these observations.  

Scientists can use a range of information sources to help fill in these observation gaps, such as terrain elevation data and satellite imagery. However, conventional modeling methods rely on users to manually engineer predictive features from these datasets – for example generating slope angles and curvatures from terrain elevation data in the hope that these can help explain the spatial distribution of the variable being mapped. 
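As a rough illustration of that kind of manual feature engineering (not code from the study), the sketch below derives a slope angle and a crude curvature proxy from a gridded digital elevation model; the grid spacing and the synthetic elevation data are assumptions made purely for the example.

```python
# Minimal sketch of manual feature engineering from terrain elevation data,
# as used in conventional mapping approaches (illustrative only).
import numpy as np

def terrain_features(dem, cell_size=50.0):
    """Derive slope angle and a simple curvature proxy from a gridded DEM.

    dem       : 2D array of elevations (metres)
    cell_size : grid spacing (metres) -- an assumed value for illustration
    """
    # First derivatives of elevation along the grid's y (rows) and x (columns) axes
    dz_dy, dz_dx = np.gradient(dem, cell_size)

    # Slope angle in degrees: arctan of the magnitude of the elevation gradient
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # Crude curvature proxy: Laplacian of the elevation surface
    d2z_dy2, _ = np.gradient(dz_dy, cell_size)
    _, d2z_dx2 = np.gradient(dz_dx, cell_size)
    curvature = d2z_dx2 + d2z_dy2

    return slope, curvature

# Example usage on a synthetic 100 x 100 elevation grid (random, for illustration)
dem = np.random.rand(100, 100) * 200.0
slope, curvature = terrain_features(dem)
```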

However, scientists believe there are likely to be many more nuanced relationships at play within the natural environment that models based on traditional manual feature-engineering approaches may simply miss. 

The pioneering new AI approach, developed in the study, poses environmental information extraction as an optimization problem. Doing so allows it to automatically recognize and make use of relationships that may otherwise go unnoticed and unutilized by humans using more traditional modeling methods.  
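To make the idea concrete, here is a minimal, hypothetical sketch of what posing feature extraction as an optimization problem can look like: a small convolutional encoder learns its own terrain features jointly with the prediction task, so predictive relationships are discovered by the optimizer rather than engineered by hand. The architecture, layer sizes, and synthetic data are illustrative assumptions, not the model used in the study.

```python
# Illustrative sketch (not the authors' code): learning predictive features from
# auxiliary gridded data end-to-end, instead of hand-engineering them.
import torch
import torch.nn as nn

class LearnedFeatureRegressor(nn.Module):
    def __init__(self, patch_channels=1, n_features=32):
        super().__init__()
        # Convolutional encoder: replaces manual slope/curvature derivation by
        # letting optimization discover whatever terrain features are predictive.
        self.encoder = nn.Sequential(
            nn.Conv2d(patch_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_features, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regression head combining learned features with location coordinates
        self.head = nn.Sequential(nn.Linear(n_features + 2, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, patches, coords):
        return self.head(torch.cat([self.encoder(patches), coords], dim=1))

# Training loop sketch: feature extraction and prediction are optimized jointly
model = LearnedFeatureRegressor()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
patches = torch.randn(8, 1, 32, 32)   # synthetic terrain patches around observations
coords = torch.randn(8, 2)            # synthetic easting/northing coordinates
targets = torch.randn(8, 1)           # synthetic observed values
for _ in range(10):
    loss = nn.functional.mse_loss(model(patches, coords), targets)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```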

In addition to improving map quality, this also unlocks the potential for the discovery of new relationships in the natural environment by AI, while simultaneously eliminating huge amounts of trial-and-error experimentation in the modeling process. 

Charlie Kirkwood, a postgraduate student at the University of Exeter, said: “To be useful for decision making, we need our models to provide answers that are as specific as possible while also being trustworthy – and that means creating accurate measures of the uncertainty associated with our estimates, which in this case are predictions at unmeasured locations.” 

“Our AI approach is set within a Bayesian statistical framework which allows us to quantify these uncertainties and provide a range of uncertainty measures, including credible intervals, exceedance probabilities, and other more bespoke products that will feed directly into decision-making processes. Crucially, all this is provided whilst harnessing any available information more effectively than traditional approaches allow – which you can see coming through in the detail of the map.”
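As a rough sketch of how such uncertainty products can be computed once posterior predictive samples are available (the samples and threshold below are synthetic placeholders, not values from the study):

```python
# Minimal sketch: turning posterior predictive samples into the uncertainty
# products mentioned above (credible intervals and exceedance probabilities).
import numpy as np

# samples: posterior predictive draws at each unmeasured location,
# shape (n_draws, n_locations); here purely synthetic for illustration
samples = np.random.lognormal(mean=1.0, sigma=0.5, size=(2000, 5))

# 95% credible interval at each location
lower, upper = np.percentile(samples, [2.5, 97.5], axis=0)

# Exceedance probability: chance the true value exceeds a decision threshold
threshold = 5.0   # hypothetical threshold, e.g. a regulatory limit
p_exceed = (samples > threshold).mean(axis=0)

print("95% credible intervals:", list(zip(lower.round(2), upper.round(2))))
print("P(value > {:.1f}):".format(threshold), p_exceed.round(3))
```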

The new approach was demonstrated using stream sediment calcium concentration observations from the British Geological Survey’s Geochemical Baseline Survey of the Environment (G-BASE) project.  

The distribution of calcium in the environment, which has standalone importance for its impact on soil fertility, is controlled primarily by geology – with different rock types containing different proportions of calcium – but also by hydrological processes at the surface.  

Calcium, therefore, provides a challenging use case for the AI approach, which must learn to recognize and utilize features relating to both bedrock geology (e.g. differing terrain textures, breaks of slope) and surface hydrology (e.g. drainage, river channels). 

The method, the scientists say, has produced a spectacularly detailed and accurate map which, despite depicting just one element – calcium – reveals the geology of Britain in arguably a new level of detail, thanks to the information-extracting power of the new AI approach. The team believes that by combining the research skills, expertise, and data resources of its partners – the University of Exeter, Met Office, and British Geological Survey – this work marks a new dawn for environmental mapping practices in the age of AI.

Professor Gavin Shaddick, from the University of Exeter, added: “This is a fantastic example of Environmental Intelligence, the use of AI to help solve challenges in environmental science. This work is an exemplar in integrating technical knowledge of AI and machine learning with expertise in geosciences to produce a new methodology that directly addresses crucial questions in mapping environmental information. The resulting methodological advances could be used to produce detailed maps of a wide variety of environmental hazards and have the potential to provide a rich source of information for both scientists and decision-makers.” 

Garry Baker, Interim Chief Digital Officer at the British Geological Survey, added: “This paper is an excellent demonstration of how environmental information such as the BGS geochemical database can be re-assessed via new approaches (AI spatial interpolation). It exemplifies the benefits of ongoing environmental research and how this can draw upon the extensive datasets available to everyone through the National Geoscience Data Centre and the wider NERC and UKRI data repositories.” 

Dr. Kirstine Dale, the Met Office’s Principal Fellow for Data Science and Co-Director of the Joint Centre for Excellence in Environmental Intelligence, commented on the value of this work: “This is an important example of how data science has the potential to transform our understanding of the natural world. Critically, it highlights what can be achieved by working across disciplines, in this case bringing together mathematicians, weather specialists and computer scientists to enrich our knowledge of the natural world in a way that no single discipline can.” 

Duke, Birmingham’s research into sugar-based plastics shows the shape of things to come

Researchers at the University of Birmingham, U.K., and Duke University, U.S., have described the exceptional strength and toughness of novel polymers made from sugars, and the chemistry underpinning their characteristics, in a study published in Angewandte Chemie.

The study examined two polymers based on isosorbide and isomannide that were produced by a recently disclosed method that uses sugars as a starting point for synthesis. Both polymers have superior properties to conventional thermoplastic elastomers, and in addition, are degradable and mechanically recyclable.

In a key finding, the researchers discovered that the polymer made from isosorbide displayed superior elastic recovery and toughness, which was shown to be a result of the stereochemistry of the sugar groups in the materials.

Using supercomputer simulations and other experimental techniques, the researchers showed that the difference in elastic recovery results from the way that the sugar stereochemistry directs the network of hydrogen bonds between and within the long-chain molecules.

The researchers concluded that both polymers have high optical clarity, exceptional mechanical strength, and extensibility, but the isosorbide-based polymer has superior toughness, due to its higher elasticity.

Professor Andrew Dove, from Birmingham’s School of Chemistry, who led the research team, commented: “The long-term impacts of modern polymers on the environment are a significant concern. Isosorbide is a renewable feedstock alternative to petroleum derivatives for commercial polymer production. It is derived from plants, and, as one of the top 20 biomass-sourced molecules, is available at a scale that is consistent with commercial production of bioplastics.”

Duke University professor Dr. Matthew Becker said: “Most bio-sourced plastics have lacked the mechanical properties needed to compete in commercial applications and lose nearly all of their mechanical properties when reprocessed. The materials outlined in this paper change that paradigm”.

A joint patent application has been filed by the University of Birmingham Enterprise and Duke University, covering both the polymers and the method of making them. The researchers are now looking for industrial partners who are interested in licensing the technology.

CXL-based memory disaggregation technology opens up a new direction for big data solution frameworks

A KAIST team’s compute express link (CXL) solution, developed by the Computer Architecture and Memory Systems Laboratory, provides new insights into memory disaggregation and ensures direct access and high-performance capabilities.

A team from the Computer Architecture and Memory Systems Laboratory (CAMEL) at KAIST presented a new compute express link (CXL) solution whose directly accessible, high-performance memory disaggregation opens new directions for big data memory processing. Professor Myoungsoo Jung said the team’s technology significantly improves performance compared to existing remote direct memory access (RDMA)-based memory disaggregation.

CXL is a new dynamic multi-protocol built on peripheral component interconnect express (PCIe), designed to utilize memory devices and accelerators efficiently. Many enterprise data centers and memory vendors are paying attention to it as the next-generation multi-protocol for the era of big data.  

Emerging big data applications such as machine learning, graph analytics, and in-memory databases require large memory capacities. However, scaling out the memory capacity via a prior memory interface like double data rate (DDR) is limited by the number of central processing units (CPUs) and memory controllers. Memory disaggregation, which allows a host to connect to another host’s memory or to separate memory nodes, has therefore emerged.

RDMA is a way for a host to directly access another host’s memory via InfiniBand, the network protocol commonly used in data centers. Nowadays, most existing memory disaggregation technologies employ RDMA to obtain a large memory capacity: a host can share another host’s memory by transferring data between local and remote memory.

Figure 1: A comparison of the architectures of CAMEL’s CXL solution and conventional RDMA-based memory disaggregation.

Although RDMA-based memory disaggregation provides a large memory capacity to a host, two critical problems exist. First, scaling out the memory still requires adding an extra CPU. Since passive memory, such as dynamic random-access memory (DRAM), cannot operate by itself, it must be controlled by a CPU. Second, redundant data copies and software fabric interventions in RDMA-based memory disaggregation cause longer access latency. For example, remote memory access latency in RDMA-based memory disaggregation is multiple orders of magnitude longer than that of local memory access.

To address these issues, Professor Jung’s team developed a CXL-based memory disaggregation framework, including CXL-enabled customized CPUs, CXL devices, CXL switches, and CXL-aware operating system modules. The team’s CXL device is a purely passive and directly accessible memory node that contains multiple DRAM dual inline memory modules (DIMMs) and a CXL memory controller. Since the CXL memory controller manages the memory in the CXL device, a host can utilize the memory node without processor or software intervention. The team’s CXL switch enables scaling out a host’s memory capacity by hierarchically connecting multiple CXL devices to it, allowing hundreds of devices or more to be attached. Atop the switches and devices, the team’s CXL-enabled operating system removes the redundant data copies and protocol conversions exhibited by conventional RDMA, which can significantly decrease access latency to the memory nodes.
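As a back-of-the-envelope illustration of that hierarchical scaling (with hypothetical fan-outs and DIMM sizes, not figures from the study), even a two-level hierarchy of modest switches exposes hundreds of devices to a single host:

```python
# Back-of-the-envelope sketch with invented numbers (not figures from the study):
# how hierarchical CXL switching could scale a host's reachable memory capacity.
def disaggregated_capacity(fanout_per_switch, switch_levels, dimms_per_device,
                           gib_per_dimm):
    devices = fanout_per_switch ** switch_levels   # devices reachable via the hierarchy
    return devices, devices * dimms_per_device * gib_per_dimm

# e.g. 16-port switches in a two-level hierarchy, 8 x 32 GiB DIMMs per CXL device
devices, total_gib = disaggregated_capacity(16, 2, 8, 32)
print(f"{devices} devices, {total_gib} GiB of directly accessible memory")
```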

In a test comparing 64 B (cache-line) data loads from memory pooling devices, CXL-based memory disaggregation showed 8.2 times higher data load performance than RDMA-based memory disaggregation, and even performance similar to local DRAM. In the team’s evaluations with a big data benchmark, such as a machine learning-based test, the CXL-based memory disaggregation technology also showed up to 3.7 times higher performance than prior RDMA-based memory disaggregation technologies.

Figure 2: A performance comparison between CAMEL’s CXL solution and prior RDMA-based disaggregation.

“Escaping from the conventional RDMA-based memory disaggregation, our CXL-based memory disaggregation framework can provide high scalability and performance for diverse datacenters and cloud service infrastructures,” said Professor Jung. He went on to stress, “Our CXL-based memory disaggregation research will bring about a new paradigm for memory solutions that will lead the era of big data.” 

Ebrahimi simulates the dispersal strategies that drive marine microbial diversity

The study suggests ecological trade-offs between growth and death allow marine microbes with different dispersal strategies to coexist on small particles in the ocean.

Trade-offs between the benefit of colonizing new particles and the risk of being wiped out by predators allow diverse populations of marine microbes to exist together, shows a study published today in eLife.

The findings help explain how a vast array of diverse bacteria and microbes coexist on floating particle rafts in oceans.

Microbial foraging in patchy environments, where resources are fragmented into particles, plays a key role in natural environments. In oceans and freshwater systems, bacteria and microbes can interact with particle surfaces in different ways: some only colonize them for short periods, while others form long-lived, stable colonies.

Scientists have long puzzled over the greater-than-expected diversity of microscopic creatures in oceans, a phenomenon called the 'plankton paradox'. While researchers have begun to understand the factors that support so many different types of plankton, many questions remain about the more plentiful ocean microbes that live on floating particles. 

"We wanted to study the role that dispersal strategies play in the successful coexistence of different microbes living on the same set of particles," says co-first author Ali Ebrahimi, who completed the study while he was a postdoctoral fellow at the Ralph M. Parsons Laboratory for Environmental Science and Engineering, Massachusetts Institute of Technology (MIT), Cambridge, US.

Ebrahimi and the team used mathematical modeling and computer simulations to test how different dispersal strategies may help marine microbes exist together in this way. They found that navigating the trade-off between growth and survival in different ways can allow microbes to thrive together.

Their model showed that organisms that stay put on a single particle for longer have more opportunities to multiply. However, they face a higher risk of being wiped out by a virus or other predator capable of engulfing whole particles. On the other hand, microbes that more frequently hop between particles have less opportunity to multiply, but also have a lower risk of facing a mass mortality event. The success of one strategy over another may depend on differing environmental conditions.

"When the particle supply is high, microbes that hop rapidly between them will have a greater chance of survival," explains co-first author Akshit Goyal, Physics of Living Systems Fellow at the MIT Department of Physics. "But when particles are harder to come by, the bacteria that stay put will have an advantage."

Additionally, the team found that coexistence can remain stable in the face of changing environmental conditions, such as algal blooms that increase the supply of particles (favoring growth) and changing numbers of predators (favoring mortality). Together, these differing factors significantly increase the likelihood that populations with diverse dispersal strategies can live together.

"Our work focused on the link between dispersal and mortality in the ocean, but there’s plenty more going on in these environments," Goyal concludes. "Future research could provide important new insights on how environmental changes might impact these minuscule communities and, in turn, their wider marine ecosystem."

Co-first authors Ebrahimi and Goyal worked on this study alongside senior author Otto Cordero, Associate Professor at the MIT Department of Civil and Environmental Engineering.

Groundbreaking earthquake discovery: Risk models overlook an important element

Earthquakes themselves affect the movement of Earth's tectonic plates, which in turn could impact future earthquakes, according to new research from the University of Copenhagen. This new knowledge should be incorporated in computer models used to gauge earthquake risk, according to the researchers behind the study.

Like a gigantic puzzle, Earth’s tectonic plates divide the surface of our planet into larger and smaller pieces. These pieces are in constant motion due to the fluid-like part of Earth’s mantle, upon which they slowly sail. These movements regularly trigger earthquakes, some of which can devastate cities and cost thousands of lives. In 1999, the strongest European earthquake in recent years struck the town of İzmit, Turkey – taking the lives of 17,000 of its residents.

Among researchers and earthquake experts, it is well accepted that earthquakes are caused by a one-way mechanism: as plates move against one another, energy is slowly accrued along plate margins, and then suddenly released via earthquakes. This happens time and again over decades- or century-long intervals, in a constant stick-slip motion.
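That stick-slip cycle can be caricatured with a classic spring-slider toy model (a generic illustration, not the study's method, with arbitrary loading and friction values): stress builds steadily until it exceeds the fault's strength, then drops abruptly in a slip event.

```python
# Toy spring-slider sketch of the stick-slip cycle described above: stress builds
# steadily as the plates move, then drops suddenly when a slip (earthquake) occurs.
# Loading rate and friction thresholds are arbitrary illustrative values.

def stick_slip(loading_rate=1.0, static_threshold=100.0, dynamic_level=20.0, steps=500):
    stress, drops = 0.0, []
    for _ in range(steps):
        stress += loading_rate                    # slow accumulation along the margin
        if stress >= static_threshold:            # friction can no longer hold the fault
            drops.append(stress - dynamic_level)  # stress drop released as an earthquake
            stress = dynamic_level
        else:
            drops.append(0.0)
    return drops

events = [drop for drop in stick_slip() if drop > 0]
print(f"{len(events)} slip events, each releasing ~{events[0]:.0f} stress units")
```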

But in a new study, researchers from the Geology Section at the University of Copenhagen’s Department of Geosciences and Natural Resource Management demonstrate that the behavior of tectonic plates can change following an earthquake.

Using extensive GPS data and analysis of the 1999 İzmit earthquake, the researchers have been able to conclude that the Anatolian continental plate that Turkey sits upon has changed direction since the earthquake. Data also show that this influenced the frequency of quakes around Turkey after 1999.

"It appears that the link between plate motion - earthquake occurrence is not a one-way street. Earthquakes themselves feedback, as they can cause plates to move differently afterward," explains the study’s lead author, postdoc Juan Martin De Blas, who adds:

"As the plate movements change, it somewhat affects the pattern of the later earthquakes. If a tectonic plate shifts direction or moves at a different rate than before, this potentially impacts onto the seismicity of its margins with neighboring plates."

Quake models can be improved

According to the researchers, the new findings provide a clear basis for reevaluating the risk models that interpret data gathered from the monitoring of tectonic plate movements. These data are used to assess the risk of future earthquakes in terms of probability, somewhat like a weather forecast.  

"An important aspect of these models is that they operate under the assumption that plate movements remain constant. With this study, we can see that this isn’t the case. Therefore, the models can now be further evolved so they take the feedback mechanism that occurs following an earthquake into account, where plates shift direction and speed," says Associate Professor Giampiero Iaffaldano, the study’s co-author.

The assumption that plate movements are constant has largely been a "necessary" assumption, according to the researchers, because monitoring plate motions over a few years was once impossible. But with the advent of geodesy in the geosciences and the extensive, ever-growing use of GPS devices over the last 20 years, we can track plate motion changes over year-long periods.
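As a simple illustration of how such changes can be detected (synthetic numbers only, not the study's GPS records), one can fit a station's horizontal velocity separately before and after an event and compare speed and azimuth:

```python
# Illustrative sketch with synthetic data: fitting a plate's horizontal velocity
# before and after an earthquake from daily GPS positions, to see whether its
# speed or direction of motion has changed.
import numpy as np

def fit_velocity(days, east_mm, north_mm):
    """Least-squares linear velocity (mm/yr) and azimuth from position time series."""
    ve = np.polyfit(days / 365.25, east_mm, 1)[0]
    vn = np.polyfit(days / 365.25, north_mm, 1)[0]
    azimuth = np.degrees(np.arctan2(ve, vn)) % 360.0   # clockwise from north
    return np.hypot(ve, vn), azimuth

np.random.seed(0)
days = np.arange(0.0, 2000.0)
t_event = 1000.0
pre = days < t_event
# Piecewise-linear synthetic displacements (mm): velocity changes at the event
east = np.where(pre, -23.0 * days, -23.0 * t_event - 24.5 * (days - t_event)) / 365.25
north = np.where(pre, -10.0 * days, -10.0 * t_event - 7.0 * (days - t_event)) / 365.25
east += np.random.normal(0.0, 2.0, days.size)    # measurement noise
north += np.random.normal(0.0, 2.0, days.size)

for label, mask in (("before", pre), ("after", ~pre)):
    speed, az = fit_velocity(days[mask], east[mask], north[mask])
    print(f"{label} the event: {speed:.1f} mm/yr at azimuth {az:.0f} deg")
```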

Could make us better at assessing risk

How tectonic plates are monitored varies greatly from place to place. Often GPS transmitters are positioned preferentially near the edges of a tectonic plate. This allows public agencies and researchers to track the movement of plate boundaries. But according to the researchers, we can also benefit from even more GPS devices continuously monitoring plate interiors, away from their margins.

"Plate boundaries undergo constant deformation and poorly represent the movement of plates as a whole. Therefore, GPS data from monitors positioned farther away from the plate boundaries should be used to a much greater degree. This can better inform us weather plates are changing motion and how, and provide information useful for assessing the risk of future events somewhere other than the known hot-spots," says Giampiero Iaffaldano.

The researchers point out that their study is limited to the Anatolian continental plate, as the İzmit earthquake is one of the few events for which a combination of sufficient seismic and GPS data is available. However, they expect that the picture is the same for other tectonic plates around the planet.