Dr Anshuman Bhardwaj (left), Baoling Gui (centre) and Dr Lydia Sam

AI breakthrough at the University of Aberdeen to enhance global environmental monitoring

A pioneering team at the University of Aberdeen in Scotland has introduced an AI model named SAGRNet, which could transform environmental and agricultural monitoring worldwide.

Developed by Dr. Lydia Sam, Dr. Anshuman Bhardwaj, and their colleagues, SAGRNet—short for Sampling and Attention-based Graph Convolutional Residual Network—utilizes deep learning to map land cover from satellite imagery with greater accuracy and efficiency. Instead of analyzing individual pixels, the model examines entire landscape features, such as forests, fields, and waterways, providing deeper insights into vegetation types and their contexts.
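
To make the idea concrete, here is a minimal, hypothetical sketch of an object-based graph classifier that combines attention, graph convolution, and residual connections in the spirit described above. It is not the published SAGRNet code: the layer sizes, the segment-graph construction, and the class count are assumptions for illustration, and the neighborhood-sampling step is omitted for brevity.

```python
# Minimal, hypothetical sketch of an object-based land-cover classifier combining
# attention, graph convolution and residual connections. It is NOT the published
# SAGRNet code: layer sizes, graph construction and class count are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttnResBlock(nn.Module):
    """One graph-convolution block with a simple attention gate and a residual skip."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.attn = nn.Linear(2 * dim, 1)  # scores each edge from its two endpoints

    def forward(self, x, adj):
        # x: (N, dim) features, one node per image segment (field, wood, water body, ...)
        # adj: (N, N) binary adjacency between spatially neighboring segments
        n = x.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                           x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.attn(pairs).squeeze(-1).masked_fill(adj == 0, float("-inf"))
        weights = torch.nan_to_num(torch.softmax(scores, dim=-1))  # isolated nodes -> zeros
        aggregated = weights @ x                    # attention-weighted neighbor average
        return x + F.relu(self.linear(aggregated))  # residual connection

class SegmentLandCoverNet(nn.Module):
    def __init__(self, in_dim=16, hidden=64, n_classes=8):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.blocks = nn.ModuleList([GraphAttnResBlock(hidden) for _ in range(3)])
        self.head = nn.Linear(hidden, n_classes)    # e.g. forest, cropland, water, built-up
    def forward(self, x, adj):
        h = F.relu(self.embed(x))
        for block in self.blocks:
            h = block(h, adj)
        return self.head(h)                         # per-segment land-cover logits

# Toy usage: 120 segments with 16 spectral/texture features and a random neighborhood graph.
segments = torch.randn(120, 16)
adjacency = (torch.rand(120, 120) > 0.9).float()
print(SegmentLandCoverNet()(segments, adjacency).argmax(dim=-1)[:10])
```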

Initially trained on the diverse terrains of northeast Scotland, encompassing habitats ranging from farmland to urban areas, SAGRNet has demonstrated impressive adaptability. It has performed well in various regions worldwide, including Guangzhou (China), Durban (South Africa), Sydney (Australia), New York City (USA), and Porto Alegre (Brazil). The team has made the model open-source so that decision-makers, researchers, and conservationists can implement it in their local contexts.

“Our system of deep learning algorithms can instantly and accurately recognize different types of land cover, vegetation, or crops in an area,” said Dr. Sam.

Significantly, the model provides detailed information while minimizing computational demands—an essential advantage for timely monitoring of climate impacts, such as wildfires, floods, and droughts.

Dr. Bhardwaj emphasized its versatility: “It can also monitor crop growth, facilitating more accurate harvest predictions and helping make better-informed decisions about land-use sustainability.”

PhD researcher Baoling Gui pointed out how seamlessly SAGRNet integrates into operational pipelines, benefiting various applications from ecological studies to national land-use surveys.

This research, published in the prestigious ISPRS Journal of Photogrammetry and Remote Sensing, was supported by the UK’s BBSRC International Institutional Award, with contributions from international collaborators in Spain and Germany.

Woolpert wins a $250M NOAA contract supporting shoreline mapping

The firm will offer various geospatial services to support nautical charts, maritime navigation, coastal resource management, and the definition of territorial boundaries.

Woolpert has been chosen by the National Oceanic and Atmospheric Administration (NOAA) to provide shoreline mapping services under a $250 million, multiple-award, indefinite-delivery, indefinite-quantity contract in support of the National Geodetic Survey and its Coastal Mapping Program.

The Coastal Mapping Program aims to survey approximately 95,000 miles of U.S. coastline, producing a seamless digital database of accurate and consistent national shoreline data for use in nautical charting, maritime navigation, coastal resource management, and defining territorial boundaries.

Under this contract, Woolpert will deliver a range of geospatial services, including the following (see the illustrative sketch after this list):

- Topographic and bathymetric lidar acquisition
- Aerial imagery collection
- Ground control surveys
- Tide and water level monitoring
- Geographic cell shoreline cleanup
- Data editing, attribution, and compilation
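
To give a flavor of how such deliverables fit together, the sketch below is a rough, hypothetical illustration, not NOAA's or Woolpert's production workflow: it traces a shoreline as the contour where a lidar-derived elevation grid crosses a Mean High Water (MHW) level estimated from water-level observations. The synthetic DEM, the fabricated tide record, and the percentile shortcut for the datum are all invented for the example; a real MHW datum is computed over a 19-year tidal epoch.

```python
# Rough, hypothetical sketch (not NOAA's or Woolpert's production workflow) of tracing
# a shoreline as the contour where a lidar elevation grid crosses a tidal datum.
import numpy as np
from skimage import measure  # pip install scikit-image

# Synthetic topobathy lidar DEM (meters above a vertical datum): a gently sloping shore.
x = np.linspace(0, 500, 250)                       # easting, meters
y = np.linspace(0, 500, 250)                       # northing, meters
xx, yy = np.meshgrid(x, y)
dem = 0.01 * xx - 2.0 + 0.3 * np.sin(yy / 40.0)    # submerged on the west, dry on the east

# Fabricated hourly water levels from a nearby tide station; a real Mean High Water
# datum would be derived from a 19-year tidal epoch, not a simple percentile.
water_levels = np.random.normal(loc=0.0, scale=0.6, size=24 * 30)
mhw = np.percentile(water_levels, 90)

# The approximate shoreline is the DEM contour at the MHW elevation.
contours = measure.find_contours(dem, level=mhw)   # list of (row, col) polylines
shoreline = max(contours, key=len)
print(f"MHW datum: {mhw:.2f} m, shoreline polyline with {len(shoreline)} vertices")
```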

“The work being done under this contract is essential for ensuring the accuracy of U.S. shoreline data, which supports everything from safe navigation to disaster response,” said Jeff Lovin, Woolpert’s Government Solutions Market Director. “We’re proud to continue our long-standing partnership with NOAA and the National Geodetic Survey and to contribute to this vital work that safeguards coastal communities and enhances national resilience.”

The contract is currently underway.

This artist's concept offers a new view of the Milky Way, featuring two major arms extending from a thick central bar, with two less prominent minor arms located in between.

USC's COZMIC Milky-Way simulation: Powerful supercomputer, or just hype?

The University of Southern California has boldly claimed that its new “COZMIC” simulations, dubbed “Milky-Way twins,” leverage a powerful supercomputer to uncover hidden truths about dark matter. The announcement paints a picture of groundbreaking progress, but how much of this is genuine scientific advance, and how much is hype?

The PR spin

COZMIC stands for Cosmological Zoom-in Simulations with Initial Conditions beyond Cold Dark Matter. The USC-led team, headed by cosmologist Vera Gluscevic with co-researchers Ethan Nadler and Andrew Benson, celebrates the system’s ability to model galaxies with novel physics and to test various dark-matter behaviors.

Yet the coverage is heavy with speculation. The project has run three dark-matter scenarios—billiard-ball interactions, mixed-sector models, and self-interacting models—and compared them to real telescope observations. The team hopes that matching their “twin” galaxies to what we observe will narrow down what dark matter actually is. However, critics argue that the leap from simulation to cosmological truth remains vast and perhaps unbridgeable.
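
As a toy illustration of that logic, and emphatically not the COZMIC pipeline itself, the sketch below scores three hypothetical dark-matter scenarios by how many small satellite galaxies each would leave around a Milky Way-like host; the mass-function slope, normalization, cutoff masses, and the “observed” count are all invented for demonstration.

```python
# Toy illustration (NOT the COZMIC pipeline): each dark-matter model suppresses small
# halos differently, so each predicts a different satellite count to compare with
# observations. Slope, normalization, cutoffs and the "observed" count are invented.
import numpy as np

def predicted_satellites(m_cut, m_min=1e8, m_max=1e11, slope=-1.9, norm=8.5e8):
    """Integrate a power-law subhalo mass function with an exponential suppression
    below m_cut (solar masses), mimicking interactions or free streaming."""
    masses = np.logspace(np.log10(m_min), np.log10(m_max), 2000)
    dn_dm = norm * masses**slope * np.exp(-(m_cut / masses) ** 2)
    return np.sum(0.5 * (dn_dm[1:] + dn_dm[:-1]) * np.diff(masses))  # trapezoid rule

models = {                      # hypothetical suppression scales, in solar masses
    "cold dark matter": 1e6,    # effectively no suppression
    "self-interacting": 3e8,
    "billiard-ball (interacting with normal matter)": 5e8,
}
observed = 55                   # invented number of observed satellites above a survey limit

for name, m_cut in models.items():
    n_pred = predicted_satellites(m_cut)
    verdict = "compatible" if abs(n_pred - observed) < 20 else "in tension"
    print(f"{name}: predicts ~{n_pred:.0f} satellites ({verdict} with {observed} observed)")
```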

Which supercomputer?

Despite promotional material crediting "supercomputers" in the plural, the USC story leaves key details conspicuously vague. What is the specific machine running COZMIC, and how powerful is it? There are no hardware specifications, operating speeds, or funding sources—just vague assertions of computational prowess. 

This mystery stands in sharp contrast to other astrophysics simulations. For example, the Eris simulation—the first detailed virtual Milky Way—was run over eight months on NASA’s Pleiades supercomputer, totaling nearly 1.4 million processor hours. Other projects used Titan, Summit, or even Frontier, explicitly naming their machines and citing performance data. The recently launched Frontier is the world’s fastest, capable of a throughput of 10^18 FLOPS (one exaFLOPS).
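
A quick back-of-envelope check, using only the figures quoted above plus the simplifying assumptions of 30-day months and round-the-clock running, shows how much these disclosed numbers actually tell a reader:

```python
# Back-of-envelope arithmetic from the publicly quoted figures; the 30-day month and
# continuous-running assumptions are simplifications for illustration.
processor_hours = 1.4e6                       # Eris total, as reported
wall_clock_hours = 8 * 30 * 24                # roughly eight months of running
print(f"Eris implies ~{processor_hours / wall_clock_hours:.0f} processors busy around the clock")

frontier_flops = 1e18                         # Frontier's quoted throughput
print(f"Frontier: {frontier_flops:.0e} FLOPS = {frontier_flops / 1e15:.0f} petaFLOPS (1 exaFLOPS)")
```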

By omitting similar details, USC may be aiming to conceal any comparative weakness in its setup. The claims sound plausible—but without specifying whether COZMIC runs on a Pleiades-class, Summit-class, or some lesser-known cluster, there’s no basis to evaluate the simulation’s actual might. 

The cautionary tale

High-profile simulations have set the standard for transparency here. In 2014, Swiss and US teams used Piz Daint and Titan to simulate a 51-billion-particle Milky Way, achieving 24.8 petaflops on GPUs—and they were open about hardware, performance, and time-to-solution. The USC coverage lacks such metrics.
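
From those two disclosed figures alone, a reader can compute a crude gauge of how much sustained compute each particle received, which is precisely the kind of sanity check the USC coverage does not permit:

```python
# Crude gauge derived only from the two figures reported for the 2014 run.
particles = 51e9                 # 51-billion-particle Milky Way model
sustained_flops = 24.8e15        # 24.8 petaflops sustained on GPUs
print(f"~{sustained_flops / particles:,.0f} floating-point operations per particle per second")
```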

Absent such numbers, the claim of having "programmed a supercomputer to create very detailed cosmological simulations" raises the question—how detailed, and how quickly? The milestone sounds impressive, but at what cost in time, compute hours, or raw scale?

Final analysis

COZMIC certainly represents an intriguing approach: it is rare to see dark matter–normal matter collisions simulated at this level of detail. That much is novel. Still, branding alone doesn’t make a simulation revolutionary. If USC cannot substantiate how its "supercomputer" compares with the titans of the field (Frontier, Summit, Pleiades), its claims risk being little more than buzz-bait.

Until researchers provide details such as FLOPS, runtime, memory footprint, or node count, their "twins" remain shadow puppets dancing in the dark, and claims of shedding new light on dark matter may amount to little more than smoke.