Larissa Verona measures greenhouse gas emissions from the soil using the LI-COR instrument. Photo: Juliana Di Beo
Featured

Machine learning meets the Cerrado: Mapping the hidden carbon power of Brazil’s wetlands

O'NEAL March 12, 2026, 10:30 am
The Brazilian Cerrado, often overshadowed by the Amazon rainforest, is emerging as a new frontier for computational climate science. According to researchers at the Cary Institute of Ecosystem Studies, wetlands scattered across this vast tropical savanna may act as unexpectedly powerful carbon reservoirs, yet quantifying their role in the global carbon cycle is proving to be a complex data problem increasingly addressed with machine learning and large-scale environmental modeling.
 
For machine learning professionals working with environmental data, the research highlights a fascinating challenge: detecting and modeling carbon storage in ecosystems that are spatially heterogeneous, seasonally dynamic, and poorly mapped.

The Cerrado’s Hidden Carbon System

The Cerrado biome covers roughly two million square kilometers across central Brazil and is widely recognized as one of the most biodiverse savanna ecosystems on Earth. But ecologically, its most important features may lie underground.
 
Researchers often describe the Cerrado as an “underground forest”, where plants store a significant portion of their biomass in deep root networks rather than aboveground trunks and canopies.
 
Seasonal wetlands within this landscape, such as veredas, peatlands, and marshy valley systems, play an outsized role in carbon storage. These ecosystems accumulate organic carbon in waterlogged soils where decomposition occurs slowly, allowing carbon to build up over centuries.
 
Some estimates suggest that Cerrado peatlands may hold around 13% of the region’s soil carbon while covering less than 1% of its surface area, illustrating the concentration of carbon within these specialized environments.
 
Yet despite their importance, the spatial distribution and total carbon stocks of these wetlands remain poorly constrained.

A Data Problem Well Suited to Machine Learning

This is where computational methods come in.
 
To understand how Cerrado wetlands influence regional and global carbon cycles, researchers must integrate several challenging datasets simultaneously:
  • Satellite imagery capturing seasonal hydrology and vegetation structure
  • Soil carbon measurements from sparse field sampling campaigns
  • Topographic and hydrological models predicting water flow and wetland formation
  • Climate data describing temperature, rainfall, and evapotranspiration dynamics
Machine learning models, particularly ensemble regression and geospatial deep learning frameworks, are increasingly used to interpolate carbon density across unsampled regions and to identify wetland systems that conventional maps miss.
 
Such models often operate on multi-terabyte remote-sensing datasets, requiring HPC pipelines capable of processing satellite imagery, generating spatial features, and training predictive models across millions of grid cells.
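The interpolation step described above can be caricatured with a small bootstrap-aggregated ("bagged") regression ensemble. Everything here is an invented stand-in: the three features play the role of covariates such as elevation, a wetness index, and NDVI, and the coefficients are arbitrary; a production pipeline would use far richer models (gradient-boosted trees, geospatial deep networks) over millions of grid cells.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: per-plot features (stand-ins for elevation,
# wetness index, NDVI) and soil-carbon density from sparse field plots.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 40.0 * X[:, 1] + 10.0 * X[:, 2] + rng.normal(0.0, 2.0, size=200)

def fit_bagged_linear(X, y, n_models=25):
    """Fit an ensemble of linear regressors on bootstrap resamples."""
    n = len(y)
    X1 = np.column_stack([np.ones(n), X])       # add intercept column
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)        # bootstrap resample
        coef, *_ = np.linalg.lstsq(X1[idx], y[idx], rcond=None)
        coefs.append(coef)
    return np.array(coefs)

def predict(coefs, X):
    """Ensemble mean prediction plus member spread (a rough uncertainty)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    preds = X1 @ coefs.T                        # one column per ensemble member
    return preds.mean(axis=1), preds.std(axis=1)

coefs = fit_bagged_linear(X, y)
grid = rng.uniform(0.0, 1.0, size=(5, 3))       # unsampled grid cells
carbon_mean, carbon_spread = predict(coefs, grid)
```

The per-cell spread across ensemble members is one reason bagged models are popular in this setting: the map comes with a crude uncertainty estimate for exactly the unsampled regions where extrapolation is riskiest.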
 
For ML engineers, this workflow closely resembles large-scale geospatial modeling tasks seen in climate simulation or Earth-observation analytics.

Mato Grosso do Sul: A Case Study in Rapid Landscape Change

The state of Mato Grosso do Sul provides a particularly revealing example of the computational challenge.
 
Cerrado landscapes dominate much of the state, covering more than 60% of its territory, and include a mosaic of savannas, grasslands, forests, and wetland fields that feed major river basins connected to the Pantanal.
 
However, the region has undergone rapid land-use change in recent decades. Between 1985 and 2022, more than 4.6 million hectares of native vegetation were cleared, largely replaced by cattle pasture and soybean agriculture.
 
For environmental modelers, these changes introduce a moving target. Carbon storage potential must be estimated not just for intact ecosystems but also for landscapes undergoing continuous transformation.
 
Machine learning models, therefore, need to account for temporal dynamics, incorporating satellite time-series data and land-use classification models that track vegetation shifts over decades.
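As a toy illustration of that temporal dimension, the sketch below fits a per-pixel linear trend to a synthetic vegetation-index time series and flags pixels whose long-term slope indicates vegetation loss. The series, the conversion date, and the slope threshold are all invented; real land-use classifiers work over full Landsat-style archives with far more sophisticated change-detection methods.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1985, 2023)

# Two hypothetical per-pixel vegetation-index series: one stable,
# one converted to pasture (steady decline after 2000).
stable = 0.7 + rng.normal(0.0, 0.02, size=years.size)
converted = np.where(years < 2000, 0.7, 0.7 - 0.02 * (years - 2000))
converted = converted + rng.normal(0.0, 0.02, size=years.size)

def trend(series, years):
    """Least-squares slope of the index over time (index units per year)."""
    slope, _intercept = np.polyfit(years, series, 1)
    return slope

def loss_flags(pixels, years, threshold=-0.005):
    """Flag pixels whose long-term trend suggests vegetation loss."""
    return [trend(p, years) < threshold for p in pixels]

flags = loss_flags([stable, converted], years)   # expect [False, True]
```

Scaled to a real state, the same per-pixel operation runs over billions of pixels and decades of imagery, which is where the HPC pipelines mentioned earlier come in.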

Building the Next Generation of Ecological Models

Researchers associated with the Cary Institute of Ecosystem Studies, including ecologist Amy Zanne, are exploring how plant traits, microbial processes, and wetland hydrology influence carbon storage and greenhouse gas fluxes across the Cerrado.
 
For the machine learning community, these questions translate into a broader computational challenge:
 
How can models capture interactions among vegetation traits, soil microbiology, hydrology, and climate across continental-scale landscapes?
 
Traditional ecological models struggle with the dimensionality of these systems. Data-driven approaches, combining remote sensing, statistical inference, and ML, offer a pathway toward scalable predictions.

Curiosity for the ML Community

From an algorithmic standpoint, the Cerrado wetlands project illustrates an emerging domain sometimes called computational ecosystem science.
 
It sits at the intersection of:
  • Geospatial machine learning
  • Earth-system modeling
  • Large-scale environmental data assimilation
For machine learning engineers, the appeal is clear. Few real-world datasets are as complex, or as consequential, as those describing Earth’s carbon cycle.
 
And in the Cerrado’s wetlands, the stakes may be surprisingly high. Beneath the grasses and shrubs of Brazil’s savanna lies a vast, partially hidden carbon reservoir whose behavior could influence climate models for decades to come.
 
Understanding it will require more than field biology alone.
 
It will require algorithms capable of learning from the landscape itself.
Featured

Palantir, NVIDIA propose a ‘sovereign AI operating system,’ a new blueprint for AI supercomputing infrastructure

Deck March 12, 2026, 9:30 am
With the rapid expansion of large-scale AI infrastructure, Palantir Technologies and NVIDIA have launched a joint initiative that is attracting significant interest from the high-performance computing sector. Their new Sovereign AI Operating System Reference Architecture is a comprehensive blueprint designed to help organizations create production-ready AI data centers that can operate advanced models while preserving stringent control over data and infrastructure.
 
Initially, this approach mirrors familiar high-performance computing (HPC) reference architectures, offering a validated stack that brings together compute, networking, storage, orchestration, and application frameworks. However, the system aims to go further by establishing what its developers call a true AI infrastructure operating system, one that unifies the stack from GPU hardware all the way to model deployment and enterprise workflows.
 
For supercomputing engineers accustomed to designing clusters for scientific simulation or AI training, the announcement raises a curious question: are we witnessing the emergence of an “AI operating system” layer for entire data centers?

A Turnkey AI Datacenter Stack

The new architecture, referred to as AIOS-RA, is designed as a turnkey platform that encompasses everything from hardware procurement to the development of production AI applications. It builds on NVIDIA’s enterprise reference architectures and has been validated to run Palantir’s full software ecosystem, including its data-integration and AI platforms.
 
Key components of the stack include:
  • GPU-accelerated compute nodes based on NVIDIA’s Blackwell-class systems
  • High-bandwidth networking, including Spectrum-X Ethernet fabrics
  • CUDA-X libraries and NVIDIA AI Enterprise software for optimized AI workloads
  • Palantir’s AIP, Foundry, Apollo, Rubix, and AIP Hub platforms for data integration, orchestration, and AI deployment
At the software layer, the system runs on a Kubernetes-based orchestration substrate, coordinating distributed services and enabling AI models to interact directly with enterprise data sources.
 
From an HPC perspective, the architecture resembles a hybrid of traditional supercomputing clusters and modern cloud platforms, combining tightly coupled GPU resources with containerized service orchestration and model-driven applications.

Why “Sovereign” AI?

The most distinctive feature of the architecture is its emphasis on data sovereignty.
 
Organizations deploying large-scale AI increasingly face regulatory and security constraints that require data and models to remain within specific jurisdictions or controlled infrastructure. The proposed platform allows enterprises or governments to deploy AI systems on domestic or on-premises infrastructure while maintaining full control over data, models, and applications.
 
This requirement has become especially prominent in sectors such as defense, healthcare, and finance, where data residency and regulatory compliance often prohibit the use of global public-cloud AI services.
 
In this sense, the architecture reflects a broader industry shift: AI workloads are no longer just software pipelines; they are strategic infrastructure assets.

HPC Convergence With Enterprise AI

For HPC practitioners, the proposed architecture highlights a growing convergence between AI factories and traditional supercomputing systems.
 
Several design principles familiar to HPC engineers appear throughout the architecture:
  • GPU-dense compute nodes optimized for AI training and inference
  • High-bandwidth networking fabrics designed to minimize latency across distributed workloads
  • Parallel data pipelines capable of feeding large models efficiently
  • Unified orchestration layers that coordinate heterogeneous workloads across clusters
However, unlike many scientific HPC environments, the stack is designed to support continuous operational AI workloads rather than batch simulation jobs.
 
In other words, the architecture treats the data center not as a machine that occasionally runs AI jobs, but as a persistent AI system operating at production scale.

Curiosity for the Supercomputing Community

The idea of an “AI operating system” for infrastructure invites both curiosity and debate among HPC engineers.
 
Traditional supercomputing environments already integrate complex software layers: schedulers, parallel file systems, MPI stacks, container runtimes, and resource managers. The new architecture attempts to unify many of these concepts within a platform designed specifically for AI-native workloads and enterprise data integration.
 
Whether this approach represents a genuine architectural shift or simply a rebranding of established HPC design patterns adapted for AI remains an open question.
 
What is clear, however, is that AI workloads are pushing infrastructure design toward tighter integration across hardware, orchestration, and application layers. As models grow larger and data pipelines more complex, the boundaries between cloud architecture, enterprise software, and supercomputing are rapidly dissolving.
 
For HPC practitioners observing the transformation of AI infrastructure, the partnership between Palantir and NVIDIA represents more than just a new product. It signals a larger shift, an exploration of how supercomputing architectures might become the standard foundation for production-scale AI systems.
Featured

Mapping a sea of light: Astronomers use supercomputers to probe the early Universe, but how much is signal vs. interpretation?

Tyler O'Neal, Staff Editor March 10, 2026, 4:00 am
Astronomers at the McDonald Observatory, collaborating with the Hobby-Eberly Telescope Dark Energy Experiment, have created what they call the most detailed 3D map to date of faint hydrogen emissions from the early universe. This achievement is powered by massive data processing and supercomputing, highlighting both the opportunities and interpretive hurdles of computational cosmology.
 
This research seeks to map Lyman-alpha emission, the light given off when hydrogen atoms are energized by star formation, during a pivotal era about 9 to 11 billion years ago. The findings provide insight into how galaxies and intergalactic gas developed in this crucial period of cosmic history.
 
For HPC engineers and computational scientists, however, the project poses a key question: how much of the resulting map is based on direct observation, and how much is inferred through large-scale data processing?

Turning Half a Petabyte Into a Map

The raw data behind the project is formidable. Observations collected by the Hobby-Eberly Telescope produced more than 600 million spectra across a wide region of the sky. To process the data, researchers used supercomputing resources at the Texas Advanced Computing Center.
 
In total, roughly half a petabyte of observational data was sifted through using custom software pipelines designed to extract faint spectral signatures from the background noise.
 
This is a familiar workflow for HPC users: large-scale reduction pipelines, statistical signal extraction, and multi-stage modeling designed to convert massive observational datasets into structured scientific products.
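Extracting faint lines from noisy spectra is classically done with a matched filter: cross-correlate the data with the expected line profile so that signal adds coherently across pixels while noise averages down. The one-dimensional sketch below uses invented numbers and is not the HETDEX pipeline; the injected line peaks at only about three times the per-pixel noise, below a naive per-pixel 5-sigma cut, yet the filtered output localizes it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical noisy spectrum: unit-variance noise plus one Gaussian
# emission line at pixel 230 whose peak is only ~3x the noise level.
n = 500
x = np.arange(n)
spectrum = rng.normal(0.0, 1.0, size=n)
spectrum += 3.0 * np.exp(-0.5 * ((x - 230) / 3.0) ** 2)

# Matched filter: correlate with the expected line profile; signal adds
# coherently across pixels while the noise averages down.
kx = np.arange(-10, 11)
kernel = np.exp(-0.5 * (kx / 3.0) ** 2)
kernel /= np.sqrt(np.sum(kernel ** 2))      # unit-norm template
score = np.correlate(spectrum, kernel, mode="same")

best = int(np.argmax(score))                # filter output peaks near pixel 230
```

Run over 600 million spectra, even a step this simple becomes an HPC problem of embarrassing parallelism and serious I/O.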
 
But the map itself was not built by directly detecting every galaxy.
 
Instead, the team relied on a statistical technique known as line intensity mapping.

A Blurred Picture of the Cosmos

Traditional galaxy surveys attempt to catalog individual objects one by one. Intensity mapping takes a different approach: it measures the combined brightness of specific spectral lines across large regions of space, effectively capturing aggregate emission from both bright and faint sources simultaneously.
 
One scientist involved in the project compared the method to looking through a “smudged plane window”: the image is blurrier, but it reveals light from many otherwise invisible sources.
 
For HPC practitioners, this analogy should sound familiar. Intensity mapping is less about high-resolution object detection and more about statistical reconstruction from incomplete data, similar to techniques used in tomography, cosmological simulations, and signal processing.
 
In this case, the reconstruction relied on a computational assumption: regions near known bright galaxies are likely to host additional faint galaxies and intergalactic gas, due to the gravitational clustering of matter. The positions of bright galaxies were therefore used as anchors to infer the locations of surrounding faint structures.
 
This strategy dramatically increases the amount of usable information extracted from observational surveys, but it also introduces a layer of modeling.
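A minimal numerical caricature of the difference: per-object detection thresholds each pixel, so sub-threshold emitters vanish, while intensity mapping averages brightness over coarse voxels so that many faint sources contribute in aggregate. All numbers below are invented, and real intensity mapping works in three dimensions with careful noise modeling.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D sightline: 10 bright, catalogued galaxies plus many
# faint emitters clustered around them (the gravitational-clustering prior).
n_bins = 1000
signal = np.zeros(n_bins)
bright = rng.choice(n_bins, size=10, replace=False)
signal[bright] += 50.0
for p in bright:                              # faint clustered neighbours
    neigh = np.clip(p + rng.integers(-20, 21, size=30), 0, n_bins - 1)
    signal[neigh] += 1.0
observed = signal + rng.normal(0.0, 5.0, size=n_bins)

# Per-object detection: only bins above 5 sigma survive, so essentially
# just the 10 bright galaxies; the faint clustered emission is lost.
detected = observed > 25.0

# Intensity mapping: mean brightness in coarse voxels of 50 bins each;
# the aggregate faint emission is retained, at the cost of resolution.
coarse = observed.reshape(-1, 50).mean(axis=1)
```

The coarse map trades resolution for sensitivity to the summed emission, which is why the result reads as a blurred “sea of light” rather than a catalog of objects.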

When Data Analysis Becomes Astrophysics

The resulting map reveals what researchers describe as a “sea of light” filling the spaces between previously cataloged galaxies. The signal suggests the presence of numerous faint galaxies and diffuse hydrogen gas that traditional surveys have missed.
 
From a computational standpoint, the achievement is significant. Processing hundreds of millions of spectra and reconstructing a three-dimensional cosmic structure from partial signals requires large-scale parallel workflows, sophisticated statistical filtering, and high-throughput data handling.
 
But the skeptical HPC user might ask an uncomfortable question:

If the map relies partly on statistical inference and clustering assumptions, how much of the detected structure is truly observed, and how much is model-dependent reconstruction?

The researchers themselves acknowledge this tension. The new map, they say, can now serve as a reference point for testing cosmological simulations of the same epoch.

In other words, the observational data may help validate or challenge theoretical models that attempt to describe the early universe.

HPC’s Expanding Role in Observational Cosmology

Regardless of interpretive debates, the project highlights a growing trend in astronomy: observational science is becoming increasingly computational.
 
Large surveys such as HETDEX collect far more data than traditional analysis pipelines can process manually. Instead, researchers rely on supercomputers to filter, correlate, and model enormous datasets.
 
In practice, this means that discoveries increasingly emerge not just from telescopes, but from the intersection of instrumentation, algorithms, and HPC infrastructure.
 
For supercomputing engineers, this evolution presents both opportunity and responsibility. As astronomical datasets continue to scale toward the exabyte era, the distinction between data analysis and theoretical modeling will become increasingly intertwined.
 
And sometimes, the most important question is not simply what the universe is telling us, but how much of that message is being interpreted through the lens of our algorithms.
© 2001 - 2026 SuperComputingOnline.com, LLC.