New low-memory fluid & heat flow algo could turbocharge supercomputing simulations

If there's one thing supercomputing enthusiasts appreciate as much as raw processing power, it's achieving more with less memory. Researchers at Tokyo Metropolitan University have made a promising advance by reimagining the Lattice-Boltzmann Method (LBM), a workhorse of computational fluid dynamics, in a way that significantly reduces memory requirements while maintaining accuracy and stability.

Why This Matters to Supercomputing

Fluid and heat flow simulations, from modeling airflow over aircraft wings to predicting climate patterns and even simulating blood flow in biomedical research, are classic examples of problems that push supercomputers hard. These simulations partition a physical domain into millions, or even billions, of grid points. At each point, the Lattice-Boltzmann Method (LBM) tracks the distributions of particle “parcels” as they move and collide across the grid to compute phenomena such as velocity and temperature.
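For readers new to the method, here is a minimal sketch of one LBM time step (BGK collision followed by streaming) on a D2Q9 lattice; the grid size, relaxation time, and periodic boundaries are illustrative assumptions, not details of the Tokyo Metropolitan University formulation.

```python
import numpy as np

# Minimal D2Q9 lattice-Boltzmann step: BGK collision, then streaming.
# All parameters are illustrative assumptions, not the paper's setup.
NX, NY, TAU = 64, 64, 0.8
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])        # discrete velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                  # lattice weights

f = np.ones((9, NX, NY)) * w[:, None, None]               # distributions at rest

def equilibrium(rho, ux, uy):
    """Standard second-order equilibrium distribution (c_s^2 = 1/3)."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f):
    # Macroscopic moments: density and velocity at every grid point.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # Collision: relax each population toward local equilibrium.
    f = f - (f - equilibrium(rho, ux, uy)) / TAU
    # Streaming: shift each population along its lattice velocity (periodic box).
    for i in range(9):
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    return f

for _ in range(100):
    f = step(f)
```

Even this toy version keeps nine populations per grid point, and production 3D codes typically hold 19 or 27 plus extra buffers and gradient fields; that per-point storage is exactly the cost the new formulation attacks.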
 
However, there's a catch: storing additional information at each grid point significantly increases memory usage. In large-scale HPC environments, memory is a valuable resource. Memory costs, both financially and in terms of energy, can restrict the scale, resolution, and duration of simulations. This is where the new algorithm truly excels.

The Innovation: Low-Memory LBM

Associate Professor Toshio Tagawa and doctoral student Yoshitaka Mochizuki redesigned the LBM around a clever trick: they added small "optional moments" that implicitly encode gradient information, in essence telling the algorithm how values change from point to point without storing that data explicitly. Because gradients are built into the formulation, the simulation doesn’t have to keep huge sets of intermediate variables in memory.
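The release doesn't spell out the exact formulation, but the underlying principle is well established in LBM theory: gradient-like quantities can be recovered locally from moments of the distributions already stored at each point, rather than by finite differences over neighboring points. With a standard BGK collision operator, for instance, the strain-rate tensor satisfies approximately

$$ S_{\alpha\beta} \;\approx\; -\frac{1}{2\,\rho\,c_s^{2}\,\tau}\sum_i c_{i\alpha}\,c_{i\beta}\left(f_i - f_i^{\mathrm{eq}}\right), $$

so velocity-gradient information rides along with the populations the solver already holds. Whether the paper's "optional moments" take exactly this form is our assumption; the general point is that moment-based formulations can carry gradient information without extra stored fields.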
 
In tests across multiple fluid and heat flow benchmarks, the new method slashed memory usage by roughly 50% in certain scenarios, which is enormous in HPC terms. If a simulation previously just barely fit into a supercomputer’s memory, this approach could make it comfortably fit or allow it to run at a much higher resolution.
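A back-of-the-envelope estimate shows why that matters; the grid size, lattice, buffering scheme, and precision below are illustrative assumptions, not numbers from the study.

```python
# Rough memory estimate for a 3D LBM run in double precision.
# All numbers are illustrative assumptions, not taken from the paper.
grid_points = 1024 ** 3        # ~1.07 billion lattice nodes
populations = 19               # D3Q19 lattice: 19 distributions per node
copies = 2                     # typical ping-pong buffers for streaming
bytes_per_value = 8            # double precision

total_gb = grid_points * populations * copies * bytes_per_value / 1e9
print(f"distribution storage: ~{total_gb:.0f} GB")           # ~326 GB
print(f"after a ~50% reduction: ~{total_gb * 0.5:.0f} GB")   # ~163 GB
```

On a node with a few hundred gigabytes of memory, that difference decides whether the run fits at all, or whether the same footprint buys a noticeably finer grid.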

Why Supercomputers Will Care

Supercomputers are extremely parallel machines, but they still contend with finite memory per node, per core, and per job. Reducing memory footprints can:
  • Enable larger, more detailed simulations without needing bigger machines.
  • Lower energy use (memory operations are a significant power draw).
  • Improve scalability by reducing communication overhead tied to data shuffling.
In practice, this algorithmic advance can influence how developers optimize code for today's systems and for next-generation HPC architectures. Memory constraints are a major bottleneck in fluid and heat simulations, particularly in 3D, multiphysics, or long-duration runs, and the new low-memory LBM addresses that bottleneck directly.

Wide Relevance Beyond Academia

The innovation isn’t just for textbook problems. LBM and similar lattice-based schemes are used in:
  • Aerospace and automotive design
  • Weather and climate modeling
  • Porous media flow (e.g., oil reservoir simulation)
  • Biomedical simulations (e.g., capillary networks)
Any domain where fluid or heat behavior matters at scale and where HPC resources are stretched thin could benefit.

Computational Insight, Not Just Raw Power

It’s always tempting in supercomputing to chase more cores, more flops, or bigger clusters. But advances like this remind us that algorithmic ingenuity often beats brute force. Memory efficiency isn’t just a nice-to-have; it’s a multiplier that lets existing systems do far more with what they already have.
 
As future systems come online, low-memory formulations like this will be an important part of the HPC playbook. They help supercomputers push into previously unreachable problem sizes, enabling science, engineering, and industry to ask bigger questions and get answers faster.
 
In the world of supercomputing, sometimes less memory used means more science done, and that’s worth celebrating. 

New Chinese study claims Earth’s deep water mystery solved, but the supercomputing link isn’t so clear

A high-profile Science paper and its accompanying press release from the Chinese Academy of Sciences make a bold claim: Earth may have locked away massive amounts of water deep in its mantle during the first tens of millions of years after formation, thanks to the mineral bridgmanite acting like a high-pressure “water container.” This challenges decades of textbook assumptions about a desiccated early mantle and a late veneer of water delivery.
 
Before we award supercomputing a gold star, it's worth asking whether the computational methods and modeling involved are sufficiently transparent, and whether this impactful result truly relies on the world's fastest machines or primarily on traditional laboratory work and theoretical calculations.
 
The press release highlights experimental setups, such as diamond anvil cells with laser heating and ultra-high-pressure imaging tools, used to replicate conditions at depths greater than 660 km below Earth’s surface. These are physical simulators of pressure and temperature, not digital ones.
 
However, nearly every Science paper on deep Earth processes, including this one, relies on numerical models to extrapolate limited lab data to planetary scales.
 
Here's the catch: The team doesn't clearly explain whether or how high-performance computing (HPC) or supercomputing simulations were used in their work. The Science article's abstract and press summary focus on experiments and analytical techniques for measuring microscopic water content, and Science's DOI listing confirms the publication details but makes no explicit mention of HPC resources.
 
That's somewhat unusual given the planetary scale of the claim. Typically, understanding Earth's thermal and compositional evolution necessitates 3D models of mantle convection, phase transitions under extreme pressure, and coupled multiphysics, all tasks well-suited for supercomputers. However, the publicly available summary of this study doesn't detail such modeling or reference specific HPC centers or computational frameworks.
 
In Earth and planetary sciences, HPC integration is standard when scaling lab data to global processes. Papers in other fields explicitly link computational results to HPC resources and acknowledge the supercomputers used for simulations. However, this study's press release and the public Science abstract fail to cite any computing facility or discuss numerical simulation workflows, and Science's repository entry doesn't list HPC under methods. This creates a notable gap between the claim of "vast water storage deep in the mantle" and the evidence that supercomputing played a central role in the research.
 
This differs from genuinely computation-heavy studies in Science and similar journals, where authors provide details on the supercomputers used, parallel codes, numerical solvers, grid resolution, and performance scaling metrics: the essential elements of reproducible computational science. The lack of this technical information here raises questions about how significantly this result advanced beyond analytical theory and high-pressure experiments.
 
It's possible the researchers used computational models behind the scenes, perhaps calibrating thermodynamic or kinetic models with HPC simulations, but such details may be hidden in supplementary materials not yet public. However, based on the press materials and abstract, readers are left to guess whether this research is experimental geochemistry complemented by modest numerical modeling, or truly HPC-driven, planetary-scale computing.
 
Why does this matter? The frontier of Earth science increasingly intersects with supercomputing. Massive datasets from seismic tomography, petrological phase diagrams, and global geodynamic simulations all depend on HPC to turn physics into planetary predictions. In an era when AI and HPC are reshaping scientific discovery and expanding what counts as novel, highly cited work across fields (see broader analyses of HPC’s impact on research output and novelty), transparency about computational methods is not an optional extra; it is central to confidence in the claims.
 
The idea that Earth's deep interior may have sequestered oceans' worth of water is fascinating and potentially paradigm-shifting. However, until the computational underpinnings are clearly described, it is premature to celebrate this as a triumph of supercomputing. In an age when HPC is often the invisible engine of scientific breakthroughs, clarity about its role is not a luxury; it's a requirement for trust.
 
Supercomputing may yet prove essential for modeling Earth's earliest conditions at scale, but based on the available summaries of this work, we are not there yet.

Siemens + GlobalFoundries forge an AI manufacturing alliance

In a move with far-reaching implications beyond the factory floor, Siemens and GlobalFoundries (GF) have announced a strategic collaboration to introduce AI-driven automation, electrification, and predictive systems into the core of semiconductor manufacturing. The two companies are not simply modernizing fabs; they are quietly bolstering the global supercomputing ecosystem.
 
At first glance, this partnership appears to be a manufacturing-efficiency initiative, but a broader view reveals its significance. Supercomputer systems and AI accelerators all rely on a consistent, secure, and energy-efficient supply of chips. By solidifying the semiconductor pipeline, Siemens and GF are effectively strengthening the foundation of the world's most advanced computing systems.

AI as the New Fabrication Foreman

The press release highlights a future in which fabs run on AI-enabled software, real-time sensor feedback, robotics, and predictive maintenance, all stitched together by Siemens’ automation platform and GF’s process technology. With fabs operating around the clock, even minor equipment downtime can ripple through global supply chains. AI aims to reduce that fragility.
 
More uptime in fabs → more chips → more GPUs, CPUs, accelerators, and controllers → more fuel for supercomputing and AI growth.
 
Supercomputing centers have already hit power walls, supply constraints, and long lead times for specialized silicon. This collaboration is a direct attempt to widen that bottleneck.

Why This Matters for Supercomputing’s Future

Supercomputing lives and dies by chip availability. Every flagship machine, from the US exascale systems Frontier and Aurora to Europe’s LUMI, relies on stable, high-yield semiconductor production. If the chip pipeline hiccups, innovation slows.
 
This deal addresses that in several ways:
• AI-driven fab automation increases yield reliability
Better yield means more chips meeting the precision tolerances required for HPC and AI workloads. Variability is the enemy of HPC systems; AI helps reduce it.
• Predictive maintenance trims delays
Supercomputing depends on multi-year roadmaps for upgrades. A fab outage in Dresden or New York can throw global timelines off. AI gives visibility and predictability where none existed before.
• Energy-efficient manufacturing aligns with HPC sustainability goals
AI-guided energy systems in fabs lower production costs and carbon footprint. HPC centers, already under pressure to be sustainable, benefit indirectly from chips with lower embedded energy.
• Localized, secure semiconductor supply is critical for national supercomputing leadership
 
With GF operating major fabs in the US and Europe, Siemens and GF are reinforcing regional chip independence. That matters deeply as nations compete for AI leadership.
 
When Siemens says “our economy runs on Silicon, one wafer at a time,” it’s not hyperbole.
 
Supercomputers, AI clusters, edge devices, and industrial robotics all trace back to a single origin: wafers moving through a fab.

A Subtle but Powerful Shift: Physical AI

A fascinating element in the release is the mention of “physical AI chips at scale.” GF (bolstered by MIPS and RISC-V IP) is positioning itself to build chips that bring intelligence into real-world devices, robots, vehicles, and industrial systems.
 
Supercomputing has long been the brain.
Physical AI becomes the nervous system.
 
This partnership helps marry both worlds:
• Supercomputing trains the models
• Fab automation fabricates the chips
• Physical AI devices deploy them back into the real world
 
It’s a flywheel.

The Optimistic View: A Supply Chain That Can Finally Keep Up

We’re entering an era where demand for high-performance chips is accelerating faster than at any time in history, from sovereign AI to quantum research to edge robotics. The Siemens + GF announcement is not just corporate news; it’s infrastructure news.
 
If AI is the engine of tomorrow’s economy, semiconductor supply is the fuel.
 
And supercomputing? It’s the ignition system.
 
By tightening the AI-driven loop between design, automation, fabrication, and deployment, this collaboration represents a confident step toward a future where:
• fabs don’t fail,
• supply chains don’t crack,
• supercomputers don’t stall,
• and innovation doesn’t wait.
 
In a world hungry for compute, Siemens and GF are quietly strengthening the ground beneath the entire AI revolution.

Uranus and Neptune: what lies beneath? A supercomputer unlocks the mystery

In a breakthrough that feels like cosmic archaeology, a research team at the University of Zurich (UZH) used cutting-edge supercomputer modeling to challenge decades-old assumptions about the icy giants of our solar system, Uranus and Neptune. The results? These planets might not be ice giants at all, but something far more surprising: worlds rich in rock and mystery.

Rebooting the “Ice Giant” Idea

For generations, scientists categorized Uranus and Neptune as “ice giants”, planets composed predominantly of water, ammonia, and methane ices beneath their atmospheres. Yet the new UZH study uses a hybrid modelling method that’s deliberately agnostic. Instead of assuming what’s inside, the computer starts with randomized internal profiles, then iterates until those profiles match known properties like gravity and density.
 
The outcome: both planets could just as plausibly be “rock-rich” as “ice-rich,” or somewhere in between. For Uranus, the models returned rock-to-water mass ratios ranging from ~0.04 up to nearly 4, a huge spread. Neptune’s best fits suggest similar ambiguity.
 
Modeling a planet’s interior is not trivial. You must simulate pressure, temperature, chemical composition, mass distribution, and gravitational moments, all under conditions that can’t be replicated on Earth. The UZH team used iterative algorithms that try countless plausible internal configurations, discard what doesn’t fit, and refine what does. This kind of brute force analysis demands enormous computational power.
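A drastically simplified sketch of that accept-and-refine idea might look like the following; the incompressible three-component mixture, the component densities, and the 2% tolerance are illustrative assumptions, far cruder than the UZH team's actual equation-of-state and gravity-moment fitting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed bulk properties of Uranus (well-established values).
M_URANUS = 8.68e25                                   # kg
R_URANUS = 2.536e7                                   # m
RHO_OBS = M_URANUS / (4/3 * np.pi * R_URANUS**3)     # ~1270 kg/m^3

# Crude incompressible component densities -- illustrative assumptions only.
RHO_ROCK, RHO_WATER, RHO_GAS = 3500.0, 1200.0, 250.0  # kg/m^3

accepted = []
for _ in range(200_000):
    # Random composition by mass fraction: rock, water/ice, H/He gas.
    f_rock, f_water, f_gas = rng.dirichlet([1.0, 1.0, 1.0])
    # Bulk density of a fully mixed, incompressible planet of that composition.
    rho_model = 1.0 / (f_rock/RHO_ROCK + f_water/RHO_WATER + f_gas/RHO_GAS)
    # Keep only compositions reproducing the observed density within 2%.
    if abs(rho_model - RHO_OBS) / RHO_OBS < 0.02:
        accepted.append(f_rock / f_water)

lo, hi = np.percentile(accepted, [5, 95])
print(f"{len(accepted)} accepted compositions")
print(f"rock-to-water mass ratio spans roughly {lo:.2f} to {hi:.1f}")
```

Even this toy model admits a wide range of rock-to-water ratios consistent with the same bulk density (and, being so crude, exaggerates the spread); pinning that degeneracy down with real equations of state, gravitational moments, and layered structures is what demands the supercomputer.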
 
It’s the same principle that has transformed cosmology and planetary science: powerful hardware + clever software = a time machine for the universe’s hidden corners, much as supercomputers once helped researchers at UZH and beyond simulate star and galaxy formation.

A New View of Magnetic Fields and Planetary Identity

One of the strangest facts about Uranus and Neptune is their bizarre, multipolar magnetic fields, nothing like Earth’s simple dipole. The new models offer a possible explanation: if the interiors are layered differently than assumed, with rock-rich regions and various convective zones, magnetic field generation could behave differently than previously thought.
 
In other words, these planets’ internal structure, not just external appearance, may be far more diverse than “ice giants.” They might be “rock giants,” “mixed giants,” or something in between. A few caveats are worth keeping in mind:
  • The conclusions don’t assert Uranus or Neptune must be rock-heavy. Rather, they show that with current data, multiple internal configurations remain plausible.
  • As researchers note, uncertainties remain, especially regarding how materials behave under extreme interior pressures and temperatures, and exact composition gradients.
  • Definitive answers likely require future space missions to Uranus or Neptune to collect more observational data.
Why does this matter? Because this work flips our assumptions. For decades, Uranus and Neptune comfortably sat in a neat box: “ice giants.” Now, thanks to computational bravery, that box is dissolving. These planets, mysterious, blue, distant, become laboratories for possibility.
 
Think about it: we can’t drill into them or bring back samples. We can’t replicate their interiors on Earth. But with enough processing power and creative algorithms, we can peer inside. It’s a reminder: sometimes the most radical discoveries aren’t from better telescopes, but from better simulations. The cosmos doesn’t always give answers, so we build them ourselves.
 
This kind of research, agile, computational, wide-open to possibilities, invites a rethinking of how we categorize planets, how we understand planet formation, and even how we search for exoplanets.
 
Maybe the neat categories we learned in school are just placeholders until someone powers up a supercomputer. And then: boom, the universe gets more weird, more beautiful.
 
AI rides into the arena: how code is reimagining rodeo

Edge-AI meets spurs, saddles

Palantir Technologies, together with TWG AI and backed by Teton Ridge, is launching a bold experiment that brings real-time artificial intelligence and computer vision into the dusty, data-scarce world of rodeo. This week, they announced a partnership with NVIDIA to deploy “edge AI” systems at live rodeo venues.
 
Instead of streaming raw video to the cloud and waiting, the new system processes footage on-site, using NVIDIA’s Holoscan infrastructure and powerful RTX PRO 6000 Blackwell GPUs, enabling lightning-fast analytics.
 
In effect, the rodeo arena becomes a living lab. Horses, riders, bulls, all tracked not just by human judges or spectators, but by silicon and algorithms. 
 

From past scores to live feedback

The project isn’t starting from scratch. Teton Ridge and its partners aggregated years of historical data: ride times, animal performance, rider stats across different rodeo disciplines. Using Palantir’s Foundry and Artificial Intelligence Platform (AIP), the collaborators trained computer-vision models to interpret each ride, detecting motion, evaluating interactions between human and animal athletes, and exposing biomechanical and performance insights invisible to the naked eye.
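The production pipeline isn't public, so treat the following as a generic illustration of on-site motion analysis using off-the-shelf tools; the file name, threshold, and the simple frame-differencing approach are assumptions for the sketch, not Palantir's or Teton Ridge's actual models.

```python
import cv2
import numpy as np

# Illustrative only: a crude per-frame "motion energy" measure on ride footage.
# The real system uses trained computer-vision models on edge GPUs, not
# simple frame differencing; this just shows the shape of such an analysis.
cap = cv2.VideoCapture("ride_clip.mp4")   # hypothetical local recording
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read ride_clip.mp4")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

motion_energy = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Differences between consecutive frames highlight moving pixels
    # (rider and animal) against the mostly static arena background.
    diff = cv2.absdiff(gray, prev_gray)
    moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    motion_energy.append(np.count_nonzero(moving) / moving.size)
    prev_gray = gray

cap.release()
print(f"frames analysed: {len(motion_energy)}, peak motion fraction: {max(motion_energy):.3f}")
```

Spikes in a signal like this line up with bucks, spins, and dismounts; the real models go much further, classifying events and estimating poses, but the raw material is the same per-frame pixel data, processed right at the arena.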
 
What this means: Instead of relying solely on judges or memory, rodeo organizers and coaches can tap into a data-rich backend that dissects every gallop, pivot, and buck in near real-time.
 
According to reporting in Fast Company, this isn’t just a novelty; it reflects a broader push by Teton Ridge to transform one of America’s oldest sports through AI.

Why AI may change the rodeo game

  • Performance optimization for cowboys and cowgirls
    Algorithms can quantify subtle motion: body posture, reaction time, animal-rider dynamics. Over time, aggregated analytics might highlight training blind spots or ideal riding techniques.
  • Animal-athlete safety & welfare
    Tracking animal behavior and movement could help veterinarians, trainers, and event organizers detect stress or injury risks, giving rodeo a more humane, data-backed side.
  • Enhanced fan experience & broadcasting
    Real-time stats and analytics, delivered as overlays during broadcasts or arena jumbo-trons, bring rodeo into the 21st century of immersive sports viewing. This aligns with broader trends of AI reshaping sports media and fan engagement.
  • Validating tradition through modern measurement
    Rodeo has always thrived on tradition, intuition, and human judgment. Now AI introduces a layer of objective data, a way to measure excellence and performance beyond lore and anecdotes.

Challenges and questions, because reality isn't a clean Git push

This isn't Hollywood. Implementing real-time AI in the rodeo world will bump against real constraints:
  • Edge-AI hardware in dusty, unpredictable arenas may face connectivity, maintenance, or latency challenges. Running GPUs under such conditions isn’t trivial.
  • Data fairness and animal welfare: Introducing analytics could shift the spotlight. Will riders, trainers, or animals be pressured into chasing numbers rather than safety or tradition?
  • Cultural pushback: Rodeo is deeply rooted in heritage; adding algorithmic scrutiny might ruffle feathers among purists who believe in gut, instinct, and human judgment over code.

The long view, rodeo 2.0

Those dusty arenas, once reserved for tradition and adrenaline, may soon host another kind of spectacle, data + performance + insight. With Palantir, TWG AI, and NVIDIA building the infrastructure, and Teton Ridge investing in the vision, rodeo could evolve into one of the first “high-tech frontier sports.”
 
Watching a cowboy ride? Soon, you might also see real-time stats on posture, force, animal response, and analytics dashboards powered by edge AI. Maybe one day you’ll even watch a chatbot co-commentate a bull ride.
 
It’s wild west meets high-tech. And just like that, the future of rodeo looks like code rode sidesaddle with tradition.