Modeling life at the microscopic scale: A computational breakthrough in oxygen transport

Within the human body, the delivery of oxygen occurs at the microscale, where the disciplines of physics, chemistry, and biology intersect. Elucidating the mechanisms by which oxygen is transported through the bloodstream, diffuses out of erythrocytes, and is utilized by surrounding tissues has remained a formidable challenge, primarily due to the inherent complexity of these processes.
 
Recently, a study published in the International Journal of Heat and Mass Transfer introduced a significant advancement: a fully three-dimensional computational model that simultaneously simulates oxygen transport alongside the motion and deformation of individual red blood cells (RBCs). This development constitutes an important step toward addressing one of the most complex multiphysics challenges in biomedical science.

A problem too complex to see directly

At the scale of capillaries, oxygen transport is governed by a delicate interplay of mechanisms:
  • Fluid flow through narrow vessels
  • Diffusion across multiple regions (cells, plasma, tissue)
  • Chemical reactions involving hemoglobin
  • Continuous deformation and interaction of red blood cells
Traditional models simplified this system, often ignoring individual cells or treating vessels as static tubes. But such approximations fall short of capturing how oxygen is actually delivered in living tissue.
 
The new study breaks from that tradition by embracing the full complexity.

A unified multiphysics framework

The researchers developed a diffuse interface model that unifies multiple physical processes into a single computational framework. Instead of treating boundaries, like the surface of a red blood cell, as sharp discontinuities, the method smooths them into a continuous transition region. This allows the governing equations to be solved seamlessly across the entire domain.
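
To make the idea concrete, here is a minimal sketch (not the authors' implementation) of a diffuse interface: the signed distance to a cell membrane is mapped through a smooth tanh profile, and material properties are blended across the resulting transition region. The grid dimensions and diffusivity values below are illustrative assumptions.

```python
import numpy as np

def diffuse_indicator(signed_distance, epsilon):
    """Smooth indicator: ~1 inside the cell, ~0 outside, with a tanh
    transition of width ~epsilon replacing the sharp membrane."""
    return 0.5 * (1.0 - np.tanh(signed_distance / (2.0 * epsilon)))

# Example: a spherical "cell" of radius 4 um on a fixed Cartesian grid.
x, y, z = np.meshgrid(*[np.linspace(-8e-6, 8e-6, 64)] * 3, indexing="ij")
phi = diffuse_indicator(np.sqrt(x**2 + y**2 + z**2) - 4e-6, epsilon=1e-6)

# Properties such as the O2 diffusivity then vary smoothly across the
# interface, so one set of equations covers the whole domain.
D_cell, D_plasma = 9.5e-10, 2.2e-9          # illustrative values, m^2/s
D = phi * D_cell + (1.0 - phi) * D_plasma   # mixture-style blending
```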
 
At its core, the model simultaneously solves:
  • The incompressible Navier–Stokes equations for blood flow
  • Advection–diffusion–reaction equations for oxygen transport
  • Fluid–structure interaction governing deformable red blood cells
Red blood cells are modeled as elastic membranes interacting with the surrounding fluid via an immersed boundary method, enabling them to move, deform, and respond dynamically to their environment.
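
In schematic form, the coupled system named above reads as follows. These are standard textbook statements of the equations, not the paper's exact formulation; here f denotes the immersed-boundary force exerted by the cell membranes and R(C) the hemoglobin binding/release term.

```latex
% Incompressible Navier–Stokes for the blood flow, with membrane force f
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
      + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
\qquad \nabla\cdot\mathbf{u} = 0

% Advection–diffusion–reaction for the oxygen concentration C
\frac{\partial C}{\partial t} + \mathbf{u}\cdot\nabla C
  = \nabla\cdot\bigl(D\,\nabla C\bigr) + R(C)
```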
 
The result is a fully coupled 3D simulation where flow, chemistry, and cellular mechanics evolve together.

Capturing the behavior of living blood

One of the most striking outcomes of the study is the ability to simulate how red blood cells actively regulate oxygen delivery.
 
Rather than acting as passive carriers, the simulations suggest that RBCs:
  • Adjust oxygen release based on local tissue demand.
  • Interact with one another in ways that influence flow distribution.
  • Contribute to maintaining relatively uniform oxygenation across tissue.
This emergent behavior, arising purely from physics and chemistry, offers new insight into how the body maintains balance at the microscale.

The computational challenge beneath the surface

While the study does not explicitly reference supercomputers or high-performance computing (HPC) systems, the scale and sophistication of the model place it firmly within the realm of HPC-class workloads.
 
The simulation involves:
  • Three-dimensional, time-dependent PDEs
  • Moving and deforming interfaces
  • High-order numerical schemes (including fifth-order advection methods; a minimal sketch follows this list)
  • Coupled nonlinear physics across multiple domains
These are precisely the kinds of problems that increasingly drive demand for advanced computing infrastructure.
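
To give a feel for the fifth-order advection item above, the sketch below implements the classic Jiang–Shu WENO5 reconstruction in one dimension, the most widely used fifth-order advection scheme; the paper's exact method may differ.

```python
import numpy as np

def weno5_reconstruct(v):
    """Fifth-order WENO reconstruction of the left-biased interface value
    v_{i+1/2} from cell averages v (1D, periodic). Jiang-Shu weights."""
    vm2, vm1, v0, vp1, vp2 = (np.roll(v, s) for s in (2, 1, 0, -1, -2))

    # Three third-order candidate stencils
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6
    p2 = (2*v0 + 5*vp1 - vp2) / 6

    # Smoothness indicators penalize stencils that cross steep gradients
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2

    eps = 1e-6
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)
```

In smooth regions the nonlinear weights revert to the optimal (1/10, 6/10, 3/10) combination, recovering fifth-order accuracy, while near steep gradients they suppress the oscillatory stencils.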
 
Interestingly, rather than relying solely on brute-force computational power, the researchers focused on algorithmic efficiency:
  • A mixture formulation eliminates the need for complex interface reconstruction.
  • Fixed Cartesian grids simplify geometry handling.
  • Carefully chosen numerical schemes balance accuracy and cost.
This approach reflects a broader trend in computational science: pairing smarter algorithms with scalable hardware to tackle previously intractable problems.

A glimpse of scalable biomedical simulation

The implications extend beyond this specific study. By demonstrating a practical way to simulate oxygen transport with deformable cells in 3D, the work lays a foundation for:
  • Patient-specific microcirculation modeling
  • Disease studies involving impaired oxygen delivery
  • Integration with larger-scale physiological simulations
As these models grow in size and realism, they are likely to transition naturally onto parallel and high-performance computing platforms, where their full potential can be realized.

Looking ahead

This research highlights a subtle but important shift. The frontier of biomedical modeling is no longer defined solely by biological insight, but increasingly by computational capability.
 
Even when supercomputers are not explicitly named, they linger in the background, implicit in the complexity of the equations, the dimensionality of the models, and the ambition of the questions being asked.
 
In that sense, this study is not just about oxygen transport. It is a preview of a future where understanding life at its smallest scales depends as much on computational innovation as it does on biology itself.

Japanese scientists decode dolphin speed with supercomputing: Turbulence, vortices, and the hidden physics of propulsion

 
The exceptional swimming speed and efficiency of dolphins have long intrigued researchers. Recent advances in computational science have enabled a novel approach to this question, one that relies on high-performance supercomputing rather than empirical observation alone.
 
Researchers at the University of Osaka employed large-scale numerical simulations to investigate the turbulent fluid dynamics associated with dolphin locomotion. Their analysis uncovers a hierarchical organization of vortices within the flow, structures that serve as the primary drivers of propulsion, thereby advancing the scientific understanding of efficient fluid motion.

Turbulence as a computational frontier

Fluid dynamics remains one of the most computationally demanding domains in physics. Governed by the nonlinear Navier–Stokes equations, turbulent flows exhibit multiscale behavior, where energy cascades from large coherent structures down to smaller, chaotic eddies.
 
Capturing this hierarchy requires direct numerical simulation (DNS) or similarly high-resolution computational approaches, techniques that are infeasible without supercomputing infrastructure. In this study, researchers used a supercomputer to resolve the full spatiotemporal evolution of flow fields generated by oscillating dolphin tails, enabling them to:
  • Decompose turbulent flow into scale-dependent vortex structures (illustrated in the sketch after this list).
  • Track energy transfer across scales (the “energy cascade”).
  • Quantify thrust contributions from different flow components.
  • Perform parameter sweeps across multiple swimming regimes.
This computational framework effectively turns the supercomputer into a “fluid microscope,” revealing details inaccessible to experimental measurement alone.
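
As an illustration of the first two steps, the sketch below splits a velocity field into scale bands with Gaussian filters and measures the kinetic energy in each band. This is a generic band decomposition under assumed filter widths, not the Osaka team's specific method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_decompose(u, sigmas=(8.0, 2.0)):
    """Split one velocity component u (3D array) into bands via successive
    Gaussian low-pass filters: large scales, intermediate eddies, fine scales.
    The three bands sum exactly back to u."""
    large = gaussian_filter(u, sigma=sigmas[0])
    medium = gaussian_filter(u, sigma=sigmas[1]) - large
    small = u - large - medium
    return large, medium, small

# Kinetic energy per band shows where the cascade deposits energy.
u = np.random.default_rng(0).standard_normal((64, 64, 64))  # stand-in field
for name, band in zip(("large", "medium", "small"), scale_decompose(u)):
    print(name, 0.5 * np.mean(band**2))
```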

The hierarchy of vortices

The simulations demonstrate that dolphin propulsion is dominated not by the total turbulence generated, but by a hierarchical structure of vortices.
 
At the top of this hierarchy are large-scale vortex rings, generated by the oscillatory motion of the dolphin’s tail. These structures:
  • Carry the majority of the momentum transfer.
  • Push water backward efficiently.
  • Generate the bulk of forward thrust.
As these large vortices evolve, they break down into progressively smaller vortices through nonlinear interactions, a hallmark of turbulence known as the energy cascade. However, the simulations show that:
  • Small-scale vortices contribute minimally to propulsion.
  • Their role is largely dissipative, redistributing energy rather than generating thrust.
This distinction is critical. While classical turbulence theory emphasizes the complexity of small-scale structures, the Osaka team’s results indicate that biological propulsion exploits the largest coherent structures, effectively filtering useful motion from chaotic flow.
 
“Our goal is to understand which parts of the turbulent flow help dolphins swim so quickly,” noted lead researcher Yutaro Motoori, emphasizing the importance of isolating dominant flow components through computation.

Supercomputing enables flow decomposition

A key innovation in the study is the ability to computationally decompose turbulence into scale-specific contributions, a task nearly impossible in laboratory settings. By simulating the full velocity and vorticity fields, researchers could isolate:
  • Coherent vortex rings (large-scale structures; see the sketch below).
  • Intermediate eddies that contribute to energy transfer.
  • Fine-scale turbulent dissipation.
This decomposition allows for a quantitative mapping between flow structure and propulsion efficiency. The results remained robust across varying swimming speeds, suggesting a universal mechanism underlying dolphin locomotion.
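
For the first of these, a standard tool for isolating coherent vortices is the Q-criterion, which flags regions where rotation dominates strain. The sketch below computes it from a gridded velocity field; it is a common technique, not necessarily the one used in this study.

```python
import numpy as np

def q_criterion(u, v, w, dx=1.0):
    """Q-criterion on a uniform grid: Q = 0.5*(||Omega||^2 - ||S||^2),
    where S and Omega are the symmetric and antisymmetric parts of the
    velocity gradient. Q > 0 marks rotation-dominated regions."""
    grads = [np.gradient(c, dx) for c in (u, v, w)]  # grads[i][j] = d u_i / d x_j
    Q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            g_ij, g_ji = grads[i][j], grads[j][i]
            S = 0.5 * (g_ij + g_ji)   # strain-rate component
            O = 0.5 * (g_ij - g_ji)   # rotation component
            Q += 0.5 * (O**2 - S**2)
    return Q
```

Isosurfaces of Q > 0 then trace the large vortex rings and their progressively smaller offspring.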
 
Such insights depend critically on high-resolution simulation grids and parallel computation, in which millions or more degrees of freedom must be advanced simultaneously through time.

From biological insight to engineering design

Beyond biology, the implications of this work extend into engineering and applied physics. Understanding how dolphins harness turbulence efficiently could inform:
  • Next-generation underwater vehicles, optimized for thrust efficiency
  • Bio-inspired propulsion systems, mimicking oscillatory tail dynamics
  • Flow control strategies, reducing drag or enhancing lift in turbulent regimes
The study highlights a broader paradigm shift: rather than avoiding turbulence, advanced systems may learn to exploit its structure, guided by insights derived from supercomputing.

A curious glimpse into nature’s algorithms

Perhaps most intriguing is what this research suggests about nature itself. Dolphins, through evolution, appear to have “solved” a complex fluid dynamics problem, one that scientists are only now unraveling using some of the most powerful computational tools available.
 
By revealing that propulsion depends primarily on large-scale vortex organization rather than the full turbulent spectrum, the study offers a simpler, more elegant picture of motion in fluids.
 
It is a reminder that, in the age of supercomputing, curiosity-driven science can uncover not only the mechanics of the natural world but also the underlying principles that make it efficient.
 
And in this case, the answer to a deceptively simple question, why dolphins swim so fast, turns out to be written in the language of vortices, decoded by machines powerful enough to simulate the sea itself.

Cosmic ambition at scale: UK supercomputer unlocks a 2.5-petabyte universe

 
Marking a significant advancement in computational astrophysics, researchers at Durham University have released one of the most extensive cosmological simulation datasets to date. This comprehensive digital reconstruction of the Universe, enabled by high-performance supercomputing, demonstrates the unprecedented scale at which modern astrophysical phenomena can be modeled and analyzed.
 
Central to this achievement is the FLAMINGO project, an international collaborative effort aimed at simulating the evolution of matter across cosmological timescales. The resulting dataset exceeds 2.5 petabytes, a volume comparable to roughly half a million high-definition films, and gives the global scientific community access to highly detailed virtual universes that trace the formation and evolution of cosmic structures from shortly after the Big Bang to the present era.

Supercomputing as the engine of discovery

The simulations were executed on the COSMA-8 supercomputer, part of the UK’s DiRAC national high-performance computing infrastructure. This system, purpose-built for data-intensive cosmological workloads, enabled the integration of vast spatial scales with sophisticated physical modeling.
 
Unlike earlier generations of cosmological simulations, which often forced a trade-off between resolution and scale, FLAMINGO bridges both extremes. It simultaneously models:
  • Gigaparsec-scale cosmic volumes, spanning billions of light-years
  • Galaxy formation physics, including gas dynamics, star formation, and feedback processes
  • Dark matter and dark energy evolution, the dominant drivers of cosmic structure
  • Large-scale clustering, producing the filamentary “cosmic web” observed in galaxy surveys
This dual capability reflects a fundamental shift enabled by supercomputing: the convergence of astrophysical detail with cosmological precision.

The computational architecture of a universe

From a technical standpoint, the FLAMINGO simulations represent a triumph of parallel computing and algorithmic design. The underlying software stack, built around advanced cosmological simulation codes such as SWIFT, leverages:
  • Massively parallel processing, distributing billions of computational elements across thousands of cores
  • Hybrid gravity–hydrodynamics solvers, capturing both collisionless dark matter and baryonic physics (a toy gravity step is sketched after this list)
  • Time-resolved evolution, tracking the growth of structure across cosmic epochs
  • Petascale I/O pipelines, capable of writing, storing, and indexing multi-petabyte outputs
These simulations follow matter as it collapses under gravity, forming halos, galaxies, and clusters, while simultaneously modeling the energetic feedback from stars and black holes that regulates galaxy growth. The result is a statistically robust, physically grounded synthetic universe.
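
To give a flavor of the gravity half of such solvers, the toy sketch below runs a single particle-mesh step: particles are deposited onto a grid and the periodic Poisson equation is solved with an FFT. This is a pedagogical stand-in, not SWIFT's implementation; SWIFT combines mesh methods with fast-multipole gravity under task-based parallelism.

```python
import numpy as np

def pm_potential(pos, box, n=64, G=1.0):
    """Toy particle-mesh gravity: deposit unit-mass particles on a grid
    (nearest-grid-point), then solve grad^2 phi = 4 pi G rho with an FFT
    under periodic boundary conditions."""
    rho = np.zeros((n, n, n))
    cells = (pos / box * n).astype(int) % n
    np.add.at(rho, tuple(cells.T), 1.0)
    rho *= n**3 / box**3                 # counts -> mass density
    rho -= rho.mean()                    # periodic box needs a zero-mean source

    k = 2*np.pi*np.fft.fftfreq(n, d=box/n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                    # avoid divide-by-zero at k = 0
    phi_k = -4*np.pi*G * np.fft.fftn(rho) / k2
    phi_k[0, 0, 0] = 0.0
    return np.fft.ifftn(phi_k).real

# Usage: 10,000 random particles in a unit periodic box
rng = np.random.default_rng(1)
phi = pm_potential(rng.random((10_000, 3)), box=1.0)
```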
 
Crucially, the dataset’s size and complexity required not only raw compute power but also innovations in data accessibility. The team developed a web-based platform that allows researchers to query and extract subsets of the data without downloading entire petabyte-scale files, effectively democratizing access to supercomputer-scale science.
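
The same principle, reading only the slice you need, applies at the file level. The snippet below shows a partial read from an HDF5 snapshot, the format SWIFT writes; the filename and dataset path are illustrative placeholders following the Gadget-style layout, not the FLAMINGO platform's actual interface.

```python
import h5py

# Hypothetical snapshot name and Gadget-style dataset path, for illustration.
with h5py.File("flamingo_snapshot.hdf5", "r") as f:
    coords = f["PartType1/Coordinates"]   # lazy handle; nothing read yet
    subset = coords[:1_000_000]           # h5py reads only this slice from disk
    print(subset.shape)
```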

A global resource for precision cosmology

The scientific potential of the dataset is vast. Cosmological simulations are essential tools for interpreting observational data from next-generation telescopes and surveys. By comparing simulated universes with real observations, researchers can test competing models of:
  • Dark matter particle properties
  • Dark energy and cosmic acceleration
  • Galaxy formation and evolution
  • Large-scale structure statistics
FLAMINGO’s scale enables percent-level precision cosmology, allowing subtle deviations between theory and observation to be identified and explored.
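
Such comparisons typically run through summary statistics, above all the matter power spectrum P(k). The sketch below is a minimal FFT-based estimator for a gridded overdensity field; production pipelines add corrections (mass-assignment windows, shot noise) that are omitted here.

```python
import numpy as np

def power_spectrum(delta, box, nbins=32):
    """Spherically averaged matter power spectrum P(k) of a zero-mean
    overdensity field delta on a uniform grid of side `box`. Minimal
    estimator; empty k-bins return NaN."""
    n = delta.shape[0]
    dk = np.fft.fftn(delta) * (box / n)**3   # continuum FT convention
    pk3d = np.abs(dk)**2 / box**3            # P(k) = |delta_k|^2 / V

    k1d = 2*np.pi*np.fft.fftfreq(n, d=box/n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)

    bins = np.linspace(0, kmag.max(), nbins + 1)
    which = np.digitize(kmag.ravel(), bins)
    pk = np.array([pk3d.ravel()[which == i].mean()
                   for i in range(1, nbins + 1)])
    return 0.5*(bins[1:] + bins[:-1]), pk
```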
 
Moreover, the ability to simulate rare, large-scale structures, such as massive galaxy clusters, provides insights that smaller simulations cannot capture. These structures serve as sensitive probes of cosmological parameters and fundamental physics.

Inspiring the next era of computational science

The release of this dataset is more than a technical achievement; it is a statement about the future of science. By making one of the largest supercomputer-generated datasets openly available, the team is lowering the barrier to entry for researchers, students, and institutions worldwide.
 
As noted by project leaders, access to facilities like COSMA-8 is typically limited. By distributing the results of these simulations globally, the project transforms a localized supercomputing capability into a shared scientific resource.
 
This approach reflects a broader trend: supercomputing is no longer just a tool; it is an infrastructure for collaboration and discovery.

Toward an exascale cosmos

Looking ahead, projects like FLAMINGO foreshadow the coming era of exascale computing, where simulations will achieve even higher resolution, incorporate additional physical processes, and integrate real-time observational data.
 
For now, Durham’s 2.5-petabyte universe stands as a powerful demonstration of what is possible when computational ambition meets scientific vision. It is a reminder that, in the age of supercomputing, humanity is no longer limited to observing the cosmos; we are beginning to recreate it.