Increased polar ocean turbulence observed in supercomputer simulations of a warming planet

A recent study from the Institute for Basic Science (IBS) and its collaborating institutions presents compelling new evidence that the planet's polar oceans are likely to become significantly more turbulent as climate change progresses. High-performance supercomputing was essential to this research.

Simulation at Scale

Researchers at the IBS Center for Climate Physics (ICCP) used ultra-high-resolution climate models to analyze ocean currents, ice cover, and horizontal stirring, a process in which winds and currents stretch and mix water masses. The models were executed on the supercomputer Aleph, housed at IBS in Daejeon, South Korea.
 
According to IBS, Aleph has a processing capacity of approximately 1.43 petaflops (about 1.4 quadrillion floating-point operations per second). Running these finely resolved simulations, which track ocean turbulence, sea ice decline, and the mixing of marine ecosystems, demands substantial computational resources, as well as considerable storage and data-handling capacity. For instance, one ultra-high-resolution simulation generated approximately 5.3 terabytes of data per year of simulated time.
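
To put that data rate in perspective, here is a minimal back-of-envelope sketch. The 5.3 TB per simulated year is the figure from the study; the run lengths and scenario count below are illustrative assumptions, not reported values:

```python
# Back-of-envelope storage estimate for ultra-high-resolution climate runs.
# The 5.3 TB per simulated year comes from the study; run lengths and the
# number of CO2 scenarios are illustrative assumptions, not reported values.

TB_PER_SIM_YEAR = 5.3

def run_storage_tb(sim_years: int, scenarios: int = 1) -> float:
    """Total output volume in terabytes for a set of simulations."""
    return TB_PER_SIM_YEAR * sim_years * scenarios

# E.g., three CO2 scenarios (present-day, doubled, quadrupled), 100
# simulated years each -- a hypothetical but plausible experiment size.
total_tb = run_storage_tb(sim_years=100, scenarios=3)
print(f"{total_tb:.0f} TB (~{total_tb / 1000:.1f} PB) of model output")
# -> 1590 TB (~1.6 PB), broadly consistent with the ~1.8 PB cited below
#    for one high-resolution configuration.
```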
 
The research team conducted simulations using present-day CO₂ levels, doubled CO₂, and quadrupled CO₂, incorporating atmospheric, oceanic, sea ice, and land feedback mechanisms within a fully coupled Earth system model. The simulations revealed a significant increase in "mesoscale horizontal stirring" (MHS) within both the Arctic and Southern Oceans under warming conditions, indicating heightened mixing, eddy activity, stronger currents, and overall turbulence.

The regional mechanisms vary: in the Arctic, sea ice loss exposes the ocean to more direct wind forcing and eddy generation, while around the Antarctic coast, increased freshwater input from melting ice sharpens density gradients and strengthens currents such as the Antarctic Slope Current. The ecological consequences are considerable, as intensified mixing affects nutrient distribution, plankton populations, larval transport of marine life, and even the dispersion of pollutants like microplastics.

Why supercomputing matters

This research underscores the crucial role of supercomputers in advancing climate science. Conventional climate models have typically employed coarse resolutions, with grid scales exceeding 100 kilometers, which limits their ability to represent small-scale eddies and turbulence. In contrast, the ICCP simulations used resolutions of approximately 0.25° for the atmosphere and 0.1° for the ocean, corresponding to roughly 10 kilometers for some components. Without computational resources such as Aleph, this combination of resolution and scale would be infeasible. The demands on storage, memory, input/output, and computational throughput are substantial; one study utilized 11,960 cores and generated approximately 1.8 petabytes of output for a particular high-resolution configuration. In short, supercomputing capability unlocks critical insights, moving models from simplified approximations to detailed, physically faithful simulations of turbulence, eddies, ice-ocean interactions, and marine ecosystems.
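
For readers wondering what those grid numbers mean on the ground, here is a minimal sketch converting angular resolution to kilometers. It uses standard spherical-Earth geometry; the 0.25° and 0.1° values are from the article, and the choice of 70°N as a representative polar-ocean latitude is ours:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def grid_spacing_km(resolution_deg: float, latitude_deg: float = 0.0):
    """Approximate north-south and east-west grid spacing for a
    latitude-longitude grid on a spherical Earth."""
    ns = math.radians(resolution_deg) * EARTH_RADIUS_KM
    ew = ns * math.cos(math.radians(latitude_deg))  # meridians converge
    return ns, ew

for res, label in [(0.25, "atmosphere"), (0.1, "ocean")]:
    ns, ew = grid_spacing_km(res, latitude_deg=70.0)
    print(f"{label}: {res} deg -> ~{ns:.0f} km N-S, ~{ew:.0f} km E-W at 70N")
# atmosphere: 0.25 deg -> ~28 km N-S, ~10 km E-W at 70N
# ocean:      0.1  deg -> ~11 km N-S,  ~4 km E-W at 70N  (hence "roughly 10 km")
```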
 
From a practical perspective, these findings suggest that, in a warming climate, the polar oceans may exhibit behaviors currently underestimated by existing climate models. Increased turbulence will lead to heightened mixing of heat and nutrients, potentially influencing sea surface temperatures, ecosystem structures, and the dispersion of pollutants. From a broader scientific and infrastructural perspective, the study underscores the critical need for advanced computing resources in climate research. As models continue to advance in resolution, coupling, and complexity, such as through the integration of biological systems with physical dynamics, computational demands will continue to escalate. Institutions like IBS are currently preparing for the next generation of computing capabilities.
 
The planet is, quite literally, becoming more active. Thanks to computing power once considered science fiction, scientists are probing the complex dynamics of our polar oceans and discovering that the next phase of climate change may bring not only gradual warming but also accelerated turbulence, mixing, and widespread rearrangement of ocean systems. The supercomputer Aleph has become one of the primary instruments in that investigation.

RiverMamba: Advancements in flood forecasting

In an era of increasingly unpredictable river behavior, a new tool from the Lamarr Institute for Machine Learning and Artificial Intelligence in Germany may represent a significant advance. Its recently introduced RiverMamba, a deep-learning model, is designed to forecast river discharge and fluvial floods on a global grid.
 
Researchers indicate that RiverMamba can generate forecasts of global river discharge on a 0.05° grid (approximately 5 km resolution) with a lead time of up to seven days.

Key technical features include:

  • The utilization of "Mamba blocks" (bidirectional state-space modules) to model the spatio-temporal routing of rivers and meteorological forcing; a minimal sketch of the underlying state-space idea follows this list.
  • Integration of long-term reanalysis data (e.g., ERA5-Land), static river attributes, and meteorological forecasts (ECMWF HRES) to inform predictions.
  • The developers assert that RiverMamba "surpasses both operational AI- and physics-based models" in forecasting accuracy for extreme events.
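
To make the "state-space" idea concrete, here is a minimal, illustrative linear state-space scan in Python. This is not RiverMamba's actual code: real Mamba blocks learn input-dependent ("selective") parameters, scan in both directions, and run on GPUs; here A, B, and C are fixed toy matrices:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal discrete state-space recurrence:
        h[t] = A @ h[t-1] + B @ x[t]
        y[t] = C @ h[t]
    Mamba blocks build on this idea, but make A, B, C functions of the
    input (the "selective" part) and scan in both directions.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                 # x: (time, input_dim)
        h = A @ h + B @ x_t       # carry hydrological "memory" forward
        ys.append(C @ h)          # emit a discharge-like output
    return np.stack(ys)

# Toy example: 10 timesteps of 3 meteorological forcing variables.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)               # slowly decaying memory
B = rng.normal(size=(4, 3)) * 0.1
C = rng.normal(size=(1, 4))
y = ssm_scan(rng.normal(size=(10, 3)), A, B, C)
print(y.shape)  # (10, 1)
```

The recurrence is what lets the model propagate upstream conditions downstream through time, which is exactly the routing problem river forecasting needs to solve at scale.
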
This is particularly significant for St. Louis, Missouri, home of this year's SC25 supercomputing show, and beyond. Flood forecasting is a critical undertaking in areas like the Kansas City region, where river basins (e.g., the Missouri and Kansas rivers and their tributaries) can pose unexpected challenges. A model with global coverage and medium-range lead times offers several advantages:
  • Lead-time extension: Up to seven days provides emergency planners with increased preparedness time.
  • Granular spatial resolution: The 0.05° grid enables finer discrimination of catchments.
  • Extreme flood modeling: Enables the analysis of rare, high-impact events, rather than just "typical" flows.
  • Scalability: A global model allows for potential application beyond major rivers to smaller basins, which are often less well-monitored.

For MoveInLab and its KC-first orientation, this means that as climate change drives more frequent extreme weather, the ability to forecast at finer scales can inform property risk assessments, neighborhood resilience strategies, and buyer/seller consultations about flood hazard.
 
The model, as noted by its developers, has certain limitations. Specifically, observational data may not fully capture human interventions, such as dams and levees, and uncertainties remain in meteorological forecasting. The model's operational readiness across all catchment areas requires further demonstration. For users in the KC area, local calibration may be necessary, as global models often require adaptation to local hydro-geomorphology, urbanization patterns, and data characteristics.

Supercomputing's Advancement in River Flow Modeling

The underlying technology, supercomputing (large-scale clusters, high-performance computing, GPU farms), has expanded from modeling galactic structures to modeling the fine-grained flow of water through individual rivers. This shift is significant for several reasons.

Data volume and velocity

Predicting floods globally at approximately 5 km resolution with a seven-day lead time requires processing terabytes of meteorological, land-surface, and hydrological data, an undertaking that strained traditional models.

Model complexity

The Mamba blocks in RiverMamba embed spatio-temporal routing, which is computationally intensive. Without supercomputing or GPU acceleration, the model would operate too slowly to be practical for real-time forecasting.

Operational resilience

Flood-warning centers in regions like the Midwest require models that run quickly, reliably, and frequently to issue timely alerts, which supercomputing infrastructure makes possible.

Democratization risk

However, compute-heavy models necessitate resources (energy, hardware, expertise). If only a few institutions can operate them, the benefits may not reach underserved regions, raising equity concerns.

In summary, supercomputing is not merely "big machines doing big math" but rather the new infrastructure supporting Earth-system resilience. For flood forecasting, this infrastructure is finally undergoing the necessary upgrades.

RiverMamba represents a significant advancement, characterized by global awareness, fine resolution, and deep-learning capabilities. For the #SC25 and STL audience, this translates to improved tools for understanding and communicating flood risk. However, it is not a "magic bullet." Local adaptation, data limitations, and access to computational resources remain crucial. The era of "smart rivers" is emerging, and the technological underpinnings are being significantly enhanced.

Semiconductor miracle claimed, but what does it mean for supercomputing?

New claim: semiconductor turns superconductor

Researchers at New York University (NYU), in collaboration with teams from the University of Queensland, ETH Zürich, and Ohio State University, report the creation of a novel material: hyper-doped germanium (Ge) with gallium (Ga) substitution, which the authors claim exhibits superconductivity at approximately 3.5 K.
 
According to the published paper in Nature Nanotechnology titled “Superconductivity in substitutional Ga-hyperdoped Ge epitaxial thin films,” the key points are:
  • Ga atoms were substitutionally incorporated into the Ge lattice at very high concentrations (~17.9 % Ga substitution, hole concentration ≈ 4.15 × 10²¹ cm⁻³).
  • The material was grown epitaxially by molecular-beam epitaxy (MBE), yielding a relatively ordered (low-disorder) structure compared to prior “hyper-doped” attempts.
  • The team measured a superconducting critical temperature (T_c) of 3.5 K.
  • The authors suggest that this development could serve as a “superconductor–semiconductor platform” within the familiar group-IV semiconductor environment.
NYU’s press materials frame the work as a step toward “scalable, foundry-ready quantum devices”, “low-power cryogenic electronics,” and the integration of classical and quantum chips.
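
A quick consistency check on those numbers is instructive. Germanium's diamond-cubic lattice (lattice constant ≈ 5.658 Å, 8 atoms per conventional cell) gives an atomic density of about 4.4 × 10²² cm⁻³; the sketch below combines those textbook values with the paper's figures. The "apparent activation" framing is our illustrative reading, not a claim from the paper:

```python
# Back-of-envelope check: Ga dopant density vs. measured hole density.
# Lattice constant and cell occupancy are textbook germanium values;
# the 17.9% substitution and 4.15e21 cm^-3 hole density are from the paper.

a_cm = 5.658e-8                  # Ge lattice constant, cm
ge_atomic_density = 8 / a_cm**3  # diamond cubic: 8 atoms per cell
ga_fraction = 0.179
measured_holes = 4.15e21         # cm^-3

ga_density = ga_fraction * ge_atomic_density
print(f"Ge atomic density  : {ge_atomic_density:.2e} cm^-3")      # ~4.42e22
print(f"Ga dopant density  : {ga_density:.2e} cm^-3")             # ~7.9e21
print(f"Apparent activation: {measured_holes / ga_density:.0%}")  # ~52%
```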
 
From the vantage of high-performance computing (HPC) and supercomputing infrastructure, the implications of a semiconductor material that also superconducts are enticing:
  • If one could integrate superconducting circuits with semiconducting chip infrastructure, one might reduce resistive energy losses, perhaps enabling faster interconnects, denser cryogenic processors, or more efficient quantum-accelerated hardware.
  • The fact that the base material is germanium (Ge), which already features in advanced semiconductor processes, prompts optimism about compatibility with existing fabrication pipelines and with chip-scale superconducting/semiconducting hybrids.
  • Lower energy dissipation is especially significant for supercomputing centers, where power and cooling are major cost/constraint factors.
However, and importantly, the paper and associated press material also reveal significant limitations, which a skeptical observer must highlight:
  1. A very low operating temperature of 3.5 K presents a significant challenge. While superconducting quantum circuits can function at millikelvin or low kelvin temperatures, the need for cryogenics remains demanding. For conventional supercomputing hardware, which typically operates at around 300 K or even with moderate cryogenic cooling, the practical application of this technology is uncertain.
  2. Thin-film, experimental material: The reported material is an epitaxial thin film grown under highly controlled conditions (MBE) with extreme doping levels and careful crystallographic ordering. Scaling this to large-area wafers, reliable yields, multilayer integration, packaging, and thermal/cryogenic infrastructure is non-trivial.
  3. Limited performance metrics beyond superconductivity: The paper reports the existence of superconductivity, but does not (at least in the abstract) provide data on other performance metrics relevant to supercomputing: e.g., critical current densities, magnetic-field resilience, junction behaviour, switching speeds, coherence/phase noise, integration with semiconducting logic, thermal cycling, and long-term reliability.
  4. Integration and interface issues: The promise of “superconductor–semiconductor platform” hinges on clean interfaces and controlled doping, but real supercomputing systems require complex multilayer interconnects, packaging, and rugged environments. Translating lab-scale thin films into full system components is a long road.
  5. Scope inflation risk: The press releases use terms like "scalable," "foundry-ready," and "quantum devices and low-power cryogenic electronics," which seem to overstate the current findings. There's a clear gap between demonstrating superconductivity in a thin film and deploying it in actual supercomputing hardware.
To be clear: this work does not yet change the supercomputing landscape. Among the missing pieces:
  • No demonstration of a computing circuit or interconnect built from this material running at supercomputing speeds or under realistic loads.
  • No evidence of a full logic device or even a prototype cryogenic classical logic device using the new material.
  • No cost/footprint or manufacturing path analysis. The material may require exotic fabrication, extreme cooling, or doping regimes impractical for commercial processors or HPC centers.
  • No benchmark against existing superconducting interconnects (e.g., Nb, NbTi, high-Tc materials) or advanced semiconductor interconnects.
  • No demonstration of switching speeds, control logic, or system scalability.
In other words, the headline “semiconductor that superconducts” is accurate, but the leap to “supercomputing revolution” is not yet justified.

Why this still matters, cautiously

Despite the caveats, this research is interesting and worth tracking for these reasons:
  • It demonstrates that superconductivity can be achieved in a group-IV semiconductor environment (germanium) with relatively low disorder, opening a novel materials platform.
  • It could enable new hybrid architectures where semiconducting and superconducting components are integrated more closely on a chip, potentially reducing interconnect parasitics and thermal mismatches.
  • For cryogenic computing architectures (emerging research field: cryo-cooled classical logic, deep-learning at low temperatures, superconducting logic), having a more “familiar” semiconductor substrate might reduce integration complexity compared to exotic superconductors alone.
  • From a materials science standpoint, identifying substitutional Ga in Ge, along with the resulting structural distortion (tetragonal distortion) and high hole concentration, significantly contributes to our understanding of superconductivity in non-metallic materials.

Outlook and key questions for supercomputers

Here are questions supercomputing architects and technology watchers should ask when evaluating this kind of research for future applicability:

  • What is the critical current density (J_c) of this Ge:Ga superconductor, and how does it compare to existing superconductors used for interconnects or logic?
  • How robust is the superconductivity under magnetic fields, temperature cycling, thermal load, and mechanical stress? HPC environments impose non-ideal conditions.
  • Can this material be fabricated at the wafer scale, with high yield, multilayer connectivity, and compatibility with high-volume semiconductor fabrication?
  • Can circuits be built that take advantage of the superconductivity (e.g., superconducting interconnects, logic, sensors) and integrate with classical CMOS/Ge logic in the same chip or module?
  • What is the energy/cost trade-off once the required cryogenic cooling, packaging, and supporting infrastructure are included? Does the reduction in resistive loss offset the additional overhead in cooling and system complexity? (A back-of-envelope sketch of the cooling overhead follows this list.)
  • Do the superconducting transition and operation domain align with the operating conditions of a practical supercomputer module (i.e., cooling budget, maintenance, reliability)?
  • Are there unanticipated material limitations, e.g., dopant clustering over time, degradation, interface mismatches, or manufacturing variability that would hamper mass deployment?
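
On the energy/cost question in particular, thermodynamics already sets a hard floor. The Carnot limit gives the minimum room-temperature work needed to remove one watt of heat at 3.5 K; real cryocoolers achieve only a fraction of this ideal, and the 10% figure below is an illustrative assumption, not a measured value:

```python
# Carnot bound on cryogenic cooling overhead.
# Minimum work to remove heat Q at T_cold, rejecting it at T_hot:
#   W_min = Q * (T_hot - T_cold) / T_cold
# Real cryocoolers reach only a fraction of Carnot efficiency; the 10%
# figure below is an illustrative assumption, not a measured value.

def cooling_overhead_w(q_watts, t_cold_k, t_hot_k=300.0, efficiency=1.0):
    """Room-temperature power needed to extract q_watts at t_cold_k."""
    w_min = q_watts * (t_hot_k - t_cold_k) / t_cold_k
    return w_min / efficiency

per_watt_ideal = cooling_overhead_w(1.0, t_cold_k=3.5)
per_watt_real = cooling_overhead_w(1.0, t_cold_k=3.5, efficiency=0.10)
print(f"Carnot minimum: ~{per_watt_ideal:.0f} W per W removed at 3.5 K")
print(f"At 10% of Carnot: ~{per_watt_real:.0f} W per W removed")
# ~85 W (ideal) and ~850 W (10% of Carnot): resistive-loss savings must
# beat this overhead before the superconductor pays off in an HPC system.
```
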
In summary, the announcement from NYU and collaborators represents a noteworthy advance in materials science; the creation of a superconducting thin film from germanium is a significant achievement. However, for the supercomputing field, it is premature to characterize this as a breakthrough that will reshape HPC architectures. Demonstrations of fabrication scale-up, device performance, system integration, cost-effectiveness, and reliability are still needed.
 
Until those pieces fall into place, the hype surrounding "scalable quantum devices" or "foundry-ready superconducting semiconductors" should be tempered with caution. The future may indeed bring hybrid semiconductor-superconductor chips, low-power cryogenic logic, and novel interconnects, but this research represents a significant step, not the final destination.

AI drug discovery models: Physics falls short

In a thought-provoking twist for computational chemistry and biomedicine, researchers at the University of Basel (Switzerland) have uncovered that even the most advanced AI models used for drug design may not truly understand the physics of molecular binding; they appear to be pattern-matching rather than reasoning.

When learning isn’t the same as understanding

The study, reported today, describes how deep-learning "co-folding" models (systems designed to predict how a protein and a potential drug molecule will fit together) fail to uphold basic physical laws when deliberately tested under challenging conditions.
 
In one striking experiment, researchers mutated or blocked binding sites on proteins, or edited ligands so they would no longer bind. Yet, AI models frequently predicted a binding pose anyway, as though the disruption had not occurred. In more than half the cases, the AI output ignored the alterations.
 
The authors argue that these models rely on statistical correlations (the shapes and sequence patterns they have observed in training) rather than truly modeling the underlying physics of electrostatics, steric factors (the crowding of atoms), hydrogen bonds, and so on.

Why this matters for drug discovery

The implications are significant. The promise of AI in drug discovery is enormous: finding new molecules more quickly, predicting how they will bind, and shortening the time to a viable therapeutic. However, as the Basel team notes, if a model does not truly understand what makes a ligand bind to a protein, its predictions for novel, unseen targets (a key objective) may be unreliable.
 
As Prof. Markus Lill of the University of Basel states, "When they see something completely new, they quickly fall short, but that is precisely where the key to new drugs lies."
 
In other words, models trained on known protein-ligand pairs may perform well "within sample," but when faced with novel challenges, they may revert to "safe guesses" rather than principled predictions. This puts a caveat on many current hype narratives surrounding AI drug design.
 
Key findings include:
  • The deep-learning co-folding models were exposed to adversarial examples, including mutated binding sites, altered ligand charge distributions, and blocked binding pockets (a hypothetical sketch of such a test follows this list).
  • Despite physically implausible or impossible binding configurations (for instance, a ligand charged the wrong way, or binding-site residues replaced by sterically blocking amino acids), the models still often predicted confident binding poses.
  • From this, the authors conclude these models do not reliably respect physical constraints (e.g., electrostatics, hydrogen‐bonding networks, steric hindrance), and they fail to generalize when faced with new types of protein/ligand systems.
  • The paper argues for integrating physical and chemical priors into future models, making sure that machine‐learning models are not simply “black-box” pattern matchers, but respect the underlying molecular science.
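
To illustrate the kind of test the Basel team describes (not their actual code), here is a hypothetical harness: `predict_pose` stands in for any co-folding model that returns a binding score, and the mutation helper simply swaps binding-site residues for bulky ones. All names and the API are invented for illustration:

```python
# Hypothetical sketch of an adversarial co-folding test, in the spirit of
# the Basel study. `predict_pose` is a placeholder for a real co-folding
# model; it and the residue choices below are illustrative inventions,
# not the authors' code.

BULKY = "W"  # tryptophan: large side chain, used here to block a pocket

def mutate_binding_site(sequence: str, site_positions: list[int]) -> str:
    """Replace binding-site residues with a sterically bulky amino acid."""
    seq = list(sequence)
    for pos in site_positions:
        seq[pos] = BULKY
    return "".join(seq)

def physics_aware(predict_pose, sequence, ligand, site_positions) -> bool:
    """A physics-respecting model should report much weaker binding once
    the pocket is blocked; pure pattern-matchers often do not."""
    baseline = predict_pose(sequence, ligand)  # binding score in [0, 1]
    blocked = predict_pose(
        mutate_binding_site(sequence, site_positions), ligand
    )
    return blocked < 0.5 * baseline  # crude threshold, for illustration

# Usage (with some model wrapper `my_model.score`, hypothetical):
#   ok = physics_aware(my_model.score, protein_seq, ligand_smiles, [45, 67, 88])
```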

A cautious but curious tone on where to go next

The news from Basel isn’t a refutation of AI in drug research; rather, it is a clarion call for more nuance and care. AI models have already changed what’s possible: predicting protein folds, accelerating docking predictions, and broadly expanding the realm of computational chemistry. Yet this research suggests there’s still an important gap between “predicting what we know” and “reasoning about what we don’t know.”
Going forward, several directions are ripe:
  • Hybrid modeling: combining data‐driven deep learning with traditional physics‐based modeling (electrostatics, molecular mechanics, quantum effects) might strengthen reliability.
  • Benchmarking on novel/rare systems: rather than just “hold‐out” samples similar to the training data, models should be challenged with radically new proteins or ligands to test generalization.
  • Transparent AI: understanding not just the output but the reasoning of models (why did they predict binding despite physically implausible input?).
  • Experimental validation remains crucial: even the most sophisticated prediction needs lab and computational cross-checks that consider real chemistry and physics.

Bottom line

In summary, the study conducted by the University of Basel presents a compelling assessment: while current AI models demonstrate remarkable capabilities, they may primarily rely on patterns derived from historical data rather than accurately simulating molecular interactions. This disparity is particularly significant in drug discovery, where novel targets and unforeseen chemical phenomena are commonplace. Therefore, bridging this gap is essential. Moving forward, a focus on integrating machine learning with physics-based insights holds the key to advancing the development of innovative therapeutics.

Supercomputers unlock the chemistry of gecko binding: Vienna team breaks new ground in modeling large molecules

Scientists at the Vienna University of Technology (TU Wien) have developed a high-precision computational approach to enhance the understanding of how large molecules interact, specifically through the weak but pervasive van der Waals forces that enable geckos to adhere to surfaces. This breakthrough is anticipated to drive advancements in materials science, pharmaceuticals, and energy storage by providing greater reliability in predicting molecular behavior.

A puzzle solved

For many years, researchers in quantum chemistry have relied on two prominent computational methods: the "gold standard" coupled-cluster theory, specifically CCSD(T), and the stochastic diffusion quantum Monte Carlo (DMC) method. While both methods have provided near-benchmark accuracy for small molecules, discrepancies in predicted interaction energies emerged when applied to large, highly polarizable molecular systems.
 
The TU Wien team, led by Prof. Andreas Grüneis, along with Tobias Schäfer, Andreas Irmler, and Alejandro Gallo, investigated this divergence. They identified that CCSD(T) systematically overestimated binding energies in large molecular complexes, predicting stronger molecular interactions than were actually present.
 
Their new computational variant, designated CCSD(cT), incorporates selected higher-order corrections to the treatment of triple particle-hole excitations, which are significant for large, polarizable systems. This refinement effectively mitigates over-binding and aligns the computed values with the DMC results. The authors demonstrate in their study that CCSD(cT) achieves "chemical accuracy" (within approximately 1 kcal/mol) even for complexes comprising over 100 atoms.

The super-computational method: what makes it special

The key to the breakthrough isn’t simply more powerful hardware, but a clever adaptation of computation techniques and basis sets that fully exploit today’s supercomputing infrastructure. The authors report three major enablers:
  1. Massive parallelization – The workflow was implemented on high-performance computing (HPC) clusters using up to 50 compute nodes (each with 128 cores) for their largest tasks. The ability to distribute the workload allowed the team to avoid many of the local‐correlation approximations that earlier coupled-cluster calculations used to save time but at the cost of accuracy.
  2. Plane-wave basis sets – Instead of the conventional Gaussian-type atom-centered orbitals, the team employed a plane-wave basis set (commonly used in solid-state physics) for large molecular complexes, along with natural‐orbital truncation and singular‐value decomposed Coulomb integral factorization. These choices allowed unbiased and systematically improvable estimates for the interaction energies and reduced basis‐set error.
  3. Refined triple-excitation correction (cT) – The heart of the improvement is a correction to CCSD(T)’s perturbative (T) approximation. In brief, (T) neglects certain diagrams, specifically commutator terms of the form \([[\hat{V}, \hat{T}_2], \hat{T}_2]\), which are small for small, weakly polarizable molecules but become significant when molecules are large and highly polarizable. By including these terms in CCSD(cT), the team corrected the systematic over-binding of CCSD(T).
The method combines computational power with refined theory, effectively merging supercomputing and quantum chemistry. This "super-computational method" enables the reliable analysis of molecular systems that were previously too complex for theoretical models.

Why this matters: optimistic outlook

The implications are far-reaching:
  • Materials science & energy: Many next-generation materials (hydrogen storage media, novel catalysts, 2D materials, surfaces) rely on noncovalent interactions between large molecular or extended systems. Accurate benchmark interaction energies mean better design of materials from first principles. The TU Wien team notes the importance for predicting hydrogen binding energies, drug crystallization, and more.
  • Pharmaceuticals & biomolecules: Large molecules with many atoms—think proteins, drug–target systems, and crystals—are now becoming accessible to reliable computational modeling. That means faster, smarter virtual screening, better understanding of how drugs bind, how crystals form, and more.
  • AI and machine learning models: Accurate benchmark data is the lifeblood of machine learning in chemical and materials modelling. The new method generates high-quality reference data for large molecules, which can then train ML models for faster predictions down the line. (“Our results show that even well-established methods must be continuously re-examined to keep pace…” says the TU Wien release.)
  • Science advancing: Perhaps most exciting is the idea that this demonstrates a new frontier: we are expanding the domain of accuracy in many-electron theory to ever larger systems. As the authors put it, “we are witnessing an unremitting expansion of the frontiers of accurate electronic structure theories to ever larger systems … which … has the potential to transform the paradigm of modern computational materials science.”
In short, the method opens doors. With ever-growing computational power and clever theoretical innovation, the old boundary of “accurate only for small molecules” is being lifted. That means more realistic modelling of real‐world systems, faster innovation in materials and biotech, and a hopeful horizon for computational science.

Looking ahead

Of course, challenges remain. The computations reported still required significant supercomputing resources (e.g., ~100k CPU hours for the benchmark coronene dimer), and the authors note that full canonical CCSD(cT) for still larger systems is not yet feasible—they use a fitted approximation (CCSD(cT)-fit) for the largest complexes they studied.
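
Those resource figures translate into concrete wall-clock terms. A minimal sketch using only the numbers quoted in this article (the 100k CPU-hours and the 50 nodes × 128 cores are from the reporting above; mapping the coronene-dimer benchmark onto the full allocation at perfect scaling is our illustrative assumption):

```python
# Wall-clock estimate from the resource figures quoted above.
# 100k CPU-hours and 50 nodes x 128 cores come from the article; assuming
# one job uses the whole allocation at perfect scaling is illustrative.

cpu_hours = 100_000
nodes, cores_per_node = 50, 128
total_cores = nodes * cores_per_node  # 6,400 cores

wall_hours = cpu_hours / total_cores
print(f"{total_cores} cores -> ~{wall_hours:.1f} h wall time")  # ~15.6 h
# The same job on a 64-core workstation would take ~65 days, which is
# why this level of accuracy currently lives on supercomputers.
```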
 
But the path forward is clear: local correlation approaches and low-scaling methods can inherit the improvements of CCSD(cT), bringing accuracy to more systems at lower cost. As the paper states, “The more accurate CCSD(cT) approximation can directly be transferred to computationally efficient low-scaling and local correlation approaches, which will substantially advance…”
 
On an optimistic note, the "gold standard" itself has been improved. The TU Wien team shows that even widely trusted methods must evolve, and by driving that evolution, they are advancing the entire field. As we explore ever more complex molecular systems, from new energy materials to advanced drugs, reliable computational methods are not just helpful; they are essential. With this breakthrough, the future of computational chemistry and materials science looks brighter than ever.