RiverMamba: Advancements in flood forecasting

In an era of increasingly unpredictable river fluctuations, a new tool developed at the Lamarr Institute for Machine Learning and Artificial Intelligence in Germany may represent a significant advance. Their recently introduced deep-learning model, RiverMamba, is designed to forecast river discharge and fluvial floods on a global grid.
 
Researchers indicate that RiverMamba can generate forecasts of global river discharge on a 0.05° grid (approximately 5 km resolution) with a lead time of up to seven days.

Key technical features include:

  • The utilization of "Mamba blocks" (bidirectional state-space modules) to model the spatio-temporal routing of rivers and meteorological forcing.
  • Integration of long-term reanalysis data (e.g., ERA5-Land), static river attributes, and meteorological forecasts (ECMWF HRES) to inform predictions.
  • The developers assert that RiverMamba "surpasses both operational AI- and physics-based models" in forecasting accuracy for extreme events.
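To make the state-space idea concrete, here is a minimal sketch of a discretized linear state-space scan run in both directions, in the spirit of a bidirectional Mamba block. The dimensions, matrices, and the `bidirectional_ssm` helper are illustrative assumptions, not RiverMamba's actual architecture (Mamba additionally makes the state matrices input-dependent, its "selective" mechanism).

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal discretized linear state-space scan:
    h_t = A h_{t-1} + B x_t,  y_t = C . h_t"""
    d_state = A.shape[0]
    h = np.zeros(d_state)
    ys = []
    for x_t in x:               # scan over time steps
        h = A @ h + B * x_t     # state update driven by forcing x_t
        ys.append(C @ h)        # read out a discharge-like signal
    return np.array(ys)

def bidirectional_ssm(x, A, B, C):
    """Run the scan forward and backward and sum the outputs,
    mimicking the bidirectional processing in Mamba-style blocks."""
    fwd = ssm_scan(x, A, B, C)
    bwd = ssm_scan(x[::-1], A, B, C)[::-1]
    return fwd + bwd

# Toy example: 10 time steps of a scalar meteorological forcing.
rng = np.random.default_rng(0)
T, d = 10, 4
A = np.eye(d) * 0.9             # stable decay of the hidden state
B = rng.normal(size=d)
C = rng.normal(size=d)
x = rng.normal(size=T)
y = bidirectional_ssm(x, A, B, C)
print(y.shape)                  # (10,)
```

The appeal of this recurrence for hydrology is that the scan is linear in sequence length, which is what makes medium-range, high-resolution global forecasting computationally feasible.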

This is particularly significant for St. Louis, Missouri, home of this year's SC25 supercomputing show, and beyond. Flood forecasting is a critical undertaking in areas like the Kansas City region, where river basins (e.g., the Missouri and Kansas rivers and their tributaries) can pose unexpected challenges. A model with global coverage and medium-range lead times offers several advantages:
  • Lead-time extension: Up to seven days provides emergency planners with increased preparedness time.
  • Granular spatial resolution: The 0.05° grid enables finer discrimination of catchments.
  • Extreme flood modeling: Enables the analysis of rare, high-impact events, rather than just "typical" flows.
  • Scalability: A global model allows for potential application beyond major rivers to smaller basins, which are often less well-monitored.

For MoveInLab and its KC-first orientation, this means that as climate change drives more frequent extreme weather, the ability to forecast at finer scales can inform property risk assessment, neighborhood resilience strategies, and buyer/seller consults about flood hazard.
 
The model, as noted by its developers, has certain limitations. Specifically, observational data may not fully capture human interventions, such as dams and levees, and uncertainties remain in meteorological forecasting. The model's operational readiness across all catchment areas requires further demonstration. For users in the KC area, local calibration may be necessary, as global models often require adaptation to local hydro-geomorphology, urbanization patterns, and data characteristics.

Supercomputing's Advancement in River Flow Modeling

The underlying technology, supercomputing (i.e., large-scale clusters, high-performance computing, GPU farms), has transitioned from modeling galactic structures to modeling granular elements, such as water molecules in rivers. This shift is significant for several reasons.

Data volume and velocity

Predicting floods globally at approximately 5 km resolution with a seven-day lead time requires processing terabytes of meteorological, land-surface, and hydrological data, an undertaking that posed challenges for traditional models.
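A back-of-envelope calculation illustrates the scale involved; the variable count, hourly cadence, and float32 storage below are illustrative assumptions, not figures from the RiverMamba paper.

```python
# Back-of-envelope scale of a global 0.05-degree forecasting problem.
# Assumptions (illustrative): full lat/lon raster, float32 values,
# 10 input variables, hourly steps over a 7-day lead time.
lon_cells = int(360 / 0.05)     # 7200 cells of longitude
lat_cells = int(180 / 0.05)     # 3600 cells of latitude
cells = lon_cells * lat_cells   # ~25.9 million grid cells
print(cells)                    # 25920000

n_vars, steps = 10, 7 * 24      # 10 variables, 168 hourly steps
bytes_per_val = 4               # float32
total_gb = cells * n_vars * steps * bytes_per_val / 1e9
print(round(total_gb, 1))       # 174.2 GB per forecast cycle
```

Even under these modest assumptions, a single forecast cycle touches hundreds of gigabytes, which is why GPU clusters rather than workstations are the natural home for such models.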

Model complexity

The Mamba blocks in RiverMamba embed spatio-temporal routing, which is computationally intensive. Without supercomputing or GPU acceleration, the model would run too slowly to be practical for real-time forecasting.

Operational resilience

Flood-warning centers in regions like the Midwest require models that run quickly, reliably, and frequently to issue timely alerts, which supercomputing infrastructure makes possible.

Democratization risk

However, compute-heavy models necessitate resources (energy, hardware, expertise). If only a few institutions can operate them, the benefits may not reach underserved regions, raising equity concerns. In summary, supercomputing is not merely "big machines doing big math" but rather the new infrastructure supporting Earth-system resilience. For flood forecasting, this infrastructure is finally undergoing the necessary upgrades.

RiverMamba represents a significant advancement, characterized by global awareness, fine resolution, and deep-learning capabilities. For the #SC25 and STL audience, this translates to improved tools for understanding and communicating flood risk. However, it is not a "magic bullet." Local adaptation, data limitations, and access to computational resources remain crucial. The era of "smart rivers" is emerging, and the technological underpinnings are being significantly enhanced.

Semiconductor miracle claimed, but what does it mean for supercomputing?

New claim: semiconductor turns superconductor

Researchers at New York University (NYU), in collaboration with teams from the University of Queensland, ETH Zürich, and Ohio State University, report the creation of a novel material: germanium (Ge) hyper-doped with substitutional gallium (Ga), which the authors claim exhibits superconductivity at approximately 3.5 K.
 
According to the published paper in Nature Nanotechnology titled “Superconductivity in substitutional Ga-hyperdoped Ge epitaxial thin films,” the key points are:
  • Ga atoms were substitutionally incorporated into the Ge lattice at very high concentrations (~17.9 % Ga substitution, hole concentration ≈ 4.15 × 10²¹ cm⁻³).
  • The material was grown epitaxially by molecular-beam epitaxy (MBE), yielding a relatively ordered (low-disorder) structure compared to prior “hyper-doped” attempts.
  • The team measured a superconducting critical temperature (T_c) of 3.5 K.
  • The authors suggest that this development could serve as a “superconductor–semiconductor platform” within the familiar group-IV semiconductor environment.
NYU’s press materials frame the work as a step toward “scalable, foundry-ready quantum devices”, “low-power cryogenic electronics,” and the integration of classical and quantum chips.
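The reported doping figures can be cross-checked with simple arithmetic. This sketch uses the textbook Ge lattice constant and naively assumes each substitutional Ga acceptor contributes one hole; the gap between that ideal and the reported hole concentration hints at partial electrical activation.

```python
# Sanity check on the reported doping numbers (illustrative assumption:
# each substitutional Ga acceptor would ideally contribute one hole).
a_cm = 5.658e-8                    # Ge lattice constant in cm
ge_density = 8 / a_cm**3           # 8 atoms per diamond-cubic unit cell
ga_frac = 0.179                    # reported Ga substitution fraction
ga_density = ga_frac * ge_density  # Ga atoms per cm^3

holes_reported = 4.15e21           # reported hole concentration, cm^-3
activation = holes_reported / ga_density
print(f"Ge atomic density: {ge_density:.2e} cm^-3")
print(f"Ga density: {ga_density:.2e} cm^-3")
print(f"Implied electrical activation: {activation:.0%}")
```

The implied activation of roughly half is plausible for hyper-doped films, where not every dopant sits on a substitutional site or escapes compensation, but the exact defect physics is beyond this back-of-envelope check.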
 
From the vantage of high-performance computing (HPC) and supercomputing infrastructure, the implications of a semiconductor material that also superconducts are enticing:
  • If one could integrate superconducting circuits with semiconducting chip infrastructure, one might reduce resistive energy losses, perhaps enabling faster interconnects, denser cryogenic processors, or more efficient quantum-accelerated hardware.
  • The fact that the base material is germanium (Ge), which already features in advanced semiconductor processes, prompts optimism about compatibility with existing fabrication pipelines and with chip-scale superconducting/semiconducting hybrids.
  • Lower energy dissipation is especially significant for supercomputing centers, where power and cooling are major cost/constraint factors.
However, and importantly, the paper and associated press material also reveal significant limitations, which a skeptical observer must highlight:
  1. A very low operating temperature of 3.5 K presents a significant challenge. While superconducting quantum circuits can function at millikelvin or low kelvin temperatures, the need for cryogenics remains demanding. For conventional supercomputing hardware, which typically operates at around 300 K or even with moderate cryogenic cooling, the practical application of this technology is uncertain.
  2. Thin-film, experimental material: The reported material is an epitaxial thin film grown under highly controlled conditions (MBE) with extreme doping levels and careful crystallographic ordering. Scaling this to large-area wafers, reliable yields, multilayer integration, packaging, and thermal/cryogenic infrastructure is non-trivial.
  3. Limited performance metrics beyond superconductivity: The paper reports the existence of superconductivity, but does not (at least in the abstract) provide data on other performance metrics relevant to supercomputing: e.g., critical current densities, magnetic-field resilience, junction behaviour, switching speeds, coherence/phase noise, integration with semiconducting logic, thermal cycling, and long-term reliability.
  4. Integration and interface issues: The promise of “superconductor–semiconductor platform” hinges on clean interfaces and controlled doping, but real supercomputing systems require complex multilayer interconnects, packaging, and rugged environments. Translating lab-scale thin films into full system components is a long road.
  5. Scope inflation risk: The press releases use terms like "scalable," "foundry-ready," and "quantum devices and low-power cryogenic electronics," which seem to overstate the current findings. There's a clear gap between demonstrating superconductivity in a thin film and deploying it in actual supercomputing hardware.
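A quick thermodynamic bound shows why operation at 3.5 K is costly. The Carnot coefficient of performance is the ideal limit; real cryocoolers reach only a few percent of it, so the figure below (assuming heat rejection at 300 K) is a generous lower bound on cooling overhead.

```python
# Ideal (Carnot) cost of removing heat at 3.5 K, rejecting it at 300 K.
# Real cryocoolers achieve only a few percent of this ideal, so actual
# overheads are far larger — this number is a lower bound.
t_cold, t_hot = 3.5, 300.0      # kelvin
cop_carnot = t_cold / (t_hot - t_cold)
watts_in_per_watt_removed = 1 / cop_carnot
print(round(watts_in_per_watt_removed, 1))   # 84.7 W input per W lifted
```

In other words, even an ideal machine spends nearly 85 W at room temperature to remove 1 W of heat at 3.5 K, which frames any claimed "low-power" benefit of superconducting circuits at this temperature.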

To be clear: this work does not yet change the supercomputing landscape. Among the missing pieces:
  • No demonstration of a computing circuit or interconnect built from this material running at supercomputing speeds or under realistic loads.
  • No evidence of a full logic device or even a prototype cryogenic classical logic device using the new material.
  • No cost/footprint or manufacturing path analysis. The material may require exotic fabrication, extreme cooling, or doping regimes impractical for commercial processors or HPC centers.
  • No benchmark against existing superconducting interconnects (e.g., Nb, NbTi, high-Tc materials) or advanced semiconductor interconnects.
  • No demonstration of switching speeds, control logic, or system scalability.

In other words, the headline “semiconductor that superconducts” is accurate, but the leap to “supercomputing revolution” is not yet justified.

Why this still matters, cautiously

Despite the caveats, this research is interesting and worth tracking for these reasons:
  • It demonstrates that superconductivity can be achieved in a group-IV semiconductor environment (germanium) with relatively low disorder, opening a novel materials platform.
  • It could enable new hybrid architectures where semiconducting and superconducting components are integrated more closely on a chip, potentially reducing interconnect parasitics and thermal mismatches.
  • For cryogenic computing architectures (emerging research field: cryo-cooled classical logic, deep-learning at low temperatures, superconducting logic), having a more “familiar” semiconductor substrate might reduce integration complexity compared to exotic superconductors alone.
  • From a materials science standpoint, identifying substitutional Ga in Ge, along with the resulting structural distortion (tetragonal distortion) and high hole concentration, significantly contributes to our understanding of superconductivity in non-metallic materials.

Outlook and key questions for supercomputers

Here are questions supercomputing architects and technology watchers should ask when evaluating this kind of research for future applicability:

  • What is the critical current density (J_c) of this Ge:Ga superconductor, and how does it compare to existing superconductors used for interconnects or logic?
  • How robust is the superconductivity under magnetic fields, temperature cycling, thermal load, and mechanical stress? HPC environments impose non-ideal conditions.
  • Can this material be fabricated at the wafer scale, with high yield, multi-layer connectivity, and compatible with high-volume semiconductor fabrication?
  • Can circuits be built that take advantage of the superconductivity (e.g., superconducting interconnects, logic, sensors) and integrate with classical CMOS/Ge logic in the same chip or module?
  • What is the energy/cost trade-off when including the required cryogenic cooling, packaging, and supporting infrastructure? Does the reduction in resistive loss offset the additional overhead in cooling and system complexity?
  • Do the superconducting transition and operation domain align with the operating conditions of a practical supercomputer module (i.e., cooling budget, maintenance, reliability)?
  • Are there unanticipated material limitations, e.g., dopant clustering over time, degradation, interface mismatches, or manufacturing variability that would hamper mass deployment?

In summary, the announcement from NYU and collaborators represents a noteworthy advance in materials science; the creation of a superconducting thin film from germanium is a significant achievement. However, for the supercomputing field, it is premature to characterize this as a breakthrough that will reshape HPC architectures. Demonstrations of fabrication scale-up, device performance, system integration, cost-effectiveness, and reliability are still needed.
 
Until those pieces fall into place, the hype surrounding "scalable quantum devices" or "foundry-ready superconducting semiconductors" should be tempered with caution. The future may indeed bring hybrid semiconductor-superconductor chips, low-power cryogenic logic, and novel interconnects, but this research represents a significant step, not the final destination.

AI drug discovery models: Physics falls short

In a thought-provoking twist for computational chemistry and biomedicine, researchers at the University of Basel (Switzerland) have uncovered that even the most advanced AI models used for drug design may not truly understand the physics of molecular binding; they appear to be pattern-matching rather than reasoning.

When learning isn’t the same as understanding

The study, reported today, describes how deep-learning "co-folding" models, systems designed to predict how a protein and a potential drug molecule will fit together, fail to uphold basic physical laws when deliberately tested under challenging conditions.
 
In one striking experiment, researchers mutated or blocked binding sites on proteins, or edited ligands so they would no longer bind. Yet, AI models frequently predicted a binding pose anyway, as though the disruption had not occurred. In more than half the cases, the AI output ignored the alterations.
 
The authors argue that these models rely on statistical correlations, the shapes and sequence patterns they’ve observed in training, rather than truly modeling the underlying physics of electrostatics, steric factors (the crowding of atoms), hydrogen bonds, and so on.
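The kind of consistency test described above can be sketched as a small harness. Here `predict_pose` is a hypothetical stand-in for a co-folding model, deliberately written to exhibit the failure mode the study reports (a confident score that ignores a blocked pocket); it is not any real system's API.

```python
# Sketch of an adversarial consistency check in the spirit of the study:
# perturb a binding site and ask whether the model's prediction changes.
# `predict_pose` is a hypothetical stand-in for a co-folding model.

def predict_pose(sequence: str, ligand: str) -> float:
    """Toy stand-in returning a 'binding score'; a pattern-matching
    model may ignore the mutation entirely, as the study observed."""
    return 0.9  # always confident — mimics the reported failure mode

def mutate_binding_site(sequence: str, positions, new_residue="W") -> str:
    """Replace binding-site residues with bulky tryptophans intended
    to sterically block the pocket."""
    seq = list(sequence)
    for p in positions:
        seq[p] = new_residue
    return "".join(seq)

wild_type = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # illustrative sequence
blocked = mutate_binding_site(wild_type, positions=[5, 6, 7])

score_wt = predict_pose(wild_type, ligand="CCO")
score_mut = predict_pose(blocked, ligand="CCO")

# A physically grounded model should lower its score once the pocket
# is blocked; a score that barely moves flags pattern-matching.
suspicious = (score_wt - score_mut) < 0.1
print(suspicious)  # True for this toy model
```

The value of such a harness is not the toy model but the test design: systematically breaking the physics of the input and measuring whether the prediction responds.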

Why this matters for drug discovery

The implications are significant. The promise of AI in drug discovery is enormous: finding new molecules more quickly, predicting how they will bind, and shortening the time to a viable therapeutic. However, as the Basel team notes, if the model doesn't truly understand what makes a ligand bind to a protein, predictions for novel, unseen targets (a key objective) may be unreliable.
 
As Prof. Markus Lill of the University of Basel states, "When they see something completely new, they quickly fall short, but that is precisely where the key to new drugs lies."
 
In other words, models trained on known protein-ligand pairs may perform well "within sample," but when faced with novel challenges, they may revert to "safe guesses" rather than principled predictions. This puts a caveat on many current hype narratives surrounding AI drug design.
 
Key findings include:
  • The deep-learning co-folding models were exposed to adversarial examples, including mutating binding sites, altering ligand charge distributions, and blocking binding pockets.
  • Despite physically implausible or impossible binding configurations (for instance, ligand charged the wrong way, binding site residues replaced by sterically blocking amino acids), the models still often predicted good binding poses.
  • From this, the authors conclude these models do not reliably respect physical constraints (e.g., electrostatics, hydrogen‐bonding networks, steric hindrance), and they fail to generalize when faced with new types of protein/ligand systems.
  • The paper argues for integrating physical and chemical priors into future models, making sure that machine‐learning models are not simply “black-box” pattern matchers, but respect the underlying molecular science.
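One way to integrate the physical priors the authors call for is to add penalty terms to the training loss. The steric-clash penalty below is a hedged, illustrative sketch; the names, the 2.0 Å distance floor, and the loss weighting are assumptions, not the paper's method.

```python
import numpy as np

def steric_clash_penalty(coords, min_dist=2.0):
    """Penalty that grows when any two predicted atoms come closer
    than a physically plausible minimum distance (illustrative prior)."""
    n = len(coords)
    penalty = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            if d < min_dist:
                penalty += (min_dist - d) ** 2
    return penalty

def total_loss(data_loss, coords, weight=1.0):
    """Combine the usual data-driven loss with the physics prior."""
    return data_loss + weight * steric_clash_penalty(coords)

# Two atoms 0.5 Å apart violate the 2.0 Å floor; the prior penalizes it.
clashing = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
ok = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
print(total_loss(0.1, clashing) > total_loss(0.1, ok))  # True
```

Terms like this make physically impossible outputs expensive during training, nudging a model away from "safe guesses" that ignore sterics, electrostatics, or hydrogen-bonding geometry.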

A cautious but curious tone on where to go next

The news from Basel isn’t a refutation of AI in drug research; rather, it is a clarion call for more nuance and care. AI models have already changed what’s possible: predicting protein folds, accelerating docking predictions, and broadly expanding the realm of computational chemistry. Yet this research suggests there’s still an important gap between “predicting what we know” and “reasoning about what we don’t know.”

Going forward, several directions are ripe for exploration:
  • Hybrid modeling: combining data‐driven deep learning with traditional physics‐based modeling (electrostatics, molecular mechanics, quantum effects) might strengthen reliability.
  • Benchmarking on novel/rare systems: rather than just “hold‐out” samples similar to the training data, models should be challenged with radically new proteins or ligands to test generalization.
  • Transparent AI: understanding not just the output but the reasoning of models (why did they predict binding despite physically implausible input?).
  • Experimental validation remains crucial: even the most sophisticated prediction needs lab and computational cross-checks that consider real chemistry and physics.

Bottom line

In summary, the University of Basel study presents a compelling assessment: while current AI models demonstrate remarkable capabilities, they may primarily rely on patterns derived from historical data rather than accurately modeling molecular interactions. This gap matters most in drug discovery, where novel targets and unforeseen chemical phenomena are commonplace. Bridging it is essential, and integrating machine learning with physics-based insight will be key to developing innovative therapeutics.