A new world emerges in our cosmic backyard

Astronomers have revealed exciting new evidence of a Saturn-mass planet orbiting Alpha Centauri A, the nearest Sun-like star to Earth, thanks to the advanced capabilities of NASA's James Webb Space Telescope and the analytical prowess of supercomputer simulations.

The planet candidate, concealed amid the brilliance of two glowing suns, was detected using Webb's Mid-Infrared Instrument (MIRI) with a coronagraphic mask to block out starlight. Researchers then painstakingly extracted the planet's signal through advanced image processing and modeling. Capturing the light, however, is only part of the story; supercomputers provided the essential support.
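
The published pipeline is far more sophisticated, but the core idea of coronagraphic post-processing, subtracting a model of the star's light so that a faint point source stands out in the residuals, can be sketched in a few lines. Everything below (the synthetic frames, the least-squares scaling, the injected "planet") is an illustrative assumption rather than the team's actual MIRI analysis.

```python
import numpy as np

# Illustrative reference-star PSF subtraction: scale a reference image of the
# star to the science frame, subtract it, and look for a faint point source in
# the residuals. The synthetic frames and injected "planet" below are
# stand-ins; the real MIRI coronagraphic analysis is far more involved.

def subtract_stellar_psf(science_img: np.ndarray, reference_img: np.ndarray) -> np.ndarray:
    """Least-squares scale the reference to the science frame and subtract it."""
    scale = np.sum(science_img * reference_img) / np.sum(reference_img ** 2)
    return science_img - scale * reference_img

def detection_snr(residual: np.ndarray, yx: tuple[int, int], box: int = 5) -> float:
    """Crude signal-to-noise of a candidate at pixel (y, x): peak flux in a
    small box divided by the robust scatter of the residual image."""
    y, x = yx
    patch = residual[y - box:y + box + 1, x - box:x + box + 1]
    noise = 1.4826 * np.median(np.abs(residual - np.median(residual)))
    return float(patch.max() / noise)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for a coronagraphic science frame and a reference frame
    psf = np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(axis=0) / 50.0)
    reference = 1000 * psf + rng.normal(0, 1, (64, 64))
    science = 1000 * psf + rng.normal(0, 1, (64, 64))
    science[40, 45] += 20.0  # inject a faint "planet" well off the star
    residual = subtract_stellar_psf(science, reference)
    print(f"candidate S/N ~ {detection_snr(residual, (40, 45)):.1f}")
```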

Simulations: Illuminating an Invisible World

After Webb's initial observation identified a source more than 10,000 times fainter than Alpha Centauri A, follow-up images failed to recover the detection. To understand why, scientists turned to high-powered simulations: digital models that trace hypothetical orbits, analyze how light behaves around bright stars, and interpret the optical signatures left by a planet too faint to be seen clearly.

These simulations, conducted on supercomputers designed to handle massive datasets, enabled researchers to rule out artifacts, confirm that the signal was consistent with that of a planet, and explain why it might disappear from view in subsequent observations. Such modeling is critical; by virtually recreating the interplay of light, shadow, and movement around the star, researchers could confidently support their candidate's existence.
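
To make the orbital argument concrete, here is a toy Monte Carlo experiment in the same spirit: sample many plausible orbits consistent with a first detection and count how often the planet would land too close to the star to be seen at a later epoch. The orbital parameters, stellar mass, one-year gap, and 0.2-arcsecond blind zone are assumptions chosen for illustration, not values from the study.

```python
import numpy as np

# Toy orbital-visibility experiment: sample plausible circular orbits and ask
# how often the planet would sit inside the coronagraph's blind zone at a
# later observation date. Every number here is an illustrative assumption.

DISTANCE_PC = 1.34            # approximate distance to Alpha Centauri
INNER_WORKING_ANGLE = 0.2     # assumed coronagraph blind zone, arcseconds

rng = np.random.default_rng(42)
n_orbits = 100_000

# Sample semi-major axes (AU), orbital phases at the first epoch, and inclinations
sma = rng.uniform(1.0, 3.0, n_orbits)
phase0 = rng.uniform(0.0, 2 * np.pi, n_orbits)
cos_i = rng.uniform(0.0, 1.0, n_orbits)

# Kepler's third law (periods in years) for an assumed ~1.1 solar-mass star
period_yr = np.sqrt(sma ** 3 / 1.1)

# Advance each orbit by an assumed 1-year gap to the follow-up epoch
phase1 = phase0 + 2 * np.pi * 1.0 / period_yr

# Project onto the sky plane and convert to angular separation in arcseconds
x = sma * np.cos(phase1)
y = sma * np.sin(phase1) * cos_i
sep_arcsec = np.sqrt(x ** 2 + y ** 2) / DISTANCE_PC

hidden = np.mean(sep_arcsec < INNER_WORKING_ANGLE)
print(f"fraction of sampled orbits hidden at the follow-up epoch: {hidden:.0%}")
```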

The Promise of What Lies Ahead

This combination of Webb’s observational power and supercomputer modeling expands our horizons. If confirmed, this gas giant orbiting within the habitable zone of one of our nearest stellar neighbors would represent a significant advancement, achieved through the synergy of cutting-edge engineering and computational capabilities.

As Caltech graduate student Aniket Sanghi notes, confirming this discovery would signify “a new milestone for exoplanet imaging,” driven not only by Webb’s observations but also by the powerful computational tools that bring clarity to complex data.

In an era where every pixel presents a puzzle and every faint dot may represent a new world, it is the collaboration between telescopes and supercomputers that lights our path forward.

Supermicro's sales increased by 7%, but profits fell due to rising costs

Concerns Mount Despite Strong AI Demand and Revenue Growth

Super Micro Computer, Inc. has reported a 7% increase in revenue, exceeding $5.8 billion in Q4 FY25 and achieving an impressive $22 billion for the full year. However, underlying this growth are several cautionary signs that have investors and analysts concerned.

Despite full-year sales rising 47% year over year, net income fell to $195 million in Q4, a sharp decline from $297 million in the same quarter last year. Diluted earnings per share dropped more than 32%, landing at just $0.31. For a company positioned to ride the AI infrastructure boom, these shrinking profits are raising eyebrows.

In its earnings release, Supermicro credited strong demand from neoclouds, cloud service providers, sovereign entities, and enterprise customers as key revenue drivers. The company's Data Center Building Block Solutions (DCBBS) have been particularly well received for their plug-and-play efficiency in AI-ready environments.

However, the financial details reveal that growth has come at a cost. Operating expenses soared to $1.18 billion for the year, up from $851 million in FY24. Stock-based compensation increased over 35% year-over-year to $314 million, and interest expenses tripled to nearly $60 million, largely due to the company's aggressive financing strategy, which included the issuance of $3 billion in new convertible notes.

These rising costs and tightening gross margins (down to 9.5% GAAP in Q4 from 10.2% a year ago) have triggered concern among financial observers.

Stock Market Reaction

According to a CNBC report on August 6, investors responded with caution. While shares initially surged during trading hours, enthusiasm waned after the earnings call revealed narrowing profitability and increasing debt obligations. The stock had skyrocketed over 200% in the past year due to the promise of AI-driven growth, but investors are now questioning whether Supermicro can sustain both rapid expansion and profitable earnings.

Supermicro anticipates Q1 FY26 net sales between $6.0 billion and $7.0 billion, with diluted GAAP earnings per share possibly as low as $0.30. While top-line growth may persist, the expected EPS range suggests continued pressure on profitability.

The company currently carries $4.8 billion in convertible debt—almost equal to its cash reserves of $5.2 billion. Although Supermicro insists this is part of a strategic expansion to meet global demand, it also exposes the company to risks in a volatile macroeconomic environment characterized by fluctuating interest rates and inflation.

The Bigger Picture

Supermicro is making a significant bet on hyperscale demand for AI, cloud, and edge computing. CEO Charles Liang remains optimistic, stating that the company is “on track to grow its large-scale datacenter customers from four in FY25 to six to eight in FY26.”

However, the numbers illustrate a story of razor-thin profitability amid a quest for aggressive growth. The company's Adjusted EBITDA margin declined to just 5.9% in Q4, down from 7.2% the previous year—a troubling trend given its ambitious growth plans.

As the AI gold rush continues, Supermicro has positioned itself at the forefront of next-gen computing infrastructure. The pressing question for investors is how long the company can maintain this rapid pace of growth before profit erosion undermines its potential rewards.

Bottom Line

Supermicro's results indicate a company thriving on revenue growth but burdened by rising costs, shrinking margins, and increasing debt. While its investment in AI scale seems to be paying off in sales for now, Wall Street's patience may wear thin unless profit metrics improve, however optimistic the company remains.

CrowdStrike launches Signal: AI technology unveils hidden threats

This week at Black Hat USA 2025, CrowdStrike introduced Signal, a groundbreaking self-learning detection engine that represents a significant advancement in AI-driven cybersecurity. Positioned to transform how organizations identify sophisticated threats, Signal aims to detect stealthy intrusions long before they escalate, generating considerable excitement within security circles.

Understanding Signal: Learning normal behavior and spotting subtle anomalies

At the core of Signal is a series of statistical time-series models that continuously learn the normal behavior of each user, host, and process over time and across systems. Rather than relying on static rules, Signal adapts in real time, flagging even the slightest deviations that could indicate malicious activity.
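
CrowdStrike has not published Signal's internals, but the general pattern of per-entity behavioral baselining can be illustrated with a simple online detector: keep running statistics for each (entity, metric) pair and flag observations far outside the learned norm. The z-score threshold, warm-up period, and metric names below are assumptions for the sketch, not details of the actual engine.

```python
import random
from collections import defaultdict
from dataclasses import dataclass
from math import sqrt

# Generic per-entity behavioral baselining: maintain running statistics for
# each (entity, metric) pair and flag values far outside the learned norm.
# This is an illustrative sketch, not CrowdStrike's actual model.

@dataclass
class RunningStats:
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0  # sum of squared deviations (Welford's online algorithm)

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def std(self) -> float:
        return sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

class BaselineDetector:
    """Learns per-entity baselines online and scores new observations."""

    def __init__(self, z_threshold: float = 4.0, warmup: int = 50):
        self.stats: dict[tuple[str, str], RunningStats] = defaultdict(RunningStats)
        self.z_threshold = z_threshold  # standard deviations that count as anomalous
        self.warmup = warmup            # observations needed before scoring begins

    def observe(self, entity: str, metric: str, value: float) -> bool:
        """Update the baseline for (entity, metric); return True if anomalous."""
        s = self.stats[(entity, metric)]
        anomalous = (
            s.n >= self.warmup
            and s.std > 0
            and abs(value - s.mean) / s.std > self.z_threshold
        )
        s.update(value)
        return anomalous

if __name__ == "__main__":
    random.seed(0)
    det = BaselineDetector()
    # Typical behavior: host-42 spawns roughly five child processes per minute
    for _ in range(200):
        det.observe("host-42", "child_processes_per_min", random.gauss(5.0, 1.0))
    # A sudden burst stands out against the learned baseline
    print(det.observe("host-42", "child_processes_per_min", 80.0))  # True
```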

By correlating these low-signal events over time, Signal generates high-confidence leads grouped into sequences of suspicious actions that cut through the noise of regular activity. This capability enhances the speed of investigation and response, significantly reducing alert fatigue for security teams.
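
The correlation step can likewise be sketched generically: individually weak signals for the same entity are grouped into time-ordered sequences, and only sequences that accumulate several distinct suspicious behaviors within a window are promoted to a lead. The six-hour window and two-behavior threshold here are arbitrary assumptions, not CrowdStrike's logic.

```python
from dataclasses import dataclass

# Illustrative correlation step: individually weak signals for the same entity
# are grouped into time-ordered sequences, and only sequences that accumulate
# enough distinct suspicious behaviors within a window become a "lead".
# The window length and two-behavior threshold are arbitrary assumptions.

@dataclass(frozen=True)
class Event:
    timestamp: float  # seconds since epoch
    entity: str       # e.g. a host or user identifier
    behavior: str     # e.g. "lolbin_execution", "odd_tmp_write"

def correlate_into_leads(events: list[Event],
                         window_s: float = 6 * 3600,
                         min_distinct: int = 2) -> list[list[Event]]:
    """Group per-entity events into time windows; keep multi-behavior sequences."""
    by_entity: dict[str, list[Event]] = {}
    for ev in sorted(events, key=lambda e: e.timestamp):
        by_entity.setdefault(ev.entity, []).append(ev)

    leads: list[list[Event]] = []
    for entity_events in by_entity.values():
        current: list[Event] = []
        for ev in entity_events:
            if current and ev.timestamp - current[0].timestamp > window_s:
                if len({e.behavior for e in current}) >= min_distinct:
                    leads.append(current)
                current = []
            current.append(ev)
        if len({e.behavior for e in current}) >= min_distinct:
            leads.append(current)
    return leads

if __name__ == "__main__":
    evts = [
        Event(0,    "host-7", "lolbin_execution"),
        Event(1800, "host-7", "odd_tmp_write"),
        Event(3600, "host-9", "odd_tmp_write"),  # lone event never becomes a lead
    ]
    for lead in correlate_into_leads(evts):
        print(lead[0].entity, [e.behavior for e in lead])
```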

Identifying the invisible: Connecting the dots

While traditional tools may see each of these behaviors in isolation and judge them benign, Signal makes connections among them. It identifies the use of living-off-the-land tools, unusual process executions, or atypical temporary-directory activity, behaviors that may appear harmless on their own but form part of a larger, more concerning pattern over hours or days.

This layered, temporal intelligence transforms fragmented events into actionable threat leads, enabling defenders to act early, often well before any compromise becomes apparent.

CrowdStrike's cloud-native strength

Signal is integrated into the Falcon platform and supported by the CrowdStrike Security Cloud. It operates at an immense scale, analyzing billions of events in each customer environment daily, yet distills this information into a few high-fidelity leads that can be acted upon.

CrowdStrike’s AI-native architecture allows for swift deployment and effective detection from day one without the need for heavy agents or complex setups.

The role of supercomputing: Enhancing AI detection at scale

While Signal utilizes distributed cloud infrastructure, the development and ongoing refinement of these advanced behavioral models rely heavily on modern high-performance computing (HPC). Research indicates that cutting-edge AI systems—particularly those focused on anomaly detection and time-series learning—benefit immensely from being trained at scale on supercomputers equipped with tens of thousands of cores and GPU clusters.

In fields like gravitational-wave research and anomaly detection, AI models are trained on supercomputing resources utilizing thousands of GPUs, later optimized for swift inference across large datasets using specialized tools like NVIDIA TensorRT.

By developing and testing detection models on such powerful infrastructure, cybersecurity innovators can enhance engines like Signal to become faster, more accurate, and more adaptive, enabling them to manage billions of events in nearly real-time.

Why It Matters

The combination of AI and supercomputing creates a powerful feedback loop:

- Model Training at Scale: Utilizing HPC to identify subtle patterns and behaviors.

- Real-Time Inference at the Edge: Operating in production environments on lightweight Falcon agents, allowing for instant decision-making.

- Continuous Feedback: Models continuously update, learning new baselines as organizational environments evolve.

This synergy allows CrowdStrike to deliver a detection engine that is both intelligent and operationally efficient.

Looking forward: A safer cyber future

Signal represents a transformative step toward an AI-native, proactive defense strategy, one that anticipates stealthy threats rather than simply reacting to them. As adversaries employ increasingly sophisticated tactics to evade detection across time and systems, tools like Signal provide a promising defense, enabling security teams to recognize the early signs of an attack, piece together patterns, and respond rapidly.

Continued progress in supercomputing and cloud AI will further empower detection engines to stay ahead of attackers, paving the way toward a future where cyber resilience is anchored in intelligence, scale, and speed.

ASU researchers uncover record-setting gigantic lightning flash

A single lightning flash from a thunderstorm nearly a decade ago, stretching 515 miles across the Great Plains from eastern Texas to nearly Kansas City, has set a new world record for lightning length, a discovery made possible by an advanced network of lightning-mapping sensors orbiting above Earth's surface.

In a recent study led by scientists at Arizona State University (ASU) and published in the Bulletin of the American Meteorological Society, the team re-examined satellite data from October 2017. They identified an astonishing megaflash extending 38 miles longer than the previous record set in April 2020.

From Antennas on Earth to Lightning Mappers in Orbit

Traditionally, lightning networks have relied on ground-based antenna arrays scattered across regions to locate strikes. However, this megaflash could only be fully mapped using space-based sensors. NOAA’s GOES-16 satellite, the first geostationary satellite equipped with a lightning mapper, joins similar instruments operated by Europe and China, enabling the detection of lightning from orbit.

These lightning mappers function like ultra-precise antennas in space. Each time a flash occurs, the sensors record when and where it begins, with millisecond precision, and trace its horizontal extent across continents.

Weaving Together Petabytes of Flash Data

The volume of data is staggering. GOES-16 detects about one million flashes each day, with each flash logged by time, location, and geographic extent. This massive stream of data must be continuously processed to identify the rare megaflashes, which are defined as exceeding approximately 100 kilometers (60 miles) in length.

Michael Peterson at Georgia Tech, the lead author of the published report, explains that modern data-processing techniques are essential. They sift through the vast number of ordinary lightning flashes, connecting fragmented pulses that belong to the same extended stroke. Only then can researchers reconstruct the full scale of these flashes, which can span hundreds of miles.
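
In spirit, that processing resembles the sketch below: time-ordered optical pulses are chained into a single flash whenever consecutive pulses are close in both time and space, and the flash's "length" is the largest great-circle separation between any two of its pulses. The gap and distance thresholds here are arbitrary assumptions; operational megaflash algorithms are considerably more sophisticated.

```python
import math
from dataclasses import dataclass

# Illustrative stitching of individual optical pulses into a single flash and
# measurement of its horizontal extent. The proximity thresholds below are
# arbitrary assumptions, not those used by the actual megaflash algorithms.

EARTH_RADIUS_KM = 6371.0

@dataclass(frozen=True)
class Pulse:
    t_ms: float  # time in milliseconds
    lat: float
    lon: float

def haversine_km(a: Pulse, b: Pulse) -> float:
    """Great-circle distance between two pulses in kilometres."""
    p1, p2 = math.radians(a.lat), math.radians(b.lat)
    dphi = math.radians(b.lat - a.lat)
    dlmb = math.radians(b.lon - a.lon)
    h = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def stitch_flashes(pulses: list[Pulse], max_gap_ms: float = 330.0,
                   max_jump_km: float = 30.0) -> list[list[Pulse]]:
    """Chain time-ordered pulses into flashes when consecutive pulses are close
    in both time and space; start a new flash otherwise."""
    flashes: list[list[Pulse]] = []
    for p in sorted(pulses, key=lambda x: x.t_ms):
        if flashes and (p.t_ms - flashes[-1][-1].t_ms) <= max_gap_ms \
                and haversine_km(p, flashes[-1][-1]) <= max_jump_km:
            flashes[-1].append(p)
        else:
            flashes.append([p])
    return flashes

def flash_extent_km(flash: list[Pulse]) -> float:
    """Maximum pairwise separation, the 'length' used to rank megaflashes."""
    return max((haversine_km(a, b) for a in flash for b in flash), default=0.0)

if __name__ == "__main__":
    # A toy chain of pulses marching east across a storm system
    toy = [Pulse(t_ms=i * 100.0, lat=33.0, lon=-97.0 + 0.2 * i) for i in range(40)]
    for f in stitch_flashes(toy):
        print(f"{len(f)} pulses, extent ~ {flash_extent_km(f):.0f} km")
```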

Networks of Antennas at Multiple Scales

Imagine a fleet of lightning mappers in orbit, with GOES-16 in geostationary orbit joined by counterparts operated by European and Chinese agencies. Together, they create a continuous, overlapping network of detection. Because these satellites cover most storm regions globally, even sprawling flashes can be captured in great detail.

While traditional ground networks are still useful for finer localization and cross-validation, the real breakthrough lies in the ability to measure continent-sized flashes from space.

Why It Matters—in Curiosity and Science

Fewer than 1% of thunderstorms produce megaflashes, and the parent storms that do typically persist for more than 14 hours and cover areas the size of New Jersey. Capturing these rare phenomena allows scientists to explore storm dynamics and extreme weather from a new perspective.

Randall Cerveny, an ASU professor and rapporteur for weather and climate extremes at the World Meteorological Organization, states, “It is likely that even greater extremes still exist.” As satellite systems advance and data archives grow, our ability to detect increasingly longer lightning events continues to improve.

In Summary

Satellites act as a network of space-based antennas, capturing lightning with millisecond precision and continental coverage. Advanced data-processing pipelines analyze millions of flash events each day, enabling the reconstruction of rare megaflashes that stretch across hundreds of miles. Ground networks still play a role, but the true advancement lies in the synergy of multiple satellites, assisting researchers in finding and analyzing the planet’s most extreme electrical events.

ASU’s work illustrates how innovations in detection and processing are redefining the limits of what we thought lightning could achieve—stretching across nations.

NIH researchers develop GeneAgent AI for gene-set analysis

Researchers at the National Institutes of Health (NIH) have created an artificial intelligence (AI) agent called GeneAgent that enhances the accuracy and informativeness of gene set analysis. This AI is powered by a large language model (LLM) and improves upon existing systems by providing more accurate and detailed descriptions of biological processes and their functions.

GeneAgent cross-checks its initial predictions, also known as claims, for accuracy against information stored in established, expert-curated databases. It then generates a verification report that details its successes and failures. This AI agent aids researchers in interpreting high-throughput molecular data and identifying relevant biological pathways or functional modules, which can deepen our understanding of how various diseases and conditions impact groups of genes both individually and collectively.

While AI-generated content is produced by LLMs trained on vast amounts of text data from the internet, these models are not designed to verify facts. As a result, AI-generated content can sometimes be false, misleading, or fabricated, a phenomenon known as AI hallucination. LLMs can also exhibit circular reasoning, fact-checking their outputs against their own training data, which can reinforce confidence in incorrect information.

Addressing AI hallucinations is crucial when using LLM tools for gene set analysis, which involves generating collective functional descriptions of grouped genes and their potential interactions. Previous studies utilizing LLMs to answer genomic questions or summarize biological processes did not adequately address the issue of hallucinations in generated content.

GeneAgent tackles this challenge by independently comparing its claims against established knowledge in external expert-curated databases. The research team initially tested GeneAgent on 1,106 gene sets sourced from existing databases that had known functions and process names. For each gene set, GeneAgent first generated an initial list of functional claims. It then used its self-verification module to cross-check these claims against the curated databases and produced a verification report indicating whether each claim was supported, partially supported, or refuted.
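
As a rough schematic of that generate-then-verify loop, the sketch below compares the genes a claim attributes to a function against the genes a curated resource actually annotates with that function, then labels the claim supported, partially supported, or refuted. The toy database, gene symbols, and overlap rules are hypothetical placeholders, not GeneAgent's actual verification logic or data sources.

```python
from dataclasses import dataclass
from typing import Callable

# Schematic generate-then-verify loop. The hard-coded "claims" stand in for an
# LLM's proposed functional claims, and the toy database stands in for queries
# against expert-curated resources. The labels mirror the supported /
# partially supported / refuted categories; everything else is hypothetical.

@dataclass
class Verdict:
    claim: str
    label: str           # "supported", "partially supported", or "refuted"
    evidence: list[str]  # genes the curated source agrees on

def verify_claim(claim_genes: set[str], claim_function: str,
                 lookup: Callable[[str], set[str]]) -> Verdict:
    """Compare the genes a claim attributes to a function against the genes a
    curated database actually annotates with that function."""
    curated = lookup(claim_function)
    overlap = claim_genes & curated
    if not overlap:
        label = "refuted"
    elif claim_genes <= curated:
        label = "supported"
    else:
        label = "partially supported"
    return Verdict(claim=f"{sorted(claim_genes)} -> {claim_function}",
                   label=label, evidence=sorted(overlap))

if __name__ == "__main__":
    # Toy curated database: function name -> genes annotated with it
    toy_db = {"DNA damage response": {"TP53", "BRCA1", "ATM"}}

    def lookup(fn: str) -> set[str]:
        return toy_db.get(fn, set())

    # Claims as an LLM might propose them for a gene set (hypothetical)
    claims = [({"TP53", "BRCA1"}, "DNA damage response"),
              ({"TP53", "GAPDH"}, "DNA damage response")]
    for genes, fn in claims:
        v = verify_claim(genes, fn, lookup)
        print(v.label, "|", v.claim, "| evidence:", v.evidence)
```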

To evaluate the accuracy of its self-verification process, the researchers enlisted two human experts to manually review 10 randomly selected gene sets, comprising a total of 132 claims. The experts assessed whether GeneAgent's self-verification reports were correct, partially correct, or incorrect. Their analysis revealed that 92% of the decisions made by GeneAgent were accurate, demonstrating high performance in self-verification, particularly when compared to GPT-4. The experts confirmed the model's effectiveness in reducing hallucinations and producing more reliable analytical narratives.

The research team also explored real-world applications of GeneAgent using animal-model gene sets. When tested on seven novel gene sets derived from mouse melanoma cell lines, GeneAgent provided valuable insights into the functions of specific genes, potentially leading to the discovery of new drug targets for diseases such as cancer.

While LLMs like GeneAgent are still constrained by the information they can access and their inability to reason like humans, GeneAgent's self-driven fact-checking capability shows significant promise in addressing AI hallucinations.