NASA’s DSOC is composed of a flight laser transceiver attached to Psyche and a ground system that will send and receive laser signals. Clockwise from top left: the Psyche spacecraft with DSOC attached, flight laser transceiver, downlink ground station at Palomar, and downlink detector.

NASA demos deep space optical communications

NASA's Deep Space Optical Communications (DSOC) experiment will demonstrate laser-based (optical) communications from as far away as Mars. The technology spans equipment in space and on Earth, including a flight laser transceiver, two ground telescopes, and a high-power near-infrared laser transmitter. Despite the challenges of faint laser photon signals and a light-travel lag of over 20 minutes at the farthest distance, the experiment will be a groundbreaking test of transmitting data from deep space at higher rates. DSOC will launch on Oct. 12 as part of NASA's Psyche mission. By proving out key technologies, it will pave the way for future missions to Mars that could return denser science data and even stream video from the Red Planet.

NASA is testing a new technology called DSOC that uses lasers to increase the data rates achievable from deep space. Until now, NASA has relied solely on radio waves to communicate with missions traveling beyond the Moon. Much as fiber optics replaced old telephone lines on Earth, optical communications promises far higher data rates throughout the solar system, with 10 to 100 times the capacity of the state-of-the-art radio systems spacecraft use today. That capacity will support future human and robotic exploration missions as well as higher-resolution science instruments.

The tech demo involves equipment both in space and on Earth. While NASA's Psyche spacecraft relies on traditional radio communications for mission operations, the DSOC flight laser transceiver, an experiment attached to the spacecraft, features both a near-infrared laser transmitter and a sensitive photon-counting camera. The transceiver is designed to send high-rate data to Earth and to receive a laser beam sent from Earth, but it is just one part of the demonstration. On the ground, the 200-inch (5.1-meter) Hale Telescope at Caltech's Palomar Observatory in San Diego County, California, has been equipped with a special superconducting high-efficiency detector array to collect the data sent by the flight transceiver.

Since no dedicated infrastructure for deep space optical communications exists on Earth, two ground telescopes have been upgraded to communicate with the flight laser transceiver. The Optical Communications Telescope Laboratory, operated by NASA's Jet Propulsion Laboratory in Southern California, has been fitted with a high-power near-infrared laser transmitter for the demonstration. The transmitter will deliver a modulated laser signal to DSOC's flight transceiver and serve as a beacon, or pointing reference, so the returning laser beam can be aimed accurately back at Earth.

DSOC faces unique challenges as it aims to transmit data at a high rate over a distance of up to 240 million miles (390 million kilometers) during the first two years of Psyche's six-year journey to the asteroid belt. As Psyche travels farther from Earth, the laser photon signal weakens, making the data increasingly difficult to decode. The photons also take longer to reach their destination, producing a lag of over 20 minutes at the tech demo's farthest distance. Because the positions of Earth and the spacecraft keep changing while the photons are in transit, the DSOC ground and flight systems must point to where the ground receiver (at Palomar) and flight transceiver (on Psyche) will be when the photons arrive.
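The 20-minute lag follows directly from the distance quoted above. A quick sketch (the 390-million-kilometer figure is from the article; the speed of light is a standard constant):

```python
# One-way light-travel time from Psyche to Earth at DSOC's maximum demo range.
SPEED_OF_LIGHT_KM_S = 299_792.458   # vacuum speed of light, km/s
max_range_km = 390e6                # farthest tech-demo distance (from the article)

lag_seconds = max_range_km / SPEED_OF_LIGHT_KM_S
lag_minutes = lag_seconds / 60

print(f"One-way lag: {lag_minutes:.1f} minutes")  # roughly 21.7 minutes
```

At that range a round-trip exchange takes over 40 minutes, which is why the pointing systems must aim ahead rather than react in real time.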

A suite of advanced technologies must work together to keep the lasers precisely on target and deliver high-bandwidth data from deep space. Precise pointing of the flight laser transceiver and the ground-based laser transmitter is crucial, comparable to hitting a moving dime from a mile away. The transceiver must therefore be isolated from spacecraft vibrations that could nudge the laser beam off target. Initially, Psyche will orient the flight transceiver toward Earth, while autonomous systems on the transceiver, assisted by the Table Mountain uplink beacon laser, steer the downlink laser signal to Palomar Observatory.
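The dime-at-a-mile comparison can be put in angular terms. A rough sketch (the dime diameter and mile length are standard values, not from the article):

```python
import math

dime_diameter_m = 0.01791   # US dime diameter, about 17.91 mm
distance_m = 1609.344       # one statute mile in meters

# Small-angle approximation: subtended angle ≈ size / distance
angle_rad = dime_diameter_m / distance_m
angle_microrad = angle_rad * 1e6
angle_arcsec = math.degrees(angle_rad) * 3600

print(f"~{angle_microrad:.1f} microradians (~{angle_arcsec:.1f} arcseconds)")
```

A pointing budget of order ten microradians (a couple of arcseconds) is why even tiny spacecraft vibrations must be isolated from the transceiver.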

JPL has developed a cryogenically cooled superconducting nanowire photon-counting array receiver, which is integrated into the Hale Telescope. The instrument is equipped with high-speed electronics that record the arrival time of single photons, allowing the signal to be decoded. The DSOC team has also developed new signal-processing techniques to extract information from the weak laser signals transmitted over tens to hundreds of millions of miles.

NASA has been pursuing optical communications projects that aim to revolutionize communication in space. In 2013, the Lunar Laser Communications Demonstration set record uplink and downlink data rates between Earth and the Moon. In 2021, the Laser Communications Relay Demonstration launched to test high-bandwidth optical relay capabilities from geostationary orbit, enabling spacecraft to communicate with Earth even without a direct line of sight. And last year, NASA's TeraByte InfraRed Delivery system achieved the highest-ever data rate from a satellite in low-Earth orbit to a ground-based receiver.

The latest project, called DSOC, is taking optical communications beyond the Moon, paving the way for high-bandwidth communications in deep space. This has the potential to lead to high-data-rate communications that can support streaming and high-definition imagery. This technology could be crucial in enabling humanity's next giant leap when NASA sends astronauts to Mars.

Energy consumption of AI could be equivalent to that of a small country

AI could develop a large energy footprint, potentially exceeding the power demands of some countries. And efficiency gains can backfire: improvements in AI efficiency tend to increase demand, a rebound effect known as Jevons' Paradox. Google processes up to 9 billion searches a day; if every search used AI, the company would need about 29.2 TWh of power a year, equal to Ireland's annual electricity consumption.

Artificial intelligence (AI) promises to help coders code faster, make daily tasks less time-consuming, and improve driving safety. However, a recent commentary in the journal Joule by the founder of Digiconomist warns that the technology's mass adoption could come with a large energy footprint, one that could eventually exceed the power demands of some countries.

The author of the commentary, Alex de Vries, a Ph.D. candidate at Vrije Universiteit Amsterdam, states "Looking at the growing demand for AI services, it’s very likely that energy consumption related to AI will significantly increase in the coming years."

Generative AI, which produces text, images, or other data, has grown rapidly since 2022, led by tools such as OpenAI's ChatGPT. Training these tools requires large amounts of data and is energy-intensive: Hugging Face, an AI company based in New York, reported that its multilingual text-generating AI tool consumed about 433 megawatt-hours (MWh) during training, enough to power 40 average American homes for a year.
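The homes comparison checks out against the reported training figure. A back-of-the-envelope sketch (the roughly 10.8 MWh annual consumption per home is an assumed U.S. average, not a figure from the article):

```python
training_energy_mwh = 433      # reported training consumption (from the article)
avg_home_annual_mwh = 10.8     # assumed average annual electricity use of a US home

homes_powered_for_a_year = training_energy_mwh / avg_home_annual_mwh
print(f"~{homes_powered_for_a_year:.0f} homes powered for a year")  # ~40
```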

Furthermore, AI's energy demand does not end with training. De Vries's analysis shows that every time the tool generates text or an image from a prompt, it uses a significant amount of computing power, and hence energy. ChatGPT, for example, may consume 564 MWh of electricity every day.

Companies worldwide are working to make AI hardware and software more efficient and less energy-intensive, but greater machine efficiency often increases demand. According to de Vries, such technological advances lead to a net increase in resource use, a phenomenon known as Jevons' Paradox.

Making these tools more efficient and accessible can allow more applications, and more people, to use them, de Vries says. For instance, Google has been incorporating generative AI into its email service and testing AI-powered search. The company currently processes up to 9 billion searches a day. Based on the available data, de Vries estimates that if every Google search used AI, it would require approximately 29.2 TWh of power per year, equivalent to the annual electricity consumption of Ireland.
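Inverting that estimate gives the implied energy cost of a single AI-assisted search. A sketch using only the figures quoted above:

```python
searches_per_day = 9e9        # Google searches per day (from the article)
annual_energy_twh = 29.2      # estimated annual demand if every search used AI

annual_searches = searches_per_day * 365
# Convert TWh to Wh (1 TWh = 1e12 Wh) and divide by annual search count
energy_per_search_wh = annual_energy_twh * 1e12 / annual_searches

print(f"~{energy_per_search_wh:.1f} Wh per AI-assisted search")
```

That works out to roughly 9 Wh per query, an order of magnitude more than a conventional search is generally estimated to use.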

However, de Vries notes that this extreme scenario is unlikely in the short term because of the high cost of additional AI servers and bottlenecks in the AI server supply chain. Nevertheless, AI server production is projected to grow quickly: by 2027, worldwide AI-related electricity consumption could rise by 85 to 134 TWh annually, according to de Vries's projection of AI server production.

This amount is comparable to the annual electricity consumption of countries such as the Netherlands, Argentina, and Sweden. Furthermore, improvements in AI efficiency could also enable developers to repurpose some computer processing chips for AI use, which could further increase AI-related electricity consumption.

“The potential growth highlights that we need to be very mindful about what we use AI for. It’s energy-intensive, so we don't want to put it in all kinds of things where we don’t actually need it,” de Vries says.

This schematic illustrates the most geophysically plausible explanation for the abundance of HSE metals present in the Earth’s mantle. During the long period of bombardment, impactors would strike the Earth and deliver materials. (a) Liquid metals would sink in the locally produced impact-generated magma ocean before percolating through the partially molten zone beneath. (b) Compression causes the metals in the molten zone to solidify and sink. (c) Then thermal convection mixes and redistributes the metal-impregnated mantle components over long geologic time frames.

New research proposes impact-driven mixing of mantle materials to explain the mantle's current composition, shedding light on Earth's precious metals

A new study has found a geophysically plausible scenario to explain the abundance of certain precious metals, including gold and platinum, in the Earth’s mantle.

Scientists hypothesize that early in Earth’s evolution, about 4.5 billion years ago, the Earth sustained an impact with a Mars-sized planet and the Moon formed from the debris that was ejected into an Earth-orbiting disk.

The study's simulations show how impact-driven mixing of mantle materials could have kept these metals from sinking completely into the Earth's core, with mantle convection then redistributing the materials and retaining HSEs in the mantle.

Dr. Simone Marchi from Southwest Research Institute collaborated on a recent study that found the first geophysically plausible scenario explaining the abundance of precious metals in the Earth's mantle, including gold and platinum. The simulations carried out by scientists suggest that an impact-driven mixing of mantle materials could prevent the metals from completely sinking into the Earth's core.

The Earth sustained an impact with a Mars-sized planet about 4.5 billion years ago, resulting in the formation of the Moon from the debris ejected into an Earth-orbiting disk. The so-called "late accretion" followed, during which planetesimals as large as our Moon impacted the Earth, delivering materials such as "highly siderophile" elements (HSEs) - metals with a strong affinity for iron - that were integrated into the young Earth.

Previous simulations of impacts penetrating Earth's mantle showed that only small fractions of a planetesimal's metallic core would be assimilated by the mantle, while most of these metals, including HSEs, quickly drained down into the Earth's core. That raises the question: how did Earth keep some of its precious metals? To explain the mix of metal and rock in the present-day mantle, the researchers developed new simulations.

The relative abundance of HSEs in the mantle points to delivery via impact after Earth's core had formed; however, retaining those elements in the mantle proved difficult to model - until now. The new simulation considered how a partially molten zone under a localized impact-generated magma ocean could have stalled the descent of planetesimal metals into Earth's core.

The researchers modeled mixing an impacting planetesimal with mantle materials in three flowing phases - solid silicate minerals, molten silicate magma, and liquid metal. The rapid dynamics of such a three-phase system, combined with the long-term mixing provided by convection in the mantle, allows HSEs from planetesimals to be retained in the mantle.

In this scenario, an impactor would crash into the Earth, creating a localized liquid magma ocean where heavy metals sink to the bottom. When metals reach the partially molten region beneath, the metal would quickly percolate through the melt and, after that, slowly sink toward the bottom of the mantle. During this process, the molten mantle solidifies, trapping the metal. That's when convection takes over, as heat from the Earth's core causes a very slow creeping motion of materials in the solid mantle, and the ensuing currents carry heat from the interior to the planet's surface.

"Mantle convection refers to the process of rising hot mantle material and sinking colder material," lead author Dr. Jun Korenaga from Yale University said. "The mantle is almost entirely solid although, over long geologic time spans, it behaves as a ductile and highly viscous fluid, mixing and redistributing mantle materials, including HSEs accumulated from large collisions that took place billions of years ago."

A multi-wavelength view of the environment of the supermassive black hole Sgr A* (yellow X). The stars appear red, the dust blue. Many of the young stars in the star cluster IRS13 are obscured by dust or outshone by the bright stars. Credits: Florian Peißker / University of Cologne

German astrophysicist discovers a turbulent fountain of youth in the center of our galaxy with a remarkable formation history

A team of researchers led by Dr Florian Peißker at the University of Cologne’s Institute of Astrophysics has discovered an unexpectedly large number of young stars in the vicinity of a supermassive black hole. The study, titled ‘The Evaporating Massive Embedded Stellar Cluster IRS 13 Close to Sgr A*. Detection of a Rich Population of Dusty Objects in the IRS13 Cluster’ and published in The Astrophysical Journal, suggests that the star cluster IRS13, discovered over twenty years ago, migrated towards the supermassive black hole through a variety of processes, with its formation history being turbulent.

The researchers used a wide variety of data gathered over several decades from various telescopes to determine the cluster members in detail. The stars are a few hundred thousand years old, which is considered young for such conditions. Given the high-energy radiation and tidal forces of the galaxy, it is unexpected to have such a large number of young stars in the direct vicinity of the supermassive black hole.

In addition to the young stars, the researchers detected water ice at the center of our galaxy for the first time. The study used the James Webb Space Telescope (JWST); the prism on board the telescope, used to record a spectrum of the Galactic Center free of atmospheric interference, was developed at the Institute of Astrophysics in the working group led by Professor Dr. Andreas Eckart, a co-author of the publication. Water ice around very young stellar objects is another independent indicator of the youth of some stars near the black hole.

The researchers also found that IRS13 has had a turbulent formation history: it migrated toward the supermassive black hole through friction with the interstellar medium, collisions with other star clusters, or internal processes. At a certain distance, the cluster was then captured by the black hole's gravity. In this process, a bow shock may have formed at the cluster's leading edge from the surrounding dust, much like the bow wave at the tip of a ship in water. The resulting increase in dust density stimulated further star formation, which explains why these young stars sit mostly at the front of the cluster.

According to Dr. Peißker, the analysis of IRS13 and the accompanying interpretation of the cluster is the first attempt to unravel a decades-old mystery about the unexpectedly young stars in the Galactic Center. Dr. Michal Zajaček, the second author of the study and a scientist at Masaryk University in Brno (Czech Republic), added that the star cluster IRS13 may be the key to unraveling the origin of the dense star population at the center of our galaxy. The researchers have gathered extensive evidence that very young stars in the immediate vicinity of the supermassive black hole may have formed in star clusters such as IRS13, and have identified star populations of different ages in the cluster so close to the center of the Milky Way.

Further Information:
https://iopscience.iop.org/article/10.3847/1538-4357/acf6b5

The ATLAS detector (Image: CERN)

ATLAS establishes strict boundaries on the potential existence of dark matter particles that are supersymmetric

The ATLAS collaboration has conducted an extensive search for weakly interacting supersymmetric particles, which could potentially explain the mystery of dark matter. The searches achieved high sensitivity, exploring a broad range of supersymmetric particle masses and excluding some previously favored regions.

Despite the progress made, many models remain elusive, and further developments in search strategies will be required to improve the sensitivity of ATLAS searches.

If new particles do exist, the Large Hadron Collider (LHC) is the perfect place to search for them. According to the theory of supersymmetry, a whole new family of partner particles exists for each of the known fundamental particles. While this might seem extravagant, these partner particles could address various shortcomings in current scientific knowledge, such as the source of the mysterious dark matter in the Universe, the “unnaturally” small mass of the Higgs boson, the anomalous way that the muon spins and even the relationship between the various forces of nature. But if these supersymmetric particles exist, where might they be hiding?

This is what physicists at the LHC have been trying to find out, and in a recent study of proton-proton collision data from Run 2 of the LHC (2015-2018), the ATLAS collaboration provides the most comprehensive overview yet of their searches for some of the most elusive types of supersymmetric particles - those that would only rarely be produced through the “weak” nuclear force or the electromagnetic force. The lightest of these weakly interacting supersymmetric particles could be the source of dark matter.

Thanks to the increased collision energy and the higher collision rate provided by Run 2, as well as new search algorithms and machine-learning techniques, ATLAS has been able to explore deeper into this difficult-to-reach territory of supersymmetry.

The ATLAS physicists have combined results from eight searches, each seeking evidence for supersymmetric particles in a different way. The combined power and sensitivity of the different search strategies have allowed ATLAS researchers to study tens of thousands of supersymmetry models, each with different predictions about the masses of supersymmetric particles.

These ATLAS searches have unprecedented sensitivity and explore a wide range of supersymmetric particle masses. The ATLAS physicists looked for evidence of "lab-made" dark matter - that is, dark matter created in LHC collisions. Their searches have proved complementary to other experiments seeking natural, "relic" dark matter left over from the Big Bang. Unlike collider searches, which can infer dark matter's presence without detecting the particles directly, those experiments rely on dark matter particles striking normal materials often enough to be detected.

One of the most significant findings of this combination of searches is that some regions for supersymmetric-particle masses that were previously viewed favorably, where the dark matter particle has about half the mass of the Z boson or the Higgs boson, have now been almost entirely ruled out.

Another benefit of such a comprehensive study is an understanding of which supersymmetry models have not yet been probed. ATLAS has presented examples of such surviving models, which can be used to optimize future searches. Though possible hiding places for supersymmetric particles are being systematically reduced, many models remain stubbornly elusive. Improving the sensitivity of ATLAS searches to these models will require more collision data and further clever developments in search strategies.