Four Landsat path/row footprints in purple, which were selected for accuracy assessment, are overlaid on the National Land Cover Database (NLCD) Forest Disturbance Date 1986-2019 science product.

EROS Center scientist Jin develops method to map forest disturbance, aiming for automatic, real-time monitoring

Every second, the planet loses a stretch of forest equivalent to a football field due to logging, fires, insect infestation, disease, wind, drought, and other factors. In a recently published study, researchers from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center presented a comprehensive strategy to detect when and where forest disturbance happens at a large scale and provide a deeper understanding of forest change.

“Our strategy leads to more accurate land cover mapping and updating,” said Suming Jin, a physical scientist with the EROS Center.

To understand the big picture of a changing landscape, scientists rely on the National Land Cover Database, which turns Earth-observation satellite (Landsat) images into pixel-by-pixel maps of specific features. Between 2001 and 2016, the database showed that nearly half of the land cover change in the contiguous United States involved forested areas.

“To ensure the quality of National Land Cover Database land cover and land cover change products, it is important to accurately detect the location and time of forest disturbance,” said Jin.

Jin and their team developed a method to detect forest disturbance by year. The approach combines strengths from a time-series algorithm and a 2-date detection method to improve large-region operational mapping efficiency, flexibility, and accuracy. The new technique facilitates more effective forest management and policy, among other applications.

Landsat data have been widely used to detect forest disturbance because of their long history, high spatial and radiometric resolutions, free and open data policy, and suitability for creating continental or even global mosaic images for different seasons.

“We need algorithms that can create consistent large-region forest disturbance maps to assist in producing a multi-epoch National Land Cover Database,” said Jin. “We also need those algorithms to be scalable so we can track forest change over longer periods of time.”

A commonly employed method called “2-date forest change detection” compares images from two different dates, while time-series algorithms work through yearly or even monthly sequences of Landsat observations.

In general, 2-date forest change detection algorithms are more flexible than time-series methods and use richer spectral information. The 2-date method can easily determine changes between image bands, indices, classifications, and combinations and, therefore, detect forest disturbances more accurately. However, the 2-date method only detects changes for one time period and usually requires additional information or further processing to separate forest changes from other land cover changes.
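To make the 2-date idea concrete, here is a minimal sketch (not the NLCD production algorithm) that differences a standard burn-sensitive index, the Normalized Burn Ratio (NBR), between two Landsat dates and flags large drops; the 0.27 threshold and the reflectance values are assumptions for illustration only.

```python
import numpy as np

def nbr(nir, swir2):
    """Normalized Burn Ratio: (NIR - SWIR2) / (NIR + SWIR2)."""
    return (nir - swir2) / np.maximum(nir + swir2, 1e-6)

def two_date_disturbance(nir_t1, swir2_t1, nir_t2, swir2_t2, threshold=0.27):
    """Flag pixels whose NBR dropped sharply between the two dates.

    The threshold is illustrative; operational mapping tunes it and adds
    masks for clouds, water, and non-forest before accepting a change.
    """
    dnbr = nbr(nir_t1, swir2_t1) - nbr(nir_t2, swir2_t2)
    return dnbr > threshold

# Two synthetic pixels: the second loses canopy between the dates.
nir_t1, swir2_t1 = np.array([0.35, 0.35]), np.array([0.10, 0.10])
nir_t2, swir2_t2 = np.array([0.34, 0.12]), np.array([0.11, 0.20])
print(two_date_disturbance(nir_t1, swir2_t1, nir_t2, swir2_t2))  # [False  True]
```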

On the other hand, time-series-based forest change detection algorithms can use spectral and long-term temporal information and produce changes for multiple dates simultaneously. However, these methods usually require every step of the time series algorithm to be processed again when a new date is added, which can be cumbersome for continuous monitoring updates and lead to inconsistencies.
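For contrast, a minimal sketch of the time-series idea (generic, not the TSUN algorithm described below): scan a pixel's yearly index values and flag the first year that falls far below its own history. Note that this naive version re-reads the whole record whenever a year is appended, which is exactly the reprocessing burden described above.

```python
import numpy as np

def first_disturbance_year(years, index_series, k=3.0, warmup=5):
    """Return the first year whose index value falls more than k standard
    deviations below the mean of all prior years (None if no break found).

    Illustrative only: real time-series algorithms also model seasonality,
    clouds, and gradual trends rather than a simple running mean.
    """
    for i in range(warmup, len(index_series)):
        history = index_series[:i]
        if index_series[i] < history.mean() - k * history.std():
            return years[i]
    return None

years = np.arange(1986, 1996)
series = np.array([0.62, 0.60, 0.61, 0.63, 0.59, 0.61, 0.22, 0.25, 0.30, 0.35])
print(first_disturbance_year(years, series))  # 1992: the year of the sharp drop
```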

Previous studies proposed ensemble approaches to improve forest change mapping accuracy, including “stacking,” or combining the output of different mapping methods. While stacking reduces omission and commission error rates, the method is computationally intensive and requires reference data for training.

Jin and team’s approach combined strengths from 2-date change detection methods with a continuous time-series change detection method, called the Time-Series method Using the Normalized Spectral Distance (NSD) index (TSUN), to improve large-region operational mapping efficiency, flexibility, and accuracy. Using this combination, the researchers produced the NLCD 1986–2019 forest disturbance product, which shows the most recent forest disturbance date between 1986 and 2019 at two-to-three-year intervals.

“The TSUN index detects multi-date forest land cover changes and was shown to be easily extended to a new date even when new images were processed in a different way than previous date images,” Jin said.

The research team plans to improve the tool by increasing the time frequency and producing an annual forest disturbance product from 1986 to the present.

“Our ultimate goal is to automatically produce forest disturbance maps with high accuracy with the capability of continually monitoring forest disturbance, hopefully in real-time,” Jin said.

This work was supported by the USGS-NASA Landsat Science Team Program project “Toward Near Real-time Monitoring and Characterization of Land Surface Change for the Conterminous US.”

Other contributors include Jon Dewitz from the U.S. Geological Survey Earth Resources Observation and Science (EROS) Center; Congcong Li with ASRC Federal Data Solutions; Daniel Sorenson with the U.S. Geological Survey; Zhe Zhu with the University of Connecticut; Md Rakibul Islam Shogib, Patrick Danielson and Brian Granneman from KBR; Catherine Costello with the U.S. Geological Survey Geosciences and Environmental Change Science Center; Adam Case with Innovate! Inc.; and Leila Gass with the U.S. Geological Survey Western Geographic Science Center.

UK space missions set to improve solar storm forecasts

Satellites launched into outer space could send back improved warnings of dangerous solar storms thanks to a breakthrough in the way scientists use space weather measurements.   

Experts from the University of Reading have found that satellite data that is less reliable but returned to Earth rapidly can improve the accuracy of forecasts of the solar wind - a harmful stream of charged particles sent from the sun - by nearly 50 percent.

Their research, published today in Space Weather, could pave the way for agencies, such as the Met Office, to provide more accurate forecasts for severe space weather, which can cause blackouts and harm human health.

Lead researcher Harriet Turner, from the University of Reading’s Department of Meteorology, said: “We know lots about how to prepare for storms that form on Earth, but we need to improve our forecasts of the dangerous weather we get from space. Space weather threatens our technology-focused way of life as it can cause power grids to fail, damage satellites, such as those used for GPS, and even make astronauts ill.

“Our research has shown that using rapid satellite measurements to forecast space weather is effective. By sending spacecraft far from Earth, we can use this new technique to get better solar storm predictions and ensure we are prepared for what’s to come.”

Simon Machin, Met Office Space Weather Manager, said: “This is a great example of the value that can result through our collaboration with academia. By pulling through scientific research into the operational domain, improved space weather forecasting will ultimately enhance our nation's ability to prepare for and mitigate against space weather events.”

Old dogs and new tricks

To predict space weather, scientists need to forecast the solar wind conditions on Earth. To do this, they combine supercomputer simulations with observations from space to estimate what space weather will be like, a process known as data assimilation. The highest quality observations only become available many days after they are made, as they are processed and ‘cleaned’ on the ground, meaning forecasts based on them take longer to produce.

To obtain forecasts faster, the research team tried using near-real-time (NRT) data. NRT data undergo no processing or cleaning, meaning they are less accurate but can be made available within a couple of hours. The research team found that forecasts produced using the NRT data still give reliable predictions and allow greater warning time. This could enable authorities to better prepare for power failures that could cost up to 2.1 trillion dollars over a century in the USA and Europe.
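The weighting at the heart of data assimilation can be sketched for a single scalar quantity, such as solar wind speed: blend the model forecast and the observation in proportion to their error variances. The numbers below are invented for illustration, not Met Office values; the point is that a noisier NRT observation receives less weight than a cleaned one would, yet still nudges the forecast toward reality days earlier.

```python
def assimilate(forecast, f_var, obs, o_var):
    """Optimal-interpolation update for one scalar state variable.

    The gain weights the observation by the relative error variances:
    a noisy observation (large o_var) moves the forecast less.
    """
    gain = f_var / (f_var + o_var)
    analysis = forecast + gain * (obs - forecast)
    analysis_var = (1.0 - gain) * f_var
    return analysis, analysis_var

# Illustrative numbers: model forecasts 400 km/s; a noisy NRT observation
# (error variance 1600) reports 480 km/s, arriving within hours of measurement.
print(assimilate(400.0, f_var=900.0, obs=480.0, o_var=1600.0))
# -> (428.8, 576.0): pulled toward the observation, with reduced uncertainty
```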

To the stars

The scientists behind the study say using the new technique with upcoming space missions will enable better forecasts.

The European Space Agency (ESA) will launch ‘Vigil’ in the mid-2020s, a first-of-its-kind mission that will monitor potentially hazardous solar activity using several UK-built instruments. 

By launching the spacecraft into a position 60 degrees behind Earth in longitude, the Met Office will be able to improve space weather forecasts by using data assimilation of the NRT solar wind data. 

It is hoped the unique location of Vigil will allow scientists to see the solar wind that will later arrive at Earth, maximizing forecast accuracy and warning time.

Figure 1 of Wang Z, Liu C, Cheng C, Qin Q, Yan L, Qian J, Sun C, Zhang L. On the Multiscale Oceanic Heat Transports Toward the Bases of the Antarctic Ice Shelves. Ocean-Land-Atmos. Res. 2023;2:Article 0010. CREDIT: Wang Z, Liu C, Cheng C, Qin Q, Yan L, Qian J, Sun C, Zhang L.

Chinese researchers' review shows why Antarctic ice shelves are losing mass and how that loss drives global sea level rise

Many nuances factor into the behavior of large ice sheets in the Earth’s oceans; this study reviews those nuances and the progress toward understanding and accurately simulating them

The Greenland ice sheet (GIS) and the Antarctic ice sheet (AIS) are major contributors to global mean sea level (GMSL) change. The seas surrounding Antarctica, such as the Bellingshausen-Amundsen Seas and the Indian Ocean sector, are warming significantly faster than the other marginal seas, with immediately noticeable effects on the mass balance of the AIS (the glacier’s net mass budget, mainly accounting for ice gained by snowfall and lost by melting and calving). How much the AIS will contribute to the overall rise in sea level is unknown, and current models vary drastically, leaving a major question about future sea levels unanswered. The development of accurate models and technology that can predict the future state of the Earth’s oceans and ice sheets will help answer these questions.
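Schematically, the mass balance described above can be written as follows (our notation, a simplification for illustration rather than the paper's formulation):

```latex
\frac{dM_{\mathrm{AIS}}}{dt}
  = \underbrace{A}_{\text{snow accumulation}}
  - \underbrace{B}_{\text{basal melting}}
  - \underbrace{C}_{\text{calving}}
```

When basal melting and calving outpace accumulation, dM/dt is negative: the ice sheet loses mass and contributes to GMSL rise.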

“In this paper, we identify key multiscale oceanic processes that are responsible for heat delivery to the bases of the Antarctic ice shelves and review our current understanding of these processes,” said Zhaomin Wang, professor and first author of the study.

One of these processes responsible for heat delivery is circumpolar deep water (CDW).

Fig. 2 of the study maps the Antarctic continent and the Southern Ocean. Color shading shows the seafloor topography of the Southern Ocean and the Antarctic marginal seas, light blue marks the Antarctic ice shelves, and colored lines mark the locations of the cross sections in Fig. 4 for three typical shelf regions: a cold and fresh shelf (118°E; cyan), a cold and saline shelf (35°W; green), and a warm and saline shelf (100°W; red).

CDW is a mix of water masses from different ocean basins, culminating in a warm, salty mass of water in the Southern Ocean. This water can cut rapidly into the base of ice shelves, carving out “cavities”, hollows melted into the ice by warm currents. These cavities are then filled with warm modified CDW and high-salinity shelf water, which eventually leads to chunks breaking off the edge of the glacier, known as “calving”. CDW intrusion and cavity development, along with basal melting and calving, are the substantial processes by which the AIS loses mass, making it a significant contributor to the rise in GMSL.

The effects CDW has on the melting of Antarctic ice shelves, along with other mechanisms contributing to warm air and water circulation, are generally understood, though models reproduce them inconsistently. This may be due to an incomplete understanding of small-scale processes, particularly the effects that eddies (short-lived oceanic circulation patterns) and the topography of the cavities have on melting.

“Both eddies and the dynamic effects of bottom topography have been proposed to be crucial in heat transport toward the fronts of ice shelves, in addition to heat transport by coastal currents,” Wang said. 

Understanding these topographic subtleties helps clarify how CDW is transported and how coastal currents, surface winds, and bottom pressure torque shape the interaction of these warm water currents with ice shelves and the ice sheet.

In review, ice melting driven by warm water is not as simple as it seems on the surface. The researchers conclude that while progress is being made in understanding the mechanisms by which oceanic warming affects the AIS, improvement and innovation are needed to assess where the continued melting of ice shelves in the Antarctic will leave humanity in the future. Retreating coastlines and GMSL rise are anticipated, though the magnitudes to expect are poorly understood.

The researchers suggest setting priorities, starting with improving estimates of cavity geometry and bathymetry (the measured depth of the seafloor) and projections of the future mass balance of ice sheets. Investigating small-scale processes may also provide valuable information for building better models and, critically, for determining what the mass loss of the AIS means for atmospheric, oceanic, and sea ice circulations.

National Natural Science Foundation of China projects, the Independent Research Foundation of the Southern Marine Science and Engineering Guangdong Laboratory, and the Natural Science Foundation of Jiangsu Province made this research possible through funding.

Zhaomin Wang, Chengyan Liu, and Chen Cheng of the Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai); Qing Qin, Liangjun Yan, Jiangchao Qian, and Chong Sun of the College of Oceanography at Hohai University; and Li Zhang of the School of Atmospheric Sciences at Sun Yat-sen University contributed to this research.

Renata Romeo / Ocean Image Bank

WCS marine scientist McClanahan builds first-ever AI algorithm that accurately estimates fish stocks

  • First-ever AI algorithm accurately estimates fish stocks, could save millions, and bridges the global data and sustainability divide
  • Understanding fish stocks is critical to sustainably managing fisheries and ensuring they can continue to provide crucial livelihoods and nutrition, especially in coastal areas where hundreds of millions of people depend on local fisheries 
  • This tool will get data into the hands of local and national governments, so they can make informed decisions about their natural resources and keep “blue foods” on the table
  • The algorithm works with 85 percent accuracy for the first pilot region in the Indian Ocean coral reefs - new partnerships and funding are now needed to scale the tool for worldwide use

For the first time, a newly published artificial intelligence (AI) algorithm allows researchers to quickly and accurately estimate coastal fish stocks without ever entering the water. This breakthrough could save millions of dollars in annual research and monitoring costs while giving least-developed countries access to data about the sustainability of their fish stocks.

Understanding “fish stocks” – the amount of living fish found in an area’s waters – is critical to understanding the health of our oceans. This is especially true in coastal areas where 90 percent of people working in the fisheries industry live and work. In the wealthiest countries, millions of dollars are spent each year on “stock assessments” – expensive and labor-intensive efforts to get people and boats out into the water to count fish and calculate stocks. That immensely high cost has long been a barrier for tropical countries in Africa and Asia, home to the highest percentage of people who depend on fishing for food and income. Small-scale fishers working coastal waters in many countries are essentially operating blindly, with no accurate data about how many fish are available in their fisheries. Without data, coastal communities and their governments cannot create management plans to help keep their oceans healthy and productive for the long term.

Now, thanks to advances in satellite data and machine learning algorithms, researchers have created a model that has successfully estimated fish stocks with 85 percent accuracy in the Western Indian Ocean pilot region. This tool has the potential to get data quickly and cheaply into the hands of local and national governments, so they can make informed decisions about their natural resources and keep “blue foods” on the table.

“Our goal is to give people the information required to know the status of their fish resources and whether their fisheries need time to recover or not. The long-term goal is that they, their children, and their neighbors can find a balance between people’s needs and ocean health,” said Tim McClanahan, Director of Marine Science at WCS. “This tool can tell us how fish stocks are doing, and how long it will take for them to recover to healthy levels using various management options. It can also tell you how much money you’re losing or can recoup every year by managing your fishery – and in the Western Indian Ocean region where we piloted this tool, it’s no less than $50 to $150 million each year.”

McClanahan and his co-authors used years of fish abundance data combined with satellite measurements and an AI tool to produce this model. The result? A simple, easy-to-use pilot tool to better understand and manage our oceans. With further development, anyone from anywhere in the world would be able to input seven easily accessible data points - things like distance from shore, water temperature, ocean productivity, existing fisheries management, and water depth - and receive back an accurate fish stock estimate for their nearshore ecosystems.
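As a hedged sketch of how a model of this shape can be assembled (this is not the published WCS algorithm; the feature names, values, and synthetic relationship below are invented for illustration), a gradient-boosted regressor can map seven site predictors to a biomass estimate:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Seven illustrative predictors, echoing the kinds named in the article:
# distance from shore (km), sea surface temperature (C), ocean productivity
# (chlorophyll, mg/m^3), management level (0=open, 1=restricted, 2=no-take),
# depth (m), plus two invented stand-ins (wave exposure, habitat complexity).
low = [0.0, 24.0, 0.05, 0.0, 1.0, 0.0, 0.0]
high = [50.0, 31.0, 1.50, 2.0, 40.0, 1.0, 1.0]
X = rng.uniform(low, high, size=(500, 7))

# Synthetic "observed biomass" (kg/ha), for demonstration only: more
# protection and productivity mean more fish; more access means fewer.
y = 300 + 150 * X[:, 3] + 40 * X[:, 2] - 2 * X[:, 0] + rng.normal(0, 30, 500)

model = GradientBoostingRegressor().fit(X, y)

site = [[5.0, 27.5, 0.8, 2.0, 12.0, 0.5, 0.5]]  # a hypothetical reef site
print(f"Estimated biomass: {model.predict(site)[0]:.0f} kg/ha")
```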

“We know that during times of crisis and hardship, from climate change-induced weather events to the COVID-19 pandemic, people living on the coast increasingly rely on fishing to feed themselves and their families,” said Simon Cripps, Executive Director of Marine Conservation at WCS. “The value of this model is that it tells managers, scientists, and importantly, local communities how healthy a fishery is and how well it can support the communities that depend on it, especially during times of crisis. Once a fishery’s status is known, it gives communities and managers the information to move forward to design solutions to improve fish stocks and improve the resilience of local communities, the fishing industry, and local and national economies.” 

The algorithm has been shown to work with high accuracy for coral reef fisheries in the Western Indian Ocean pilot region. WCS is currently seeking new partnerships and funding to scale the tool so it can be deployed and fill critical data gaps around the world. 

This work was completed over several years and with the support of grants from The Tiffany and Co. Foundation, the John D. and Catherine T. MacArthur Foundation, the Bloomberg Ocean Initiative, the UK Darwin Initiative, and the Western Indian Ocean Marine Science Association’s Marine Science for Management Program (WIOMSA-MASMA).

The financial planning service of robo-advising, in which people get personalized, automated investment advice and portfolio management from an algorithm or AI, is already a big market and growing fast, said Pawan Jain, assistant professor of finance at the WVU John Chambers College of Business and Economics. (WVU Photo/Alyssa Reeves)

WVU finance prof Jain describes why people are turning to AI for investment advice

With ChatGPT and other artificial intelligence systems making waves, some people are turning to AI for financial planning, according to one West Virginia University researcher.

The AI-powered financial planning service “robo-advising,” or automated investing, is relatively new but rapidly expanding, and many large U.S. investment firms now offer robo-advising accounts. 

Pawan Jain, assistant professor of finance at the WVU John Chambers College of Business and Economics, is an expert on technology-forward financial tools such as blockchain and high-frequency trading and coauthor of a demographic analysis of a robo-advising firm’s customer base that appeared in the Financial Review.

“In robo-advising, algorithms monitor portfolios and evaluate clients’ financial conditions, risk tolerances, and objectives to provide investment recommendations with little to no human help, typically via a diversified mix of low-cost, exchange-traded funds or index funds.

“Robo-advising could disrupt traditional wealth management by enabling investment of smaller sums at lower costs.

“From a user perspective, robo-advising typically involves creating an account on the robo-adviser’s website or phone app, answering a series of questions about your investment goals and risk tolerance, and linking a bank account to fund the investment account. The robo-adviser or AI then analyzes the information collected from the client — risk tolerance, investment goals, time horizon — and creates a personalized investment portfolio for the client. Robo-advisers share information in real time and can be accessed online 24/7.

“The portfolio is typically composed of a diversified mix of low-cost ETFs or index funds that align with the investor’s goals and risk tolerance. Clients can view their portfolios, track investments and make account changes such as adding or withdrawing funds through a web-based or mobile platform. The robo-adviser’s algorithm monitors the portfolio and adjusts as needed to keep the portfolio aligned with the investor’s goals, usually through rebalancing. Some robo-advisers offer additional services such as personalized financial planning and tax optimization.

“The minimum dollar amount required to start an investment account that comes with personalized financial advice from an AI is much smaller than the investment that’s required to get tailored financial planning support from a human, so robo-advising lowers the economic barrier to guided participation in the stock market. Robo-advising enables people who could not afford traditional investment services to start saving and investing based on advice and portfolio management that is automated but tailored to individual situations, preferences, and goals.

“Robo-advising has the potential to fuel economic growth on a macro scale by engaging new, lower-income demographics in, first of all, saving for retirement and, secondly, doing so through investments in stocks, bonds, and other assets.

“One downside is that robo-advising portfolios generate significantly lower returns as compared to the market index, so a portfolio managed by AI is unlikely to perform as well as one managed by a professional human adviser. And robo-advising is not for everyone — someone who prefers personal interaction and emotional support should choose a human adviser to guide them through the investment process. Finally, as with any centralized technology company, there is always a cybersecurity risk,” explained Pawan Jain, assistant professor of finance at the WVU John Chambers College of Business and Economics.
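The monitoring-and-rebalancing loop Jain describes can be sketched as a simple threshold rule (a generic illustration, not any firm's production logic): when an asset's weight drifts beyond a band around its target, the algorithm generates offsetting trades.

```python
def rebalance(values, targets, band=0.05):
    """Return dollar trades restoring target weights once any asset drifts
    more than `band` (here 5 percentage points) from its target.

    values: current dollar value per asset; targets: desired weights.
    Illustrative only; real robo-advisers also weigh taxes and trading costs.
    """
    total = sum(values.values())
    drift = {asset: values[asset] / total - targets[asset] for asset in values}
    if all(abs(d) <= band for d in drift.values()):
        return {}  # within tolerance: no trades needed
    return {asset: round(targets[asset] * total - values[asset], 2)
            for asset in values}

# A stock rally pushes a 60/40 portfolio to roughly 68/32.
print(rebalance({"stock_etf": 8500.0, "bond_etf": 4000.0},
                {"stock_etf": 0.60, "bond_etf": 0.40}))
# -> {'stock_etf': -1000.0, 'bond_etf': 1000.0}: sell stocks, buy bonds
```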