Fritzner's AI work revolutionizes sea ice warnings in Norway

For vessels that journey into the polar seas, keeping track of the extent of sea ice is critical. Large resources are therefore spent on collecting data and modelling future developments in order to provide reliable sea ice warnings.

- As of now, large resources are needed to create these ice warnings, and most of them are made by The Norwegian Meteorological Institute and similar centers, says Sindre Markus Fritzner, a Doctoral Research Fellow at UiT The Arctic University of Norway.

He is employed at the Department of Physics and Technology and has recently submitted a doctoral thesis where he has looked at the option of using artificial intelligence to make ice warnings faster, better, and more accessible than they are today.

In need of supercomputers

CAPTION: Sea ice in the polar sea. CREDIT: Jørn Berger-Nyvoll, UiT

The ice warnings used today are based on dynamic computer models that are fed with satellite observations of the ice cover, along with whatever updated data can be gathered about ice thickness and snow depth. This generates considerable amounts of data, which must be processed by powerful supercomputers to produce the forecasts.

- Dynamic models are physical models and require a lot of data to be processed. If you are going to make warnings about future events, you need to use a supercomputer, Fritzner explains.

Supercomputer time is a limited and costly resource, which puts these warnings out of reach for anyone without access to such infrastructure.

Artificial intelligence makes calculations accessible on a regular laptop

Fritzner has looked at how artificial intelligence can be used to provide these sea ice warnings faster, better, and cheaper than ever - on a regular laptop.

Machine learning is a specialized field within artificial intelligence, where statistical methods are used to let computers find patterns and relationships in large sets of data. The machine learns instead of being explicitly programmed, and it all comes down to developing algorithms that enable computers to learn from, and make predictions based on, empirical data.

In his work, Fritzner has, for example, fed the model data describing the ice conditions in one specific week, together with data showing how the ice looked one week later.

- Thus, it is the relationship between these weeks that the machine learns by itself, and in this way it can predict how the ice will evolve, Fritzner says.

When fully developed, such an algorithm will demand far less computing power than the traditional physical model.
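The week-to-week setup Fritzner describes is, in essence, supervised learning on pairs of consecutive ice states. A minimal sketch of that idea, using synthetic data and a simple linear "emulator" rather than Fritzner's actual model, shows why the trained predictor is so cheap to run:

```python
import numpy as np

# Synthetic stand-in for sea-ice concentration maps: each row is a
# flattened grid for one week; the "next week" is a slightly drifted
# version of the current week (a toy dynamic invented for this sketch).
rng = np.random.default_rng(0)
n_weeks, n_cells = 200, 50
X = rng.uniform(0.0, 1.0, size=(n_weeks, n_cells))   # ice state, week t
drift = np.roll(np.eye(n_cells), 1, axis=1)          # simple advection
Y = 0.7 * X + 0.3 * X @ drift                        # ice state, week t+1

# Train a ridge-regression emulator: learn the week-to-week mapping.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_cells), X.T @ Y)

# Once trained, a forecast is just a matrix multiply -- cheap enough
# for a laptop, unlike the physical model that generated the data.
x_now = rng.uniform(0.0, 1.0, size=n_cells)
forecast = x_now @ W
truth = 0.7 * x_now + 0.3 * x_now @ drift
print(float(np.abs(forecast - truth).max()))
```

All the expensive work happens once, at training time; afterwards the forecast costs a single matrix multiplication, which is the point Fritzner makes about running warnings on an ordinary laptop.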

- If you use artificial intelligence and have a fully trained model, you can run such a calculation on a regular laptop, Fritzner says.

Every vessel can make calculations on its own

This opens up several fields of use, one of them being more precise weather reports in The High North. Fritzner also points out that it can be used by the shipping industry operating close to the marginal ice zone, a form of traffic that will only increase.

- One example is cruise traffic, where it will be very important for the cruise vessels to know where the ice is, and where it will move in the next couple of days, Fritzner says.

As it stands, high-resolution models cannot be run on board the vessel. The vessel has to contact The Norwegian Meteorological Institute, which then runs the model on a supercomputer before transmitting the data back.

- If you are on a vessel in The Barents Sea, you are dependent on being connected to a network to download the warnings from The Norwegian Meteorological Institute.

If the vessel is equipped with the right software and a trained artificial intelligence model, this can be done on board, with almost no computing power required at all, Fritzner says.

More development needed

Although the research so far looks promising, the results are not yet as good as those of the traditional methods. But the development of machine learning and artificial intelligence is gathering pace, and Fritzner has no doubts about its potential.

- The experiences so far are good, but not perfect. What I observed when comparing machine learning with the traditional physical models was that the machine-learning models were much faster, and as long as the changes in the ice were small, they functioned quite well. When the changes were greater, with a lot of melting, they struggled more than the physical models, Fritzner explains.

He points to the challenge that models based on artificial intelligence rely only on historical data, while the physical models are constantly adapted to large geophysical changes such as increased melting and rapid shifts in the weather.

In his experiments, Fritzner used data such as temperature, sea ice concentration, and sea temperature. He believes the precision can be increased by feeding more data into the model, giving it a broader basis for the warnings it provides.

- Especially if you add wind and ice thickness, the machine learning will work much better, he says.

He believes further research and development will unlock the great potential of this form of machine learning.

Risks of disease from microbial pathogens in food can be predicted more quickly

The 'Food Safety Knowledge Markup Language (FSK-ML)' format allows mathematical models and model-based simulation results to be documented uniformly and made available to other researchers for supercomputer-based forecasts or further optimization of the models. With FSK-ML, even models that were developed in different programming languages can be exchanged in a harmonized format. For the first time, it is possible to integrate suitable models from other scientists into in-house calculations, simulations, and assessments at the push of a button. Simulation results also become transparent to others, as the underlying software code and all model parameters are visible to everyone, so results can be recalculated.
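FSK-ML itself is a metadata-based exchange format whose exact schema is not reproduced here; but the underlying idea the paragraph describes - shipping a model's code and all of its parameters together, so anyone can recompute the result - can be illustrated generically. The dictionary layout below is invented for this sketch and is not the actual FSK-ML schema:

```python
# Generic illustration of the idea behind model-exchange formats like
# FSK-ML: code and parameters travel together, so the simulation can be
# rerun by anyone. The structure here is a made-up schematic, NOT the
# real FSK-ML schema.
packaged_model = {
    "metadata": {"name": "toy growth model", "language": "python"},
    "parameters": {"n0": 100.0, "rate": 0.2, "hours": 10},
    "code": "n0 * (1 + rate) ** hours",
}

def run(package):
    # Recompute the simulation purely from the declared code and
    # parameters -- nothing hidden, so the result is reproducible.
    return eval(package["code"], {}, dict(package["parameters"]))

result = run(packaged_model)
print(round(result, 1))
```

Because every input is declared in the package, a reviewer can swap in different parameter values (a different growth rate, say) and recompute the scenario, which is exactly the transparency benefit the article attributes to FSK-ML.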

The FSK-ML information exchange format, which was extended and tested by the BfR under the AGINFRA+ project (2017-2019), allows human health risks to be assessed better and more quickly in the future. This means that previously developed predictive models can now quickly be run with different simulation scenarios and adapted to fit the issue at hand - whether it concerns the risk of salmonella in fresh eggs or the possible transmission of Campylobacter germs from raw chicken breast fillet to green salad in the kitchen.

The new FSK-ML data standard also makes it easier for researchers to make their results available in accordance with the FAIR data principles (findability, accessibility, interoperability, and reusability). In particular, support for the FAIR data principles means that data and information can be found, accessed, and used by different software solutions over the long term.

With the development of the FSK-ML information exchange format, the BfR provides a basis for future risk assessment. With FSK-ML, software developers in the food safety domain can now easily expand their current and future tools to include new functions for importing and exporting models. FSK-ML also represents the basis for the development of web-based model databases, where researchers from different disciplines can search for established models or even share their own models. One example of such a model database is the 'RAKIP_portal' (https://aginfra.d4science.org/web/rakip_portal/catalogue), developed in the AGINFRA+ project. Models, which can be made available and downloaded via this online platform, can then be used in different software tools on in-house computers or on other online platforms.

The use of FSK-ML models on one's own computer is possible, for example, with the open-source software "FSK-Lab" (https://foodrisklabs.bfr.bund.de/fsk-lab/), which was also developed by the BfR. In-house and external models can be imported, exported, edited, joined, and even run with this intuitive software. In this way, each user can set up their own predictions or simulation calculations. There is also an extension named "FSK2R" for the open-source scripting language R, which was presented at an international conference (esa.ipb.pt/icpmf11/welcome) in 2019.

Moreover, there are already scientific journals, such as the Food Modelling Journal (FMJ) (https://fmj.pensoft.net/), which enable FSK-ML compliant models to be imported with all relevant metadata. For example, an 'executable model paper' can be automatically generated in the FMJ in this way. The presented model is not only downloaded but is also calculated online with user-defined input parameters. Such innovative digital solutions make a significant contribution to increasing the transparency and reproducibility of scientific work, as the results presented in the article, e.g. in the review process, can be tested effectively. Moreover, the models contain all relevant metadata, such as the range of applicability.

Cleveland Clinic researchers build model online to predict risk of COVID-19, disease outcomes

Cleveland Clinic researchers have developed the world's first risk prediction model for healthcare providers to forecast an individual patient's likelihood of testing positive for COVID-19 as well as their outcomes from the disease.

According to a new study published in CHEST, the risk prediction model (called a nomogram) shows the relevance of age, race, gender, socioeconomic status, vaccination history, and current medications in COVID-19 risk. The risk calculator is a new tool to aid healthcare providers in predicting patient risk and tailoring decisions about care. It provides a more scientific approach to testing, which is important for a healthcare community facing increased demand for testing and limited resources.

"The ability to accurately predict whether or not a patient is likely to test positive for COVID-19, as well as potential outcomes including disease severity and hospitalization, will be paramount in effectively managing our resources and triaging care," said Lara Jehi, M.D., Cleveland Clinic's Chief Research Information Officer and corresponding author on the study. "As we continue to battle this pandemic and prepare for a potential second wave, understanding a person's risk is the first step in potential care and treatment planning."

The nomogram, which has been deployed as a freely available online risk calculator at https://riskcalc.org/COVID19/, was developed using data from nearly 12,000 patients enrolled in Cleveland Clinic's COVID-19 Registry, which includes all individuals tested at Cleveland Clinic for the disease, not just those who test positive.

Data scientists, including co-author of the study Michael Kattan, Ph.D., Chair of Lerner Research Institute's Department of Quantitative Health Sciences, used statistical algorithms to transform data from registry patients' electronic medical records into the first-of-its-kind nomogram.

This study revealed several novel insights into disease risk, including:

  • Patients who have received the pneumococcal polysaccharide vaccine (PPSV23) and flu vaccine are less likely to test positive for COVID-19 than those who have not received the vaccinations.
  • Patients actively taking melatonin (over-the-counter sleep aid), carvedilol (high blood pressure and heart failure treatment) or paroxetine (anti-depressant) are less likely to test positive than patients not taking the drugs.
  • Patients of low socioeconomic status (as measured in this study by zip code) are more likely to test positive than patients of greater economic means.
  • Patients of Asian descent are less likely than Caucasian patients to test positive.

"Our findings corroborated several risk factors already reported in the existing literature - including that being male and of advancing age both increase the likelihood of testing positive for COVID-19 - but we also put forth some new associations," said Dr. Jehi. "Further validation and research are needed into these initial insights but these correlations are extremely intriguing."

In a previous network medicine study led by Lerner Research Institute scientists, 16 drugs (including melatonin, carvedilol, and paroxetine) and three-drug combinations were identified as candidates for repurposing as potential COVID-19 treatments. While these findings suggest an association between taking these medications and reduced risk of testing positive for COVID-19, additional studies are needed to assess how these drugs may affect disease progression.

"The data suggest some interesting correlations but do not confer cause and effect," said Kattan. "For example, our data do not prove that melatonin reduces your risk of testing positive for COVID-19. There may be something else about patients who take melatonin that is indeed responsible for their apparent reduced risk, and we don't know what that is. Consumers should not change anything about their behavior based on our findings."

The nomogram, developed using data from patients tested at Cleveland Clinic for COVID-19 before April 2, 2020, showed good performance and reliability when used in a different geographic region (Florida) and over time (patients tested after April 2, 2020). This suggests that the patterns and predictors identified in the model are consistent across regions and communities and can be potentially adapted for clinical practice in healthcare systems across the country.

"This nomogram will bring precision medicine to the COVID-19 pandemic, helping to enable researchers and physicians to predict an individual's risk of testing positive," said Kattan. "Additionally, while testing solutions continue to be needed, it is so important to make sure we are responsibly and optimally dispatching our resources - including clinical personnel, personal protective equipment, and hospital beds. Our risk prediction model stands to greatly assist hospital systems in this planning."