UCF's Virtual Readability Lab presents COVID-19 forecasting research at IEOM Orlando

A UCF-developed forecast model, which uses machine learning, was able to predict the spread of COVID-19 cases in Florida. Although more research is needed, the model could become useful in predicting the spread of the next big virus, which could help healthcare agencies prepare their response.

Md Mamunur Rashid, a graduate researcher at UCF’s Virtual Readability Lab, presented the study’s results alongside fellow researcher and UCF Assistant Professor Ben Sawyer ’14MS ’15PhD at the Seventh North American International Conference on Industrial Engineering and Operations Management (IEOM) in Orlando, June 12-14.

The meeting is a top conference in its field, serving as a forum for researchers, academics, and practitioners to exchange ideas and discuss recent developments.

Their work uses machine learning methods to predict parameters key to a proper COVID-19 response. A proper health response considers the planning and management of healthcare systems, determines where healthcare providers are most needed, initiates programs ensuring wellbeing, and facilitates quality education. These responses can be planned and designed only when the future can be forecasted, which can be accomplished through the models developed in this research.

The numbers of patients affected, hospitalized, and deceased in the state of Florida are considered to provide a blueprint for regional pandemic response.

Implications of this work include a better understanding of opportunities and appropriate tools for short-term prediction of future trends when variability is high, as well as replacement strategies for datasets with missing values. Partial, unrecorded, or non-updated data from the Florida Department of Health contributed to the missing values in the datasets. A strategy was applied to complete the partially recorded or missing data, but the data retained high variability.

“Robust predictions of COVID-19 for the future rely upon modeling of COVID data for unique regional populations,” Sawyer says. “As we transition to living alongside this virus, the present study provides important insight, characterizing how this virus interacts with the exceptional diversity of age and culture found in Florida.”

How the Model Works

To predict these variables, 20 inputs across four categories (type of test performed, gender, race, and age group) were used. Official data from the Florida Department of Health were fed into a linear regression model, a fuzzy logic model, and a long short-term memory (LSTM) deep learning model with the intent of producing predictions as close as possible to the actual numbers.
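The setup can be illustrated with the simplest of the three approaches. The sketch below is hypothetical, not the study’s code or data: the feature matrix, sizes, and synthetic targets are all assumptions, chosen only to show an ordinary least-squares fit on a 20-feature input table.

```python
import numpy as np

# Illustrative sketch only: synthetic stand-ins for the study's 20
# tabular inputs (test type, gender, race, age group) and its targets.
rng = np.random.default_rng(0)
X = rng.random((100, 20))                  # 100 days x 20 input features
true_w = rng.random(20)
y = X @ true_w + rng.normal(0, 0.01, 100)  # noisy stand-in for case counts

# Ordinary least squares, the simplest of the three compared models
w, *_ = np.linalg.lstsq(X, y, rcond=None)
y_pred = X @ w                             # in-sample predictions
```

The fuzzy logic and LSTM models replace this linear map with rule-based and recurrent architectures, respectively, but consume the same input table.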

The mean absolute percentage error (MAPE) was calculated to measure the deviation of the predicted results from the actual values. In addition, a one-way analysis of variance (ANOVA) model was developed for each output parameter to statistically assess the results of these models.
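For reference, MAPE averages the absolute prediction error as a percentage of each actual value; it can be computed in a few lines. The counts below are hypothetical, not figures from the study.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    ) / len(actual)

# Hypothetical reported vs. predicted counts: each prediction is off
# by 5%, so the MAPE is about 5.0
error = mape([100, 200, 300], [105, 210, 315])
```

A lower MAPE means the model’s predictions sit closer to the reported numbers, which is how the three models are ranked below.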

The LSTM deep learning model outperformed the fuzzy model, which in turn outperformed linear regression, in terms of the MAPE for the “number of Florida residents affected.”

For the “number of patients hospitalized,” the LSTM deep learning model again outperformed the regression model, while the regression model and the fuzzy model were not significantly different.

However, no model significantly outperformed the other for “the number of patient deaths.”

“These findings suggest that in regional data, the fuzzy model outperforms linear regression, and the LSTM deep learning model outperforms both the fuzzy model and the linear regression model,” Rashid says.

Case Western Reserve built AI predicts response to immunotherapy in cancer patients

Collaboration between pharmaceutical companies and the Center for Computational Imaging and Personalized Diagnostics (CCIPD) at Case Western Reserve University has led to the development of artificial intelligence (AI) tools to benefit patients with non-small cell lung cancer (NSCLC) based on an analysis of routine tissue biopsy images, according to new research.

This year, more than 236,000 adults in the United States will be diagnosed with lung cancer—about 82% of them with non-small cell lung cancer, according to the American Society of Clinical Oncology.

Researchers at the CCIPD used AI to identify biomarkers from biopsy images for patients with NSCLC, as well as gynecologic cancers, that help predict the response to immunotherapy and clinical outcomes, including survival.

“We have shown that the spatial interplay of features relating to the cancer nuclei and tumor-infiltrating lymphocytes drives a signal that allows us to identify which patients are going to respond to immunotherapy and which ones will not,” said Anant Madabhushi, CCIPD director and Donnell Institute Professor of Biomedical Engineering at Case Western Reserve.

Immunotherapy is expensive, and studies show that only 20-30% of patients respond to the treatment, according to the National Institutes of Health and other sources. These findings validate that the AI technologies developed by the CCIPD can help clinicians determine how best to treat patients with NSCLC and gynecologic cancers, including cervical, endometrial, and ovarian cancer, Madabhushi said.

The study, drawn from a retrospective analysis of data, also revealed new biomarker information regarding a protein called PD-L1 that helps prevent immune cells from attacking non-harmful cells in the body.

Patients with high PD-L1 often receive immunotherapy as part of their treatment for NSCLC, while patients with low PD-L1 are often not offered immunotherapy, or it’s coupled with chemotherapy.

“Our work has identified a subset of patients with low PD-L1 who respond very well to immunotherapy and may not require immunotherapy plus chemotherapy,” Madabhushi said. “This could potentially help these patients avoid the toxicity associated with chemotherapy while also having a favorable response to immunotherapy.”

The multi-site, multi-institutional study examined three common immunotherapy drugs (called checkpoint inhibitor agents) that target PD-L1: atezolizumab, nivolumab, and pembrolizumab. The AI tools consistently predicted the response and clinical outcomes for all three immunotherapies.

The study is part of broader research conducted at CCIPD to develop and apply novel AI and machine-learning approaches to diagnose and predict the therapy response for various diseases and cancers, including breast, prostate, head and neck, brain, colorectal, gynecologic, and skin.

The study coincides with Case Western Reserve recently signing a license agreement with Picture Health to commercialize AI tools to benefit patients with NSCLC and other cancers.

HZB's Annika Bande calculates the 'fingerprints' of molecules with AI

"Macromolecules but also quantum dots, which often consist of thousands of atoms, can hardly be calculated in advance using conventional methods such as DFT," says PD Dr. Annika Bande at HZB. With her team, she has now investigated how computing time can be shortened using methods from artificial intelligence.

The idea: a computer program from the family of graph neural networks (GNNs) receives small molecules as input, with the task of determining their spectral responses. In the next step, the GNN program compares the calculated spectra with the known target spectra (DFT or experimental) and corrects the calculation path accordingly. Round after round, the result improves. The GNN program thus learns on its own how to calculate spectra reliably with the help of known spectra.
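The learn-by-correction loop described above can be sketched in miniature. The code below is not SchNet or any real GNN: it substitutes a linear surrogate model and plain gradient descent, with all sizes and data synthetic, purely to show how repeated comparison against target spectra drives the prediction error down round after round.

```python
import numpy as np

# Toy stand-in for the GNN training idea, NOT the actual architecture:
# a linear map from a molecular feature vector to a spectrum, corrected
# step by step against known (DFT or experimental) target spectra.
rng = np.random.default_rng(1)
n_mols, n_feat, n_bins = 50, 8, 16        # hypothetical sizes
X = rng.random((n_mols, n_feat))          # per-molecule input features
W_true = rng.random((n_feat, n_bins))
targets = X @ W_true                      # stand-in for reference spectra

W = np.zeros((n_feat, n_bins))            # parameters to be learned
losses = []
for _ in range(500):                      # "round after round"
    pred = X @ W                          # current spectra predictions
    err = pred - targets                  # gap to the known spectra
    losses.append(float(np.mean(err ** 2)))
    W -= 0.5 * X.T @ err / n_mols         # gradient step on the MSE loss
```

In the real setting the linear map is replaced by a message-passing network over the molecular graph, but the outer loop, comparing predicted and target spectra and updating the model, is the same.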

"We have trained five newer GNNs and found that enormous improvements can be achieved with one of them, the SchNet model: The accuracy increases by 20%, and this is done in a fraction of the computation time," says first author Kanishka Singh. Singh participates in the HEIBRiDS graduate school and is supervised by two experts from different backgrounds: computer science expert Prof. Ulf Leser from Humboldt University Berlin and theoretical chemist Annika Bande.

"Recently developed GNN frameworks could do even better," she says. "And the demand is very high. We therefore want to strengthen this line of research and are planning to create a new postdoctoral position from summer onwards as part of the Helmholtz project 'eXplainable Artificial Intelligence for X-ray Absorption Spectroscopy.'"