UTA's Allison Sullivan wins grant to improve efficiency in testing software updates

More efficient modeling to test software updates

Software developers use modeling to test software reliability, but because software is updated regularly, re-modeling each new version can consume valuable time.

Allison Sullivan, assistant professor in the Computer Science and Engineering (CSE) Department at The University of Texas at Arlington, recently received a three-year, $490,000 grant from the National Science Foundation to explore testing software updates without re-testing unchanged parts of the code.

“Software is growing more and more complex. It’s hard to create a software model with millions of lines of code and multiple developers,” Sullivan said. “So if all we’ve done is added a feature, can we just run a model on that change? If so, we can cut the time it takes to test code from several hours overnight to maybe just an hour.”  

Using modeling software called Alloy, Sullivan will focus on three ways of interacting with a model: writing, testing, and synthesizing.  

For writing, the research will look at ways to maximize reuse of past scenarios. It will also explore techniques for new strategies based on which components of the model changed, presenting the impact of those changes to the user.

Testing will involve writing the model, executing the code, and observing what the model allows. This includes tests that reason over the changed code. Users can then decide whether to allow specific errors or to correct them.  

Synthesizing allows the user to refine testing by giving the model an expected set of behaviors, then automatically generating Java, C, or C++ programs to match the behaviors. Sullivan said she hopes to develop a way to write test cases over the model and automatically build a new model that will just look at changes and not re-run the entire process. 
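The change-aware idea running through all three directions can be pictured with a small sketch: re-run only the checks that touch components of the model that changed. This is a hypothetical illustration of the general concept in Python, not Sullivan's actual technique or Alloy's syntax; all names here are invented.

```python
# Hypothetical sketch of change-aware test selection: re-run only the
# checks that reason over model components affected by an update.

def select_tests(tests, changed_components):
    """Return only the tests whose covered components intersect the change set."""
    return [name for name, covered in tests.items()
            if covered & changed_components]

# Each (invented) test lists the model components it reasons over.
tests = {
    "check_acyclic":   {"FileSystem", "Dir"},
    "check_ownership": {"User", "File"},
    "check_links":     {"File", "Link"},
}

# Suppose only the File component changed in this update...
changed = {"File"}

# ...then just two of the three checks need to be re-run,
# instead of the full overnight suite.
to_run = select_tests(tests, changed)
```

The payoff is the one Sullivan describes: if a change touches a single component, analysis time scales with the change, not with the whole model.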

“Traditionally, it has been very expensive to analyze models,” she said. “With hardware advances, it is now more feasible to apply software to analyze models of real-world systems, and we are working to make the process more efficient.”  

Sullivan’s grant is important because it will help software developers improve their products while also increasing efficiency, said Hong Jiang, CSE chair. 

“Software models are the gold standard for testing, but writing a model correctly and applying it to immense amounts of code, no matter how small a change, is time-consuming and inefficient,” Jiang said. “Dr. Sullivan’s work has the potential to make testing easier and better, which will improve quality.”  

South Korean researchers develop deep learning model to predict adverse drug-drug interactions

Using gene expression data, the new model can predict how some drug-drug interactions can lead to adverse effects on the human body

Prescriptions for multiple drugs, or polypharmacy, are often recommended for the treatment of complex diseases. However, upon ingestion, multiple drugs may interact in an undesirable manner, resulting in severe adverse effects or decreased clinical efficacy. Early detection of such drug-drug interactions (DDIs) is therefore essential to prevent patients from experiencing adverse effects.

Currently, computational models and neural network-based algorithms examine prior records of known drug interactions and the structures and side effects associated with them. These approaches assume that similar drugs have similar interactions, and identify drug combinations associated with similar adverse effects.

Although understanding the mechanisms of DDIs at a molecular level is essential to predict their undesirable effects, current models rely on structures and properties of drugs, with predictive range limited to previously observed interactions. They do not consider the effect of DDIs on genes and cell functionality.

To address these limitations, Associate Professor Hojung Nam and Ph.D. candidate Eunyoung Kim from the Gwangju Institute of Science and Technology in South Korea developed a deep learning-based model to predict DDIs based on drug-induced gene expression signatures.  

The DeSIDE-DDI model consists of two parts: a feature generation model and a DDI prediction model. The feature generation model predicts a drug's effect on gene expression by considering both the structure and properties of the drug while the DDI prediction model predicts various side effects resulting from drug combinations.
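The two-part structure can be sketched as a toy pipeline: stage 1 turns a drug's structural features into a gene-expression-like signature, and stage 2 scores a drug pair for interaction risk from the combined signatures. The linear "models" and all numbers below are invented for illustration; the actual DeSIDE-DDI components are trained deep networks.

```python
# Toy two-stage pipeline in the spirit of DeSIDE-DDI (all values invented).
import math

def generate_signature(structure_features, weights):
    """Stage 1: map a drug's structural features to a gene-expression signature."""
    return [sum(f * w for f, w in zip(structure_features, row))
            for row in weights]

def predict_interaction(sig_a, sig_b):
    """Stage 2: score a drug pair via a sigmoid of the signatures' dot product."""
    combined = sum(a * b for a, b in zip(sig_a, sig_b))
    return 1.0 / (1.0 + math.exp(-combined))

# Toy weight matrix: 2 "genes" x 3 structural features.
W = [[0.5, -0.2, 0.1],
     [0.3,  0.4, -0.6]]

drug_a = generate_signature([1.0, 0.0, 1.0], W)  # hypothetical drug A
drug_b = generate_signature([0.0, 1.0, 0.5], W)  # hypothetical drug B
risk = predict_interaction(drug_a, drug_b)       # interaction score in (0, 1)
```

The design point mirrors the paper's: because the pair is compared through predicted gene-expression signatures rather than raw structures, the second stage can also score compounds it has never seen interact before.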

Describing the model's key features, Prof. Nam explains, “Our model considers the effects of drugs on genes by utilizing gene expression data, providing an explanation for why a certain pair of drugs cause DDIs. It can predict DDIs for currently approved drugs as well as for novel compounds. This way, the threats of polypharmacy can be resolved before new drugs are made available to the public.”

What’s more, since not all compounds have drug-treated gene expression signatures, the model uses a pre-trained compound generation model to generate expected drug-treated gene expressions.

Discussing its real-life applications, Prof. Nam remarks, “This model can discern potentially dangerous drug pairs, acting as a drug safety monitoring system. It can help researchers define the correct usage of the drug in the drug development phase.”

MGH, HMS researchers develop machine-learning model that predicts homelessness among US soldiers before their transition to civilian life

A study by a Mass General-led team could lead to more targeted strategies to prevent homelessness among military personnel

Investigators led by Massachusetts General Hospital (MGH) and Harvard Medical School (HMS) have found that lifetime depression, the trauma of having a loved one murdered, and post-traumatic stress disorder (PTSD) are the three greatest predictors of homelessness among U.S. Army soldiers after transitioning to civilian life. Their study, published in the American Journal of Preventive Medicine, used an innovative machine-learning approach to accurately predict which military personnel are at greatest risk and should therefore be targeted with specific interventions to mitigate their chances of becoming homeless.

“We’ve long been limited in our ability to predict and prevent homelessness because most approaches have been focused on helping people after they’ve become homeless, rather than taking action before it ever occurs,” says lead author Katherine Koh, MD, an investigator at MGH and for the Boston Health Care for the Homeless Program. “Our prediction model is highly actionable and we’re now designing an intervention that links the most vulnerable soldiers to support services before their active duty ends, then follows them over time.”

Currently, there are an estimated 40,000 homeless veterans in the U.S., comprising eight percent of the homeless population. In 2009, the Obama administration announced a national initiative to end veteran homelessness within five years, committing significant federal resources to the effort. While homelessness has decreased about 50 percent since then, veterans remain disproportionately represented in the homeless population, and in 2020 the number of homeless veterans increased for the first time in years.

As part of their study, MGH and other academic partners drew on data from nearly 17,000 soldiers, collected between 2011 and 2014 through the Army’s STARRS-LS study, which asked questions about housing history, adverse childhood experiences, traumatic events or stressors in their lives, and physical and mental health problems. Using machine learning rather than traditional statistical methodology, researchers coded the responses to build a model predicting who might be at the greatest risk of homelessness. Of the approximately 2,000 predictor variables the model considered, self-reported lifetime histories of depression, the trauma of having a loved one murdered, and post-traumatic stress disorder were found to be the strongest predictors.
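The core step of ranking candidate predictors can be illustrated with a hedged sketch. The study used far richer machine-learning models over roughly 2,000 variables; here each binary predictor is scored simply by the difference in outcome rate between respondents with and without it, and the data are made up for illustration.

```python
# Toy predictor ranking: score each binary predictor by how much the
# outcome rate differs between respondents who have it and who don't.
# (Invented data; the actual study used machine-learning models.)

def outcome_rate(rows, outcome):
    """Fraction of the given respondents with a positive outcome."""
    return sum(outcome[i] for i in rows) / len(rows) if rows else 0.0

def rank_predictors(records, outcome):
    """Rank binary predictors by |rate(with) - rate(without)|, strongest first."""
    scores = {}
    n = len(outcome)
    for name, values in records.items():
        with_ = [i for i in range(n) if values[i] == 1]
        without = [i for i in range(n) if values[i] == 0]
        scores[name] = abs(outcome_rate(with_, outcome)
                           - outcome_rate(without, outcome))
    return sorted(scores, key=scores.get, reverse=True)

# Toy data: 6 respondents, 3 candidate predictors, 1 outcome.
predictors = {
    "depression":  [1, 1, 0, 1, 0, 0],
    "ptsd":        [1, 0, 0, 1, 0, 1],
    "age_over_30": [0, 1, 1, 0, 1, 1],
}
homeless = [1, 1, 0, 1, 0, 0]

ranking = rank_predictors(predictors, homeless)  # strongest predictor first
```

In this toy example "depression" comes out on top because it perfectly separates the outcome; the study's models weigh thousands of such variables jointly rather than one at a time.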

“For the first time we’re applying to homelessness a ‘personalized medicine approach’ that leverages differences in an individual’s biology, lifestyle, and environment to determine who is at greatest risk with a higher degree of precision than ever before,” notes Ronald Kessler, Ph.D., a nationally recognized sociologist and senior author of the study. Plans call for giving soldiers nearing the end of their active duty questionnaires, he adds, that would help the U.S. Department of Veterans Affairs to identify and proactively target at-risk soldiers with case management intervention.

“The presence of veterans among the homeless population in this country is still regarded by many as a matter of public shame, and for decades wasn’t given the attention it deserves,” says Koh. “Our collaborative work is directly addressing that problem, and we believe utilizing prediction models such as the one we’ve developed could play a role in preventing homelessness not only among veterans but also other high-risk populations.”

Koh is an instructor in the Department of Psychiatry at MGH and Harvard Medical School. Kessler is a professor of Health Care Policy at Harvard Medical School. Co-authors include Murray Stein, MD, MPH, professor of Psychiatry and Public Health and vice-chair for Clinical Research in Psychiatry at the University of California San Diego, and Robert Ursano, MD, professor of Psychiatry and Neuroscience at Uniformed Services University of the Health Sciences.