iStock

University of Gothenburg develops AI-based online decision support for doctors’ difficult judgments on cardiac arrest

When patients receive care after cardiac arrest, doctors can now enter patient data into a web-based app to find out how thousands of similar patients have fared. Researchers at the University of Gothenburg in Sweden have developed three such decision support systems for cardiac arrest that may, in the future, make a major difference to doctors’ work.

One of these decision support tools (SCARS-1), now published, can be downloaded free of charge from the Gothenburg Cardiac Arrest Machine Learning Studies website. However, results from the algorithm need to be interpreted by people with the right skills. AI-based decision support is expanding rapidly in many areas of health care, and extensive discussions are underway on how care services and patients alike can benefit the most from it.

The app accesses data from the Swedish Cardiopulmonary Resuscitation Register on tens of thousands of patient cases. The University of Gothenburg researchers have used an advanced form of machine learning to teach clinical prediction models to recognize various factors that have affected previous outcomes. The algorithms take into account numerous factors relating, for example, to the cardiac arrest, treatment provided, previous ill health, medication, and socioeconomic status.

New evidence-based methods

It will be a few years before official recommendations for cardiac arrest are likely to include AI-based decision support, but doctors are free to use these prediction models and other new, evidence-based methods. The research group working on decision support for cardiac arrest is headed by Araz Rawshani, a researcher at the University’s Sahlgrenska Academy and resident physician in cardiology at Sahlgrenska University Hospital.

“My colleagues and I who tested the tool see great potential for its use in our everyday clinical work. Often the answer from the decision support confirms an opinion the doctor has already formed. It helps us not to expose patients to painful care without benefit, while at the same time saving healthcare resources,” Rawshani says.

Araz Rawshani, principal investigator at the Institute of Medicine, University of Gothenburg, consultant physician at Sahlgrenska University Hospital, and registrar for the Swedish Cardiopulmonary Resuscitation Register. Photo: Johan Wingborg

He emphasizes, however, that the decision support is still at a research stage and has not yet been implemented in the hospital's guidelines. Even so, it can be used to make a survival calculation, in much the same way that, for example, the National Diabetes Registry assists doctors and nurses with a so-called risk engine.

Highly accurate

To date, the research group has published two decision support tools. One clinical prediction model, known as SCARS-1, is presented in The Lancet’s eBioMedicine journal. This model indicates whether a new patient case resembles other, previous cases where, 30 days after their cardiac arrest, patients had survived or died.

The model’s accuracy is unusually high. Based on the ten most significant factors alone, the model has a sensitivity of 95 percent and a specificity of 89 percent. The “AUC-ROC value” (ROC being the receiver operating characteristic curve for the model and AUC the area under the ROC curve) for this model is 0.97. The highest possible AUC-ROC value is 1.0 and the threshold for a clinically relevant model is 0.7.
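As a point of reference, sensitivity, specificity, and AUC-ROC can all be computed directly from a model's predictions and scores. The sketch below uses small hypothetical labels and scores purely to illustrate the definitions; it is not the SCARS-1 model or its data.

```python
# Illustrative only: hypothetical labels (1 = survived to 30 days, 0 = did not)
# and prediction scores, used to show how the three reported metrics are defined.

def sensitivity_specificity(y_true, y_pred):
    # Sensitivity: fraction of true positives correctly identified.
    # Specificity: fraction of true negatives correctly identified.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def auc_roc(y_true, y_score):
    # AUC-ROC equals the probability that a randomly chosen positive case
    # receives a higher score than a randomly chosen negative case
    # (ties count as one half).
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true  = [1, 1, 1, 0, 0, 0, 1, 0]
y_score = [0.9, 0.8, 0.7, 0.4, 0.2, 0.1, 0.6, 0.65]
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]

sens, spec = sensitivity_specificity(y_true, y_pred)
auc = auc_roc(y_true, y_score)
print(sens, spec, auc)  # → 1.0 0.75 0.9375
```

An AUC-ROC of 1.0 would mean every surviving patient scored higher than every non-survivor; 0.5 is no better than chance, which is why 0.97 is considered exceptionally high.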

One piece of the puzzle

This decision support was developed by Fredrik Hessulf, a doctoral student at Sahlgrenska Academy, University of Gothenburg, and an anesthesiologist at Sahlgrenska University Hospital/Mölndal.

“This decision support is one of several pieces in a big puzzle: the doctor’s overall assessment of a patient. We have many different factors to consider in deciding whether to go ahead with cardiopulmonary resuscitation. It’s a highly demanding treatment that we should give only to patients who will benefit from it and be able, after their hospital stay, to lead a life of value to themselves,” Hessulf says.

This form of support is based on 393 factors affecting patients’ chances of surviving for 30 days after their cardiac arrest. The model’s high accuracy may be explained by the huge number of patient cases (roughly 55,000) on which the algorithm is based, and by the fact that ten of the nearly 400 factors have been found to affect survival heavily. By far the most important factor was whether the heart regained a viable rhythm after the patient’s admission to the emergency department.

Risk of new cardiac arrest

The second published decision support tool has been presented in the journal Resuscitation. Its predictive models are based on 886 factors in 5,098 patient cases from the Swedish Cardiopulmonary Resuscitation Register, drawn from patients who survived their out-of-hospital cardiac arrest until discharge from the hospital. The tool is partly aimed at helping doctors identify which patients are at risk of another cardiac arrest, or of death, within a year of discharge. It also aims to highlight which factors are important for long-term survival after cardiac arrest, an aspect of the subject that has not been well studied.

“The accuracy of this tool is reasonably good. It can predict with about 70 percent reliability whether the patient will die, or will have had another cardiac arrest, within a year. Like Fredrik’s tool, this one has the advantage that just a few factors can predict outcome almost as well as the model with several hundred variables,” says Gustaf Hellsén, the research doctor who developed this decision support tool.

“We hope,” he continues, “to succeed in developing this prediction model, to enhance its precision. Today, it can already serve as support for doctors in identifying factors with an important bearing on survival among cardiac arrest patients who are to be discharged from hospital.”

Three decision support tools for different aspects of cardiac arrest

Currently, the SCARS-1 tool (developed by Fredrik Hessulf, addressing survival and neurological function 30 days after cardiac arrest) is available to use as an online app. SCARS-2 (developed by Gustaf Hellsén and designed to support decisions on the risk of new cardiac arrest after discharge) will be launched shortly. During 2023, the publication of SCARS-3 (for in-hospital cardiac arrest) is also planned.
Doctors and other health professionals can read more about these decision support tools and download the applications at http://gocares.se.

Caltech engineers discover that Leonardo da Vinci's understanding of gravity was centuries ahead of his time

In an article published in the journal Leonardo, the researchers draw upon a fresh look at one of da Vinci's notebooks to show that the famed polymath had devised experiments to demonstrate that gravity is a form of acceleration—and that he further modeled the gravitational constant to around 97 percent accuracy. 

Da Vinci, who lived from 1452 to 1519, was well ahead of the curve in exploring these concepts. It wasn't until 1604 that Galileo Galilei would theorize that the distance covered by a falling object was proportional to the square of time elapsed and not until the late 17th century that Sir Isaac Newton would expand on that to develop a law of universal gravitation, describing how objects are attracted to one another. Da Vinci's primary hurdle was being limited by the tools at his disposal. For example, he lacked a means of precisely measuring time as objects fell.

Da Vinci's experiments were first spotted by Mory Gharib, the Hans W. Liepmann Professor of Aeronautics and Medical Engineering, in the Codex Arundel, a collection of papers written by da Vinci that cover science, art, and personal topics. In early 2017, Gharib was exploring da Vinci's techniques of flow visualization to discuss with students in a graduate course when, in the newly released Codex Arundel (which can be viewed online courtesy of the British Library), he noticed a series of sketches showing triangles generated by sand-like particles pouring out of a jar.

"What caught my eye was when he wrote ‘Equatione di Moti' on the hypotenuse of one of his sketched triangles—the one that was an isosceles right triangle," says Gharib, lead author of the Leonardo paper. "I became interested to see what Leonardo meant by that phrase."

To analyze the notes, Gharib worked with colleagues Chris Roh, at the time a postdoctoral researcher at Caltech and now an assistant professor at Cornell University, as well as Flavio Noca of the University of Applied Sciences and Arts Western Switzerland in Geneva. Noca provided translations of da Vinci's Italian notes (written in his famous left-handed mirror writing that reads from right to left) as the trio pored over the manuscript's diagrams.

In the papers, da Vinci describes an experiment in which a water pitcher is moved along a straight path parallel to the ground, dumping out either water or a granular material (most likely sand) along the way. His notes make it clear that he was aware the water or sand would not fall at a constant velocity but would accelerate; that the material stops accelerating horizontally once it is no longer influenced by the pitcher; and that its acceleration is then purely downward, due to gravity.

If the pitcher moves at a constant speed, the line created by falling material is vertical, so no triangle forms. If the pitcher accelerates at a constant rate, the line created by the collection of falling material makes a straight but slanted line, which then forms a triangle. And, as da Vinci pointed out in a key diagram, if the pitcher's motion is accelerated at the same rate that gravity accelerates the falling material, it creates an equilateral triangle—which is what Gharib originally noticed that da Vinci had highlighted with the note "Equatione di Moti," or "equalization (equivalence) of motions."

Da Vinci sought to mathematically describe that acceleration. It is here, according to the study's authors, that he didn't quite hit the mark. To explore da Vinci's process, the team used computer modeling to run his water vase experiment. Doing so yielded da Vinci's error.

"What we saw is that Leonardo wrestled with this, but he modeled it as the falling object's distance being proportional to 2 to the t power [with t representing time] instead of proportional to t squared," Roh says. "It's wrong, but we later found out that he used this sort of wrong equation in the correct way." In his notes, da Vinci illustrated an object falling for up to four intervals of time—a period through which graphs of both types of equations line up closely.
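The four-interval agreement is easy to check numerically. The comparison below is a minimal illustration in arbitrary units of the point the researchers describe, not their actual computer model of the water vase experiment:

```python
# Compare da Vinci's (incorrect) exponential model of falling distance,
# d ∝ 2^t, with the correct quadratic law, d ∝ t^2, over the four time
# intervals he illustrated (arbitrary units).
for t in range(1, 5):
    da_vinci = 2 ** t  # da Vinci's model
    galileo = t ** 2   # the correct law
    print(t, da_vinci, galileo)

# The two agree exactly at t = 2 and t = 4 (giving 4 and 16) and stay
# close in between, which is why the curves line up over this short
# range before diverging rapidly for larger t.
```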

"We don't know if da Vinci did further experiments or probed this question more deeply," Gharib says. "But the fact that he was grappling with this problem in this way—in the early 1500s—demonstrates just how far ahead his thinking was."

The paper is titled "Leonardo da Vinci's Visualization of Gravity as a Form of Acceleration."

University of North Florida wins Congressional appropriation for improved IT infrastructure

The University of North Florida's Information Technology Services recently received a congressionally directed appropriation of $750,000 to support a much-needed cloud supercomputing and cybersecurity infrastructure initiative.

The enhanced virtual cloud supercomputing infrastructure will address the need for a scalable, secure learning and research environment for UNF students, faculty, and staff. The project will benefit both traditional and non-traditional students by providing a consistent set of tools, software, and security, along with closer interactivity with faculty throughout their academic careers at UNF.

It will provide the UNF community with access to secure campus lab computers and systems from any location; enhance existing computer and research labs; allow university resources to be utilized anytime, anywhere, in a device-agnostic environment; and provide the tools students need to build real-world systems while providing faculty the ability to closely monitor and assist students both on and off campus.

The new virtual infrastructure will allow cyber security instruction, research, and operations in a controlled and scalable environment.  The environment will seamlessly allow remote access to software restricted to running on university-owned hardware. In addition, it will provide remote control of a dedicated machine for student assignments and projects.

Using this technology, UNF will be able to harness pools of available computers to work together on a common problem, as in a supercomputing cluster or a Google-like indexing and search task. Additionally, this environment and service will relieve students of the need to continually upgrade their computers to handle demanding assignments and projects. Students will be able to use remote software or hardware when classroom computers are not powerful enough.

An aerial view of the Arctic tundra in Nunavut, Canada, from 2023. (Image credit: Getty Images)

New data gives NOAA a broader view of global climate

The update includes more Arctic data and a longer historical record

NOAA’s National Centers for Environmental Information (NCEI) is updating its current global climate dataset to provide more information about the Earth’s climate, while also extending the planet’s observed temperature record by 30 years.

The update to NCEI’s current NOAA Global Temperature dataset — one of the most visible and widely used datasets to assess global climate — will debut in the upcoming January 2023 global climate report to be released on February 14, 2023. This new global climate dataset will expand upon and replace the current one, in use since 2019.

A NOAA crew deploying an Argo float, which provides real-time climate data about the ocean. Credit: NOAA

"This new version of NOAA's global surface temperature dataset is part of NCEI’s commitment to providing a complete and comprehensive perspective of the Earth’s climate," said NCEI director Deke Arndt. “Regular updates to our datasets help us expand our understanding of our dynamic planet."

There are two significant additions in this update:

  • More data for the Arctic region are included, as well as new scientific methods for monitoring climate in other locations with limited climate data. 
  • Using improved methodology to analyze NCEI’s archival land and ocean observations, 30 more years will be added to the world’s current climate record, extending it back to 1850. 

NOAA’s Global Temperature datasets consist of data from weather stations across the world’s land surface, as well as ocean surface data from ships, buoys, surface drifters, profiling floats, and other uncrewed automatic systems. Until recently, however, monitoring environmental conditions around the Arctic and Antarctic has been more challenging due to fewer temperature observations in these regions. 

The updated version now includes data from more buoys from around the Arctic, along with enhanced methods of calculating temperatures in the Earth’s polar regions. 

The new version of NOAA’s Global Temperature dataset shows similar warming trends in the Earth’s climate when compared to the previous version, indicating that short- and long-term climate trends remain consistent across datasets.

This new information comes at a critical time in the Earth’s climate history. The Arctic is the fastest-warming region globally, warming at least three times faster than any other region. The top 10 warmest years on record for the globe have all occurred after 2010. The last nine years (2014–2022) have been the warmest on record.

Ryan McClelland displays a structural mount for the Survey and Time-domain Astrophysical Research Explorer (STAR-X) mission. Credits: Henry Dennis

NASA Goddard engineer McClelland shifts to AI to design mission hardware

Spacecraft and mission hardware designed by artificial intelligence may resemble bones left by some alien species, but they weigh less, tolerate higher structural loads, and take a fraction of the time to develop compared with parts designed by humans.

Defined by a human designer and filled in by an artificial intelligence program, this scaffold was milled from a solid block of aluminum; it features connections for mirrors and instruments, as well as pathways preserved for laser light and for human hands to attach and adjust sensors. Credit: Henry Dennis

“They look somewhat alien and weird,” research engineer Ryan McClelland said, “but once you see them in function, it makes sense.”

McClelland pioneered the design of specialized, one-off parts using commercially available AI software at NASA’s Goddard Space Flight Center in Greenbelt, Maryland, producing hardware he has dubbed evolved structures.

To create these parts, a computer-assisted design (CAD) specialist starts with the mission’s requirements and draws in the surfaces where the part connects to the instrument or spacecraft – as well as any bolts and fittings for electronics and other hardware. The designer might also need to block out a path so that the algorithm doesn’t block a laser beam or optical sensor. Finally, more complex builds might require spaces for technicians’ hands to maneuver for assembly and alignment.

Once all off-limits areas are defined, the AI connects the dots, McClelland said, producing complex structure designs in as little as an hour or two. “The algorithms do need a human eye,” he said. “Human intuition knows what looks right, but left to itself, the algorithm can sometimes make structures too thin.”

These evolved structures save up to two-thirds of the weight compared with traditional components, he said, and can be milled by commercial vendors. “You can perform the design, analysis, and fabrication of a prototype part, and have it in hand in as little as one week,” McClelland said. “It can be radically fast compared with how we’re used to working.”

Parts are also analyzed using NASA-standard validation software and processes to identify potential points of failure, McClelland said. “We found it lowers risk. After these stress analyses, we find the parts generated by the algorithm don’t have the stress concentrations that you have with human designs. The stress factors are almost ten times lower than parts produced by an expert human.”

McClelland’s evolved components have been adopted by NASA missions in different stages of design and construction, including astrophysics balloon observatories, Earth-atmosphere scanners, planetary instruments, space weather monitors, space telescopes, and even the Mars Sample Return mission.

Goddard physicist Peter Nagler turned to evolved structures to help develop the EXoplanet Climate Infrared TElescope (EXCITE) mission, a balloon-borne telescope designed to study hot Jupiter-type exoplanets orbiting other stars. Currently under construction and testing, EXCITE will use a near-infrared spectrograph to perform continuous observations of each planet's orbit about its host star.

“We have a couple of areas with very tricky design requirements,” Nagler said. “There were combinations of specific interfaces and exacting load specifications that were proving to be a challenge for our designers.”

McClelland designed a titanium scaffold for the back of the EXCITE telescope, where the IR receiver housed inside an aluminum cryogenic chamber connects to a carbon fiber plate supporting the primary mirror. “These materials have very different thermal expansion properties,” Nagler said. “We had to have an interface between them that won’t stress either material.”

A long-duration NASA Super-Pressure Balloon will loft the EXCITE mission’s SUV-sized payload, with an engineering test flight planned as early as the fall of 2023.

Ideal Design Solution for NASA’s Custom Parts

AI-assisted design is a growing industry, with everything from equipment parts to entire car and motorcycle chassis being developed by computers.

The use case for NASA is particularly strong, McClelland said.

“If you’re a motorcycle or car company,” McClelland said, “there may be only one chassis design that you’re going to produce, and then you’ll manufacture a bunch of them. Here at NASA, we make thousands of bespoke parts every year.”

3D printing with resins and metals will unlock the future of AI-assisted design, he said, enabling larger components such as structural trusses, complex systems that move or unfold, or advanced precision optics. “These techniques could enable NASA and commercial partners to build larger components in orbit that would not otherwise fit in a standard launch vehicle; they could even facilitate construction on the Moon or Mars using materials found in those locations.”

Merging AI, 3D printing or additive manufacturing, and in-situ resource utilization will advance In-space Servicing, Assembly, and Manufacturing (ISAM) capabilities. ISAM is a key priority for U.S. space infrastructure development as defined by the White House Office of Science and Technology Policy’s ISAM National Strategy and ISAM Implementation Plan.

This work is supported by the Center Innovation Fund in NASA's Space Technology Mission Directorate as well as Goddard’s Internal Research and Development (IRAD) program.