St George's AI discovers twisting of eye vessels could cause high blood pressure, heart disease

Research led by scientists at St George’s, University of London has discovered 119 areas in the genome that help to determine the size and shape of blood vessels at the back of the eye, and that an increase in ‘twisting’ of the arteries could cause high blood pressure and heart disease.

It’s relatively easy to take a high-resolution digital image of the back of the eye, allowing medical professionals and researchers to visualize the retina and its associated blood vessels and nerves. The eyes can act as a ‘window’ into the body, allowing researchers to directly study the characteristics of these blood vessels and gain information about the body.

Scientists have previously shown that the shape and size of blood vessels on the retina are associated with health conditions including high blood pressure, heart disease, diabetes, and obesity. However, until now, little was known about how genetics play a role in determining the architectural characteristics of these blood vessels.

Researchers studied retinal images from nearly 53,000 people who were enrolled in a large study called the UK Biobank.

They applied artificial intelligence (AI) technology to the images to quickly and automatically distinguish between the different types of blood vessels (arteries and veins), and to measure blood vessel width and the extent to which the vessels twist and turn.
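One standard way to quantify vessel ‘twisting’ is the tortuosity index: the arc length of a vessel’s centerline divided by the straight-line distance between its endpoints, so a perfectly straight vessel scores 1.0. The study’s exact measure isn’t detailed in this article, but a minimal sketch of that index, with illustrative coordinates, looks like this:

```python
import numpy as np

def tortuosity_index(centerline: np.ndarray) -> float:
    """Arc length of the vessel centerline divided by the chord
    (straight-line endpoint distance). A straight vessel scores 1.0;
    twistier vessels score higher.

    centerline: (N, 2) array of (x, y) pixel coordinates along the vessel.
    """
    segment_lengths = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    arc_length = segment_lengths.sum()
    chord = np.linalg.norm(centerline[-1] - centerline[0])
    return arc_length / chord

# Illustrative centerline for a gently curving vessel (not study data).
vessel = np.array([[0, 0], [10, 2], [20, -1], [30, 3], [40, 0]], dtype=float)
print(f"tortuosity index: {tortuosity_index(vessel):.3f}")
```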

The team then used a technique called a genome-wide association study (GWAS) to determine whether there were similarities in the DNA of people with similar blood vessel characteristics. They carried this out on the genetic data of 52,798 UK Biobank members.
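At its core, a GWAS regresses the trait of interest on each variant’s allele dosage (0, 1, or 2 copies of the alternate allele), one variant at a time, across the genome. A minimal per-variant test on simulated data, not the study’s pipeline (which would also adjust for covariates such as age and sex), might look like:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 52_798                             # individuals, matching the cohort size
genotype = rng.integers(0, 3, size=n)  # allele dosage per person: 0, 1, or 2
# Simulated trait (e.g., arterial tortuosity) with a small additive genetic effect.
trait = 0.05 * genotype + rng.normal(size=n)

# Linear regression of trait on dosage; a real GWAS repeats this test for
# millions of variants and applies a genome-wide significance threshold.
result = stats.linregress(genotype, trait)
print(f"effect size: {result.slope:.3f}, p-value: {result.pvalue:.2e}")
```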

The team then repeated the analyses on 5,000 people who were part of the EPIC-Norfolk Eye Study. Combining these results with the UK Biobank data, they identified 119 sections of the genome that are associated with retinal blood vessel shape and size characteristics – more than any previous study. Of the 119 sections found, 89 were linked to arterial twisting.

The level of twisting and turning of retinal arteries was the feature that was most strongly genetically determined. A higher level of twisting to the arteries also appeared to cause high diastolic blood pressure and heart disease. Diastolic blood pressure is a measure of the pressure in the arteries when the heart is between beats.

Professor Christopher Owen, Head of Chronic Disease Epidemiology at St George’s, University of London said: “It had been thought that high blood pressure might cause twisted arteries, but our work reveals that it’s the other way around. This genetic information is a vital piece of the puzzle in our understanding and could pave the way for new treatments in the future.

“Retinal imaging is already a mainstay at high-street optometrists. Our AI analysis of these images during routine eye checks could easily be done as part of a health check to identify those at high risk of developing high blood pressure or heart disease and in need of early intervention.”

The study was funded by the Medical Research Council and the British Heart Foundation.

Di Wang (from left), Rui Zhang, Tim Cernak, and Yingfu Lin in the Cernak Lab at the Chemistry Building. Image credit: Austin Thomason, Michigan Photography

Michigan builds AI algo to dramatically reduce the time to build molecules for better medicines

With a big assist from artificial intelligence and a heavy dose of human touch, Tim Cernak’s lab at the University of Michigan made a discovery that dramatically speeds up the time-consuming chemical process of building molecules that will be tomorrow’s medicines, agrichemicals, or materials.

The discovery, published in the Feb. 3 issue of Science, is the culmination of years of chemical synthesis and data science research by the Cernak Lab in the College of Pharmacy and Department of Chemistry.

The goal of the research was to identify key reactions in the synthesis of a molecule, ultimately reducing the process to as few steps as possible. In the end, Cernak and his team achieved the synthesis of a complex alkaloid found in nature in just three steps. Previous syntheses took between seven and 26 steps.

Replica of the complex molecule, stemoamide, built in just three steps in Tim Cernak’s Lab. Image credit: Austin Thomason, Michigan Photography

“Making a chemical structure that has atoms in just the right place to give you efficacious and nontoxic medicines, for instance, is tricky,” said Cernak, assistant professor of medicinal chemistry and chemistry. “It requires a chemical synthesis strategy grounded in the chemical building blocks you can actually buy and then stitch together using chemical reactions.”

The accomplishment has powerful implications for speeding up the development of medicines.

Cernak compared the construction of these complex molecules to playing chess. You need to orchestrate a series of moves to get to the end of the game. While there’s a near-infinite number of possible moves, there’s a logic that can be followed.

“We developed a logic here, based on graph theory, to get to the end as quickly as possible,” he said.

Cernak and colleagues used SYNTHIA Retrosynthesis Software, which provides scientists with a database of pathways, or steps, and formulas for millions of molecular structures. This gave the team an enormous amount of computational synthesis data to play with.

Using an algorithm they developed to curate the data, the researchers identified the high-impact, or key, steps along the pathway, as well as the steps that made progress toward completing the synthesis but were ultimately inefficient for the process as a whole.
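Neither SYNTHIA’s internals nor the team’s curation algorithm are spelled out in this article, but the graph-theory framing can be illustrated: treat compounds as nodes, reactions as directed edges, and search for the shortest route from purchasable building blocks to the target. A toy sketch, with invented compound names rather than real SYNTHIA data:

```python
import networkx as nx

# Toy synthesis graph: nodes are compounds, edges are known reactions.
# Names and connectivity are illustrative only.
g = nx.DiGraph()
g.add_edges_from([
    ("building_block_A", "intermediate_1"),
    ("building_block_B", "intermediate_1"),
    ("intermediate_1", "intermediate_2"),
    ("intermediate_2", "target"),
    ("building_block_C", "intermediate_3"),
    ("intermediate_3", "target"),
])

# Shortest reaction sequence from any purchasable start to the target.
starts = ["building_block_A", "building_block_B", "building_block_C"]
routes = [nx.shortest_path(g, s, "target") for s in starts]
best = min(routes, key=len)
print(f"{len(best) - 1} steps: {' -> '.join(best)}")  # 2 steps via intermediate_3
```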

“We hope this research can lead to better medicines,” Cernak said. “So far, we have been limited in the molecular structures we can quickly access with chemical synthesis.”

Co-authors include Yingfu Lin, a senior research fellow in pharmacy; Rui (Sam) Zhang, a doctoral student in chemistry; and Di Wang, a doctoral student in pharmacy.

BYU professor D.J. Lee and students Shad Torrie and Andrew Sumsion sit in the press box at LaVell Edwards Stadium. Their AI technology could improve film study for college and NFL football teams. Photo by Nate Edwards/BYU Photo

BYU creates AI algo to benefit Super Bowl rivals

Players and coaches for the Philadelphia Eagles and Kansas City Chiefs will spend hours and hours in film rooms this week in preparation for the Super Bowl. They’ll study positions, plays, and formations, trying to pinpoint which opponent tendencies they can exploit, while also looking to their own film to shore up weaknesses.

New artificial intelligence technology being developed by engineers at Brigham Young University could significantly cut down on the time and cost that goes into film study for Super Bowl-bound teams (and all NFL and college football teams), while also enhancing game strategy by harnessing the power of big data.

BYU professor D.J. Lee, master’s student Jacob Newman and Ph.D. students Andrew Sumsion and Shad Torrie are using AI to automate the time-consuming process of analyzing and annotating game footage manually. Using deep learning and computer vision, the researchers have created an algorithm that can consistently locate and label players from game film and determine the formation of the offensive team — a process that currently requires a slew of video assistants.

“We were having a conversation about this and realized, whoa, we could probably teach an algorithm to do this,” said Lee, a professor of electrical and computer engineering. “So we set up a meeting with BYU football to learn their process and immediately knew, yeah, we can do this a lot faster.”

A game still used to train the algorithm.

While still early in the research, the team has already obtained better than 90% accuracy on player detection and labeling with their algorithm, along with 85% accuracy on determining formations. They believe the technology could eventually eliminate the need for the inefficient and tedious practice of manual annotation and analysis of recorded video used by NFL and college teams.

Lee and Newman first looked at real game footage provided by BYU’s football team. As they started to analyze it, they realized they needed some additional angles to properly train their algorithm. So they bought a copy of Madden 2020, which shows the field from above and behind the offense, and manually labeled 1,000 images and videos from the game.

They used those images to train a deep-learning algorithm to locate the players, which then feeds into a Residual Network framework to determine what position the players are playing. Finally, their neural network uses the location and position information to determine what formation (of more than 25 formations) the offense is using — anything from the Pistol Bunch TE to the I Form H Slot Open.
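The article doesn’t specify the models beyond a detector feeding a Residual Network, but the multi-stage flow can be sketched with off-the-shelf PyTorch components; the class counts and wiring below are assumptions for illustration, not the researchers’ actual architecture:

```python
import torch
from torchvision.models import resnet18

# Stage 2: a ResNet classifies each detected player crop into a position.
# weights=None keeps the sketch offline; in practice the network would be
# trained on labeled crops such as the Madden images described above.
position_model = resnet18(weights=None, num_classes=12)  # 12 position classes (assumed)

# Stage 3: a small network maps the 11 offensive players' features
# (x, y, position id) to one of the 25+ formation labels.
formation_model = torch.nn.Sequential(
    torch.nn.Linear(11 * 3, 64),  # 11 players x (x, y, position id)
    torch.nn.ReLU(),
    torch.nn.Linear(64, 28),      # 28 formation classes (assumed)
)

# Dummy forward pass showing the data flow between stages.
crop = torch.randn(1, 3, 224, 224)           # one detected player crop
position_logits = position_model(crop)       # -> (1, 12)
players = torch.randn(1, 11 * 3)             # flattened player features
formation_logits = formation_model(players)  # -> (1, 28)
print(position_logits.shape, formation_logits.shape)
```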

Lee said the algorithm can identify formations with 99.5% accuracy when the player location and labeling information is correct. The I Formation, where four players are lined up one in front of the next — center, quarterback, fullback, and running back — proved to be one of the most challenging formations to identify.

Lee and Newman said the AI system could also have applications in other sports. For example, in baseball, it could locate player positions on the field and identify common patterns to assist teams in refining how they defend against certain batters. Or it could be used to locate soccer players to help determine more efficient and effective formations.

The BYU algorithm is detailed in the journal article “Automated Pre-Play Analysis of American Football Formations Using Deep Learning,” recently published in the journal Electronics as part of a special issue, Advances of Artificial Intelligence and Vision Applications.

“Once you have this data there will be a lot more you can do with it; you can take it to the next level,” Lee said. “Big data can help us know the strategies of this team or the tendencies of that coach. It could help you know if they are likely to go for it on 4th Down and 2 or if they will punt. The idea of using AI for sports is really cool, and if we can give them even 1% of an advantage, it will be worth it.”

This superconducting parametric amplifier can achieve quantum squeezing over much broader bandwidths than other designs, which could lead to faster and more accurate quantum measurements.

MIT scientists boost quantum signals while reducing noise

“Squeezing” noise over a broad frequency bandwidth in a quantum system could lead to faster and more accurate quantum measurements.

A certain amount of noise is inherent in any quantum system. For instance, when researchers want to read information from a quantum computer, which harnesses quantum mechanical phenomena to solve certain problems too complex for classical computers, the same quantum mechanics also imparts a minimum level of unavoidable error that limits the accuracy of the measurements.

This image shows many Josephson traveling-wave parametric amplifiers on a silicon wafer. Chaining more than 3,000 of these devices together enabled the researchers to achieve broadband amplification and high levels of quantum squeezing.

Scientists can effectively get around this limitation by using “parametric” amplification to “squeeze” the noise – a quantum phenomenon that decreases the noise affecting one variable while increasing the noise that affects its conjugate partner. While the total amount of noise remains the same, it is effectively redistributed. Researchers can then make more accurate measurements by looking only at the lower-noise variable.

A team of researchers from MIT and elsewhere has now developed a new superconducting parametric amplifier that operates with the gain of previous narrowband squeezers while achieving quantum squeezing over much larger bandwidths. Their work is the first to demonstrate squeezing over a broad frequency bandwidth of up to 1.75 gigahertz while maintaining a high degree of squeezing (selective noise reduction). In comparison, previous microwave parametric amplifiers generally achieved bandwidths of only 100 megahertz or less.

This new broadband device may enable scientists to read out quantum information much more efficiently, leading to faster and more accurate quantum systems. By reducing the error in measurements, this architecture could be utilized in multiqubit systems or other metrological applications that demand extreme precision.

“As the field of quantum computing grows, and the number of qubits in these systems increases to thousands or more, we will need broadband amplification. With our architecture, with just one amplifier you could theoretically read out thousands of qubits at the same time,” says electrical engineering and computer science graduate student Jack Qiu, who is a member of the Engineering Quantum Systems Group and lead author of the paper detailing this advance.

The senior authors are William D. Oliver, the Henry Ellis Warren Professor of Electrical Engineering and Computer Science and of Physics, director of the Center for Quantum Engineering, and associate director of the Research Laboratory of Electronics; and Kevin P. O’Brien, the Emanuel E. Landsman Career Development Professor of Electrical Engineering and Computer Science.

Squeezing noise below the standard quantum limit

Superconducting quantum circuits, like quantum bits or “qubits,” process and transfer information in quantum systems. This information is carried by microwave electromagnetic signals comprising photons. But these signals can be extremely weak, so researchers use amplifiers to boost the signal level such that clean measurements can be made.

However, a quantum property known as the Heisenberg Uncertainty Principle requires a minimum amount of noise to be added during the amplification process, leading to the “standard quantum limit” of background noise. A special device, called a Josephson parametric amplifier, can reduce the added noise by “squeezing” it below this fundamental limit, effectively redistributing it elsewhere.

Quantum information is represented in the conjugate variables, for example, the amplitude and phase of electromagnetic waves. However, in many instances, researchers need only measure one of these variables — the amplitude or the phase — to determine the quantum state of the system. In these instances, they can “squeeze the noise,” lowering it for one variable, say amplitude, while raising it for the other, in this case, phase. The total amount of noise stays the same due to Heisenberg’s Uncertainty Principle, but its distribution can be shaped in such a way that less noisy measurements are possible on one of the variables.
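In textbook notation, the conjugate quadratures X1 (amplitude-like) and X2 (phase-like) satisfy an uncertainty relation, and a squeezed state trades noise between them; the squeezing parameter r below is generic, not a figure from this work:

```latex
\Delta X_1 \, \Delta X_2 \ge \tfrac{1}{4}
\quad \text{(uncertainty relation for the quadratures)}

\Delta X_1 = \tfrac{1}{2} e^{-r}, \qquad \Delta X_2 = \tfrac{1}{2} e^{+r}
\quad \text{(squeezed state: noise shifted from } X_1 \text{ to } X_2 \text{)}
```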

A conventional Josephson parametric amplifier is resonator-based: It’s like an echo chamber with a superconducting nonlinear element called a Josephson junction in the middle. Photons enter the echo chamber and bounce around to interact with the same Josephson junction multiple times. In this environment, the system nonlinearity — realized by the Josephson junction — is enhanced and leads to parametric amplification and squeezing. But, since the photons traverse the same Josephson junction many times before exiting, the junction is stressed. As a result, both the bandwidth and the maximum signal the resonator-based amplifier can accommodate are limited.

The MIT researchers took a different approach. Instead of embedding a single or a few Josephson junctions inside a resonator, they chained more than 3,000 junctions together, creating what is known as a Josephson traveling-wave parametric amplifier. Photons interact with each other as they travel from junction to junction, resulting in noise squeezing without stressing any single junction.

Their traveling-wave system can tolerate much higher-power signals than resonator-based Josephson amplifiers without the bandwidth constraint of the resonator, leading to broadband amplification and high levels of squeezing, Qiu says.

“You can think of this system as a long optical fiber, another type of distributed nonlinear parametric amplifier. And, we can push to 10,000 junctions or more. This is an extensible system, as opposed to the resonant architecture,” he says.

Nearly noiseless amplification

A pair of pump photons enter the device, serving as the energy source. Researchers can tune the frequency of the photons coming from each pump to generate squeezing at the desired signal frequency. For instance, if they want to squeeze a 6-gigahertz signal, they would adjust the pumps to send photons at 5 and 7 gigahertz, respectively. When the pump photons interact inside the device, they combine to produce an amplified signal with a frequency right in the middle of the two pumps. This is a special case of a more general phenomenon called nonlinear wave mixing.
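The frequency arithmetic follows from energy conservation in the mixing process: the amplified signal sits at the average of the two pump frequencies, which reproduces the example above:

```latex
\omega_s = \frac{\omega_{p1} + \omega_{p2}}{2}
\qquad \Rightarrow \qquad
\frac{5\ \text{GHz} + 7\ \text{GHz}}{2} = 6\ \text{GHz}
```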

“Squeezing of the noise results from a two-photon quantum interference effect that arises during the parametric process,” he explains.

This architecture enabled them to reduce the noise power by a factor of 10 below the fundamental quantum limit while operating with 3.5 gigahertz of amplification bandwidth — a frequency range almost two orders of magnitude higher than that of previous devices.

Their device also demonstrates the broadband generation of entangled photon pairs, which could enable researchers to read out quantum information more efficiently with a much higher signal-to-noise ratio, Qiu says.

While Qiu and his collaborators are excited by these results, he says there is still room for improvement. The materials they used to fabricate the amplifier introduce some microwave loss, which can reduce performance. Moving forward, they are exploring different fabrication methods that could improve the insertion loss.

“This work is not meant to be a standalone project. It has tremendous potential if you apply it to other quantum systems — to interface with a qubit system to enhance the readout, or entangle qubits, or extend the device operating frequency range to be utilized in dark matter detection and improve its detection efficiency. This is essentially like a blueprint for future work,” he says.

Additional co-authors include Arne Grimsmo, senior lecturer at the University of Sydney; Kaidong Peng, an EECS graduate student in the Quantum Coherent Electronics Group at MIT; Bharath Kannan, Ph.D. ’22, CEO of Atlantic Quantum; Benjamin Lienhard Ph.D. ’21, a postdoc at Princeton University; Youngkyu Sung, an EECS grad student at MIT; Philip Krantz, an MIT postdoc; Vladimir Bolkhovsky, Greg Calusine, David Kim, Alex Melville, Bethany Niedzielski, Jonilyn Yoder, and Mollie Schwartz, members of the technical staff at MIT Lincoln Laboratory; Terry Orlando, professor of electrical engineering at MIT and a member of RLE; Irfan Siddiqi, a professor of physics at the University of California at Berkeley; and Simon Gustavsson, a principal research scientist in the Engineering Quantum Systems group at MIT.  

This work was funded, in part, by the NTT Physics and Informatics Laboratories and the Office of the Director of National Intelligence IARPA program.

Accuracy of ChatGPT on USMLE. For USMLE Steps 1, 2CK, and 3, AI outputs were adjudicated to be accurate, inaccurate, or indeterminate based on the ACI scoring system provided in S2 Data. A: Accuracy distribution for inputs encoded as open-ended questions. B: Accuracy distribution for inputs encoded as multiple choice single answer without (MC-NJ) or with forced justification (MC-J).

ChatGPT can almost pass the US Medical Licensing Exam

AI software was able to achieve passing scores for the exam, which usually requires years of medical training

ChatGPT can score at or around the approximately 60 percent passing threshold for the United States Medical Licensing Exam (USMLE), with responses that make coherent, internal sense and contain frequent insights, according to a study published February 9, 2023, in the open-access journal PLOS Digital Health by Tiffany Kung, Victor Tseng, and colleagues at AnsibleHealth.

ChatGPT is a new artificial intelligence (AI) system, known as a large language model (LLM), designed to generate human-like writing by predicting upcoming word sequences. Unlike most chatbots, ChatGPT cannot search the internet. Instead, it generates text using word relationships predicted by its internal processes.

Kung and colleagues tested ChatGPT’s performance on the USMLE, a highly standardized and regulated series of three exams (Steps 1, 2CK, and 3) required for medical licensure in the United States. Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, ranging from biochemistry, to diagnostic reasoning, to bioethics.

After screening to remove image-based questions, the authors tested the software on 350 of the 376 public questions available from the June 2022 USMLE release. 

After indeterminate responses were removed, ChatGPT scored between 52.4% and 75.0% across the three USMLE exams. The passing threshold each year is approximately 60%. ChatGPT also demonstrated 94.6% concordance across all its responses and produced at least one significant insight (something that was new, non-obvious, and clinically valid) for 88.9% of its responses. Notably, ChatGPT exceeded the performance of PubMedGPT, a counterpart model trained exclusively on biomedical domain literature, which scored 50.8% on an older dataset of USMLE-style questions.
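The scoring rule described here, dropping indeterminate outputs and taking the share adjudicated accurate among the rest, is simple to make concrete; the labels below are made up for illustration, not the study’s data:

```python
from collections import Counter

# Adjudicated labels for a batch of responses (illustrative values only).
labels = ["accurate", "inaccurate", "indeterminate", "accurate", "accurate"]

counts = Counter(labels)
determinate = counts["accurate"] + counts["inaccurate"]
score = counts["accurate"] / determinate  # indeterminates excluded, as in the study
print(f"accuracy after removing indeterminates: {score:.1%}")  # 75.0%
```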

While the relatively small input size restricted the depth and range of analyses, the authors note their findings provide a glimpse of ChatGPT’s potential to enhance medical education, and eventually, clinical practice. For example, they add, clinicians at AnsibleHealth already use ChatGPT to rewrite jargon-heavy reports for easier patient comprehension.

“Reaching the passing score for this notoriously difficult expert exam, and doing so without any human reinforcement, marks a notable milestone in clinical AI maturation,” say the authors.

Author Dr. Tiffany Kung added that ChatGPT's role in this research went beyond being the study subject: "ChatGPT contributed substantially to the writing of [our] manuscript... We interacted with ChatGPT much like a colleague, asking it to synthesize, simplify, and offer counterpoints to drafts in progress...All of the co-authors valued ChatGPT's input."