Lu's program predicts success of implants in hearing-impaired children

A new supercomputer program that analyzes functional brain MRIs of hearing-impaired children can predict whether they will develop effective language skills within two years of cochlear implant surgery, according to a study in the journal Brain and Behavior.

In the journal's Oct. 12 online edition, researchers at Cincinnati Children's Hospital Medical Center say their computer program determines how specific regions of the brain respond to auditory stimulus tests that hearing-impaired infants and toddlers receive before surgical implantation.

With additional research and development, the authors suggest their supercomputer model could become a practical tool that allows clinicians to more effectively screen patients with sensorineural hearing loss before surgery. This could reduce the number of children who undergo the invasive and costly procedure, only to be disappointed when implants do not deliver hoped-for results.

"This study identifies two features from our computer analysis that are potential biomarkers for predicting cochlear implant outcomes," says Long (Jason) Lu, PhD, a researcher in the Division of Biomedical Informatics at Cincinnati Children's. "We have developed one of the first successful methods for translating research data from functional magnetic resonance imaging (fMRI) of hearing-impaired children into something with potential for practical clinical use with individual patients."

When analyzing results from pre-surgical auditory tests, the researchers identified elevated activity in two regions of the brain that effectively predict which children benefit most from implants, making them possible biomarkers. One is in the speech-recognition and language-association areas of the brain's left hemisphere, in the superior and middle temporal gyri. The second is in the brain's right cerebellar structures. The authors say the second finding is surprising and may provide new insights about neural circuitry that supports language and auditory development in the brain.

Lu's laboratory focuses on designing computer algorithms that interpret structural and functional MRIs of the human brain. His team uses this information to identify imaging biomarkers that can improve diagnosis and treatment options for children with brain and related neurological disorders.

Working with Scott Holland, PhD, a scientist in the Pediatric Neuroimaging Consortium at Cincinnati Children's, and other collaborators from Cincinnati Children's and the University of Cincinnati College of Medicine, the researchers blended human biology and supercomputer technology in the current study. The result is a model in which supercomputers learn to extract and interpret data from pre-surgery functional MRIs that measure blood flow in infant brains during auditory tests.

After data are collected from the functional MRIs, the computer algorithm uses a technique called bag-of-words to convert each scan into a feature vector, which is then used to predict which children are good candidates for cochlear implants.
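The article does not spell out the pipeline, but bag-of-words methods adapted from computer vision typically cluster local image descriptors into a learned "vocabulary," then represent each scan as a histogram of how often each vocabulary word appears. The Python sketch below illustrates the general idea on synthetic data; the descriptors, codebook size, and classifier are illustrative assumptions, not the study's actual implementation.

# A minimal bag-of-words sketch on synthetic data (all names and
# parameters below are hypothetical illustrations, not the study's).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for fMRI data: each subject contributes many local
# activation descriptors (e.g., patch-level features).
n_subjects, n_descriptors, descriptor_dim = 20, 50, 8
subjects = [rng.normal(size=(n_descriptors, descriptor_dim))
            for _ in range(n_subjects)]
outcomes = rng.integers(0, 2, size=n_subjects)  # 1 = good outcome (synthetic)

# Step 1: learn a vocabulary by clustering all descriptors.
codebook_size = 16
codebook = KMeans(n_clusters=codebook_size, n_init=10, random_state=0)
codebook.fit(np.vstack(subjects))

# Step 2: represent each subject as a normalized histogram of
# vocabulary-word counts -- the bag-of-words vector.
def bow_vector(descriptors):
    words = codebook.predict(descriptors)
    counts = np.bincount(words, minlength=codebook_size)
    return counts / counts.sum()

X = np.array([bow_vector(s) for s in subjects])

# Step 3: train a classifier on the vectors to predict outcomes.
clf = SVC(kernel="linear").fit(X, outcomes)
print(clf.predict(X[:3]))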

The study included 44 infants and toddlers between the ages of 8 months and 67 months. Twenty-three of the children were hearing impaired and underwent auditory exams and functional MRIs prior to cochlear implant surgery. Twenty-one children had normal hearing and participated in the study as control subjects, undergoing standardized hearing, speech and cognition tests.

Two years after cochlear implant surgery, the researchers measured each recipient's language performance, which served as the gold-standard benchmark for the computational analysis.
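With only 23 implant recipients, a leave-one-out evaluation against that benchmark would be a natural fit; the article does not say which validation scheme the authors used, so the sketch below is an assumption about how such a model might be scored, again on synthetic data.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(23, 16))    # hypothetical bag-of-words vectors
y = rng.integers(0, 2, size=23)  # 1 = good two-year outcome (synthetic)

# Each child is held out once; the model trains on the remaining 22
# and is scored on its prediction for the held-out child.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")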

The authors report testing two types of auditory stimuli during the pre-surgical exams, each designed to stimulate blood flow and related activity in different areas of the brain: natural language speech and narrow-band noise tones. After analyzing functional MRI data from the pre-surgery auditory tests and the two-year post-surgery language tests, the researchers determined that the brain activation patterns evoked by natural language speech have greater predictive ability.

Other collaborators on the study included first author Lirong Tan, a PhD student in the Division of Biomedical Informatics at Cincinnati Children's, and researchers from the Department of Electrical Engineering and Computing Systems, University of Cincinnati; the departments of Otolaryngology and Environmental Health, University of Cincinnati College of Medicine; and the Department of Otolaryngology – Head & Neck Surgery, Carver College of Medicine, University of Iowa.