UR Medicine research suggests the brain processes smell like both a painting and a symphony

What happens when we smell a rose? How does our brain process the essence of its fragrance? Is it like a painting – a snapshot of the flickering activity of cells – captured in a moment in time? Or like a symphony, an evolving ensemble of different cells working together to capture the scent? New research suggests that our brain does both. 

“These findings reveal a core principle of the nervous system, flexibility in the kinds of calculations the brain makes to represent aspects of the sensory world,” said Krishnan Padmanabhan, Ph.D., an associate professor of Neuroscience and senior author of the study recently published in Cell Reports. “Our work provides scientists with new tools to quantify and interpret the patterns of activity of the brain.”

Researchers developed a model to simulate the workings of the early olfactory system – the network the brain relies on for smelling. Employing supercomputer simulations, they found that a specific set of connections, called centrifugal fibers – which carry impulses from other parts of the central nervous system to the early sensory regions of the brain – plays a critical role. These centrifugal fibers act as a switch, toggling between different strategies to efficiently represent smells. When the centrifugal fibers were in one state, the cells in the piriform cortex – where the perception of an odor forms – relied on the pattern of activity within a given instant in time. When the centrifugal fibers were in the other state, the cells in the piriform cortex relied on patterns of brain activity across time, improving both the accuracy and the speed with which they detected and classified the smell.

These processes suggest the brain has multiple strategies for representing a smell. In one, the brain uses a snapshot, like a painting or a photograph, to capture the essential features of the odor at a given moment. In the other, the brain keeps track of evolving patterns, attuned to which cells turn on and off and when – like a symphony.
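The two strategies can be illustrated with a toy decoding exercise. The sketch below is hypothetical and not the study's model: it simulates two "odors" as noisy firing patterns across a handful of cells and time bins, then classifies trials either from a single time bin (the painting) or from the full time course (the symphony). With more of the pattern available, the temporal strategy typically classifies more accurately.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: two odors evoke distinct firing patterns
# across 20 simulated cells over 5 time bins.
n_cells, n_bins, sigma = 20, 5, 1.0
odor_templates = [rng.random((n_cells, n_bins)) for _ in range(2)]

def classify(trial, snapshot_bin=None):
    """Nearest-template classifier.
    snapshot_bin given: use one time bin only (the 'painting' strategy);
    snapshot_bin None:  use the full pattern across time (the 'symphony')."""
    def view(x):
        return x[:, snapshot_bin] if snapshot_bin is not None else x
    dists = [np.linalg.norm(view(trial) - view(t)) for t in odor_templates]
    return int(np.argmin(dists))

def accuracy(snapshot_bin=None, n_trials=200):
    correct = 0
    for i in range(n_trials):
        true = i % 2  # alternate the presented odor
        trial = odor_templates[true] + rng.normal(0, sigma, (n_cells, n_bins))
        correct += classify(trial, snapshot_bin) == true
    return correct / n_trials

snap_acc = accuracy(snapshot_bin=0)  # a single moment in time
full_acc = accuracy()                # the whole time course
print(f"snapshot accuracy: {snap_acc:.2f}, full-pattern accuracy: {full_acc:.2f}")
```

Because the full time course carries five times as many measurements per trial, the "symphony" decoder is more robust to noise, mirroring the reported gains in accuracy and speed when activity across time is used.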

The mathematical models the researchers developed highlight the critical feature of the nervous system – not only diversity in terms of the components that make up the brain but also how these components work together to help the brain experience the world of smell. “These mathematical models reveal critical aspects of how the olfactory system in the brain might work and could help build brain-inspired artificial computing systems,” Padmanabhan said. “Computational approaches inspired by the circuits of the brain such as this have the potential to improve the safety of self-driving cars, or help computer vision algorithms more accurately identify and classify objects in an image.”

All antennas at the VLA ready to stream massive data for technosignature research at COSMIC

Once the digital backend comes online, SETI observations will be possible 24/7 at the VLA

COSMIC SETI (the Commensal Open-Source Multimode Interferometer Cluster Search for Extraterrestrial Intelligence) took a big step toward using the National Science Foundation’s Karl G. Jansky Very Large Array (VLA) for 24/7 SETI observations. Fiber optic amplifiers and splitters are now installed for all 27 VLA antennas, giving COSMIC access to a complete and independent copy of the data streams from the entire VLA. In addition, the COSMIC system has used these links to successfully acquire VLA data, and the primary focus now is on developing the high-performance GPU (graphics processing unit) code for analyzing data for the possible presence of technosignatures.

COSMIC is a collaboration between the SETI Institute and the National Radio Astronomy Observatory (NRAO), which operates the VLA, to bring a state-of-the-art search for extraterrestrial intelligence to the VLA for the first time. As the VLA conducts observations, COSMIC will enable SETI Institute scientists to access that data to analyze for evidence of technosignatures, signs of technology not caused by natural phenomena.

“Having all the VLA digital signals available to the COSMIC system is a major milestone, involving close collaboration with the NRAO VLA engineering team to ensure that the addition of the COSMIC hardware doesn’t in any way adversely affect existing VLA infrastructure,” said Jack Hickish, Digital Instrumentation Lead for COSMIC at the SETI Institute. “It is fantastic to have overcome the challenges of prototyping, testing, procurement, and installation – all conducted during both a global pandemic and semiconductor shortage – and we are excited to be able to move on to the next task of processing the many Tb/s of data to which we now have access.”

There are several advantages to conducting SETI research with the VLA:

  • The size of the VLA: its 27 antennas, each 25 meters in diameter, are spread over 22 miles
  • The VLA has a combined collecting area equivalent to a single-dish antenna of 130 meters across
  • Each VLA antenna has 8 cryogenically cooled receivers covering the radio spectrum continuously from 1 to 50 GHz
  • Some receivers can operate below 1 GHz down to 54 MHz, a band used on Earth for television broadcasting
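The 130-meter figure above follows from equating collecting areas: 27 dishes of diameter d have the same total area as a single dish of diameter d·√27. A quick back-of-the-envelope check:

```python
import math

# Equating collecting areas: 27 * pi*(d/2)**2 == pi*(D/2)**2
# gives D = d * sqrt(27).
n_dishes, dish_diameter_m = 27, 25.0
equivalent_diameter_m = dish_diameter_m * math.sqrt(n_dishes)
print(round(equivalent_diameter_m))  # 130
```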

Once up and running, COSMIC SETI is expected to observe about 40 million galactic star systems within two years. It will be the most comprehensive SETI observing program undertaken in the Northern Hemisphere, with high sensitivity and a huge target list.

“I am excited by the ability of COSMIC to conduct the most comprehensive technosignature search ever in the Northern Hemisphere,” said Cherry Ng, SETI Institute COSMIC Project Scientist. “We will be able to monitor millions of stars with a sensitivity high enough to detect an Arecibo-like transmitter out to a distance of 25 parsecs (81 light-years), covering an observing frequency range from 230 MHz to 50 GHz, which includes many parts of the spectrum that have not yet been explored for ETI signals.”

The system should be fully operational by early 2023 and will conduct its first major observational campaign in parallel with the ongoing VLA Sky Survey (VLASS).

“We look forward to partnering with the SETI Institute on this exciting initiative and are pleased to see this important milestone in the technical work that will make this new science possible,” said NRAO Director Tony Beasley.

Swiss astrophysicists take a closer look at Jupiter’s origin

Researchers of the University of Zurich (UZH) and the National Centre of Competence in Research (NCCR) PlanetS have investigated Jupiter’s formation history in great detail. Their results suggest that the giant planet migrated far from its origin and collected large amounts of material on its journey.

One of the most important open questions in planetary formation theory is the story of Jupiter’s origin. Using sophisticated supercomputer modeling, researchers of the University of Zurich (UZH) and the National Centre of Competence in Research (NCCR) PlanetS now shed new light on Jupiter’s formation history. Their results were published in the journal The Astrophysical Journal Letters.

A curious enrichment of heavy elements

When the Galileo spacecraft released a probe that parachuted into Jupiter’s atmosphere in 1995, it showed among other things that heavy elements (elements heavier than helium) are enriched there. At the same time, recent structure models of Jupiter that are based on gravity field measurements by the Juno spacecraft suggest that Jupiter’s interior is not uniform but has a complex structure.

“Since we now know that the interior of Jupiter is not fully mixed, we would expect heavy elements to be in a giant gas planet’s deep interior, as heavy elements are mostly accreted during the early stages of the planetary formation,” study co-author Ravit Helled, Professor at the University of Zurich and member of the NCCR PlanetS, begins to explain. “Only in later stages, when the growing planet is sufficiently massive, can it effectively attract large amounts of light element gases like hydrogen and helium. Finding a formation scenario of Jupiter which is consistent with the predicted interior structure as well as with the measured atmospheric enrichment is therefore challenging yet critical for our understanding of giant planets,” Helled says. Of the many theories that have so far been proposed, none could provide a satisfying answer.

A long migration

“Our idea was that Jupiter had collected these heavy elements in the late stages of its formation by migrating. In doing so, it would have moved through regions filled with so-called planetesimals – small planetary building blocks that are composed of heavy element materials – and accumulated them in its atmosphere,” explains study lead author Sho Shibata, a postdoctoral researcher at the University of Zurich and a member of the NCCR PlanetS.

Yet migration by itself is no guarantee of accreting the necessary material. “Because of complex dynamical interactions, the migrating planet does not necessarily accrete the planetesimals in its path. In many cases, the planet actually scatters them instead – not unlike a shepherding dog scattering sheep,” Shibata points out. The team therefore had to run countless simulations to determine whether any migration pathways resulted in sufficient material accretion.

“What we found was that a sufficient number of planetesimals could be captured if Jupiter formed in the outer regions of the solar system – about four times further away from the Sun than where it is located now – and then migrated to its current position. In this scenario, it moved through a region where the conditions favored material accretion – an accretion sweet spot, as we call it,” Shibata reports.

A new era in planetary science

Combining the constraints from the Galileo probe and the Juno data, the researchers have finally come up with a satisfying explanation. “This shows how complex giant gas planets are and how difficult it is to realistically reproduce their characteristics,” Helled points out.

“It took us a long time in planetary science to get to a stage where we can finally explore these details with updated theoretical models and numerical simulations. This helps us close gaps in our understanding not only of Jupiter and our solar system but also of the many observed giant planets orbiting far away stars”, Helled concludes.

Repeats are key to understanding humanity's genome

A team of researchers from institutions around the globe has finally filled in the gaps in the Human Reference Genome, and repeated sections are more abundant and more important than previously realized

It was like a map of New York missing all of Manhattan. The human reference genome finally has all its blank spots filled in, and seeing everything we missed the first time around is both repetitive—and enlightening.

“We’re realizing there’s a lot of human variation out there,” says geneticist Rachel O’Neill, director of UConn’s Institute for Systems Genomics. And in a counterintuitive twist of fate, the variation comes in large part from the repeats.

A significant amount of human genetic material turns out to be long, repetitive sections that occur over and over. Although every human has some repeats, not everyone has the same number of them. And the difference in the number of repeats is where most of the human genetic variation is found.

This insight—that the repeats are important—is one of many significant findings from the Telomere-to-Telomere (T2T) project, a globe-spanning collaboration of institutions that filled in the missing sections of the original human genome assembly. O’Neill is a principal investigator on that project and an author on four of the six T2T papers that were published in Science yesterday.

“It took the invention of new methods of DNA sequencing and computational analysis, and the dedication of a remarkable team of scientists, to complete the reading of the 8% of the human genome that was too complex and repetitive in its structure to be resolved 20 years ago. It was worth the wait — a rich array of surprising architectural features is revealed, with major consequences for understanding human evolution, variation, and biological function,” says Francis Collins, White House Science Advisor and former director of the National Institutes of Health.

As amazing as it was at the time, the original Human Genome Project left about 8% of the genome blank.  

“That’s the equivalent of an entire chromosome in human DNA,” O’Neill says. That last 8% includes numerous genes and repetitive areas. Most of the newly added DNA sequences were near the repetitive telomeres (long, trailing ends of each chromosome) and centromeres (dense middle sections of each chromosome).

The blanks were a result of the “short-read” technology the Human Genome Project used. It was the only technology for genome mapping available 20 years ago, and it could only read the equivalent of a few words of the genetic code at a time. For example, imagine a section of the genome consisting of the sentence “All work and no play makes Jack a dull boy,” repeated nine times in a row. The short-read technology would reveal only pieces of it such as “All work”, “Jack a”, “makes Jack”, et cetera. The researchers pieced those short sections together to make the sentence “All work and no play makes Jack a dull boy,” but they had no way to know it was repeated nine times.
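The analogy above can be made concrete with a few lines of code. This is an idealized sketch, not real assembly software: with reads shorter than the repeated sentence, the set of reads obtained from two copies is identical to the set from nine copies, so the copy number is unrecoverable from short reads alone, while a single long read spanning the region counts the copies directly.

```python
# Idealized illustration of why short reads cannot count tandem repeats.
sentence = "All work and no play makes Jack a dull boy. "

def short_reads(genome, read_len=12):
    """Every substring of length read_len: an idealized short-read set."""
    return {genome[i:i + read_len] for i in range(len(genome) - read_len + 1)}

two_copies, nine_copies = sentence * 2, sentence * 9

# Identical read sets: short reads alone cannot reveal the copy number.
print(short_reads(two_copies) == short_reads(nine_copies))  # True

# A "long read" spanning the whole region counts the copies directly.
print(nine_copies.count(sentence))  # 9
```

Because any read shorter than one repeat unit spans at most one junction between copies, every read from nine copies already appears among the reads from two copies, which is exactly the ambiguity the T2T project's long-read data resolved.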

The T2T project, however, had better tools. The new long-read DNA technology can read entire sentences, even paragraphs, at a time. So the researchers were able to see large chunks, or even full sections, of repeats.

“Generating a truly gapless sequence of the human genome is a major milestone. We would have loved to have done this 20 years ago, but the technology had to advance. This new reference is a truly solid foundation, without cracks, on which to understand human biology. There are no missing pieces!” says Bob Waterston, a biologist at the University of Washington who worked on the original Human Genome Project.

Many early-career researchers and trainees played pivotal roles in the T2T project. At UConn, Savannah Hoyt, Gabrielle Hartley, and Patrick Grady in Rachel O’Neill’s lab, and Luke Wojenski in Leighton Core’s lab, were deeply involved in the work. One of their major contributions was developing a compendium of the repeats in the genome. They found the repetitive sections contained mobile elements, which are sections capable of jumping from one part of the genome to another (the classic example is jumping genes that lead to color changes in corn kernels, such as from red to white); viruses; and new repeats no one had identified before, including some that carry genes. Some of the giant repeats, with 10, 20, or 30 copies back-to-back, contain genes that could account for much of human diversity. In the example sentence from earlier, imagine “Jack” is a gene. One person might have 5 copies of it. Another might have 25.

The T2T team got the first look at complete sequences for the central parts of every human chromosome. Called centromeres, they join the different arms of each X-shaped chromosome together. O’Neill’s team found that centromeres contain known mobile elements as well as new repeats. Much of the DNA in the centromere seems to be important for maintaining a cell’s genetic information through the generations. Centromeres are already known to play a role in DNA replication when cells reproduce, and if they shift their position in the chromosome significantly they can give rise to entirely new species. The complete, gapless centromere sequences constructed by the T2T project will allow a much more nuanced understanding of human centromeres and what they do. The next steps in human genetics research can use the complete T2T genome assembly as a jumping-off point to identify interesting areas of our DNA.

“The next phase of research will sequence the genomes of many different people to fully grasp human diversity, diseases, and our relationship to our closest relatives, the other primates,” O’Neill says.

SIAM recognizes distinguished work in applied mathematics, computational science

Society for Industrial and Applied Mathematics (SIAM) has announced the 2022 Class of SIAM Fellows. These distinguished members were nominated for their exemplary research as well as outstanding service to the community. Through their various contributions, SIAM Fellows help advance the fields of applied mathematics and computational science.

SIAM congratulates these 26 esteemed members of the community, listed below in alphabetical order:

Remi Abgrall, Universität Zürich, is being recognized for fundamental contributions to the development of numerical methods for conservation laws, in particular for multi-fluid flows and residual distribution schemes.

Sharon F. Arroyo, The Boeing Company, is being recognized for leadership in, promotion of, and contributions to the industrial practice of operations research.

Weizhu Bao, National University of Singapore, is being recognized for modeling and simulation of Bose-Einstein condensation and multiscale methods and analysis for highly oscillatory dispersive PDEs.

Bonnie Berger, Massachusetts Institute of Technology, is being recognized for pioneering work in computational molecular biology, including comparative and compressive genomics, network inference, genomic privacy, and protein structure prediction.

Zhiming Chen, Chinese Academy of Sciences, is being recognized for significant contributions to adaptive finite element methods, multiscale analysis and computation, and seismic imaging.

James Michael Crowley, Society for Industrial and Applied Mathematics, is being recognized for service to SIAM and the applied mathematics and computational science community.

James H. Curry, University of Colorado Boulder, is being recognized for pioneering work in computational dynamics and mentorship of young researchers, particularly in the African American community.

Zlatko Drmač, University of Zagreb, is being recognized for contributions to algorithms with high relative accuracy in numerical linear algebra, model reduction, and system identification.

Chen Greif, The University of British Columbia, is being recognized for contributions to scientific supercomputing, especially in numerical linear algebra and its applications.

Abba B. Gumel, Arizona State University, is being recognized for stellar contributions to mathematical biology, particularly the modeling of epidemics, and applications to other public health problems.

Eldad Haber, The University of British Columbia, is being recognized for contributions to computational inverse problems, differential equations, statistical and optimization techniques, deep learning, and multi-scale methods.

John Robert King, University of Nottingham, is being recognized for contributions to asymptotic methods and systems biology.

Daniel Kressner, EPFL, is being recognized for contributions to numerical linear and multilinear algebra and scientific computing.

Jose Nathan Kutz, University of Washington, is being recognized for contributions to applied dynamical systems, machine learning, and nonlinear optics.

Lek-Heng Lim, University of Chicago, is being recognized for pioneering contributions to numerical multilinear algebra, and for introducing high-level algebra, geometry, and topology to applied mathematics.

Fang-Hua Lin, New York University, is being recognized for significant contributions to our understanding of the properties of solutions of nonlinear partial differential equations.

Peter B. Monk, University of Delaware, is being recognized for contributions to inverse scattering and the development and analysis of finite element methods for problems in acoustics and electromagnetism.

Houman Owhadi, California Institute of Technology, is being recognized for outstanding contributions in statistical numerical approximation, kernel learning, and uncertainty quantification.

Keith Promislow, Michigan State University, is being recognized for contributions to rigorous asymptotic reductions, development of novel models and their applications, and service to the applied mathematics community.

Rosemary Anne Renaut, Arizona State University, is being recognized for contributions to ill-posed inverse problems and regularization, geophysical and medical imaging, and high order numerical methods.

Wil Schilders, Eindhoven University of Technology, is being recognized for impressive contributions to industrial mathematics through semiconductor device simulation, iterative methods for the solution of linear systems, and model order reduction methods.

Leonard J. Schulman, California Institute of Technology, is being recognized for seminal contributions to coding theory, quantum computing, and matrix analysis, and outstanding service.

Amit Singer, Princeton University, is being recognized for foundational contributions to mathematical data analysis and the mathematics of cryo-electron microscopy.

Gabriele Steidl, Technische Universität Berlin, is being recognized for contributions to computational harmonic analysis and imaging sciences.

Raymond Tuminaro, Sandia National Laboratories, is being recognized for contributions to iterative linear solver algorithms and software to address scientific computing applications on large-scale parallel systems.

Hongkai Zhao, Duke University, is being recognized for seminal contributions to scientific computation, numerical analysis, and applications in science and engineering.