Harvard Med's computational tools help scientists understand how the brain makes split-second decisions

Our brains help us make countless decisions every day, from choosing whether to cross the road to selecting the most efficient route to the supermarket. Yet many of these decisions, even those that require our brains to factor in multiple sources of information at the same time, happen so quickly that we’re barely aware of the process involved.

Jan Drugowitsch, assistant professor of neurobiology in the Blavatnik Institute at Harvard Medical School, is intrigued by this process. As a neurobiologist with a doctorate in machine learning, he uses a computational lens to study how the brain operates. He is particularly interested in how the brain takes in information about the world and uses this information to inform behavior. Drugowitsch’s lab focuses on theory, teaming up with experimentalists to test theories using computational tools.

In a conversation with Harvard Medicine News, Drugowitsch delves into the details of his research on how the brain processes information to make split-second decisions. He also discusses the role of computation and the importance of collaboration in unraveling the mysteries of decision-making.

HMNews: What aspects of the brain and behavior are you studying?

Drugowitsch: A lot of our work focuses on sensory perceptions on very short timescales—from milliseconds to seconds—and how we turn those perceptions into decisions. For example, an everyday human experience is making a decision about crossing the road. To do this, we need to figure out if the traffic situation is safe, including whether we have enough time to cross before a car arrives. For most people, this decision happens in an unconscious way using different sources of information, such as the traffic flow on the left and right and the sound of oncoming cars. In my lab, we are studying processes like this one that happen automatically and efficiently in the brain. We’re asking, how does the brain combine multiple sources of information across time to make these kinds of decisions?

Over the last few years, we’ve been studying increasingly complex domains of how we make these choices. We’ve shown that many of these choices follow principles of statistical decision-making because the information we have is uncertain, so we have to gauge different sources of information against each other and ask, “Are we certain enough to commit to a choice?” My lab has been formulating statistical models that capture the process, including complexities such as the trade-off between speed and accuracy.
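The speed-accuracy trade-off Drugowitsch mentions is often captured by evidence-accumulation models such as the drift-diffusion model. The sketch below is a generic illustration of that idea, not the lab’s actual model; every parameter value is made up:

```python
import random

def drift_diffusion_trial(drift=0.1, noise=1.0, bound=2.0, dt=0.01, seed=None):
    """Accumulate noisy evidence until it crosses +bound or -bound.

    Returns (choice, decision_time). All parameter values are illustrative.
    """
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # each step adds signal (drift * dt) plus noise scaled by sqrt(dt)
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else -1), t

# With positive drift, choice +1 counts as "correct". Raising the bound
# demands more evidence before committing, so simulated decisions become
# slower but more accurate -- the speed-accuracy trade-off.
choices = [drift_diffusion_trial(seed=i)[0] for i in range(200)]
accuracy = sum(c == 1 for c in choices) / len(choices)
```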

Now, we are shifting to understanding more continuous behaviors such as navigation. For example, keeping track of direction during navigation is a process that doesn’t have discrete steps—we keep track of our direction on a constant basis, and use this information to make behavioral decisions. We want to know how the brain does this on a continuous timescale.

HMNews: You use computational tools in your research. What is computational neuroscience?

Drugowitsch: There are currently two forms of computational neuroscience. Traditional computational neuroscience involves building models in the language of mathematics, physics, and engineering to describe hypotheses about how the brain performs computations. These computations are usually related to how the brain processes information about the world. There is also a newer form of computational neuroscience that has emerged with the ability to gather much larger datasets about the brain. This kind of computational neuroscience involves developing and using more sophisticated tools to process complex neural data. We use both in our work.

A focus of my lab is how humans and animals deal with uncertain information. Essentially all of the information that we have about the world is uncertain, and handling uncertain information moves us into the realm of statistics. We use a lot of tools from statistics because they provide the adequate language to talk about beliefs about things in the world. More specifically, we use Bayesian statistics to formulate models of how uncertain information is processed in the abstract sense. Then we use tools from physics to define how this information processing that we’ve worked with on a statistical level can be realized in the brain. This is where biology comes in—it introduces constraints on how the brain operates and how it executes these statistical computations.
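A textbook example of the Bayesian machinery Drugowitsch describes is reliability-weighted cue combination: two independent noisy estimates of the same quantity are fused by weighting each by its inverse variance. This is a minimal sketch with invented numbers, not code from the lab:

```python
def combine_gaussian_cues(mu1, sigma1, mu2, sigma2):
    """Optimally fuse two independent Gaussian estimates of one quantity.

    Each cue is weighted by its reliability (inverse variance), so the
    combined estimate is more certain than either cue alone.
    """
    w1 = 1.0 / sigma1 ** 2
    w2 = 1.0 / sigma2 ** 2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sigma = (1.0 / (w1 + w2)) ** 0.5
    return mu, sigma

# Invented example: a visual cue says the car is 20 m away (sd 4 m), an
# auditory cue says 26 m (sd 8 m). The fused estimate leans toward the
# more reliable visual cue and is more certain than either.
mu, sigma = combine_gaussian_cues(20, 4, 26, 8)  # mu = 21.2, sigma ~ 3.6
```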

HMNews: Your recently published paper in Neuron about navigation in the brain uses some of the above approaches. Can you tell us a bit more about this work?

Drugowitsch: Our research builds on an earlier experimental observation about place cells, a population of cells in the hippocampus of the brain that represent our location in space. This observation, made in mice and rats, is that while a rodent is standing still, place cells suddenly become active in a rapid sequence of bursts that seems to simulate the animal’s trajectory through the environment. There are two hypotheses about the role of this activity. One is that it helps us memorize what we’ve done before and move it to long-term memory. The other is that it helps us plan future navigation.

Before addressing these hypotheses, we wanted to refine our understanding of what these bursts actually do by understanding the data better. We used existing data on rats foraging for food in a two-meter by two-meter environment and applied Bayesian statistical methods to gain a fuller picture of activity in place cells.
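Bayesian decoding of position from place-cell activity typically assumes each cell fires as an independent Poisson process around its tuning curve and picks the position that maximizes the likelihood of the observed spike counts. The toy decoder below illustrates the idea; the tuning curves and spike counts are invented, not the study’s data:

```python
import math

def decode_position(spike_counts, tuning, positions, dt=0.02):
    """Maximum-likelihood position from place-cell spike counts.

    tuning[i][j] is the expected firing rate (Hz) of cell i at position j.
    Assumes independent Poisson spiking and a flat prior over positions.
    """
    best_pos, best_logp = None, -math.inf
    for j, pos in enumerate(positions):
        logp = 0.0
        for i, n in enumerate(spike_counts):
            lam = tuning[i][j] * dt + 1e-9   # expected count in this time bin
            logp += n * math.log(lam) - lam  # Poisson log-likelihood (n! term is constant)
        if logp > best_logp:
            best_pos, best_logp = pos, logp
    return best_pos

# Two invented cells with place fields at x=0 and x=1; cell 0 fires 3
# spikes while cell 1 stays silent, so x=0 is the most likely position.
positions = [0.0, 0.5, 1.0]
tuning = [[20.0, 5.0, 1.0],   # cell 0 prefers x=0
          [1.0, 5.0, 20.0]]   # cell 1 prefers x=1
print(decode_position([3, 0], tuning, positions))  # 0.0
```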

Previously, scientists thought that only a small subset of the bursts in place cells simulated trajectories through open environments. However, we found that the majority of bursts are part of these trajectories. Additionally, the trajectories of these bursts feature momentum, as if the animal were actually moving through space even though it’s stationary. This is interesting because earlier work on activity of place cells during sleep found that the trajectories of those bursts don’t feature momentum. Thus, our findings suggest that bursts of activity in place cells may play a fundamentally different role depending on whether an animal is awake or asleep. Now that we have this information, we can move back to building computational models to understand how place cells help us plan and navigate through the world.

HMNews: Why do you think neuroscience is moving in a computational direction?

Drugowitsch: I think the adoption of more computational tools is in part a response to the many possibilities nowadays for collecting complex data. Previously, if we recorded from a single neuron while an animal did a simple task, we could interpret our data without using complex models. Now, we routinely record from hundreds or thousands of neurons in the brain while animals perform complex tasks, leading to data that can only be analyzed with complex computational models. There has been a realization that most neuroscientists need at least a basic understanding of how these computational models work, which has created a push towards greater literacy in computational neuroscience.

To this end, I co-direct a certificate program in computational neuroscience for graduate students at HMS. The program started because we noticed an increasing demand for students to learn quantitative skills, yet the courses we offered in this area weren’t broad enough. Our aim is to develop new courses that provide students with the skills they need to understand the full array of computational tools being developed to analyze neuroscience data. We also want to increase cohesion of the computational neuroscience community at HMS, and provide more forums where students can discuss questions in the field. 

HMNews: What motivated you to pursue computational neuroscience? 

Drugowitsch: I wanted to become a computational neuroscientist because I strongly believe that understanding the brain requires a complexity of thinking that cannot be achieved by intuition alone—and a lot of traditional experiments rely on intuition. Very often I find that things are different than I expected, which strengthens my belief that we should build formal models of how the brain operates in order to make progress in our understanding. Formulating these models expands our ability to think about complex interactions in the brain that are beyond what we can hold in our heads. We’re outsourcing this complexity to tools that have been developed in math and physics.

In general, I’m driven by curiosity, trying to figure out new things and trying to discover the principles that define how we operate. In my lab, we like to ask specific questions because this is the only way to make experimentally testable predictions. However, we hope to discover general principles that underlie these questions. If we are studying how an animal performs particular behaviors, we try to extract a generalization from that specific situation that we can test in another set of experiments. Computational neuroscience gives us the tools we need to explore these questions.

HMNews: In your work, you often team up with colleagues from other branches of neurobiology. Why?

Drugowitsch: Building theories and running experiments require different sets of skills, so collaborations allow theorists like me to work with gifted experimentalists in a fruitful way.

There are many theories in computational neuroscience that remain untested, so by collaborating with experimentalists we can test those theories to see if they are supported by the data.

In some cases, we work with scientists running experiments with humans. The benefit of human experiments is that the training is fast—humans can perform complex tasks right away. The disadvantage is that it’s hard to look into their brains. For other questions, especially those about specific neural connections, we collaborate with scientists studying animals. For example, we’re working with Rachel Wilson, who studies drosophila [fruit fly] neurophysiology. We are asking, how does a specific neural circuit in the drosophila brain perform specific computations? We hope that the motifs we discover can be generalized across species, including humans.

In my lab, we may be able to develop blue-sky theories, but at the end of the day we need to connect those theories to data gathered in the real world. Working with people who conduct experiments allows us to do that.

This interview was edited for length and clarity.

Scientists unravel new insight into how Pluto’s ice shapes formed

A team of international researchers, including Dr. Adrien Morison from the University of Exeter, has shown how vast ice forms have been shaped in one of the dwarf planet’s largest craters, Sputnik Planitia.

Perhaps the most striking feature on Pluto’s surface, Sputnik Planitia is an impact crater, consisting of a bright plain, slightly larger than France, and filled with nitrogen ice. 

For the new study, researchers used sophisticated supercomputer modeling techniques to show that these ice forms, polygonal in shape, are created by the sublimation of ice – a phenomenon in which solid ice turns directly into a gas without passing through a liquid state.

The research team shows that this sublimation of the nitrogen ice powers convection in the ice layer of Sputnik Planitia by cooling down its surface.
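Whether a cooled-from-above ice layer convects is conventionally judged by its Rayleigh number, which compares buoyancy forcing to viscous and thermal damping; convection sets in once it exceeds a critical value of order a thousand. The back-of-envelope sketch below uses order-of-magnitude placeholder values, not the study’s actual inputs:

```python
def rayleigh_number(rho, g, alpha, delta_T, d, kappa, eta):
    """Rayleigh number of an ice layer: buoyancy forcing divided by
    viscous and thermal damping. Convection is expected once it exceeds
    a critical value of order 1000."""
    return rho * g * alpha * delta_T * d ** 3 / (kappa * eta)

# All numbers below are order-of-magnitude placeholders, NOT study inputs.
Ra = rayleigh_number(
    rho=1000.0,    # nitrogen-ice density, kg/m^3
    g=0.62,        # Pluto's surface gravity, m/s^2
    alpha=2e-3,    # thermal expansivity, 1/K
    delta_T=5.0,   # assumed sublimation-driven surface cooling, K
    d=1000.0,      # assumed ice-layer thickness, m
    kappa=1e-6,    # thermal diffusivity, m^2/s
    eta=1e10,      # assumed viscosity of soft nitrogen ice, Pa*s
)
# Ra comes out far above the critical value, so even modest sublimation
# cooling could plausibly sustain convection under these assumptions.
```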

Dr. Morison, a Research Fellow from Exeter’s Physics and Astronomy department, said: “When the space probe New Horizons performed the only fly-by of Pluto to date, in 2015, the collected data was enough to drastically change our understanding of this remote world.

“In particular, it showed that Pluto is still geologically active despite being far away from the Sun and having limited internal energy sources. This includes Sputnik Planitia, where the surface conditions allow the gaseous nitrogen in its atmosphere to coexist with solid nitrogen.

“We know that the surface of the ice exhibits remarkable polygonal features – formed by thermal convection in the nitrogen ice, constantly organizing and renewing the surface of the ice. However, questions remained about just how this process could occur.”

In the new study, the research team conducted a series of numerical simulations showing that cooling from sublimation can power convection in a way that is consistent with numerous data coming from New Horizons, including the size of the polygons, the amplitude of the topography, and surface velocities.

It is also consistent with the timescale on which climate models predict sublimation of Sputnik Planitia began, around 1–2 million years ago. The simulations also showed that the dynamics of this nitrogen ice layer echo those of Earth’s oceans, being driven by the climate.

Such climate-powered dynamics of a solid layer could also occur at the surface of other planetary bodies, such as Triton (one of Neptune’s moons), or Eris and Makemake (in the Kuiper Belt).

Sublimation-driven convection in Sputnik Planitia on Pluto, by Dr. Morison (University of Exeter), Prof. Labrosse (Geology Laboratory of Lyon, France), and Dr. Choblet (Planetology and Geodynamics Laboratory of Nantes, France), is published in Nature.

Spanish team proposes a new systems immunology approach for COVID-19

Thanks to the large amount of -omics data becoming increasingly available, sophisticated computational models are being developed for fields such as immunology, and the predictions they generate will help identify key molecules in inflammatory processes. The application of such computational systems biology approaches to immunology could lead to novel and more efficacious therapeutic strategies.

“The recent work of our two teams on the modulation of hyper inflammation in COVID-19 illustrates really well how the synergy between experimental and computational researchers can accelerate the discovery of molecules of interest,” explains Prof. Antonio Del Sol, head of the Computational Biology groups at the LCSB and CIC bioGUNE. “By using computational modeling to inform traditional experimental approaches, we confirmed in a few months a potential target for medical intervention in COVID-19 patients. This is indeed very promising.”

Understanding the “cytokine storm” in COVID-19

In recent studies, researchers from the LCSB and CIC bioGUNE - on the computational side - and from the Kanneganti Lab at St. Jude Children’s Research Hospital - on the experimental side - focused on the mechanisms underlying the hyperinflammatory response in COVID-19. Hyperinflammation occurs when the immune response is amplified and maintained by positive feedback loops above the level needed to control the disease. Kanneganti’s lab recently found that in COVID-19, as well as other diseases, this hyperinflammatory “cytokine storm” could be mechanistically defined as a life-threatening condition caused by excessive production of proinflammatory proteins called cytokines, mediated by a form of inflammatory cell death called PANoptosis. In COVID-19, PANoptosis and the concomitant cytokine storm cause organ damage and increase the severity of the symptoms. This makes treatment challenging, as therapeutics need to alleviate inflammation while maintaining the patient’s ability to clear the virus through cell death and other pathways. It is, therefore, crucial to identify the molecules that amplify and maintain the inflammatory response, the first step towards new and potentially life-saving therapeutic strategies.

Two studies identify protein TLR2 as a target

In a first study published in Science Advances, researchers from the Computational Biology groups of the LCSB and CIC bioGUNE used a novel computational method to analyze over 1700 cell-cell interactions and create a comprehensive map of the immune response in the lungs of COVID-19 patients. Their model identified Toll-like Receptor 2 (TLR2) as a molecule that might be able to modulate the inflammatory response, predicting that the inhibition of this protein could disrupt up to 75% of the feedback loops without interfering with the general immune response. The study put TLR2 on the map as a potential target for medical intervention in severe COVID-19 cases.
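The feedback-loop argument can be illustrated with a toy calculation: represent signaling interactions as a directed graph, enumerate its simple cycles, and count how many pass through a candidate node. The miniature network below is invented for illustration, not the study’s 1700-interaction map, and its node names are hypothetical:

```python
def find_cycles(edges):
    """Enumerate the simple cycles of a small directed graph by DFS.

    Each cycle is reported once, starting from its smallest node.
    Brute-force, so suitable only for toy-sized graphs.
    """
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    cycles = []

    def walk(start, node, path):
        for nxt in graph.get(node, []):
            if nxt == start:
                cycles.append(tuple(path))
            elif nxt > start and nxt not in path:
                walk(start, nxt, path + [nxt])

    for start in sorted(graph):
        walk(start, start, [start])
    return cycles

# Invented toy signaling network: three positive feedback loops, two of
# which pass through the candidate receptor node "TLR2".
edges = [("TLR2", "cytokine"), ("cytokine", "TLR2"),
         ("TLR2", "celldeath"), ("celldeath", "cytokine"),
         ("virus", "celldeath"), ("cytokine", "IL6"), ("IL6", "cytokine")]
loops = find_cycles(edges)
through_tlr2 = [c for c in loops if "TLR2" in c]
print(len(through_tlr2), "/", len(loops))  # prints: 2 / 3
```

Removing such a node would break the loops it sits on, which is the intuition behind predicting that inhibiting one well-placed receptor could disrupt a large share of the feedback loops.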

Separately, the team of Dr. Thirumala-Devi Kanneganti from the Department of Immunology of St. Jude Children’s Research Hospital published a study in Nature Immunology that independently suggested that TLR2 might act as a key modulator of COVID-19-induced hyperinflammation. Using in vitro and in vivo experiments, the researchers found that increased expression of TLR2 in the blood of patients with COVID-19 correlated with disease severity and that, upon infection by the virus, TLR2 mediated the production of cytokines. The study also showed that treatment of transgenic mice with a TLR2 inhibitor protected the animals against SARS-CoV-2-mediated inflammatory cytokine production and mortality. “Experimental validation of computationally derived biomarkers is critical to provide multiple lines of evidence to support the proof-of-concept for the utility of targeting TLR2 to modulate inflammation. It is imperative to combine computational and experimental approaches to understand mechanisms involved in inflammatory processes,” underlines Dr. Kanneganti.

A coordinated effort to achieve full potential

This example is far from the only one: In a growing number of studies, systems immunology approaches are being successfully employed to help predict novel therapeutic targets for modulating uncontrolled immune responses. “Computational modeling and experimental validation will become key partnerships in biomedical research and should be systematically developed to achieve their full potential,” details Dr. Ilya Potapov, member of the Computational Biology group at the LCSB.

In their recent paper published in November in Trends in Immunology, the three co-authors mention the challenges researchers will have to tackle when building computational models in the context of hyperinflammation – such as technological limitations, a shortage of good experimental models, and mutual unawareness – and postulate that experimental and computational efforts should be synergized from the outset. “The technological advances have set the stage for us. Now, we need to work together to build accurate computational models, define the necessary data, and design experiments to validate the computational predictions. This is the key to designing novel and more efficacious therapeutic strategies,” concludes Prof. Del Sol.

Exotic quantum particles research paves the way for future quantum apps with less magnetic field required

Exotic quantum particles and phenomena are like the world’s most daring elite athletes. Like the free solo climbers who scale impossibly sheer cliff faces without a rope or harness, only the most extreme conditions will entice them to show up. For exotic phenomena like superconductivity or particles that carry a fraction of the charge of an electron, that means extremely low temperatures or extremely high magnetic fields.

But what if you could get these particles and phenomena to show up under less extreme conditions? Much has been made of the potential of room-temperature superconductivity, but generating exotic fractionally charged particles at low-to-zero magnetic field is equally important to the future of quantum materials and applications, including new types of quantum computing.

Now, a team of researchers from Harvard University led by Amir Yacoby, Professor of Physics and Applied Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and Ashvin Vishwanath, Professor of Physics in the Department of Physics, in collaboration with Pablo Jarillo-Herrero at the Massachusetts Institute of Technology, have observed exotic fractional states at a low magnetic field in twisted bilayer graphene for the first time.

“One of the holy grails in the field of condensed matter physics is getting exotic particles at low to zero magnetic fields,” said Yacoby, senior author of the study. “There have been theoretical predictions that we should be able to see these bizarre particles at low to zero magnetic fields, but no one has been able to observe them until now.”

The researchers were interested in a specific exotic quantum state known as fractional Chern insulators. Chern insulators are topological insulators, meaning they conduct electricity on their surface or edge, but not in the middle. 

In a fractional Chern insulator, electron interactions form what are known as quasiparticles: particles that emerge from complex interactions between large numbers of other particles. Sound, for example, can be described as a quasiparticle because it emerges from the complex interactions of particles in a material. Like fundamental particles, quasiparticles have well-defined properties like mass and charge.

In fractional Chern insulators, electron interactions are so strong within the material that quasiparticles are forced to carry a fraction of the charge of normal electrons. These fractional particles have bizarre quantum properties that could be used to create robust quantum bits that are extremely resilient to outside interference.

To build their insulator, the researchers used two sheets of graphene twisted together at the so-called magic angle. Twisting unlocks new and different properties in graphene, including superconductivity, as first discovered by Jarillo-Herrero’s group at MIT, and states known as Chern bands, which hold great potential to generate fractional quantum states, as shown theoretically by Vishwanath’s group at Harvard. 

Think of these Chern bands like buckets that fill up with electrons. 

“In previous studies, you needed a large magnetic field to generate these buckets, which are the topological building blocks you need to get these exotic fractional particles,” said Andrew T. Pierce, a graduate student in Yacoby’s group and co-first author of the paper. “But magic-angle twisted bilayer graphene already has these useful topological units built in at zero magnetic field.”

To generate fractional states, the researchers need to fill the buckets a fraction of the way with electrons. But here’s the hitch: for this to work, all the electrons in a bucket must have nearly the same properties. In twisted bilayer graphene, they don’t. In this system, electrons have different levels of a property known as the Berry curvature, which causes each electron to experience a magnetic field tied to its particular momentum. (It’s more complicated than that, but what isn’t in quantum physics?) 

When filling up the buckets, the electrons’ Berry curvature needs to be evened out for the fractional Chern insulator state to appear. 

That’s where a small applied magnetic field comes in. 

“We showed that we can apply a very small magnetic field to evenly distribute Berry curvature among electrons in the system, allowing us to observe a fractional Chern insulator in the twisted bilayer graphene,” said Yonglong Xie, a postdoctoral fellow at SEAS and co-first author of the paper.  “This research sheds light on the importance of the Berry curvature to realize fractionalized exotic states and could point to alternative platforms where Berry curvature isn’t as heterogeneous as it is in twisted graphene.”

"Twisted bilayer graphene is the gift that keeps on giving, and this discovery of fractional Chern insulators is arguably one of the most significant advances in the field,” said Vishwanath, senior author of the study. “It is astonishing to think that this wonder material is ultimately made of the same stuff as your pencil tip."

"The discovery of low magnetic field fractional Chern insulators in magic-angle twisted bilayer graphene opens a new chapter in the field of topological quantum matter,” said Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT and senior author of the study. “It offers the realistic prospect of coupling these exotic states with superconductivity, possibly enabling the creation and control of even more exotic topological quasiparticles known as anyons.”

Animated sequence of the VLTI images of stars around the Milky Way’s central black hole

This animation shows the orbits of the stars S29 and S55 as they move close to Sgr A* (centre), the supermassive black hole at the heart of the Milky Way. As we follow the stars along in their orbits, we see real images of the region obtained with the GRAVITY instrument on ESO’s Very Large Telescope Interferometer (VLTI) in March, May, June and July 2021. In addition to S29 and S55, the images als...
