Quon wins NIH Director’s Award to develop framework to pinpoint disease-causing genetic mutations

Gerald Quon, an assistant professor in the Department of Molecular and Cellular Biology in the College of Biological Sciences at the University of California, Davis, has received a Director’s New Innovator Award from the National Institutes of Health (NIH). The award will support the development of a computational framework for characterizing how genetic variants associated with the risk of psychiatric diseases like schizophrenia and bipolar disorder work at the cellular level.

Gerald Quon with one of the graphics cards he uses to build computer networks for his research. Quon uses computational models to understand the relationship between genetics and cellular behavior that can lead to schizophrenia or bipolar disorder.

“The science put forward by this cohort is exceptionally novel and creative and is sure to push at the boundaries of what is known,” said NIH Director Francis S. Collins in a news release announcing the awards. “These visionary investigators come from a wide breadth of career stages and show that groundbreaking science can happen at any career level given the right opportunity.”

Quon’s project, entitled “Linking genetics to cellular behavior and disease via multimodal data integration,” will receive $1.5 million in support over five years. The project aims to characterize the relationship between gene regulation, neuron firing patterns, and the morphology of those neurons. 

“A lot of people who study the genetics of different disorders in humans look at the impact of genetic variants on the molecular level,” said Quon. “We're trying to connect what happens at the molecular level in neurons to cellular-level phenotypes.”

Learning how genetic variants associated with different psychiatric disorders work at the cellular level requires significant amounts of computing power. With the award, Quon and his colleagues can acquire the hardware needed to build, train and deploy complex digital models. 

“We use a lot of graphics cards and those get expensive quickly,” Quon said. 

Identifying targets for therapy

The award will also support efforts to validate predictions about which genetic variants are harmful and which are benign. Quon and his colleagues will be able to introduce specific DNA mutations into a neuron and evaluate how they impact a neuron’s response patterns. 

“Hundreds of places in the genome might harbor mutations that affect our risk of disease,” said Quon, “but we don’t actually know which mutations are the most important to target with therapies. Our work is trying to identify which mutations may have the biggest effect on neuron function and should therefore be prioritized for therapy.” 

This type of research is often stymied by the lack of an integral component: live neurons. 

“Some types of neuron function studies need access to live neurons; it’s hard to get access to live human neurons, so those datasets are rare and small,” said Quon. 

Small datasets are a huge obstacle to data analysis, so the computational framework Quon hopes to develop will enable the integration of data from both live and postmortem samples, from both humans and mice. Ultimately, using all this extra data will allow researchers to more accurately pinpoint which mutations result in disorders like schizophrenia.  

“By using our proposed framework to pool additional data from other sources,” said Quon, “we can boost the amount of statistical power we have, and we’ll need fewer neurons to do proper data analysis.” 

Machine learning helps reveal cells' inner structures in new detail

Janelia’s COSEM Project Team has created a set of tools to make annotated 3D images of cells, showing the relationships between different organelles.

Open any introductory biology textbook, and you’ll see a familiar diagram: A blobby-looking cell filled with brightly colored structures – the inner machinery that makes the cell tick. 

Cell biologists have known the basic functions of most of these structures, called organelles, for decades. The bean-shaped mitochondria make energy, for example, and lanky microtubules help cargo zip around the cell. But for all that scientists have learned about these miniature ecosystems, much remains unknown about how their parts all work together.

The COSEM Project Team has developed a set of algorithms that can map the organelles in microscope images of cells, creating detailed 3D representations of cells’ inner workings. Credit: COSEM Project Team

Now, high-powered microscopy – plus a heavy dose of machine learning – is helping to change that. New computer algorithms can automatically identify some 30 different kinds of organelles and other structures in super-high-resolution images of entire cells, a team of scientists at the Howard Hughes Medical Institute’s Janelia Research Campus reports.

The detail in these images would be nearly impossible to parse by hand throughout the entire cell, says Aubrey Weigel, who led the Janelia Project Team, called COSEM (for Cell Organelle Segmentation in Electron Microscopy). The data for just one cell is made up of tens of thousands of images; tracing all a cell’s organelles through that collection of pictures would take one person more than 60 years. But the new algorithms make it possible to map an entire cell in hours, rather than years. 


“By using machine learning to process the data, we felt we could revisit the canonical view of a cell,” Weigel says.

Janelia scientists also released a data portal, OpenOrganelle, where anyone can access the datasets and tools they’ve created.

These resources are invaluable for scientists studying how organelles keep cells running, says Jennifer Lippincott-Schwartz, a senior group leader and interim head of Janelia’s new 4D Cellular Physiology research area who is already using the data in her research. “What we haven’t really known is how different organelles and structures are arranged relative to each other – how they’re touching and contacting each other, how much space they occupy,” she says.

For the first time, those hidden relationships are visible. 

 
Using specialized electron microscopes, scientists capture thousands of images of cells, layer by layer. Within parts of those images, scientists painstakingly trace individual organelles by hand.
Stacking up the traced layers creates a 3D dataset. Then, scientists train machine learning algorithms on the data, teaching the computer to recognize different organelles. With enough example data, the computer can efficiently pick organelles out of images it’s never seen before, mapping organelle boundaries as shown here.
Those images are compiled into a 3D dataset with each organelle identified, revealing cells’ interiors in full detail. Additional tools can home in on specific organelles and pinpoint the places they interact. This image shows the interaction between mitochondria (orange) and the endoplasmic reticulum (green).

 

Detailed data

The COSEM team’s journey started with data collected by high-powered electron microscopes housed in a special vibration-proof room at Janelia. 

For the past ten years, these microscopes have been churning out high-resolution snapshots of the fly brain. Janelia senior scientist Shan Xu and senior group leader Harald Hess have engineered these scopes to mill off super-thin slivers of the fly brain using a focused beam of ions – an approach called FIB-SEM imaging. The scopes capture images layer by layer, and then computer programs stitch those images together into a detailed 3D representation of the brain. Based on these data, Janelia researchers released the most detailed neural map of the fly brain yet.

While imaging the fly brain, Hess and Xu’s team also looked at other samples. Over time, they amassed a collection of data from many kinds of cells, including mammalian cells. “We thought that this detailed imaging of whole cells might be of larger interest to cell biologists,” Hess says. 


Weigel, then a postdoc in Lippincott-Schwartz’s lab, began mining those data for her research. “The resolving power of the FIB-SEM imaging was amazing, and we were able to see things at a level we couldn’t have imagined before,” says Weigel, “but there was more information in one sample than I could analyze in several lifetimes.” Realizing that others at Janelia were working on computational projects that might speed things up, she began organizing a collaboration. 

“All of the pieces were here at Janelia,” she says, and forming the COSEM project team aligned them towards a common goal.

Setting boundaries

Larissa Heinrich, a graduate student in group leader Stephan Saalfeld’s lab, had previously developed machine learning tools that could pinpoint synapses, the connections between neurons, in electron microscope data. For COSEM, she adapted those algorithms to instead map out, or segment, organelles in cells.

Saalfeld and Heinrich’s segmentation algorithms worked by assigning each pixel in an image a number. The number reflected how far the pixel was from the nearest synapse. An algorithm then used those numbers to ID and label all the synapses in an image. The COSEM algorithms work similarly, but with more dimensions, Saalfeld says. They classify every pixel by its distance to each of 30 different kinds of organelles and structures. Then, the algorithms integrate all of those numbers to predict where organelles are positioned.
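The distance-based labeling idea can be illustrated with a toy sketch (this is an illustration of the general technique, not Janelia's code; the signed-distance convention, with negative values inside an organelle, is an assumption). Given per-class predicted distance maps, each pixel is assigned to the class whose boundary it lies deepest inside:

```python
import numpy as np

def label_from_distance_maps(dist_maps, inside_threshold=0.0):
    """Toy distance-map segmentation (illustration only, not COSEM's code).

    dist_maps: array of shape (n_classes, H, W); dist_maps[c, y, x] is a
        predicted signed distance from pixel (y, x) to the boundary of
        organelle class c (negative = inside, an assumed convention).
    Returns an (H, W) label map: the class the pixel is deepest inside,
    or -1 (background) if it is inside none. Taking a single argmin per
    pixel also enforces mutual exclusivity -- a pixel cannot end up
    inside two organelles at once.
    """
    d = np.asarray(dist_maps, dtype=float)
    best = d.argmin(axis=0)                       # class with smallest distance
    labels = np.where(d.min(axis=0) < inside_threshold, best, -1)
    return labels

# Two fake 4x4 distance maps: class 0 "owns" the left half, class 1 the right
d0 = np.array([[-2.0, -1.0, 1.0, 2.0]] * 4)
d1 = np.array([[2.0, 1.0, -1.0, -2.0]] * 4)
print(label_from_distance_maps(np.stack([d0, d1]))[0])  # → [ 0  0  1  1]
```

In the real pipeline the distance maps are produced by a trained network and there are some 30 classes rather than two, but the per-pixel assignment step follows the same logic.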

Using data from scientists who have manually traced organelle boundaries and assigned numbers to pixels, the algorithm can learn that particular combinations of numbers are unreasonable, Saalfeld says. “So, for example, a pixel can’t be inside a mitochondrion at the same time it’s inside the endoplasmic reticulum.”

To answer questions like how many mitochondria are in a cell, or what their surface area is, the algorithms need to go even further, says group leader Jan Funke. His team built algorithms that incorporate prior knowledge about organelles’ characteristics. For example, scientists know that microtubules are long and thin. Based on that information, the computer can make judgments about where a microtubule begins and ends. The team can observe how such prior knowledge affects the computer program’s results – whether it makes the algorithm more or less accurate – and then make adjustments where necessary. 

After two years of work, the COSEM team has landed on a set of algorithms that generate good results for the data that have been collected so far. Those results are the important groundwork for future research at Janelia, says Weigel.  A new effort headed by Xu is taking FIB-SEM imaging to even greater levels of detail. And another soon-to-launch project team named CellMap will further refine COSEM’s tools and resources to create a more expansive cell annotation database, with detailed images of many more types of cells and tissue.

Together, those advances will support Janelia’s next 15-year research area, 4D Cellular Physiology – an effort led on an interim basis by Lippincott-Schwartz to understand how cells interact with each other within each of the many different kinds of tissue that make up an organism, says Wyatt Korff, Director of Project Teams at Janelia.

With new resources like those created by the COSEM team and the Enhanced FIB-SEM Technology group, Korff says, “we can actually begin to answer those questions, in a way that we haven’t had access to in the past.”

Swiss researchers develop new high-resolution PV production forecasts

In May 2021, the International Energy Agency (IEA) announced in its flagship report the measures it believes governments and policymakers should take to limit global warming to 1.5˚C. The report highlighted that guaranteeing stable and affordable energy supplies, now and in the future, will require a massive expansion of clean energy resources.

PVLIVE: high-resolution, large-scale data-driven photovoltaic (PV) system production forecasting

At CSEM – the prestigious Swiss research and development organization – limiting global warming to 1.5˚C is not simply talk. Under the leadership of Pierre-Jean Alet, an experienced sector head, CSEM’s researchers are working on software that will drastically improve the management of PV-produced energy, from local energy communities to national grids. “At CSEM, we listen and respond to the very real need of grid operators, who can’t always accurately predict PV energy generation. Currently, the operators must intervene to cover any energy shortfalls caused by inaccurate predictions and re-balance energy across their systems using alternative resources. This can be costly and lead to additional carbon emissions,” he notes. “Accurate and robust PV production forecasts are thus an important and necessary tool to increase the amount of clean energy in the power system and ensure stable energy distribution across the power grid. They also help minimize maintenance and balancing costs – this is why we developed PVLIVE,” enthuses Alet.

Turning PV systems into ground-level weather stations

PVLIVE doesn’t just offer forecasting for a single site or a few localized sites; it provides optimized PV production forecasting across entire countries, and its algorithms were tested for over a year on exceptionally large datasets. Uniquely compared to other forecasting systems, “CSEM’s solution is based on graph machine learning, the kind of AI algorithms that power social networks and recommendation systems in e-commerce,” says Rafael Carrillo, a Senior Researcher at CSEM. “We used this technique creatively to exploit the spatio-temporal relationships between different PV systems: intuitively, if you are under the dominant wind direction, what you see on one system will eventually show up at another system further down the line. Effectively with PVLIVE, we turn PV systems into ground-level weather stations to forecast the production of their neighbors,” concludes Carrillo.
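A highly simplified sketch of that intuition (not CSEM's model, which uses graph machine learning rather than this fixed averaging): a downwind system's next value can be estimated from its graph neighbors' lagged output, weighted by hypothetical edge weights encoding the dominant wind direction:

```python
import numpy as np

def neighbor_forecast(history, adjacency, lag=1):
    """Toy spatio-temporal forecast: predict each PV system's next value
    as a weighted average of its graph neighbors' output `lag` steps ago.
    (Illustration of the intuition only; PVLIVE's actual graph machine
    learning model is far more sophisticated than this averaging.)

    history: (T, N) array of past production for N systems
    adjacency: (N, N) matrix of edge weights, rows summing to 1
    """
    h = np.asarray(history, dtype=float)
    a = np.asarray(adjacency, dtype=float)
    return a @ h[-lag]  # each row mixes the lagged values of its neighbors

# Two systems along the dominant wind direction: system 1 tends to see
# what system 0 saw one step earlier, so its edge weight points upwind.
history = np.array([[1.0, 0.0],
                    [2.0, 1.0],
                    [3.0, 2.0]])
adjacency = np.array([[1.0, 0.0],   # system 0: persistence on itself
                      [1.0, 0.0]])  # system 1 copies upwind system 0
print(neighbor_forecast(history, adjacency))  # → [3. 3.]
```

In effect, each system acts as a "ground-level weather station" for the systems downwind of it, which is the relationship the graph edges encode.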

Using this technique, CSEM’s data scientists can provide forecasts across hundreds of PV systems that show a daytime normalized root-mean-square error of 13.8% with a forecasting horizon of 6 h in 15 min steps. This outperforms other platforms based on numerical weather forecasts and machine learning. Through a follow-up project, PVLIVE is now available as an interactive platform, which can be viewed in operation across the Netherlands alongside the project’s paper.
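For readers unfamiliar with the metric, a minimal sketch of how a daytime normalized RMSE could be computed (this is an illustration, not CSEM's evaluation code; normalizing by installed capacity is an assumption, as other normalizations such as mean production are also common):

```python
import numpy as np

def daytime_nrmse(actual, forecast, capacity, daytime_mask):
    """Normalized RMSE computed over daytime steps only.

    actual, forecast: PV production at each time step (e.g. 15-min steps)
    capacity: value used for normalization (installed capacity here;
              an assumption -- the paper's exact convention may differ)
    daytime_mask: True for daytime steps, so nighttime zeros don't
              artificially deflate the error
    """
    a = np.asarray(actual, dtype=float)[daytime_mask]
    f = np.asarray(forecast, dtype=float)[daytime_mask]
    rmse = np.sqrt(np.mean((f - a) ** 2))
    return rmse / capacity

# Toy example: a short day with a 10 kW system
actual = [0.0, 2.0, 8.0, 10.0, 8.0, 2.0, 0.0]
forecast = [0.0, 3.0, 8.0, 9.0, 8.0, 3.0, 0.0]
daylight = [v > 0 for v in actual]  # crude daytime mask
print(daytime_nrmse(actual, forecast, capacity=10.0,
                    daytime_mask=daylight))  # ≈ 0.077, i.e. 7.7%
```

Masking out nighttime matters: including hours when production is trivially zero would make any forecast look better than it is during the hours that count.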