Researchers, led by a team from the Beckman Institute at the University of Illinois, combined the power of two computational programs to determine the atomic structure of the abiological molecule cyanostar. This breakthrough will allow researchers to investigate the structure of more abiological molecules, a class whose structures remain largely unexplored.

A team from the University of Illinois at Urbana-Champaign and Indiana University combined two techniques to determine the structure of cyanostar, a new abiological molecule that captures unwanted negative ions in solutions.

When Semin Lee, a chemist and Beckman Institute postdoctoral fellow at Illinois, first created cyanostar at Indiana University, he knew the chemical properties, but couldn't determine the precise atomic structure. Lee synthesized cyanostar for its unique ability to bind with large, negatively charged ions, which could have applications such as removing perchlorate, a commercially produced contaminant that has been associated with negative effects on human health, from water and soil.

Determining the structure of a molecule, atom by atom, is important for both chemists and biologists to understand how molecules work and interact with surrounding material. In the past, each discipline had different ways to determine the structure based on the molecule's size and complexity.

To achieve the highest degree of accuracy, however, scientists have combined methods from both disciplines to determine the complicated structure of cyanostar, a symmetrical, five-sided macrocycle that is easy to synthesize and could also be useful in lithium ion batteries. These results are detailed in a paper recently published in the Journal of the American Chemical Society.

"There are two worlds which determine structure--the world of biology, which has big systems but often doesn't generate precise structural calculations," said Klaus Schulten, head of the Theoretical and Computational Biophysics Group (TCBG) at the Beckman Institute for Advanced Science and Technology at Illinois.

"Then there's the chemical world of smaller molecules, which results in super-precise calculations, but often can't account for disorder in the molecule."

Disorder--when a molecule has more than one way it can arrange itself--challenges traditional small molecule methodologies. Cyanostar, for example, can be arranged in four different ways.

To overcome the challenges of the disorder, Lee collaborated with Schulten's group, including Abhishek Singharoy, another Beckman Institute postdoctoral fellow who specializes in a supercomputer simulation method called xMDFF (x-ray crystallography Molecular Dynamics Flexible Fitting). The method uses x-ray crystallography data to refine a molecule's structure.

"In x-ray crystallography, we don't look at a single molecule," said Singharoy. "We look at a bunch packed into a single crystal, and shine an X-ray beam on it. This produces a diffraction pattern of the crystal structure. It's still tricky to get a good image with a disordered molecule, so we had to use another program to refine the structure further."

The team combined xMDFF with PHENIX, a popular tool in the crystallography community developed by a group at the University of California, Berkeley.

"It's really an excellent combination of the two programs. We wouldn't have gotten the results we did without this computational partnership," Schulten said. "This entire project is really a wonderful example of how today's science is being done. The disciplines joined, the teams joined, and the computational programs joined. When you are open-minded and look to see what others can do, you are really better off."

The team, with the help of Chris Mayne, a research programmer in TCBG, also created specialized force fields: sets of rules that tell the simulation program how the atoms in a molecule interact.

"The connections in biological (large) molecules are well studied because they are so common--they're lipids, nucleic acids, and proteins," said Mayne. "But cyanostar is an abiological molecule, so it's not constructed from the same set of biological building blocks. It required a customized force field, so we had to start from scratch."
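At its simplest, one such force-field "rule" looks like the toy bond term below. The numbers are generic placeholders in a CHARMM-style convention, not the actual parameters Mayne's team derived for cyanostar:

```python
# One force-field "rule" in miniature: a harmonic bond-stretch term,
# E = k * (r - r0)^2, with r in angstroms and k in kcal/mol/A^2.
# The parameter values are illustrative, not cyanostar's.
def bond_energy(r, r0=1.39, k=469.0):
    """Energy penalty for stretching or compressing a bond away
    from its equilibrium length r0."""
    return k * (r - r0) ** 2

relaxed = bond_energy(1.39)     # bond at equilibrium length
stretched = bond_energy(1.45)   # slightly stretched bond
```

A full force field is a large collection of such terms — bonds, angles, torsions, and non-bonded interactions — and "starting from scratch" means deriving every parameter for the new molecule.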

With these newly developed force fields, the researchers not only solved the structure of cyanostar but can now examine many more disordered abiological molecules.

"It opens an entire avenue. We can use these sorts of techniques to obtain the structures of multiple classes of abiological molecules," said Singharoy. "So that's where we'd like to go moving forward--use it as a general tool to start solving these other molecules."

Jinghui Zhang, Ph.D., an international expert in the analysis of genomic data, will lead the growth of an innovative effort in newly dedicated space supported by Brooks Brothers

St. Jude Children's Research Hospital officials have named Jinghui Zhang, Ph.D., as the first chair of the Department of Computational Biology. She will hold the St. Jude Endowed Chair in Bioinformatics.

Computational biology applies mathematics and computer science to the study of genomics, systems biology, biological image analysis and structural and chemical biology. The department will occupy an entire floor in the Kay Research and Care Center, the newest building on the St. Jude campus. The 28,700-square-foot space will be named the Brooks Brothers Computational Biology Center and hold both laboratories and offices, allowing seamless integration of computational scientists with experimentalists. It will also be home to a state-of-the-art genome sequencing laboratory. Under Zhang, the department will grow to include nine faculty members during the next several years.

"Dr. Zhang has created new computational methods for analyzing genomic data, leading to new directions in research involving high-risk leukemia, brain and solid tumors," said James R. Downing, M.D., St. Jude president and chief executive officer. "Her appointment as chair will help us further the vision and direction of computational biology into a range of projects that will play key roles in research at St. Jude."

Computational biology efforts at St. Jude took shape five years ago with the creation of the St. Jude–Washington University Pediatric Cancer Genome Project (PCGP), an unprecedented effort to map the genomes of some of the deadliest childhood cancers.  Data generated from the project—100 trillion-plus pieces—encompass the complete normal and cancer genomes of 700 children and adolescents with 23 different childhood cancers. 

Zhang joined St. Jude in 2010, leading the effort to analyze PCGP data and the creation of several new computational tools that have been adopted by biologists worldwide. Her work has helped define the landscape of mutations that underlie pediatric cancers, resulting in the identification of new pediatric cancer genetic subtypes, insights into cancer-drug resistance and metastatic behavior, and new therapeutic targets against which drugs can be developed.

Prior to St. Jude, Zhang led genetic variation analysis of the first assembled human genome. She also contributed to key discoveries in the pilot phases of the National Cancer Institute's Cancer Genome Atlas Project and the Therapeutically Applicable Research to Generate Effective Treatment (TARGET) initiative.

Zhang received her undergraduate degree from Fudan University in Shanghai and her doctorate from the University of Connecticut in Storrs, Conn.

CAPTION: Professor Brendan Frey (centre-right) and colleagues at the University of Toronto Faculty of Applied Science & Engineering. CREDIT: Roberta Baker/U of T Engineering

Evolution has altered the human genome over hundreds of thousands of years -- and now humans can do it in a matter of months.

Faster than anyone expected, scientists have discovered how to read and write the DNA code in a living body, using hand-held genome sequencers and gene-editing systems. But knowing how to write is different from knowing what to write. To diagnose and treat genetic diseases, scientists must predict the biological consequences of both existing mutations and those they plan to introduce. 

Deep Genomics, a start-up company spun out of research at the University of Toronto, is on a mission to predict the consequences of genomic changes by developing new deep learning technologies.

"Our vision is to change the course of genomic medicine," says Brendan Frey, the company's president and CEO, who is also a professor in The Edward S. Rogers Sr. Department of Electrical & Computer Engineering at the University of Toronto and a Senior Fellow of the Canadian Institute for Advanced Research. "We're inventing a new generation of deep learning technologies that can tell us what will happen within a cell when DNA is altered by natural mutations, therapies or even by deliberate gene editing."

Deep Genomics is the only company to combine more than a decade of world-leading expertise in both deep learning and genome biology. "Companies like Google, Facebook and DeepMind have used deep learning to hugely improve image search, speech recognition and text processing. We're doing something very different. The mission of Deep Genomics is to save lives and improve health," says Frey. 

Deep Genomics is now releasing its first product, called SPIDEX, which provides information about how hundreds of millions of DNA mutations may alter splicing in the cell, a process that is crucial for normal development. Because errant splicing is behind many diseases and disorders, including cancers and autism spectrum disorder, SPIDEX has immediate and practical importance for genetic testing and pharmaceutical development. The science validating the SPIDEX tool was described in the January 9, 2015 issue of the journal Science.

"The genome contains a catalogue of genetic variation that is our DNA blueprint for health and disease," says Stephen Scherer, director of The Centre for Applied Genomics at SickKids and the McLaughlin Centre at the University of Toronto, a CIFAR Senior Fellow, and an advisor to Deep Genomics. "Brendan has put together a fantastic team of experts in artificial intelligence and genome biology--if anybody can decode this blueprint and harness it to take us into a new era of genomic medicine, they can." 

Until now, geneticists have spent decades experimentally identifying and examining mutations within specific genes that can be clearly connected to disease, such as the BRCA-1 and BRCA-2 genes for breast cancer. However, the number of mutations that could lead to disease is vast and most have not been observed before, let alone studied. 

These mystery mutations pose an enormous challenge for current genomic diagnosis. Labs send the mutations they've collected to Deep Genomics, and the company uses its proprietary deep learning system, which includes SPIDEX, to 'read' the genome and assess how likely each mutation is to cause a problem. It can also connect the dots between a variant of unknown significance and a variant that has been linked to disease. "Faced with a new mutation that's never been seen before, our system can determine whether it impacts cellular biochemistry in the same way as some other highly dangerous mutation," says Frey.
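Stripped to its simplest form, scoring a variant means scoring the mutant sequence against a model of a functional element and comparing with the reference. The sketch below uses a tiny hand-made position weight matrix for a donor-like splice motif; it is only a cartoon of the approach, not SPIDEX or its deep learning models, and the matrix values are invented:

```python
# Toy variant scoring (not SPIDEX): how much does a single mutation
# shift the strength of a splice-donor-like motif? Scored with a
# tiny, hand-made position weight matrix (PWM); values are invented.
import math

PWM = [
    {'A': 0.1, 'C': 0.1, 'G': 0.7, 'T': 0.1},  # position 0: 'G' of the
    {'A': 0.1, 'C': 0.1, 'G': 0.1, 'T': 0.7},  # position 1: 'GT' donor core
    {'A': 0.6, 'C': 0.1, 'G': 0.2, 'T': 0.1},
    {'A': 0.3, 'C': 0.1, 'G': 0.5, 'T': 0.1},
]
BACKGROUND = 0.25  # uniform base frequencies

def motif_score(seq):
    """Log-odds of seq under the motif vs. the uniform background."""
    return sum(math.log2(PWM[i][b] / BACKGROUND) for i, b in enumerate(seq))

def delta_score(ref, alt):
    """Positive = mutation strengthens the motif; negative = weakens it."""
    return motif_score(alt) - motif_score(ref)

ref, alt = "GTAG", "GCAG"   # G>C mutation hitting the donor core
effect = delta_score(ref, alt)
```

Deep learning systems replace the hand-made matrix with models learned from data, but the reference-versus-mutant comparison is the same basic move.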

Deep Genomics is committed to supporting publicly funded efforts to improve human health. "Soon after our Science paper was published, medical researchers, diagnosticians and genome biologists asked us to create a database to support academic research," says Frey. "The first thing we're doing with the company is releasing this database--that's very important to us."

"Soon, you'll be able to have your genome sequenced cheaply and easily with a device that plugs into your laptop. The technology already exists," explains Frey. "When genomic data is easily accessible to everyone, the big questions are going to be about interpreting the data and providing people with smart options. That's where we come in."

Deep Genomics envisions a future where computers are trusted to predict the outcome of experiments and treatments, long before anyone picks up a test tube. To realize that vision, the company plans to grow its team of data scientists and computational biologists. Deep Genomics will continue to invent new deep learning technologies and work with diagnosticians and biologists to understand the many complex ways that cells interpret DNA, from transcription and splicing to polyadenylation and translation. Building a thorough understanding of these processes has massive implications for genetic testing, pharmaceutical research and development, personalized medicine and improving human longevity.

First the scaffold is cracked, then defective parts are removed: Cells repair damaged DNA by a different mechanism than previously assumed, as LMU chemists have shown.

Defects in DNA can cause serious harm to an organism, including cell death or the development of cancer. Efficient repair mechanisms are therefore of vital importance. LMU chemist Professor Christian Ochsenfeld, Chair of Theoretical Chemistry at LMU, and Dr. Keyarash Sadeghian from his group have explained for the first time in detail how a human DNA repair enzyme works. Their supercomputer simulations show that the repair process is different from what was previously thought. The scientists have reported their results in the current issue of the Journal of the American Chemical Society.

DNA consists of certain basic building blocks, each consisting of a nucleotide base, a sugar, and a phosphate group. The genetic blueprints are encoded in the sequence of nucleotide bases. The sugars are bound together by the phosphate groups, forming the backbone of the DNA, and each sugar has a nucleotide base attached to it. Reactive oxygen species, which arise in every cell as a by-product of respiration, attack DNA. Often, they attack the nucleotide base guanine and oxidize it to a so-called 8OG base. This defect can lead to faulty DNA replication and thus to deleterious mutations. The job of DNA repair enzymes is therefore to recognize such bases, bind them in their reactive centres, and remove them from the DNA strand.

“It is really remarkable that even if the undamaged and the damaged guanine are bound in the active centre and assume identical positions, only the oxidized form of guanine is excised from DNA by the human repair enzyme hOGG1”, says Sadeghian, first author of the study.

Taking a detour

By running complex quantum mechanical supercomputer simulations developed in Ochsenfeld’s group, the scientists have now managed for the first time to explain how the repair enzyme distinguishes between a normal and an oxidized base. The trick here is that the enzyme takes a detour. “Contrary to previous assumptions that the oxidized form of guanine has to be activated first for the repair to take place, we have now shown that the sugar bound to it plays a crucial role in the first step,” Sadeghian reports. “The repair enzyme first opens the ring structure of the sugar by gripping it from both sides simultaneously, like a pair of tongs. This step only works if the sugar is bound to the oxidized form of the base. If the normal guanine is bound, then the enzyme is halted and cannot continue its activity.” Opening the sugar destabilizes the otherwise highly stable chemical bond between the oxidized nucleotide base and the DNA strand, and the bond is then broken in further steps.

The human repair enzyme hOGG1 is not the only one to follow this clever strategy: A bacterial repair enzyme with a very different structure does so as well, as the scientists have shown. “Our finding that DNA repair enzymes have found a detour and don’t attack their target object directly in the first step brings new perspectives for understanding these processes,” Ochsenfeld says. “With our computer simulations, we can for the first time follow chemical reactions that occur with such high complexity in nature that they cannot always be captured experimentally. This means we can hopefully clarify in the future whether these DNA repair mechanisms are also used by other enzymes with similar functions.”

The U-M team’s computer model, shown here, projects the amount of surfactant medication delivered to an adult human lung using 280 milliliters of liquid, the same amount used in the successful 1997 surfactant replacement therapy studies. The colored bubbles represent the surfactant medication; red areas receive the most medication, while blue areas receive less. Image credit: James Grotberg, U-M Engineering

The first computer model that predicts the flow of liquid medication in human lungs is providing new insight into the treatment of acute respiratory distress syndrome.

University of Michigan researchers are using the new technology to uncover why a treatment that saves the lives of premature babies has been largely unsuccessful in adults.

Acute respiratory distress syndrome, or ARDS, is a life-threatening inflammation of the respiratory system that kills 74,000 adults each year in the United States alone. It's most common among patients with lung injury or sepsis, a whole-body inflammation caused by infection.

The treatment, called surfactant replacement therapy, delivers a liquid medication into the lungs that makes it easier for them to inflate. It's widely used to treat a similar condition in premature babies, who sometimes lack the surfactant necessary to expand their lungs.

The treatment has contributed to a dramatic reduction in mortality rates of premature babies. But attempts to use it in adults have been largely unsuccessful despite nearly two decades of research.

"The medication needs to work its way from the trachea to tiny air sacs deep inside the lungs to be effective," said James Grotberg, the leader of the team that developed the technology. Grotberg is a professor of biomedical engineering in the U-M College of Engineering and a professor of surgery at the U-M Medical School. A paper on the findings will be published the week of July 13 in Proceedings of the National Academy of Sciences. "This therapy is relatively straightforward in babies but more complex in adults, mostly because adult lungs are much bigger."

A 1997 clinical study that administered the treatment to adults showed promise, cutting the mortality rate among those who received the medication from 40 percent to 20 percent. But two larger studies in 2004 and 2011 showed no improvement in mortality. As a result, the treatment is not used on adults today.

"Everyone walked away from this therapy after the 2011 study failed," Grotberg said. "Adult surfactant replacement therapy has been a great disappointment and puzzlement to the community ever since. But now, we think we've discovered why the later studies didn't improve mortality."

Grotberg's team brought an engineering perspective to the puzzle, building a mathematical computer model that provided a three-dimensional image of exactly how the surfactant medication flowed through the lungs of patients in all three trials. When the simulations were complete, the team quickly saw one detail that set the successful 1997 study apart: a less concentrated version of the medication.

"The medication used in the 1997 study delivered the same dose of medication as the later studies, but it was dissolved in up to four times more liquid," Grotberg said. "The computer simulations showed that this additional liquid helped the medication reach the tiny air sacs in the lungs. So a possible route for success is to go back to the larger volumes used in the successful 1997 study."

The simulations showed that the thickness, or viscosity, of the liquid matters too. This is a critical variable, since different types of surfactant medication can be manufactured with different viscosities. The team believes that doctors may be able to use the modeling technology to optimize the medication for individual patients. They could run personalized simulations of individual patients' lungs, then alter variables like volume, viscosity, patient position and flow rate of the medication to account for different lung sizes and medical conditions.
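In code, that kind of per-patient exploration amounts to a sweep over candidate settings. The sketch below shows only the shape of such a sweep; the `delivered_fraction` function is a made-up placeholder, not the physics of the U-M model, and the candidate values are illustrative:

```python
# Illustrative parameter sweep, the way a clinician might use such a
# model. delivered_fraction is a made-up saturating placeholder, NOT
# the U-M model: more carrier liquid reaches deeper airways, while
# thicker (more viscous) liquid penetrates somewhat less far.
def delivered_fraction(volume_ml, viscosity_cp):
    """Toy estimate of the fraction of medication reaching the air sacs."""
    return (volume_ml / (volume_ml + 100.0)) / (1.0 + 0.01 * viscosity_cp)

# Sweep candidate doses (mL) and viscosities (cP), keep the best combo.
best = max(
    ((v, mu, delivered_fraction(v, mu))
     for v in (70, 140, 280)
     for mu in (10, 30)),
    key=lambda t: t[2],
)
```

A real run would replace the placeholder with the personalized lung simulation and add variables such as patient position and flow rate, as the article describes.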

"We created this model to be simple, so that it can provide results quickly without the need for specialized equipment," said Cheng-Feng Tai, a former postdoctoral student in Grotberg's lab who wrote the initial code for the model. "A physician could run it on a standard desktop PC to create a customized simulation for a critically ill patient in about an hour."

Tai accomplished this by creating a model that provides similar results to traditional fluid dynamics modeling, but requires far less time and processing power.

"Fully three-dimensional fluid dynamics models require a specialized supercomputer and days or weeks of processing time," he said. "But critically ill hospital patients don't have that kind of time. So we streamlined the code to produce a simulated three-dimensional image with much less computing power and processing time."

Grotberg says the modeling technology could be used in other types of research as well, including more precise targeting of other medications in the lungs and projecting results from animal research to humans.
