Research published in April provided “slam dunk” evidence of two prehistoric supernovae exploding about 300 light years from Earth. Now, a follow-up investigation based on supercomputer modeling shows those supernovae likely exposed biology on our planet to a long-lasting gust of cosmic radiation, which also affected the atmosphere.

NASA’s Exobiology and Evolutionary Biology program supported the research, and computation time was provided by the supercomputing environment at Washburn University.

“I was surprised to see as much effect as there was,” said Adrian Melott, professor of physics at the University of Kansas, who co-authored the new paper appearing in The Astrophysical Journal Letters, a peer-reviewed express scientific journal that allows astrophysicists to rapidly publish short notices of significant original research.

“I was expecting there to be very little effect at all,” he said. “The supernovae were pretty far away — more than 300 light years — that’s really not very close.”

According to Melott, each of the two stars that exploded (one 1.7 to 3.2 million years ago, the other 6.5 to 8.7 million years ago) would initially have lit the night sky with blue light brilliant enough to disrupt animals’ sleep patterns for a few weeks.

But their major impact would have come from radiation, which the KU astrophysicist said would have packed doses equivalent to one CT scan per year for every creature inhabiting land or shallower parts of the ocean.

“The big thing turns out to be the cosmic rays,” said Melott. “The really high-energy ones are pretty rare. They get increased by quite a lot here — for a few hundred to thousands of years, by a factor of a few hundred. The high-energy cosmic rays are the ones that can penetrate the atmosphere. They tear up molecules, they can rip electrons off atoms, and that goes on right down to the ground level. Normally that happens only at high altitude.”

Melott’s collaborators on the research are Brian Thomas and Emily Engler of Washburn University, Michael Kachelrieß of the Institutt for fysikk in Norway, Andrew Overholt of MidAmerica Nazarene University and Dimitry Semikoz of the Observatoire de Paris and the Moscow Engineering Physics Institute.

The boosted exposure to cosmic rays from supernovae could have had “substantial effects on the terrestrial atmosphere and biota,” the authors write.

For instance, the research suggested the supernovae might have caused a 20-fold increase in irradiation by muons at ground level on Earth.

“A muon is a cousin of the electron, a couple of hundred times heavier than the electron — they penetrate hundreds of meters of rock,” Melott said. “Normally there are lots of them hitting us on the ground. They mostly just go through us, but because of their large numbers contribute about 1/6 of our normal radiation dose. So if there were 20 times as many, you’re in the ballpark of tripling the radiation dose.”
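Melott’s back-of-the-envelope arithmetic can be checked directly. A minimal sketch using the figures from his quote (muons normally contribute about 1/6 of the dose; their flux rises 20-fold), with the baseline dose set to an arbitrary unit for illustration:

```python
# Dose arithmetic from Melott's quote: only the muon component is scaled up.
def total_dose(baseline, muon_fraction, muon_multiplier):
    """Total radiation dose when the muon contribution alone is multiplied."""
    muon = baseline * muon_fraction   # normal muon contribution (~1/6 of dose)
    other = baseline - muon           # all non-muon sources, unchanged
    return other + muon * muon_multiplier

baseline = 1.0  # normal annual dose, arbitrary units
boosted = total_dose(baseline, muon_fraction=1/6, muon_multiplier=20)
print(boosted / baseline)  # ~4.2x the normal dose, the "ballpark of tripling"
```

The total comes out to 25/6 of the normal dose, i.e. roughly a three- to four-fold increase, consistent with the ballpark Melott gives.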

Melott said the uptick in radiation from muons would have been high enough to boost the mutation rate and frequency of cancer, “but not enormously. Still, if you increased the mutation rate you might speed up evolution.”

Indeed, a minor mass extinction around 2.59 million years ago may be connected in part to boosted cosmic rays that could have helped to cool Earth’s climate. The new research results show that the cosmic rays ionize the Earth’s atmosphere in the troposphere—the lowest level of the atmosphere—to a level eight times higher than normal. This would have caused an increase in cloud-to-ground lightning.

“There was climate change around this time,” said Melott. “Africa dried out and a lot of the forest turned into savannah. Around this time and afterwards, we started having glaciations — ice ages — over and over again, and it’s not clear why that started to happen. It’s controversial, but maybe cosmic rays had something to do with it.”

With a new five-year, $11.5 million grant from the National Institutes of Health, Brown University will expand its research in computational biology and launch a new Center of Biomedical Research Excellence (COBRE), which will support five early career faculty members as they tackle the genomics underlying diseases such as cancer, preeclampsia and severe lung infections.

"There's data and then there's information," said David Rand, director of the new center and chair of the Department of Ecology and Evolutionary Biology (EEB). "Turning data into information you can use for something is what computational biology is all about."

The new center will bring together diverse teams of researchers to generate new insights to advance medicine and health, said David Savitz, Brown's vice president for research.

"Brown scientists and students from a number of departments around the University -- from computer science and applied mathematics to biology, medicine and public health -- have been working collaboratively to understand and realize the benefits of advanced genomics," said Savitz, who is also a professor in the School of Public Health. "This new COBRE will expand those programs to help move Brown to the forefront of this exciting, promising field of research."

Young field, rich history

In the early 1990s, the advent of genome sequencing turned molecular biology into a "big data" science where computation became a crucial tool for life sciences research, Rand said. Recognizing this emergence, Franco Preparata, computer science professor emeritus, guided Brown in launching one of the first undergraduate computational biology majors in the country.

"It was a very Brown thing to start at the undergraduate level," said Rand, who helped to establish the program.

Their vision was to build on that foundation. About a decade later, under the University's Plan for Academic Enrichment, the program grew into the Center for Computational Molecular Biology. CCMB recruited five faculty members and facilitated research and collaboration among faculty in applied mathematics (Professor Charles Lawrence), computer science (Professors Sorin Istrail and Ben Raphael) and the Division of Biology and Medicine (Professors Dan Weinreich and Sohini Ramachandran). In the years since its founding, existing faculty members from departments in BioMed and Public Health have helped increase the breadth of CCMB.

CCMB researchers have developed innovative methods to analyze complex genomics data sets. For example, Sohini Ramachandran, Manning Assistant Professor of Ecology and Evolutionary Biology, has developed analyses to discover how humans have diversified throughout history as they migrated out of Africa to the rest of the world. As part of the COBRE project, she will direct new efforts to uncover population-specific genomic signatures of cancer risk. 

New projects, capabilities

The new COBRE will expand such collaborations and capabilities even further, Rand said, in part by substantially bolstering the University's infrastructure. To date, computational biology researchers have had to develop their own in-house technical capabilities, but the COBRE will build a research core where expert staff will be able to develop and code technical implementations for the center's researchers, freeing valuable time and resources in their own labs. This COBRE data analysis core will be co-directed by Associate Professors Casey Dunn from EEB and Zhijin (Jean) Wu from the Department of Biostatistics in the School of Public Health.

The center will directly fund research of five teams of scientists in which younger faculty members will pursue studies related to human disease under the mentorship of two more senior professors: one with expertise in computing and mathematics and another with expertise in biology and medicine. The COBRE will further support researchers with an administrative core that will fund new seed projects to increase the breadth of users across the University. In addition to staffing in individual labs, the grant will allow the University to hire four new technical staff members to expand the COBRE Computational Core. These resources will contribute to the integration of related programs across the University, including the Data Science Initiative, the Data Science Practice group and the Brown Center for Biomedical Informatics.

Five projects will get underway beginning June 1:

Amanda Jamieson, assistant professor of molecular microbiology and immunology, will study bioinformatics data to identify the genomic and cellular mechanisms underlying tolerance of viral and bacterial coinfection in the lung. Her mentors will be Dr. Jack A. Elias, dean of medicine and biological sciences, and Wu.

Nicola Neretti, assistant professor of biology, will use bioinformatics screening of a fruit fly model to identify new drug targets for extending healthy lifespan. He'll work with mentors Rand and Charles Lawrence, professor of applied mathematics.

Ramachandran will develop new computational and analytical methodologies to identify risk genes for leukemia that differ in incidence across ethnic groups and genders. Her mentors will be Lawrence and Valerie Knopik, associate professor of psychiatry and human behavior at Brown and Rhode Island Hospital.

Alper Uzun, assistant professor of pediatrics (research) will test the hypothesis that variants in a refined set of gene candidates underlie the complex basis of preeclampsia. He'll work with Dr. Jim Padbury, William and Mary Oh - William and Elsa Zopfi Professor of Pediatrics at Brown and Women & Infants Hospital, and William Fairbrother, associate professor of biology.

Shipra Vaishnava, assistant professor of molecular microbiology and immunology, will study the spatial variation in the gut microbiome in response to antimicrobials and immunity pathways that can inform aspects of human irritable bowel disease (IBD). Her mentors will be Professor Richard Bennett from her department and Professor Mitch Sogin from the Marine Biological Laboratory.

Over the last 20 years, Brown and its affiliated hospitals have earned several other COBRE grants for research in areas ranging from human behavior to stem cells to skeletal health. Following the initial five-year award period, COBRE grants can be renewed for an additional two five-year periods, but the final five-year award must focus more on the continued sustainability of the research cores established in the COBRE.

UK-funded scientists have designed a supercomputer model that applies techniques used to analyse social networks to identify new ways of treating cancer, according to research published in PLOS Computational Biology yesterday.

The model analyses the unique behaviours of cancer-causing proteins - spotting what makes them different from normal proteins, and mapping out molecular targets for new potential drugs that could be developed to treat cancer. 

Scientists at The Institute of Cancer Research, London, compared proteins inside cells to members of an enormous social network, mapping the ways they interact. This allowed them to predict which proteins will be most effectively targeted with drugs.

The researchers have made this map publicly available. It could provide drug discovery scientists with a shortcut to finding new drugs for many different types of cancer. 

The team found that there are many molecular pathways that interact to affect the development of cancer. Cancer-causing proteins that have already been successfully targeted with drugs tended to have particular 'social' characteristics that differ from non-cancer proteins - suggesting that previously unexplored cancer proteins with similar characteristics could also make good drug targets.

'Hub-like' proteins which 'communicate' with lots of other proteins - like a super-Facebook user with thousands of friends - were more likely to cause cancer.
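The "hub" idea corresponds to a simple graph measure: a protein's degree, i.e. its number of distinct interaction partners. A minimal sketch of ranking proteins by degree; the toy network below is invented for illustration and is not the researchers' dataset:

```python
# Toy protein-interaction network as a list of pairs (illustrative only).
from collections import Counter

interactions = [
    ("TP53", "MDM2"), ("TP53", "ATM"), ("TP53", "CHEK2"),
    ("TP53", "BRCA1"), ("BRCA1", "ATM"), ("EGFR", "GRB2"),
]

# Degree = number of interaction partners; "hub-like" proteins rank highest.
degree = Counter()
for a, b in interactions:
    degree[a] += 1
    degree[b] += 1

for protein, partners in degree.most_common(3):
    print(protein, partners)  # TP53 tops this toy list with 4 partners
```

Real analyses like the one described use far larger interaction maps and richer "social" features than raw degree, but the ranking principle is the same.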

This information could provide a wide range of potential targets for drug development. 

Study leader Dr Bissan Al-Lazikani, Team Leader in Computational Biology and Cancer Research UK-funded scientist at The Institute of Cancer Research, London, said: "Our study is the first to identify the rules of social behaviour of cancer proteins and use it to predict new targets for potential cancer drugs. It shows that cancer drug targets behave very differently from normal proteins and often have a complex web of social interactions, like a Facebook super-user.

"Finding new targets is one of the most important steps in drug discovery. But it can be a lengthy, expensive process. The map that we've made will help researchers design better new drugs, more quickly, saving time and money. It also sheds light on how resistance to treatments may occur, and in just a few years could help doctors choose the best drug combinations to suit individual patients." 

Nell Barrie, Cancer Research UK's senior science information manager, said: "Thanks to research, cancer survival has doubled in the last 40 years. But we urgently need to develop better, more effective treatments so that in the future no one has to fear a cancer diagnosis. Research like this, that's made publicly available, will help speed up crucial advances in drug discovery to save more lives from cancer."


Supercomputer program scrambles genetic codes for production of repetitive DNA and synthetic molecules

Researchers have created a supercomputer program that will open a challenging field in synthetic biology to the entire world.

In the past decade, billions of dollars have been spent on technology that can quickly and inexpensively read and write DNA to synthesize and manipulate polypeptides and proteins.

That technology, however, stumbles when it encounters a repetitive genetic recipe. This includes many natural and synthetic materials used for a range of applications, from biological adhesives to synthetic silk. Like someone struggling with an "impossible" jigsaw puzzle, synthesizers have trouble determining which genetic piece goes where when many of the building blocks look the same.

CAPTION After swapping pieces of genetic code in repeating protein recipes, researchers were able to scramble them enough to be commercially synthesized. The dark bands are indicators of 18 repeating polypeptides successfully being fabricated. CREDIT Nicholas Tang, Duke University

Scientists from Duke University have removed this hurdle by developing a freely available supercomputer program based on the "traveling salesman" mathematics problem. Synthetic biologists can now find the least-repetitive genetic code to build the molecule they want to study. The researchers say their program will allow those with limited resources or expertise to easily explore synthetic biomaterials that were once available to only a small fraction of the field.

The results appear in Nature Materials, January 4, 2016. 

"Synthesizing and working with highly repetitive polypeptides is a very challenging and tedious process, which has long been a barrier to entering the field," said Ashutosh Chilkoti, the Theo Pilkington Professor of Biomedical Engineering and chair of the biomedical engineering department at Duke. "But with the help of our new tool, what used to take researchers months of work can now be ordered online by anyone for about $100 and the genes received in a few weeks, making repetitive polypeptides much easier to study." 

Every protein and polypeptide is a sequence of two or more amino acids. The genetic recipe for an individual amino acid -- called a codon -- is three letters of DNA long. But nature has 61 codons that produce just 20 amino acids, meaning several different codons can yield the same amino acid.

Because synthetic biologists can get the same amino acid from multiple codons, they can avoid troublesome DNA repeats by swapping in different codons that achieve the same effect. The challenge is finding the least repetitive genetic code that still makes the desired polypeptide or protein.
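The scale of that choice is easy to see in a tiny example: serine alone has six synonymous codons in the standard genetic code, so even a short repetitive peptide has hundreds of possible DNA encodings to search among. A minimal sketch (the codon table excerpt is standard; the four-residue peptide is invented for illustration):

```python
import itertools

# Synonymous codons for two amino acids (excerpt of the standard genetic code).
CODONS = {
    "S": ["TCT", "TCC", "TCA", "TCG", "AGT", "AGC"],  # serine: 6 codons
    "G": ["GGT", "GGC", "GGA", "GGG"],                # glycine: 4 codons
}

peptide = "SGSG"  # a short repetitive peptide, for illustration

# Enumerate every DNA encoding of the peptide; the space grows multiplicatively.
encodings = ["".join(c) for c in itertools.product(*(CODONS[aa] for aa in peptide))]
print(len(encodings))  # 6 * 4 * 6 * 4 = 576 possible DNA sequences
```

For realistic repetitive polypeptides the space is astronomically larger, which is why an optimization algorithm rather than enumeration is needed to find the least repetitive encoding.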

"I always thought there was a potential solution, that there must be a way of mathematically figuring it out," said Chilkoti. "I had offered this problem to graduate students before, but nobody wanted to tackle it because it requires a particular combination of high-level math, computer science and molecular biology. But Nicholas Tang was the right guy."

After studying the problem in detail, Nicholas Tang, a doctoral candidate in Chilkoti's laboratory, discovered that the solution is a version of the "traveling salesman" mathematics problem. The classic question is, given a map with a set of cities to visit, what is the shortest route possible that hits every city exactly once before returning to the original city?
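For small maps, the classic question can be answered by brute force, which makes the problem statement concrete. A minimal sketch with invented city coordinates (real instances, like the codon-selection problem, are far too large for this approach):

```python
import itertools
import math

# Invented city coordinates, for illustration only.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 4), "D": (6, 1)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(order):
    # Close the loop: the salesman returns to the starting city.
    stops = list(order) + [order[0]]
    return sum(dist(cities[a], cities[b]) for a, b in zip(stops, stops[1:]))

# Fix the start city and try every ordering of the rest (feasible only for small n).
start = "A"
rest = [c for c in cities if c != start]
best = min(((start,) + p for p in itertools.permutations(rest)), key=tour_length)
print(best, round(tour_length(best), 2))
```

Brute force checks (n-1)! orderings, so practical solvers, including, per the paper, Tang's codon-selection algorithm, rely on smarter formulations rather than exhaustive search.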

After writing the algorithm, Tang put it to the test. He created a laundry list of 19 popular repetitive polypeptides that are currently being studied in laboratories around the world. After passing the codes through the program, he sent them for synthesis by commercial biotechnology outfits -- a task that would be impossible for any one of the original codes.

Without the help of commercial technology, researchers spend months building the DNA that cells use to produce the proteins being studied. It's a tedious, repetitive task -- not the most attractive prospect to a young graduate student. But if the new program worked, the process could be reduced to a few weeks of waiting for machines to deliver the goods instead.

When Tang received the DNA, each sequence was introduced into living cells, which produced the desired polypeptides as hoped.

"He made 19 different polymers from the field in one shot," said Chilkoti. "What probably took tens of researchers years to create, he was able to reproduce in a single paper in a matter of weeks."

Chilkoti and Tang are now working to make the new supercomputer program available online for anybody to use through a simple web form, opening a new area of synthetic biology for all to explore.

"This advance really democratizes the field of synthetic biology and levels the playing field," said Tang. "Before, you had to have a lot of expertise and patience to work with repetitive sequences, but now anyone can just order them online. We think this could really break open the bottleneck that has held the field back and hopefully recruit more people into the field."

TU Dresden engineers develop simulations of the human heart

The ceaseless beats of the human heart, which last from the early stages of the embryo until death, circulate blood through the body, providing the crucial substrates that maintain the vitality of every cell. Hence, any disorder in this circulatory system can reduce quality of life or cause stroke or even sudden death. According to the World Health Organization, heart disease and stroke accounted for 31% of all global deaths in 2012.

As scientists gain deeper insight into the working mechanisms of the heart, it will become possible to develop more efficacious treatment techniques, thereby reducing the mortality and economic burden of heart disease. However, understanding how the heart functions is a demanding task due to its complexity and, in particular, the difficulties encountered in in vivo experiments. Moreover, each patient's heart possesses its own characteristics. In this context, supercomputational modelling can deepen our comprehension and serve as a milestone toward more successful patient-specific therapies.

Cardiac resynchronization therapy (CRT) is one of the most frequently used treatments for patients with reduced cardiac pump function caused by dyssynchronous contraction of the ventricles. A successful CRT depends on factors such as patient selection, the position of the pacemaker leads, and the timing and magnitude of pacing. However, no single pacing configuration is appropriate for all patients, and one-third of patients do not respond to CRT at all. At this point, computer simulations might predict CRT feasibility, thereby supporting cardiologists in devising an optimal treatment strategy.

The current project (KA 1163/18-2), granted by the DFG, is concerned with improving the numerical schemes established earlier in the project (KA 1163/18-1) with novel tools, leading to a comprehensive simulation of the heart. Subsequently, the predictive capacity of the developed framework will be assessed through computer analysis of virtual personalized heart models. To this end, patient-specific heart models suitable for finite element simulations will be created. The established numerical tools will be validated with clinical data obtained from humans with healthy and unhealthy cardiac function. Furthermore, the capacity of the framework will be assessed against data from patients who have already undergone CRT. Finally, we will try to gain insight into CRT and optimize the pacemaker setup, with the aim of achieving an optimal CRT in supercomputer simulations.

The project "Computational Electromechanics of the Heart: Development of Predictive Simulation Tools for Patient Specific Analysis" is funded by the DFG with 300,000 euros from 2016 to 2018. The Institute for Structural Analysis at TU Dresden (Univ.-Prof. Dr.-Ing. habil. M. Kaliske) cooperates closely with the Heart Centre at TU Dresden (Univ.-Prof. Dr. med. R. H. Strasser).
