

Model focuses on triggers for seasonal ice breakup that could improve accuracy of sea-level rise predictions

Projections of how much the melting of ice sheets will contribute to sea-level rise can vary by several meters based on the rate of iceberg calving at the edges of those ice sheets. To provide climate scientists with models that make more accurate forecasts, a postdoctoral researcher at Caltech has created a computer simulation of one of the key processes controlling glacial calving.

Glaciers are moving slabs of ice that slowly grind downhill. Where they end in the sea, chunks break off, forming icebergs in a process known as calving. When temperatures plummet in the winter, those icebergs can freeze together and create a traffic jam that prevents further icebergs from breaking off from the glacier.

During the winter, the glacier loses much less ice to the sea. The eventual spring breakup of what is known as the mélange—that frozen iceberg logjam—occurs suddenly, and is the focus of research by Caltech's Alexander Robel.

"I developed a [super]computer model that simulates how the first iceberg calving of the warm season creates a shock wave that travels through the jammed mélange, breaking it up," says Robel, a National Oceanic and Atmospheric Administration (NOAA) Postdoctoral Scholar and a Stanback Postdoctoral Scholar at Caltech. His new model was featured in Nature Communications on February 28.

The mélange is a frozen granular material, so Robel adapted an open-source computer simulation called the Discrete-Element Bonded-Particle Sea Ice Model to show how icebergs freeze together in the winter and then transmit the shock of the first iceberg calving in the summer.
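To make the idea concrete, here is a minimal, hypothetical sketch of the bonded-particle principle: a one-dimensional chain of particles joined by elastic bonds, where an impulse at one end (standing in for the first calving event) sends a stress wave down the chain and breaks any bond whose strain exceeds a threshold. This is not Robel's actual Discrete-Element Bonded-Particle Sea Ice Model, and every parameter below is invented for illustration.

```python
# Illustrative 1-D bonded-particle chain (not the actual DEM sea-ice model):
# a calving-like impulse hits one end and the resulting stress wave
# breaks bonds whose strain exceeds a threshold. All numbers are made up.
import numpy as np

n = 50                      # particles (icebergs frozen together)
m = 1.0                     # particle mass
k = 1e3                     # bond stiffness
dx0 = 1.0                   # equilibrium spacing
break_strain = 0.02         # bond fails beyond this strain
dt = 1e-3
steps = 4000

x = np.arange(n) * dx0      # positions
v = np.zeros(n)
v[0] = 5.0                  # impulse from the first calving event
intact = np.ones(n - 1, dtype=bool)   # bond i connects particles i and i+1

for _ in range(steps):
    strain = (x[1:] - x[:-1] - dx0) / dx0
    intact &= np.abs(strain) < break_strain        # broken bonds stay broken
    f_bond = k * strain * dx0 * intact             # linear elastic bond force
    f = np.zeros(n)
    f[:-1] += f_bond                               # force on the left particle
    f[1:] -= f_bond                                # reaction on the right particle
    v += f / m * dt
    x += v * dt

print("bonds broken:", int((~intact).sum()), "of", n - 1)
```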

That first calving is made possible by the thinning of sea ice in warmer water, which reduces the ability of the mélange to act as a bulwark against the glacier.

Robel tailored his modeled glaciers to resemble fjords in Greenland, narrow channels of water that are prone to trapping mélange. He was able to show that the threshold at which spring sea-ice breakup is likely to occur depends in part on the thickness of sea ice within the mélange, but also on the shape of the channel within which the mélange is trapped.

Robel, who is a researcher in Caltech's Division of Geological and Planetary Sciences, home to the Seismological Laboratory, says his work was inspired in part by seismological studies of the way fractures propagate through elastic materials—drawing a connection between earthquakes and iceberg calving.

Robel's paper is titled "Thinning Sea Ice Weakens Buttressing Force of Iceberg Mélange and Promotes Calving." His research was supported by NOAA and the Foster and Coco Stanback Postdoctoral Fellowship in Global and Environmental Science.

This animation shows how a single calving event sends out a shockwave that triggers a breakup of the frozen mélange. Credit: Alexander Robel

UCSF-led study may lead to precision medicine treatment for traumatic brain injury

Scientists have used a unique supercomputational technique that sifts through big data to identify a subset of concussion patients with normal brain scans, who may deteriorate months after diagnosis and develop confusion, personality changes and differences in vision and hearing, as well as post-traumatic stress disorder. This finding, which is corroborated by the identification of molecular biomarkers, is paving the way to a precision medicine approach to the diagnosis and treatment of patients with traumatic brain injury.

Investigators headed by scientists at UC San Francisco and its partner institution Zuckerberg San Francisco General Hospital and Trauma Center (ZSFG) analyzed an unprecedented array of data, using a machine learning tool called topological data analysis (TDA), which "visualizes" diverse datasets across multiple scales, a technique that has never before been used to study traumatic brain injury.

TDA, which employs mathematics derived from topology, draws on the philosophy that all data has an underlying shape. It creates a summary or compressed representation of all the data points using algorithms that map patient data into a multidimensional space. The new research relied on a TDA platform developed by Ayasdi, an advanced analytics company based in Palo Alto, Calif.
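As a rough illustration of the idea (not the Ayasdi platform used in the study), a Mapper-style TDA construction projects the data onto a low-dimensional "lens", covers that projection with overlapping intervals, clusters the patients falling in each interval, and links clusters that share patients. The sketch below uses synthetic data and standard scikit-learn components, all of which are stand-ins.

```python
# A minimal Mapper-style sketch of topological data analysis (TDA):
# project patients onto a 1-D "lens", cover it with overlapping intervals,
# cluster within each interval, and connect clusters that share patients.
# This is an illustration, not the Ayasdi platform used in the study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # stand-in for patient variables

lens = PCA(n_components=1).fit_transform(X).ravel()   # 1-D projection

n_intervals, overlap = 8, 0.3
lo, hi = lens.min(), lens.max()
width = (hi - lo) / n_intervals

nodes = []                                # each node: a set of patient indices
for i in range(n_intervals):
    a = lo + i * width - overlap * width
    b = lo + (i + 1) * width + overlap * width
    idx = np.where((lens >= a) & (lens <= b))[0]
    if len(idx) == 0:
        continue
    labels = DBSCAN(eps=4.0, min_samples=3).fit_predict(X[idx])
    for lab in set(labels) - {-1}:
        nodes.append(set(idx[labels == lab]))

edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
         if nodes[i] & nodes[j]]          # clusters sharing patients are linked

print(f"{len(nodes)} nodes, {len(edges)} edges in the Mapper graph")
```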

"TDA is a type of machine intelligence that provides a way to easily visualize patient differences across the full spectrum of traumatic brain injury from concussion to coma," said senior co-author, Adam Ferguson, PhD, associate professor in the Department of Neurological Surgery and a member of the UCSF Weill Institute for Neurosciences. "This has potential to transform diagnosis and predict outcome by providing a new level of precision."

The study, publishing in PLOS ONE on March 1, 2017, is part of a government-funded multisite initiative called TRACK-TBI, Transforming Research and Clinical Knowledge in Traumatic Brain Injury, which was established to identify new diagnostic and prognostic markers, and to refine outcome assessments.

Diversity of Secondary Injuries Hampers Progress

Traumatic brain injury results in approximately 52,000 deaths, 257,000 hospitalizations and 2.2 million emergency department visits in the United States annually, according to the Centers for Disease Control and Prevention's Injury Center. These injuries can lead to widespread lesions throughout the brain's white matter, as well as a "cascade of secondary injury mechanisms that evolve over time." The heterogeneity of the manifestations of these secondary injuries is one significant obstacle that thwarts the development of new treatments, the authors note in their paper.

"Traumatic brain injury lags 40 to 50 years behind cancer and heart disease in terms of progress and understanding of the actual disease process and its potential aftermath," said co-senior author Geoffrey Manley, MD, PhD, vice chair of the Department of Neurological Surgery at UCSF and chief of neurosurgery at ZFGH. "More than 30 clinical trials of potential traumatic brain injury treatments have failed, and not a single drug has been approved."

In the study, so-called common data elements were collected from 586 patients with acute traumatic brain injury from trauma centers at ZSFG, University of Pittsburgh Medical Center and University Medical Center Brackenridge in Austin, Texas. These elements comprised 944 variables, including demographics, clinical presentation, imaging and psychological testing. The variables were edited by "data curators," to build a robust multivariate clinical dataset that could be harnessed by biomedical data scientists.
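The curation step the authors describe is, in spirit, the familiar work of turning a wide table of mixed clinical variables into a clean numeric matrix that downstream analyses such as TDA can consume. A hedged sketch of that kind of step, with invented column names and values, might look like this:

```python
# Hedged sketch of a data-curation step of the kind described above:
# harmonize a table of mixed clinical variables into a numeric matrix.
# Column names and values are invented, not the study's actual variables.
import pandas as pd

raw = pd.DataFrame({
    "age": [34, 61, None, 25],
    "gcs_score": [15, 13, 14, 15],           # Glasgow Coma Scale at admission
    "ct_finding": ["normal", "lesion", "normal", None],
    "sex": ["F", "M", "M", "F"],
})

curated = raw.copy()
curated["age"] = curated["age"].fillna(curated["age"].median())   # impute age
curated["ct_finding"] = curated["ct_finding"].fillna("unknown")
curated = pd.get_dummies(curated, columns=["ct_finding", "sex"])  # one-hot encode

print(curated.head())
```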

Mapping of outcome using TDA revealed that concussion, or mild traumatic brain injury, could be stratified into multiple subgroups with diverse prognoses. Among them was a large group of patients who, despite normal brain scans, demonstrated poor recovery and a tendency to get worse three to six months after the injury. These patients were likely to suffer from post-traumatic stress disorder.

Precision Approach Allows for Proactive Treatment

"These are patients with clear scans that would have been discharged from the hospital with nothing more than a recommendation to take over-the-counter medications," said Ferguson. "By recognizing these patients as a distinct subgroup, clinicians may be able to anticipate future symptoms and treat them proactively."

According to first author Jessica Nielson, PhD, also of the UCSF Department of Neurological Surgery and the Weill Institute, the most challenging symptoms to treat are those that are not apparent immediately after an injury. "Eventually we hope to identify treatment targets early after injury to prevent this gradual decline and boost our ability to intervene and improve outcome for patients," she said.

In addition to variants in the gene PARP1, researchers found other biomarkers in patients' blood samples that were predictive of poor recovery, including variants in ANKK1 and COMT. These genes are associated with signaling by the neurotransmitter dopamine and may provide critical clues to recovery and drug responsiveness. A future goal of TRACK-TBI is to contribute to the design of clinical trials to develop therapeutic drugs for traumatic brain injury - perhaps even tailored to a patient's blood-based biomarkers.

"TDA improves upon traditional outcome prediction approaches for patients with traumatic brain injury," said Nielson, who is also affiliated with ZSFG. "By leveraging the full information provided by all outcomes, TDA has the potential to improve diagnosis and therapeutic targeting."

TDA technology is already being used by banks for fraud detection and in military applications, but it has not reached the mainstream in biomedicine, according to the authors.

"Potentially this could change very soon," said Ferguson. "Our proof-of-concept study demonstrates the maturity of the technology and its potential use in medical decision-making when coupled with high-quality data from electronic medical records."

Engineers from the University of Luxembourg are working together with scientists from the WSL Institute for Snow and Avalanche Research SLF in Switzerland to better analyse the mechanical properties of snow. The project's goal is to develop a supercomputer model that can help solve typical snow-related engineering problems. By predicting the behaviour of snow, the model could, for example, be used to anticipate avalanches, determine the snow load on buildings or calculate the traction of vehicles on snow-covered surfaces.

Having previously studied various flood scenarios, developed an innovative mathematical method to simulate the flow of debris and predicted its mechanical impact on buildings and structures, Bernhard Peters, professor of Thermo- and Fluiddynamics and head of the LuXDEM research team at the Faculty of Science, Technology and Communication (FSTC), has extended his research activities to snow simulations.

Prof. Peters and his research team are now developing a model that calculates the properties and behaviour of snow masses under high and low strain rates based on the structure of microscopic snow particles. “Such a model has several advantages compared to traditional snow models. First, our model can directly factor in microstructural information. Second, it includes contacts and bonding between the snow grains. Third, it can explicitly account for the large displacements and rearrangement of the snow grains during deformation. Hence, this particle model explicitly includes all the relevant physical micro-scale processes”, explains Bernhard Peters.
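A toy example of the "contacts and bonding" ingredient might look like the following: a single force law for a pair of snow grains that resists compression on contact and, while the grains are still sintered together, resists tension up to a bond strength. This is purely illustrative and not the model under development; all constants are made up.

```python
# Toy discrete-element force law for two bonded snow grains (illustrative only,
# not the actual model being developed): a linear contact spring in compression
# plus a cohesive ice bond in tension that fails above a strength limit.
K_CONTACT = 5e4      # N/m, contact stiffness (made-up value)
K_BOND = 2e4         # N/m, ice-bond stiffness (made-up value)
BOND_STRENGTH = 0.5  # N, tensile force at which the bond breaks

def grain_force(gap, bonded):
    """Normal force between two grains; gap < 0 means overlap (compression).
    Returns (force, still_bonded); positive force is repulsive."""
    if gap < 0:                       # grains pressed together
        return -K_CONTACT * gap, bonded
    if bonded:                        # grains pulled apart but still sintered
        f = K_BOND * gap
        if f > BOND_STRENGTH:         # bond fails under too much tension
            return 0.0, False
        return -f, True               # attractive force resists separation
    return 0.0, False                 # unbonded and separated: no force

for gap in (-1e-5, 1e-5, 5e-5):
    force, still_bonded = grain_force(gap, bonded=True)
    print(f"gap={gap:+.0e} m -> force={force:+.2f} N, bonded={still_bonded}")
```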

In order to validate this new model, Professor Peters involved field experts from one of the world-renowned institutes in snow research, namely the WSL Institute for Snow and Avalanche Research SLF, based in Davos Dorf, Switzerland. “The project combines complementary expertise of the two involved research groups with Luxembourg being an expert in discrete element modelling and Switzerland in the tomographic investigation and experimental measurement of snow characteristics”, adds Prof. Peters. 

The bi-national project was officially launched at a kick-off meeting on 8 December 2016 and is funded by the Fonds National de la Recherche Luxembourg (FNR) and the Swiss National Science Foundation (SNSF) for a period of three years.

Michelle V. Evans

Zika virus could be transmitted by more mosquito species than those currently known, according to a new predictive model created by ecologists at the University of Georgia and the Cary Institute of Ecosystem Studies. Their findings, published in the journal eLife, offer a list of 26 additional potential candidate species--including seven that occur in the continental United States--that the authors suggest should be the first priority for further research.

"The biggest take home message is that these are the species that we need to prioritize," said lead author Michelle V. Evans, a UGA doctoral student in ecology and conservation. "Especially as we're in the slower part of the mosquito season, now is the time to catch up so we're prepared for the summer."

Targeting Zika's potential vectors--species that can transmit the virus from one host to another--is an urgent need, given its explosive spread and the devastating health effects associated with it. It's also time-consuming and expensive, requiring researchers to collect mosquitoes in affected areas, test them to see which ones are carrying the virus, and conduct laboratory studies.

The new model could streamline the initial step of pinpointing Zika vectors.

"What we've done is to draw up a list of potential vector candidates based on the associations with viruses that they've had in the past as well as other traits that are specific to that species," said paper co-author Courtney C. Murdock, an assistant professor in the UGA School of Veterinary Medicine and Odum School of Ecology. "That allows us to have a predictive framework to effectively get a list of candidate species without having to search blindly."

The researchers developed their model using machine learning, a form of artificial intelligence that is particularly useful for finding patterns in large, complicated data sets. It builds on work done by co-author Barbara A. Han of the Cary Institute, who has used similar methods to predict bat and rodent reservoirs of disease based on life history traits.

Data used in the model consisted of information about the traits of flaviviruses--the family that includes Zika, yellow fever and dengue--and all the mosquito species that have ever been associated with them. For mosquito species, these included general traits like subgenus and geographic distribution as well as traits relevant to the ability of each species to transmit disease, such as proximity to human populations, whether they typically bite humans and how many different viruses they are known to transmit.

For viruses, traits included how many different mosquito species they infect, whether they have ever infected humans and the severity of the diseases they cause.

Analyzing known mosquito-virus pairs, the researchers found that certain traits were strong predictors of whether a linkage would form. The most important of these for mosquitoes were the subgenus, the continents it occurred on and the number of viruses it was able to transmit. For viruses, the most important trait was the number of mosquito species able to act as a vector.

Based on what they learned, they used the model to test the combination of Zika virus with all the mosquito species known to transmit at least one flavivirus. The model found 35 predicted Zika vectors, including 26 previously unsuspected possibilities.
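In the same spirit (though not the authors' actual code or variables), a trait-based link-prediction model can be sketched with a standard gradient-boosted classifier: train on known mosquito-virus pairs described by traits, inspect which traits matter most, and score untested pairs. The data and feature names below are synthetic stand-ins.

```python
# Sketch of a trait-based link-prediction model in the spirit of the study:
# train a classifier on known mosquito-virus pairs described by traits, then
# rank trait importance and score an unobserved pair. The data is synthetic
# and the feature names are illustrative, not the paper's actual variables.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n_pairs = 300
pairs = pd.DataFrame({
    "mosquito_subgenus":   rng.integers(0, 5, n_pairs),   # encoded subgenus
    "continents_occupied": rng.integers(1, 7, n_pairs),
    "viruses_transmitted": rng.integers(0, 12, n_pairs),
    "virus_known_vectors": rng.integers(1, 20, n_pairs),
    "bites_humans":        rng.integers(0, 2, n_pairs),
})
# Synthetic label: whether the mosquito-virus pair forms a competent link
link = ((pairs["viruses_transmitted"] + pairs["virus_known_vectors"] > 15)
        & (pairs["bites_humans"] == 1)).astype(int)

model = GradientBoostingClassifier().fit(pairs, link)

# Which traits drive the predictions?
for name, imp in sorted(zip(pairs.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:22s} {imp:.2f}")

# Score a hypothetical untested mosquito-virus pair
candidate = pd.DataFrame([{"mosquito_subgenus": 2, "continents_occupied": 3,
                           "viruses_transmitted": 4, "virus_known_vectors": 10,
                           "bites_humans": 1}])
print("predicted link probability:", model.predict_proba(candidate)[0, 1])
```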

Seven of those species occur in the continental U.S., with ranges that in some cases differ from those of the known vectors. Evans and Murdock cautioned strongly against assuming that this means that Zika will spread to all those areas.

"We're really solely looking at vector competence, which is only one small part of disease risk," Evans said. "It's one factor out of many, and not even the most important one. I want to stress that all of these are just predictions that need to be validated by empirical work. We are suggesting that people who are doing that work should focus on these species first," she said.

"Ecologists have long known that everything is connected to everything else, and are pretty good, I think, at sifting out where that matters from where it doesn't," said senior author John M. Drake, a professor in the Odum School and director of the UGA Center for the Ecology of Infectious Diseases. "This work highlights that ecological way of thinking and why it's important in understanding infectious diseases."

Nextstrain wins international competition by making it easier to share, analyze genetic sequencing data of spreading viruses

After three rounds of competition -- one of which involved a public vote -- a software tool developed by researchers at Fred Hutchinson Cancer Research Center and the University of Basel to track Zika, Ebola and other viral disease outbreaks in real time has won the first-ever international Open Science Prize.

Fred Hutch evolutionary biologist Dr. Trevor Bedford and physicist and computational biologist Dr. Richard Neher of the Biozentrum Center for Molecular Life Sciences in Basel, Switzerland, designed a prototype called nextstrain to analyze and track genetic mutations during the Ebola and Zika outbreaks. Using the platform Bedford and Neher built, anyone can download the source code from the public-access code-sharing site GitHub, run genetic sequencing data for the outbreak they are following through the pipeline and build a web page showing a phylogenetic tree, or genetic history of the outbreak, in a few minutes, Bedford said.
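The tree-building step at the heart of such a pipeline can be sketched with standard tools, for example Biopython's distance-based tree construction on a toy alignment. This is only an illustration of the concept; the real nextstrain pipeline does much more (subsampling, alignment, temporal inference and interactive web export), and the sequences below are invented.

```python
# A minimal sketch of the tree-building step behind a nextstrain-style view,
# using Biopython on a few toy aligned sequences. The real nextstrain pipeline
# does far more; the sequence data here is invented for illustration.
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = MultipleSeqAlignment([
    SeqRecord(Seq("ACGTACGTACGT"), id="isolate_1"),
    SeqRecord(Seq("ACGTACGAACGT"), id="isolate_2"),
    SeqRecord(Seq("ACGAACGAACGT"), id="isolate_3"),
    SeqRecord(Seq("TCGAACGAACTT"), id="isolate_4"),
])

dm = DistanceCalculator("identity").get_distance(alignment)  # pairwise distances
tree = DistanceTreeConstructor().nj(dm)                      # neighbor-joining tree
Phylo.draw_ascii(tree)                                       # quick text rendering
```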

He and Neher envision the tool as adaptable for any virus -- a goal to which they will apply the $230,000 prize announced today by its three sponsors, the U.S. National Institutes of Health, the British-based charitable foundation Wellcome Trust and the U.S.-based Howard Hughes Medical Institute.

"Everyone is doing sequencing, but most people aren't able to analyze their sequences as well or as quickly as they might want to," Bedford said. "We're trying to fill in this gap so that the World Health Organization or the U.S. Centers for Disease Control and Prevention -- or whoever -- can have better analysis tools to do what they do. We're hoping that will get our software in the hands of a lot of people."

For now, the tool is easy to use for Zika and Ebola. (The researchers also built a separate platform called nextflu for influenza.) But adapting the platform for other pathogens still involves a fair amount of work and technical skill, so Bedford is working with a web developer to "get that bar down so it will be easier to have this built out for other things."

By lowering the technical bar, he and Neher hope to nudge researchers to overcome another obstacle: a longstanding reluctance to share data. That is also a goal of the Open Science Prize.

Sharing is caring

"Open science" supporters believe, as Bedford and Neher do, that sharing preliminary information quickly speeds discoveries, including those that could improve human health, and is therefore good for both science and society. The Open Science Prize competition aimed to stimulate the development of ground-breaking tools and platforms to make it easier for researchers and the wider public to share and find publications, datasets, code and other research outputs as well as to "generate excitement, momentum and further investment" in doing so, according to the prize sponsors.

Nextstrain "is an exemplar of open science and will have a great impact on public health by tracking viral pathogens," said Robert Kiley, who leads Wellcome's work on open research, in a statement. All of the Open Science Prize entrants "demonstrated what's possible when data and code are made open for all," he said.

Bedford and Neher were among six teams of finalists chosen in May from 96 entries representing 450 innovators and 45 countries. In January, a public vote (3,730 votes from 76 countries, to be precise) narrowed the field to three. Bedford praised both runner-up teams as doing "really fantastic work." MyGene2 is designed to help people with rare diseases share health and genetic information with other families, clinicians and researchers worldwide. OpenTrialsFDA is aimed at making it easier to find information from clinical trials that was reported to the federal Food and Drug Administration but never published in academic journals.

For all of its cutting-edge technology, nextstrain, the winning project, belongs to a long tradition of using data visualization to understand -- and intervene in -- outbreaks, dating back to the 1854 London cholera outbreak. At the time, cholera, an infectious and often fatal intestinal disease, was thought to be spread by "miasma" or bad-smelling air. Dr. John Snow, the "father of modern epidemiology," the study of the causes and patterns of disease, suspected the disease was spread by contaminated water. He drew a map of public well sites and cholera cases and noted that cases clustered around a particular well.

The map, Bedford said, made an intervention -- removing the handle of the Broad Street water pump -- obvious.

"What we're doing with nextstrain is meant to be in this tradition," he said. "Right now it's more of a 'now-cast,' but we really want to be doing a real-time forecast of what's going on with an epidemic."

Evolutionary and computational biologists like Bedford and Neher are in the open science movement's vanguard. One reason is that their fields are the ones most concerned with outbreaks, where waiting to publish can have deadly consequences.

Real-time tracking of genetic mutations during disease outbreaks helps scientists discern what makes viruses so severe and inform public health efforts to contain them. Being able to do so depends on researchers openly sharing the genetic sequencing data, something that not all scientists embrace in a competitive world where researchers rush to publish in prestigious journals and stake claims to discoveries.

Lessons from Ebola

The seed for nextstrain sprouted while Bedford was doing postdoctoral research at the University of Michigan. He had published a paper on flu migration using data up to 2010. He found himself thinking what a pity it was that the analysis couldn't be updated as new data came out. But the fact that a paper had already been published was a disincentive for anyone to write a new paper with just a small update to the data.

From that frustration, nextflu was born. And nextflu led to nextstrain.

The devastating 2013-2016 Ebola epidemic in West Africa lent the project new urgency. Relatively early in the outbreak, researchers sequenced Ebola genomes from patients and immediately uploaded them to the public database GenBank, leading to a surge of collaboration from experts in diverse fields. The collection of shared, publicly available data helped answer critically important questions as the epidemic was unfolding. It added to the confirmation that the outbreak was being sustained by human-to-human contact, not contact with bats or other animal carriers, suggested probable transmission routes and revealed where and how fast mutations in the virus were occurring -- all information crucial to both public health and medical interventions.

Even when data is shared, speed is everything in responding to outbreaks, so any tool that speeds data analysis contributes to the effort.

But despite the precedent set by the response to the Ebola epidemic, fewer researchers have shared Zika virus genome sequences from the more recent crisis in Brazil, Central America and the Caribbean, the researchers said.

"I'm not seeing the same thing with Zika," said Dr. Gytis Dudas, a postdoctoral fellow in Bedford's laboratory who worked on many of the Ebola analyses. In part, Dudas said, the Zika virus is more difficult to sequence than Ebola, making researchers more likely to guard their rare sequences for publications.

And that, Bedford said, is "a tragedy," even as he understands that academic careers depend on publishing.

"The idea is that this nextstrain platform would provide some neutral ground with which to share data," said Bedford. "We're not trying to make a flashy paper. We just want [the data] to be on the website so people can look at the latest thing and do analyses that aren't stymied by publication practices. This kind of simple sequence sharing during outbreaks is something that if you could just push the [scientific] community a little bit, you could have some real-world impact in helping respond to epidemics."

Professor Awais Rashid

A comprehensive 'Body of Knowledge' to inform and underpin education and professional training for the cyber security sector is set to be created through a major international programme of work.

An increasing political, societal and economic concern, cyber attacks cost global economies an estimated $400 billion (according to Lloyd's). The scale of the issue was further highlighted recently when the Bulletin of the Atomic Scientists factored cyber attacks into their decision to move the symbolic Doomsday Clock closer to midnight.

However, there is a long-recognised skills gap within the cyber security sector, an issue that experts agree is compounded by a fragmented and incoherent foundational knowledge for this relatively immature field.

Mature scientific disciplines, such as mathematics, physics, chemistry and biology, have long-established foundational knowledge and clear learning steps from pupils studying GCSEs at secondary school to undergraduate degrees at university, and beyond.

Leading efforts to bring cyber security into line with the more established sciences is a project led by Lancaster University's Professor Awais Rashid, along with other leading cyber security experts - including Professor Andrew Martin, Professor George Danezis, Dr Emil Lupu, and Dr Howard Chivers - that will pull together knowledge from major internationally-recognised experts to form a Cyber Security Body of Knowledge.

"The creation of a Cyber Security Body of Knowledge is an essential step towards creating the necessary foundational knowledge to inform the education and development of future cyber security professionals, and the discipline as a whole," said Professor Rashid, Director of Lancaster University's Security Lancaster Research Centre. "The aim is for it to become the Bible for the cyber security field and a resource that the whole community can use."

The Cyber Security Body of Knowledge would be made open-source and would benefit educators, schools and professional development programmes. In addition to bringing together cutting-edge knowledge, it will also make recommendations about which elements should form content for curricula at different levels.

There will be opportunities for both academia and cyber security professionals to participate in the two and a half-year development process by acting as authors or participating in the community of reviewers to scrutinise the body of knowledge. The process will also be informed by an Industrial Advisory Board. "The Cyber Security Body of Knowledge will be a resource for the community by the community", said Professor Rashid.

This project will be funded through the National Cyber Security Programme - a £1.9 billion transformational investment to provide the UK with the next generation of cyber security.

Minister of State for Digital and Culture Matt Hancock said: "We have recently announced a Cyber Schools Programme so thousands of the best and brightest young minds are given the opportunity to learn cutting-edge cyber security skills alongside their secondary school studies. This follows a series of other initiatives to find, finesse and fast-track tomorrow's online security experts as part of the Government's National Cyber Security Programme. I welcome the development of the Cyber Security Body of Knowledge which will play a vital role in helping professionals share their expertise to help inspire the next generation of talent across the country."

Chris Ensor, Deputy Director for Cyber Security Skills and Growth at the National Cyber Security Centre (NCSC) said: "NCSC and GCHQ are undertaking an ambitious skills programme that includes a suite of initiatives designed to strengthen and develop the pipeline of talent feeding the UK's cyber security workforce.

"The Cyber Security Body of Knowledge is an important part of this urgent work that seeks to inspire and train talented young people to develop their cyber skills."

Democratization of access to “Immersive Virtual Engineering” solutions for the global engineering community

ESI Group, a pioneer in Virtual Prototyping solutions, has acquired Scilab Enterprises SAS, publisher of Scilab, the most compelling open source alternative to MATLAB. Scilab provides a powerful, world-class environment for engineering computation and scientific applications.

Commenting on this acquisition, Vincent Chaillou, ESI Group’s COO, said: “This acquisition fits perfectly with ESI Group’s technology investment strategy. It is aligned with our objective to expand our user base to include all stakeholders involved in the industrial product creation process, starting from the earliest stages of analytical modeling. It paves the way towards the more elaborate 3D-4D numerical simulations of the full Virtual Prototyping and eventually of the all-encompassing “Immersive Virtual Engineering” transformative solutions of Industry 4.0.”

Raphaël Auphan, CEO of Scilab Enterprises, said: “We are very enthusiastic about joining ESI, a numerical simulation and Virtual Prototyping global leader, to bring Scilab to a wider range of industrial, academic and research players. Our shared vision will provide the engineering community with the latest generation of analytical solutions to meet current and future numerical simulation challenges."

A global community with more than a million engineering users

Scilab Enterprises was created in 2010 out of the Scilab Consortium, which was itself created in 2003 as part of an initiative backed by INRIA, the French national institute for computer science and applied mathematics. Scilab (SCIentific LABoratory) is open-source, multi-platform numerical computation software and a scientific and engineering programming language. First introduced in 1980, it now engages an active community of over a million engineering users and development partners across diverse industries and in education. With its wide range of mathematical functions, graphic interfaces, graphs and algorithms, Scilab enables users to build their own applications for numerical analysis, system modeling, data analytics, optimization, signal and image processing, embedded and control systems, and test and measurement interfaces.

Beyond publishing Scilab and offering Scilab consulting services, Scilab Enterprises also offers “Scilab Cloud” for the web deployment of scientific and engineering applications in Software as a Service (SaaS) mode. This enables organizations and individuals to publish and manage the web-based use of their own Scilab applications.

A collaborative inter-connectable platform, available in PaaS mode

Thanks to its ability to interconnect with third-party codes, technologies and applications, Scilab can also serve as a single scientific and technical computing platform. Building on that capability Scilab Enterprises now offers a scientific and technological “Platform as a Service” (PaaS) to enable countless numbers of public and private enterprises as well as individual engineers and scientists to monetize applications written in different programming languages by facilitating their distribution, back-up and use. Importantly, applications exploiting Scilab’s many computational functions can be accessed by an unlimited number of users as the software is open source.

A powerful vector for the democratization of ESI Group’s Virtual Prototyping and Immersive Virtual Engineering solutions

The acquisition of Scilab will help expand ESI Group’s footprint in the product pre-design stage and early analytical phase. Together with the recent acquisition of ITI and its SimulationX (0D-1D) system modeling software, this expansion is part of the Group’s transformative and disruptive strategy of front-loading the power of computer modeling to everyone involved in the product development process. Engineers working within conventional “Product Lifecycle Management” (PLM) already benefit from mathematical analytical models, built around Scilab, to quickly explore design options with simple (0D) models before embarking on detailed (0D-1D to 3D-4D) modeling-based design, certification and production. In ESI’s disruptive PLM vision, the next step for Virtual Prototyping is to follow the life of the product after its development and certification phases, covering its actual life in operation. Within the new methodological approach of the “Product Performance Lifecycle” (PPL), the ‘as built’ Virtual Prototype is transformed into its “Hybrid Virtual Twin”, with ‘data driven’ updates from sensors in actual operation. Here, mathematical models of the product - along with its real or virtual sensors and control systems - are key to providing reliable predictive prototyping.
In this fully “End to End” vision of innovative product development and subsequent piloting in real-life operational conditions, the acquisition of Scilab Enterprises equips ESI to address the full spectrum of early engineering needs, from simple but physically realistic models all the way to the “as manufactured” and “as operated” virtual products that customers build today and the assisted or autonomous products they are developing for tomorrow.

Following ESI’s recent successful acquisition and integration of OpenCFD, which specializes in developments and services for OpenFOAM, an open source software package for numerical simulation in the field of fluid mechanics, this operation substantiates the Group’s commitment to the open source business model as a way to foster the disruptive moves that will democratize Virtual Engineering solutions “for all”. It will give beneficiaries greater freedom to customize applications and to tailor them to their own needs in a flexible and affordable way. In this regard, Scilab’s incorporation into ESI’s global eco-system is expected to be a major catalyst in easing and speeding up the digital transformation of innovative industrial product development.

Multiple technological and commercial synergies

Scilab Enterprises is naturally synergetic with ESI Group, both in technology and business opportunities. The existing ESI Cloud offering will be greatly boosted by this acquisition and by Scilab’s reputation with a very large global community of users in diverse industries and academic circles. It also represents a major asset that will help to increase ESI’s global visibility and eventually to unlock valuable commercial opportunities. Moreover, the dynamic presence of Scilab in the educational community worldwide will immediately expand ESI’s footprint in that all important sphere.

Financial aspects of the operation

The operation is being financed mainly by the transfer of some ESI Group treasury shares to the shareholders of Scilab Enterprises. Scilab’s development platform and team will be rapidly integrated into ESI’s operating structure.

Mosaic pneumococcal population structure caused by horizontal gene transfer is shown on the left for a subset of genes. Matrix on the right shows a genome-wide summary of the relationships between the bacteria, ranging from blue (distant) to yellow (closely related). Photo: Pekka Marttinen.

Gene transfers are particularly common in the antibiotic-resistance genes of Streptococcus pneumoniae bacteria.

When mammals breed, the genome of the offspring is a combination of the parents' genomes. Bacteria, by contrast, reproduce through cell division. In theory, this means that the genomes of the offspring are copies of the parent genome. However, the process is not quite as straightforward as this due to horizontal gene transfer through which bacteria can transfer fragments of their genome to each other. As a result of this phenomenon, the genome of an individual bacterium can be a combination of genes from several different donors. Some of the genome fragments may even originate from completely different species.

In a recent study combining machine learning and bioinformatics, a new supercomputational method was developed for modeling gene transfers between different lineages of a bacterial population, or even between entirely different bacterial species. The method was used to analyse a collection of 616 whole genomes of the recombinogenic pathogen Streptococcus pneumoniae.
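The underlying intuition can be conveyed with a toy sliding-window scan (not the statistical method developed in the study): slide a window along a genome and ask which reference lineage each window most resembles, flagging stretches that look as if they came from elsewhere. The sequences below are synthetic.

```python
# Toy illustration of spotting a horizontally transferred fragment: slide a
# window along a genome and ask which reference lineage it most resembles.
# The study's actual method is a statistical model over 616 whole genomes;
# this sketch only conveys the underlying idea, using synthetic sequences.
import numpy as np

rng = np.random.default_rng(2)
L, window = 3000, 200
bases = np.array(list("ACGT"))

lineage_a = rng.choice(bases, L)            # reference lineage A
lineage_b = rng.choice(bases, L)            # reference lineage B

genome = lineage_a.copy()                   # offspring of lineage A ...
genome[1200:1600] = lineage_b[1200:1600]    # ... with a fragment acquired from B

for start in range(0, L - window + 1, window):
    seg = slice(start, start + window)
    sim_a = np.mean(genome[seg] == lineage_a[seg])   # identity with lineage A
    sim_b = np.mean(genome[seg] == lineage_b[seg])   # identity with lineage B
    origin = "B (putative transfer)" if sim_b > sim_a else "A"
    print(f"{start:4d}-{start + window:4d}  A:{sim_a:.2f}  B:{sim_b:.2f}  -> {origin}")
```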

The usefulness of a gene affects its transfer rate

The study identified several individual genes in which gene transfers were particularly common, including genes that confer resistance to antibiotics.

'In the case of antibiotic-resistance genes, the number of gene transfers may be related to how useful these genes are to bacteria and to the resulting selection pressure', says Academy Research Fellow Pekka Marttinen from the Aalto University Department of Computer Science.

'The study will not provide a direct solution to antibiotic resistance because this would require a profound understanding of how the resistance occurs and spreads. Nevertheless, knowing the extent to which gene transfer occurs between different species and lineages can help in improving this understanding', he explains.

The study was able to show that gene transfer occurs both within species and between several different species. The large number of transfers identified during the study was a surprise to the researchers.

'Previous studies have shown that gene transfers are common in individual genes, but our team was the first to use a computational method to show the extent of gene transfer across the entire genome', Marttinen says.

'The method also makes it possible to effectively examine ancestral gene transfers for the first time, which is important in examining transfers between different species.'

Molecular Biology and Evolution published the results in February.

Molecular Data Analysis Using R has been published by the US publisher Wiley Blackwell. The book is written for two kinds of readers: researchers working in molecular biology laboratories who intend to analyze their experimental data in a modern statistical environment, and bioinformaticians wishing to understand the experimental methods behind the data they work with.

This book addresses the difficulties experienced by wet-lab researchers with the statistical analysis of data from molecular biology experiments. The authors explain how to use R and Bioconductor for the analysis of experimental data in the field of molecular biology. The content is based upon two university courses for bioinformatics and experimental biology students (Biological Data Analysis with R and High-throughput Data Analysis with R). The material is divided into chapters based upon the experimental methods used in the laboratories.

Key features include:
• Broad appeal--the authors target their material at researchers of several levels, ensuring that the basics are always covered.
• First book to explain how to use R and Bioconductor for the analysis of several types of experimental data in the field of molecular biology.
• Focuses on R and Bioconductor, which are widely used for data analysis. One great benefit of R and Bioconductor is the vast user community, the very active discussion around them and the practice of sharing code. Further, R is the platform on which many new analysis approaches are first implemented, so novel methods are available to R users early.

Big data for the molecular biology lab: new educational resource from Finland

"In this work we used research data produced from molecular biology experiments conducted at the University of Tampere," says the author Chaba Ortutay. He continues: The content of the book uses our experiences with constant interactions with students at our courses delivered at different universities in Finland, Ireland, Hungary; and on our online platform, worldwide. The feedback from these students were indispensable for writing and finalizing our book."

Detailed information on Wiley’s website: http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1119165024.html
