LATEST

Three-dimensional rendering of Risso's dolphin echolocation click spectra recorded in the Gulf of Mexico, aggregated by an unsupervised learning algorithm.  CREDIT Kaitlin Frasier

Machine learning approach could help scientists monitor wild dolphin populations

Scientists have developed a new algorithm that can identify distinct dolphin click patterns among millions of clicks in recordings of wild dolphins. This approach, presented in PLOS Computational Biology by Kaitlin Frasier of Scripps Institution of Oceanography, California, and colleagues, could potentially help distinguish between dolphin species in the wild.

Frasier and her colleagues build autonomous underwater acoustic sensors that can record dolphins' echolocation clicks in the wild for over a year at a time. These instruments serve as non-invasive tools for studying many aspects of dolphin populations, including how they are affected by the Deepwater Horizon oil spill, natural resource development, and climate change.

Because the sensors record millions of clicks, it is difficult for a human to recognize any species-specific patterns in the recordings. So, the researchers used advances in machine learning to develop an algorithm that can uncover consistent click patterns in very large datasets. The algorithm is "unsupervised," meaning it seeks patterns and defines different click types on its own, instead of being "taught" to recognize patterns that are already known.
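
As a rough illustration of what unsupervised grouping of click spectra can look like, here is a minimal Python sketch that clusters synthetic spectra with k-means. The feature layout, the number of clusters, and the use of scikit-learn are assumptions made purely for illustration; this is not the authors' actual pipeline.

    # Minimal sketch only: group synthetic "click spectra" into recurring types
    # with an unsupervised algorithm (k-means). The published study applied its
    # own clustering approach to millions of real recorded clicks.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Toy dataset: 10,000 clicks, each summarized as a 50-bin spectrum.
    n_clicks, n_bins = 10_000, 50
    hidden_type = rng.integers(0, 3, size=n_clicks)        # unknown "species" label
    templates = rng.random((3, n_bins))                     # three spectral shapes
    spectra = templates[hidden_type] + 0.05 * rng.standard_normal((n_clicks, n_bins))

    # Unsupervised step: no labels are provided; the algorithm defines click types itself.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(spectra)

    # How many clicks fall into each discovered type.
    print(np.bincount(kmeans.labels_))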

The new algorithm was able to identify consistent patterns in a dataset of over 50 million echolocation clicks recorded in the Gulf of Mexico over a two-year period. These click types were consistent across monitoring sites in different regions of the Gulf, and one of the click types that emerged is associated with a known dolphin species.

The research team hypothesizes that some of the consistent click types revealed by the algorithm could be matched to other dolphin species and therefore may be useful for remote monitoring of wild dolphins. This would improve on most current monitoring methods, which rely on people making visual observations from large ships or aircraft and are only possible in daylight and good weather conditions.

Next, the team plans to integrate this work with deep learning methods to improve their ability to identify click types in new datasets recorded in different regions. They will also perform fieldwork to verify which species match some of the new click types revealed by the algorithm.

"It's fun to think about how the machine learning algorithms used to suggest music or social media friends to people could be re-interpreted to help with ecological research challenges," Frasier says. "Innovations in sensor technologies have opened the floodgates in terms of data about the natural world, and there is a lot of room for creativity right now in ecological data analysis."

This image depicts a poverty map (552 communities) of Senegal generated using the researchers' supercomputational tools.

Researchers harness big data to improve poverty maps, a much-needed tool to aid world's most vulnerable people

For years, policymakers have relied upon surveys and census data to track and respond to extreme poverty.

While effective, assembling this information is costly and time-consuming, and it often lacks detail that aid organizations and governments need in order to best deploy their resources.

That could soon change.

A new mapping technique, described in the Nov. 14 issue of the Proceedings of the National Academy of Sciences, shows how researchers are developing supercomputational tools that combine cellphone records with data from satellites and geographic information systems to create timely and incredibly detailed poverty maps.

"Despite much progress in recent decades, there are still more than 1 billion people worldwide lacking food, shelter and other basic human necessities," says Neeti Pokhriyal, one of the study's co-lead authors, and a PhD candidate in the Department of Computer Science and Engineering at the University at Buffalo.

The study is titled "Combining Disparate Data Sources for Improved Poverty Prediction and Mapping."

Some organizations define extreme poverty as a severe lack of food, health care, education and other basic needs. Others relate it to income; for example, the World Bank says people living on less than $1.25 per day (2005 prices) are extremely impoverished.

While declining in most areas of the world, roughly 1.2 billion people still live in extreme poverty. Most are in Asia, sub-Saharan Africa and the Caribbean. Aid organizations and governmental agencies say that timely and accurate data are vital to ending extreme poverty.

The study focuses on Senegal, a sub-Saharan country with a high poverty rate.

The first data set consists of 11 billion calls and texts from more than 9 million Senegalese mobile phone users. All information is anonymized and captures how, when, where and with whom people communicate.

The second data set comes from satellite imagery, geographic information systems and weather stations. It offers insight into food security, economic activity and accessibility to services and other indicators of poverty. This can be gleaned from the presence of electricity, paved roads, agriculture and other signs of development.

The two datasets are combined using a machine learning-based framework.
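
As a concrete (and heavily simplified) picture of what combining two disparate sources can look like, the sketch below concatenates placeholder phone-derived and satellite-derived feature tables and trains a single regression model on them. The features, the model choice, and the toy target are assumptions for illustration only, not the framework described in the paper.

    # Minimal sketch: fuse two stand-in feature sets (mobile-phone metrics and
    # satellite/GIS indicators) to predict a per-community poverty index.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_communities = 552                                    # map resolution cited in the article

    phone_features = rng.random((n_communities, 20))       # e.g., call volume, mobility proxies
    satellite_features = rng.random((n_communities, 15))   # e.g., night lights, road density
    poverty_index = (phone_features[:, 0] + satellite_features[:, 0]
                     + 0.1 * rng.standard_normal(n_communities))

    # Simple fusion: concatenate both sources into one design matrix.
    X = np.hstack([phone_features, satellite_features])

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    scores = cross_val_score(model, X, poverty_index, cv=5, scoring="r2")
    print("cross-validated R^2:", round(scores.mean(), 3))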

Using the framework, the researchers created maps detailing the poverty levels of 552 communities in Senegal. Current poverty maps divide the nation into four regions. The framework can also help predict certain dimensions of poverty, such as deprivations in education, standard of living and health.

Unlike surveys or censuses, which can take years and cost millions of dollars, these maps can be generated quickly and cost-efficiently. And they can be updated as often as the data sources are updated. Plus, their diagnostic nature can help policymakers design better interventions to fight poverty.

Pokhriyal, who began work on the project in 2015 and has travelled to Senegal, says the goal is not to replace censuses and surveys but to supplement these sources of information in between cycles. The approach could also prove useful in areas of war and conflict, as well as remote regions.

SwRI scientists modeled the protracted period of bombardment after the Moon formed, determining that impactor metals may have descended into Earth’s core. This artistic rendering illustrates a large impactor crashing into the young Earth. Light brown and gray particles indicate the projectile’s mantle (silicate) and core (metal) material, respectively.

Large planetesimals delivered more mass to nascent Earth than previously thought

Southwest Research Institute scientists recently modeled the protracted period of bombardment following the Moon's formation, when leftover planetesimals pounded the Earth. Based on these simulations, scientists theorize that moon-sized objects delivered more mass to the Earth than previously thought.

Early in its evolution, Earth sustained an impact with another large object, and the Moon formed from the resulting debris ejected into an Earth-orbiting disk. A long period of bombardment followed, the so-called "late accretion," when large bodies impacted the Earth delivering materials that were accreted or integrated into the young planet.

"We modeled the massive collisions and how metals and silicates were integrated into Earth during this 'late accretion stage,' which lasted for hundreds of millions of years after the Moon formed," said SwRI's Dr. Simone Marchi, lead author of a Nature Geoscience paper outlining these results. "Based on our simulations, the late accretion mass delivered to Earth may be significantly greater than previously thought, with important consequences for the earliest evolution of our planet."

Previously, scientists estimated that materials from planetesimals integrated during the final stage of terrestrial planet formation made up about half a percent of the Earth's present mass. This is based on the concentration of highly "siderophile" elements -- metals such as gold, platinum and iridium, which have an affinity for iron -- in the Earth's mantle. The relative abundance of these elements in the mantle points to late accretion, after Earth's core had formed. But the estimate assumes that all highly siderophile elements delivered by the later impacts were retained in the mantle.

Late accretion may have involved large differentiated projectiles. These impactors may have concentrated the highly siderophile elements primarily in their metallic cores. New high-resolution impact simulations by researchers at SwRI and the University of Maryland show that substantial portions of a large planetesimal's core could descend to, and be assimilated into, the Earth's core -- or ricochet back into space and escape the planet entirely. Both outcomes reduce the amount of highly siderophile elements added to Earth's mantle, which implies that two to five times as much material may have been delivered as previously thought.

"These simulations also may help explain the presence of isotopic anomalies in ancient terrestrial rock samples such as komatiite, a volcanic rock," said SwRI co-author Dr. Robin Canup. "These anomalies were problematic for lunar origin models that imply a well-mixed mantle following the giant impact. We propose that at least some of these rocks may have been produced long after the Moon-forming impact, during late accretion."

New method creates time-efficient way of supercomputing models of complex systems reaching equilibrium

When the maths cannot be done by hand, physicists modelling complex systems, like the dynamics of biological molecules in the body, need to use supercomputer simulations. Such complicated systems require a period of time before being measured, as they settle into a balanced state. The question is: how long do supercomputer simulations need to run to be accurate? Speeding up processing time to elucidate highly complex study systems has been a common challenge. And it cannot be done simply by running the computation in parallel, because the results from one time step are needed to compute the next. Now, Shahrazad Malek from the Memorial University of Newfoundland, Canada, and colleagues have developed a practical partial solution to the problem of saving time when using supercomputer simulations that require bringing a complex system into a steady state of equilibrium and measuring its equilibrium properties. These findings are part of a special issue on "Advances in Computational Methods for Soft Matter Systems," recently published in EPJ E.

One solution is to run multiple copies of the same simulation, with randomised initial conditions for the positions and velocities of the molecules. By averaging the results over an ensemble of 10 to 50 runs, each run can be shorter than a single long run and still produce the same level of accuracy in the results. In this study, the authors go one step further and focus on an extreme case, examining an ensemble of 1,000 runs -- dubbed a swarm. This approach reduces the overall time required to estimate the system's equilibrium properties.
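
The sketch below illustrates the ensemble idea with a toy stochastic relaxation standing in for a real molecular simulation: many short, independently initialized runs are averaged instead of running one long trajectory. The dynamics, run lengths, and swarm size are assumptions for illustration only.

    # Minimal sketch: estimate an equilibrium property by averaging a swarm of
    # short, independently initialized runs of a toy relaxation process.
    import numpy as np

    def short_run(n_steps=2_000, dt=0.01, seed=None):
        """Relax x toward equilibrium with noise; return the late-time average of x**2."""
        rng = np.random.default_rng(seed)
        x = rng.normal(5.0)                                # randomized initial condition
        samples = np.empty(n_steps)
        for i in range(n_steps):
            x += -x * dt + np.sqrt(2 * dt) * rng.standard_normal()
            samples[i] = x
        return np.mean(samples[n_steps // 2:] ** 2)        # discard the equilibration half

    # Swarm of 1,000 short runs, each with its own seed and starting point.
    swarm = np.array([short_run(seed=s) for s in range(1_000)])
    estimate, stderr = swarm.mean(), swarm.std(ddof=1) / np.sqrt(swarm.size)
    print(f"<x^2> at equilibrium ~ {estimate:.3f} +/- {stderr:.3f}")   # exact value is 1.0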

Since this sort of massive multi-processor system is gradually becoming more common, this work expands the set of techniques available to scientists. The solutions can be applied to computational studies in fields such as biochemistry, materials physics, astrophysics, chemical engineering, and economics.

The upper and lower series of pictures each show a simulation of a neutron star merger. In the scenario shown in the upper panels the star collapses after the merger and forms a black hole, whereas the scenario displayed in the lower row leads to an at least temporarily stable star. CREDIT Picture: Andreas Bauswein, HITS

Neutron stars are the densest objects in the Universe; however, their exact characteristics remain unknown. Using supercomputing simulations based on recent observations, an international team of scientists has managed to narrow down the size of these stars

When a very massive star dies, its core contracts. In a supernova explosion, the star's outer layers are expelled, leaving behind an ultra-compact neutron star. For the first time, the LIGO and Virgo Observatories have recently been able to observe the merger of two neutron stars and measure the mass of the merging stars. Together, the neutron stars had a mass of 2.74 solar masses. Based on these observational data, an international team of scientists from Germany, Greece, and Japan including HITS astrophysicist Dr. Andreas Bauswein has managed to narrow down the size of neutron stars with the aid of supercomputer simulations. The calculations suggest that the neutron star radius must be at least 10.7 km. The international research team's results have been published in "Astrophysical Journal Letters."

The Collapse as Evidence

In neutron star collisions, two neutron stars orbit around each other, eventually merging to form a star with approximately twice the mass of the individual stars. In this cosmic event, gravitational waves -- oscillations of spacetime whose signal characteristics are related to the mass of the stars -- are emitted. This event resembles what happens when a stone is thrown into water and waves form on the water's surface. The heavier the stone, the higher the waves.

The scientists simulated different merger scenarios for the recently measured masses to determine the radius of the neutron stars. In so doing, they relied on different models and equations of state describing the exact structure of neutron stars. Then, the team of scientists checked whether the calculated merger scenarios are consistent with the observations. The conclusion: All models that lead to the direct collapse of the merger remnant can be ruled out because a collapse leads to the formation of a black hole, which in turn means that relatively little light is emitted during the collision. However, different telescopes have observed a bright light source at the location of the stars' collision, which provides clear evidence against the hypothesis of collapse.

The results thereby rule out a number of models of neutron star matter, namely all models that predict a neutron star radius smaller than 10.7 kilometers. However, the internal structure of neutron stars is still not entirely understood. The radii and structure of neutron stars are of particular interest not only to astrophysicists, but also to nuclear and particle physicists because the inner structure of these stars reflects the properties of high-density nuclear matter found in every atomic nucleus.

Neutron Stars Reveal Fundamental Properties of Matter

While neutron stars have a slightly larger mass than our Sun, their diameter is only a few tens of kilometers. These stars thus contain a large mass in a very small space, which leads to extreme conditions in their interior. Researchers have been exploring these internal conditions for decades and are particularly interested in narrowing down the radius of these stars, as their size depends on the unknown properties of high-density matter.

The new measurements and new calculations are helping theoreticians better understand the properties of high-density matter in our Universe. The recently published study already represents scientific progress, as it has ruled out some theoretical models, but there are still a number of other models with neutron star radii greater than 10.7 km. However, the scientists have been able to demonstrate that further observations of neutron star mergers will continue to improve these measurements. The LIGO and Virgo Observatories have just begun taking measurements, and the sensitivity of the instruments will continue to increase over the next few years and provide even better observational data. "We expect that more neutron star mergers will soon be observed and that the observational data from these events will reveal more about the internal structure of matter," HITS scientist Andreas Bauswein concludes.

The reference gene is depicted by a black circle. The initial static global PIN is projected onto normal and cancer samples based on gene expression, and each function (red and green) is diffused through each PIN. In this case, the reference gene is assigned the green function in normal tissue and the red function in cancer, i.e., the gene gained the red function and lost the green function in cancer.

Changes in gene function in tumor samples correlate with patient survival

A given gene may perform a different function in breast cancer cells than in healthy cells due to changes in networks of interacting proteins, according to a new study published in PLOS Computational Biology.

Previous research has shown that a protein produced by a single gene can potentially have different functions in a cell depending on the proteins with which it interacts. Protein interactions can differ depending on context, such as in different tissues or developmental stages. For instance, one protein produced by a key fruit fly gene serves two separate functions over the course of fly development.

Building on this concept, Sushant Patkar of the University of Maryland and colleagues hypothesized that alterations in protein interaction networks in breast cancer cells may change the function of individual genes. To test this idea, they analyzed protein expression in 1,047 breast cancer tumors and 110 healthy breast tissue samples, using data from The Cancer Genome Atlas project.

The researchers developed a supercomputational framework to determine the structure of protein interaction networks in each sample and infer which genes performed different cellular functions within these networks. Then, they compared the number of genes inferred to perform each function in cancer cells relative to healthy cells.

The analysis revealed that several functions were associated with more or fewer genes in cancer cells than in healthy cells, but not because of changes in the expression of these genes. Instead, their function changed due to changes in their protein interaction networks. The researchers also showed that profiling a patient's tumor tissue according to these functional shifts served as an effective method to predict their cancer subtype and their survival.
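
To make the idea of function diffusion over context-specific networks concrete, here is a minimal sketch in which the same functional label is propagated through two differently wired toy networks and the resulting gene scores are compared. The diffusion rule, its parameters, and the use of networkx are illustrative assumptions, not the authors' framework.

    # Minimal sketch: diffuse a functional label through a "normal" and a "cancer"
    # protein-interaction network and compare which genes pick up the function.
    import networkx as nx

    def diffuse(graph, seeds, steps=20, alpha=0.7):
        """Random-walk-with-restart style spreading of a function label."""
        score = {n: float(n in seeds) for n in graph}
        for _ in range(steps):
            score = {
                n: (1 - alpha) * float(n in seeds)
                   + alpha * sum(score[m] / graph.degree(m) for m in graph[n])
                for n in graph
            }
        return {n: round(s, 3) for n, s in score.items()}

    # Same genes, different interaction wiring in the two contexts.
    normal = nx.Graph([("A", "B"), ("B", "C"), ("C", "D")])
    cancer = nx.Graph([("A", "D"), ("D", "C"), ("C", "B")])

    seeds = {"A"}                                  # genes annotated with the function
    print("normal:", diffuse(normal, seeds))
    print("cancer:", diffuse(cancer, seeds))       # scores shift with the rewiring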

"While it is completely plausible for a gene to lose or acquire novel biological functions, examples of such changes have predominantly been observed in the context of evolution," Patkar says. "We have developed a bioinformatics approach that suggests that such changes might alternatively occur through changes in the interactions of proteins encoded by the gene."

Next, the team plans to validate their supercomputational framework as a tool to assess changes in gene function in other biological contexts, such as in other diseases and tissues.

Access to the freely available article in PLOS Computational Biology: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005793

Information attacks have emerged as a major concern of societies worldwide. They come under different names and in different flavors — fake news, disinformation, political astroturfing, influence operations, etc. And they may arrive as a component of hybrid warfare — in combination with traditional cyber-attacks (use of malware), and with conventional military action or covert physical attacks. (Photo Credit: Shutterstock)

A team of U.S. Army researchers recently joined an international group of scientists in Chernihiv, Ukraine to initiate a first-of-its-kind global science and technology research program to understand and ultimately combat disinformation attacks in cyberspace. 

Scientists from the Bulgarian Defense Institute in Sofia, Bulgaria; the Chernihiv National University of Technology in Chernihiv, Ukraine; and the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic" in Kyiv, Ukraine joined ARL researchers Nov. 14-15, at the kickoff meeting of the Cyber Rapid Analysis for Defense Awareness of Real-time Situation project. The participation of Bulgarian and Ukrainian institutions is funded by the NATO Science for Peace and Security Programme, which promotes dialogue and practical cooperation between NATO member states and non-NATO partner nations -- in this case Ukraine -- based on scientific research, technological innovation and knowledge exchange.

Over the next three years, the group will develop theoretical foundations, methods, and approaches towards software tools for situational awareness that will enable a nation's defense forces to monitor cyberspace to detect malicious information injections and give timely notification of an information attack, said Dr. Alexander Kott, ARL chief scientist, who attended the meeting together with ARL's Dr. Brian Rivera, the chief of the Network Science Division. They'll also help create the conditions necessary for decision making about prevention of, or timely response to, adversarial disinformation injections or manipulations. Especially important in meeting these objectives will be real-world experience pertaining to actual disinformation attacks directed against Ukraine.

"Information attacks have emerged as a major concern of societies worldwide. They come under different names and in different flavors -- fake news, disinformation, political astroturfing, influence operations, etc. And they may arrive as a component of hybrid warfare -- in combination with traditional cyber-attacks (use of malware), and with conventional military action or covert physical attacks. A particularly poignant example of a victim of such attacks has been Ukraine," Kott said.

He said the ARL scientists bring to this project a number of critical scientific elements. These include published research results -- theories and algorithms -- that explain and predict the propagation of opinions and trust within a network, find untrustworthy sources within cyberspace, and detect false news. Much of this work was developed in the context of ARL's extensive Network Science research in alliance with multiple academic institutions, and will help jump-start CyRADARS.
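
As a toy example of the kind of opinion-propagation dynamics such theories address, the sketch below runs a simple DeGroot-style averaging process over a small trust network. The network, the trust weights, and the model choice are assumptions for illustration and are not ARL's actual algorithms.

    # Minimal sketch: DeGroot-style opinion propagation over a toy trust network.
    import numpy as np

    # Row-stochastic trust matrix: entry [i, j] is how much agent i trusts agent j.
    trust = np.array([
        [0.6, 0.3, 0.1],
        [0.2, 0.7, 0.1],
        [0.1, 0.1, 0.8],
    ])

    opinions = np.array([1.0, 0.0, 0.0])     # agent 0 starts with an injected claim

    for _ in range(50):                       # repeated averaging over trusted sources
        opinions = trust @ opinions

    print("long-run opinions:", np.round(opinions, 3))   # the claim spreads along trust links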

"ARL also operates a unique Open Campus business model. It enables scientists from both USA and other countries to conduct collaborative research at ARL. Within the context of CyRADARS, students and faculty from Ukraine and Bulgaria will be able to come to ARL and use ARL's Open Campus facilities and test beds while working on joint projects with ARL scientists," Kott said.

The research efforts will take place at all four institutions in a virtual, distributed networked laboratory that the project will create.

Each rectangular structure represents a heart cell in the supercomputer model. The color bursts depict propagating waves of calcium. Each cell is identical but exhibits a distinct pattern of calcium waves due to random ion channel gating. The team investigated how this randomness gives rise to sudden unpredictable heart arrhythmia. CREDIT PLOS Computational Biology/Mark A. Walker

Some heart disease patients face a higher risk of sudden cardiac death, which can happen when an arrhythmia -- an irregular heartbeat -- disrupts the normal electrical activity in the heart and causes the organ to stop pumping. Arrhythmias linked to sudden cardiac death are very rare, however, making it difficult to study how they occur and how they might be prevented.

To make it much easier to discover what triggers this deadly disorder, a team led by Johns Hopkins researchers constructed a powerful new supercomputer model that replicates the biological activity within the heart that precedes sudden cardiac death.

In a study published recently in PLOS Computational Biology, the team reported that the digital model has yielded important clues that could provide treatment targets for drug makers.

"For the first time, we have come up with a method for relating how distressed molecular mechanisms in heart disease determine the probability of arrhythmias in cardiac tissue," said Raimond L. Winslow, the Raj and Neera Singh Professor of Biomedical Engineering at Johns Hopkins and senior author of the study.

"The importance of this," said Winslow, who also is director of the university's Institute for Computational Medicine, "is that we can now quantify precisely how these dysfunctions drive and influence the likelihood of arrhythmias. That is to say, we now have a way of identifying the most important controlling molecular factors associated with these arrhythmias."

With this knowledge, Winslow said, researchers will be better able to develop treatments to keep some deadly heart rhythms from forming. "It could lead to better drugs that target the right mechanisms," Winslow said. "If you can find a molecule that blocks this particular action, then doing so will significantly reduce the probability of an arrhythmia, whereas other manipulations will have comparatively negligible effects."

The lead author of the study was Mark A. Walker, who worked on the problem while earning his Johns Hopkins doctoral degree under Winslow's supervision. Walker said he and his colleagues used supercomputer models to determine what activity was linked to arrhythmia at three biological levels: in the heart tissue as a whole, within individual heart cells, and within the molecules that make up the cells, including small proteins called ion channels that control the movement of calcium in the heart.

"Calcium is an important player in the functioning of a heart cell," said Walker, who now works as a computational biologist at the Broad Institute, a research center affiliated with Harvard University and the Massachusetts Institute of Technology. "There are a lot of interesting questions about how the handling of calcium in heart cells can sort-of go haywire."

Walker and his colleagues chose to focus on one intriguing question about this process: when heart cells possess too much calcium, which can happen in heart disease patients, how does this overload of calcium trigger an arrhythmia?

The team discovered that heart cells respond by expelling excess calcium, and in doing so, they generate an electrical signal. If by chance, a large enough number of these signals are generated at the same time, it can trigger an arrhythmia.

"Imagine if you have a bunch of people in a room, and you want to test the strength of the floor they are all standing on," Walker said. "It's not a very strong floor, so if there's enough force on it, it will break. You tell everyone that on the count of three, jump in the air. They all land on the floor, and you try to figure out what's the probability that the floor will break, given that everyone is going to jump at a slightly different time, people will weigh different amounts, and they might jump to different heights. These are all random variables."

Similarly, Walker said, random variables also exist in trying to determine the probability that enough calcium-related electrical signals will simultaneously discharge in the heart to set off a lethal arrhythmia. Because the circumstances that cause sudden cardiac death are so rare, they are very tough to predict.

"You're trying to figure out what the probability is," Walker said. "The difficulty of doing that is if the probability is one in a million, then you have to do tens of millions of trials to estimate that probability. One of the advances that we made in this work was that we were able to figure out how to do this with really just a handful of trials."

Walker and Winslow both cautioned that, at present, the new supercomputer model cannot predict which heart patients face a higher risk of sudden cardiac death. But they said the model should speed up the pace of heart research and the development of related medicines or treatments such as gene therapy. They said the model will be shared as free open-source software. 

Stefano Baroni, professor of theoretical condensed-matter physics

From the evolution of planets to electronics: Thermal conductivity plays a fundamental role in many processes

"Our goal? To radically innovate numerical simulations in the field of thermal transport to take on the great science and technology issues in which this phenomenon is so central. This new study, which has designed a new method with which to analyze heat transfer data more efficiently and accurately, is an important step in this direction".

This is how Stefano Baroni describes the new research carried out at SISSA in Trieste by a group he leads, which has just been published in the journal Scientific Reports.

The research team focused on studying thermal transfer, the physical mechanism by which heat tends to flow from a warmer to a cooler body. Familiar to everyone, this process is involved in a number of fascinating scientific issues such as, for example, the evolution of the planets, which depends crucially on the cooling process within them. But it is also crucial to the development of various technological applications: from thermal insulation in civil engineering to cooling in electronic devices, from maintaining optimal operating temperatures in batteries to nuclear plant safety and storage of nuclear waste.

"Studying thermal transfer in the laboratory is complicated, expensive and sometimes impossible, as in the case of planetology. Numerical simulation, on the other hand, enables us to understand the hows and whys of such phenomena, allowing us to calculate precisely physical quantities which are frequently not accessible in the lab, thereby revealing their deepest mechanisms", explains Baroni. The problem is that until a short time ago it was not possible to do supercomputing in this field with the same sophisticated quantum methodologies used so successfully for many other properties: "The equations needed to compute heat currents from the molecular properties of materials were not known. Our research group overcame this obstacle a few years ago formulating a new microscopic theory of heat transfer."

But a further issue needed resolving. The simulation times required to describe the heat transfer process are hundreds of times longer than those currently used to simulate other properties. And this understandably posed a number of problems.

"With this new research, bringing together concepts demonstrated by previous theories - especially that known as the Green-Kubo theory - with our knowledge of the quantum simulation field we understood how to analyse the data to simulate heat conductivity in a sustainable way in terms of supercomputer resources and, consequently, cost. And this opens up extremely important research possibilities and potential applications for these studies".

One curiosity, which Baroni reveals: "The technique we have formulated is adapted from a methodology used in completely different sectors, such as electronic engineering, to study the digitization of sound, and quantitative social sciences and economics, to study the dynamics of complex processes such as financial markets, for example. It is interesting to see how unexpected points of contact and cross fertilization can sometimes arise amongst such different fields".

  1. French researchers develop supercomputer models for better understanding of railway ballast
  2. Finnish researchers develop method to measure neutron star size using supercomputer modeling based on thermonuclear explosions
  3. Using machine learning algorithms, German team finds identifying the microstructure of stocks useful in financial crises
  4. Russian prof Sergeyev introduces methodology working with numerical infinities and infinitesimals; opens new horizons in a supercomputer called Infinity
  5. ASU, Chinese researchers develop human mobility prediction model that offers scalability, a more practical approach
  6. Marvell Technology buys rival chipmaker Cavium for $6 billion in a cash-and-stock deal
  7. WPI researchers use machine learning to detect when online news is a paid-for pack of lies
  8. Johns Hopkins researchers develop model estimating the odds of events that trigger sudden cardiac death
  9. Young Brazilian researcher creates supercomputer model to explain the origin of Earth's water
  10. With launch of new night sky survey, UW researchers ready for era of big data astronomy
  11. CMU software assembles RNA transcripts more accurately
  12. Russian scientists create a prototype neural network based on memristors
  13. Data Science Institute at Columbia develops statistical method that makes better predictions
  14. KU researchers untangle vexing problem in supercomputer-simulation technology
  15. University of Bristol launches £43 million Quantum Technologies Innovation Centre
  16. Utah researchers develop milestone for ultra-fast communication
  17. Pitt supercomputing helps doctors detect acute kidney injury earlier to save lives
  18. American University prof builds models to help solve few-body problems in physics
  19. UW prof helps supercompute activity around quasars, black holes
  20. Nottingham's early warning health, welfare system could save UK cattle farmers millions of pounds, reduce antibiotic use
