Trade publication covering breaking news in the rapidly evolving supercomputer marketplace.
  1. The chemical and biomolecular engineering assistant professor earned a research grant from the American Chemical Society Petroleum Research Fund.

    Srinivas Rangarajan, assistant professor in the department of chemical and biomolecular engineering at Lehigh University, received a research grant from The American Chemical Society Petroleum Research Fund (ACS PRF) in September 2017.

    ACS PRF grants support fundamental research related to petroleum or fossil fuels at nonprofit institutions. Funds are used as seed money, enabling an investigator to initiate a new research direction. In particular, this award is part of ACS's Doctoral New Investigator (DNI) program, which supports new faculty. Rangarajan's innovative research involves the computational discovery of new catalysts for olefin production.

    Olefins, a class of chemicals that includes ethylene, propylene and butadiene, are derived from petroleum. Although not sold directly to consumers, olefins are a multi-billion-dollar industry--with a cumulative global annual output of over 200 million metric tons--and are the basic component of polymers handled by consumers every day: fibers, bins, bottles, food packaging films, trash liners, insulation, tile flooring, antifreeze, solvents, detergents, tires, carpet backing and rubber hoses.

    Olefins are currently produced through an energy-intensive "steam cracking" process of natural gas liquids and lighter components of crude oil. A more energy-efficient alternative is to produce olefins directly from higher alkanes present in shale gas, an abundant form of natural gas found trapped within shale formations.

    Through detailed quantum chemistry calculations, Rangarajan aims to understand, at the atomic scale, where and how ethane interacts with a known catalyst (solid transition metal sulfide) to produce a type of olefin known as ethylene.

    The study will then use these calculations to formulate a mechanistic model of ethylene synthesis, laying the foundation for computationally discovering better-performing catalysts for olefin production.

    Rangarajan joined Lehigh in January 2017 after his postdoctoral studies at University of Wisconsin, Madison. He received his B.Tech. from the Indian Institute of Technology, Madras in 2007, and his Ph.D. from the University of Minnesota in 2013. His industrial experience includes previous employment at Shell Global Solutions in The Netherlands and India.

  2. Families of genes encoding proteins involved in communication across synapses define neurons by determining which cells they connect with and how they communicate

    In a major step forward in research, scientists at Cold Spring Harbor Laboratory (CSHL) today publish in Cell a discovery about the molecular-genetic basis of neuronal cell types. Neurons are the basic building blocks that wire up brain circuits supporting mental activities and behavior. The study, which involves sophisticated computational analysis of the messages transcribed from genes that are active in a neuron, points to patterns of cell-to-cell communication as the core feature that makes possible rigorous distinctions among neuron types across the mouse brain.

    CAPTION Patterns of cell-to-cell communication are the core feature that makes possible rigorous distinctions among neuron types across the mouse brain. This discovery by CSHL researchers was made through analysis of the gene activity profiles of these 6 known inhibitory cell types. CREDIT Huang Lab, CSHL

    The team, led by CSHL Professor Z. Josh Huang, likens their discovery to the way communication styles and patterns enable us to learn important -- definitive -- things about people. "To figure out who I am," says Anirban Paul, Ph.D., the paper's first author, "you can learn a great deal by studying with whom and how I communicate. When I call my grandma, do I use an app or a phone? How do I speak to her, in terms of tone? How does that compare with when I communicate with my son, or my colleague?"

    Using six genetically identifiable types of cortical inhibitory neurons that all release the neurotransmitter GABA, the team sought to discover factors capable of distinguishing their core molecular features. Drawing on a high-resolution RNA sequencing method optimized by Paul and computational tools developed by CSHL Assistant Professor Jesse Gillis and colleague Megan Crow, Ph.D., the team searched for families of genes whose activity exhibited characteristic patterns in each neuron type.


    Out of more than 600 gene families, the team found about 40 families whose activity patterns could be used to distinguish the six groups of cells. Surprisingly, Huang says, these cell-defining features fell into just six functional categories of gene families, all of which are vital for cell-to-cell communications. They include proteins that are expressed on either side of the communications interface in neurons: along or near membranes that face one another across the synaptic gap - the narrow space separating the (pre-synaptic) neuron that sends a signal from the (post-synaptic) neuron that receives it.
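    The paper's statistical machinery is far more sophisticated than anything shown here, but the underlying idea, that a gene family's activity profile can assign a cell to its type, can be illustrated with a toy nearest-centroid test (the expression values, cell-type names and the two-gene dimensionality below are all invented for illustration):

```python
from statistics import mean

# Toy expression profiles: activity of one hypothetical gene family
# (two genes) measured in cells of three invented neuron types. Real
# analyses involve thousands of genes and rigorous statistics; this
# only sketches how a distinguishing pattern separates types.
cells = {
    "typeA": [[5.1, 0.2], [4.8, 0.3], [5.3, 0.1]],
    "typeB": [[0.4, 3.9], [0.6, 4.2], [0.5, 4.0]],
    "typeC": [[2.0, 2.1], [2.2, 1.9], [1.8, 2.0]],
}

def centroid(vectors):
    """Component-wise mean of a list of expression vectors."""
    return [mean(col) for col in zip(*vectors)]

centroids = {t: centroid(v) for t, v in cells.items()}

def classify(vector):
    """Assign a cell to the type whose centroid is nearest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda t: dist2(vector, centroids[t]))

# Sanity check: each cell should map back to its own type if this
# family's activity pattern is truly distinguishing.
correct = sum(classify(v) == t for t, vs in cells.items() for v in vs)
print(correct, "of 9 cells correctly assigned")
```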

    The key genes dictate which other cells a neuron connects to and how it communicates with those partners, Huang explains. "Ultimately, the genes code for one striking feature, which is with whom and how these neurons talk to each other."

    The communications architecture is effectively hardwired into the molecular makeup of neurons, very likely across the brain, the team says. The ability to generalize the GABA results is based on re-analysis performed by the team of published data from the Allen Brain Atlas. It shows that the six "defining" gene/protein families they identified are also statistically distinguishing in random sets of neurons, including excitatory neurons, throughout the brain.

    The findings should help scientists sort out the bewildering array of neurons that are intertwined in the brain. Over the years, as neuroscientists have proposed various approaches for classifying neurons, one question has been whether these classifications reflect cells' biological properties, or are merely useful but arbitrary designations. Hundreds or thousands of types of neurons probably exist in the brain but there has been no agreement among neuroscientists about how to define different types and on what biological basis. In a real sense, says Huang, the CSHL team has discovered the rules of a "core genetic recipe" for constructing diverse types of nerve cells.

  3. A highly unusual collaboration between information engineers at Oxford, the Zooniverse citizen science platform and international disaster response organization Rescue Global is enabling a rapid and effective response to Hurricane Irma.

    The project draws on the power of the Zooniverse, the world's largest and most popular people-powered research platform, to work with volunteers and crowdsource the data needed to understand Irma's path of destruction and the damage caused. Combining these insights with artificial intelligence will help rescue and relief organisations understand the scale of the crisis and deliver aid to those worst affected as soon as possible.


    Irma is now judged to be the most powerful Atlantic storm in a decade, breaking previous extreme weather records and causing widespread destruction and death across the Caribbean. Tens of thousands of people have been displaced or made homeless, and well over a million are at risk from loss of critical services such as water and electricity.

    The disaster poses huge challenges for crisis response teams, who need to assess as quickly as possible the extent of the destruction on islands spread over thousands of square miles, and ensure that the right aid gets to those in most need in the safest and most efficient way.

    In the immediate aftermath of Irma, Oxford researchers have been working round the clock in partnership with Rescue Global, a respected international crisis response charity, to help address this problem. The results have already helped Rescue Global deliver aid to some of the areas worst affected by Irma.

    On September 12th the Zooniverse, which was founded by Oxford researchers, relaunched its Planetary Response Network. First trialled in the days following the Nepal earthquake of April 2015, the PRN aims to mobilise a 'crowd' to assist in a live disaster that is still unfolding. In recent days thousands of volunteers from around the world have already joined the effort. Their role is to analyse 'before' and 'after' satellite images of the islands hit by Irma and identify features such as damaged buildings, flooding, blocked roads or new temporary settlements which indicate that people are homeless.


    Rebekah Yore, Operations Manager at Rescue Global, commented: "By the morning of Friday 15th September, we were told by the Zooniverse team that roughly 300,000 classifications from 7,500 people had taken place through the platform. This extraordinary effort is the equivalent of the output of one person working full-time for just over a year, or that same person working 24/7 without breaks for around 3 months. And the numbers of volunteers and classifications are increasing daily. This input is already having a direct effect on the ground, helping to provide situational awareness for all deployed teams."
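    Yore's equivalences can be sanity-checked with a little arithmetic. The article gives no per-image time, so the 30-seconds-per-classification rate below is an assumption:

```python
classifications = 300_000
seconds_each = 30                 # assumed average time per image pair
total_hours = classifications * seconds_each / 3600

# A full-time working year: roughly 48 weeks at 40 hours per week.
full_time_year_hours = 48 * 40
years_full_time = total_hours / full_time_year_hours

# The same effort performed 24/7 without breaks.
months_nonstop = total_hours / 24 / 30.4   # average days per month

print(f"{total_hours:.0f} hours of effort")
print(f"= {years_full_time:.1f} full-time working years")
print(f"= {months_nonstop:.1f} months of round-the-clock work")
```

    At that assumed rate the totals come out to about 1.3 full-time years and about 3.4 months of nonstop work, consistent with the figures quoted.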

    The sheer volume of images would take an individual months to sort through, but can be analysed in a matter of hours by the 'crowd'. Because the images are often of poor quality, human observers are much better placed to perform this part of the task than computers.

    For the next step, however, computers are essential, and Oxford engineering researchers have developed a suite of sophisticated artificial intelligence tools which can process the resulting data. Machine learning approaches quickly reconcile inconsistent responses, aggregate the data and integrate information derived from other crowd-sourced mapping materials, such as the Humanitarian Open Street Maps and Tomnod. This approach generates the best information possible to inform relief efforts. This analysis enables the team to build impact 'heat maps' that identify the areas in need of urgent assistance. Oxford has considerable expertise in this area: the tools have been refined over several years and were previously used to assist Rescue Global in its response to the 2015 Nepal and 2016 Ecuador earthquakes.
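    The article does not spell out the team's aggregation algorithms, which are considerably more sophisticated than a simple vote; as a minimal sketch of the basic idea, redundant volunteer labels for each image tile can be reconciled by majority vote and summarized with an agreement score (the tile IDs and labels below are invented):

```python
from collections import Counter, defaultdict

# Each volunteer classification is a (tile_id, label) pair. Tiles are
# shown to several volunteers, whose answers may disagree; this is
# invented example data, not output from the Zooniverse platform.
classifications = [
    ("tile-0", "damaged"), ("tile-0", "damaged"), ("tile-0", "intact"),
    ("tile-1", "flooded"), ("tile-1", "flooded"), ("tile-1", "flooded"),
    ("tile-2", "intact"),  ("tile-2", "damaged"), ("tile-2", "intact"),
]

def aggregate(classifications):
    """Majority vote per tile, plus the fraction of volunteers agreeing."""
    votes = defaultdict(Counter)
    for tile, label in classifications:
        votes[tile][label] += 1
    result = {}
    for tile, counter in votes.items():
        label, count = counter.most_common(1)[0]
        result[tile] = (label, count / sum(counter.values()))
    return result

heat = aggregate(classifications)
for tile, (label, agreement) in sorted(heat.items()):
    print(f"{tile}: {label} (agreement {agreement:.0%})")
```

    In a real pipeline the per-tile results would be placed on a geographic grid to form the impact "heat maps" described below, with low-agreement tiles flagged for further review.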


    The 'heat maps' enable Rescue Global to decide where to send its own small reconnaissance planes to conduct detailed aerial assessments, and to share critical information with a multitude of governmental and humanitarian partners. Working closely with Airlink, who are flying in aid to a central location, Rescue Global is using information gathered through the Zooniverse platform and its own needs assessments to coordinate the onward delivery of aid through a network of boats and planes, ensuring that it gets to those who need it most.

    This new technology offers an evidence-based, rational approach to disaster management. Through collaboration with crisis responders like Rescue Global, Oxford researchers are making a unique and significant difference to victims of Hurricane Irma.

    Dr Steven Reece, Machine Learning Research Fellow and mapping lead at Oxford University, said: 'As always we are extremely grateful to our friends in the satellite industry for providing data and, of course, the crowd for their amazing work interpreting the imagery so quickly. This has been a sustained campaign and we've now produced heat maps for all the Virgin Islands. With Hurricane Maria increasing in strength and bearing down on the same area, we will have a lot more work ahead of us.'

  4. Pitt researchers awarded $300k by NSF to identify environmentally sustainable chelating agents

    Molecular chelating agents are used in many areas ranging from laundry detergents to paper pulp processing to precious metal refining. However, some chelating agents, especially the most effective ones, do not degrade in nature and may pollute the environment. With support from the National Science Foundation (NSF), researchers at the University of Pittsburgh Swanson School of Engineering are developing machine learning procedures to discover new chelating agents that are both effective and degradable.

    Dr. John Keith, a Richard King Mellon Faculty Fellow in Energy and assistant professor of chemical engineering at Pitt, is principal investigator; and Dr. Eric Beckman, Distinguished Service Professor of chemical engineering and co-director of Pitt's Mascaro Center for Sustainable Innovation, is co-PI. Their project titled "SusChEM: Machine learning blueprints for greener chelants" will receive $299,999 from the NSF.


    "Chelating agents are molecules that bind to and isolate metal ions dissolved in water," explains Dr. Keith. "Cleaning detergents normally don't work well in hard water because of metal ions like magnesium and calcium interfering. That's why commercial detergents typically include some chelating agents to hold up those metal ions so the rest of the detergent can focus on cleaning."

    While chelating agents are valued for their ability to bind strongly to different metal ions, researchers searching for more effective chelate structures are also factoring in how long the agents take to degrade in the environment and how likely they are to be toxic. "Many of the widely used chelating agents end up in water runoffs, where they can be somewhat toxic to wildlife and sometimes to people as well," says Dr. Beckman.

    Developing new chelating agents so far has relied on trial and error experimentation. Dr. Beckman continues, "In the past, folks have tried to create better chelating agents by tweaking existing structures, but whenever that produces something less toxic, the chelating agent winds up being much less effective too. We're trying a new approach that uses machine learning to look through much larger and more diverse pools of candidate molecules to find those that would be the most useful."

    The Pitt research team will use quantum chemistry calculations to develop machine learning methods that can predict new molecules that would be more effective and greener than existing chelating agents. While computational quantum chemistry can be used to screen through a thousand hypothetical chelating agents in a year, machine learning methods based on quantum chemistry could be used to screen hundreds of thousands of candidates per week. Once the researchers identify promising candidates, they will synthesize and test them in their labs to validate the efficacy of the machine learning process for designing greener chemicals.
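    The project's actual methods are not detailed in the article; the toy sketch below only illustrates the general surrogate-screening idea of fitting a cheap model to a few expensively computed labels and then ranking a much larger candidate pool. The two-number descriptors, the stand-in "quantum chemistry" scoring function and the degradability flags are all invented:

```python
import math
import random

random.seed(0)

def true_binding(x):
    # Stand-in for an expensive quantum chemistry calculation of how
    # strongly a candidate binds metal ions (purely invented formula).
    return 3.0 * x[0] - 1.5 * x[1] + 0.5

# A pool of 1,000 hypothetical candidates: (descriptor vector, degradable?).
pool = [((random.random(), random.random()), random.random() > 0.5)
        for _ in range(1000)]

# "Expensive" labels are computed for a small training set only.
train = [(x, true_binding(x)) for x, _ in pool[:20]]

def predict(x, k=3):
    """Distance-weighted k-nearest-neighbour surrogate for binding strength."""
    nearest = sorted((math.dist(x, xi), yi) for xi, yi in train)[:k]
    weights = [(1.0 / (d + 1e-9), y) for d, y in nearest]
    total = sum(w for w, _ in weights)
    return sum(w * y for w, y in weights) / total

# Screen the whole pool cheaply; keep only degradable candidates
# and rank them by predicted effectiveness.
ranked = sorted(((predict(x), x) for x, degradable in pool if degradable),
                reverse=True)
best_score, best_x = ranked[0]
print(f"top degradable candidate {best_x}, predicted binding {best_score:.2f}")
```

    The design choice mirrors the speedup claimed above: the expensive calculation is run on a handful of molecules, while the cheap surrogate ranks everything else.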

    The results of the research will have a significant impact on a range of topics relevant to environmentally safe engineering and the control of metals in the environment, including computer-aided design of greener chelating agents used in detergents, treatments of heavy metal poisoning, metal extractions for soil treatments, waste remediation, handling naturally occurring radioactive materials from hydraulic fracturing sites, and water purification.

    "Chelating agents are used in such a wide range of industries, so even a small improvement can have a big impact on sustainability as a whole," says Dr. Keith.

  5. Could the flapping of a butterfly's wings in Costa Rica set off a hurricane in California? The question has been scrutinized by chaos theorists, stock-market analysts and weather forecasters for decades. For most people, this hypothetical scenario may be difficult to imagine on Earth - particularly when a real disaster strikes.

    Yet, in space, similarly small fluctuations in the solar wind as it streams toward the Earth's magnetic shield actually can affect the speed and strength of "space hurricanes," researcher Katariina Nykyri of Embry-Riddle Aeronautical University has reported.

    The study, published on September 19 in the Journal of Geophysical Research - Space Physics, offers the first detailed description of the mechanism by which solar wind fluctuations can change the properties of so-called space hurricanes, affecting how plasma is transported into the Earth's magnetic shield, or magnetosphere.

    Those "hurricanes" are formed by a phenomenon known as Kelvin-Helmholtz (KH) instability. As plasma from the Sun (solar wind) sweeps across the Earth's magnetic boundary, it can produce large vortices (about 10,000-40,000 kilometers in size) along the boundary layer, Nykyri explained.

    "The KH wave, or space hurricane, is one of the major ways that solar wind transports energy, mass and momentum into the magnetosphere," said Nykyri, a professor of physics and a researcher with the Center for Space and Atmospheric Research at Embry-Riddle's Daytona Beach, Fla., campus. "Fluctuations in solar wind affect how quickly the KH waves grow and how large they become."

    When solar wind speeds are faster, the fluctuations are more powerful, Nykyri reported, and they seed larger space hurricanes that can transport more plasma.

    Gaining deeper insights into how solar wind conditions affect space hurricanes may someday provide better space-weather prediction and set the stage for safer satellite navigation through radiation belts, Nykyri said. This is because solar wind can excite ultra-low frequency (ULF) waves by triggering KH instability, which can energize radiation belt particles.

    Space hurricanes are universal phenomena, occurring at the boundary layers of Coronal Mass Ejections - giant balls of plasma erupting from the Sun's hot atmosphere - and in the magnetospheres of Jupiter, Saturn and other planets, Nykyri noted.

    "KH waves can alter the direction and properties of Coronal Mass Ejections, which eventually affect near-Earth space weather," Nykyri explained. "For accurate space weather prediction, it is crucial to understand the detailed mechanisms that affect the growth and properties of space hurricanes."

    Furthermore, in addition to playing a role in transporting energy and mass, a recent discovery by Nykyri and her graduate student Thomas W. Moore shows that KH waves also provide an important way of heating plasma by millions of degrees Fahrenheit (Moore et al., Nature Physics, 2016), and therefore may be important for solar coronal heating. It might also be used for transport barrier generation in fusion plasmas.

    For the current research, simulations were based on seven years' worth of measurements of the amplitude and velocity of solar wind fluctuations at the edge of the magnetosphere, as captured by NASA's THEMIS (Time History of Events and Macroscale Interactions during Substorms) spacecraft.

  6. All-sky survey will be unique resource for researchers

    Astronomers have embarked on the largest observing project in the more than four-decade history of the National Science Foundation's Karl G. Jansky Very Large Array (VLA) -- a huge survey of the sky that promises a rich scientific payoff over many years.

    Over the next 7 years, the iconic array of giant dish antennas in the high New Mexico desert will make three complete scans of the sky visible from its latitude -- about 80 percent of the entire sky. The survey, called the VLA Sky Survey (VLASS), will produce the sharpest radio view ever made of such a large portion of the sky, and is expected to detect 10 million distinct radio-emitting celestial objects, about four times as many as are now known. 

    "This survey puts to work the tremendously improved capabilities of the VLA produced by the upgrade project that was completed in 2012. The result will be a unique and extremely valuable tool for frontier research over a diverse range of fields in astrophysics," said Tony Beasley, Director of the National Radio Astronomy Observatory (NRAO).

    Astronomers expect the VLASS to discover powerful cosmic explosions, such as supernovae, gamma ray bursts, and the collisions of neutron stars, that are obscured from visible-light telescopes by thick clouds of dust, or that otherwise have eluded detection. The VLA's ability to see through dust will make the survey a tool for finding a variety of structures within our own Milky Way that also are obscured by dust.

    The survey will reveal many additional examples of powerful jets of superfast particles propelled by the energy of supermassive black holes at the cores of galaxies. This will yield important new information on how such jets affect the growth of galaxies over time. The VLA's ability to measure magnetic fields will help scientists gain new understanding of the workings of individual galaxies and of the interactions of giant clusters of galaxies.

    "In addition to what we think VLASS will discover, we undoubtedly will be surprised by discoveries we aren't anticipating now. That is the lesson of scientific history, and perhaps the most exciting part of a project like this," said Claire Chandler, VLASS Project Director.

    The survey began observations on September 7. It plans to complete three scans of the sky, each separated by approximately 32 months. Data from all three scans will be combined to make sensitive radio images, while comparing images from the individual scans will allow discovery of newly appearing or short-lived objects. For the survey, the VLA will receive cosmic radio emissions at frequencies between 2 and 4 gigahertz, frequencies also used for satellite communications and microwave ovens.

    NRAO will release data products from the survey as quickly as they can be produced. Raw data, which require processing to turn into images, will be released as soon as observations are made. "Quick look" images, produced by an automated processing pipeline, typically will be available within a week of the observations. More sophisticated images, and catalogs of objects detected, will be released on timescales of months, depending on the processing time required.

    In addition, other institutions are expected to enhance the VLASS output by performing additional processing for more specialized analysis, and make those products available to the research community. The results of VLASS also will be available to students, educators, and citizen scientists.

    Completing the VLASS will require 5,500 hours of observing time. It is the third major sky survey undertaken with the VLA. From 1993-1996, the NRAO VLA Sky Survey (NVSS) used 2,932 observing hours to cover the same area of sky as VLASS, but at lower resolution. The FIRST (Faint Images of the Radio Sky at Twenty centimeters) Survey studied a smaller portion of sky in more detail, using 3,200 observing hours from 1993 to 2002.

    "The NVSS and FIRST surveys have been cited more than 4,500 times in scientific papers, and that number still is growing," said Project Scientist Mark Lacy. "That's an excellent indication of the value such surveys provide to the research community," he added.

    Since the NVSS and FIRST surveys were completed, the VLA underwent a complete technical transformation. From 2001-2012, the original electronic systems designed and built during the 1970s were replaced with state-of-the-art technology that vastly expanded the VLA's capabilities.

    "This upgrade made the VLA a completely new scientific tool. We wanted to put that tool to use to produce an all-sky survey that would benefit the entire astronomical community to the maximum extent possible," Beasley said.

    In 2013, NRAO announced that it would consider conducting a large survey, and invited astronomers from around the world to submit ideas and suggestions for the scientific and technical approaches that would best serve the needs of researchers. Ideas were also solicited during scientific meetings, and a Survey Science Group, including astronomers from a wide variety of specialties and institutions, was formed to advise NRAO on the survey's scientific priorities.

    Based on the recommendations from the scientific community, NRAO scientists and engineers devised a design for the survey. In 2016, a pilot survey, using 200 observing hours, was conducted to test and refine the survey's techniques. The Project Team underwent several design and operational readiness reviews, and finally obtained the go-ahead to begin the full survey earlier this year.

    "Astronomy fundamentally is exploring -- making images of the sky to see what's there. The VLASS is a new and powerful resource for exploration," said Steve Myers, VLASS Technical Lead.

  7. Facebook announced a major investment with CIFAR today, a result of the Institute's leadership in the field of artificial intelligence (AI). The US$2.625 million investment over five years will continue Facebook's support of CIFAR's Learning in Machines & Brains program, and will also fund a Facebook-CIFAR Chair in Artificial Intelligence at the Montreal Institute for Learning Algorithms (MILA).

    Facebook made the announcement Friday at a ceremony at McGill University in Montreal, attended by Prime Minister Justin Trudeau and CIFAR President & CEO Alan Bernstein. Facebook also announced funding for a Facebook AI Research (FAIR) Lab to be headed by Joelle Pineau, a CIFAR Senior Fellow in the Learning in Machines & Brains program, and an artificial intelligence researcher at McGill. Pineau will be joined at FAIR by CIFAR Associate Fellow Pascal Vincent, an associate professor at the University of Montreal.

    "Facebook's investment in CIFAR and in the Canadian AI community recognizes the strength of Canadian research in artificial intelligence. I'm proud of the major part that CIFAR played in helping to spark today's AI boom, and the part we continue to play in the AI sphere," said Alan Bernstein, President & CEO of CIFAR.

    Fellows in CIFAR's Learning in Machines & Brains (LMB) program are world leaders in AI research. They include Program Co-Directors Yann LeCun (New York University and Facebook) and Yoshua Bengio (Université de Montréal). In addition, Distinguished Fellow Geoffrey Hinton (University of Toronto and Google) is the founding director of LMB. Hinton is the originator of "deep learning," a technique that revolutionized the field of AI.

    Earlier this year CIFAR was selected by the Government of Canada to develop and implement the $125 million Pan-Canadian Artificial Intelligence Strategy, designed to cement Canada's status as a leader in AI research. Among other things, the strategy is establishing centres of AI excellence in Edmonton, Montreal, and Toronto-Waterloo. One of these centres is MILA, which is also receiving funding from Facebook.

    A major part of Facebook's announcement was the opening of a Montreal Facebook AI Research Lab to be directed by Pineau, who co-directs the Reasoning and Learning Lab at McGill University. The new lab is one of a total of four FAIR labs. The others are in Menlo Park, New York and Paris. In total, Facebook announced $7 million in AI support for Canadian institutions. In addition to CIFAR and the FAIR lab, funding was also announced for MILA, McGill University, and Université de Montréal.

    "CIFAR invests in top researchers who ask questions with the potential to change the world. When we backed Geoff Hinton's proposal for an artificial intelligence program in 2004, we knew that we were bringing together excellent researchers addressing important questions. More than a decade later, that investment has paid off for Canada and for the world," Dr. Bernstein said.

  8. Three scientists at Niels Bohr Institute (NBI), University of Copenhagen, have carried out extensive supercomputer simulations related to star formation. They conclude that the present idealized models are lacking when it comes to describing details in the star formation process. “Hopefully our results can also help shed more light on planet formation”, says Michael Küffmeier, astrophysicist and head of the research team.


    Our own galaxy, the Milky Way, consists of more than 100 billion stars. New stars are formed in so-called molecular clouds, where most of the gas is in the form of molecules, and is very cold. In the Milky Way there are many different varieties of molecular clouds, with for example masses ranging from a few hundred to several million times the mass of the Sun. Photo: NASA

    In order to explain the basics of star formation, one can use simple models – simple geometrical shapes that are easy to understand and relate to.

    But even when such simple models can explain the basic principles at work, they may still be lacking when it comes to quantitative details – which is exactly what three researchers from the Centre for Star and Planet Formation at NBI demonstrate in a scientific article just published in The Astrophysical Journal.

    The scientists carried out supercomputer simulations of the formation of hundreds of stars, from which nine carefully selected stars, representing various regions in space, were chosen for more detailed modeling, explains astrophysicist Michael Küffmeier, head of the project – which is also a major part of his Ph.D. dissertation.

    Küffmeier planned and carried out the research in cooperation with NBI-colleagues professor Åke Nordlund and senior lecturer Troels Haugbølle – and the simulations show that star formation is indeed heavily influenced by local environmental conditions in space, says Küffmeier: “These conditions e.g. control the size of protoplanetary disks, and the speed at which star formation takes place – and no scientific study has ever shown this before”.

    Supercomputers working around the clock

    According to the classical model, a star is formed when a prestellar core – a roundish accumulation containing approximately 99 percent gas and 1 percent dust – collapses under its own weight. Subsequently, a star is formed in the center of the collapse – followed, as a result of angular momentum, by the formation of a disk of gas and dust rotating around the star.


    “This is the star’s protoplanetary disk, and planets are thought to be formed in such disks – planet Earth being no exception”, says Michael Küffmeier.

    But how did the NBI-researchers manage to detail this model? The answer is closely linked to state-of-the-art supercomputer simulations: you feed some of the most potent supercomputers available with an almost unfathomable ‘load’ of information – and let them grind around the clock for months. And then, Michael Küffmeier says, you may be lucky enough to put even established concepts to the test:

    “We started by studying the step before the prestellar cores. And when you have a go at that via computer simulations, you will inevitably have to deal with Giant Molecular Clouds – which are regions in space dense with gas and dust; regions, where star formation takes place”.

    A very voluminous cloud

    A giant molecular cloud is called ‘giant’ for a reason – just take the giant molecular cloud the three NBI researchers studied. If you look closely at this cloud – and for computational reasons decide to examine it by ‘squeezing’ it into a cubical model, which is what the researchers did – you end up with a cube measuring 8 million times the distance between the Sun and Earth on each side. And if you carry out that multiplication, the result is a number larger than most brains can even vaguely comprehend, since the distance from the Sun to Earth is 150 million kilometers.
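    The multiplication in question can be carried out directly; a quick back-of-the-envelope sketch in Python, using only the two figures quoted in the article:

    ```python
    # Back-of-the-envelope size of the simulated cloud (figures from the article).
    AU_KM = 150_000_000   # Sun-Earth distance in kilometers
    SIDE_AU = 8_000_000   # cube side length, in Sun-Earth distances

    side_km = SIDE_AU * AU_KM
    print(f"Cube side: {side_km:.2e} km")  # 1.20e+15 km
    ```

    That is more than a thousand trillion kilometers per side, or roughly 125 light-years.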

    The NBI-researchers looked closely at nine different stars in this giant molecular cloud – “and in each case we gathered new knowledge about the formation of this particular star”, says Michael Küffmeier:

    Star formation in a giant molecular cloud. The small white dots represent stars in the supercomputer simulation.


    “Since we worked in different regions of a giant molecular cloud, the results from the stars examined revealed differences in e.g. disk formation and disk size which can be attributed to the influence exerted by local environmental conditions. In this sense we have gone beyond the classical understanding of star formation”.

    The NBI team had access to supercomputers – large numbers of individual computers linked in networks – some in Paris, and some in Copenhagen at the H.C. Ørsted Institute at the University of Copenhagen. And the machines were really put to work, says senior lecturer Troels Haugbølle, one of Michael Küffmeier's co-authors: “These calculations were so extensive that if the simulations describing the formation of just one of the stars were to be carried out on a single laptop, the machine would have to work 24/7 for the better part of 200 years”.
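    The quoted figure can be converted into compute hours, which makes the scale of the simulations a little more concrete:

    ```python
    # Converting the quoted "200 years of 24/7 laptop time" into compute hours.
    years = 200
    hours = years * 365 * 24
    print(f"{hours:,} laptop-hours per simulated star")  # 1,752,000
    ```

    Nearly 1.8 million compute hours per star explains why a networked supercomputer, rather than any single machine, was required.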

    Supported by observations

    Based on the supercomputer simulations, the three NBI-scientists have studied in particular the influence of magnetic fields and turbulence – factors that are seen to play important roles in star formation. This may, adds Michael Küffmeier, be one of the reasons why protoplanetary disks are relatively small in some regions of a giant molecular cloud:

    “We are able to see how important the environment is for the star formation process. We have thus started on the path to make realistic, quantitative models of the formation of stars and planets, and we will continue digging deeper into this. One of the things we would like to examine has to do with the fate of dust in protoplanetary disks – we want to know how dust and gas are separated, allowing planets to form in the end”.

    The NBI-scientists are pleased that their supercomputer simulations seem to be supported by telescope observations, from space and from the ground – among these, observations carried out by the powerful ALMA-telescope in Northern Chile, says Michael Küffmeier: “These are observations which qualitatively corroborate our simulations”.

    The fact that the telescope observations qualitatively corroborate the NBI supercomputer simulations means that the two sets of data do not in any significant way collide or contradict each other, explains Michael Küffmeier: "Nothing derived from the telescope observations contradicts our main hypothesis: that star formation is a direct consequence of processes happening on larger scales."


    ALMA (Atacama Large Millimeter/Submillimeter Array) in Chile can eventually contribute to an expanded understanding of planetary formation. Photo: ESA

    The scientists expect that their continued supercomputer simulations will contribute to a better understanding of planet formation – by combining knowledge gleaned from the NBI-simulations with observations carried out by ALMA as well as the extremely advanced James Webb Space Telescope scheduled for launch in October 2018.

    “The James Webb Space Telescope will be able to provide us with information regarding the atmosphere surrounding exoplanets – planets outside our solar system orbiting a star”, says Michael Küffmeier: “This, too, will help us get a better understanding of the origin of planets”.

  9. UC Riverside-led study invokes intriguing self-interacting dark matter theory

    Identical twins are similar to each other in many ways, but they have different experiences, friends, and lifestyles.

    This concept is played out on a cosmological scale by galaxies. Two galaxies that appear at first glance to be very similar and effectively identical can have inner regions rotating at very different rates - the galactic analog of twins with different lifestyles.

    A team of physicists, led by Hai-Bo Yu of the University of California, Riverside, has found a simple and viable explanation for this diversity.

    Every galaxy sits within a dark matter halo that forms the gravitational scaffolding holding it together. The distribution of dark matter in this halo can be inferred from the motion of stars and gas particles in the galaxy.

    Yu and colleagues report in Physical Review Letters that diverse galactic rotation curves – graphs of rotation speed at different distances from the center – can be naturally explained if dark matter particles are assumed to strongly collide with one another in the inner halo, close to the galaxy's center - a process called dark matter self-interaction.
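    A rotation curve in the sense used above follows from Newtonian dynamics: the circular speed at radius r depends on the mass enclosed within r. A minimal toy sketch (not the paper's SIDM model; the halo profile and normalization here are invented for illustration) shows how an isothermal-like halo, whose enclosed mass grows linearly with radius, produces the famous flat rotation curve:

    ```python
    import math

    G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

    def circular_velocity(r_kpc, enclosed_mass_msun):
        """Circular rotation speed at radius r for a spherical mass distribution."""
        return math.sqrt(G * enclosed_mass_msun / r_kpc)

    # Toy halo: enclosed mass grows linearly with radius (isothermal-like),
    # which yields a flat rotation curve -- the classic dark matter signature.
    # The 1.2e10 Msun/kpc normalization is made up for illustration.
    for r in (2, 5, 10, 20):
        m = 1.2e10 * r  # solar masses enclosed within r kpc
        print(f"r = {r:>2} kpc  v = {circular_velocity(r, m):.0f} km/s")
    ```

    Every radius gives the same speed (about 227 km/s here, a Milky Way-like value); the diversity the paper addresses is that real galaxies of similar total mass show very different inner slopes of this curve.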

    "In the prevailing dark matter theory, called Cold Dark Matter or CDM, dark matter particles are assumed to be collisionless, aside from gravity," said Yu, an assistant professor of theoretical particle physics and astrophysics, who led the research. "We invoke a different theory, the self-interacting dark matter model or SIDM, to show that dark matter self-interactions thermalize the inner halo, which ties ordinary matter and dark matter distributions together so that they behave like a collective unit. The self-interacting dark matter halo then becomes flexible enough to accommodate the observed diverse rotation curves."

    Yu explained that the dark matter collisions take place in the dense inner halo, where the luminous galaxy is located. When the particles collide, they exchange energy and thermalize. For low-luminous galaxies, the thermalization process heats up the inner dark matter particles and pushes them out of the central region, reducing the density, analogous to a popcorn machine in which kernels hit each other as they pop, causing them to fly up from the bottom of the machine. For high-luminous galaxies such as the Milky Way, thermalization pulls the particles into the deep potential well of the luminous matter and increases the dark matter density. In addition, the cosmological assembly history of halos also plays a role in generating the observed diversity.

    "Our work demonstrates that dark matter may have strong self-interactions, a radical deviation from the prevailing theory," Yu said. "It explains the observed diversity of galactic rotation curves well, while being consistent with other cosmological observations."

    Dark matter makes up about 85 percent of matter in the universe, but its nature remains largely unknown despite its unmistakable gravitational imprint on astronomical and cosmological observations. The conventional way to study dark matter is to assume that it has some additional, nongravitational interaction with visible matter that can be studied in the lab. Physicists do not know, however, if such an interaction between dark and visible matter even exists.

    Over the last decade, Yu has pioneered a new line of research based on the following premise: Setting aside whether dark matter interacts with visible matter, what happens if dark matter interacts with itself through some new dark force?

    Yu posited the new dark force would affect the dark matter distribution in each galaxy's halo. He realized that there is indeed a discrepancy between CDM and astronomical observations that could be solved if dark matter is self-interacting.

    "The compatibility of this hypothesis with observations is a major advance in the field," said Flip Tanedo, an assistant professor of theoretical particle physics at UC Riverside, who was not involved in the research. "The SIDM paradigm is a bridge between fundamental particle physics and observational astronomy. The consistency with observations is a big hint that this proposal has a chance of being correct and lays the foundation for future observational, experimental, numerical, and theoretical work. In this way, it is paving the way to new interdisciplinary research."

    SIDM was first proposed in 2000 by a pair of eminent astrophysicists. It experienced a revival in the particle physics community around 2009, aided in part by key work by Yu and collaborators.

    "This is a special time for this type of research because numerical simulations of galaxies are finally approaching a precision where they can make concrete predictions to compare the observational predictions of the self-interacting versus cold dark matter scenarios," Tanedo said. "In this way, Hai-Bo is the architect of modern self-interacting dark matter and how it merges multiple different fields: theoretical high-energy physics, experimental high-energy physics, observational astronomy, numerical simulations of astrophysics, and early universe cosmology and galaxy formation."

    The research paper was selected by Physical Review Letters as an "Editor's Suggestion" and was also featured in APS Physics.

  10. With 2 grants from the National Institutes of Health, Worcester Polytechnic Institute Professor Songbai Ji is using neuroimaging and supercomputer modeling to develop better tools to understand the mechanics of traumatic brain injuries in athletes

    As fall sports seasons get under way and concerns related to concussions in contact sports continue to grow, a Worcester Polytechnic Institute (WPI) biomedical engineering professor is developing better tools to understand the mechanics of traumatic brain injuries in athletes.

    With two grants from the National Institutes of Health, Songbai Ji is using advanced neuroimaging to develop highly specific supercomputer models of the head and brain to better diagnose concussions in real time.

    Ji, whose research integrates neuroimaging into existing brain injury research, focuses on how injuries affect functionally important neural pathways and specific areas of the brain. While there are numerous studies that essentially view the brain as a single unit to determine injury, Ji says, certain components like white matter neural tracts (tissue that helps coordinate communication between different regions of the brain) deep within the brain are more vulnerable and thus may be better indicators of injury.

    Ji is developing a sophisticated head injury computer model to produce a strain map as part of a four-year $1.5 million NIH grant, titled "Accumulated white matter fiber strain from repetitive head impacts in contact sports." Co-principal investigators include colleagues from Dartmouth College, Indiana University School of Medicine, and medical device developer Simbex. In a separate two-year, $461,545 NIH grant, titled "Model-based cumulative analysis of on-field head impacts in contact sports," Ji is working to make the model simulation run in real time.

    "Typically it would take hours to produce a detailed strain map for each impact to determine if someone has a concussion," said Ji. "But we are developing a model simulation in real time."

    Sports concussions have been a growing concern for years. According to the most current data from the Centers for Disease Control and Prevention, in 2012, nearly 330,000 children aged 19 or younger were treated in emergency rooms across the United States for sports and recreation-related injuries that included a diagnosis of concussion or traumatic brain injury.

    Ji envisions that, in the future, an athlete on the field could be wearing protective gear, such as a helmet or mouthguard, equipped with an impact sensor. When an athlete's head is struck, the sensor would record the acceleration, which would provide input to the computer model.

    Because Ji will have pre-computed a library of strain maps, athletic trainers could quickly retrieve the map matching a given blow and use it to assess injury risk.
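    The retrieval idea described above amounts to a lookup: simulate strain maps offline for a grid of impact severities, then match a live sensor reading to the nearest pre-computed case. A hypothetical sketch (the bins, names, and numbers here are invented, not from Ji's system):

    ```python
    # Hypothetical pre-computed lookup: strain maps simulated offline for a
    # grid of peak head accelerations (10g to 100g), retrieved instantly
    # from a sensor reading on the field.
    precomputed = {g: f"strain_map_{g}g" for g in range(10, 110, 10)}

    def lookup_strain_map(peak_accel_g):
        """Return the pre-computed strain map for the nearest simulated bin."""
        nearest = min(precomputed, key=lambda g: abs(g - peak_accel_g))
        return precomputed[nearest]

    print(lookup_strain_map(57.3))  # -> strain_map_60g
    ```

    The real-time simulation technique mentioned next would replace this coarse binning with an on-demand computation for the exact measured impact.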

    "But the computational cost is now too high for real-world applications," Ji said, "and that's why we are also developing a real-time simulation technique."

    Ji added that many current concussion studies are looking at acceleration magnitudes, much like a "hit count" - the number of times an athlete's head has been hit - rather than considering how many times a specific brain region experiences a certain level of strain and deformation, which is likely more related to the extent of the actual injury. Ji's lab is looking at the role such repeated straining plays in the severity of concussions.
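    The distinction between a "hit count" and region-specific strain exposure can be sketched in a few lines. The regions, strain values, and threshold below are purely illustrative, not data from the study:

    ```python
    # Contrast a raw "hit count" with counting how often each brain region
    # exceeds a strain threshold. All values are illustrative.
    impacts = [  # per-impact peak strain in two hypothetical brain regions
        {"corpus_callosum": 0.12, "brainstem": 0.04},
        {"corpus_callosum": 0.03, "brainstem": 0.02},
        {"corpus_callosum": 0.15, "brainstem": 0.11},
    ]
    THRESHOLD = 0.10  # illustrative strain level of concern

    hit_count = len(impacts)
    exceedances = {
        region: sum(1 for imp in impacts if imp[region] > THRESHOLD)
        for region in impacts[0]
    }
    print(hit_count, exceedances)  # 3 {'corpus_callosum': 2, 'brainstem': 1}
    ```

    A hit count of 3 treats all three impacts alike, while the per-region tally shows one region repeatedly strained past the threshold, which is the kind of signal Ji's lab argues is more relevant to actual injury.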

    In a recently published paper in Biomechanics and Modeling in Mechanobiology, Ji and his research associates found that, in addition to the findings about deep white matter, a rigorous cross-validation of injury prediction performance has been lacking in brain injury studies. They proposed a general framework to address this issue in future studies.

    "I am very encouraged by the research and think that we can make an impact in this crucial health area," Ji says. "My research team understands just how important it is to advance concussion research for athletes of both genders and of all ages."