
CAPTION This visualization of research by K. Nykyri et al., compiled from NASA images and MHD simulations, depicts near-Earth space with the dayside magnetosphere, magnetotail and boundary layers hosting giant Kelvin-Helmholtz waves (i.e., 'space hurricanes'). Nykyri's study in the Journal of Geophysical Research: Space Physics finds that magnetosheath (shocked solar wind) velocity fluctuations affect the growth and properties of the Kelvin-Helmholtz waves. CREDIT K. Nykyri, Embry-Riddle Aeronautical University

Could the flapping of a butterfly's wings in Costa Rica set off a hurricane in California? The question has been scrutinized by chaos theorists, stock-market analysts and weather forecasters for decades. For most people, this hypothetical scenario may be difficult to imagine on Earth - particularly when a real disaster strikes.

Yet, in space, similarly small fluctuations in the solar wind as it streams toward the Earth's magnetic shield actually can affect the speed and strength of "space hurricanes," researcher Katariina Nykyri of Embry-Riddle Aeronautical University has reported.

The study, published on September 19 in the Journal of Geophysical Research: Space Physics, offers the first detailed description of the mechanism by which solar wind fluctuations can change the properties of so-called space hurricanes, affecting how plasma is transported into the Earth's magnetic shield, or magnetosphere.

Those "hurricanes" are formed by a phenomenon known as Kelvin-Helmholtz (KH) instability. As plasma from the Sun (solar wind) sweeps across the Earth's magnetic boundary, it can produce large vortices (about 10,000-40,000 kilometers in size) along the boundary layer, Nykyri explained.

"The KH wave, or space hurricane, is one of the major ways that solar wind transports energy, mass and momentum into the magnetosphere," said Nykyri, a professor of physics and a researcher with the Center for Space and Atmospheric Research at Embry-Riddle's Daytona Beach, Fla., campus. "Fluctuations in solar wind affect how quickly the KH waves grow and how large they become." 

When solar wind speeds are faster, the fluctuations are more powerful, Nykyri reported, and they seed larger space hurricanes that can transport more plasma.

Gaining deeper insights into how solar wind conditions affect space hurricanes may someday provide better space-weather prediction and set the stage for safer satellite navigation through radiation belts, Nykyri said. This is because solar wind can excite ultra-low frequency (ULF) waves by triggering KH instability, which can energize radiation belt particles.

Space hurricanes are universal phenomena, occurring at the boundary layers of Coronal Mass Ejections - giant balls of plasma erupting from the Sun's hot atmosphere - as well as in the magnetospheres of Jupiter, Saturn and other planets, Nykyri noted.

"KH waves can alter the direction and properties of Coronal Mass Ejections, which eventually affect near-Earth space weather," Nykyri explained. "For accurate space weather prediction, it is crucial to understand the detailed mechanisms that affect the growth and properties of space hurricanes."

Furthermore, a recent discovery by Nykyri and her graduate student Thomas W. Moore shows that, in addition to transporting energy and mass, KH waves also provide an important way of heating plasma by millions of degrees Fahrenheit (Moore et al., Nature Physics, 2016), and therefore may be important for solar coronal heating. The same mechanism might also be used to generate transport barriers in fusion plasmas.

For the current research, simulations were based on seven years' worth of measurements of the amplitude and velocity of solar wind fluctuations at the edge of the magnetosphere, as captured by NASA's THEMIS (Time History of Events and Macroscale Interactions during Substorms) spacecraft.

The new VLA Sky Survey (VLASS) sharpens the view. Here is the same radio-emitting object as seen, from left to right, with the NRAO VLA Sky Survey (NVSS), the FIRST Survey, and the VLASS. The VLASS image, unlike the others, allows astronomers to positively identify the image as jets of material propelled outward from the center of a galaxy that also is seen in the visible-light Sloan Digital Sky Survey. Technical data: NVSS image at 1.4 GHz in VLA's D configuration; FIRST image at 1.4 GHz in B configuration; VLASS image at 3 GHz in B configuration. Credit: Bill Saxton, NRAO/AUI/NSF.

All-sky survey will be unique resource for researchers

Astronomers have embarked on the largest observing project in the more than four-decade history of the National Science Foundation's Karl G. Jansky Very Large Array (VLA) -- a huge survey of the sky that promises a rich scientific payoff over many years.

Over the next 7 years, the iconic array of giant dish antennas in the high New Mexico desert will make three complete scans of the sky visible from its latitude -- about 80 percent of the entire sky. The survey, called the VLA Sky Survey (VLASS), will produce the sharpest radio view ever made of such a large portion of the sky, and is expected to detect 10 million distinct radio-emitting celestial objects, about four times as many as are now known. 

"This survey puts to work the tremendously improved capabilities of the VLA produced by the upgrade project that was completed in 2012. The result will be a unique and extremely valuable tool for frontier research over a diverse range of fields in astrophysics," said Tony Beasley, Director of the National Radio Astronomy Observatory (NRAO).

Astronomers expect the VLASS to discover powerful cosmic explosions, such as supernovae, gamma ray bursts, and the collisions of neutron stars, that are obscured from visible-light telescopes by thick clouds of dust, or that otherwise have eluded detection. The VLA's ability to see through dust will make the survey a tool for finding a variety of structures within our own Milky Way that also are obscured by dust.

The survey will reveal many additional examples of powerful jets of superfast particles propelled by the energy of supermassive black holes at the cores of galaxies. This will yield important new information on how such jets affect the growth of galaxies over time. The VLA's ability to measure magnetic fields will help scientists gain new understanding of the workings of individual galaxies and of the interactions of giant clusters of galaxies.

"In addition to what we think VLASS will discover, we undoubtedly will be surprised by discoveries we aren't anticipating now. That is the lesson of scientific history, and perhaps the most exciting part of a project like this," said Claire Chandler, VLASS Project Director.

The survey began observations on September 7 and will complete three scans of the sky, each separated by approximately 32 months. Data from all three scans will be combined to make sensitive radio images, while comparing images from the individual scans will allow discovery of newly appearing or short-lived objects. For the survey, the VLA will receive cosmic radio emissions at frequencies between 2 and 4 gigahertz, frequencies also used for satellite communications and microwave ovens.

NRAO will release data products from the survey as quickly as they can be produced. Raw data, which require processing to turn into images, will be released as soon as observations are made. "Quick look" images, produced by an automated processing pipeline, typically will be available within a week of the observations. More sophisticated images, and catalogs of objects detected, will be released on timescales of months, depending on the processing time required.

In addition, other institutions are expected to enhance the VLASS output by performing additional processing for more specialized analysis, and make those products available to the research community. The results of VLASS also will be available to students, educators, and citizen scientists.

Completing the VLASS will require 5,500 hours of observing time. It is the third major sky survey undertaken with the VLA. From 1993 to 1996, the NRAO VLA Sky Survey (NVSS) used 2,932 observing hours to cover the same area of sky as VLASS, but at lower resolution. The FIRST (Faint Images of the Radio Sky at Twenty centimeters) Survey studied a smaller portion of sky in more detail, using 3,200 observing hours from 1993 to 2002.

"The NVSS and FIRST surveys have been cited more than 4,500 times in scientific papers, and that number still is growing," said Project Scientist Mark Lacy. "That's an excellent indication of the value such surveys provide to the research community," he added.

Since the NVSS and FIRST surveys were completed, the VLA underwent a complete technical transformation. From 2001-2012, the original electronic systems designed and built during the 1970s were replaced with state-of-the-art technology that vastly expanded the VLA's capabilities.

"This upgrade made the VLA a completely new scientific tool. We wanted to put that tool to use to produce an all-sky survey that would benefit the entire astronomical community to the maximum extent possible," Beasley said.

In 2013, NRAO announced that it would consider conducting a large survey, and invited astronomers from around the world to submit ideas and suggestions for the scientific and technical approaches that would best serve the needs of researchers. Ideas were also solicited during scientific meetings, and a Survey Science Group that includes astronomers from a wide variety of specialties and institutions was formed to advise NRAO on the survey's scientific priorities.

Based on the recommendations from the scientific community, NRAO scientists and engineers developed a design for the survey. In 2016, a pilot survey, using 200 observing hours, was conducted to test and refine the survey's techniques. The Project Team underwent several design and operational readiness reviews, and finally obtained the go-ahead to begin the full survey earlier this year.

"Astronomy fundamentally is exploring -- making images of the sky to see what's there. The VLASS is a new and powerful resource for exploration," said Steve Myers, VLASS Technical Lead.

Facebook announced a major investment with CIFAR today, a result of the Institute's leadership in the field of artificial intelligence (AI). The US$2.625 million investment over five years will continue Facebook's support of CIFAR's Learning in Machines & Brains program, and will also fund a Facebook-CIFAR Chair in Artificial Intelligence at the Montreal Institute for Learning Algorithms (MILA).

Facebook made the announcement Friday at a ceremony at McGill University in Montreal, attended by Prime Minister Justin Trudeau and CIFAR President & CEO Alan Bernstein. Facebook also announced funding for a Facebook AI Research (FAIR) Lab to be headed by Joelle Pineau, a CIFAR Senior Fellow in the Learning in Machines & Brains program, and an artificial intelligence researcher at McGill. Pineau will be joined at FAIR by CIFAR Associate Fellow Pascal Vincent, an associate professor at the University of Montreal. 

"Facebook's investment in CIFAR and in the Canadian AI community recognizes the strength of Canadian research in artificial intelligence. I'm proud of the major part that CIFAR played in helping to spark today's AI boom, and the part we continue to play in the AI sphere," said Alan Bernstein, President & CEO of CIFAR.

Fellows in CIFAR's Learning in Machines & Brains (LMB) program are world leaders in AI research. They include Program Co-Directors Yann LeCun (New York University and Facebook) and Yoshua Bengio (Université de Montréal). In addition, Distinguished Fellow Geoffrey Hinton (University of Toronto and Google) is the founding director of LMB. Hinton is the originator of "deep learning," a technique that revolutionized the field of AI.

Earlier this year CIFAR was selected by the Government of Canada to develop and implement the $125 million Pan-Canadian Artificial Intelligence Strategy, designed to cement Canada's status as a leader in AI research. Among other things, the strategy is establishing centres of AI excellence in Edmonton, Montreal, and Toronto-Waterloo. One of these centres is MILA, which is also receiving funding from Facebook.

A major part of Facebook's announcement was the opening of a Montreal Facebook AI Research Lab to be directed by Pineau, who co-directs the Reasoning and Learning Lab at McGill University. The new lab is one of a total of four FAIR labs. The others are in Menlo Park, New York and Paris. In total, Facebook announced $7 million in AI support for Canadian institutions. In addition to CIFAR and the FAIR lab, funding was also announced for MILA, McGill University, and Université de Montréal.

"CIFAR invests in top researchers who ask questions with the potential to change the world. When we backed Geoff Hinton's proposal for an artificial intelligence program in 2004, we knew that we were bringing together excellent researchers addressing important questions. More than a decade later, that investment has paid off for Canada and for the world," Dr. Bernstein said.

Three scientists at Niels Bohr Institute (NBI), University of Copenhagen, have carried out extensive supercomputer simulations related to star formation. They conclude that the present idealized models are lacking when it comes to describing details in the star formation process. “Hopefully our results can also help shed more light on planet formation”, says Michael Küffmeier, astrophysicist and head of the research team.


Our own galaxy, the Milky Way, consists of more than 100 billion stars. New stars are formed in so-called molecular clouds, where most of the gas is in the form of molecules, and is very cold. In the Milky Way there are many different varieties of molecular clouds, with for example masses ranging from a few hundred to several million times the mass of the Sun. Photo: NASA

In order to explain the basics of star formation, one can use simple models – simple geometrical shapes that are easy to understand and relate to.

Yet even when such simple models can explain the basic principles at work, they may still be lacking when it comes to quantitative details - which is exactly what three researchers from the Centre for Star and Planet Formation at NBI demonstrate in a scientific article just published in The Astrophysical Journal.

The scientists carried out supercomputer simulations of the formation of hundreds of stars, from which nine carefully selected stars, representing various regions in space, were chosen for more detailed modeling, explains astrophysicist Michael Küffmeier, head of the project – which is also a major part of his Ph.D. dissertation.

Küffmeier planned and carried out the research in cooperation with NBI-colleagues professor Åke Nordlund and senior lecturer Troels Haugbølle – and the simulations show that star formation is indeed heavily influenced by local environmental conditions in space, says Küffmeier: “These conditions e.g. control the size of protoplanetary disks, and the speed at which star formation takes place – and no scientific study has ever shown this before”.

Supercomputers working around the clock

According to the classical model, a star is formed when a prestellar core – a roundish accumulation containing approximately 99 percent gas and 1 percent dust – collapses under its own weight. Subsequently, a star is formed in the center of the collapse – followed, as a result of angular momentum, by the formation of a disk of gas and dust rotating around the star.
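
For readers who want the quantitative version of a core collapsing under its own weight: in the standard textbook picture, collapse sets in once the core's mass exceeds the Jeans mass, which depends only on its temperature and density (this is general background, not a result of the NBI study):

    M_J \simeq \left( \frac{5 k_B T}{G \mu m_H} \right)^{3/2} \left( \frac{3}{4 \pi \rho} \right)^{1/2}

Colder and denser cores have a lower threshold and so collapse more readily, which is why star formation is concentrated in the coldest, densest parts of molecular clouds.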


“This is the star’s protoplanetary disk, and planets are thought to be formed in such disks – planet Earth being no exception”, says Michael Küffmeier.

But how did the NBI-researchers manage to go beyond this model? The answer is closely linked to state-of-the-art supercomputer simulations: you feed some of the most potent supercomputers available an almost unfathomable ‘load’ of information - and let them grind around the clock for months. And then, Michael Küffmeier says, you may be lucky enough to put even established concepts to the test:

“We started by studying the step before the prestellar cores. And when you have a go at that via computer simulations, you will inevitably have to deal with Giant Molecular Clouds – which are regions in space dense with gas and dust; regions, where star formation takes place”.

A very voluminous cloud

A giant molecular cloud is called ‘giant’ for a reason – just take the giant molecular cloud which the three NBI-researchers studied. If you look closely at this cloud – and for computational reasons decide to examine it by ‘squeezing’ it into a cubical model, which is what the researchers did – you end up with a cube measuring 8 million times the distance between the Sun and Earth on each side. And if you carry out that multiplication, the end result will be more figures than most brains can even vaguely comprehend, since the distance from the Sun to Earth is 150 million kilometers.
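
For readers who do want to carry out that multiplication, here is a minimal sketch in Python using the round numbers quoted above (150 million kilometers per Sun-Earth distance, 8 million such distances per side):

    # Rough arithmetic for the simulated cube, using the article's round numbers.
    AU_KM = 150e6             # Sun-Earth distance in kilometers (approximate)
    SIDE_AU = 8e6             # cube side length, in Sun-Earth distances
    LIGHT_YEAR_KM = 9.46e12   # kilometers per light-year

    side_km = SIDE_AU * AU_KM
    print(f"Side length: {side_km:.1e} km")                           # ~1.2e+15 km
    print(f"Side length: {side_km / LIGHT_YEAR_KM:.0f} light-years")  # ~127 light-years

So the cube spans roughly 1.2 quadrillion kilometers, or on the order of 125 light-years, on each side.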

The NBI-researchers looked closely at nine different stars in this giant molecular cloud – “and in each case we gathered new knowledge about the formation of this particular star”, says Michael Küffmeier:

Star formation in a giant molecular cloud. The small white dots represent stars in the supercomputer simulation.


“Since we worked in different regions of a giant molecular cloud, the results from the stars examined revealed differences in e.g. disk formation and disk size which can be attributed to the influence exerted by local environmental conditions. In this sense we have gone beyond the classical understanding of star formation”.

The NBI-team had access to supercomputers – large numbers of single computers linked in networks – some in Paris, and some in Copenhagen at the H.C. Ørsted Institute at University of Copenhagen. And the machines were really put to work, says senior lecturer Troels Haugbølle, one of Michael Küffmeier's co-authors: “These calculations were so extensive that if you imagine that the simulations describing the formation of just one of the stars were to be carried out on a single laptop computer, the machine would have to work 24/7 for the better part of 200 years”.

Supported by observations

Based on the supercomputer simulations, the three NBI-scientists have studied in particular the influence of magnetic fields and turbulence – factors that are seen to play important roles in star formation. This may, adds Michael Küffmeier, be one of the reasons why protoplanetary disks are relatively small in some regions of a giant molecular cloud:

“We are able to see how important the environment is for the star formation process. We have thus started on the path to making realistic, quantitative models of the formation of stars and planets, and we will continue digging deeper into this. One of the things we would like to examine has to do with the fate of dust in protoplanetary disks – we want to know how dust and gas are separated, allowing in the end planets to form”.

The NBI-scientists are pleased that their supercomputer simulations seem to be supported by telescope observations, from space and from the ground – among these, observations carried out by the powerful ALMA-telescope in Northern Chile, says Michael Küffmeier: “These are observations which qualitatively corroborate our simulations”.

The fact that the telescope observations qualitatively corroborate the NBI supercomputer simulations means that the two sets of data do not in any significant way collide or contradict each other, explains Michael Küffmeier: "Nothing derived from the telescope observations contradicts our main hypothesis: that star formation is a direct consequence of processes happening on larger scales."

alma1280 f8994

ALMA (Atacama Large Millimeter / Submillimeter Array) in Chile can eventually contribute to an expanded understanding of planetary formation. Photo: ESA

The scientists expect that their continued supercomputer simulations will contribute to a better understanding of planet formation – by combining knowledge gleaned from the NBI-simulations with observations carried out by ALMA as well as the extremely advanced James Webb Space Telescope scheduled for launch in October 2018.

“The James Webb Space Telescope will be able to provide us with information regarding the atmosphere surrounding exoplanets – planets outside our solar system orbiting a star”, says Michael Küffmeier: “This, too, will help us get a better understanding of the origin of planets”.

CAPTION Hai-Bo Yu is an assistant professor of theoretical particle physics and astrophysics at UC Riverside. CREDIT I. Pittalwala, UC Riverside.

UC Riverside-led study invokes intriguing self-interacting dark matter theory

Identical twins are similar to each other in many ways, but they have different experiences, friends, and lifestyles.

This concept is played out on a cosmological scale by galaxies. Two galaxies that appear at first glance to be very similar and effectively identical can have inner regions rotating at very different rates - the galactic analog of twins with different lifestyles.

A team of physicists, led by Hai-Bo Yu of the University of California, Riverside, has found a simple and viable explanation for this diversity.

Every galaxy sits within a dark matter halo that forms the gravitational scaffolding holding it together. The distribution of dark matter in this halo can be inferred from the motion of stars and gas particles in the galaxy.

Yu and colleagues report in Physical Review Letters that diverse galactic-rotation curves, a graph of rotation speeds at different distances from the center, can be naturally explained if dark matter particles are assumed to strongly collide with one another in the inner halo, close to the galaxy's center - a process called dark matter self-interaction.
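
For context, a rotation curve traces the total mass enclosed within each radius, dark matter included; in the simplest spherically symmetric picture the circular speed follows directly from Newtonian gravity (a textbook relation, not a result of the new study):

    v_c(r) = \sqrt{ \frac{G \, M(<r)}{r} }

where M(<r) is the total mass of stars, gas and dark matter inside radius r. Two galaxies with similar luminous matter but different inner dark matter distributions therefore trace out different rotation curves.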

"In the prevailing dark matter theory, called Cold Dark Matter or CDM, dark matter particles are assumed to be collisionless, aside from gravity," said Yu, an assistant professor of theoretical particle physics and astrophysics, who led the research. "We invoke a different theory, the self-interacting dark matter model or SIDM, to show that dark matter self-interactions thermalize the inner halo, which ties ordinary matter and dark matter distributions together so that they behave like a collective unit. The self-interacting dark matter halo then becomes flexible enough to accommodate the observed diverse rotation curves."

Yu explained that the dark matter collisions take place in the dense inner halo, where the luminous galaxy is located. When the particles collide, they exchange energy and thermalize. For low-luminosity galaxies, the thermalization process heats up the inner dark matter particles and pushes them out of the central region, reducing the density, analogous to a popcorn machine in which kernels hit each other as they pop, causing them to fly up from the bottom of the machine. For high-luminosity galaxies such as the Milky Way, thermalization pulls the particles into the deep potential well of the luminous matter and increases the dark matter density. In addition, the cosmological assembly history of halos also plays a role in generating the observed diversity.

"Our work demonstrates that dark matter may have strong self-interactions, a radical deviation from the prevailing theory," Yu said. "It well explains the observed diversity of galactic rotating curves, while being consistent with other cosmological observations."

Dark matter makes up about 85 percent of matter in the universe, but its nature remains largely unknown despite its unmistakable gravitational imprint on astronomical and cosmological observations. The conventional way to study dark matter is to assume that it has some additional, nongravitational interaction with visible matter that can be studied in the lab. Physicists do not know, however, if such an interaction between dark and visible matter even exists.

Over the last decade, Yu has pioneered a new line of research based on the following premise: Setting aside whether dark matter interacts with visible matter, what happens if dark matter interacts with itself through some new dark force?

Yu posited the new dark force would affect the dark matter distribution in each galaxy's halo. He realized that there is indeed a discrepancy between CDM and astronomical observations that could be solved if dark matter is self-interacting.

"The compatibility of this hypothesis with observations is a major advance in the field," said Flip Tanedo, an assistant professor of theoretical particle physics at UC Riverside, who was not involved in the research. "The SIDM paradigm is a bridge between fundamental particle physics and observational astronomy. The consistency with observations is a big hint that this proposal has a chance of being correct and lays the foundation for future observational, experimental, numerical, and theoretical work. In this way, it is paving the way to new interdisciplinary research."

SIDM was first proposed in 2000 by a pair of eminent astrophysicists. It experienced a revival in the particle physics community around 2009, aided in part by key work by Yu and collaborators.

"This is a special time for this type of research because numerical simulations of galaxies are finally approaching a precision where they can make concrete predictions to compare the observational predictions of the self-interacting versus cold dark matter scenarios," Tanedo said. "In this way, Hai-Bo is the architect of modern self-interacting dark matter and how it merges multiple different fields: theoretical high-energy physics, experimental high-energy physics, observational astronomy, numerical simulations of astrophysics, and early universe cosmology and galaxy formation."

The research paper was selected by Physical Review Letters as an "Editors' Suggestion" and was also featured in APS Physics.

Songbai Ji, associate professor of biomedical engineering at Worcester Polytechnic Institute, is researching how injuries affect functionally important neural pathways and specific areas of the brain.

With two grants from the National Institutes of Health, Worcester Polytechnic Institute Professor Songbai Ji is using neuroimaging and supercomputer modeling to develop better tools to understand the mechanics of traumatic brain injuries in athletes

As fall sports seasons get under way and concerns related to concussions in contact sports continue to grow, a Worcester Polytechnic Institute (WPI) biomedical engineering professor is developing better tools to understand the mechanics of traumatic brain injuries in athletes.

With two grants from the National Institutes of Health, Songbai Ji is using advanced neuroimaging to develop highly specific supercomputer models of the head and brain to better diagnose concussions in real time.

Ji, whose research integrates neuroimaging into existing brain injury research, focuses on how injuries affect functionally important neural pathways and specific areas of the brain. While there are numerous studies that essentially view the brain as a single unit to determine injury, Ji says, certain components like white matter neural tracts (tissue that helps coordinate communication between different regions of the brain) deep within the brain are more vulnerable and thus may be better indicators of injury.

Ji is developing a sophisticated head injury computer model to produce a strain map as part of a four-year, $1.5 million NIH grant, titled "Accumulated white matter fiber strain from repetitive head impacts in contact sports." Co-principal investigators include colleagues from Dartmouth College, Indiana University School of Medicine, and medical device developer Simbex. In a separate two-year, $461,545 NIH grant, titled "Model-based cumulative analysis of on-field head impacts in contact sports," Ji is working to make the model simulation run in real time.

"Typically it would take hours to produce a detailed strain map for each impact to determine if someone has a concussion," said Ji. "But we are developing a model simulation in real time." 

Sports concussions have been a growing concern for years. According to the most current data from the Centers for Disease Control and Prevention, in 2012, nearly 330,000 children aged 19 or younger were treated in emergency rooms across the United States for sports and recreation-related injuries that included a diagnosis of concussion or traumatic brain injury.

Ji envisions that, in the future, an athlete on the field could be wearing protective gear, such as a helmet or mouthguard, equipped with an impact sensor. When an athlete's head is struck, the sensor would record the acceleration, which would provide input to the computer model.

Because Ji will have pre-computed strain maps for a wide range of impacts, athletic trainers could quickly retrieve the strain map corresponding to a given blow and use it to assess injury risk.
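
A minimal sketch of how such a pre-computed lookup might work on the sideline, assuming a hypothetical table keyed by the peak linear and rotational accelerations a sensor reports (the table, names and numbers below are illustrative, not taken from Ji's model):

    import math

    # Hypothetical pre-computed library: (peak linear accel [g], peak rotational accel [rad/s^2])
    # mapped to the peak white-matter strain previously simulated for that kind of impact.
    PRECOMPUTED_STRAIN = {
        (30.0, 1500.0): 0.08,
        (60.0, 3000.0): 0.15,
        (90.0, 4500.0): 0.24,
    }

    def nearest_strain(lin_g, rot_rads2):
        """Return the strain of the closest pre-computed impact (nearest-neighbor lookup)."""
        def dist(key):
            # Normalize both axes so neither dominates the distance.
            return math.hypot((key[0] - lin_g) / 100.0, (key[1] - rot_rads2) / 5000.0)
        best = min(PRECOMPUTED_STRAIN, key=dist)
        return PRECOMPUTED_STRAIN[best]

    # Example: a sensor records a 72 g, 3400 rad/s^2 impact; the trainer retrieves the nearest map.
    print(nearest_strain(72.0, 3400.0))   # -> 0.15 with this toy table

The point of the sketch is only that the expensive simulation happens ahead of time, so the sideline query becomes a fast lookup rather than an hours-long computation.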

"But the computational cost is now too high for real-world applications," Ji said, "and that's why we are also developing a real-time simulation technique."

Ji added that many current concussion studies are looking at acceleration magnitudes, much like a "hit count" - the number of times an athlete's head has been hit - rather than considering how many times a specific brain region experiences a certain level of strain and deformation, which is likely more related to the extent of the actual injury. Ji's lab is looking at the role such repeated straining plays in the severity of concussions.

In a recently published paper in Biomechanics and Modeling in Mechanobiology, Ji and his research associates reported that, in addition to their findings about deep white matter, a rigorous cross-validation of injury prediction performance has been lacking in brain injury studies. They proposed a general framework to address this issue in future studies.

"I am very encouraged by the research and think that we can make an impact in this crucial health area," Ji says. "My research team understands just how important it is to advance concussion research for athletes of both genders and of all ages."

A model highlighting the human body's two distinct B-cell clonal networks. Drexel researchers and colleagues have discovered that one network spans the blood, bone marrow, spleen and lung, while the other is found in the gastrointestinal tract. (Credit: Alexander H. Farley)

Drexel University computational biologists have helped to create the first “anatomic atlas” of B-cell clone lineages, their properties and tissue connections.

The researchers revealed that the population of B-cells — highly specialized immune cells that generate the antibodies that protect us from disease — are split into two broad networks within the body. One spans the blood, bone marrow, spleen and lung, while the other is found in the gastrointestinal tract. This atlas of B-cell tissue distribution, published in Nature Biotechnology, will be an essential resource for researchers and clinicians studying infectious diseases, cancer, the microbiome and vaccine responses.

“We have now shown that the geography of the human body is important for how the immune system works, with B-cells operating differently depending on where they are located,” Hershberg said.

To discover how B-cell clones are distributed in the body, the researchers sequenced 933,427 lineages and mapped them to eight different anatomic compartments in six human organ donors.

The main challenge the researchers faced was to prove that if they did not find B-cells in a certain region of the body, the cells were truly absent — the same way an ecologist must prove the extinction of an animal species, according to study co-corresponding author Uri Hershberg, PhD, an associate professor in the School of Biomedical Engineering, Science and Health Systems at Drexel.

“B-cell clones are highly diverse. We have over 100 billion different B-cell types at any time. So even at 1 million B-cells sequenced, we definitely did not sample all of them,” Hershberg said. “That is why our group used statistical tools from ecological research to estimate how many clone types were missing from our survey. When considering large experienced B-cell clones, we had sampled enough to prove with high confidence that they are segregated to the blood or the gut.”
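
As an illustration of the kind of ecological estimator the quote refers to, here is a minimal sketch of the classic Chao1 richness estimate, which infers how many "species" (here, clone types) a survey missed from how many were observed exactly once or twice; the article does not name the specific estimator the team used, so treat this as a generic example:

    from collections import Counter

    def chao1(abundances):
        """Chao1 lower-bound estimate of total richness from observed clone abundances."""
        observed = len(abundances)
        f1 = sum(1 for c in abundances if c == 1)   # singletons: clones seen exactly once
        f2 = sum(1 for c in abundances if c == 2)   # doubletons: clones seen exactly twice
        if f2 == 0:
            return observed + f1 * (f1 - 1) / 2.0   # bias-corrected form when no doubletons
        return observed + f1 * f1 / (2.0 * f2)

    # Toy example: sequencing counts for five clones in one tissue sample.
    sample = Counter({"cloneA": 40, "cloneB": 7, "cloneC": 2, "cloneD": 1, "cloneE": 1})
    print(chao1(list(sample.values())))   # -> 7.0: five observed clones plus an estimated two unseen

Applied to large, experienced clones, this kind of estimate is what allows the researchers to state with confidence that a clone absent from one compartment really is segregated to the other.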

B-cells are key players in the body’s protective immunity. When your body is infected by a particular pathogen, only the specific B-cells that recognize the invader will respond. These specialized cells then quickly multiply and create an infantry of identical sister cells of a specific clonotype to fight the infection. Special types of B-cells “remember” the invader, making you immune to a second attack.

The tissue distribution and trafficking of these cells from the same clone are essential to understand, since these processes influence how infections are controlled throughout the body. Animal studies have indicated that the tissue localization of B-cells and plasma cells is important for protective immunity and the homeostasis of bacterial microflora. However, unlike lab mice, humans live for decades in diverse environments and are exposed to many different pathogens.

To understand how B-cell clones are distributed in the human body, the scientists needed both donated human organs, as well as new data analysis and visualization tools developed by the Hershberg lab at Drexel.

Co-author Donna Farber from Columbia University directed the organ donor tissue program responsible for acquiring the necessary tissue samples from six individuals. Researchers at the University of Pennsylvania’s Perelman School of Medicine, led by Eline T. Luning Prak, traced and sequenced the B-cell populations in eight tissues across these individuals. In one case, they collected and sequenced B-cells from 257 different samples, which resulted in over 23 million different sequences.

At Drexel, Hershberg and his team of biomedical engineers generated and organized the sequences into comprehensive data sets, creating figures that showed the diversity, similarity and networks of large clones. They then built a database system, which can now be used by other scientists to standardize how this type of data is analyzed in future studies.

The researchers’ findings may help identify tissue-specific markers for B-cells, Hershberg said.

“Our findings suggest that there are certain kinds of B-cells in the gut that are not found in the blood, which is important because diseases tend to happen at the interfaces of your body — the lungs, the gut, the throat. However, those organs are more difficult to test than the blood,” he added. “If we can identify what is ‘special’ about the B-cell clones in the gut, then we can look for those same markers in the blood.”

PhD candidate Bochao Zhang, postdoctoral researcher Gregory W. Schwartz and software engineer Aaron M. Rosenfield also contributed to this research.

CAPTION Artist's representation of the quantum memory device. CREDIT Ella Maru Studio

Smallest-yet optical quantum memory device is a storage medium for optical quantum networks with the potential to be scaled up for commercial use

For the first time, an international team led by engineers at Caltech has developed a computer chip with nanoscale optical quantum memory.

Quantum memory stores information in a similar fashion to the way traditional computer memory does, but on individual quantum particles--in this case, photons of light. This allows it to take advantage of the peculiar features of quantum mechanics (such as superposition, in which a quantum element can exist in two distinct states simultaneously) to store data more efficiently and securely.

"Such a device is an essential component for the future development of optical quantum networks that could be used to transmit quantum information," says Andrei Faraon (BS '04), assistant professor of applied physics and materials science in the Division of Engineering and Applied Science at Caltech, and the corresponding author of a paper describing the new chip.

The study appeared online in the journal Science on August 31, ahead of its print publication.

"This technology not only leads to extreme miniaturization of quantum memory devices, it also enables better control of the interactions between individual photons and atoms," says Tian Zhong, lead author of the study and a Caltech postdoctoral scholar. Zhong is also an acting assistant professor of molecular engineering at the University of Chicago, where he will set up a laboratory to develop quantum photonic technologies in March 2018.

The use of individual photons to store and transmit data has long been a goal of engineers and physicists because of their potential to carry information reliably and securely. Because photons lack charge and mass, they can be transmitted across a fiber optic network with minimal interactions with other particles.

The new quantum memory chip is analogous to a traditional memory chip in a computer. Both store information in a binary code. With traditional memory, information is stored by flipping billions of tiny electronic switches either on or off, representing either a 1 or a 0. That 1 or 0 is known as a bit. By contrast, quantum memory stores information via the quantum properties of individual elementary particles (in this case, a light particle). A fundamental characteristic of those quantum properties--which include polarization and orbital angular momentum--is that they can exist in multiple states at the same time. This means that a quantum bit (known as a qubit) can represent a 1 and a 0 at the same time.
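
In standard notation, a qubit carried by a single photon is written as a superposition of the two basis states, which is the precise sense in which it "represents a 1 and a 0 at the same time" (this is general quantum mechanics, not a detail specific to the Caltech chip):

    |\psi\rangle = \alpha \, |0\rangle + \beta \, |1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

Measuring the qubit yields 0 with probability |\alpha|^2 and 1 with probability |\beta|^2, so the pair of amplitudes carries richer information than a single classical bit.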

To store photons, Faraon's team created memory modules using optical cavities made from crystals doped with rare-earth ions. Each memory module is like a miniature racetrack, measuring just 700 nanometers wide by 15 microns long--on the scale of a red blood cell. Each module was cooled to about 0.5 Kelvin--just above Absolute Zero (0 Kelvin, or -273.15 Celsius)--and then a heavily filtered laser pumped single photons into the modules. Each photon was absorbed efficiently by the rare-earth ions with the help of the cavity.

The photons were released 75 nanoseconds later, and checked to see whether they had faithfully retained the information recorded on them. Ninety-seven percent of the time, they had, Faraon says.

Next, the team plans to extend the time that the memory can store information, as well as its efficiency. To create a viable quantum network that sends information over hundreds of kilometers, the memory will need to accurately store data for at least one millisecond. The team also plans to work on ways to integrate the quantum memory into more complex circuits, taking the first steps toward deploying this technology in quantum networks.

An increasing global reliance on—and demand for—heightened security in public and private settings calls for optimal sensor technology. Public places, such as shopping malls, banks, transportation hubs, museums, and parking lots, frequently benefit from cameras and motion detectors, which identify suspicious and unwelcome activity. However, placing security sensors to optimize resource management and system performance while simultaneously protecting people and products is a tricky challenge.

Researchers have conducted many studies on sensor placement and utilized multiple techniques—including graph-based approaches, computational geometry, and Bayesian methods—to generate setups of varying success. But despite past efforts, this optimization problem remains complicated. In a paper publishing today in the SIAM Journal on Scientific Computing, Sung Ha Kang, Seong Jun Kim, and Haomin Zhou propose a computational level set method to optimally position a sensor-based security system for maximum surveillance of a complex environment. “In optimal sensor positioning, the covered and non-covered regions can be accurately classified using the level set, and the dynamics of the coverage with respect to a sensor position can be derived and tracked conveniently,” Kim said. “Over the years, the level set method has proven to be a robust numerical technique for this purpose.”

The authors begin by identifying the ongoing challenges of effective sensor optimization, including high demand for computational resources. Obstacles obstructing sensor view and range are frequently of arbitrary shape, making their positions difficult to locate. Additionally, maximizing coverage area is a costly problem of infinite dimensions, and finding the global optimal solution often becomes computationally intractable. “Many previous works are solved by combinatorial approaches, while our setup is more continuous,” Kang said. “This offers more flexibility in handling complicated regions and different configurations, such as limited viewing range and directions.”

Kang, Kim, and Zhou combine and modify existing algorithms to yield more accurate sensory constraints from a practical viewpoint. While past studies have assumed that sensors have an infinite coverage range and/or a 360-degree viewing angle, the authors extend existing formulations to acknowledge the finite range, limited viewing angle, and nonzero failure rate of realistic sensors. “Sensors, regardless of how well they are manufactured, can fail to acquire targeted information,” Zhou said. “Modeling those constraints effectively is crucial when one wants to solve the practical sensor positioning problem. In general, those constraints make the problem harder to solve — they naturally demand sophisticated computational algorithms.”

Their model employs a level set formulation, a flexible conceptual framework often used in the numerical analysis of shapes and spaces. This mechanism offers a number of advantages. “Level sets conveniently represent the visible and invisible regions as well as obstacles of arbitrary shape, and handle topological changes in the regions automatically,” Zhou said. “In addition, the extensive literature on level set methods provides solid theoretical foundation as well as abundant computation techniques when it comes to implementation.” The authors solve a system of ordinary differential equations (ODEs), then convert the ODEs to stochastic differential equations via a global optimization strategy called intermittent diffusion. These steps yield the optimal viewing directions and locations of all the sensors, as well as the largest possible surveillance region – the global optimum. “Without being limited to polygonal environments that are typically assumed in sensor positioning, like combinatorial approaches, our method can be applied to more general setups and approximate a globally optimal position due to the level set framework and intermittent diffusion,” Kim said.
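
A minimal sketch of the intermittent-diffusion idea, applied here to a toy one-dimensional "coverage" objective rather than the authors' level set formulation: deterministic gradient ascent (the ODE phase) is interleaved with short bursts of added noise (the SDE phase), so the search can escape locally optimal sensor placements. The objective, step sizes and noise schedule below are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def coverage(x):
        """Toy multi-modal 'coverage' objective for one sensor position x in [0, 10]."""
        return np.sin(3 * x) + 0.5 * np.sin(7 * x) + 0.1 * x

    def grad(f, x, h=1e-5):
        return (f(x + h) - f(x - h)) / (2 * h)

    def intermittent_diffusion(f, x0, n_cycles=20, ode_steps=200, sde_steps=50,
                               dt=0.01, noise=2.0):
        """Alternate gradient ascent (ODE phase) with noisy diffusion (SDE phase),
        keeping the best position found across all cycles."""
        x, best_x = x0, x0
        for _ in range(n_cycles):
            for _ in range(ode_steps):        # ODE phase: follow the coverage gradient
                x = np.clip(x + dt * grad(f, x), 0.0, 10.0)
            if f(x) > f(best_x):
                best_x = x
            for _ in range(sde_steps):        # SDE phase: add noise to hop out of local optima
                x = np.clip(x + dt * grad(f, x)
                            + noise * np.sqrt(dt) * rng.standard_normal(), 0.0, 10.0)
        return best_x

    print(intermittent_diffusion(coverage, x0=0.5))   # best single-sensor position found

In the paper, the same alternation runs over the positions and viewing directions of all sensors at once, with the coverage objective computed from the level set representation of the visible region.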

By acknowledging and accounting for finite range, limited viewing angle, and nonzero failure rate, Kang, Kim, and Zhou create a unique sensor optimization model. “To the best of our knowledge, viewing sensor placement problems from a probabilistic perspective in the level set framework is novel,” Zhou said. “Yet there is room to further improve the computational complexity. We theoretically analyzed the basic situation in the paper, but more needs to be done to better understand the probability issues related to the sensor positioning problem.”

Nevertheless, the authors are pleased with the implications of their current computational method, which could improve surveillance at a wide range of monitored areas, from neighborhood gas stations to mall parking lots. “We hope that our sensor positioning approaches can be a cornerstone to directly improve the performance of surveillance systems as well as the efficiency of allocated monitoring resources,” Kim said.

Source article: Optimal Sensor Positioning: A Probability Perspective Study. SIAM Journal on Scientific Computing, 39(5), 759-777.

Sung Ha Kang is an associate professor, Seong Jun Kim is a postdoc, and Haomin Zhou is a professor, all in the School of Mathematics at the Georgia Institute of Technology.

  1. Purdue's Niyogi develops big datasets, translates findings into tools to help day-to-day weather forecasting
  2. UCLA physicists produce new supercomputer simulations of black holes from the very early universe
  3. NJIT Oil spill expert assesses use of deep-sea dispersants in Deepwater Horizon cleanup
  4. Bell Prize goes to scientists who proved 'spooky' quantum entanglement is real
  5. New supercomputational model of chemical building blocks may help explain the origins of life
  6. Rutgers' Massa makes supercomputer model to reveal details of declining lung function
  7. Purdue enrollment in computer science doubles in five years
  8. Purdue's Boltasseva explores new chapter of physics
  9. UF, Army develop supercomputer model for lighter armor
  10. UC Santa Cruz's Morozova fights kids' cancer using supercomputers under the redwoods
  11. WSU supercomputing approaches human skill for first time in mapping brain
  12. Swiss researchers develop model of human psychology
  13. 'Organismic learning' mimics some aspects of human thought
  14. New Machine Learning program shows promise for early Alzheimer's diagnosis
  15. Disney's machine learning reveals the evolution of language
  16. Osinski's research into ultrafast laser technology could increase network speeds tenfold
  17. NASA Goddard uses ENLIL model to predict solar storm impact on instruments, spacecraft
  18. Brazilian VirtualCAE helps industry to design lighter, more efficient parts
  19. Boss extends his models of our solar system's shocking origin story
  20. U.S. Army Research Lab awards ICF $93 million cybersecurity services contract
