Numerous genetic variants associated with risk for schizophrenia have been identified. However, little is known about how these genes have their effects in the brain.

A proof-of-concept study published in the inaugural issue of the journal Biological Psychiatry: Cognitive Neuroscience and Neuroimaging proposes a computational model for measuring the cumulative impact of multiple genetic variants on the function of individual neurons.

Authors of the study focused on schizophrenia-linked gene variations that are known to influence cellular structures such as ion channels and calcium transporters. Any changes to these structures would have an impact on the excitability of individual neurons. Using computational analysis, the investigators were able to predict how alterations in these structures would change the behavior of neuronal cells.
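The flavor of such a single-cell excitability simulation can be sketched with a leaky integrate-and-fire neuron. This is a deliberately simplified stand-in for the detailed conductance-based models studies like this actually use, and every parameter value below is an illustrative assumption rather than one taken from the paper:

```python
# Toy leaky integrate-and-fire neuron: a highly simplified stand-in for
# the detailed conductance-based models used in studies like this one.
# All parameter values are illustrative assumptions, not the paper's.

def count_spikes(g_leak, i_input=2.0, dt=0.1, t_max=500.0):
    """Euler-integrate C dV/dt = -g_leak*(V - V_rest) + I and count spikes."""
    v_rest, v_thresh, v_reset, c_m = -65.0, -50.0, -65.0, 1.0
    v, spikes, t = v_rest, 0, 0.0
    while t < t_max:
        v += dt * (-g_leak * (v - v_rest) + i_input) / c_m
        if v >= v_thresh:      # threshold crossed: record a spike, reset
            spikes += 1
            v = v_reset
        t += dt
    return spikes

# Lowering the leak conductance (loosely analogous to a gene variant that
# reduces a resting membrane conductance) makes the cell fire more often:
print(count_spikes(g_leak=0.10), count_spikes(g_leak=0.05))
```

Changing a single conductance parameter shifts the firing rate, which is the qualitative point of the study: variant-by-variant changes to channels and transporters translate into measurable changes in single-neuron excitability.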

"Our computational simulations suggest that mutations in many of the schizophrenia-linked genes affect specific aspects of neuronal excitability and synaptic integration at the single-cell level," explained Dr. Anders Dale, corresponding author of the study and Professor of Neurosciences and Radiology at University of California, San Diego. "The results may provide important clues about basic physiological mechanisms underlying schizophrenia, which could be targeted with pharmaceutical interventions, and also lead to more targeted biomarkers for assessing therapeutic effects."

The published results suggest that most of the studied gene variants could have significant effects on a neuron's behavior. These neuronal changes could be a fundamental contributor to schizophrenia and other psychiatric disorders.

In addition to schizophrenia, this new computational model could be applied to other conditions including bipolar disorder, autism, attention-deficit/hyperactivity disorder, and substance use and addiction, as risk genes related to neuronal excitability in those conditions are identified.

"This novel approach integrates biological and mathematical modeling in the brain to help us understand the link between genetic risk and altered brain function," said Dr. Cameron Carter, Professor of Psychiatry and Behavioral Sciences at University of California, Davis and Editor of Biological Psychiatry: Cognitive Neuroscience and Neuroimaging. "Understanding the effect that disease-related genetic variation has on cells could unlock new insights into how to diagnose and treat these disorders."

The article is "Functional Effects of Schizophrenia-Linked Genetic Variants on Intrinsic Single-Neuron Excitability: A Modeling Study" by Tuomo Mäki-Marttunen, Geir Halnes, Anna Devor, Aree Witoelar, Francesco Bettella, Srdjan Djurovic, Yunpeng Wang, Gaute T. Einevoll, Ole A. Andreassen, and Anders M. Dale (doi: 10.1016/j.bpsc.2015.09.002). The article appears in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, Volume 1, Issue 1 (January 2016), published by Elsevier.

Researchers have created nanoribbons of an emerging class of materials called topological insulators and used a magnetic field to control their semiconductor properties, a step toward harnessing the technology to study exotic physics and building new spintronic devices or quantum computers.

Unlike ordinary materials that are either insulators or conductors, topological insulators are paradoxically both at the same time - they are insulators inside but conduct electricity on the surface, said Yong P. Chen, a Purdue University associate professor of physics and astronomy and electrical and computer engineering who worked with doctoral student Luis A. Jauregui and other researchers.

The materials might be used for "spintronic" devices and practical quantum computers far more powerful than today's technologies. In the new findings, the researchers used a magnetic field to induce a so-called “helical mode” of electrons, a capability that could make it possible to control the spin state of electrons.

The findings are detailed in a research paper that appeared in the advance online publication of the journal Nature Nanotechnology on Jan. 18 and showed that a magnetic field can be used to induce the nanoribbons to undergo a “topological transition,” switching between a material possessing a band gap on the surface and one that does not.

“Silicon is a semiconductor, meaning it has a band gap, a trait that is needed to switch on and off the conduction, the basis for silicon-based digital transistors to store and process information in binary code,” Chen said. “Copper is a metal, meaning it has no band gap and is always a good conductor. In both cases the presence or absence of a band gap is a fixed property. What is weird about the surface of these materials is that you can control whether it has a band gap or not just by applying a magnetic field, so it’s kind of tunable, and this transition is periodic in the magnetic field, so you can drive it through many ‘gapped’ and ‘gapless’ states.”  

The nanoribbons are made of bismuth telluride, the material behind solid-state cooling technologies such as commercial thermoelectric refrigerators.

“Bismuth telluride has been the workhorse material of thermoelectric cooling for decades, but just in the last few years people found this material and related materials have this amazing additional property of being topological insulators,” he said.

The paper was authored by Jauregui; Michael T. Pettes, a former postdoctoral researcher at the University of Texas at Austin and now an assistant professor in the Department of Mechanical Engineering at the University of Connecticut; Leonid P. Rokhinson, a Purdue professor of physics and astronomy and electrical and computer engineering; Li Shi, BF Goodrich Endowed Professor in Materials Engineering at the University of Texas at Austin; and Chen.

A key finding was that the researchers documented the use of nanoribbons to measure so-called Aharonov-Bohm oscillations, made possible by conducting electrons in opposite directions in ring-like paths around the nanoribbons. The structure of the nanoribbon - a nanowire that is topologically the same as a cylinder - is key to the discovery because it allows the study of electrons as they travel in a circular direction around the ribbon. The electrons conduct only on the surface of the nanowires, tracing out a cylindrical circulation.

“If you let electrons travel in two paths around a ring, in left and right paths, and they meet at the other end of the ring then they will interfere either constructively or destructively depending on the phase difference created by a magnetic field, resulting in either high or low conductivity, respectively, showing the quantum nature of electrons behaving as waves,” Jauregui said.
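The interference Jauregui describes is set by the magnetic flux threading the nanoribbon's cross-section. In textbook notation (not drawn from the paper itself), the Aharonov-Bohm phase difference between the two paths is

```latex
\Delta\varphi \;=\; 2\pi\,\frac{\Phi}{\Phi_0},
\qquad \Phi = B\,A,
\qquad \Phi_0 = \frac{h}{e},
```

where B is the axial magnetic field, A is the cross-sectional area of the ribbon, and Φ₀ is the flux quantum. Sweeping the field thus drives the conductance through constructive and destructive interference with a magnetic-field period of ΔB = Φ₀/A.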

The researchers demonstrated a new variation on this oscillation in topological insulator surfaces by inducing the spin helical mode of the electrons. The result is the ability to flip from constructive to destructive interference and back.

“This provides very definitive evidence that we are measuring the spin helical electrons,” Jauregui said. “We are measuring these topological surface states. This effect really hasn’t been seen very convincingly until recently, so now this experiment really provides clear evidence that we are talking about these spin helical electrons propagating on the cylinder, so this is one aspect of this oscillation.”

Findings also showed this oscillation as a function of “gate voltage,” representing another way to switch conduction from high to low.

“The switch occurs whenever the circumference of the nanoribbon contains just an integer number of the quantum mechanical wavelength, or ‘fermi wavelength,’ of the electrons wrapping around the surface, which is tuned by the gate voltage,” Chen said.
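Chen's switching condition amounts to a standing-wave quantization rule around the ribbon. In standard notation (an illustrative gloss, not the paper's own formulation), conduction switches whenever the circumference C satisfies

```latex
C \;=\; n\,\lambda_F \;=\; n\,\frac{2\pi}{k_F},
\qquad n = 1, 2, 3, \ldots,
```

where the gate voltage tunes the Fermi wavevector k_F (and hence the Fermi wavelength λ_F) through the carrier density, driving the surface states periodically through this condition.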

This was the first time researchers had seen this kind of gate-dependent oscillation in nanoribbons, and the finding further correlates it to the topological insulator band structure of bismuth telluride.

The nanoribbons are said to possess “topological protection,” preventing electrons on the surface from back scattering and enabling high conductivity, a quality not found in metals and conventional semiconductors. They were fabricated by researchers at UT Austin.

The measurements were performed while the nanoribbons were chilled to about minus 273 degrees Celsius (nearly minus 460 degrees Fahrenheit).

“We have to operate at low temperatures to observe the quantum mechanical nature of the electrons,” Chen said.

Future research will include work to further investigate the nanowires as a platform to study the exotic physics needed for topological quantum computations. Researchers will aim to connect the nanowires with superconductors, which conduct electricity with no resistance, for hybrid topological insulator-superconducting devices. By further combining topological insulators with a superconductor, researchers may be able to build a practical quantum computer that is less susceptible to the environmental impurities and perturbations that have presented challenges thus far. Such a technology would perform calculations using the laws of quantum mechanics, making for computers much faster than conventional computers at certain tasks such as database searches and code breaking.

The research and the team have been supported with funding from the U.S. Defense Advanced Research Projects Agency, Intel Corp., National Science Foundation, Department of Energy and the Purdue Center for Topological Materials. 


We humans take for granted our remarkable ability to predict things that happen around us. For example, consider Rube Goldberg machines: One of the reasons we enjoy them is because we can watch a chain-reaction of objects fall, roll, slide and collide, and anticipate what happens next. 

But how do we do it? How do we effortlessly absorb enough information from the world to be able to react to our surroundings in real-time? And, as a computer scientist might then wonder, is this something that we can teach machines? 

That last question has recently been partially answered by researchers at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), who have developed a computational model that is just as accurate as humans at predicting how objects move.

By training itself on real-world videos and using a “3-D physics engine” to simulate human intuition, the system — dubbed “Galileo” — can infer the physical properties of objects and predict the outcome of a variety of physical events.

While the researchers’ paper focused on relatively simple experiments involving ramps and collisions, they say that the model’s ability to generalize its findings and continuously improve itself means that it could readily predict a range of actions.

“From a ramp scenario, for example, Galileo can infer the density of an object and then predict if it can float,” says postdoc Ilker Yildirim, who was lead author alongside CSAIL PhD student Jiajun Wu. “This is just the first step in imbuing computers with a deeper understanding of dynamic scenes as they unfold.”

The paper, which was presented this past month at the Conference on Neural Information Processing Systems (NIPS) in Montreal, was co-authored by postdoc Joseph Lim and professor William Freeman, as well as professor Joshua Tenenbaum from the Department of Brain and Cognitive Sciences.

How they did it

Recent research in neuroscience suggests that in order for humans to understand a scene and predict events within it, our brains rely on a mental “physics engine” consisting of detailed but noisy knowledge about the physical laws that govern objects and the larger world.

Using the human framework to develop their model, the researchers first trained Galileo on a set of 150 videos that depict physical events involving objects of 15 different materials, from cardboard and foam to metal and rubber. This training allowed the model to generate a dataset of objects and their various physical properties, including shape, volume, mass, friction, and position in space.

From there, the team fed the model information from Bullet, a 3-D physics engine often used to create special effects for movies and video games. By inputting the setup of a given scene and then physically simulating it forward in time, Bullet serves as a reality check against Galileo’s hypotheses.

Finally, the team developed deep-learning algorithms that allow the model to teach itself to further improve its predictions to the point that, by the very first frame of a video, Galileo can recognize the objects in the scene, infer the properties of the objects, and determine how these objects will interact with one another.
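The estimate-simulate-compare loop at the heart of this pipeline can be caricatured in a few lines. Here a one-dimensional toy "physics engine" (a block sliding to rest under friction) stands in for Bullet, and the scenario, parameter ranges, and helper names are all invented for illustration:

```python
import random

def simulate(friction, v0=5.0, g=9.8, dt=0.01):
    """Toy 1-D 'physics engine': a block sliding to rest under friction."""
    x, v = 0.0, v0
    while v > 0:
        v -= friction * g * dt   # friction decelerates the block
        x += v * dt
    return x                     # distance travelled before stopping

def infer_friction(observed_distance, n_samples=2000, seed=0):
    """Guess-and-check inference: sample candidate friction coefficients,
    simulate each forward, and keep the one whose simulated outcome best
    matches the observed distance."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(n_samples):
        mu = rng.uniform(0.05, 1.0)
        err = abs(simulate(mu) - observed_distance)
        if err < best_err:
            best, best_err = mu, err
    return best

true_mu = 0.3
estimate = infer_friction(simulate(true_mu))
print(estimate)  # prints an estimate close to true_mu
```

The inference here is brute-force search; Galileo's actual pipeline couples a real physics engine (Bullet) with learned recognition networks so the search starts from a good guess, but the underlying estimate-simulate-compare structure is the same.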

“Humans learn physical properties by actively interacting with the world, but for our computers this is tricky because there is no training data,” says Abhinav Gupta, an assistant professor of computer science at Carnegie Mellon University. “This paper solves this problem in a beautiful manner, by combining deep-learning convolutional networks with classical AI ideas like simulation engines.”

Human vs. machine

To assess Galileo’s predictive powers, the team pitted it against human subjects in predicting the outcomes of a series of simulations (including one that has an interactive online demo).

In one, users see a series of object collisions, and then another video that stops at the moment of collision. The users are then asked to label how far they think the object will move.

“The scenario seems simple, but there are many different physical forces that make it difficult for a computer model to predict, from the objects’ relative mass and elasticity to gravity and the friction between surface and object,” Yildirim says. “Where humans learn to make such judgments intuitively, we essentially had to teach the system each of these properties and how they impact each other collectively.”

In another simulation, users first see a collision involving a 20-degree inclined ramp, and then are shown the first frame of a video with a 10-degree ramp and asked to predict whether the object will slide down the surface.

“Interestingly, both the computer model and human subjects perform this task at chance and have a bias at saying that the object will move,” Yildirim says. “This suggests not only that humans and computers make similar errors, but provides further evidence that human scene understanding can be best described as probabilistic simulation.”

What’s next

The team members say that they plan to extend the research to more complex scenarios involving fluids, springs, and other materials. Continued progress in this line of work, they say, could lead to direct applications in robotics and artificial intelligence.

“Imagine a robot that can readily adapt to an extreme physical event like a tornado or an earthquake,” Lim says. “Ultimately, our goal is to create flexible models that can assist humans in settings like that, where there is significant uncertainty.”

Caption: From left: Dustin Pho, Chris Wu, and Peter Steele.

Java the Hutt, a team of three Virginia Tech Department of Computer Science students and their advisor, will travel to Thailand in May 2016 to participate in the world finals of the Association of Computing Machinery’s International Collegiate Programming Competition (ACM-ICPC). 

This is the third year in a row that a team from Virginia Tech's College of Engineering has advanced to the world finals.

The trio -- Chris Wu and Dustin Pho, both of Springfield, Virginia, and Peter Steele of McLean, Virginia -- took second place in the 2015 Mid-Atlantic Regional intercollegiate programming competition on Nov. 7, placing higher than 183 teams from universities and colleges across the region.

Of the seven three-member teams Virginia Tech sent to the regionals, three finished in the top 20.

Coach and faculty advisor Godmar Back, an associate professor of computer science, will travel with the team to Phuket Island, Thailand, in May.

“ACM’s International Collegiate Programming Competition is the oldest, largest, and most prestigious programming contest for college students in the world,” said Back. “Each year, almost 40,000 students participate worldwide.”

In the regional competition, the team was faced with seven different problems, with five hours and one computer to solve them.

Steele credited the team’s win to “a combination of preparation, teamwork, skill, and a bit of luck.” Preparation was intense, said the sophomore.

The team competed together in a mock five-hour contest every weekend, and members took part in two two-hour individual coding contests each week.

“We couldn’t have done it without our coach,” said Steele.

“I am very happy about the success of this team, especially since it was their first time participating in the regional competition,” said Back. “They got a great start at their first attempt, but not without extensive practice leading in.”

Since 2012, Back’s training program has produced three world finalist teams.

“Being an ICPC world finalist is a coveted prize very few students are able to achieve,” he said. “It provides a lifelong badge of honor recognized by many in the field.”

Steele said the finals will be even more challenging.

“The world finals are an entirely different game. The problems are significantly harder, the time limit for each program to produce the correct output is tighter, and the teams that you compete against are much better,” he said.

A magnified view of a microbe on Arabidopsis plant roots seemingly provides a "window" into the rhizosphere, or root zone. In fact, that's exactly what a multi-institute research campaign is trying to frame — a view into the world of soil, roots and microorganisms. The image was obtained at EMSL, the Environmental Molecular Sciences Laboratory, a U.S. Department of Energy Office of Science national scientific user facility located at Pacific Northwest National Laboratory. The campaign includes scientists from EMSL, PNNL, DOE's Joint Genome Institute, Brookhaven National Laboratory, and the Universities of Minnesota and Missouri. Funded by DOE's Office of Biological and Environmental Research, the study examines carbon presence and distribution within the root zone, and impacts to rhizosphere microbial community diversity and functions. Results could point toward climate and environmental solutions.

A dozen colorful images captured during research are featured

Enjoying the beauty of science year-round is easy with a new digital calendar and computer wallpaper containing captivating images that illustrate research at the Department of Energy's Pacific Northwest National Laboratory.

The 2016 calendar and wallpaper feature 12 colorful images, including close-up views of materials under a microscope and visualized computational modeling results. The images — which showcase everything from bacteria to batteries — are the result of PNNL's diverse research, including biofuels, energy storage, cybersecurity and biological threat detection.

For example, the month of November shows a magnified microbe growing on a plant root. The image was collected while PNNL scientists studied the carbon cycle among soil, roots and microorganisms. The research could one day lead to solutions for climate and environmental challenges.

PNNL chose the dozen images from more than 60 nominations that were submitted in 2015 by its staff. This is the fifth year in a row the national laboratory has produced a scientific art calendar.

To download a calendar or wallpaper, go to the 2016 Science as Art Calendar website. The images can also be seen on PNNL's Flickr account.
