
Sandeep Kumar, an assistant professor of mechanical engineering at UC Riverside.

Discoveries will help realize the promise of faster, energy-efficient spintronic computers and ultra-high-capacity data storage

Engineers at the University of California, Riverside, have reported advances in so-called "spintronic" devices that will help lead to a new technology for computing and data storage. They have developed methods to detect signals from spintronic components made of low-cost metals and silicon, overcoming a major barrier to the wide application of spintronics. Previously, such devices depended on complex structures that used rare and expensive metals such as platinum. The researchers were led by Sandeep Kumar, an assistant professor of mechanical engineering.

UCR researchers have developed methods to detect signals from spintronic components made of low-cost metals and silicon.

Spintronic devices promise to solve major problems with today's electronic computers, which use massive amounts of electricity and generate heat that requires expending even more energy for cooling. By contrast, spintronic devices generate little heat and use relatively minuscule amounts of electricity. Spintronic computers would require no energy to maintain data in memory. They would also start instantly and have the potential to be far more powerful than today's computers.

While electronics depends on the charge of electrons to generate the binary ones or zeroes of computer data, spintronics depends on the property of electrons called spin. Spintronic materials register binary data via the "up" or "down" spin orientation of electrons--like the north and south of bar magnets--in the materials. A major barrier to development of spintronics devices is generating and detecting the infinitesimal electric spin signals in spintronic materials.

In one paper, published in the January issue of the scientific journal Applied Physics Letters, Kumar and colleagues reported an efficient technique for detecting spin currents in a simple two-layer sandwich of silicon and a nickel-iron alloy called Permalloy. All three of the components are inexpensive and abundant and could provide the basis for commercial spintronic devices. They also operate at room temperature. The layers were created with the widely used electronics manufacturing process called sputtering. Co-authors of the paper were graduate students Ravindra Bhardwaj and Paul Lou.

In their experiments, the researchers heated one side of the Permalloy-silicon bi-layer sandwich to create a temperature gradient, which generated an electrical voltage in the bi-layer. The voltage was due to a phenomenon known as the spin-Seebeck effect. The engineers found that they could detect the resulting "spin current" in the bi-layer due to another phenomenon known as the "inverse spin-Hall effect."
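The detection chain can be summarized with the standard textbook relations for these two effects (a general sketch; the paper's quantitative model may differ in detail):

```latex
% Spin-Seebeck effect: the temperature gradient across the
% Permalloy/silicon bi-layer drives a spin current J_s
\mathbf{J}_s \propto -S_s \, \nabla T

% Inverse spin-Hall effect: spin-orbit coupling converts the spin
% current into a transverse electric field, i.e. a measurable voltage
\mathbf{E}_{\mathrm{ISHE}} = \theta_{\mathrm{SH}} \, \rho \, \big( \mathbf{J}_s \times \boldsymbol{\sigma} \big)
```

where \(S_s\) is a spin-Seebeck coefficient, \(\theta_{\mathrm{SH}}\) the spin-Hall angle, \(\rho\) the resistivity, and \(\boldsymbol{\sigma}\) the spin-polarization direction.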

The researchers said their findings will have application to efficient magnetic switching in computer memories, and "these scientific breakthroughs may give impetus" to development of such devices. More broadly, they concluded, "These results bring the ubiquitous Si (silicon) to the forefront of spintronics research and will lay the foundation of energy efficient Si spintronics and Si spin caloritronics devices."

In two other scientific papers, the researchers demonstrated that they could generate a key property for spintronics materials, called antiferromagnetism, in silicon. The achievement opens an important pathway to commercial spintronics, said the researchers, given that silicon is inexpensive and can be manufactured using a mature technology with a long history of application in electronics.

Ferromagnetism is the property of magnetic materials in which the magnetic poles of the atoms are aligned in the same direction. In contrast, antiferromagnetism is a property in which the neighboring atoms are magnetically oriented in opposite directions. These "magnetic moments" are due to the spin of electrons in the atoms, and are central to the application of the materials in spintronics.

In the two papers, Kumar and Lou reported detecting antiferromagnetism in the two types of silicon--called n-type and p-type--used in transistors and other electronic components. N-type semiconductor silicon is "doped" with substances that cause it to have an abundance of negatively-charged electrons; and p-type silicon is doped to have a large concentration of positively charged "holes." Combining the two types enables switching of current in such devices as transistors used in computer memories and other electronics.

In the paper in the Journal of Magnetism and Magnetic Materials, Lou and Kumar reported detecting the spin-Hall effect and antiferromagnetism in n-silicon. Their experiments used a multilayer thin film comprising palladium, nickel-iron Permalloy, manganese oxide and n-silicon.

And in the second paper, in the scientific journal physica status solidi, they reported detecting in p-silicon spin-driven antiferromagnetism and a transition of silicon between metal and insulator properties. Those experiments used a thin film similar to those with the n-silicon.

The researchers wrote in the latter paper that "The observed emergent antiferromagnetic behavior may lay the foundation of Si (silicon) spintronics and may change every field involving Si thin films. These experiments also present potential electric control of magnetic behavior using simple semiconductor electronics physics. The observed large change in resistance and doping dependence of phase transformation encourages the development of antiferromagnetic and phase change spintronics devices."

In further studies, Kumar and his colleagues are developing technology to switch spin currents on and off in the materials, with the ultimate goal of creating a spin transistor. They are also working to generate larger, higher-voltage spintronic chips. The result of their work could be extremely low-power, compact transmitters and sensors, as well as energy-efficient data storage and computer memories, said Kumar.

The study's findings will help train artificial intelligence to diagnose diseases

Researchers used machine learning techniques, including natural language processing algorithms, to identify clinical concepts in radiologist reports for CT scans, according to a study conducted at the Icahn School of Medicine at Mount Sinai and published today in the journal Radiology. The technology is an important first step in the development of artificial intelligence that could interpret scans and diagnose conditions.

From an ATM reading handwriting on a check to Facebook suggesting a photo tag for a friend, computer vision powered by artificial intelligence is increasingly common in daily life. Artificial intelligence could one day help radiologists interpret X-rays, computed tomography (CT) scans, and magnetic resonance imaging (MRI) studies. But for the technology to be effective in the medical arena, computer software must be "taught" the difference between a normal study and abnormal findings.

This study aimed to teach this technology to understand text reports written by radiologists. Researchers created a series of algorithms to teach the computer clusters of phrases. Examples of the terminology included words like phospholipid, heartburn, and colonoscopy.
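One simple way a computer can learn that phrases belong together is by comparing the contexts they appear in. The toy sketch below (invented mini-reports, not the study's data or its actual algorithms) builds word co-occurrence vectors and scores their similarity:

```python
import math
from collections import defaultdict

# Toy "radiology report" sentences; the real study used 96,303 reports.
reports = [
    "no acute intracranial hemorrhage",
    "no evidence of acute hemorrhage or infarct",
    "chronic infarct in the left hemisphere",
    "acute infarct with surrounding edema",
    "no hemorrhage edema or mass effect",
]

# Count how often each word co-occurs with each neighbor (window of 2).
cooc = defaultdict(lambda: defaultdict(int))
for report in reports:
    words = report.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Words used in similar contexts score higher -- the basis for letting
# a model learn clusters of related clinical phrases.
print(cosine(cooc["hemorrhage"], cooc["infarct"]))
print(cosine(cooc["hemorrhage"], cooc["chronic"]))
```

On this toy corpus, "hemorrhage" and "infarct" (both findings) come out more similar than "hemorrhage" and "chronic" (a qualifier), which is the kind of structure clustering algorithms exploit.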

Researchers trained the computer software using 96,303 radiologist reports associated with head CT scans performed at The Mount Sinai Hospital and Mount Sinai Queens between 2010 and 2016. To characterize the "lexical complexity" of radiologist reports, researchers calculated metrics that reflected the variety of language used in these reports and compared these to other large collections of text: thousands of books, Reuters news stories, inpatient physician notes, and Amazon product reviews.
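The article does not say which "lexical complexity" metrics were calculated; two common, simple ones are the type-token ratio and word-level Shannon entropy, sketched here on invented snippets:

```python
import math
from collections import Counter

def type_token_ratio(text):
    """Fraction of distinct words -- higher means more varied vocabulary."""
    words = text.lower().split()
    return len(set(words)) / len(words)

def word_entropy(text):
    """Shannon entropy (bits per word) of the word distribution."""
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Templated clinical prose tends to repeat itself, so it scores lower
# than general prose on both metrics (toy examples, invented here).
radiology = "no acute hemorrhage no acute infarct no acute mass effect"
news = "markets rallied sharply after regulators unveiled surprise reforms"

print(type_token_ratio(radiology), type_token_ratio(news))
print(word_entropy(radiology), word_entropy(news))
```

Metrics like these, computed over tens of thousands of reports versus books or news stories, quantify the claim that radiology language is comparatively constrained.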

"The language used in radiology has a natural structure, which makes it amenable to machine learning," says senior author Eric Oermann, MD, Instructor in the Department of Neurosurgery at the Icahn School of Medicine at Mount Sinai. "Machine learning models built upon massive radiological text datasets can facilitate the training of future artificial intelligence-based systems for analyzing radiological images."

Deep learning describes a subcategory of machine learning that uses multiple layers of neural networks (computer systems that learn progressively) to perform inference, requiring large amounts of training data to achieve high accuracy. Techniques used in this study led to an accuracy of 91 percent, demonstrating that it is possible to automatically identify concepts in text from the complex domain of radiology.

"The ultimate goal is to create algorithms that help doctors accurately diagnose patients," says first author John Zech, a medical student at the Icahn School of Medicine at Mount Sinai. "Deep learning has many potential applications in radiology -- triaging to identify studies that require immediate evaluation, flagging abnormal parts of cross-sectional imaging for further review, characterizing masses concerning for malignancy -- and those applications will require many labeled training examples."

"Research like this turns big data into useful data and is the critical first step in harnessing the power of artificial intelligence to help patients," says study co-author Joshua Bederson, MD, Professor and System Chair for the Department of Neurosurgery at Mount Sinai Health System and Clinical Director of the Neurosurgery Simulation Core.

CAPTION Ritesh Agarwal's research on photonic computing has been focused on finding the right combination and physical configuration of materials that can amplify and mix light waves in ways that are analogous to electronic computer components. In a paper published in Nature Communications, he and his colleagues have taken an important step: precisely controlling the mixing of optical signals via tailored electric fields, and obtaining outputs with a near perfect contrast and extremely large on/off ratios. Those properties are key to the creation of a working optical transistor.  CREDIT Sajal Dhara

Current computer systems represent bits of information, the 1's and 0's of binary code, with electricity. Circuit elements, such as transistors, operate on these electric signals, producing outputs that are dependent on their inputs.

As fast and powerful as supercomputers have become, Ritesh Agarwal, professor in the Department of Materials Science and Engineering in the University of Pennsylvania's School of Engineering and Applied Science, knows they could be more powerful. The field of photonic computing aims to achieve that goal by using light as the medium.

Agarwal's research on photonic computing has been focused on finding the right combination and physical configuration of materials that can amplify and mix light waves in ways that are analogous to electronic computer components.

In a paper published in Nature Communications, he and his colleagues have taken an important step: precisely controlling the mixing of optical signals via tailored electric fields, and obtaining outputs with a near perfect contrast and extremely large on/off ratios. Those properties are key to the creation of a working optical transistor.

"Currently, to compute '5+7,' we need to send an electrical signal for '5' and an electrical signal for '7,' and the transistor does the mixing to produce an electrical signal for '12,'" Agarwal said. "One of the hurdles in doing this with light is that materials that are able to mix optical signals also tend to have very strong background signals as well. That background signal would drastically reduce the contrast and on/off ratios leading to errors in the output."

With background signals washing out the intended output, necessary computational qualities for optical transistors, such as their on/off ratio, modulation strength and signal-mixing contrast, have all been extremely poor. Electronic transistors have high standards for these qualities to prevent errors.

The search for materials that can serve in optical transistors is complicated by additional property requirements. Only "nonlinear" materials are capable of this kind of optical signal mixing.

To address this issue, Agarwal's research group started by finding a system with no background signal to start with: a nanoscale "belt" made out of cadmium sulfide. Then, by applying an electric field across the nanobelt, Agarwal and his colleagues were able to introduce optical nonlinearities to the system, enabling a signal-mixing output that was otherwise zero.
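One generic way to express such field-switched mixing (an illustrative textbook form, not the paper's full model; the actual symmetry analysis of the cadmium sulfide nanobelt is more subtle):

```latex
% The mixed output at the sum or difference frequency is governed by
% an effective second-order susceptibility:
P(\omega_1 \pm \omega_2) \;\propto\; \chi^{(2)}_{\mathrm{eff}} \, E(\omega_1) \, E(\omega_2)

% With the relevant background contribution absent, an applied DC
% field switches the effective nonlinearity on, e.g. through the
% third-order response:
\chi^{(2)}_{\mathrm{eff}} \;\propto\; \chi^{(3)} \, E_{\mathrm{dc}}
```

With \(E_{\mathrm{dc}} = 0\) the mixed output is zero, so toggling the field on and off yields, in principle, the perfect contrast described above.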

"Our system turns on from zero to extremely large values, and hence has perfect contrast, as well as large modulation and on/off ratios," Agarwal said. "Therefore, for the first time, we have an optical device with output that truly resembles an electronic transistor."

With one of the key components coming into focus, the next steps toward a photonic computer will involve integrating them with optical interconnects, modulators, and detectors in order to demonstrate actual computation.

The Data Science Institute (DSI) at Columbia awarded Seeds Fund Grants to five research teams whose proposed projects will use state-of-the-art data science to solve seemingly intractable societal problems in the fields of cancer research, medical science, transportation and technology.

Each team will receive up to $100,000 for one year and be eligible for a second year of funding.

"In awarding these grants, the DSI review committee selected projects that brought together teams of scholars who aspire to push the state-of-the-art in data science by conducting novel, ethical and socially beneficial research," said Jeannette M.Wing, Avanessians Director of the Data Science Institute. "The five winning teams combine data-science experts with domain experts who'll work together to transform several fields throughout the university."

The inaugural DSI Seeds Fund Program supports collaborations between faculty members and researchers from various disciplines, departments and schools throughout Columbia University. DSI received several excellent proposals from faculty, which shows a growing and enthusiastic interest in data-science research at Columbia, added Wing.

The seed program is just one of many initiatives that Wing has spearheaded since the summer of 2017, when she was named director of DSI, a world-leading institution in the field of data science. The other initiatives include founding a Post-doctoral Fellows Program; a Faculty Recruiting Program; and an Undergraduate Research Program.

What follows are brief descriptions of the winning projects, along with the names and affiliations of the researchers on each team.

p(true): Distilling Truth by Community Rating of Claims on the Web:

The team: Nikolaus Kriegeskorte, Professor, Psychology and Director of Cognitive Imaging, Zuckerman's Institute; Chris Wiggins, Associate Professor, Department of Applied Physics and Applied Mathematics, Columbia Engineering; Nima Mesgarani, Assistant Professor, Electrical Engineering Department, Columbia Engineering; Trenton Jerde, Lecturer, Applied Analytics Program, School of Professional Studies.

The social web is driven by feedback mechanisms, or "likes," which emotionalize the sharing culture and may contribute to the formation of echo chambers and political polarization, according to this team. In their p(true) project, the team will thus build a complementary mechanism for web-based sharing of reasoned judgments, so as to bring rationality to the social web.

Websites such as Wikipedia and Stack Overflow are surprisingly successful at providing a reliable representation of uncontroversial textbook knowledge, the team says. But the web doesn't work well in distilling the probability of contentious claims. The question the team seeks to answer is this: How can the web best be used to enable people to share their judgments and work together to find the truth?

The web gives users amazing power to communicate and collaborate, but users have yet to learn how to use that power to distill the truth, the team says. Web users can access content, share it with others, and give instant feedback on claims. But those actions end up boosting certain claims while blocking others, which amounts to a powerful mechanism of amplification and filtering. If the web is to help people think well together, then the mechanism that determines what information is amplified, the team maintains, should be based on rational judgment, rather than emotional responses as communicated by "likes" and other emoticons.

The team's goal is to build a website, called p(true), that enables people to rate and discuss claims (e.g., a New York Times headline) on the web. The team's method will enable the community to debunk falsehoods and lend support to solid claims. It's also a way to share reasoned judgments, pose questions to the community and start conversations.

Planetary Linguistics:

The team: David Kipping, Assistant Professor, Department of Astronomy; Michael Collins, Professor, Computer Science Department, Columbia Engineering.

In the last few years, the catalog of known exoplanets - planets orbiting other stars - has swelled from a few hundred to several thousand. With 200 billion stars in our galaxy alone and with the rise of online planet-hunting missions, an explosion of discovery of exoplanets seems imminent. Indeed, it has been argued that the cumulative number of known exoplanets doubles every 27 months, analogous to Moore's law, implying a million exoplanets by 2034, and a billion by 2057.
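The quoted doubling law can be sanity-checked with a quick back-of-envelope projection (the baseline of roughly 3,700 known exoplanets in 2018 is an assumed figure, consistent with "several thousand"):

```python
import math

T_DOUBLE = 27 / 12       # doubling time in years (27 months)
N0, YEAR0 = 3700, 2018   # assumed baseline catalog size and year

def year_reaching(target):
    """Year the catalog reaches `target` planets under steady doubling."""
    return YEAR0 + T_DOUBLE * math.log2(target / N0)

print(round(year_reaching(1e6)))  # lands in the mid-2030s
print(round(year_reaching(1e9)))  # lands in the late 2050s
```

The projection lands within a couple of years of the article's 2034 and 2057 figures, which is as close as an exponential extrapolation from a rough baseline can be expected to get.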

By studying the thousands of extrasolar planets that have been discovered in recent years, can researchers infer the rules by which planetary systems emerge and evolve? This team aims to answer that question by drawing upon an unusually interesting source: computational linguistics.

The challenge of understanding the emergence of planetary systems must be done from afar with limited information, according to the team. In that way, it's similar to attempting to learn an unknown language from snippets of conversation, say Kipping and Collins. Drawing upon this analogy, the two will explore whether the mathematical tools of computational linguistics can reveal the "grammatical" rules of planet formation. Their goals include building predictive models, much like the predictive text on a smartphone, which will be capable of intelligently optimizing telescope resources. They also aim to uncover the rules and regularities in planetary systems, specifically through the application of grammar-induction methods used in computational linguistics.

The team maintains that the pre-existing tools of linguistics and information theory are ideally suited for unlocking the rules of planet formation. Using this framework, they'll consider each planet to be analogous to a word; each planetary system to be a sentence; and the galaxy to be a corpus of text from which they might blindly infer the grammatical rules. The basic thesis of their project is to see whether they can use the well-established mathematical tools of linguistics and information theory to improve our understanding of exoplanetary systems.
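The word/sentence analogy can be made concrete with the simplest possible "grammar": a bigram model over planet classes, where each system is a sentence read outward from the star. All data below is invented purely for illustration; it is not the team's model or catalog:

```python
from collections import Counter, defaultdict

# Each "sentence" is a planetary system, each "word" a planet class,
# ordered by distance from the star (toy data, invented here).
systems = [
    ["rocky", "rocky", "gas_giant"],
    ["rocky", "gas_giant", "ice_giant"],
    ["rocky", "rocky", "gas_giant", "ice_giant"],
    ["rocky", "gas_giant"],
    ["gas_giant", "rocky"],  # a "hot Jupiter" system breaks the pattern
]

# Count bigram transitions, including a start-of-system marker.
bigrams = defaultdict(Counter)
for system in systems:
    prev = "<start>"
    for planet in system:
        bigrams[prev][planet] += 1
        prev = planet

def p_next(prev, planet):
    """Maximum-likelihood P(next planet class | previous class)."""
    total = sum(bigrams[prev].values())
    return bigrams[prev][planet] / total if total else 0.0

# The induced "grammar" prefers rocky planets first, giants farther out,
# which is exactly the kind of regularity grammar induction would mine.
print(p_next("<start>", "rocky"))
print(p_next("gas_giant", "ice_giant"))
```

A model like this assigns a probability to any proposed system architecture and can rank candidate "missing words" (undetected planets), the two capabilities the team lists as goals.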

The three central questions they hope to answer are: Is it possible to build predictive models that assign a probability distribution over different planetary-system architectures? Can those models predict the presence and properties of missing planets in planetary systems? And can they uncover the rules and regularities in planetary systems, specifically through the application of grammar-induction methods used in computational linguistics?

In short, the team says it intends to "speak planet."

A Game-Theoretical Framework for Modeling Strategic Interactions Between Autonomous and Human-Driven Vehicles:

The team: Xuan Sharon Di, Assistant Professor, Department of Civil Engineering and Engineering Mechanics, Columbia Engineering; Qiang Du, Professor, Applied Physics and Applied Mathematics, Columbia Engineering and Data Science Institute; Xi Chen, Associate Professor, Computer Science Department, Columbia Engineering; Eric Talley, Professor and Co-Director, Millstein Center for Global Markets and Corporate Ownership, Columbia Law School.

Autonomous vehicles, expected to arrive on roads over the course of the next decade, will connect with each other and to the surrounding environment by sending messages regarding timestamp, current location, speed and more.

Yet no one knows exactly how autonomous vehicles (AV) and conventional human-driven vehicles (HV) will co-exist and interact on roads. The vast majority of research has considered the engineering challenges from the perspective either of a single AV immersed in a world of human drivers or one in which only AVs are on the road. Much less research has focused on the transition path between these two scenarios. To fill that gap, this team will develop a framework using the game-theoretic approach to model strategic interactions of HVs and AVs. Their approach aims to understand the strategic interactions likely to link the two types of vehicles.
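A toy flavor of such a game-theoretic model: a single merge interaction where an AV and a human driver each choose to yield or go. The payoff numbers are invented purely for illustration; computing best responses then identifies the equilibria:

```python
# Payoffs (AV, HV) for one merge interaction; values are invented --
# colliding is worst, mutual hesitation wastes time for both.
payoffs = {
    ("go", "go"): (-10, -10),      # collision risk
    ("go", "yield"): (2, 0),
    ("yield", "go"): (0, 2),
    ("yield", "yield"): (-1, -1),  # both hesitate
}
ACTIONS = ("go", "yield")

def best_response(player, other_action):
    """Action maximizing `player`'s payoff given the other's action."""
    idx = 0 if player == "av" else 1
    def payoff(action):
        pair = (action, other_action) if player == "av" else (other_action, action)
        return payoffs[pair][idx]
    return max(ACTIONS, key=payoff)

# Pure-strategy Nash equilibria: each action is a best response to the other.
equilibria = [
    (a, h) for a in ACTIONS for h in ACTIONS
    if best_response("av", h) == a and best_response("hv", a) == h
]
print(equilibria)
```

Even this two-by-two game has two equilibria (one vehicle goes, the other yields), hinting at why mixed AV/HV traffic needs a principled framework to predict which outcome drivers and algorithms will coordinate on.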

Along with exploring these technical matters, the team will address the ethical issues associated with autonomous vehicles. What decision should an AV make when confronted with an obstacle? If it swerves to miss the obstacle, it will hit five people, including an old woman and a child, whereas if it goes straight it will hit a car and injure five passengers. Algorithms are designed to solve such problems, but bias can creep in depending on how one frames the problem. The team will investigate how to design algorithms that account for such ethical dilemmas while eliminating bias.

Predicting Personalized Cancer Therapies Using Deep Probabilistic Modeling:

The team: David Blei, Professor, Statistics and Computer Science Departments, Columbia Engineering; Raul Rabadan, Professor, Systems Biology and Biomedical Informatics, CUMC; Anna Lasorella, Associate Professor, Pathology and Cell Biology and Pediatrics, CUMC; Wesley Tansey, Postdoctoral Research Scientist, Department of Systems Biology, CUMC.

Precision medicine aims to find the right drug for the right patient at the right moment and with the proper dosage. Such accuracy is especially relevant in cancer treatments, where standard therapies elicit different responses from different patients.

Cancers are caused by the accumulation of alterations in the genome. Exact causes vary between different tumors, and no two tumors have the same set of alterations. One can envision that by mapping a comprehensive set of causes of specific tumors, one can provide patient-specific drug recommendations. And drugs targeting specific alterations in the genome provide some of the most successful therapies.

This team has identified new alterations targeted by drugs that are either currently approved or in clinical trials. Most tumors, however, lack specific targets and patient-specific therapies. A recent study of 2,600 patients at the MD Anderson Cancer Center, for instance, showed that genetic analysis permits only 6.4 percent of patients to be paired with a drug aimed specifically at the mutation deemed responsible. This team believes the low percentage highlights the need for new approaches that will match drugs to genomic profiles.

The team's goal, therefore, is to model, predict, and target therapeutic sensitivity and resistance of cancer. The predictions will be validated in cancer models designed in Dr. Lasorella's lab, which will enhance the accuracy of the effort. They will also integrate Bayesian modeling with variational inference and deep-learning methods, leveraging the expertise of two leading teams in computational genomics (Rabadan's group) and machine learning (Blei's group) across the Medical and Morningside Heights campuses.

They have sequenced RNA and DNA from large collections of tumors and have access to databases of genomic, transcriptomic and drug response profiles of more than 1,000 cancer-cell lines. They will use their expertise in molecular characterization of tumors and machine-learning techniques to model and explore these databases.

Enabling Small-Data Medical Research with Private Transferable Knowledge from Big Data:

The team: Roxana Geambasu, Associate Professor, Computer Science Department, Columbia Engineering; Nicholas Tatonetti, Herbert Irving Assistant Professor, Biomedical Informatics, CUMC; Daniel Hsu, Assistant Professor, Computer Science Department, Columbia Engineering.

Clinical data hold enormous promise to advance medical science. A major roadblock to more research in this area is the lack of infrastructural support for sharing of large-scale, clinical datasets across institutions. Virtually every hospital and clinic collects detailed medical records about its patients. For example, New York Presbyterian has developed a large-scale, longitudinal dataset, called the Clinical Data Warehouse that contains clinical information from more than five million patients.


Such datasets are made available to researchers within their home institutions through the Institutional Review Board (IRB), but hospitals are usually wary of sharing data with other institutions. Their main concern is privacy and the legal and public-relations implications of a potential data breach. At times, hospitals will permit the sharing of statistics from their data but generally cross-institution IRBs take years to finalize, often with rejection. Siloing these large-scale, clinical datasets severely constrains the quality and pace of innovation in data-driven medical science. The siloing effect limits the scope and rigor of the research that can be conducted on these datasets.

To overcome this challenge, the team is building an infrastructure system for sharing machine-learning models of large-scale, clinical datasets that will also preserve patients' privacy. The new system will enable medical researchers in small clinics or pharmaceutical companies to incorporate multitask feature models that are learned from big clinical datasets. The researchers, for example, could call upon New York Presbyterian's Clinical Data Warehouse and bootstrap their own machine-learning models on top of smaller clinical datasets. The multitask feature models, moreover, will protect the privacy of patient records in the large datasets through a rigorous method called differential privacy. The team anticipates that the system will vastly improve the pace of innovation in clinical-data research while alleviating privacy concerns.
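Differential privacy, the method named above, works by adding carefully calibrated noise to anything released from the data. A minimal sketch of the standard Laplace mechanism for a counting query (an illustration of the general technique, not the team's actual system):

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one patient
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient; smaller epsilon means stronger privacy.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
# e.g. "patients with diagnosis X" drawn from a large clinical dataset
print(private_count(12873, epsilon=0.1, rng=rng))
```

The same idea extends from single counts to the parameters of shared machine-learning models, which is the level at which the team plans to apply the guarantee.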

New tools will fight skin condition, personalize diagnosis and management, and identify leads for drug treatment

An experienced interdisciplinary team of psoriasis and computational researchers from Case Western Reserve University School of Medicine (CWRU SOM) and University Hospitals Cleveland Medical Center (UHCMC) has received a $6.5M, 5-year grant from the National Institute of Arthritis, Musculoskeletal and Skin Diseases (NIAMS).

The grant supports a Center of Research Translation in Psoriasis (CORT) at CWRU and UHCMC.

"The CORT brings together the strengths of the Department of Dermatology and the Murdough Family Center for Psoriasis in psoriasis care and research with the innovative approaches of our Institute for Computational Biology, and Department of Population & Quantitative Health Sciences (PQHS)," said Kevin Cooper, MD who serves as NIH Contact Principal Investigator and Administrative Director of the Center.

The CORT will integrate cutting-edge technology and bioinformatics with basic and clinical science in order to advance translational discovery and application in psoriasis. The goal is to better identify and treat those psoriasis patients that are more susceptible to developing comorbidities (simultaneous medical conditions) associated with psoriasis, such as cardiovascular disease, diabetes, depression, and psoriatic arthritis. Currently it is difficult for physicians to determine which patients will develop these comorbidities.

The researchers will cull data collected from blood and skin samples of UHCMC psoriasis patients and preclinical models, looking for new patterns and relationships developed using a systems biology approach. The investigative team will combine these data with psoriasis-patient information from CLEARPATH, an Ohio-based database that integrates electronic medical records (EMR) from multiple hospital systems.

"Armed with this large pool of data and new ways to work with it, we can make better connections between groups of patients with similar forms of psoriasis versus an individual's unique biology and therapy options," said Mark Cameron, PhD from the department of PQHS, who leads the computational biology team.

This approach will use tailored computational processes to zero in on drug candidates or repurposed drugs matching the patient profiles from a large database of tens of thousands of drugs--and test the drugs in genetically engineered psoriasis mouse models. The hope is that drugs that are successful in treating the mice will advance to humans.

"The CWRU/UHCMC CORT is a focal point for new and innovative mouse models that mimic psoriasis in humans--including, crucially, comorbidities of human psoriasis patients," said the project's preclinical lead investigator, Nicole Ward, PhD, from the Department of Dermatology.

"We're getting better and better at managing psoriasis patients' skin disease," said Cooper. "But we still don't have a complete grasp of the comorbidities. Through this form of personalized medicine, we think we can make great strides in determining which psoriasis patients are likely to suffer from the various co-occurring ailments, ultimately fashioning treatments for them."

Published study involving Yale-NUS undergraduates provides evidence of similarities between two different mathematical models impacting magnetoresistance research

Two Yale-NUS College in Singapore undergraduates are part of a research team that concluded that two different mathematical models, which describe the same physical phenomenon, are essentially equivalent. The discovery could have implications for future research into magnetoresistance and its practical applications in a wide range of electronic devices. After implementing the two different models of magnetoresistance as computer simulations, Lai Ying Tong, 21, and Silvia Lara, 22, found that the two simulations produced similar results under identical conditions. Magnetoresistance is a physical phenomenon where the electric resistivity of a material changes when subjected to a magnetic field. The research was published in the peer-reviewed journal Physical Review B in December 2017, and presented at international conferences in 2016 and 2017.

The two Yale-NUS undergraduate students worked on the project under the mentorship of Associate Professor Shaffique Adam from Yale-NUS College and the Department of Physics at the National University of Singapore's (NUS) Faculty of Science, and Associate Professor Meera Parish from Monash University. They were guided by Navneeth Ramakrishnan, a Masters student at the Department of Physics at the NUS Faculty of Science and NUS Centre for Advanced 2D Materials, who checked their results and wrote the paper.

The findings provided a unified theoretical framework to understand a phenomenon known as 'linear unsaturating magnetoresistance', as well as clear predictions on how to manipulate the effect. Prior to their research, two separate theoretical mathematical models had been proposed to describe how the phenomenon works: the Random Resistance Network (RRN) model and the Effective Medium Theory (EMT) model. Empiricists exploring magnetoresistance generally refer to either of these two models to contextualise their experiments, but do not provide a detailed comparison between the theories and their experimental results. This latest finding not only unifies the two existing theories, but also validates that these theories are accurate descriptions which match with experimental data.
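For context, the signature that both models reproduce is often quoted schematically as follows (a hedged sketch of the commonly cited scaling; the paper gives the precise, unified result):

```latex
% In a disordered conductor the magnetoresistance does not saturate:
% at high field it grows linearly, with the slope and crossover field
% governed by the spread of carrier mobilities \Delta\mu rather than
% by the average mobility alone.
\frac{\Delta\rho}{\rho_0} \;\sim\; \Delta\mu \, B
\qquad \text{for} \quad B \gg B_c \sim \frac{1}{\Delta\mu}
```

Showing that the RRN and EMT simulations agree on this behavior under identical conditions is what lets experimenters use either model interchangeably.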

The findings have a direct impact on future research into magnetoresistance, which has practical applications in a diverse range of electronic devices, such as speed sensors, mobile phones, washing machines and laptops. The principles of magnetoresistance are currently used in magnetic memory storage in hard drives, and some companies are aiming to produce sensitive magnetometers - devices that measure magnetic fields - that can operate at room temperature. This is a billion-dollar industry supporting applications in many aspects of everyday life, ranging from automobile collision warnings to traffic-light burnout detection.

Ms Lai and Ms Lara began this research as a summer project in their first year of undergraduate education, under the guidance of Assoc Prof Adam, who is also with the Centre for Advanced 2D Materials at NUS. Assoc Prof Adam highlighted both students' roles in the research, noting that they reviewed the existing literature, implemented the mathematical models in MATLAB, ran the simulations and carried out the subsequent analyses. The students also presented the findings at international conferences, including the American Physical Society March Meeting 2017.

Yale-NUS College funded the undergraduate students to work on this project. "This level of undergraduate engagement, not only in the research, but in shaping the direction of the work is extremely rare. At Yale-NUS, science students are able to actively participate in such research very early on in their learning experience," said Assoc Prof Adam.

Legion of Honor Award Ceremony (President Tatsuya Tanaka)

Fujitsu has announced that Tatsuya Tanaka, President and Representative Director of Fujitsu Limited, has been named a Chevalier (Knight) of the Légion d'Honneur (Legion of Honor) by the government of France. The decoration was bestowed in a January 25 ceremony held at the Élysée Palace in Paris.

The Legion of Honor, established by Napoléon Bonaparte in 1802, is the highest distinction conferred by the government of France. It is awarded to recognize those people and organizations that have made outstanding contributions to France in a variety of fields.

The Fujitsu Group began operations in France in 1999 and currently has approximately 380 employees in the country. As continental Europe's second-largest economy, France represents a strategically important market for Fujitsu, which actively conducts business there centered on IT services and system products, contributing to the French economy.

In addition, in March 2017, Fujitsu announced an investment to support digital transformation and innovation in France, with the French government's collaboration. This initiative has since involved leading French technology companies, higher education and research institutions, and French start-ups. 

The decoration recognizes the Fujitsu Group's contributions to the development of France and to France-Japan relations. Fujitsu's activities, including the creation of a Center of Excellence dedicated to Artificial Intelligence (AI) at Paris-Saclay in March 2017, aim to drive the development of technology ecosystems in France and to further strengthen the bilateral relationship.

"I would like to thank Tatsuya Tanaka for his personal commitment and trust in our country's economy, companies, as well as higher education and research institutes. I pay tribute, through him, to the strength of the Fujitsu Group, a global leader in digital innovation, and to its willingness to expand and develop in Europe, and especially in France," said French President Emmanuel Macron.

Tatsuya Tanaka, President and Representative Director commented, “It is indeed an honor to receive this prestigious decoration. I am overjoyed to have our activities thus far receive such high recognition. In developing our business in Europe, a region which for the Fujitsu Group is second in size only to Japan, France is a strategic market of great importance. We are also promoting various efforts in France as a development site for such fields of technology and services, given France's outstanding talent and strong AI ecosystem. We hope to bring these efforts to fruition, and by making ever-more progress in their development, to contribute to further growing the relationship of France and Japan.”

CAPTION: Researchers, from left to right: Nodar Samkharadze, Lieven Vandersypen and Guoji Zheng. CREDIT: TU Delft/Marieke de Lorijn

The worldwide race to create more numerous, better and more reliable quantum processors is progressing fast, as a team of TU Delft scientists led by Professor Vandersypen has demonstrated once again. In a neck-and-neck race with their competitors, they showed that the quantum information of an electron spin can be transferred to a photon on a silicon quantum chip. This is an important step toward connecting quantum bits across a chip and scaling up to large numbers of qubits. Their work was published today in the journal Science.

The quantum computer of the future will be able to carry out computations far beyond the capacity of today's high-performance computers. Quantum superposition and entanglement of quantum bits (qubits) make it possible to perform parallel computations. Scientists and companies worldwide are engaged in creating increasingly better quantum chips with more and more qubits. QuTech in Delft is working hard on several types of quantum chips.
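As a toy numerical illustration of the superposition principle mentioned above (not the Delft team's hardware or methods), a single qubit can be modelled as a pair of complex amplitudes, with measurement probabilities given by the squared magnitudes of those amplitudes:

```python
import math

# One qubit as two complex amplitudes: psi = a|0> + b|1>.
a, b = 1 + 0j, 0 + 0j          # start in the |0> state

# Apply a Hadamard gate: |0> -> (|0> + |1>) / sqrt(2),
# putting the qubit into an equal superposition of 0 and 1.
s = 1 / math.sqrt(2)
a, b = s * (a + b), s * (a - b)

# Born rule: the probability of reading out each value is |amplitude|^2.
p0, p1 = abs(a) ** 2, abs(b) ** 2
print(p0, p1)  # both ~0.5
```

Entangling many such qubits makes the state space grow exponentially, which is what gives a quantum chip its potential computational power.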

Familiar material

The core of the quantum chips is made of silicon. "This is a material that we are very familiar with," explains Professor Lieven Vandersypen of QuTech and the Kavli Institute of Nanoscience Delft, "Silicon is widely used in transistors and so can be found in all electronic devices." But silicon is also a very promising material for quantum technology. PhD candidate Guoji Zheng: "We can use electrical fields to capture single electrons in silicon for use as quantum bits (qubits). This is an attractive material as it ensures the information in the qubit can be stored for a long time."

Large systems

Performing useful computations requires large numbers of qubits, and it is this upscaling that poses a challenge worldwide. "To use a lot of qubits at the same time, they need to be connected to each other; there needs to be good communication," explains researcher Nodar Samkharadze. At present, the electrons captured as qubits in silicon can only make direct contact with their immediate neighbours. Nodar: "That makes it tricky to scale up to large numbers of qubits."

Neck-and-neck race

Other quantum systems use photons for long-distance interactions. For years, this was also a major goal for silicon. Only in recent years have various scientists made progress on this. The Delft scientists have now shown that a single electron spin and a single photon can be coupled on a silicon chip. This coupling makes it possible in principle to transfer quantum information between a spin and a photon. Guoji Zheng: "This is important to connect distant quantum bits on a silicon chip, thereby paving the way to upscaling quantum bits on silicon chips."

On to the next step

Vandersypen is proud of his team: "My team achieved this result in a relatively short time and under great pressure from worldwide competition." It is a true Delft breakthrough: "The substrate was made in Delft, the chip created in the Delft cleanrooms, and all measurements carried out at QuTech," adds Nodar Samkharadze. The scientists are now working hard on the next steps. Vandersypen: "The goal now is to transfer the information via a photon from one electron spin to another."

University of Antwerp PhD student Sander Wuyts has won the DNA Storage Bitcoin challenge, issued by Nick Goldman of the European Bioinformatics Institute (EMBL-EBI) in 2015. The value of the coin has risen sharply in the three years since, while the value of the scientific progress is inestimable.

*The challenge*

On 21 January 2015, Nick Goldman of the European Bioinformatics Institute (EMBL-EBI) explained a new method for storing digital information in DNA, developed at EMBL-EBI, to a packed audience at a World Economic Forum meeting in Davos, Switzerland. In his talk, he issued a challenge:

Goldman distributed test tubes containing samples of DNA encoding 1 Bitcoin to the audience (and subsequently posted samples to people who requested them). The first person to sequence (read) the DNA and decode the files it contains could take possession of the Bitcoin.

"Bitcoin is a form of money that now only exists on computers, and with cryptography, that's something we can easily store in DNA," explained Goldman in 2015. "We've bought a Bitcoin and encoded the information into DNA. You can follow the technical description of our method and sequence the sample, decode the Bitcoin. Whoever gets there first and decodes it gets the Bitcoin."

To win, competitors needed to decode the DNA sample to find its 'private key'. All they needed was the sample, access to a DNA sequencing machine and a grasp of how the code works.
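The idea behind such a code can be sketched with a deliberately simplified scheme: map every two bits of each byte to one of the four DNA bases. (This is a toy illustration only, not the actual EMBL-EBI method, which uses a more elaborate encoding designed to avoid runs of identical bases and to add error-tolerant redundancy.)

```python
# Toy mapping: two bits per base. A=00, C=01, G=10, T=11.
BASE_FOR = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
VAL_FOR = {base: val for val, base in BASE_FOR.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA string, four bases per byte (MSB first)."""
    return "".join(
        BASE_FOR[(byte >> shift) & 0b11]
        for byte in data
        for shift in (6, 4, 2, 0)
    )

def decode(dna: str) -> bytes:
    """Invert encode(): read back one byte from every four bases."""
    out = bytearray()
    for i in range(0, len(dna), 4):
        byte = 0
        for base in dna[i : i + 4]:
            byte = (byte << 2) | VAL_FOR[base]
        out.append(byte)
    return bytes(out)

print(encode(b"Hi"))          # CAGACGGC
print(decode(encode(b"Hi")))  # b'Hi'
```

Decoding the real sample was the reverse of this idea at scale: sequence the physical DNA, reassemble the reads into the right order, and map the bases back into the bytes of the original files, including the Bitcoin private key.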

The value of a Bitcoin in 2015: around 200 euros.

The deadline: 21 January 2018.

*Good timing*

One week from the deadline, Sander Wuyts, PhD student in the Department of Bioengineering at the University of Antwerp and Vrije Universiteit Brussel, Belgium, was the first to master the method and decode the private key, taking possession of the Bitcoin.

Its value on 19 January 2018: around 9500 euros.

*The contender*

Now completing his PhD in microbiology - exploring the universe of bacteria through DNA - Wuyts has the right balance of passion, coding skills and great colleagues to tackle complex puzzles like Goldman's 'DNA storage' Bitcoin challenge.

Wuyts saw Goldman issue the challenge on YouTube back in 2015, but it was the tweet about the deadline in December 2017 - plus the skills he had acquired in the meantime - that made him swing into action. He wrote to Goldman for a sample, sequenced the sample with help from his colleague Eline Oerlemans, and set to work decoding.

Once he got started, Wuyts became increasingly aware that decoding the Bitcoin wouldn't be quite as simple as following a recipe. After one failed attempt and an essential pause over Christmas, he worked tirelessly, with the help of his colleague Stijn Wittouck, to put the data from sequencing into the right order and decode the files.

It didn't work.

"One week before the deadline, I was starting to give up. It felt like we didn't produce enough good quality data to decode the whole thing. However, on the way home from a small 'hackathon' together with Stijn I realised that I made a mistake in one of the algorithms. At home I deleted just one line of code and re-ran the whole program overnight. That next Sunday I was extremely surprised and excited that suddenly the decoded files were right there, perfectly readable - I clicked on them and they opened. There were the instructions on how to claim the Bitcoin, a drawing of James Joyce and some other things."

*What Sander won*

Wuyts, who once invested poster-prize money in cryptocurrency to figure out how this technology works, is proud of having won the challenge but cautious about the prize.

"I didn't win thousands of euros, I won one Bitcoin," he says. "I will probably cash it out, because I have my doubts about the long-term value of this cryptocurrency. What's more important is that before participating in this challenge I had my doubts about the feasibility of such a DNA technology - but now I don't. Instead I have a new perspective on DNA which might come in handy in my future research."

Wuyts intends to use some of the proceeds from selling the Bitcoin to invest in future science projects, thank the people who helped him, and celebrate passing his PhD in style, with everyone who has supported him along the way. 
