LATEST

The Data Science Institute (DSI) at Columbia awarded Seeds Fund Grants to five research teams whose proposed projects will use state-of-the-art data science to solve seemingly intractable societal problems in the fields of cancer research, medical science, transportation and technology.

Each team will receive up to $100,000 for one year and be eligible for a second year of funding.

"In awarding these grants, the DSI review committee selected projects that brought together teams of scholars who aspire to push the state-of-the-art in data science by conducting novel, ethical and socially beneficial research," said Jeannette M.Wing, Avanessians Director of the Data Science Institute. "The five winning teams combine data-science experts with domain experts who'll work together to transform several fields throughout the university."

The inaugural DSI Seeds Fund Program supports collaborations between faculty members and researchers from various disciplines, departments and schools throughout Columbia University. DSI received several excellent proposals from faculty, which shows a growing and enthusiastic interest in data-science research at Columbia, added Wing.

The seed program is just one of many initiatives that Wing has spearheaded since the summer of 2017, when she was named director of DSI, a world-leading institution in the field of data science. The other initiatives include founding a Post-doctoral Fellows Program; a Faculty Recruiting Program; and an Undergraduate Research Program.

What follows are brief descriptions of the winning projects, along with the names and affiliations of the researchers on each team.

p(true): Distilling Truth by Community Rating of Claims on the Web:

The team: Nikolaus Kriegeskorte, Professor, Psychology and Director of Cognitive Imaging, Zuckerman Institute; Chris Wiggins, Associate Professor, Department of Applied Physics and Applied Mathematics, Columbia Engineering; Nima Mesgarani, Assistant Professor, Electrical Engineering Department, Columbia Engineering; Trenton Jerde, Lecturer, Applied Analytics Program, School of Professional Studies.

The social web is driven by feedback mechanisms, or "likes," which emotionalize the sharing culture and may contribute to the formation of echo chambers and political polarization, according to this team. In their p(true) project, the team will thus build a complementary mechanism for web-based sharing of reasoned judgments, so as to bring rationality to the social web.

Websites such as Wikipedia and Stack Overflow are surprisingly successful at providing a reliable representation of uncontroversial textbook knowledge, the team says. But the web doesn't work well in distilling the probability of contentious claims. The question the team seeks to answer is this: How can the web best be used to enable people to share their judgments and work together to find the truth?

The web gives users amazing power to communicate and collaborate, but users have yet to learn how to use that power to distill the truth, the team says. Web users can access content, share it with others, and give instant feedback on claims. But those actions end up boosting certain claims while blocking others, which amounts to a powerful mechanism of amplification and filtering. If the web is to help people think well together, then the mechanism that determines what information is amplified, the team maintains, should be based on rational judgment, rather than emotional responses as communicated by "likes" and other emoticons.

The team's goal is to build a website, called p(true), that enables people to rate and discuss claims (e.g., a New York Times headline) on the web. The team's method will enable the community to debunk falsehoods and lend support to solid claims. It's also a way to share reasoned judgments, pose questions to the community and start conversations.
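
As a rough illustration of the sort of aggregation such a site might perform (not the team's actual design), the sketch below pools individual probability ratings for a claim by averaging them in log-odds space; the function name and the numbers are hypothetical.

```python
import math

def pool_ratings(ratings):
    """Pool individual probability ratings for a claim into one estimate.

    `ratings` is a list of probabilities in (0, 1), one per rater.
    Averaging in log-odds space keeps a few extreme ratings from
    dominating the way a plain arithmetic mean of odds would.
    """
    clipped = [min(max(r, 1e-4), 1 - 1e-4) for r in ratings]   # guard against 0/1
    log_odds = [math.log(r / (1 - r)) for r in clipped]
    mean_lo = sum(log_odds) / len(log_odds)
    return 1 / (1 + math.exp(-mean_lo))

# Example: five raters judge a headline's probability of being true.
print(round(pool_ratings([0.9, 0.8, 0.85, 0.6, 0.95]), 3))
```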

Planetary Linguistics:

The team: David Kipping, Assistant Professor, Department of Astronomy; Michael Collins, Professor, Computer Science Department, Columbia Engineering.

In the last few years, the catalog of known exoplanets - planets orbiting other stars - has swelled from a few hundred to several thousand. With 200 billion stars in our galaxy alone and with the rise of online planet-hunting missions, an explosion of discovery of exoplanets seems imminent. Indeed, it has been argued that the cumulative number of known exoplanets doubles every 27 months, analogous to Moore's law, implying a million exoplanets by 2034, and a billion by 2057.
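
A quick arithmetic check of that projection, assuming only the 27-month doubling time quoted above:

```python
import math

# Consistency check of the 27-month doubling claim: going from 1 million to
# 1 billion known exoplanets takes log2(1000) doublings.
doubling_years = 27 / 12                          # 2.25 years per doubling
doublings = math.log2(1_000)                      # ~9.97 doublings
print(f"1 million -> 1 billion takes ~{doublings * doubling_years:.1f} years")
# ~22.4 years, consistent with the gap between the 2034 and 2057 figures.
```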

By studying the thousands of extrasolar planets that have been discovered in recent years, can researchers infer the rules by which planetary systems emerge and evolve? This team aims to answer that question by drawing upon an unusually interesting source: computational linguistics.

Planetary systems must be studied from afar and with limited information, according to the team. In that way, the challenge is similar to attempting to learn an unknown language from snippets of conversation, say Kipping and Collins. Drawing upon this analogy, the two will explore whether the mathematical tools of computational linguistics can reveal the "grammatical" rules of planet formation. Their goals include building predictive models, much like the predictive text on a smartphone, which will be capable of intelligently optimizing telescope resources. They also aim to uncover the rules and regularities in planetary systems, specifically through the application of grammar-induction methods used in computational linguistics.

The team maintains that the pre-existing tools of linguistics and information theory are ideally suited for unlocking the rules of planet formation. Using this framework, they'll consider each planet to be analogous to a word; each planetary system to be a sentence; and the galaxy to be a corpus of text from which they might blindly infer the grammatical rules. The basic thesis of their project is to see whether they can use the well-established mathematical tools of linguistics and information theory to improve our understanding of exoplanetary systems.

The three central questions they hope to answer are: Is it possible to build predictive models that assign a probability distribution over different planetary-system architectures? Can those models predict the presence and properties of missing planets in planetary systems? And can they uncover the rules and regularities in planetary systems, specifically through the application of grammar-induction methods used in computational linguistics?

In short, the team says it intends to "speak planet."
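
To make the analogy concrete, here is a toy sketch (illustrative only, with made-up planetary systems rather than real catalog data) of the kind of "predictive text" model the analogy suggests: each planet is reduced to a size-class "word", each system is a "sentence", and a bigram model estimates what kind of planet tends to come next.

```python
from collections import Counter, defaultdict

# Toy illustration of the "planets as words" idea: each planet is reduced
# to a size class ("rocky", "neptune", "jovian"), each system is a
# "sentence", and a bigram model predicts the next planet class.
# The systems below are invented for illustration only.
systems = [
    ["rocky", "rocky", "neptune", "jovian"],
    ["rocky", "neptune", "neptune"],
    ["rocky", "rocky", "jovian"],
]

bigrams = defaultdict(Counter)
for sys_words in systems:
    for inner, outer in zip(sys_words, sys_words[1:]):
        bigrams[inner][outer] += 1

def next_planet_distribution(inner_class):
    counts = bigrams[inner_class]
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Given an inner rocky planet, what tends to come next (in this toy corpus)?
print(next_planet_distribution("rocky"))
```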

A Game-Theoretical Framework for Modeling Strategic Interactions Between Autonomous and Human-Driven Vehicles:

The team: Xuan Sharon Di, Assistant Professor, Department of Civil Engineering and Engineering Mechanics, Columbia Engineering; Qiang Du, Professor, Applied Physics and Applied Mathematics, Columbia Engineering and Data Science Institute; Xi Chen, Associate Professor, Computer Science Department, Columbia Engineering; Eric Talley, Professor and Co-Director, Millstein Center for Global Markets and Corporate Ownership, Columbia Law School.

Autonomous vehicles, expected to arrive on roads over the course of the next decade, will connect with each other and to the surrounding environment by sending messages regarding timestamp, current location, speed and more.

Yet no one knows exactly how autonomous vehicles (AV) and conventional human-driven vehicles (HV) will co-exist and interact on roads. The vast majority of research has considered the engineering challenges from the perspective either of a single AV immersed in a world of human drivers or one in which only AVs are on the road. Much less research has focused on the transition path between these two scenarios. To fill that gap, this team will develop a framework using the game-theoretic approach to model strategic interactions of HVs and AVs. Their approach aims to understand the strategic interactions likely to link the two types of vehicles.
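
As a toy example of the kind of game such a framework studies (the actions and payoffs below are invented for illustration and are not the team's model), consider an AV and a human driver each deciding whether to yield or go at a merge, and the resulting pure-strategy Nash equilibria:

```python
import itertools

# Toy 2x2 "merge" game between an autonomous vehicle (AV) and a
# human-driven vehicle (HV). Payoffs (AV, HV) are illustrative only.
actions = ["yield", "go"]
payoffs = {
    ("yield", "yield"): (-1, -1),   # both hesitate: small delay for each
    ("yield", "go"):    (-2,  2),   # AV waits, HV merges first
    ("go",    "yield"): ( 2, -2),   # AV merges first, HV waits
    ("go",    "go"):    (-10, -10), # conflict / near-collision: worst for both
}

def pure_nash_equilibria(payoffs):
    eqs = []
    for a_av, a_hv in itertools.product(actions, repeat=2):
        u_av, u_hv = payoffs[(a_av, a_hv)]
        best_av = all(u_av >= payoffs[(alt, a_hv)][0] for alt in actions)
        best_hv = all(u_hv >= payoffs[(a_av, alt)][1] for alt in actions)
        if best_av and best_hv:
            eqs.append((a_av, a_hv))
    return eqs

print(pure_nash_equilibria(payoffs))  # [('yield', 'go'), ('go', 'yield')]
```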

Along with exploring these technical matters, the team will address the ethical issues associated with autonomous vehicles. What decision should an AV make when confronted with an obstacle? If it swerves to miss the obstacle, it will hit five people, including an old woman and a child, whereas if it goes straight it will hit a car and injure five passengers. Algorithms are designed to solve such problems, but bias can creep in depending on how one frames the problem. The team will investigate how to design algorithms that account for such ethical dilemmas while eliminating bias.

Predicting Personalized Cancer Therapies Using Deep Probabilistic Modeling:

The team: David Blei, Professor, Statistics and Computer Science Department, Columbia Engineering; Raul Rabadan, Professor, Systems Biology and Biomedical Informatics, CUMC; Anna Lasorella, Associate Professor, Pathology and Cell Biology and Pediatrics, CUMC; Wesley Tansey, Postdoctoral Research Scientist, Department of Systems Biology, CUMC.

Precision medicine aims to find the right drug for the right patient at the right moment and with the proper dosage. Such accuracy is especially relevant in cancer treatments, where standard therapies elicit different responses from different patients.

Cancers are caused by the accumulation of alterations in the genome. Exact causes vary between different tumors, and no two tumors have the same set of alterations. One can envision that by mapping a comprehensive set of causes of specific tumors, one can provide patient-specific drug recommendations. And drugs targeting specific alterations in the genome provide some of the most successful therapies.

This team has identified new alterations targeted by drugs that are either currently approved or in clinical trials. Most tumors, however, lack specific targets and patient-specific therapies. A recent study of 2,600 patients at the MD Anderson Cancer Center, for instance, showed that genetic analysis permits only 6.4 percent of patients to be paired with a drug aimed specifically at the mutation deemed responsible. This team believes the low percentage highlights the need for new approaches that will match drugs to genomic profiles.

The team's goal, therefore, is to model, predict, and target therapeutic sensitivity and resistance of cancer. The predictions will be validated in cancer models designed in Dr. Lasorella's lab, which will enhance the accuracy of the effort. They will also integrate Bayesian modeling with variational inference and deep-learning methods, leveraging the expertise of two leading teams in computational genomics (Rabadan's group) and machine learning (Blei's group) across the Medical and Morningside Heights campuses.

They have sequenced RNA and DNA from large collections of tumors and have access to databases of genomic, transcriptomic and drug response profiles of more than 1,000 cancer-cell lines. They will use their expertise in molecular characterization of tumors and machine-learning techniques to model and explore these databases.
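
As a heavily simplified stand-in for the deep probabilistic models described above, the sketch below fits a Bayesian linear regression of drug sensitivity on binary mutation calls, with a closed-form Gaussian posterior; the data are synthetic and the model is far simpler than anything the team would actually use.

```python
import numpy as np

# Much-simplified stand-in for probabilistic modeling of drug response:
# Bayesian linear regression of drug sensitivity on binary mutation features,
# with a conjugate Gaussian prior so the posterior is available in closed form.
rng = np.random.default_rng(0)
n_tumors, n_genes = 200, 5
X = rng.integers(0, 2, size=(n_tumors, n_genes)).astype(float)  # mutation calls
true_w = np.array([1.5, 0.0, -2.0, 0.5, 0.0])                   # "true" effects (synthetic)
sigma = 0.5                                                     # response noise sd
y = X @ true_w + rng.normal(0, sigma, n_tumors)                 # drug sensitivity

tau2 = 1.0                                                      # prior variance on effects
post_cov = np.linalg.inv(X.T @ X / sigma**2 + np.eye(n_genes) / tau2)
post_mean = post_cov @ X.T @ y / sigma**2

for g, (m, v) in enumerate(zip(post_mean, np.diag(post_cov))):
    print(f"gene {g}: effect {m:+.2f} +/- {1.96 * np.sqrt(v):.2f}")
```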

Enabling Small-Data Medical Research with Private Transferable Knowledge from Big Data:

The team: Roxana Geambasu, Associate Professor, Computer Science Department, Columbia Engineering; Nicholas Tatonetti, Herbert Irving Assistant Professor, Biomedical Informatics, CUMC; Daniel Hsu, Assistant Professor, Computer Science Department, Columbia Engineering.

Clinical data hold enormous promise to advance medical science. A major roadblock to more research in this area is the lack of infrastructural support for sharing large-scale clinical datasets across institutions. Virtually every hospital and clinic collects detailed medical records about its patients. For example, New York Presbyterian has developed a large-scale, longitudinal dataset, called the Clinical Data Warehouse, which contains clinical information from more than five million patients.


Such datasets are made available to researchers within their home institutions through the Institutional Review Board (IRB), but hospitals are usually wary of sharing data with other institutions. Their main concern is privacy, along with the legal and public-relations implications of a potential data breach. At times, hospitals will permit the sharing of statistics derived from their data, but cross-institution IRB approvals generally take years to finalize and often end in rejection. Siloing these large-scale clinical datasets severely constrains the quality and pace of innovation in data-driven medical science, limiting the scope and rigor of the research that can be conducted on them.

To overcome this challenge, the team is building an infrastructure system for sharing machine-learning models of large-scale, clinical datasets that will also preserve patients' privacy. The new system will enable medical researchers in small clinics or pharmaceutical companies to incorporate multitask feature models that are learned from big clinical datasets. The researchers, for example, could call upon New York Presbyterian's Clinical Data Warehouse and bootstrap their own machine-learning models on top of smaller clinical datasets. The multitask feature models, moreover, will protect the privacy of patient records in the large datasets through a rigorous method called differential privacy. The team anticipates that the system will vastly improve the pace of innovation in clinical-data research while alleviating privacy concerns.
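
For a sense of the core idea behind differential privacy (a textbook example, not the team's system), the sketch below releases a patient count with Laplace noise calibrated to the query's sensitivity; the records and function name are hypothetical.

```python
import numpy as np

# Minimal illustration of the Laplace mechanism behind differential privacy:
# a count query over patient records is released with noise calibrated to
# its sensitivity (adding or removing one patient changes a count by at most 1).
def dp_count(records, predicate, epsilon, rng):
    true_count = sum(predicate(r) for r in records)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)  # sensitivity / epsilon
    return true_count + noise

rng = np.random.default_rng(42)
patients = [{"age": a, "diabetic": a % 3 == 0} for a in range(40, 90)]  # fake records
print(dp_count(patients, lambda p: p["diabetic"], epsilon=0.5, rng=rng))
```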

New tools will fight skin condition, personalize diagnosis and management, and identify leads for drug treatment

An experienced interdisciplinary team of psoriasis and computational researchers from Case Western Reserve University School of Medicine (CWRU SOM) and University Hospitals Cleveland Medical Center (UHCMC) has received a $6.5M, 5-year grant from the National Institute of Arthritis, Musculoskeletal and Skin Diseases (NIAMS).

The grant supports a Center of Research Translation in Psoriasis (CORT) at CWRU and UHCMC.

"The CORT brings together the strengths of the Department of Dermatology and the Murdough Family Center for Psoriasis in psoriasis care and research with the innovative approaches of our Institute for Computational Biology, and Department of Population & Quantitative Health Sciences (PQHS)," said Kevin Cooper, MD who serves as NIH Contact Principal Investigator and Administrative Director of the Center.

The CORT will integrate cutting-edge technology and bioinformatics with basic and clinical science in order to advance translational discovery and application in psoriasis. The goal is to better identify and treat those psoriasis patients that are more susceptible to developing comorbidities (simultaneous medical conditions) associated with psoriasis, such as cardiovascular disease, diabetes, depression, and psoriatic arthritis. Currently it is difficult for physicians to determine which patients will develop these comorbidities.

The researchers will cull data collected from blood and skin samples of UHCMC psoriasis patients and preclinical models, looking for new patterns and relationships using a systems biology approach. The investigative team will combine these data with psoriasis-patient information from CLEARPATH, an Ohio-based database that integrates electronic medical records (EMR) from multiple hospital systems.

"Armed with this large pool of data and new ways to work with it, we can make better connections between groups of patients with similar forms of psoriasis versus an individual's unique biology and therapy options," said Mark Cameron, PhD from the department of PQHS, who leads the computational biology team.

This approach will use tailored computational processes to zero in on drug candidates or repurposed drugs that match the patient profiles, drawn from a large database of tens of thousands of drugs, and then test those drugs in genetically engineered psoriasis mouse models. The hope is that drugs that are successful in treating the mice will advance to humans.
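
One widely used computational screen of this general kind, shown here purely as an illustration with synthetic data rather than as the CORT's actual pipeline, scores each drug by how strongly its gene-expression signature anti-correlates with the disease signature:

```python
import numpy as np

# Connectivity-map-style drug screen (illustration only): score each drug by
# how strongly its expression signature anti-correlates with the disease
# signature, i.e. how well it might "reverse" the disease profile.
rng = np.random.default_rng(1)
genes = 50
disease_signature = rng.normal(size=genes)           # psoriasis-like profile (made up)

drug_signatures = {f"drug_{i}": rng.normal(size=genes) for i in range(10)}
drug_signatures["drug_reverser"] = -disease_signature + rng.normal(0, 0.3, genes)

def reversal_score(drug_sig, disease_sig):
    r = np.corrcoef(drug_sig, disease_sig)[0, 1]
    return -r                                         # higher = stronger reversal

ranked = sorted(drug_signatures,
                key=lambda d: reversal_score(drug_signatures[d], disease_signature),
                reverse=True)
print(ranked[:3])                                     # "drug_reverser" should rank first
```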

"The CWRU/UHCMC CORT is a focal point for new and innovative mouse models that mimic psoriasis in humans--including, crucially, comorbidities of human psoriasis patients," said the project's preclinical lead investigator, Nicole Ward, PhD, from the Department of Dermatology.

"We're getting better and better at managing psoriasis patients' skin disease," said Cooper. "But we still don't have a complete grasp of the comorbidities. Through this form of personalized medicine, we think we can make great strides in determining which psoriasis patients are likely to suffer from the various co-occurring ailments, ultimately fashioning treatments for them."

Published study involving Yale-NUS undergraduates provides evidence of similarities between two different mathematical models impacting magnetoresistance research

Two undergraduates at Yale-NUS College in Singapore are part of a research team that concluded that two different mathematical models, which describe the same physical phenomenon, are essentially equivalent. The discovery could have implications for future research into magnetoresistance and its practical applications in a wide range of electronic devices. After implementing the two different models of magnetoresistance as computer simulations, Lai Ying Tong, 21, and Silvia Lara, 22, found that the two simulations produced similar results under identical conditions. Magnetoresistance is a physical phenomenon where the electrical resistivity of a material changes when it is subjected to a magnetic field. The research was published in the peer-reviewed journal Physical Review B in December 2017, and presented at international conferences in 2016 and 2017.

The two Yale-NUS undergraduate students worked on the project under the mentorship of Associate Professor Shaffique Adam from Yale-NUS College and the Department of Physics at the National University of Singapore's (NUS) Faculty of Science, and Associate Professor Meera Parish from Monash University. They were guided by Navneeth Ramakrishnan, a Master's student at the Department of Physics at the NUS Faculty of Science and the NUS Centre for Advanced 2D Materials, who checked their results and wrote the paper. The findings provided a unified theoretical framework to understand a phenomenon known as 'linear unsaturating magnetoresistance', as well as clear predictions on how to manipulate the effect. Prior to their research, two separate theoretical mathematical models had been proposed to describe how the phenomenon works: the Random Resistance Network (RRN) model and the Effective Medium Theory (EMT) model. Empiricists exploring magnetoresistance generally refer to one of these two models to contextualise their experiments, but do not provide a detailed comparison between the theories and their experimental results. This latest finding not only unifies the two existing theories, but also validates that these theories are accurate descriptions that match experimental data.
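
For reference, magnetoresistance is typically quantified as the fractional change in resistivity with applied field,

$$\mathrm{MR}(B) = \frac{\rho(B) - \rho(0)}{\rho(0)},$$

and 'linear unsaturating magnetoresistance' refers to this ratio continuing to grow roughly linearly with the field strength rather than levelling off.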

The findings have a direct impact on future research into magnetoresistance, which has practical applications in a diverse range of electronic devices, such as speed sensors, mobile phones, washing machines, and laptops. The principles of magnetoresistance are currently used in magnetic memory storage in hard drives, and certain companies are aiming to produce sensitive magnetometers - devices which measure magnetic fields - that can operate at room temperatures. This is a billion dollar industry which supports applications in many aspects of everyday life ranging from automobile collision warnings to traffic light burnout detection.

Ms Lai and Ms Lara began this research as a summer research project in their first year of undergraduate education, under the guidance of Assoc Prof Adam, who is also with the Centre for Advanced 2D Materials at NUS. Assoc Prof Adam highlighted both students' roles in the research, noting that they reviewed existing literature, implemented the mathematical models in MATLAB, ran the simulations and carried out the subsequent analyses. The students also presented the research findings at international conferences, such as the American Physical Society March Meeting 2017.

Yale-NUS College funded the undergraduate students to work on this project. "This level of undergraduate engagement, not only in the research, but in shaping the direction of the work is extremely rare. At Yale-NUS, science students are able to actively participate in such research very early on in their learning experience," said Assoc Prof Adam.

Legion of Honor Award Ceremony (President Tatsuya Tanaka)

Fujitsu has announced that Tatsuya Tanaka, President and Representative Director of Fujitsu Limited, has been named a Chevalier (Knight) of the Légion d'Honneur (Legion of Honor) by the government of France. This high-ranking honorary decoration was bestowed in a January 25 ceremony held at the Élysée Palace in Paris.

The Legion of Honor, established by Napoléon Bonaparte in 1802, is the highest distinction conferred by the government of France. It is awarded to recognize those people and organizations that have made outstanding contributions to France in a variety of fields.

The Fujitsu Group began operations in France in 1999 and currently has approximately 380 employees in the country. As continental Europe's second-largest economy, France represents a strategically important market for Fujitsu, which actively conducts business activities there centered on IT services and system products, contributing to the French economy.

In addition, in March 2017, Fujitsu announced an investment to support digital transformation and innovation in France, with the French government's collaboration. This initiative has since involved leading French technology companies, higher education and research institutions, and French start-ups. 

This decoration recognizes the Fujitsu Group's contributions to the development of France and to France-Japan relations. Fujitsu's activities, including the creation of a Center of Excellence dedicated to Artificial Intelligence (AI) in Paris-Saclay in March 2017, aim to drive the development of technology ecosystems in France and to further strengthen the bilateral relationship between the two countries.

"I would like to thank Tatsuya Tanaka for his personal commitment and trust in our country's economy, companies, as well as higher education and research institutes. I pay tribute, through him, to the strength of the Fujitsu Group, a global leader in digital innovation, and to its willingness to expand and develop in Europe, and especially in France," said French President Emmanuel Macron.

Tatsuya Tanaka, President and Representative Director commented, “It is indeed an honor to receive this prestigious decoration. I am overjoyed to have our activities thus far receive such high recognition. In developing our business in Europe, a region which for the Fujitsu Group is second in size only to Japan, France is a strategic market of great importance. We are also promoting various efforts in France as a development site for such fields of technology and services, given France's outstanding talent and strong AI ecosystem. We hope to bring these efforts to fruition, and by making ever-more progress in their development, to contribute to further growing the relationship of France and Japan.”

Image caption: Researchers, from left to right: Nodar Samkharadze, Lieven Vandersypen and Guoji Zheng. Credit: TU Delft/Marieke de Lorijn

The worldwide race to create more, better and more reliable quantum processors is progressing fast, as a team of TU Delft scientists led by Professor Vandersypen has demonstrated yet again. In a neck-and-neck race with their competitors, they showed that quantum information carried by an electron spin can be transferred to a photon in a silicon quantum chip. This is an important step toward connecting quantum bits across the chip and scaling up to large numbers of qubits. Their work was published today in the journal Science.

The quantum computer of the future will be able to carry out computations far beyond the capacity of today's high-performance computers. Quantum superpositions and entanglement of quantum bits (qubits) make it possible to perform parallel computations. Scientists and companies worldwide are engaged in creating ever-better quantum chips with more and more quantum bits. QuTech in Delft is working hard on several types of quantum chips.

Familiar material

The core of the quantum chips is made of silicon. "This is a material that we are very familiar with," explains Professor Lieven Vandersypen of QuTech and the Kavli Institute of Nanoscience Delft, "Silicon is widely used in transistors and so can be found in all electronic devices." But silicon is also a very promising material for quantum technology. PhD candidate Guoji Zheng: "We can use electrical fields to capture single electrons in silicon for use as quantum bits (qubits). This is an attractive material as it ensures the information in the qubit can be stored for a long time."

Large systems

Useful computations require large numbers of qubits, and it is this upscaling to large numbers that poses a challenge worldwide. "To use a lot of qubits at the same time, they need to be connected to each other; there needs to be good communication", explains researcher Nodar Samkharadze. At present the electrons that are captured as qubits in silicon can only make direct contact with their immediate neighbours. Nodar: "That makes it tricky to scale up to large numbers of qubits."

Neck-and-neck race

Other quantum systems use photons for long-distance interactions. For years, this was also a major goal for silicon. Only in recent years have various scientists made progress on this. The Delft scientists have now shown that a single electron spin and a single photon can be coupled on a silicon chip. This coupling makes it possible in principle to transfer quantum information between a spin and a photon. Guoji Zheng: "This is important to connect distant quantum bits on a silicon chip, thereby paving the way to upscaling quantum bits on silicon chips."

On to the next step

Vandersypen is proud of his team: "My team achieved this result in a relatively short time and under great pressure from worldwide competition." It is a true Delft breakthrough: "The substrate is made in Delft, the chip created in the Delft cleanrooms, and all measurements carried out at QuTech," adds Nodar Samkharadze. The scientists are now working hard on the next steps. Vandersypen: "The goal now is to transfer the information via a photon from one electron spin to another."


University of Antwerp Ph.D. student Sander Wuyts has won the DNA Storage Bitcoin challenge, issued by Nick Goldman of EMBL-EBI in 2015

University of Antwerp PhD student Sander Wuyts has won the DNA Storage Bitcoin challenge, issued by Nick Goldman of the European Bioinformatics Institute (EMBL-EBI) in 2015. The value of the coin has risen rapidly in three years, while the value of scientific progress is inestimable.

*The challenge*

On 21 January 2015, Nick Goldman of the European Bioinformatics Institute (EMBL-EBI) explained a new method for storing digital information in DNA, developed at EMBL-EBI, to a packed audience at a World Economic Forum meeting in Davos, Switzerland. In his talk, he issued a challenge:

Goldman distributed test tubes containing samples of DNA encoding 1 Bitcoin to the audience (and subsequently posted samples to people who requested them). The first person to sequence (read) the DNA and decode the files it contains could take possession of the Bitcoin.

"Bitcoin is a form of money that now only exists on computers, and with cryptography, that's something we can easily store in DNA," explained Goldman in 2015. "We've bought a Bitcoin and encoded the information into DNA. You can follow the technical description of our method and sequence the sample, decode the Bitcoin. Whoever gets there first and decodes it gets the Bitcoin."

To win, competitors needed to decode the DNA sample to find its 'private key'. All they needed was the sample, access to a DNA sequencing machine and a grasp of how the code works.
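
As a toy illustration of reading digital data back out of a DNA sequence (the actual EMBL-EBI encoding is more elaborate, using a rotating base-3 code that avoids repeated bases, plus addressing and error tolerance), a simple two-bits-per-base mapping looks like this:

```python
# Toy illustration only: map DNA bases to bits and back. This is NOT the
# EMBL-EBI scheme used in the challenge; it just conveys the general idea
# that a DNA sequence can carry arbitrary bytes.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def dna_to_bytes(seq):
    bits = "".join(BASE_TO_BITS[b] for b in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def bytes_to_dna(data):
    bits = "".join(f"{byte:08b}" for byte in data)
    inv = {v: k for k, v in BASE_TO_BITS.items()}
    return "".join(inv[bits[i:i + 2]] for i in range(0, len(bits), 2))

message = b"private key"
seq = bytes_to_dna(message)
print(seq)
print(dna_to_bytes(seq))   # b'private key'
```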

The value of a Bitcoin in 2015: around 200 euros.

The deadline: 21 January 2018.

*Good timing*

One week from the deadline, Sander Wuyts, PhD student in the Department of Bioengineering at the University of Antwerp and Vrije Universiteit Brussel, Belgium, was the first to master the method and decode the private key, taking possession of the Bitcoin.

Its value on 19 January 2018: around 9500 euros.

*The contender*

Now completing his PhD in microbiology, exploring the universe of bacteria through DNA, Wuyts has the right balance of passion, coding skills and great colleagues to tackle complex puzzles like Goldman's 'DNA storage' Bitcoin challenge.

Wuyts saw Goldman issue the challenge on YouTube back in 2015, but it was the tweet about the deadline in December 2017 - plus the skills he had acquired in the meantime - that made him swing into action. He wrote to Goldman for a sample, sequenced the sample with help from his colleague Eline Oerlemans, and set to work decoding.

Once he got started, Wuyts became increasingly aware that decoding the Bitcoin wouldn't be quite as simple as following a recipe. After one failed attempt and an essential pause over Christmas, he worked tirelessly, with the help of his colleague Stijn Wittouck, to put the data from sequencing into the right order and decode the files.

It didn't work.

"One week before the deadline, I was starting to give up. It felt like we didn't produce enough good quality data to decode the whole thing. However, on the way home from a small 'hackathon' together with Stijn I realised that I made a mistake in one of the algorithms. At home I deleted just one line of code and re-ran the whole program overnight. That next Sunday I was extremely surprised and excited that suddenly the decoded files were right there, perfectly readable - I clicked on them and they opened. There were the instructions on how to claim the Bitcoin, a drawing of James Joyce and some other things."

*What Sander won*

Wuyts, who once invested poster-prize money in cryptocurrency to figure out how this technology works, is proud of having won the challenge but cautious about the prize.

"I didn't win thousands of euros, I won one Bitcoin," he says. "I will probably cash it out, because I have my doubts about the long-term value of this cryptocurrency. What's more important is that before participating in this challenge I had my doubts about the feasibility of such a DNA technology - but now I don't. Instead I have a new perspective on DNA which might come in handy in my future research."

Wuyts intends to use some of the proceeds from selling the Bitcoin to invest in future science projects, thank the people who helped him, and celebrate passing his PhD in style, with everyone who has supported him along the way. 

Artificial neural networks and a database of real cases have revealed the most predictive factors of corruption. / Pixabay

Researchers from the University of Valladolid in Spain have created a computer model based on neural networks that predicts in which Spanish provinces cases of corruption are more likely to appear, as well as the conditions that favor their appearance. This alert system confirms that the probability increases when the same party stays in government for more years.

The two researchers from the University of Valladolid developed the model with artificial neural networks to predict in which Spanish provinces corruption cases are more likely to appear after one, two and up to three years.

The study, published in Social Indicators Research, does not mention the provinces most prone to corruption so as not to generate controversy, explains one of the authors, Ivan Pastor, to Sinc, who recalls that, in any case, "a greater propensity or high probability does not imply corruption will actually happen."

The data indicate that the real estate tax (Impuesto de Bienes Inmuebles), exaggerated increases in housing prices, the opening of bank branches and the creation of new companies are some of the variables that seem to induce public corruption; when they occur together in a region, this should be taken into account in order to carry out more rigorous control of the public accounts.

"In addition, as might be expected, our model confirms that the increase in the number of years in the government of the same political party increases the chances of corruption, regardless of whether or not the party governs with majority," says Pastor.

"Anyway, fortunately - he adds -, for the next years this alert system predicts less indications of corruption in our country. This is mainly due to the greater public pressure on this issue and to the fact that the economic situation has worsened significantly during the crisis".

To carry out the study, the authors relied on all the cases of corruption that appeared in Spain between 2000 and 2012, such as the Mercasevilla case (in which the managers of this public company of the Seville City Council were charged) and the Baltar case (in which the president of the Diputación de Ourense was sentenced over more than a hundred contracts "that did not comply with the legal requirements").

The collection and analysis of all this information has been done with neural networks, which show the most predictive factors of corruption. "The use of this AI technique is novel, as well as that of a database of real cases, since until now more or less subjective indexes of perception of corruption were used, scorings assigned to each country by agencies such as Transparency International, based on surveys of businessmen and national analysts", highlights Pastor.

The authors hope that this study will help to better direct efforts to end corruption, focusing them on the areas with the greatest propensity for it to appear, and they plan to continue working to apply their model internationally.

Ocean circulation around the area affected by the Sanchi spill. Brighter colour indicates faster currents and arrows indicate current direction. Of particular note is the strong flow of the Kuroshio Current running diagonally from left to right. This is a western boundary current similar to the Atlantic’s Gulf Stream. Within the East China, Yellow and Japan seas, important local currents such as the China Coastal Current and the Tsushima Warm Current can also be seen, although these are much weaker.

An updated emergency ocean model simulation shows that waters polluted by the sinking Sanchi oil tanker could reach Japan within a month. The new supercomputer simulation also shows that although pollution is most likely to reach the Japanese coast, it is also likely to affect Jeju Island, with a population of 600,000. However, the fate of the leaking oil is highly uncertain, as it may burn, evaporate, or mix into the surface ocean and contaminate the environment for an extended duration.

These latest predictions have been made possible by new information about where the Sanchi oil tanker finally sank. Based on this update, the team of scientists from the National Oceanography Centre (NOC) and the University of Southampton have run new ocean model simulations to assess the potential impact of local ocean circulation on the spread of pollutants. These simulations were run on the leading-edge, high-resolution global ocean circulation model, NEMO.

The Sanchi tanker collision originally occurred on the border between the Yellow and East China seas, an area with complex, strong and highly variable surface currents. However, in the following week, the stricken tanker drifted before sinking much closer to the edge of the East China Sea, and to the major western boundary current known as the Kuroshio Current.

The predictions of these new simulations differ significantly from those released last week, which suggested that contaminated waters would largely remain offshore, but could reach the Korean coast within three months. Using the same methods, but this new spill location, the revised simulations find that pollution may now be entrained within the Kuroshio and Tsushima currents.

These currents run adjacent to the northern and southern coasts of southwestern Japan. In the case of contaminated waters reaching the Kuroshio, the simulations suggest these will be transported quickly along the southern coasts of Kyushu, Shikoku and Honshu islands, potentially reaching the Greater Tokyo Area within two months. Pollution within the Kuroshio may then be swept into deeper oceanic waters of the North Pacific.

The revised simulations suggest that pollution from the spill may be distributed much further and faster than previously thought, and that larger areas of the coast may be impacted. The new simulations also shift the focus of possible impacts from South Korea to the Japanese mainland, where many more people and activities, including fisheries, may be affected.

Leading this research, Dr Katya Popova from the National Oceanography Centre, said: “Oil spills can have a devastating effect on the marine environment and on coastal communities. Strong ocean currents mean that, once released into the ocean, an oil spill can relatively rapidly spread over large distances. So understanding ocean currents and the timescale on which they transport ocean pollutants is critical during any maritime accidents, especially ones involving oil leaks.”

The team of scientists involved in this study ‘dropped’ virtual oil particles into the NEMO ocean model and tracked where they ended up over a three month period. Simulations were run for a series of scenarios of ocean circulation typical for the area the oil spill occurred in, and for this time of year. This allowed the scientists to produce a map of the potential extent of the oil spill, showing the risk of oil pollutants reaching a particular part of the ocean.
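
A minimal sketch of that particle-tracking idea, using a made-up current field and forward-Euler steps rather than NEMO's actual velocity output, is shown below:

```python
import numpy as np

# Minimal sketch of Lagrangian particle tracking: advect virtual "oil"
# particles through a prescribed 2-D current field with forward-Euler steps.
# The NOC team used velocity fields from the NEMO model; the field and the
# starting positions below are invented purely for illustration.
def current(x, y):
    u = 0.5 + 0.3 * np.exp(-((y - 30.0) ** 2) / 4.0)   # m/s, faster near y = 30
    v = 0.05 * np.sin(x / 50.0)
    return u, v

dt = 3600.0                        # one-hour time step, in seconds
n_steps = 24 * 90                  # roughly three months
deg_per_m = 1.0 / 111_000.0        # rough conversion from metres to degrees

x = np.array([122.0, 122.5, 123.0])   # starting longitudes (assumed spill area)
y = np.array([30.0, 30.2, 29.8])      # starting latitudes

for _ in range(n_steps):
    u, v = current(x, y)
    x = x + u * dt * deg_per_m
    y = y + v * dt * deg_per_m

print(np.round(x, 2), np.round(y, 2))  # particle positions after ~3 months
```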

However, Stephen Kelly, the University of Southampton PhD student who ran the model simulations, said: “There was a high level of variation between different scenarios, depending on a number of factors. Primarily the location of the original oil spill and the way in which atmospheric conditions were affecting ocean circulation at that time.”

NOC scientist, Dr Andrew Yool, who collaborated in this study, discussed how the approach used during these model simulations could help optimise future search and recovery operations at sea by rapidly modelling oil spills in real-time. He said: “By using pre-existing ocean model output we can estimate which areas could potentially be affected over weekly to monthly timescales, and quickly at low computing cost. This approach complements traditional forecast simulations, which are very accurate for a short period of time but lose their reliability on timescales that are required to understand the fate of the spill on the scale from days to weeks.”

The NEMO ocean model is supported by UK National Capability funding from the Natural Environment Research Council (NERC). This model is widely used by both UK and international groups for research into ocean circulation, climate and marine ecosystems, and operationally as part of the UK Met Office’s weather forecasting.

The ability to quantify the extent of kidney damage and predict the remaining life of the kidney, using an image obtained at the time a patient visits the hospital for a kidney biopsy, is now possible using a computer model based on artificial intelligence (AI).

The findings, which appear in the journal Kidney International Reports, can help make predictions at the point-of-care and assist clinical decision-making.

Nephropathology is a specialization that analyzes kidney biopsy images. While large clinical centers in the U.S. might greatly benefit from having 'in-house' nephropathologists, such expertise is not available in most parts of the country or around the world.

According to the researchers, the application of machine learning frameworks, such as convolutional neural networks (CNN) for object recognition tasks, is proving to be valuable for classification of diseases as well as reliable for the analysis of radiology images including malignancies.

To test the feasibility of applying this technology to the analysis of routinely obtained kidney biopsies, the researchers performed a proof of principle study on kidney biopsy sections with various amounts of kidney fibrosis (also commonly known as scarring of tissue). The machine learning framework based on CNN relied on pixel density of digitized images, while the severity of disease was determined by several clinical laboratory measures and renal survival. CNN model performance then was compared with that of the models generated using the amount of fibrosis reported by a nephropathologist as the sole input and corresponding lab measures and renal survival as the outputs. For all scenarios, CNN models outperformed the other models.
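
A minimal sketch of such a CNN, mapping a biopsy image patch to a fibrosis score (the architecture, sizes and class name are arbitrary and do not reproduce the published model), could look like this:

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN that maps a digitized biopsy patch to a fibrosis
# score, in the spirit of the study (regression on pixel data).
class FibrosisCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 32), nn.ReLU(),
            nn.Linear(32, 1),                      # predicted % fibrosis
        )

    def forward(self, x):
        return self.head(self.features(x))

model = FibrosisCNN()
patches = torch.randn(4, 1, 64, 64)               # four fake 64x64 grayscale patches
print(model(patches).shape)                        # torch.Size([4, 1])
```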

"While the trained eyes of expert pathologists are able to gauge the severity of disease and detect nuances of kidney damage with remarkable accuracy, such expertise is not available in all locations, especially at a global level. Moreover, there is an urgent need to standardize the quantification of kidney disease severity such that the efficacy of therapies established in clinical trials can be applied to treat patients with equally severe disease in routine practice," explained corresponding author Vijaya B. Kolachalama, PhD, assistant professor of medicine at Boston University School of Medicine. "When implemented in the clinical setting, our work will allow pathologists to see things early and obtain insights that were not previously available," said Kolachalama.

The researchers believe their model has both diagnostic and prognostic applications and may lead to the development of a software application for diagnosing kidney disease and predicting kidney survival. "If healthcare providers around the world can have the ability to classify kidney biopsy images with the accuracy of a nephropathologist right at the point-of-care, then this can significantly impact renal practice. In essence, our model has the potential to act as a surrogate nephropathologist, especially in resource-limited settings," said Kolachalama.

Three attending nephrologists, Vipul Chitalia, MD, David Salant, MD, and Jean Francis, MD, as well as nephropathologist Joel Henderson, MD, all from Boston Medical Center, contributed to this study.

Each year, kidney disease kills more people than breast or prostate cancer, and the overall prevalence of chronic kidney disease (CKD) in the general population is approximately 14 percent. More than 661,000 Americans have kidney failure. Of these, 468,000 individuals are on dialysis, and roughly 193,000 live with a functioning kidney transplant. In 2013, more than 47,000 Americans died from kidney disease. Medicare spending for patients with CKD ages 65 and older exceeded $50 billion in 2013 and represented 20 percent of all Medicare spending in this age group. Medicare fee-for-service spending for kidney failure beneficiaries rose by 1.6 percent, from $30.4 billion in 2012 to $30.9 billion in 2013, accounting for 7.1 percent of the overall Medicare paid claims costs.

  1. KU researchers use machine learning to predict new details of geothermal heat flux beneath the Greenland Ice Sheet
  2. SETI project homes in on strange 'fast radio bursts'
  3. NYC Health Department spots 10 outbreaks of foodborne illness using Yelp reviews since 2012
  4. CaseMed researchers win $2.8 million to repurpose FDA-approved drugs to treat Alzheimer's disease
  5. UMN makes new discovery that improves brain-like memory, supercomputing
  6. Activist investor Starboard calls for substantial change at Israel’s Mellanox
  7. KU researchers make solar energy more efficient
  8. Researchers discover serious processor vulnerability
  9. We need one global network of 1000 stations to build an Earth observatory
  10. Statistical test relates pathogen mutation to infectious disease progression
  11. Reich lab creates collaborative network for combining models to improve flu season forecasts
  12. The VLA detected radio waves point to likely explanation for neutron-star merger phenomena
  13. Germany’s Tübingen University physicists are the first to link atoms and superconductors in key step towards new hardware for quantum computers, networks
  14. How great is the influence, risk of social, political bots?
  15. Easter Island had a cooperative community, analysis of giant hats reveals
  16. Study resolves controversy about electron structure of defects in graphene
  17. AI insights could help reduce injuries in construction industry
  18. Computational modeling key to design supercharged virus-killing nanoparticles
  19. UW's Hyak supercomputer overcomes obstacles in peptide drug development
  20. Brigham and Women's Hospital's Golden findings show potential use of AI in detecting spread of breast cancer
