University of Antwerp PhD student Sander Wuyts has won the DNA Storage Bitcoin challenge, issued by Nick Goldman of the European Bioinformatics Institute (EMBL-EBI) in 2015. The value of the coin has risen rapidly in three years, while the value of scientific progress is inestimable.

*The challenge*

On 21 January 2015, Nick Goldman of the European Bioinformatics Institute (EMBL-EBI) explained a new method for storing digital information in DNA, developed at EMBL-EBI, to a packed audience at a World Economic Forum meeting in Davos, Switzerland. In his talk, he issued a challenge:

Goldman distributed test tubes containing samples of DNA encoding 1 Bitcoin to the audience (and subsequently posted samples to people who requested them). The first person to sequence (read) the DNA and decode the files it contained could take possession of the Bitcoin.

"Bitcoin is a form of money that now only exists on computers, and with cryptography, that's something we can easily store in DNA," explained Goldman in 2015. "We've bought a Bitcoin and encoded the information into DNA. You can follow the technical description of our method and sequence the sample, decode the Bitcoin. Whoever gets there first and decodes it gets the Bitcoin."

To win, competitors needed to decode the DNA sample to find its 'private key'. All they needed was the sample, access to a DNA sequencing machine and a grasp of how the code works.
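For a sense of what "how the code works" means in practice, below is a minimal sketch of one layer of the published encoding scheme (Goldman et al., Nature, 2013), in which each base-3 digit is written as whichever of the three nucleotides differs from the previous base, avoiding error-prone repeated bases. The full method also involves Huffman compression, indexing and overlapping fragments; this toy decoder, with a hypothetical function name and assumed starting base, only reverses the nucleotide rotation.

```python
# Sketch of the trit-decoding step in the Goldman et al. (2013) DNA storage
# scheme: each base differs from its predecessor, and the size of the
# "rotation" (1, 2 or 3 positions around A->C->G->T) encodes a base-3 digit.
ORDER = "ACGT"

def dna_to_trits(seq, prev="A"):
    """Recover the base-3 digit stream from a DNA read.

    `prev` is the assumed base preceding the read (hypothetical default).
    """
    trits = []
    for base in seq:
        step = (ORDER.index(base) - ORDER.index(prev)) % 4
        if step == 0:
            raise ValueError("repeated base: invalid in this encoding")
        trits.append(step - 1)  # steps 1, 2, 3 map to trits 0, 1, 2
        prev = base
    return trits

print(dna_to_trits("CTAG"))  # [0, 1, 0, 1]
```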

The value of a Bitcoin in 2015: around 200 euros.

The deadline: 21 January 2018.

*Good timing*

One week from the deadline, Sander Wuyts, PhD student in the Department of Bioengineering at the University of Antwerp and Vrije Universiteit Brussel, Belgium, was the first to master the method and decode the private key, taking possession of the Bitcoin.

Its value on 19 January 2018: around 9500 euros.

*The contender*

Now completing his PhD in microbiology - exploring the universe of bacteria through DNA - Wuyts has the right balance of passion, coding skills and great colleagues to tackle complex puzzles like Goldman's DNA-storage Bitcoin challenge.

Wuyts saw Goldman issue the challenge on YouTube back in 2015, but it was the tweet about the deadline in December 2017 - plus the skills he had acquired in the meantime - that made him swing into action. He wrote to Goldman for a sample, sequenced it with help from his colleague Eline Oerlemans, and set to work decoding.

Once he got started, Wuyts became increasingly aware that decoding the Bitcoin wouldn't be quite as simple as following a recipe. After one failed attempt and an essential pause over Christmas, he worked tirelessly, with the help of his colleague Stijn Wittouck, to put the data from sequencing into the right order and decode the files.

It didn't work.

"One week before the deadline, I was starting to give up. It felt like we didn't produce enough good quality data to decode the whole thing. However, on the way home from a small 'hackathon' together with Stijn I realised that I made a mistake in one of the algorithms. At home I deleted just one line of code and re-ran the whole program overnight. That next Sunday I was extremely surprised and excited that suddenly the decoded files were right there, perfectly readable - I clicked on them and they opened. There were the instructions on how to claim the Bitcoin, a drawing of James Joyce and some other things."

*What Sander won*

Wuyts, who once invested poster-prize money in cryptocurrency to figure out how this technology works, is proud of having won the challenge but cautious about the prize.

"I didn't win thousands of euros, I won one Bitcoin," he says. "I will probably cash it out, because I have my doubts about the long-term value of this cryptocurrency. What's more important is that before participating in this challenge I had my doubts about the feasibility of such a DNA technology - but now I don't. Instead I have a new perspective on DNA which might come in handy in my future research."

Wuyts intends to use some of the proceeds from selling the Bitcoin to invest in future science projects, thank the people who helped him, and celebrate passing his PhD in style, with everyone who has supported him along the way. 

CAPTION Artificial neural networks and a database of real cases have revealed the most predictive factors of corruption. / Pixabay

Researchers from the University of Valladolid in Spain have created a supercomputer model based on neural networks that predicts in which Spanish provinces cases of corruption are more likely to appear, as well as the conditions that favor their appearance. This alert system confirms that the probability increases the longer the same party stays in government.

Two researchers from the University of Valladolid have developed a model with artificial neural networks to predict in which Spanish provinces corruption cases are most likely to appear one, two and up to three years in advance.

The study, published in Social Indicators Research, does not name the provinces most prone to corruption, so as not to generate controversy, one of the authors, Ivan Pastor, explains to SINC. He notes that, in any case, "a greater propensity or high probability does not imply that corruption will actually happen."

The data indicate that property tax (Impuesto de Bienes Inmuebles), steep increases in housing prices, the opening of bank branches and the creation of new companies are among the variables that seem to foster public corruption; when they coincide in a region, its public accounts warrant more rigorous control.

"In addition, as might be expected, our model confirms that the increase in the number of years in the government of the same political party increases the chances of corruption, regardless of whether or not the party governs with majority," says Pastor.

"Anyway, fortunately - he adds -, for the next years this alert system predicts less indications of corruption in our country. This is mainly due to the greater public pressure on this issue and to the fact that the economic situation has worsened significantly during the crisis".

To carry out the study, the authors drew on all the corruption cases that came to light in Spain between 2000 and 2012, such as the Mercasevilla case (in which the managers of this public company of the Seville City Council were charged) and the Baltar case (in which the president of the Diputación de Ourense was convicted over more than a hundred contracts "that did not comply with the legal requirements").

The collection and analysis of all this information was done with neural networks, which reveal the most predictive factors of corruption. "Using this AI technique is novel, as is using a database of real cases; until now, studies relied on more or less subjective indexes of perceived corruption, scores assigned to each country by agencies such as Transparency International on the basis of surveys of businesspeople and national analysts," Pastor highlights.
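To make the approach concrete, here is a minimal, hypothetical sketch of this kind of alert system: a small neural network trained on province-level indicators that outputs a corruption-risk probability. The feature set and synthetic data are stand-ins for the study's real variables, not the authors' actual model.

```python
# Illustrative sketch (not the authors' code): an MLP over province-level
# indicators producing a corruption-risk probability per province.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical indicators: property tax, housing-price growth, bank branches,
# new firms, years the same party has governed.
X = rng.normal(size=(200, 5))
# Synthetic labels: toy rule driven by years in power and housing prices.
y = (X[:, 4] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 1).astype(int)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # corruption-risk probabilities
```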

The authors hope this study will help direct anti-corruption efforts more effectively, focusing them on the areas with the greatest propensity for corruption, and they plan to keep moving forward so the model can be applied internationally.

CAPTION Ocean circulation around the area affected by the Sanchi spill. Brighter colour indicates faster currents and arrows indicate current direction. Of particular note is the strong flow of the Kuroshio Current running diagonally from left to right. This is a western boundary current similar to the Atlantic’s Gulf Stream. Within the East China, Yellow and Japan seas, important local currents such as the China Coastal Current and the Tsushima Warm Current can also be seen, although these are much weaker.

An updated emergency ocean model simulation shows that waters polluted by the sinking Sanchi oil tanker could reach Japan within a month. The new supercomputer simulation also shows that although pollution is most likely to reach the Japanese coast, it is also likely to affect Jeju Island, with a population of 600,000. However, the fate of the leaking oil is highly uncertain, as it may burn, evaporate, or mix into the surface ocean and contaminate the environment for an extended duration.

These latest predictions have been made possible by new information about where the Sanchi oil tanker finally sank. Based on this update, the team of scientists from the National Oceanography Centre (NOC) and the University of Southampton have run new ocean model simulations to assess the potential impact of local ocean circulation on the spread of pollutants. These simulations were run on the leading-edge, high-resolution global ocean circulation model, NEMO.

The Sanchi tanker collision originally occurred on the border between the Yellow and East China seas, an area with complex, strong and highly variable surface currents. However, in the following week, the stricken tanker drifted before sinking much closer to the edge of the East China Sea, and to the major western boundary current known as the Kuroshio Current.

The predictions of these new simulations differ significantly from those released last week, which suggested that contaminated waters would largely remain offshore, but could reach the Korean coast within three months. Using the same methods, but this new spill location, the revised simulations find that pollution may now be entrained within the Kuroshio and Tsushima currents.

These currents run adjacent to the northern and southern coasts of southwestern Japan. In the case of contaminated waters reaching the Kuroshio, the simulations suggest these will be transported quickly along the southern coasts of Kyushu, Shikoku and Honshu islands, potentially reaching the Greater Tokyo Area within two months. Pollution within the Kuroshio may then be swept into deeper oceanic waters of the North Pacific.

The revised simulations suggest that pollution from the spill may be distributed much further and faster than previously thought, and that larger areas of the coast may be impacted. The new simulations also shift the focus of possible impacts from South Korea to the Japanese mainland, where many more people and activities, including fisheries, may be affected.

Leading this research, Dr Katya Popova from the National Oceanography Centre, said: “Oil spills can have a devastating effect on the marine environment and on coastal communities. Strong ocean currents mean that, once released into the ocean, an oil spill can relatively rapidly spread over large distances. So understanding ocean currents and the timescale on which they transport ocean pollutants is critical during any maritime accidents, especially ones involving oil leaks.”

The team of scientists involved in this study ‘dropped’ virtual oil particles into the NEMO ocean model and tracked where they ended up over a three month period. Simulations were run for a series of scenarios of ocean circulation typical for the area the oil spill occurred in, and for this time of year. This allowed the scientists to produce a map of the potential extent of the oil spill, showing the risk of oil pollutants reaching a particular part of the ocean.
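The underlying idea can be sketched in a few lines: seed particles at the spill site and advect them through a gridded velocity field. The toy below uses simple Euler steps and an idealized eastward jet standing in for the Kuroshio; the actual study used full NEMO circulation fields and far more careful numerics.

```python
# Toy Lagrangian particle advection -- a sketch of the "virtual oil particle"
# idea, not the NEMO-based workflow itself.
import numpy as np

def advect(positions, u, v, dt, steps):
    """Euler-step particles through gridded velocity fields u and v."""
    for _ in range(steps):
        i = positions[:, 0].astype(int).clip(0, u.shape[0] - 1)
        j = positions[:, 1].astype(int).clip(0, u.shape[1] - 1)
        positions[:, 0] += u[i, j] * dt
        positions[:, 1] += v[i, j] * dt
    return positions

# Idealized eastward jet standing in for the Kuroshio: a fast mid-domain band.
u = np.zeros((100, 100))
u[40:60, :] = 1.0
v = np.zeros_like(u)

rng = np.random.default_rng(1)
particles = np.column_stack([np.full(50, 50.0), rng.uniform(0, 100, 50)])
print(advect(particles, u, v, dt=0.1, steps=100)[:3])
```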

However, Stephen Kelly, the University of Southampton PhD student who ran the model simulations, said: “There was a high level of variation between different scenarios, depending on a number of factors. Primarily the location of the original oil spill and the way in which atmospheric conditions were affecting ocean circulation at that time.”

NOC scientist, Dr Andrew Yool, who collaborated in this study, discussed how the approach used during these model simulations could help optimise future search and recovery operations at sea by rapidly modelling oil spills in real-time. He said: “By using pre-existing ocean model output we can estimate which areas could potentially be affected over weekly to monthly timescales, and quickly at low computing cost. This approach complements traditional forecast simulations, which are very accurate for a short period of time but lose their reliability on timescales that are required to understand the fate of the spill on the scale from days to weeks.”

The NEMO ocean model is supported by UK National Capability funding from the Natural Environment Research Council (NERC). This model is widely used by both UK and international groups for research into ocean circulation, climate and marine ecosystems, and operationally as part of the UK Met Office’s weather forecasting.

The ability to quantify the extent of kidney damage and predict the life remaining in the kidney, using an image obtained at the time when a patient visits the hospital for a kidney biopsy, is now possible using a supercomputer model based on artificial intelligence (AI).

The findings, which appear in the journal Kidney International Reports, can help make predictions at the point-of-care and assist clinical decision-making.

Nephropathology is a specialization that analyzes kidney biopsy images. While large clinical centers in the U.S. might greatly benefit from having 'in-house' nephropathologists, this is not the case in most parts of the country or around the world.

According to the researchers, machine learning frameworks such as convolutional neural networks (CNNs), developed for object recognition tasks, are proving valuable for disease classification and reliable for the analysis of radiology images, including the detection of malignancies.

To test the feasibility of applying this technology to the analysis of routinely obtained kidney biopsies, the researchers performed a proof of principle study on kidney biopsy sections with various amounts of kidney fibrosis (also commonly known as scarring of tissue). The machine learning framework based on CNN relied on pixel density of digitized images, while the severity of disease was determined by several clinical laboratory measures and renal survival. CNN model performance then was compared with that of the models generated using the amount of fibrosis reported by a nephropathologist as the sole input and corresponding lab measures and renal survival as the outputs. For all scenarios, CNN models outperformed the other models.
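As an illustration of the kind of model described, here is a minimal CNN classifier sketch in PyTorch: biopsy image tiles in, a fibrosis-severity class out. The architecture, input size and number of classes are assumptions for illustration, not the paper's network.

```python
# Minimal CNN sketch: pixel data in, fibrosis-severity class out.
import torch
import torch.nn as nn

class FibrosisCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):  # x: (batch, 3, H, W) biopsy image tiles
        return self.head(self.features(x))

logits = FibrosisCNN()(torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 3])
```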

"While the trained eyes of expert pathologists are able to gauge the severity of disease and detect nuances of kidney damage with remarkable accuracy, such expertise is not available in all locations, especially at a global level. Moreover, there is an urgent need to standardize the quantification of kidney disease severity such that the efficacy of therapies established in clinical trials can be applied to treat patients with equally severe disease in routine practice," explained corresponding author Vijaya B. Kolachalama, PhD, assistant professor of medicine at Boston University School of Medicine. "When implemented in the clinical setting, our work will allow pathologists to see things early and obtain insights that were not previously available," said Kolachalama.

The researchers believe their model has both diagnostic and prognostic applications and may lead to the development of a software application for diagnosing kidney disease and predicting kidney survival. "If healthcare providers around the world can have the ability to classify kidney biopsy images with the accuracy of a nephropathologist right at the point-of-care, then this can significantly impact renal practice. In essence, our model has the potential to act as a surrogate nephropathologist, especially in resource-limited settings," said Kolachalama.

Three attending nephrologists, Vipul Chitalia, MD, David Salant, MD and Jean Francis, MD, as well as a nephropathologist Joel Henderson, MD, all from Boston Medical Center contributed to this study.

Each year, kidney disease kills more people than breast or prostate cancer, and the overall prevalence of chronic kidney disease (CKD) in the general population is approximately 14 percent. More than 661,000 Americans have kidney failure. Of these, 468,000 individuals are on dialysis, and roughly 193,000 live with a functioning kidney transplant. In 2013, more than 47,000 Americans died from kidney disease. Medicare spending for patients with CKD ages 65 and older exceeded $50 billion in 2013 and represented 20 percent of all Medicare spending in this age group. Medicare fee-for-service spending for kidney failure beneficiaries rose by 1.6 percent, from $30.4 billion in 2012 to $30.9 billion in 2013, accounting for 7.1 percent of the overall Medicare paid claims costs.

CAPTION These are geothermal heat flux predictions for Greenland. Direct GHF measurements from the coastal rock cores, inferences from ice cores, and additional Gaussian-fit GHF data around ice core sites are used as training samples. Predictions are shown for three different values. The white dashed region roughly shows the extent of elevated heat flux and a possible trajectory of Greenland's movement over the Icelandic plume.

A paper appearing in Geophysical Research Letters uses machine learning to craft an improved model for understanding geothermal heat flux -- heat emanating from the Earth's interior -- below the Greenland Ice Sheet. It's a research approach new to glaciology that could lead to more accurate predictions for ice-mass loss and global sea-level rise.

Among the key findings:

Greenland has an anomalously high heat flux in a relatively large northern region spreading from the interior to the east and west.

Southern Greenland has relatively low geothermal heat flux, corresponding with the extent of the North Atlantic Craton, a stable portion of one of the oldest extant continental crusts on the planet.

The research model predicts slightly elevated heat flux upstream of several fast-flowing glaciers in Greenland, including Jakobshavn Isbræ in the central-west, the fastest moving glacier on Earth.

"Heat that comes up from the interior of the Earth contributes to the amount of melt on the bottom of the ice sheet -- so it's extremely important to understand the pattern of that heat and how it's distributed at the bottom of the ice sheet," said Soroush Rezvanbehbahani, a doctoral student in geology at the University of Kansas who spearheaded the research. "When we walk on a slope that's wet, we're more likely to slip. It's the same idea with ice -- when it isn't frozen, it's more likely to slide into the ocean. But we don't have an easy way to measure geothermal heat flux except for extremely expensive field campaigns that drill through the ice sheet. Instead of expensive field surveys, we try to do this through statistical methods."

Rezvanbehbahani and his colleagues have adopted machine learning -- a type of artificial intelligence using statistical techniques and computer algorithms -- to predict heat flux values that would be daunting to obtain in the same detail via conventional ice cores.

Using all available geologic, tectonic and geothermal heat flux data for Greenland -- along with geothermal heat flux data from around the globe -- the team deployed a machine learning approach that predicts geothermal heat flux values under the ice sheet throughout Greenland based on 22 geologic variables such as bedrock topography, crustal thickness, magnetic anomalies, rock types and proximity to features like trenches, ridges, young rifts, volcanoes and hot spots.
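In outline, this is supervised regression: fit a model at sites where heat flux is known, then predict at Greenland grid cells from their geologic covariates. The sketch below uses a random forest and synthetic data purely as a stand-in for the paper's machine-learning model and its 22 real variables.

```python
# Sketch of the regression setup, with a random forest as a stand-in model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
# Synthetic training sites: 22 geologic covariates each, heat flux in mW/m^2.
X_train = rng.normal(size=(500, 22))
y_train = 50 + 10 * X_train[:, 0] + rng.normal(scale=5, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Apply to (synthetic) Greenland grid cells to build a heat flux map.
X_greenland = rng.normal(size=(1000, 22))
ghf_map = model.predict(X_greenland)
print(ghf_map.mean(), model.feature_importances_.argmax())
```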

"We have a lot of data points from around the Earth -- we know in certain parts of the world the crust is a certain thickness, composed of a specific kind of rock and located a known distance from a volcano -- and we take those relationships and apply them to what we know about Greenland," said co-author Leigh Stearns, associate professor of geology at KU.

The researchers said their new predictive model is a "definite improvement" over current models of geothermal heat flux that don't incorporate as many variables. Indeed, many numerical ice sheet models of Greenland assume that a uniform value of geothermal heat flux exists everywhere across Greenland.

"Most other models really only honor one particular data set," Stearns said. "They look at geothermal heat flux through seismic signals or magnetic data in Greenland, but not crustal thickness or rock type or distance from a hot spot. But we know those are related to geothermal heat flux. We try to incorporate as many geologic data sets as we can rather than assuming one is the most important."

In addition to Rezvanbehbahani and Stearns, the research team behind the new paper includes KU's J. Doug Walker and C.J. van der Veen, as well as Amir Kadivar of McGill University. Rezvanbehbahani and Stearns also are affiliated with the Center for the Remote Sensing of Ice Sheets, headquartered at KU.

The authors found the five most important geologic features in predicting geothermal flux values are topography, distance to young rifts, distance to trench, depth of the lithosphere-asthenosphere boundary (layers of the Earth's mantle) and depth to the Mohorovičić discontinuity (the boundary between the crust and the mantle in the Earth). The researchers said their geothermal heat flux map of Greenland is expected to be within about 15 percent of true values.

"The most interesting finding is the sharp contrast between the south and the north of Greenland," said Rezvanbehbahani. "We had little information in the south, but we had three or four more cores in the northern part of the ice sheet. Based on the southern core we thought this was a localized low heat-flux region -- but our model shows that a much larger part of the southern ice sheet has low heat flux. By contrast, in the northern regions, we found large areas with high geothermal heat flux. This isn't as surprising because we have one ice core with a very high reading. But the spatial pattern and how the heat flux is distributed, that a was a new finding. That's not just one northern location with high heat flux, but a wide region."

The investigators said their model would be made even more accurate as more information on Greenland is compiled in the research community.

"We give the slight disclaimer that this is just another model -- it's our best statistical model -- but we have not reproduced reality," said Stearns. "In Earth science and glaciology, we're seeing an explosion of publicly available data. Machine learning technology that synthesizes this data and helps us learn from the whole range of data sensors is becoming increasingly important. It's exciting to be at the forefront."

CAPTION The 100-meter Green Bank Telescope in West Virginia is shown amid a starry night. A flash from the Fast Radio Burst source FRB 121102 is seen traveling toward the telescope. The burst shows a complicated structure, with multiple bright peaks; these may be created by the burst emission process itself or imparted by the intervening plasma near the source. This burst was detected using a new recording system developed by the Breakthrough Listen project. CREDIT Image design: Danielle Futselaar

Recent observations of a mysterious and distant object that emits intermittent bursts of radio waves so bright that they're visible across the universe provide new data about the source but fail to clear up the mystery of what causes them.

The observations by the Breakthrough Listen team at UC Berkeley using the Robert C. Byrd Green Bank Telescope in West Virginia show that the fast radio bursts from this object, called FRB 121102, are nearly 100 percent linearly polarized, an indication that the source of the bursts is embedded in strong magnetic fields like those around a massive black hole.
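Polarization here has a simple quantitative meaning: from the measured Stokes parameters $I$, $Q$ and $U$, the linear polarization fraction is

$$p_{\mathrm{lin}} = \frac{\sqrt{Q^{2} + U^{2}}}{I},$$

so "nearly 100 percent linearly polarized" means $\sqrt{Q^{2}+U^{2}}$ is close to the total intensity $I$.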

The measurements confirm observations by another team of astronomers from the Netherlands, which detected the polarized bursts using the William E. Gordon Telescope at the Arecibo Observatory in Puerto Rico.

Both teams will report their findings during a media briefing on Jan. 10 at a meeting of the American Astronomical Society in Washington, D.C. The results are detailed in a combined paper to be published online the same day by the journal Nature.

Fast radio bursts are brief, bright pulses of radio emission from distant but so far unknown sources, and FRB 121102 is the only one known to repeat: more than 200 high-energy bursts have been observed coming from this source, which is located in a dwarf galaxy about 3 billion light years from Earth.

The nearly 100 percent polarization of the radio bursts is unusual, and has only been seen in radio emissions from the extreme magnetic environments around massive black holes, such as those at the centers of galaxies. The Dutch and Breakthrough Listen teams suggest that the fast radio bursts may come from a highly magnetized rotating neutron star - a magnetar - in the vicinity of a massive black hole that is still growing as gas and dust fall into it.

The short bursts, which range from 30 microseconds to 9 milliseconds in duration, indicate that the source could be as small as 10 kilometers across - the typical size of a neutron star.
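That size limit follows from light travel time: a source cannot vary coherently faster than light can cross it, so for the shortest bursts

$$D \lesssim c\,\Delta t \approx \left(3\times 10^{5}\ \mathrm{km\,s^{-1}}\right) \times \left(30\times 10^{-6}\ \mathrm{s}\right) \approx 9\ \mathrm{km},$$

comparable to the roughly 10-kilometer scale of a neutron star.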

Other possible sources are a magnetar interacting with the nebula of material shed when the original star exploded to produce the magnetar; or interactions with the highly magnetized wind from a rotating neutron star, or pulsar.

"At this point, we don't really know the mechanism. There are many questions, such as, how can a rotating neutron star produce the high amount of energy typical of an FRB?" said UC Berkeley postdoctoral fellow Vishal Gajjar of Breakthrough Listen and the Berkeley SETI Research Center.

Gajjar will participate in the media briefing with three members of the Dutch ASTRON team: Daniele Michilli and Jason Hessels of the University of Amsterdam and Betsey Adams of the Kapteyn Astronomical Institute.

"This result is an excellent demonstration of the capabilities of the Breakthrough Listen instrumentation and the synergies between SETI and other types of astronomy," said Andrew Siemion, director of the Berkeley SETI Research Center and of the Breakthrough Listen program. "We look forward to working with the international scientific community to learn more about these enigmatic and dynamic sources."

Are FRBs signals from advanced civilizations?

Another possibility, though remote, is that the FRB is a high-powered signal from an advanced civilization. Hence the interest of Breakthrough Listen, which looks for signs of intelligent life in the universe, funded by $100 million over 10 years from internet investor Yuri Milner.

"We can not rule out completely the ET hypothesis for the FRBs in general," Gajjar said.

Breakthrough Listen has to date recorded data from a dozen FRBs, including FRB 121102, and plans eventually to sample all 30-some known sources of fast radio bursts.

"We want a complete sample so that we can conduct our standard SETI analysis in search of modulation patterns or narrow-band signals - any kind of information-bearing signal emitted from their direction that we don't expect from nature," he said.

Breakthrough Listen allotted tens of hours of observational time on the Green Bank Telescope to recording radio emissions from FRB 121102, and last August 26 detected 15 bursts over a relatively short period of five hours. They analyzed the two brightest of these and found that the radio waves were nearly 100 percent linearly polarized.

The team plans a few more observations of FRB 121102 before moving on to other FRB sources. Gajjar said that they want to observe at higher frequencies - up to 12 gigahertz, versus the present Green Bank observations in the 4-8 GHz range - to see if the energy drops off at higher frequencies. This could help narrow the range of possible sources, he said.

A supercomputer system developed by Columbia University with Health Department epidemiologists detects foodborne illness and outbreaks in NYC restaurants based on keywords in Yelp reviews

Using Yelp, 311, and reports from health care providers, the Health Department has identified and investigated approximately 28,000 complaints of suspected foodborne illness overall since 2012

The Health Department has announced that since 2012, 10 outbreaks of foodborne illness were identified solely through a supercomputer system jointly created with Columbia University's Department of Computer Science. Launched in 2012, the system tracks foodborne illnesses based on certain keywords that appear in Yelp restaurant reviews. This strategy has helped Health Department staff identify approximately 1,500 complaints of foodborne illness in New York City each year, for a total of 8,523 since July 2012.

Improvements to the supercomputer system are the subject of a joint study published this week by the Journal of the American Medical Informatics Association. The Health Department and Columbia continue to expand the system to include other social media sources, such as Twitter, which was added to the system in November 2016. The system allows the Health Department to investigate incidents and outbreaks that might otherwise go undetected. New Yorkers are encouraged to call 311 to report any suspected foodborne illness.

"Working with our partners at Columbia University, the Health Department continues to expand its foodborne illness surveillance capabilities," said Health Commissioner Dr. Mary T. Bassett. "Today we not only look at complaints from 311, but we also monitor online sites and social media. I look forward to working with Columbia University on future efforts to build on this system. The Health Department follows up on all reports of foodborne illness - whether it is reported to 311 or Yelp."

Each year, thousands of New York City residents become sick from consuming foods or drinks that are contaminated with harmful bacteria, viruses or parasites. The most common sources of food poisoning include raw or undercooked meat, poultry, eggs, shellfish and unpasteurized milk. Fruits and vegetables may become contaminated if they are handled or processed in facilities that are not kept clean, if they come into contact with contaminated fertilizer, or if they are watered or washed with contaminated water. Contamination may also occur if food is incorrectly handled by an infected food worker or if it touches other contaminated food.

"Effective information extraction regarding foodborne illness from social media is of high importance--online restaurant review sites are popular and many people are more likely to discuss food poisoning incidents in such sites than on official government channels," said Luis Gravano and Daniel Hsu, who are coauthors of the study and professors of Computer Science at Columbia Engineering. "Using machine learning has already had a significant impact on the detection of outbreaks of foodborne illnesses."

"The collaboration with Columbia University to identify reports of food poisoning in social media is crucial to improve foodborne illness outbreak detection efforts in New York City," said Health Department epidemiologists Vasudha Reddy and Katelynn Devinney, who are coauthors of the publication. "The incorporation of new data sources allows us to detect outbreaks that may not have been reported and for the earlier identification of outbreaks to prevent more New Yorkers from becoming sick."

"I applaud DOHMH Commissioner Bassett for embracing the role that crowdsourcing technology can play in identifying outbreaks of foodborne illness. Public health must be forward-thinking in its approach to triaging both everyday and acute medical concerns," said Brooklyn Borough President Eric Adams.

Most restaurant-associated outbreaks are identified through the Health Department's complaint system, which includes 311, Yelp, and reports from health care providers. Since 2012, the Department has identified and investigated approximately 28,000 complaints of suspected foodborne illness overall. The Health Department reviews and investigates all complaints of suspected foodborne illness in New York City.

The symptoms, onset and length of illness depend on the type of microbe, and how much of it is swallowed. Symptoms usually include vomiting, diarrhea and stomach cramps. If you suspect you became sick after eating or drinking a contaminated item, call 311 or submit an online complaint form.

New Yorkers should call their doctor if they experience a high fever (over 101.5°F), blood in the stool, prolonged vomiting, dehydration, or diarrhea for more than three days.

The Centers for Disease Control and Prevention estimates that there are 48 million illnesses and 3,000 deaths caused by the consumption of contaminated food in the United States each year.

NIH award to propel biomedical informatics, clinical, and animal studies

Researchers from Case Western Reserve University School of Medicine and collaborators have received a five-year, $2.8 million grant from the National Institute on Aging to identify FDA-approved medications that could be repurposed to treat Alzheimer's disease. The award enables the researchers to develop supercomputer algorithms that search existing drug databases, and to test the most promising drug candidates using patient electronic health records and Alzheimer's disease mouse models.

Rong Xu, PhD, is principal investigator on the new award and associate professor of biomedical informatics in the department of population and quantitative health sciences at Case Western Reserve University School of Medicine. The project builds on Xu's work developing DrugPredict, a computer program that connects drug characteristics with information about how they may interact with human proteins in specific diseases--like Alzheimer's. The program has already helped researchers identify new indications for old drugs.

"We will use DrugPredict, but the scope of this project is much more ambitious," Xu said. Xu plans to develop new algorithms as well as apply DrugPredict to Alzheimer's, building a publicly available database of putative Alzheimer's drugs in the process. The database will include drugs that have potentially beneficial mechanisms of action and that are highly likely to cross the blood-brain barrier.

The blood-brain barrier has been a major obstacle in drug discovery for brain disorders, including Alzheimer's disease. The protective membrane surrounds the brain to keep out foreign objects--like microbes--but can also keep out beneficial drugs. "Finding drugs that can pass the blood-brain barrier is the 'holy grail' for neurological drug discovery," Xu said. "With this award, we will develop novel machine-learning and artificial intelligence algorithms to predict whether chemicals can pass the blood-brain barrier and whether they may be effective in treating Alzheimer's disease."
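As a hedged sketch of what such a prediction task looks like, the toy classifier below maps simple molecular descriptors to a blood-brain-barrier permeability probability. The descriptors, the permeability rule behind the synthetic labels, and the model are illustrative assumptions, not DrugPredict or the team's new algorithms.

```python
# Toy blood-brain-barrier permeability classifier over molecular descriptors.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
# Columns: molecular weight, logP, polar surface area, H-bond donors.
X = rng.normal(loc=[350, 2.5, 80, 2], scale=[80, 1.0, 30, 1], size=(300, 4))
# Synthetic labels from a toy permeability rule (lipophilic, low PSA).
y = ((X[:, 1] > 2.0) & (X[:, 2] < 90)).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print(clf.predict_proba([[320, 3.1, 70, 1]])[:, 1])  # P(crosses the BBB)
```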

The research team includes XiaoFeng Zhu, PhD, professor of population and quantitative health sciences at Case Western Reserve University School of Medicine, and David Kaelber, MD, PhD, chief medical informatics officer at MetroHealth. Together, they will work with Xu's research group to validate repurposed candidate drugs with computer-simulated clinical trials, involving electronic health records from more than 53 million patients nationwide. Xu will also team up with the award's co-principal investigator, Riqiang Yan, PhD, vice chair of neurosciences at Cleveland Clinic's Lerner Research Institute, to study the most promising drug candidates in mice genetically modified to have Alzheimer's-like disease.

The new award could lead to human studies testing alternative medications for Alzheimer's disease. "The unique and powerful strength of our project is our ability to seamlessly combine novel computational predictions, clinical corroboration, and experimental testing," Xu said. "This approach allows us to rapidly identify innovative drug candidates that may work in real-world Alzheimer's disease patients. We anticipate the findings could then be expeditiously translated into clinical trials."

Xu was invited to speak about the newly funded project at the National Institutes of Health Alzheimer's Research Summit this March. She will also serve as a panelist on Emerging Therapeutics at the conference.

CAPTION The schematic figure illustrates the concept and behavior of magnetoresistance. The spins are generated in topological insulators. Those at the interface between ferromagnet and topological insulators interact with the ferromagnet and result in either high or low resistance of the device, depending on the relative directions of magnetization and spins.

University of Minnesota researchers demonstrate the existence of a new kind of magnetoresistance involving topological insulators

From magnetic tapes and floppy disks to computer hard disk drives, magnetic materials have been storing our electronic information, along with our valuable knowledge and memories, for well over half a century.

In more recent years, the class of phenomena known as magnetoresistance, the tendency of a material to change its electrical resistance when an externally applied magnetic field or its own magnetization changes, has found success in hard disk drive read heads, magnetic field sensors and the rising star among memory technologies, magnetoresistive random access memory.
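Quantitatively, magnetoresistance is usually reported as the fractional change in resistance between the two states:

$$\mathrm{MR} = \frac{R_{\mathrm{high}} - R_{\mathrm{low}}}{R_{\mathrm{low}}} \times 100\,\%,$$

where $R_{\mathrm{high}}$ and $R_{\mathrm{low}}$ are the device resistances in the two magnetic configurations (as in the caption above).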

A new discovery, led by researchers at the University of Minnesota (Seymour Cray's alma mater), demonstrates the existence of a new kind of magnetoresistance involving topological insulators that could result in improvements in future computing and computer storage. The details of their research are published in the most recent issue of the scientific journal Nature Communications.

"Our discovery is one missing piece of the puzzle to improve the future of low-power computing and memory for the semiconductor industry, including brain-like computing and chips for robots and 3D magnetic memory," said University of Minnesota Robert F. Hartmann Professor of Electrical and Computer Engineering Jian-Ping Wang, director of the Center for Spintronic Materials, Interfaces, and Novel Structures (C-SPIN) based at the University of Minnesota and co-author of the study.

Emerging technology using topological insulators

While magnetic recording still dominates data storage applications, magnetoresistive random access memory is gradually finding its place in computing memory. Unlike hard disk drives, with their mechanically spinning disks and swinging heads, these memories resemble any other type of memory: solid-state chips you would find soldered onto circuit boards in a computer or mobile device.

Recently, a group of materials called topological insulators has been found to further improve the writing energy efficiency of magnetoresistive random access memory cells in electronics. However, the new device geometry demands a new magnetoresistance phenomenon to accomplish the read function of the memory cell in 3D systems and networks.

Following the recent discovery of the unidirectional spin Hall magnetoresistance in conventional metal bilayer material systems, researchers at the University of Minnesota collaborated with colleagues at Pennsylvania State University and demonstrated for the first time the existence of such magnetoresistance in topological insulator-ferromagnet bilayers.

The study confirms the existence of such unidirectional magnetoresistance and reveals that the adoption of topological insulators, compared to heavy metals, doubles the magnetoresistance performance at 150 Kelvin (-123.15 Celsius). From an application perspective, this work provides the missing piece of the puzzle needed to create the proposed 3D, cross-bar-type computing and memory devices involving topological insulators, by adding the read functionality that was previously missing or very inconvenient to implement.

In addition to Wang, researchers involved in this study include Yang Lv, Delin Zhang and Mahdi Jamali from the University of Minnesota Department of Electrical and Computer Engineering and James Kally, Joon Sue Lee and Nitin Samarth from Pennsylvania State University Department of Physics.

This research was funded by the Center for Spintronic Materials, Interfaces and Novel Architectures (C-SPIN) at the University of Minnesota, a Semiconductor Research Corporation program sponsored by the Microelectronics Advanced Research Corp. (MARCO) and the Defense Advanced Research Projects Agency (DARPA).

To read the full research paper entitled "Unidirectional spin-Hall and Rashba-Edelstein magnetoresistance in topological insulator-ferromagnet layer heterostructures," visit the Nature Communications website.

  1. Activist investor Starboard calls for substantial change at Israel’s Mellanox
  2. KU researchers make solar energy more efficient
  3. Researchers discover serious processor vulnerability
  4. We need one global network of 1000 stations to build an Earth observatory
  5. Statistical test relates pathogen mutation to infectious disease progression
  6. Reich lab creates collaborative network for combining models to improve flu season forecasts
  7. VLA-detected radio waves point to likely explanation for neutron-star merger phenomena
  8. Germany’s Tübingen University physicists are the first to link atoms and superconductors in key step towards new hardware for quantum supercomputers, networks
  9. How great is the influence, risk of social, political bots?
  10. Easter Island had a cooperative community, analysis of giant hats reveals
  11. Study resolves controversy about electron structure of defects in graphene
  12. AI insights could help reduce injuries in construction industry
  13. Supercomputational modeling key to design supercharged virus-killing nanoparticles
  14. UW's Hyak supercomputer overcomes obstacles in peptide drug development
  15. Brigham and Women's Hospital's Golden findings show potential use of AI in detecting spread of breast cancer
  16. NUS scientist develops new toolboxes for quantum cybersecurity
  17. UCLA chemists synthesize narrow ribbons of graphene using only light, heat
  18. New machine learning algorithm recognizes distinct dolphin clicks in underwater recordings
  19. UB's Pokhriyal uses machine learning framework to create new maps for fighting extreme poverty
  20. SwRI’s Marchi models the massive collisions after moon formation remodeled early Earth
