
Legion of Honor Award Ceremony (President Tatsuya Tanaka)

Fujitsu has announced that Tatsuya Tanaka, President and Representative Director of Fujitsu Limited, has been named a Chevalier (Knight) of the Légion d'Honneur (Legion of Honor) by the government of France. This high-ranking honorary decoration was bestowed in a January 25 ceremony held at the Élysée Palace in Paris.

The Legion of Honor, established by Napoléon Bonaparte in 1802, is the highest distinction conferred by the government of France. It is awarded to recognize those people and organizations that have made outstanding contributions to France in a variety of fields.

The Fujitsu Group began operations in France in 1999 and currently has approximately 380 employees in the country. As continental Europe's second largest economy, France represents a strategically important market for Fujitsu, which actively conducts business there centered on IT services and system products, contributing to the French economy.

In addition, in March 2017, Fujitsu announced an investment to support digital transformation and innovation in France, with the French government's collaboration. This initiative has since involved leading French technology companies, higher education and research institutions, and French start-ups. 

This decoration recognizes the Fujitsu Group's contributions to the development of France and to France-Japan relations. Fujitsu's activities, including the creation of a Center of Excellence dedicated to Artificial Intelligence (AI) in Paris-Saclay in March 2017, will help drive the development of technology ecosystems in France and further strengthen the bilateral relationship between France and Japan.

"I would like to thank Tatsuya Tanaka for his personal commitment and trust in our country's economy, companies, as well as higher education and research institutes. I pay tribute, through him, to the strength of the Fujitsu Group, a global leader in digital innovation, and to its willingness to expand and develop in Europe, and especially in France," said French President Emmanuel Macron.

Tatsuya Tanaka, President and Representative Director commented, “It is indeed an honor to receive this prestigious decoration. I am overjoyed to have our activities thus far receive such high recognition. In developing our business in Europe, a region which for the Fujitsu Group is second in size only to Japan, France is a strategic market of great importance. We are also promoting various efforts in France as a development site for such fields of technology and services, given France's outstanding talent and strong AI ecosystem. We hope to bring these efforts to fruition, and by making ever-more progress in their development, to contribute to further growing the relationship of France and Japan.”

CAPTION Researchers from left to right: Nodar Samkharadze, Lieven Vandersypen and Guoji Zheng. CREDIT TU Delft/Marieke de Lorijn

The worldwide race to create more, better and more reliable quantum processors is progressing fast, as a team of TU Delft scientists led by Professor Vandersypen has shown once again. In a neck-and-neck race with their competitors, they demonstrated that the quantum information of an electron spin can be transferred to a photon in a silicon quantum chip. This is important for connecting quantum bits across the chip and for scaling up to large numbers of qubits. Their work was published today in the journal Science.

The quantum supercomputer of the future will be able to carry out computations far beyond the capacity of today's high performance computers. Quantum superpositions and entanglement of quantum bits (qubits) make it possible to perform parallel computations. Scientists and companies worldwide are engaged in creating increasingly better quantum chips with more and more quantum bits. QuTech in Delft is working hard on several types of quantum chips.

Familiar material

The core of the quantum chips is made of silicon. "This is a material that we are very familiar with," explains Professor Lieven Vandersypen of QuTech and the Kavli Institute of Nanoscience Delft, "Silicon is widely used in transistors and so can be found in all electronic devices." But silicon is also a very promising material for quantum technology. PhD candidate Guoji Zheng: "We can use electrical fields to capture single electrons in silicon for use as quantum bits (qubits). This is an attractive material as it ensures the information in the qubit can be stored for a long time."

Large systems

Making useful computations requires large numbers of qubits, and it is this upscaling to large numbers that poses a challenge worldwide. "To use a lot of qubits at the same time, they need to be connected to each other; there needs to be good communication", explains researcher Nodar Samkharadze. At present the electrons that are captured as qubits in silicon can only make direct contact with their immediate neighbours. Nodar: "That makes it tricky to scale up to large numbers of qubits."

Neck-and-neck race

Other quantum systems use photons for long-distance interactions. For years, this was also a major goal for silicon. Only in recent years have various scientists made progress on this. The Delft scientists have now shown that a single electron spin and a single photon can be coupled on a silicon chip. This coupling makes it possible in principle to transfer quantum information between a spin and a photon. Guoji Zheng: "This is important to connect distant quantum bits on a silicon chip, thereby paving the way to upscaling quantum bits on silicon chips."

On to the next step

Vandersypen is proud of his team: "My team achieved this result in a relatively short time and under great pressure from worldwide competition." It is a true Delft breakthrough: "The substrate is made in Delft, the chip created in the Delft cleanrooms, and all measurements carried out at QuTech," adds Nodar Samkharadze. The scientists are now working hard on the next steps. Vandersypen: "The goal now is to transfer the information via a photon from one electron spin to another."


University of Antwerp Ph.D. student Sander Wuyts has won the DNA Storage Bitcoin challenge, issued by Nick Goldman of EMBL-EBI in 2015

University of Antwerp PhD student Sander Wuyts has won the DNA Storage Bitcoin challenge, issued by Nick Goldman of the European Bioinformatics Institute (EMBL-EBI) in 2015. The value of the coin has risen rapidly in three years, while the value of scientific progress is inestimable.

*The challenge*

On 21 January 2015, Nick Goldman of the European Bioinformatics Institute (EMBL-EBI) explained a new method for storing digital information in DNA, developed at EMBL-EBI, to a packed audience at a World Economic Forum meeting in Davos, Switzerland. In his talk, he issued a challenge:

Goldman distributed test tubes containing samples of DNA encoding 1 Bitcoin to the audience (and subsequently posted samples to people who requested them). The first person to sequence (read) the DNA and decode the files it contains could take possession of the Bitcoin.

"Bitcoin is a form of money that now only exists on computers, and with cryptography, that's something we can easily store in DNA," explained Goldman in 2015. "We've bought a Bitcoin and encoded the information into DNA. You can follow the technical description of our method and sequence the sample, decode the Bitcoin. Whoever gets there first and decodes it gets the Bitcoin."

To win, competitors needed to decode the DNA sample to find its 'private key'. All they needed was the sample, access to a DNA sequencing machine and a grasp of how the code works.
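The actual EMBL-EBI scheme is more elaborate than a straight base-for-bits substitution (it was designed to avoid long runs of the same base and to carry indexing information), but the core idea of reading files back out of DNA can be illustrated with a deliberately simplified sketch. Everything below (the two-bits-per-base mapping and the example sequence) is a hypothetical toy, not Goldman's actual encoding:

    # Toy illustration only: decode a DNA string back to bytes, assuming a
    # hypothetical fixed 2-bits-per-base mapping (this is NOT the EMBL-EBI code).
    BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

    def dna_to_bytes(sequence: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in sequence.upper())
        # Group the bit string into whole bytes; trailing bits are ignored here.
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

    # "Hi!" under the toy mapping (0x48 0x69 0x21 -> CAGA CGGC AGAC)
    print(dna_to_bytes("CAGACGGCAGAC"))  # b'Hi!'

In the real challenge the hard part sat upstream of this step: assembling many short, error-prone sequencing reads into the correct ordered fragments before any decoding could begin.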

The value of a Bitcoin in 2015: around 200 euros.

The deadline: 21 January 2018.

*Good timing*

One week from the deadline, Sander Wuyts, PhD student in the Department of Bioengineering at the University of Antwerp and Vrije Universiteit Brussel, Belgium, was the first to master the method and decode the private key, taking possession of the Bitcoin.

Its value on 19 January 2018: around 9500 euros.

*The contender*

Now completing his PhD in microbiology - exploring the universe of bacteria through DNA - Wuyts has the right balance of passion, coding skills and great colleagues to tackle complex puzzles like Goldman's DNA-storage Bitcoin challenge.

Wuyts saw Goldman issue the challenge on YouTube back in 2015, but it was the tweet about the deadline in December 2017 - plus the skills he had acquired in the meantime - that made him swing into action. He wrote to Goldman for a sample, sequenced the sample with help from his colleague Eline Oerlemans, and set to work decoding.

Once he got started, Wuyts became increasingly aware that decoding the Bitcoin wouldn't be quite as simple as following a recipe. After one failed attempt and an essential pause over Christmas, he worked tirelessly, with the help of his colleague Stijn Wittouck, to put the data from sequencing into the right order and decode the files.

It didn't work.

"One week before the deadline, I was starting to give up. It felt like we didn't produce enough good quality data to decode the whole thing. However, on the way home from a small 'hackathon' together with Stijn I realised that I made a mistake in one of the algorithms. At home I deleted just one line of code and re-ran the whole program overnight. That next Sunday I was extremely surprised and excited that suddenly the decoded files were right there, perfectly readable - I clicked on them and they opened. There were the instructions on how to claim the Bitcoin, a drawing of James Joyce and some other things."

*What Sander won*

Wuyts, who once invested poster-prize money in cryptocurrency to figure out how this technology works, is proud of having won the challenge but cautious about the prize.

"I didn't win thousands of euros, I won one Bitcoin," he says. "I will probably cash it out, because I have my doubts about the long-term value of this cryptocurrency. What's more important is that before participating in this challenge I had my doubts about the feasibility of such a DNA technology - but now I don't. Instead I have a new perspective on DNA which might come in handy in my future research."

Wuyts intends to use some of the proceeds from selling the Bitcoin to invest in future science projects, thank the people who helped him, and celebrate passing his PhD in style, with everyone who has supported him along the way. 

Artificial neural networks and a database of real cases have revealed the most predictive factors of corruption. / Pixabay

Researchers from the University of Valladolid in Spain have created a supercomputer model based on neural networks that predicts in which Spanish provinces cases of corruption are more likely to appear, as well as the conditions that favor their appearance. This alert system confirms that the probability increases the longer the same party stays in government.

Two researchers from the University of Valladolid have developed a model with artificial neural networks to predict in which Spanish provinces corruption cases are more likely to appear after one, two and up to three years.

The study, published in Social Indicators Research, does not mention the provinces most prone to corruption so as not to generate controversy, explains one of the authors, Ivan Pastor, to Sinc, who recalls that, in any case, "a greater propensity or high probability does not imply corruption will actually happen."

The data indicate that the property tax (Impuesto de Bienes Inmuebles), exaggerated increases in housing prices, the opening of bank branches and the creation of new companies are some of the variables that seem to precede public corruption; when they coincide in a region, more rigorous control of the public accounts is warranted.

"In addition, as might be expected, our model confirms that the increase in the number of years in the government of the same political party increases the chances of corruption, regardless of whether or not the party governs with majority," says Pastor.

"Anyway, fortunately - he adds -, for the next years this alert system predicts less indications of corruption in our country. This is mainly due to the greater public pressure on this issue and to the fact that the economic situation has worsened significantly during the crisis".

To carry out the study, the authors relied on all the cases of corruption that appeared in Spain between 2000 and 2012, such as the Mercasevilla case (in which the managers of this public company of the Seville City Council were charged) and the Baltar case (in which the president of the Diputación de Ourense was sentenced for more than a hundred contracts "that did not comply with the legal requirements").

The collection and analysis of all this information has been done with neural networks, which reveal the most predictive factors of corruption. "The use of this AI technique is novel, as is the use of a database of real cases, since until now more or less subjective indexes of the perception of corruption were used: scores assigned to each country by agencies such as Transparency International, based on surveys of businesspeople and national analysts," highlights Pastor.
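As a rough illustration of the kind of early-warning model described, the sketch below trains a small neural network on province-level indicators to estimate the probability that a corruption case appears within a few years. The feature set, the made-up numbers and the network size are assumptions for illustration; they are not the authors' data or architecture.

    # Minimal sketch of a neural-network alert model on province-level indicators.
    # Features and data are invented for illustration, not the published study.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    # One row per province-year: [property tax level, housing price growth,
    # new bank branches, new companies, years the same party has governed]
    X = np.array([
        [620, 0.12, 14, 310, 12],
        [480, 0.03,  2, 120,  4],
        [700, 0.18, 20, 450, 16],
        [510, 0.05,  5, 160,  8],
    ])
    y = np.array([1, 0, 1, 0])  # 1 = a corruption case appeared within 3 years

    model = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
    )
    model.fit(X, y)

    # Estimated probability for a hypothetical new province-year
    print(model.predict_proba([[650, 0.15, 18, 400, 14]])[0, 1])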

The authors hope that this study will help direct efforts against corruption more effectively, focusing them on the areas with the greatest propensity for it, and they plan to continue working toward applying their model internationally.

Ocean circulation around the area affected by the Sanchi spill. Brighter colour indicates faster currents and arrows indicate current direction. Of particular note is the strong flow of the Kuroshio Current running diagonally from left to right. This is a western boundary current similar to the Atlantic’s Gulf Stream. Within the East China, Yellow and Japan seas, important local currents such as the China Coastal Current and the Tsushima Warm Current can also be seen, although these are much weaker.

An updated emergency ocean model simulation shows that waters polluted by the sinking Sanchi oil tanker could reach Japan within a month. The new supercomputer simulation also shows that although pollution is most likely to reach the Japanese coast, it is also likely to affect Jeju Island, with a population of 600,000. However, the fate of the leaking oil is highly uncertain, as it may burn, evaporate, or mix into the surface ocean and contaminate the environment for an extended duration.

These latest predictions have been made possible by new information about where the Sanchi oil tanker finally sank. Based on this update, the team of scientists from the National Oceanography Centre (NOC) and the University of Southampton have run new ocean model simulations to assess the potential impact of local ocean circulation on the spread of pollutants. These simulations were run on the leading-edge, high-resolution global ocean circulation model, NEMO.

The Sanchi tanker collision originally occurred on the border between the Yellow and East China seas, an area with complex, strong and highly variable surface currents. However, in the following week, the stricken tanker drifted before sinking much closer to the edge of the East China Sea, and to the major western boundary current known as the Kuroshio Current.

The predictions of these new simulations differ significantly from those released last week, which suggested that contaminated waters would largely remain offshore, but could reach the Korean coast within three months. Using the same methods, but this new spill location, the revised simulations find that pollution may now be entrained within the Kuroshio and Tsushima currents.

These currents run adjacent to the northern and southern coasts of southwestern Japan. In the case of contaminated waters reaching the Kuroshio, the simulations suggest these will be transported quickly along the southern coasts of Kyushu, Shikoku and Honshu islands, potentially reaching the Greater Tokyo Area within two months. Pollution within the Kuroshio may then be swept into deeper oceanic waters of the North Pacific.

The revised simulations suggest that pollution from the spill may be distributed much further and faster than previously thought, and that larger areas of the coast may be impacted. The new simulations also shift the focus of possible impacts from South Korea to the Japanese mainland, where many more people and activities, including fisheries, may be affected.

Leading this research, Dr Katya Popova from the National Oceanography Centre, said: “Oil spills can have a devastating effect on the marine environment and on coastal communities. Strong ocean currents mean that, once released into the ocean, an oil spill can relatively rapidly spread over large distances. So understanding ocean currents and the timescale on which they transport ocean pollutants is critical during any maritime accidents, especially ones involving oil leaks.”

The team of scientists involved in this study ‘dropped’ virtual oil particles into the NEMO ocean model and tracked where they ended up over a three month period. Simulations were run for a series of scenarios of ocean circulation typical for the area the oil spill occurred in, and for this time of year. This allowed the scientists to produce a map of the potential extent of the oil spill, showing the risk of oil pollutants reaching a particular part of the ocean.
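Conceptually, this kind of Lagrangian experiment releases virtual particles at the spill site and advects them with the model's surface currents, repeating over many circulation scenarios to build a risk map. The sketch below shows the idea with an idealised, invented current field and a simple Euler time step; the real study used velocity fields from the NEMO model rather than anything this crude.

    # Minimal sketch of Lagrangian particle tracking with an idealised current field.
    import numpy as np

    def current(x, y):
        """Idealised surface velocity (m/s); a stand-in for ocean-model output."""
        u = 0.4 + 0.2 * np.sin(y / 2.0e5)   # mostly eastward, boundary-current-like
        v = 0.1 * np.cos(x / 2.0e5)
        return u, v

    def track(x0, y0, days, dt_hours=6.0, n_particles=100, noise=0.05):
        rng = np.random.default_rng(0)
        x = np.full(n_particles, x0, dtype=float)
        y = np.full(n_particles, y0, dtype=float)
        dt = dt_hours * 3600.0
        for _ in range(int(days * 24 / dt_hours)):
            u, v = current(x, y)
            # Euler step plus a small random walk standing in for unresolved mixing
            x += u * dt + noise * rng.standard_normal(n_particles) * dt
            y += v * dt + noise * rng.standard_normal(n_particles) * dt
        return x, y

    x, y = track(0.0, 0.0, days=90)
    print("mean eastward displacement after 90 days: %.0f km" % (x.mean() / 1e3))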

However, Stephen Kelly, the University of Southampton PhD student who ran the model simulations, said: “There was a high level of variation between different scenarios, depending on a number of factors, primarily the location of the original oil spill and the way in which atmospheric conditions were affecting ocean circulation at that time.”

NOC scientist, Dr Andrew Yool, who collaborated in this study, discussed how the approach used during these model simulations could help optimise future search and recovery operations at sea by rapidly modelling oil spills in real-time. He said: “By using pre-existing ocean model output we can estimate which areas could potentially be affected over weekly to monthly timescales, and quickly at low computing cost. This approach complements traditional forecast simulations, which are very accurate for a short period of time but lose their reliability on timescales that are required to understand the fate of the spill on the scale from days to weeks.”

The NEMO ocean model is supported by UK National Capability funding from the Natural Environment Research Council (NERC). This model is widely used by both UK and international groups for research into ocean circulation, climate and marine ecosystems, and operationally as part of the UK Met Office’s weather forecasting.

Quantifying the extent of kidney damage and predicting the remaining life of the kidney, using an image obtained when a patient visits the hospital for a kidney biopsy, is now possible with a supercomputer model based on artificial intelligence (AI).

The findings, which appear in the journal Kidney International Reports, can help make predictions at the point-of-care and assist clinical decision-making.

Nephropathology is a specialization that analyzes kidney biopsy images. While large clinical centers in the U.S. might greatly benefit from having 'in-house' nephropathologists, this is not the case in most parts of the country or around the world.

According to the researchers, the application of machine learning frameworks, such as convolutional neural networks (CNN) for object recognition tasks, is proving to be valuable for classification of diseases as well as reliable for the analysis of radiology images including malignancies.

To test the feasibility of applying this technology to the analysis of routinely obtained kidney biopsies, the researchers performed a proof of principle study on kidney biopsy sections with various amounts of kidney fibrosis (also commonly known as scarring of tissue). The machine learning framework based on CNN relied on pixel density of digitized images, while the severity of disease was determined by several clinical laboratory measures and renal survival. CNN model performance then was compared with that of the models generated using the amount of fibrosis reported by a nephropathologist as the sole input and corresponding lab measures and renal survival as the outputs. For all scenarios, CNN models outperformed the other models.
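For readers unfamiliar with the approach, a convolutional neural network of the general kind described takes a digitized biopsy patch as input and outputs a severity class. The minimal PyTorch sketch below is only illustrative; the layer sizes, patch size and number of classes are assumptions, not the published model.

    # Minimal sketch of a CNN classifier for biopsy image patches (illustrative only).
    import torch
    import torch.nn as nn

    class BiopsyCNN(nn.Module):
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = BiopsyCNN()
    patch = torch.randn(1, 3, 256, 256)   # one RGB biopsy patch (dummy data)
    logits = model(patch)
    print(logits.softmax(dim=1))          # predicted severity-class probabilities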

"While the trained eyes of expert pathologists are able to gauge the severity of disease and detect nuances of kidney damage with remarkable accuracy, such expertise is not available in all locations, especially at a global level. Moreover, there is an urgent need to standardize the quantification of kidney disease severity such that the efficacy of therapies established in clinical trials can be applied to treat patients with equally severe disease in routine practice," explained corresponding author Vijaya B. Kolachalama, PhD, assistant professor of medicine at Boston University School of Medicine. "When implemented in the clinical setting, our work will allow pathologists to see things early and obtain insights that were not previously available," said Kolachalama.

The researchers believe their model has both diagnostic and prognostic applications and may lead to the development of a software application for diagnosing kidney disease and predicting kidney survival. "If healthcare providers around the world can have the ability to classify kidney biopsy images with the accuracy of a nephropathologist right at the point-of-care, then this can significantly impact renal practice. In essence, our model has the potential to act as a surrogate nephropathologist, especially in resource-limited settings," said Kolachalama.

Three attending nephrologists, Vipul Chitalia, MD, David Salant, MD, and Jean Francis, MD, as well as nephropathologist Joel Henderson, MD, all from Boston Medical Center, contributed to this study.

Each year, kidney disease kills more people than breast or prostate cancer, and the overall prevalence of chronic kidney disease (CKD) in the general population is approximately 14 percent. More than 661,000 Americans have kidney failure. Of these, 468,000 individuals are on dialysis, and roughly 193,000 live with a functioning kidney transplant. In 2013, more than 47,000 Americans died from kidney disease. Medicare spending for patients with CKD ages 65 and older exceeded $50 billion in 2013 and represented 20 percent of all Medicare spending in this age group. Medicare fee-for-service spending for kidney failure beneficiaries rose by 1.6 percent, from $30.4 billion in 2012 to $30.9 billion in 2013, accounting for 7.1 percent of the overall Medicare paid claims costs.

CAPTION These are geothermal heat flux predictions for Greenland. Direct GHF measurements from the coastal rock cores, inferences from ice cores, and additional Gaussian-fit GHF data around ice core sites are used as training samples. Predictions are shown for three different values. The white dashed region roughly shows the extent of elevated heat flux and a possible trajectory of Greenland's movement over the Icelandic plume.

A paper appearing in Geophysical Research Letters uses machine learning to craft an improved model for understanding geothermal heat flux -- heat emanating from the Earth's interior -- below the Greenland Ice Sheet. It's a research approach new to glaciology that could lead to more accurate predictions for ice-mass loss and global sea-level rise.

Among the key findings:

Greenland has an anomalously high heat flux in a relatively large northern region spreading from the interior to the east and west.

Southern Greenland has relatively low geothermal heat flux, corresponding with the extent of the North Atlantic Craton, a stable portion of one of the oldest extant continental crusts on the planet.

The research model predicts slightly elevated heat flux upstream of several fast-flowing glaciers in Greenland, including Jakobshavn Isbræ in the central-west, the fastest moving glacier on Earth.

"Heat that comes up from the interior of the Earth contributes to the amount of melt on the bottom of the ice sheet -- so it's extremely important to understand the pattern of that heat and how it's distributed at the bottom of the ice sheet," said Soroush Rezvanbehbahani, a doctoral student in geology at the University of Kansas who spearheaded the research. "When we walk on a slope that's wet, we're more likely to slip. It's the same idea with ice -- when it isn't frozen, it's more likely to slide into the ocean. But we don't have an easy way to measure geothermal heat flux except for extremely expensive field campaigns that drill through the ice sheet. Instead of expensive field surveys, we try to do this through statistical methods."

Rezvanbehbahani and his colleagues have adopted machine learning -- a type of artificial intelligence using statistical techniques and computer algorithms -- to predict heat flux values that would be daunting to obtain in the same detail via conventional ice cores.

Using all available geologic, tectonic and geothermal heat flux data for Greenland -- along with geothermal heat flux data from around the globe -- the team deployed a machine learning approach that predicts geothermal heat flux values under the ice sheet throughout Greenland based on 22 geologic variables such as bedrock topography, crustal thickness, magnetic anomalies, rock types and proximity to features like trenches, ridges, young rifts, volcanoes and hot spots.
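In outline, the approach amounts to supervised regression: train a model on locations where geothermal heat flux is known, using geologic variables as predictors, then apply it to the grid under the ice sheet. The sketch below uses a random-forest regressor on synthetic data as a stand-in; the estimator choice, feature names and numbers are illustrative assumptions, not the authors' exact method.

    # Minimal sketch: regress geothermal heat flux (GHF) on geologic variables,
    # then predict GHF where it is unmeasured. Data here are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    features = ["topography", "crustal_thickness", "dist_to_young_rift",
                "dist_to_volcano", "magnetic_anomaly"]

    X_known = rng.random((200, len(features)))             # sites with measured GHF
    ghf_known = 40 + 60 * X_known[:, 2] + 10 * rng.standard_normal(200)  # mW/m^2

    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X_known, ghf_known)

    X_greenland = rng.random((1000, len(features)))        # grid points under the ice
    ghf_pred = model.predict(X_greenland)

    # Which variables the model leans on most (cf. the ranking reported below)
    for name, importance in sorted(zip(features, model.feature_importances_),
                                   key=lambda pair: -pair[1]):
        print(f"{name}: {importance:.2f}")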

"We have a lot of data points from around the Earth -- we know in certain parts of the world the crust is a certain thickness, composed of a specific kind of rock and located a known distance from a volcano -- and we take those relationships and apply them to what we know about Greenland," said co-author Leigh Stearns, associate professor of geology at KU.

The researchers said their new predictive model is a "definite improvement" over current models of geothermal heat flux that don't incorporate as many variables. Indeed, many numerical ice sheet models of Greenland assume that a uniform value of geothermal heat flux exists everywhere across Greenland.

"Most other models really only honor one particular data set," Stearns said. "They look at geothermal heat flux through seismic signals or magnetic data in Greenland, but not crustal thickness or rock type or distance from a hot spot. But we know those are related to geothermal heat flux. We try to incorporate as many geologic data sets as we can rather than assuming one is the most important."

In addition to Rezvanbehbahani and Stearns, the research team behind the new paper includes KU's J. Doug Walker and C.J. van der Veen, as well as Amir Kadivar of McGill University. Rezvanbehbahani and Stearns also are affiliated with the Center for the Remote Sensing of Ice Sheets, headquartered at KU.

The authors found the five most important geologic features in predicting geothermal flux values are topography, distance to young rifts, distance to trench, depth of lithosphere-asthenosphere boundary (layers of the Earth's mantle) and depth to Mohorovičić discontinuity (the boundary between the crust and the mantle in the Earth). The researchers said their geothermal heat flux map of Greenland is expected to be within about 15 percent of true values.

"The most interesting finding is the sharp contrast between the south and the north of Greenland," said Rezvanbehbahani. "We had little information in the south, but we had three or four more cores in the northern part of the ice sheet. Based on the southern core we thought this was a localized low heat-flux region -- but our model shows that a much larger part of the southern ice sheet has low heat flux. By contrast, in the northern regions, we found large areas with high geothermal heat flux. This isn't as surprising because we have one ice core with a very high reading. But the spatial pattern and how the heat flux is distributed, that a was a new finding. That's not just one northern location with high heat flux, but a wide region."

The investigators said their model will become even more accurate as more information on Greenland is compiled by the research community.

"We give the slight disclaimer that this is just another model -- it's our best statistical model -- but we have not reproduced reality," said Stearns. "In Earth science and glaciology, we're seeing an explosion of publicly available data. Machine learning technology that synthesizes this data and helps us learn from the whole range of data sensors is becoming increasingly important. It's exciting to be at the forefront."

CAPTION The 100-meter Green Bank Telescope in West Virginia is shown amid a starry night. A flash from the Fast Radio Burst source FRB 121102 is seen traveling toward the telescope. The burst shows a complicated structure, with multiple bright peaks; these may be created by the burst emission process itself or imparted by the intervening plasma near the source. This burst was detected using a new recording system developed by the Breakthrough Listen project. CREDIT Image design: Danielle Futselaar

Recent observations of a mysterious and distant object that emits intermittent bursts of radio waves so bright that they're visible across the universe provide new data about the source but fail to clear up the mystery of what causes them.

The observations by the Breakthrough Listen team at UC Berkeley using the Robert C. Byrd Green Bank Telescope in West Virginia show that the fast radio bursts from this object, called FRB 121102, are nearly 100 percent linearly polarized, an indication that the source of the bursts is embedded in strong magnetic fields like those around a massive black hole.

The measurements confirm observations by another team of astronomers from the Netherlands, which detected the polarized bursts using the William E. Gordon Telescope at the Arecibo Observatory in Puerto Rico.

Both teams will report their findings during a media briefing on Jan. 10 at a meeting of the American Astronomical Society in Washington, D.C. The results are detailed in a combined paper to be published online the same day by the journal Nature.

Fast radio bursts are brief, bright pulses of radio emission from distant but so far unknown sources, and FRB 121102 is the only one known to repeat: more than 200 high-energy bursts have been observed coming from this source, which is located in a dwarf galaxy about 3 billion light years from Earth.

The nearly 100 percent polarization of the radio bursts is unusual, and has only been seen in radio emissions from the extreme magnetic environments around massive black holes, such as those at the centers of galaxies. The Dutch and Breakthrough Listen teams suggest that the fast radio bursts may come from a highly magnetized rotating neutron star - a magnetar - in the vicinity of a massive black hole that is still growing as gas and dust fall into it.

The short bursts, which range from 30 microseconds to 9 milliseconds in duration, indicate that the source could be as small as 10 kilometers across - the typical size of a neutron star.
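The size estimate follows from a standard light-crossing-time argument: a source cannot vary coherently on timescales shorter than the time light takes to cross it, so the emitting region can be at most roughly the burst duration times the speed of light. A quick check of the numbers quoted above:

    # Light-crossing-time bound: source size <= c * burst duration (rough estimate)
    c = 3.0e8                          # speed of light, m/s
    for duration_s in (30e-6, 9e-3):   # burst durations quoted above, in seconds
        print(f"{duration_s:.0e} s  ->  at most {c * duration_s / 1e3:.0f} km")
    # 3e-05 s -> at most 9 km, comparable to a ~10 km neutron-star radius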

Other possible sources are a magnetar interacting with the nebula of material shed when the original star exploded to produce the magnetar; or interactions with the highly magnetized wind from a rotating neutron star, or pulsar.

"At this point, we don't really know the mechanism. There are many questions, such as, how can a rotating neutron star produce the high amount of energy typical of an FRB?" said UC Berkeley postdoctoral fellow Vishal Gajjar of Breakthrough Listen and the Berkeley SETI Research Center.

Gajjar will participate in the media briefing with three members of the Dutch ASTRON team: Daniele Michilli and Jason Hessels of the University of Amsterdam and Betsey Adams of the Kapteyn Astronomical Institute.

"This result is an excellent demonstration of the capabilities of the Breakthrough Listen instrumentation and the synergies between SETI and other types of astronomy," said Andrew Siemion, director of the Berkeley SETI Research Center and of the Breakthrough Listen program. "We look forward to working with the international scientific community to learn more about these enigmatic and dynamic sources."

Are FRBs signals from advanced civilizations?

Another possibility, though remote, is that the FRB is a high-powered signal from an advanced civilization. Hence the interest of Breakthrough Listen, which looks for signs of intelligent life in the universe and is funded by $100 million over 10 years from internet investor Yuri Milner.

"We can not rule out completely the ET hypothesis for the FRBs in general," Gajjar said.

Breakthrough Listen has to date recorded data from a dozen FRBs, including FRB 121102, and plans eventually to sample all 30-some known sources of fast radio bursts.

"We want a complete sample so that we can conduct our standard SETI analysis in search of modulation patterns or narrow-band signals - any kind of information-bearing signal emitted from their direction that we don't expect from nature," he said.

Breakthrough Listen allotted tens of hours of observational time on the Green Bank Telescope to recording radio emissions from FRB 121102, and last August 26 detected 15 bursts over a relatively short period of five hours. They analyzed the two brightest of these and found that the radio waves were nearly 100 percent linearly polarized.

The team plans a few more observations of FRB 121102 before moving on to other FRB sources. Gajjar said that they want to observe at higher frequencies - up to 12 gigahertz, versus the present Green Bank observations in the 4-8 GHz range - to see if the energy drops off at higher frequencies. This could help narrow the range of possible sources, he said.

A supercomputer system developed by Columbia University with Health Department epidemiologists detects foodborne illness and outbreaks in NYC restaurants based on keywords in Yelp reviews

Using Yelp, 311, and reports from health care providers, the Health Department has identified and investigated approximately 28,000 complaints of suspected foodborne illness overall since 2012

The Health Department has announced that since 2012, 10 outbreaks of foodborne illness were identified solely through a supercomputer system jointly created with Columbia University's Department of Computer Science. Launched in 2012, the system tracks foodborne illnesses based on certain keywords that appear in Yelp restaurant reviews. This strategy has helped Health Department staff identify approximately 1,500 complaints of foodborne illness in New York City each year, for a total of 8,523 since July 2012.

Improvements to the supercomputer system are the subject of a joint study published this week by the Journal of the American Medical Informatics Association. The Health Department and Columbia continue to expand the system to include other social media sources, such as Twitter, which was added to the system in November 2016. The system allows the Health Department to investigate incidents and outbreaks that might otherwise go undetected. New Yorkers are encouraged to call 311 to report any suspected foodborne illness.

"Working with our partners at Columbia University, the Health Department continues to expand its foodborne illness surveillance capabilities," said Health Commissioner Dr. Mary T. Bassett. "Today we not only look at complaints from 311, but we also monitor online sites and social media. I look forward to working with Columbia University on future efforts to build on this system. The Health Department follows up on all reports of foodborne illness - whether it is reported to 311 or Yelp."

Each year, thousands of New York City residents become sick from consuming foods or drinks that are contaminated with harmful bacteria, viruses or parasites. The most common sources of food poisoning include raw or undercooked meat, poultry, eggs, shellfish and unpasteurized milk. Fruits and vegetables may become contaminated if they are handled or processed in facilities that are not kept clean, if they come into contact with contaminated fertilizer, or if they are watered or washed with contaminated water. Contamination may also occur if food is incorrectly handled by an infected food worker or if it touches other contaminated food.

"Effective information extraction regarding foodborne illness from social media is of high importance--online restaurant review sites are popular and many people are more likely to discuss food poisoning incidents in such sites than on official government channels," said Luis Gravano and Daniel Hsu, who are coauthors of the study and professors of Computer Science at Columbia Engineering. "Using machine learning has already had a significant impact on the detection of outbreaks of foodborne illnesses."

"The collaboration with Columbia University to identify reports of food poisoning in social media is crucial to improve foodborne illness outbreak detection efforts in New York City," said Health Department epidemiologists Vasudha Reddy and Katelynn Devinney, who are coauthors of the publication. "The incorporation of new data sources allows us to detect outbreaks that may not have been reported and for the earlier identification of outbreaks to prevent more New Yorkers from becoming sick."

"I applaud DOHMH Commissioner Bassett for embracing the role that crowdsourcing technology can play in identifying outbreaks of foodborne illness. Public health must be forward-thinking in its approach to triaging both everyday and acute medical concerns," said Brooklyn Borough President Eric Adams.

Most restaurant-associated outbreaks are identified through the Health Department's complaint system, which includes 311, Yelp, and reports from health care providers. Since 2012, the Department has identified and investigated approximately 28,000 complaints of suspected foodborne illness overall. The Health Department reviews and investigates all complaints of suspected foodborne illness in New York City.

The symptoms, onset and length of illness depend on the type of microbe, and how much of it is swallowed. Symptoms usually include vomiting, diarrhea and stomach cramps. If you suspect you became sick after eating or drinking a contaminated item, call 311 or submit an online complaint form.

New Yorkers should call their doctor if they experience a high fever (over 101.5°F), blood in the stool, prolonged vomiting, dehydration, or diarrhea for more than three days.

The Centers for Disease Control and Prevention estimates that there are 48 million illnesses and 3,000 deaths caused by the consumption of contaminated food in the United States each year.

  1. CaseMed researchers win $2.8 million to repurpose FDA-approved drugs to treat Alzheimer's disease
  2. UMN makes new discovery that improves brain-like memory, supercomputing
  3. Activist investor Starboard calls for substantial change at Israel’s Mellanox
  4. KU researchers make solar energy more efficient
  5. Researchers discover serious processor vulnerability
  6. We need one global network of 1000 stations to build an Earth observatory
  7. Statistical test relates pathogen mutation to infectious disease progression
  8. Reich lab creates collaborative network for combining models to improve flu season forecasts
  9. VLA-detected radio waves point to likely explanation for neutron-star merger phenomena
  10. Germany’s Tübingen University physicists are the first to link atoms and superconductors in key step towards new hardware for quantum supercomputers, networks
  11. How great is the influence, risk of social, political bots?
  12. Easter Island had a cooperative community, analysis of giant hats reveals
  13. Study resolves controversy about electron structure of defects in graphene
  14. AI insights could help reduce injuries in construction industry
  15. Supercomputational modeling key to design of supercharged virus-killing nanoparticles
  16. UW's Hyak supercomputer overcomes obstacles in peptide drug development
  17. Brigham and Women's Hospital's Golden findings show potential use of AI in detecting spread of breast cancer
  18. NUS scientist develops new toolboxes for quantum cybersecurity
  19. UCLA chemists synthesize narrow ribbons of graphene using only light, heat
  20. New machine learning algorithm recognizes distinct dolphin clicks in underwater recordings
