University of Tokyo chemists develop high-capacity tape for the era of big data

A new magnetic material and recording process can vastly increase data capacity

It may seem odd that, in the year 2020, magnetic tape is still being discussed as a storage medium for digital data. After all, it has not been common in home computing since the 1980s. Surely the only relevant media today are solid-state drives and Blu-ray discs? Yet in data centers everywhere, whether at universities, banks, internet service providers, or government offices, you will find that digital tapes are not only common but essential.

Though they are slower to access than other storage devices, such as hard disk drives and solid-state memory, digital tapes have very high storage densities. More information can be kept on tape than on other devices of similar size, and tape can also be more cost-effective. So for data-intensive applications such as archives, backups, and anything covered by the broad term big data, tapes are extremely important. And as demand for these applications increases, so does the demand for high-capacity digital tapes.

Professor Shin-ichi Ohkoshi from the Department of Chemistry at the University of Tokyo and his team have developed a magnetic material that, together with a special process to access it, can offer greater storage densities than ever. The robust nature of the material means that the data would last longer than on other media, and the novel process operates at low power. As a bonus, the system would also be very cheap to run.

"Our new magnetic material is called epsilon iron oxide, it is particularly suitable for long-term digital storage," said Ohkoshi. "When data is written to it, the magnetic states that represent bits become resistant to external stray magnetic fields that might otherwise interfere with the data. We say it has a strong magnetic anisotropy. Of course, this feature also means that it is harder to write the data in the first place; however, we have a novel approach to that part of the process too." 245324 web c526a{module INSIDE STORY} 

The recording process relies on high-frequency millimeter waves in the region of 30-300 gigahertz, or billions of cycles per second. These high-frequency waves are directed at strips of epsilon iron oxide, which is an excellent absorber of such waves. When an external magnetic field is applied, the epsilon iron oxide allows its magnetic direction, which represents either a binary 1 or 0, to flip in the presence of the high-frequency waves. Once the tape has passed by the recording head where this takes place, the data is then locked into the tape until it is overwritten.
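As a quick sanity check, the "millimeter wave" label follows directly from the relationship between wavelength and frequency, lambda = c / f. This short Python sketch (an illustration written for this article, not code from the study) confirms that 30-300 gigahertz corresponds to wavelengths of roughly 1 to 10 millimeters:

    # Why 30-300 GHz radiation is called "millimeter waves": lambda = c / f
    C = 299_792_458  # speed of light in vacuum, m/s

    for freq_ghz in (30, 300):
        wavelength_mm = C / (freq_ghz * 1e9) * 1e3  # wavelength in millimeters
        print(f"{freq_ghz:>3} GHz -> wavelength ~ {wavelength_mm:.1f} mm")
    # 30 GHz -> ~10.0 mm; 300 GHz -> ~1.0 mm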

"This is how we overcome what is called in the data science field 'the magnetic recording trilemma,'" said Project Assistant Professor Marie Yoshikiyo, from Ohkoshi's laboratory. "The trilemma describes how, to increase storage density, you need smaller magnetic particles, but the smaller particles come with greater instability and the data can easily be lost. So we had to use more stable magnetic materials and produce an entirely new way to write to them. What surprised me was that this process could also be power efficient too."

Epsilon iron oxide may also find uses beyond magnetic recording tape. The frequencies it absorbs well for recording purposes are also the frequencies intended for use in next-generation cellular communication technologies beyond 5G. So in the not-too-distant future, when you are accessing a website on your 6G smartphone, both it and the data center behind the website may very well be making use of epsilon iron oxide.

"We knew early on that millimeter waves should theoretically be capable of flipping magnetic poles in epsilon iron oxide. But since it's a newly observed phenomenon, we had to try various methods before finding one that worked," said Ohkoshi. "Although the experiments were very difficult and challenging, the sight of the first successful signals was incredibly moving. I anticipate we will see magnetic tapes based on our new technology with 10 times the current capacities within five to 10 years."

Huge ring-like structure on the surface of Jupiter’s moon Ganymede may have been caused by a violent impact

Researchers from Kobe University and the National Institute of Technology, Oshima College have conducted a detailed reanalysis of image data from the Voyager 1, Voyager 2, and Galileo spacecraft in order to investigate the orientation and distribution of the ancient tectonic troughs found on Jupiter’s moon Ganymede. They discovered that these troughs are concentrically distributed across almost the entire surface of the satellite. This global distribution indicates that the troughs may actually be part of one giant crater covering Ganymede.

Based on the results of a supercomputer simulation conducted using the “PC Cluster” at the National Astronomical Observatory of Japan (NAOJ), it is speculated that this giant crater could have resulted from the impact of an asteroid with a radius of 150 km. If so, this would be the largest impact structure identified in the solar system so far.

[Figure: Impact simulation of an asteroid with a 150 km radius colliding with Ganymede at 20 km/s. The sharp vertical distribution of material along the vertical axis at a distance of 0 km at 12,000 seconds is likely a numerical artifact caused by the boundary conditions in the simulation; the researchers confirmed that this does not affect the main results of the study. Image credit: Naoyuki Hirata]
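For a sense of the energies involved, a rough back-of-the-envelope estimate can be made from the impactor parameters quoted above. The density used below is an assumption for illustration, not a value from the study:

    import math

    # Rough kinetic-energy estimate for the simulated impactor.
    radius_m = 150e3        # 150 km radius, as in the simulation
    speed_m_s = 20e3        # 20 km/s impact speed, as in the simulation
    density_kg_m3 = 3000.0  # ASSUMED rocky density; not from the study

    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3
    energy_j = 0.5 * mass * speed_m_s**2
    print(f"mass ~ {mass:.2e} kg, impact energy ~ {energy_j:.2e} J")
    # Roughly 4e19 kg and 8e27 joules under these assumptions.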

The European Space Agency’s JUICE (Jupiter Icy Moons Explorer) mission, which will be launched in 2022 and arrive in Jupiter’s system in 2029, aims to increase our knowledge of Jupiter’s satellites, including Ganymede. It is hoped that this exploration will confirm the results of this study and further advance our understanding of the formation and evolution of Jupiter’s satellites.

The research team consisted of Kobe University Graduate School of Science’s Assistant Professor HIRATA Naoyuki and Professor OHTSUKI Keiji (both of the Department of Planetology), and Associate Professor SUETSUGU Ryo of National Institute of Technology, Oshima College. The paper for this study was published online in Icarus on July 15.

Li develops new way to address common computing problem

A computational framework for solving linear inverse problems takes a parallel computing approach

In this era of big data, some problems in scientific computing are so large and complex, and contain so much information, that attempting to solve them would be too big a task for most computers.

Now, researchers at the McKelvey School of Engineering at Washington University in St. Louis have developed a new algorithm for solving a common class of problems -- known as linear inverse problems -- by breaking them down into smaller tasks, each of which can be solved in parallel on standard computers.

The research, from the lab of Jr-Shin Li, a professor in the Preston M. Green Department of Electrical & Systems Engineering, was published July 30 in an academic journal.

In addition to providing a framework for solving this class of problems, the approach, called Parallel Residual Projection (PRP), also delivers enhanced security and mitigates privacy concerns.

Linear inverse problems take observational data and try to find a model that describes it. In their simplest form, they may look familiar: 2x+y = 1, x-y = 3. Many a high school student has solved for x and y without the help of a supercomputer.
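As a concrete illustration (NumPy code written for this article, not taken from the paper), the toy system above can be solved directly, and the same idea extends to much larger, noisier systems through least squares:

    import numpy as np

    # The example from the text: 2x + y = 1 and x - y = 3.
    A = np.array([[2.0, 1.0],
                  [1.0, -1.0]])
    b = np.array([1.0, 3.0])

    x, y = np.linalg.solve(A, b)
    print(x, y)  # x = 4/3, y = -5/3

    # With noisy data and many more equations than unknowns, one instead
    # solves the least-squares problem min ||Ax - b||^2:
    solution, *rest = np.linalg.lstsq(A, b, rcond=None)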

And as more researchers in different fields collect increasing amounts of data in order to gain deeper insights, these equations continue to grow in size and complexity.

"We developed a computational framework to solve for the case when there are thousands or millions of such equations and variables," Li said.

This project was conceived while the lab was working on research problems from other fields involving big data; in particular, it had been collaborating with a biologist studying the network of neurons that governs the sleep-wake cycle.

"In the context of network inference, looking at a network of neurons, the inverse problem looks like this," said Vignesh Narayanan, a research associate in Li's lab:

Given the data recorded from a bunch of neurons, what is the 'model' that describes how these neurons are connected with each other?

"In earlier work from our lab, we showed that this inference problem can be formulated as a linear inverse problem," Narayanan said.

If the system has a few hundred nodes -- in this case, the nodes are the neurons -- the matrix that describes the interactions among them could be millions of rows by millions of columns. That's huge.

"Storing this matrix itself exceeds the memory of a common desktop," said Wei Miao, a Ph.D. student in Li's lab.

Add to that the fact that such complex systems are often dynamic, as is our understanding of them. "Say we already have a solution, but now I want to consider the interaction of some additional cells," Miao said. Instead of starting a new problem and solving it from scratch, PRP adds flexibility and scalability. "You can manipulate the problem any way you want."

Even if you do happen to have a supercomputer, Miao said, "There is still a chance that by breaking down the big problem, you can solve it faster."

In addition to breaking down a complex problem and solving the pieces in parallel on different machines, the computational framework also, importantly, consolidates the results and computes an accurate solution to the initial problem.
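The paper's exact PRP algorithm is not reproduced here, but the workflow the article describes -- split the equations into blocks, let each block compute a correction from its own residual, then consolidate -- can be sketched with a classical averaged block-projection iteration. The Python below is an illustrative stand-in under those assumptions, not the authors' implementation:

    import numpy as np

    def block_residual_projection(A, b, n_blocks=4, iters=500):
        """Hedged sketch of a block-parallel projection scheme (a stand-in,
        not the published PRP algorithm). Each block of equations computes
        a correction from its own residual; the corrections are then
        consolidated by averaging."""
        m, n = A.shape
        blocks = np.array_split(np.arange(m), n_blocks)
        x = np.zeros(n)
        for _ in range(iters):
            corrections = []
            for rows in blocks:  # in practice, each block runs on its own machine
                Ab, bb = A[rows], b[rows]
                residual = bb - Ab @ x
                corrections.append(Ab.T @ np.linalg.solve(Ab @ Ab.T, residual))
            x = x + np.mean(corrections, axis=0)  # consolidation step
        return x

    # Demo on a small consistent system (40 equations, 20 unknowns).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 20))
    x_true = rng.standard_normal(20)
    x_hat = block_residual_projection(A, A @ x_true)
    print(np.linalg.norm(A @ x_hat - A @ x_true))  # shrinks toward zero

Note that each block touches only its own rows of A and entries of b, so no single worker ever needs the full dataset -- the property behind the privacy benefit described next.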

An unintentional benefit of PRP is enhanced data security and privacy. Because each machine sees only its own piece of the problem, no single party needs access to all of the data. When credit card companies use algorithms to investigate fraud, or a hospital wants to analyze its massive database, "no one wants to give all of that access to one individual," Narayanan said.

"This was an extra benefit that we didn't even strive for," Narayanan said.