The first AI universe sim is fast and accurate, and its creators don't know how it works

The new model can envision universes with unique parameters, such as extra dark matter, even without receiving training data in which those parameters varied

For the first time, astrophysicists have used artificial intelligence techniques to generate complex 3D supercomputer simulations of the universe. The results are so fast, accurate and robust that even the creators aren't sure how it all works.

"We can run these simulations in a few milliseconds, while other 'fast' simulations take a couple of minutes," says study co-author Shirley Ho, a group leader at the Flatiron Institute's Center for Computational Astrophysics in New York City and an adjunct professor at Carnegie Mellon University. "Not only that, but we're much more accurate."

The speed and accuracy of the project, called the Deep Density Displacement Model (or D3M for short), wasn't the biggest surprise to the researchers. The real shock was that D3M could accurately simulate how the universe would look if certain parameters were tweaked -- such as how much of the cosmos is dark matter -- even though the model had never received any training data where those parameters varied.

[Figure: A comparison of the accuracy of two models of the universe. The new model (left), dubbed D3M, is both faster and more accurate than an existing method (right) called second-order perturbation theory, or 2LPT. The colors represent the average displacement error in millions of light-years for each point in the grid relative to a high-accuracy (though much slower) model.]

"It's like teaching image recognition software with lots of pictures of cats and dogs, but then it's able to recognize elephants," Ho explains. "Nobody knows how it does this, and it's a great mystery to be solved."

Ho and her colleagues present D3M June 24 in the Proceedings of the National Academy of Sciences. The study was led by Siyu He, a Flatiron Institute research analyst.

Ho and He worked in collaboration with Yin Li of the Berkeley Center for Cosmological Physics at the University of California, Berkeley, and the Kavli Institute for the Physics and Mathematics of the Universe near Tokyo; Yu Feng of the Berkeley Center for Cosmological Physics; Wei Chen of the Flatiron Institute; Siamak Ravanbakhsh of the University of British Columbia in Vancouver; and Barnabás Póczos of Carnegie Mellon University.

Computer simulations like those made by D3M have become essential to theoretical astrophysics. Scientists want to know how the cosmos might evolve under various scenarios, such as if the dark energy pulling the universe apart varied over time. Such studies require running thousands of simulations, making a lightning-fast and highly accurate computer model one of the major objectives of modern astrophysics.

D3M models how gravity shapes the universe. The researchers opted to focus on gravity alone because it is by far the most important force when it comes to the large-scale evolution of the cosmos.

The most accurate universe simulations calculate how gravity shifts each of billions of individual particles over the entire age of the universe. That level of accuracy takes time, requiring around 300 computation hours for one simulation. Faster methods can finish the same simulations in about two minutes, but the shortcuts they require result in lower accuracy.
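
To see why the brute-force approach is so expensive, consider a minimal direct-summation gravity step, sketched below in Python. This is an illustration only, not the production codes (which use far more sophisticated tree and particle-mesh algorithms); all names and parameters here are our own. Because every particle feels every other particle, the cost of each step grows with the square of the particle count.

```python
import numpy as np

def gravity_step(pos, vel, mass, dt, G=1.0, softening=1e-2):
    """One naive Euler step of direct-summation N-body gravity (O(N^2))."""
    # Pairwise separation vectors: r[i, j] = pos[j] - pos[i]
    r = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    # Softened cubed distances avoid a divide-by-zero when i == j
    dist3 = (np.sum(r**2, axis=-1) + softening**2) ** 1.5
    # Acceleration on particle i: G * sum_j m_j * r_ij / |r_ij|^3
    acc = G * np.sum(mass[np.newaxis, :, np.newaxis] * r / dist3[..., np.newaxis], axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Toy setup: 1,000 particles instead of the billions used in real runs
rng = np.random.default_rng(0)
n = 1_000
pos, vel = rng.random((n, 3)), np.zeros((n, 3))
pos, vel = gravity_step(pos, vel, np.full(n, 1.0 / n), dt=1e-3)
```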

Ho, He and their colleagues honed the deep neural network that powers D3M by feeding it 8,000 different simulations from one of the highest-accuracy models available. Neural networks take training data and run calculations on the information; researchers then compare the resulting outcome with the expected outcome. With further training, neural networks adapt over time to yield faster and more accurate results.
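
The article does not detail D3M's architecture, so the following sketch is only a generic supervised training loop (in PyTorch, with a placeholder 3D convolutional network standing in for the real model). It illustrates the cycle just described: run the network on training inputs, compare its output with the high-accuracy simulation's result, and nudge the weights to shrink the error.

```python
import torch
from torch import nn

# Placeholder network: NOT the published D3M architecture
model = nn.Sequential(
    nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(32, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train(loader, epochs=10):
    for _ in range(epochs):
        for inputs, target in loader:      # pairs drawn from the 8,000 training runs
            pred = model(inputs)           # the network's guess at the evolved field
            loss = loss_fn(pred, target)   # compare with the expected outcome
            optimizer.zero_grad()
            loss.backward()                # propagate the error back through the net
            optimizer.step()               # adapt the weights

# Dummy stand-in data: batches of 3-component fields on a 16^3 grid
inputs = torch.randn(4, 3, 16, 16, 16)
target = torch.randn(4, 3, 16, 16, 16)
train([(inputs, target)], epochs=1)
```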

After training D3M, the researchers ran simulations of a box-shaped universe 600 million light-years across and compared the results to those of the slow and fast models. Whereas the slow-but-accurate approach took hundreds of hours of computation time per simulation and the existing fast method took a couple of minutes, D3M could complete a simulation in just 30 milliseconds.

D3M also churned out accurate results. When compared with the high-accuracy model, D3M had a relative error of 2.8 percent. Using the same comparison, the existing fast model had a relative error of 9.3 percent.
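
As a rough illustration of what such a comparison involves (the article does not give the paper's exact error metric, so the definition below is an assumption), one can average the relative displacement error of a model's output against the high-accuracy reference over every grid point:

```python
import numpy as np

def relative_error(predicted, reference):
    """Mean of |predicted - reference| / |reference| over all grid points."""
    diff = np.linalg.norm(predicted - reference, axis=-1)
    norm = np.linalg.norm(reference, axis=-1)
    return np.mean(diff / norm)

# Synthetic stand-in fields: displacement 3-vectors on a 32^3 grid
rng = np.random.default_rng(1)
reference = rng.normal(size=(32, 32, 32, 3))
fast_model = reference + 0.1 * rng.normal(size=reference.shape)
print(f"relative error: {relative_error(fast_model, reference):.1%}")
```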

D3M's remarkable ability to handle parameter variations not found in its training data makes it an especially useful and flexible tool, Ho says. Besides extending the model to include other physics, such as hydrodynamics, Ho's team hopes to learn more about how the model works under the hood. Doing so could yield benefits for the advancement of artificial intelligence and machine learning, Ho says.

"We can be an interesting playground for a machine learner to use to see why this model extrapolates so well, why it extrapolates to elephants instead of just recognizing cats and dogs," she says. "It's a two-way street between science and deep learning."

German physicist Schölkopf wins 2019 Körber Prize of one million euros

The 2019 Körber European Science Prize, endowed with one million euros, is to be awarded to the German physicist, mathematician and computer scientist Bernhard Schölkopf. He has developed mathematical methods that have made a significant contribution to helping artificial intelligence (AI) reach its most recent heights. Schölkopf achieved worldwide renown with support-vector machines (SVMs). These are not machines in the classical sense, but sophisticated algorithms (program instructions) with which computers can perform highly complicated AI calculations quickly and precisely.

Bernhard Schölkopf, 51, is a pioneer of this new, information-driven industrial revolution. After studying physics, mathematics, and philosophy in Tübingen and London, the Stuttgart native went on a scholarship to Bell Labs in the United States, where his later Ph.D. supervisor, Vladimir Vapnik, was just beginning to conduct research into SVMs. In 1997, Schölkopf received his doctorate in computer science from the TU Berlin. In Vapnik's team, he had already contributed decisively to developing SVM technology to application maturity. After working in Cambridge, England, and at a New York biotech start-up, Schölkopf became director of the Max Planck Institute (MPI) for Biological Cybernetics in Tübingen in 2001. In 2011, he was one of the founding directors of the MPI for Intelligent Systems in Tübingen.

Although almost everyone comes into contact with it on a daily basis, around half of Germans do not know what exactly is meant by the term "artificial intelligence". "AI is in play when a smartphone automatically groups stored photos according to faces and topics such as holidays," explains Schölkopf, "or translates texts from one language into another."

AI is currently experiencing a global boom, not least because of its growing economic importance. The USA and China are investing billions in this technology, which is likely to fundamentally change working life throughout the world. Even before the turn of the millennium, intelligent robots were moving into factories on a large scale, for example in the automotive industry. In the future, intelligent systems will increasingly take over routine tasks in offices.

The support-vector machines co-developed by Bernhard Schölkopf are similar to neural networks modeled on the brain, but they provide more precise results for some tasks. In addition, they are based on solid mathematical principles, which makes their mode of operation more transparent. Like the human brain when learning, SVMs must first be trained. Their special strength is that their algorithms make clean-cut classifications in high-dimensional mathematical spaces, yet the computer can carry these out with comparatively simple and fast calculations.
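
As a concrete, minimal illustration (using the scikit-learn library, not Schölkopf's original code), an SVM with a radial-basis-function kernel can be trained in a few lines on the kind of handwritten-digit task described in the next paragraph. The kernel implicitly performs the classification in a higher-dimensional space while the actual computations stay cheap:

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Handwritten digits, a small cousin of the 1990s postal tasks
X, y = datasets.load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training finds the "support vectors" that define the decision boundary
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```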

The first SVM systems from the 1990s were able to recognize handwritten numbers on letters almost as well as humans and were better than any competing systems. They also gave a significant boost to computer science because of their systematic mathematical approach. Schölkopf is today's most frequently cited German computer scientist and, according to the research magazine "Science", ranks among the ten most influential computer scientists in the world.

The Schölkopf team at the MPI in Tübingen is currently investigating algorithms that can also identify causal relationships in data. This promising new field of research is referred to as causal inference. One of its goals is to make AI systems more robust against interference. "If, in a built-up area, a 30 km/h speed-limit sign has been pasted over so that it looks like a 120 km/h sign, then the AI system of a driverless car must be able to infer from the context that this sign is to be ignored," says Bernhard Schölkopf.

Another of Schölkopf's concerns is to help Germany achieve a leading position in the tough international competition in AI. He is a co-founder of the world-renowned "Cyber Valley" in the Stuttgart-Tübingen region, a center of excellence funded by the state of Baden-Württemberg that has also attracted leading American companies. As part of the planned ELLIS program (European Laboratory for Learning and Intelligent Systems), Schölkopf hopes to "better network leading European locations, set up joint programs and train doctoral students. Young top researchers should not have to go to the USA to work at the highest level." He also considers even more extensive state funding for AI to be important. Schölkopf intends to use the Körber Prize funds in his area of expertise, causal inference, and for workshops to promote the ELLIS project.

The Körber European Science Prize 2019 will be presented to Bernhard Schölkopf on 13 September in the Great Festival Hall of Hamburg City Hall. To mark its 60th anniversary, the Körber Foundation is increasing the prize endowment to one million euros as of this year, making the Körber Prize one of the world's most highly endowed research prizes. "We want to set an example for the recognition of top-class research in Europe," says Dr. Lothar Dittmer, Chairman of the Executive Board of the Körber Foundation, "and with our new stipulation that five percent of the prize money be used for science communication, we want to help this recognition grow in the public sphere as well." Every year since 1985, the Körber Foundation has honored a major breakthrough in the physical or life sciences in Europe. The prize is awarded to excellent and innovative research approaches with high application potential. To date, six prize winners have gone on to receive the Nobel Prize after winning the Körber Prize.

Managing the ups and downs of coffee production in Brazil

National Council for Scientific and Technological Development, Empresa de Pesquisa Agropecuaria de Minas Gerais

Each day, more than 2 billion cups of coffee are consumed worldwide.

Developing countries produce about 90% of the beans used to make all those lattes, espressos and mochas. That makes coffee a key source of revenue and livelihood for millions of people worldwide.

But coffee plants have up-and-down yield patterns. Years with high yields are often followed by years with low yields, and vice versa. This alternating pattern of high and low yields is called the "biennial effect".

"It's like physiological recovery," says Indalécio Cunha Vieira Júnior. "Coffee plants need to 'vegetate' for a year to produce well the following year." Cunha is a researcher at the Federal University of Lavras in Brazil. CAPTION This coffee bean plant of the cultivar 'Catigua' in its high production year. Catigua is one of the most commercially-grown cultivars in Brazil.  CREDIT César Elias Botelho{module In-article}

The biennial effect makes it challenging for coffee breeders to compare yields from different varieties of coffee. Without accurate measures of yield, breeders cannot know which varieties of coffee would be most useful for farmers to grow.

In a new study, Cunha and colleagues outline a computational model that compensates for the biennial effect in coffee. This model reduces experimental error. It also increases the usefulness of data obtained from field trials. In turn, the model directly impacts the quality of coffee varieties supplied to farmers.
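
The article does not spell out the model's equations, so the toy sketch below is our own simplification, not the published method. It only illustrates the core idea of compensating for the biennial effect: averaging consecutive years cancels the alternating high-low component and exposes a plant's underlying yield level, making different plants and varieties comparable.

```python
import numpy as np

def adjusted_yield(yields):
    """Estimate a plant's underlying yield level from an alternating series.

    Averaging each year with the following one cancels a strictly
    alternating biennial swing and leaves the underlying level.
    """
    yields = np.asarray(yields, dtype=float)
    pair_means = (yields[:-1] + yields[1:]) / 2.0
    return pair_means.mean()

plant_a = [30, 12, 28, 10]   # strongly biennial plant
plant_b = [20, 19, 21, 20]   # reasonably stable plant
print(adjusted_yield(plant_a), adjusted_yield(plant_b))  # both come out near 20
```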

"Ultimately, our findings could reduce the cost and time to launch a new coffee variety into the market by half," says Cunha.

The new model could also help farmers improve yields. "The model generates data on biennial growth at the level of individual coffee plants," says Cunha. Using information from the model, farmers could tailor cultivation strategies to individual plants. Effective management of growing conditions directly impacts harvest quality and yields.

The study also yielded some unexpected results. Researchers discovered that the biennial effect in coffee doesn't follow a well-defined pattern, as previously thought.

"Many researchers assumed that all coffee plants in an area would have similar yield patterns," says Cunha. But, researchers found that some coffee plants can have reasonably stable yields across years. Other plants may have high yields for two years and reduced yields in the third.

"These findings will change how coffee breeding experiments are analyzed," says Cunha.

The new model also allows researchers to determine why individual coffee plants may have high or low yields each year.

A coffee plant with a high yield in a given year may belong to a genuinely high-yielding variety. But even plants of high-yielding varieties can produce low yields during their recovery years.

"Our model enables us to delve deeper into the biennial effect," says Cunha. "This could allow us to recommend the most productive varieties for farmers with higher accuracy and lower costs."

Cunha and colleagues used a supercomputer simulation to test the effectiveness of their model. "The simulation allowed us to confirm our findings on real data," says Cunha. It also helped researchers test conditions in which the model performed well and when it ran into difficulties.

In general, "simulation results showed the model could effectively determine individual biennial stages," says Cunha. The new model was shown to be an improvement over older models.

Cunha is now trying to incorporate more genetic information into the current model. This would allow researchers to study the genetic control of the biennial effect. Understanding the genetic basis of the biennial effect could be very useful. For example, it might allow breeders to identify coffee varieties with more uniform yields across multiple years.

Coffee isn't the only crop to show biennial effects. Apple trees, for example, also exhibit them. Findings from Cunha's work could also apply to these other crops.