Headlines

Online newspaper covering breaking news in the rapidly evolving supercomputer marketplace.
  1. SNCF engineers have been using mathematical models for many years to simulate the dynamic behavior of railways. These models, however, have either been unable to take large portions of the track into account or have been extremely limited at modelling ballast, the gravel layer located under railway tracks. This is why SNCF Innovation & Recherche asked for help from specialists in wave propagation in all types of media and at varied scales: CNRS and INSA Strasbourg researchers. Together, they have shown that a large part of the energy introduced by a train passing is trapped by the ballast. Their work, published in the November issue of Computational Mechanics, shows that this trapping phenomenon, which is very dependent on train speed, could cause accelerated ballast degradation in railway tracks.

    SNCF engineers currently have two ways to take ballast into account when attempting to understand how railway tracks behave as a train passes. One is detailed modeling of the interactions between each "grain"; the other is a simpler model in which the ballast is represented as a homogeneous and continuous whole. Though taking the interactions between grains into account makes it possible to demonstrate wear mechanisms locally, it becomes too complex to be applied to the entire track or to the passage of an entire train. By contrast, the simple models can be used for large portions of track but cannot really tell us what happens in the gravel layer. In addition, measurements of vibrations near the tracks were much lower than what the calculations predicted. In this context, the question is how to model the passage of an entire train, over several meters or even kilometers, while retaining the specifics of the ballast's mechanical behavior. Something was missing from the modeling to describe the influence of a passing train on the immediate surroundings of the railway.

    The researchers have proposed a new mechanism that helps explain why vibrations are lower than predicted as the distance from the track increases. They stopped considering the ballast as a homogeneous medium and started considering it as a heterogeneous medium. This time, the mathematical model and the physical measurements agree: they have shown that a large part of the energy introduced by a train passing is trapped in the heterogeneous ballast layer. This trapping phenomenon, very dependent on train speed, could cause degradation in the ballast layer, as the energy provided by the train passing is dissipated by the grains rubbing together.
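
    A rough feel for why a heterogeneous layer can trap energy comes from a toy one-dimensional wave simulation (an illustrative sketch only, not the model published in Computational Mechanics): a pulse launched inside a segment whose wave speed varies randomly from cell to cell is scattered back and forth and leaks out far more slowly than it does from a uniform segment.

    import numpy as np

    def energy_in_layer(heterogeneous, n=2000, steps=4000, seed=0):
        """Toy 1D wave equation u_tt = c(x)^2 u_xx on [0, 1].

        A 'ballast' layer occupies x in [0.4, 0.6]. In the heterogeneous case its
        wave speed fluctuates strongly from cell to cell (a crude stand-in for
        grain-to-grain variability); otherwise the speed is uniform everywhere.
        Returns the fraction of the total energy still inside the layer at the end.
        """
        rng = np.random.default_rng(seed)
        dx = 1.0 / n
        x = np.linspace(0.0, 1.0, n)
        c = np.ones(n)
        layer = (x > 0.4) & (x < 0.6)
        if heterogeneous:
            c[layer] = 0.3 + 0.7 * rng.random(layer.sum())  # strong random contrast
        dt = 0.4 * dx / c.max()                             # CFL-stable time step

        # Gaussian displacement pulse centred in the layer, initially at rest.
        u = np.exp(-((x - 0.5) / 0.01) ** 2)
        u_prev = u.copy()
        for _ in range(steps):
            lap = np.zeros(n)
            lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
            u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
            u_next[0] = u_next[-1] = 0.0                    # fixed, far-away ends
            u_prev, u = u, u_next

        velocity = (u - u_prev) / dt
        energy = 0.5 * velocity ** 2 + 0.5 * c ** 2 * np.gradient(u, dx) ** 2
        return energy[layer].sum() / energy.sum()

    print("homogeneous ballast:  ", round(energy_in_layer(False), 3))
    print("heterogeneous ballast:", round(energy_in_layer(True), 3))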

    This work therefore opens paths to a better understanding of how railway tracks behave as a train passes. By understanding where in the tracks the ballast traps the most energy, these results open new perspectives on increasing the lifetime of railway tracks and reducing maintenance costs.

  2. Neutron stars are made out of cold ultra-dense matter. How this matter behaves is one of the biggest mysteries in modern nuclear physics. Researchers developed a new method for measuring the radius of neutron stars which helps them to understand what happens to the matter inside the star under extreme pressure.

    A new method for measuring neutron star size was developed in a study led by a high-energy astrophysics research group at the University of Turku. The method relies on modeling how thermonuclear explosions taking place in the uppermost layers of the star emit X-rays to us. By comparing the observed X-ray radiation from neutron stars to the state-of-the-art theoretical radiation models, researchers were able to put constraints on the size of the emitting source. This new analysis suggests that the neutron star radius should be about 12.4 kilometres.

    – Previous measurements have shown that the radius of a neutron star is circa 10–16 kilometres. We constrained it to be around 12 kilometres with about 400 metres accuracy, or maybe 1000 metres if one wants to be really sure. Therefore, the new measurement is a clear improvement over previous ones, says Doctoral Candidate Joonas Nättilä, who developed the method.

    The new measurements help researchers to study what kind of nuclear-physical conditions exist inside extremely dense neutron stars. Researchers are particularly interested in determining the equation of state of neutron matter, which shows how compressible the matter is at extremely high densities.

    – The density of neutron star matter is circa 100 million tons per cubic centimetre. At the moment, neutron stars are the only objects appearing in nature with which these types of extreme states of matter can be studied, says Juri Poutanen, the leader of the research group.

    The new results also help to understand the recently discovered gravitational waves that originated from the collision of two neutron stars. That is why the LIGO/VIRGO consortium that discovered these waves was quick to compare their recent observations with the new constraints obtained by the Finnish researchers.

    – The specific shape of the gravitational wave signal is highly dependent on the radii and the equation of state of the neutron stars. It is very exciting how these two completely different measurements tell the same story about the composition of neutron stars. The next natural step is to combine these two results. We have already been having active discussions with our colleagues on how to do this, says Nättilä.

  3. A new study of the trading interactions that determine stock prices, using AI algorithms, reveals an unexpected microstructure in stock evolution that is useful for modeling financial crashes

    Every day, thousands of orders for selling or buying stocks are registered and processed within milliseconds. Electronic stock exchanges, such as NASDAQ, use what is referred to as microscopic modelling of the order flow - reflecting the dynamics of order bookings - to facilitate trading. The study of such market microstructures is a relatively new research field focusing on the trading interactions that determine the stock price. Now, a German team from the University of Duisburg-Essen has analysed the statistical regularities and irregularities in the recent order flow of 96 different NASDAQ stocks. During financial crises, prices are strongly correlated and evolve in a way that is similar to what happens to nerve signals during epileptic seizures. The findings of the Duisburg-Essen group, published in EPJ B, contribute to modeling price evolution, and could ultimately be used to evaluate the impact of financial crises.

    The dynamics of stock prices typically shows patterns. For example, price changes that are ten times larger than the average tend to arise in sequences. By studying the microstructure of stock transactions, researchers have previously identified groups of stocks with similar stock order flow. However, there are still many open questions about the co-evolution of different stocks. In fact, our current knowledge of trading interactions is far less developed than our knowledge of the actual prices that result from the microscopic dynamics.

    In this study, the authors analyse the co-evolution of the order flow for pairs of stocks listed in the NASDAQ 100 index. They define an abstract distance between every pair of stocks. The distance is small if both stocks behave similarly, and large if they behave differently. Using machine learning algorithms, they find that there are four groups of stocks with large mutual differences (large distances). This is surprising, as this rich microscopic diversity is not reflected in the actual prices.
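
    As an illustration of this kind of analysis (the order-flow data below is synthetic and the distance and clustering choices are generic stand-ins; the group's actual method is described in the EPJ B paper), one can build a pairwise distance matrix from order-flow time series and let a clustering algorithm look for groups:

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-in for the order-flow time series of 96 stocks (rows);
    # the real study uses recorded NASDAQ order-book data.
    rng = np.random.default_rng(1)
    n_stocks, n_samples = 96, 5000
    group_signals = rng.standard_normal((4, n_samples))     # four hidden "styles"
    membership = rng.integers(0, 4, size=n_stocks)
    order_flow = group_signals[membership] + 0.8 * rng.standard_normal((n_stocks, n_samples))

    # Abstract distance between two stocks: small when their order flow behaves
    # similarly, large when it behaves differently (here: 1 - Pearson correlation).
    distance = 1.0 - np.corrcoef(order_flow)

    # Cluster the stocks, using each stock's row of distances as its feature vector.
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(distance)
    for k in range(4):
        print(f"group {k}: {np.sum(labels == k)} stocks")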

  4. In the latest issue of the prestigious journal EMS Surveys in Mathematical Sciences, published by the European Mathematical Society, there appeared a 102-page paper entitled "Numerical infinities and infinitesimals: Methodology, applications, and repercussions on two Hilbert problems" written by Yaroslav D. Sergeyev, Professor at Lobachevsky State University in Nizhni Novgorod, Russia, and Distinguished Professor at the University of Calabria, Italy (see his brief bio below). The paper describes a recent computational methodology introduced by the author, paying special attention to the separation of mathematical objects from the numeral systems involved in their representation. It was introduced to allow people to work with infinities and infinitesimals numerically in a single computational framework in all situations requiring these notions. The methodology does not contradict Cantor's or non-standard analysis views and is based on Euclid's Common Notion no. 5, "The whole is greater than the part", applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The consistency of the approach has been proved by the famous Italian logician Prof. Gabriele Lolli.

    This computational methodology uses a new kind of supercomputer called the Infinity Computer (patented in the USA and EU), which works numerically with infinite and infinitesimal numbers that can be written in a positional numeral system with an infinite radix (traditional theories work with infinities and infinitesimals only symbolically). A working software prototype exists. The appearance of the Infinity Computer drastically changes the entire panorama of numerical computations, enlarging the horizons of what can be computed to include different numerical infinities and infinitesimals. It is argued in the paper that the numeral systems involved in computations limit our capabilities to compute and lead to ambiguities in theoretical assertions as well. The introduced methodology makes it possible to use the same numeral system for measuring infinite sets and for working with divergent series, probability, fractals, optimization problems, numerical differentiation, ODEs, etc. Numerous numerical examples and theoretical illustrations are given.
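
    For readers unfamiliar with the notation, the sketch below shows what a record in such a positional system looks like, written with the infinite radix (the "grossone" symbol of Sergeyev's papers, a 1 in a circle); the particular coefficients are invented purely for illustration.

    % A numeral in the positional system with the infinite radix "grossone":
    % it combines an infinite part, a finite part and an infinitesimal part
    % in a single record, and ordinary arithmetic applies to all three parts.
    \newcommand{\gone}{\mbox{\textcircled{\scriptsize 1}}}
    \[
      x = 3.4\,\gone^{2} + 7\,\gone^{0} + 2.1\,\gone^{-1},
      \qquad
      x - 3.4\,\gone^{2} = 7 + 2.1\,\gone^{-1}.
    \]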

    In particular, it is shown that the new approach allows one to observe the mathematical objects involved in the Continuum Hypothesis and the Riemann zeta function with higher accuracy than is possible with traditional tools. It is stressed that the hardness of both problems is not related to their nature but is a consequence of the weakness of the traditional numeral systems used to study them. It is shown that the introduced methodology and numeral system change our perception of the mathematical objects studied in the two problems, giving unexpected answers to both. The effect of employing the new methodology in the study of these problems is comparable to the dissolution of computational problems posed in Roman numerals (e.g. X - X cannot be computed in Roman numerals since zero is absent in that numeral system) once a positional system capable of expressing zero is adopted. More papers on a variety of topics using the new computational methodology can be found at the Infinity Computer web page: http://www.theinfinitycomputer.com

    Yaroslav D. Sergeyev, Ph.D., D.Sc., D.H.C. is President of the International Society of Global Optimization. His research interests include numerical analysis, global optimization, infinity computing and calculus, philosophy of computations, set theory, number theory, fractals, parallel computing, and interval analysis. Prof. Sergeyev was awarded several research prizes (Khwarizmi International Award, 2017; Pythagoras International Prize in Mathematics, Italy, 2010; EUROPT Fellow, 2016; Outstanding Achievement Award from the 2015 World Congress in Computer Science, Computer Engineering, and Applied Computing, USA; Honorary Fellowship, the highest distinction of the European Society of Computational Methods in Sciences, Engineering and Technology, 2015; The 2015 Journal of Global Optimization (Springer) Best Paper Award; Lagrange Lecture, Turin University, Italy, 2010; MAIK Prize for the best scientific monograph published in Russian, Moscow, 2008, etc.).

    His list of publications contains more than 250 items (among them 6 books). He is a member of editorial boards of 6 international journals and co-editor of 8 special issues. He delivered more than 60 plenary and keynote lectures at prestigious international congresses. He was Chairman of 7 international conferences and a member of Scientific Committees of more than 60 international congresses.

  5. A new method to predict human mobility, which can be used to chart the potential spread of disease or determine rush hour bottlenecks, has been developed by a team of researchers, including one from Arizona State University.

    The research, "Universal model of individual and population mobility on diverse spatial scales", was published in the Nov. 21 issue of Nature Communications.

    The research was conducted by Ying-Cheng Lai, a professor of electrical, computer and energy engineering at ASU. He worked with Xiao-Yong Yan and Zi-You Gao from the Institute of Transportation System Science and Engineering at Beijing Jiaotong University and Wen-Xu Wang from the School of Systems Science and Center for Complexity Research at Beijing Normal University.

    The researchers found that, based on empirical data from cell phones and GPS records, people are most inclined to travel to "attractive" locations they've visited before, and these movements are independent of the size of a region. The new mobility method uses mathematical calculations based on that data, providing insights that can be discerned regardless of the size of the region being tracked.

    "The new mobility prediction method is important because it works at both individual and population scales, regardless of region size," explained Arizona State University Professor Ying-Cheng Lai. "Until now, different models were necessary for predicting movement in large countries versus small countries or cities. You could not use the same prediction methods for countries like the U.S. or China that you'd use for Belgium or France."

    Information gathered using the new process will be valuable for a variety of prediction tasks, such as charting potential spread of disease, urban transportation planning, and location planning for services and businesses like restaurants, hospitals and police and fire stations.

    Tracking human movements began about a decade ago and revealed the necessity for two different prediction models - one for large geographic areas like large countries and one for small countries or cities. Additionally, tracking at scale currently relies on measuring travel flux between locations and travel trajectories during specific time frames, requiring large amounts of private data. The new algorithm, based solely on population distribution, provides an alternative, more practical approach.
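
    A minimal sketch of the general idea, an agent that either explores a new place or returns to a familiar one with probability proportional to how attractive (frequently visited) it has proven, is given below; this is a simplified exploration-and-return toy, not the algorithm published in Nature Communications.

    import random

    def simulate_trips(n_trips=1000, p_new=0.2, seed=42):
        """Toy mobility model: with probability p_new the traveller explores a
        brand-new location; otherwise it returns to a known location, chosen with
        probability proportional to how often it has been visited before (its
        'attractiveness'). A simplified illustration only."""
        rng = random.Random(seed)
        visits = {0: 1}                # location id -> number of past visits
        next_id = 1
        trajectory = [0]
        for _ in range(n_trips):
            if rng.random() < p_new:   # exploration
                loc = next_id
                next_id += 1
            else:                      # preferential return to a familiar place
                locations, weights = zip(*visits.items())
                loc = rng.choices(locations, weights=weights, k=1)[0]
            visits[loc] = visits.get(loc, 0) + 1
            trajectory.append(loc)
        return visits, trajectory

    visits, _ = simulate_trips()
    top = sorted(visits.items(), key=lambda kv: kv[1], reverse=True)[:5]
    print("most visited locations (id, visit count):", top)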

  6. Marvell Technology Group Ltd. to acquire Cavium, Inc., creating an infrastructure solutions leader with approximately $3.4 billion in annual revenue

    • Complementary portfolios and scale enable world-class end-to-end solutions
    • Diversifies revenue base and end markets; increases SAM to $16 billion+
    • Combined R&D innovation engine and IP portfolio accelerates product leadership
    • Creates best-in-class financial model

    Marvell Technology Group Ltd. and Cavium, Inc. have announced a definitive agreement, unanimously approved by the boards of directors of both companies, under which Marvell will acquire all outstanding shares of Cavium common stock in exchange for consideration of $40.00 per share in cash and 2.1757 Marvell common shares for each Cavium share. Upon completion of the transaction, Marvell will become a leader in infrastructure solutions with approximately $3.4 billion in annual revenue.

    The transaction combines Marvell's portfolio of leading HDD and SSD storage controllers, networking solutions and high-performance wireless connectivity products with Cavium's portfolio of leading multi-core processing, networking communications, storage connectivity and security solutions. The combined product portfolios provide the scale and breadth to deliver comprehensive end-to-end solutions for customers across the cloud data center, enterprise and service provider markets, and expand Marvell's serviceable addressable market to more than $16 billion. This transaction also creates an R&D innovation engine to accelerate product development, positioning the company to meet today's massive and growing demand for data storage, heterogeneous computing and high-speed connectivity.

    "This is an exciting combination of two very complementary companies that together equal more than the sum of their parts," said Marvell President and Chief Executive Officer, Matt Murphy. "This combination expands and diversifies our revenue base and end markets, and enables us to deliver a broader set of differentiated solutions to our customers. Syed Ali has built an outstanding company, and I'm excited that he is joining the Board. I'm equally excited that Cavium's Co-founder Raghib Hussain and Vice President of IC Engineering Anil Jain will also join my senior leadership team. Together, we all will be able to deliver immediate and long-term value to our customers, employees and shareholders."

    "Individually, our businesses are exceptionally strong, but together, we will be one of the few companies in the world capable of delivering such a comprehensive set of end-to-end solutions to our combined customer base," said Cavium Co-founder and Chief Executive Officer, Syed Ali. "Our potential is huge. We look forward to working closely with the Marvell team to ensure a smooth transition and to start unlocking the significant opportunities that our combination creates."

    The transaction is expected to generate at least $150 to $175 million of annual run-rate synergies within 18 months post close and to be significantly accretive to revenue growth, margins and non-GAAP EPS.

    Transaction Structure and Terms 
    Under the terms of the definitive agreement, Marvell will pay Cavium shareholders $40.00 in cash and 2.1757 Marvell common shares for each share of Cavium common stock. The exchange ratio was based on a purchase price of $80 per share, using Marvell's undisturbed price prior to November 3, when media reports of the transaction first surfaced. This represents a transaction value of approximately $6 billion. Cavium shareholders are expected to own approximately 25% of the combined company on a pro forma basis.
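
    The arithmetic behind the exchange ratio can be checked from the figures quoted above; the roughly $18.4 Marvell reference price below is implied by those figures rather than stated in the release.

    cash_per_share = 40.00      # USD paid per Cavium share
    exchange_ratio = 2.1757     # Marvell shares issued per Cavium share
    target_price = 80.00        # stated purchase price per Cavium share

    # Marvell "undisturbed" share price implied by the quoted terms.
    implied_marvell_price = (target_price - cash_per_share) / exchange_ratio
    print(f"implied Marvell reference price: ${implied_marvell_price:.2f}")  # ~$18.38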

    Marvell intends to fund the cash consideration with a combination of cash on hand from the combined companies and $1.75 billion in debt financing. Marvell has obtained commitments consisting of an $850 million bridge loan commitment and a $900 million committed term loan from Goldman Sachs Bank USA and Bank of America Merrill Lynch, in each case, subject to customary terms and conditions. The transaction is not subject to any financing condition.

    The transaction is expected to close in mid-calendar 2018, subject to regulatory approval as well as other customary closing conditions, including the adoption by Cavium shareholders of the merger agreement and the approval by Marvell shareholders of the issuance of Marvell common shares in the transaction.

    Management and Board of Directors 
    Matt Murphy will lead the combined company, and the leadership team will have strong representation from both companies, including Marvell's current Chief Financial Officer Jean Hu, Cavium's Co-founder and Chief Operating Officer Raghib Hussain and Cavium's Vice President of IC Engineering Anil Jain. In addition, Cavium's Co-founder and Chief Executive Officer, Syed Ali, will continue with the combined company as a strategic advisor and will join Marvell's Board of Directors, along with two additional board members from Cavium's Board of Directors, effective upon closing of the transaction.

    Advisors
    Goldman Sachs & Co. LLC served as the exclusive financial advisor to Marvell and Hogan Lovells US LLP served as legal advisor. Qatalyst Partners LP and J.P. Morgan Securities LLC served as financial advisors to Cavium and Skadden, Arps, Slate, Meagher & Flom LLP served as legal advisor.

    Marvell Preliminary Third Fiscal Quarter Results 
    Based on preliminary financial information, Marvell expects revenue of $610 to $620 million and non-GAAP earnings per share to be between $0.32 and $0.34, above the mid-point of guidance provided on August 24, 2017. Further information regarding third fiscal quarter results will be released on November 28, 2017 at 1:45 p.m. Pacific Time.

    Transaction Website 
    For more information, investors are encouraged to visit http://MarvellCavium.transactionannouncement.com, which will be used by Marvell and Cavium to disclose information about the transaction and comply with Regulation FD. 

  7. Kyumin Lee, assistant professor of computer science at Worcester Polytechnic Institute (WPI), is building algorithms to weed out crowdturfers with a high rate of accuracy

    A researcher at Worcester Polytechnic Institute (WPI) is using computer science to help fight the growing problem of crowdturfing--a troublesome phenomenon in which masses of online workers are paid to post phony reviews, circulate malicious tweets, and even spread fake news. Funded by a National Science Foundation CAREER Award, assistant professor Kyumin Lee has developed algorithms that have proven highly accurate in detecting fake "likes" and followers across various platforms like Amazon, Facebook, and Twitter.

    Crowdturfing (a term that combines crowdsourcing and astroturf, a type of artificial grass) is like an online black market for false information. Its consequences can be dangerous, including customers buying products that don't live up to their artificially inflated reviews, malicious information being pushed out in fake tweets and posts, and even elections swayed by concerted disinformation campaigns.

    "We don't know what is real and what is coming from people paid to post phony or malicious information," said Lee, a pioneer in battling crowdturfing who said the problem can undermine the credibility of the Internet, leaving people feeling unsure about how much they can trust what they see even on their favorite websites.

    "We believe less than we used to believe," he said. "That's because the amount of fake information people see has been increasing. They're manipulating our information, whether it's a product review or fake news. My goal is to reveal a whole ecosystem of crowdturfing. Who are the workers performing these tasks? What websites are they targeting? What are they falsely promoting?"

    Lee, who joined WPI in July, focuses first on crowdsourcing sites like Amazon's Mechanical Turk (MTurk), an online marketplace where anyone can recruit workers to complete tasks for pay. While most of the tasks are legitimate, the sites have also been used to recruit people to help with crowdturfing campaigns. Some crowdsourcing sites try to weed out such illegitimate tasks, but others don't. And the phony or malicious tasks can be quite popular, as they usually pay better than legitimate ones.

    Using machine learning and predictive modeling, Lee builds algorithms that sift through the posted tasks looking for patterns that his research has shown are associated with these illegitimate tasks: for example, higher hourly wages or jobs that involve manipulating or posting information on particular websites or clicking on certain kinds of links. The algorithm can identify the malicious organizations posting the tasks, the websites the crowdturfers are told to target, and even the workers who are signing up to complete the tasks.
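
    As a sketch of what such a predictive model might look like (the features, data and classifier below are invented placeholders, not Lee's actual pipeline), a task posting can be scored from a handful of attributes:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical task features: [hourly_wage_usd, targets_social_site (0/1),
    # asks_to_post_links (0/1), n_required_accounts]. Label 1 = crowdturfing task.
    X = np.array([
        [ 2.0, 0, 0, 1], [ 3.5, 0, 0, 1], [ 1.5, 0, 1, 1], [ 4.0, 1, 0, 1],
        [12.0, 1, 1, 5], [15.0, 1, 1, 3], [10.0, 0, 1, 4], [18.0, 1, 1, 8],
        [ 2.5, 0, 0, 1], [11.0, 1, 0, 6], [ 3.0, 1, 0, 1], [14.0, 1, 1, 2],
    ])
    y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1])

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

    # Score a new, unseen task posting.
    new_task = [[13.0, 1, 1, 4]]    # well paid, targets a social site, posts links
    print("probability it is a crowdturfing task:", clf.predict_proba(new_task)[0, 1])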

    Looking next at the targeted websites, the predictive algorithms can gauge the probability that new users are, in fact, there to carry out assigned crowdturfing tasks--for example "liking" content on social media sites or "following" certain social media users. In Lee's research, the algorithms have detected fake likes with 90 percent accuracy and fake followers with 99 percent accuracy.

    While he hasn't specifically researched fake news, Lee said bots are not the only things pushing out propaganda and misleading stories online. People can easily be hired on crowdsourcing sites to spread fake news articles, increasing their reach and malicious intent.

    "The algorithm will potentially prevent future crowdturfing because you can predict what users will be doing," he added. "Hopefully, companies can apply these algorithms to filter malicious users and malicious content out of their systems in real time. It will make their sites more credible. It's all about what information we can trust and improving the trustworthiness of cyberspace."

    Lee's work is funded by his five-year, $516,000 NSF CAREER Award, which he received in 2016 while at Utah State University. In addition, in 2013 he was one of only 150 professors in the United States to receive a Google Faculty Research Award; that $43,295 award also supports his crowdturfing research.

    Before turning his attention to crowdturfing, Lee conducted research on spam detection. One of his next goals is to adapt his algorithms to detect both spam and crowdturfing. He said crowdturfing is more difficult to detect because, for example, a review of a new product can look legitimate, even if it's been bought and paid for. "The ideal solution is one method that can detect all of these problems at once," he said. "We can build a universal tool that can detect all kinds of malicious users. That's my future work."

    Lee added that he expects to make his algorithms openly available to companies and organizations, which could tailor them to their specific needs. "I expect to share the data set so people can come up with a better algorithm, adapted for their specific organization," he said. "When they read our papers, they can understand how this works and implement their own system."

    Lee has been assisted in his research by WPI computer science graduate students Thanh Tran and Nguyen Vo.

  8. Supercomputational tool could uncover molecular underpinnings of rare, deadly arrhythmias

    A new supercomputational model of heart tissue allows researchers to estimate the probability of rare heartbeat irregularities that can cause sudden cardiac death. The model, developed by Mark Walker from Johns Hopkins University, is presented in PLOS Computational Biology.

    An increased risk of sudden cardiac death is associated with some heart diseases. It occurs when an irregular heartbeat (arrhythmia) interferes with normal electrical signaling in the heart, leading to cardiac arrest. Previous research has shown that simultaneous, spontaneous calcium release by clusters of adjacent heart cells can cause premature heartbeats that trigger these deadly arrhythmias.

    Despite their importance, arrhythmias that can cause sudden cardiac death are so rare that estimating their probability is difficult, even for powerful high performance computers. For instance, using a "brute force" approach, more than 1 billion simulations would be required to accurately estimate the probability of an event that has a one in 1 million chance of occurring.
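
    The quoted figure follows from standard Monte Carlo error analysis: for an event of probability p, the relative standard error after N independent simulations is roughly sqrt((1 - p) / (p N)), so pinning a one-in-a-million probability down to within a few percent takes on the order of a billion runs. A quick check (the few-percent target is an assumption chosen to match the article's figure):

    def runs_needed(p, relative_error):
        """Number of independent 'brute force' simulations needed so that the
        relative standard error of the estimated probability p falls to
        relative_error (binomial sampling)."""
        return (1.0 - p) / (p * relative_error ** 2)

    p = 1e-6                        # a one-in-a-million arrhythmic event
    for err in (0.10, 0.03):
        print(f"relative error {err:.0%}: about {runs_needed(p, err):.1e} simulations")
    # 10% -> ~1e8 runs; 3% -> ~1.1e9 runs, i.e. more than a billion, as quoted.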

    In the new study, the researchers developed a method that requires just hundreds of simulations in order to estimate the probability of a deadly arrhythmia. These simulations are powered by a supercomputational model that, unlike previously developed models, realistically incorporates details of the molecular processes that occur in heart cells.

    The researchers demonstrated that, by altering model parameters, they could use the model to investigate how particular molecular processes might control the probability of deadly arrhythmias. They found that specific, molecular-level electrical disruptions associated with heart failure increased the probability of deadly arrhythmias by several orders of magnitude.

    "This study represents an important step forward in understanding how to pinpoint the molecular processes that are the primary regulators of the probability of occurrence of rare arrhythmic events," says study co-author Raimond Winslow. "As such, our approach offers a powerful new computational tool for identifying the optimal drug targets for pharmacotherapy directed at preventing arrhythmias."

    "Multiscale [super]computer models are critical to link an improved understanding of drug-disease mechanisms to tissue and organ behavior in complex diseases like heart failure," says Karim Azer, Sr. Director and Head of Systems Pharmacology, Sanofi, who was not involved in the study. He added, "The models provide predictions that can be tested in the laboratory or the clinic, and as such, the pharmaceutical industry is increasingly utilizing mathematical and computational modeling approaches for enabling key drug discovery and development decisions regarding drug and patient characteristics (i.e., towards precision medicine)."

    In the future, the team plans to apply their method to three-dimensional cell clusters, instead of the one-dimensional fibers explored in this study. Future work could also employ experiments with engineered human cardiac tissue to help verify the model's predictions.

  9. Objects scattered to the inner region of the Solar System by Jupiter's growth brought most of the water now found on Earth

    Equipped with Newton's law of universal gravitation (published in Principia 330 years ago) and powerful computational resources (used to apply the law to more than 10,000 interacting bodies), a young Brazilian researcher and his former postdoctoral supervisor have just proposed a new physical model to explain the origin of water on Earth and the other Earth-like objects in the Solar System.

    André Izidoro, from the School of Engineering of São Paulo State University in Guaratinguetá, Brazil, explains that the novelty does not lie in the idea that Earth's water came predominantly from asteroids. "What we did was associate the asteroid contribution with the formation of Jupiter. Based on the resulting model, we 'delivered to Earth' amounts of water consistent with currently estimated values," said Izidoro, who is supported by the São Paulo Research Foundation (FAPESP) through its Young Investigators Grants program.

    The explanation is presented in the article "Origin of water in the inner Solar System: Planetesimals scattered inward during Jupiter and Saturn's rapid gas accretion", jointly signed with the American astrophysicist Sean Raymond, who is currently with the Bordeaux Astrophysics Laboratory in France. The article was published in the planetary science journal Icarus.

    Estimates of the amount of water on Earth vary a great deal. If the unit of measurement is terrestrial oceans, some scientists speak of three to four of them, while others estimate dozens. The variation derives from the fact that the amounts of water in the planet's hot mantle and its rocky crust are unknown. In any event, the model proposed covers the full range of estimates.

    "First of all, it's important to leave aside the idea that Earth received all its water via the impacts of comets from very distant regions. These 'deliveries' also occurred, but their contributions came later and were far less significant in percentage terms," Izidoro said. "Most of our water came to the region currently occupied by Earth's orbit before the planet was formed."

    Pre-history of the Solar System: water-rich protoplanets

    To understand how this happened, it is worth restating the scenario defined in the conventional model of the Solar System's formation and then adding the new model for the advent of water. The initial condition is a gigantic cloud of gas and cosmic dust. Owing to some kind of gravitational disturbance or local turbulence, the cloud collapses, with its matter drawn toward a specific inner region that becomes a center.

    With the accumulation of matter, about 4.5 billion years ago, the center became so massive and hot that it began the process of nuclear fusion, which transformed it into a star. Meanwhile, the remaining cloud continued to orbit the center and its matter agglutinated to form a disk, which later fragmented to define protoplanetary niches.

    "The water-rich region of this disk is estimated to have been located several astronomical units from our Sun. In the inner region, closer to the star, the temperature was too high for water to accumulate except, perhaps, in very small amounts in the form of vapor," Izidoro said.

    An astronomical unit (AU) is the average distance from the Earth to the Sun. The region between 1.8 AU and 3.2 AU is currently occupied by the Asteroid Belt, with hundreds of thousands of objects. The asteroids located between 1.8 AU and 2.5 AU are mostly water-poor, whereas those located beyond 2.5 AU are water-rich. The process whereby Jupiter was formed can explain the origin of this division, according to Izidoro.

    "The time elapsed between the Sun's formation and the complete dissipation of the gas disk was quite short on the cosmogonic scale: from only 5 million to, at most, 10 million years," he said. "The formations of gas giants as massive as Jupiter and Saturn can only have occurred during this youthful phase of the Solar System, so it was during this phase that Jupiter's rapid growth gravitationally disturbed thousands of water-rich planetesimals, dislodging them from their original orbits."

    The traumatic birth of gas giants

    Jupiter is believed to have a solid core with a mass equivalent to several times that of Earth. This core is surrounded by a thick and massive layer of gas. Jupiter could only have acquired this gaseous envelope during the solar nebula phase, when the system was forming and a huge amount of gaseous material was available.

    The acquisition of this gas by gravitational attraction was very fast because of the great mass of Jupiter's embryo. In the vicinity of the formation of the giant planet, located beyond the "snow line", thousands of planetesimals (rocky bodies similar to asteroids) orbited the center of the disk and, simultaneously, attracted each other.

    The rapid increase of Jupiter's mass undermined the fragile gravitational equilibrium of this system with many bodies. Several planetesimals were engulfed by proto-Jupiter. Others were propelled to the outskirts of the Solar System. In addition, a smaller number were hurled into the disk's inner region, delivering water to the material that later formed the terrestrial planets and the Asteroid Belt.

    "The period during which the Earth was formed is dated to between 30 million and 150 million years after the Sun's formation," Izidoro said. "When this happened, the region of the disk in which our planet was formed already contained large amounts of water, delivered by the planetesimals scattered by Jupiter and also by Saturn. A small proportion of Earth's water may have arrived later via collisions with comets and asteroids. An even smaller proportion may have been formed locally through endogenous physicochemical processes. But most of it came with the planetesimals."

    Model simulates gravitational interference suffered by icy objects

    His argument is supported by the model he built with his former supervisor. "We used supercomputers to simulate the gravitational interactions among multiple bodies by means of numerical integrators in Fortran," he explained. "We introduced a modification to include the effects of the gas present in the medium during the era of planet formation because, in addition to all the gravitational interactions that were going on, the planetesimals were also impacted by the action of what's known as 'gas drag', which is basically a 'wind' blowing in the opposite direction of their movement. The effect is similar to the force perceived by a cyclist in motion as the molecules of air collide with his body."

    Owing to gas drag, the initially very elongated orbits of the planetesimals scattered by Jupiter were gradually "circularized". It was this effect that implanted these objects in what is now the Asteroid Belt.
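
    The circularization effect can be sketched with a far simpler setup than the published Fortran simulations: a single planetesimal orbiting the Sun with a linear drag that pushes its velocity toward that of gas on circular orbits (the drag coefficient and initial orbit below are arbitrary choices for illustration).

    import math

    # Toy 2D integration: one planetesimal orbits the Sun (GM = 1) on an initially
    # eccentric orbit, while a simple "gas drag" acceleration nudges its velocity
    # toward the local circular (Keplerian) gas velocity. Kick-drift-kick leapfrog.
    GM, drag_k = 1.0, 0.005
    x, y = 1.5, 0.0                   # start at aphelion of an eccentric orbit
    vx, vy = 0.0, 0.55                # well below circular speed -> e ~ 0.55

    def acceleration(x, y, vx, vy):
        r = math.hypot(x, y)
        ax, ay = -GM * x / r**3, -GM * y / r**3     # solar gravity
        vc = math.sqrt(GM / r)                      # local circular speed
        gas_vx, gas_vy = -vc * y / r, vc * x / r    # gas on circular orbits
        ax -= drag_k * (vx - gas_vx)                # drag toward the gas velocity
        ay -= drag_k * (vy - gas_vy)
        return ax, ay

    def eccentricity(x, y, vx, vy):
        r = math.hypot(x, y)
        h = x * vy - y * vx                         # specific angular momentum
        energy = 0.5 * (vx * vx + vy * vy) - GM / r
        return math.sqrt(max(0.0, 1.0 + 2.0 * energy * h * h / GM**2))

    dt, steps = 0.002, 400_000
    ax, ay = acceleration(x, y, vx, vy)
    for step in range(steps):
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = acceleration(x, y, vx, vy)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        if step % 100_000 == 0:
            print(f"t = {step * dt:7.1f}   e = {eccentricity(x, y, vx, vy):.3f}")
    print(f"final eccentricity: {eccentricity(x, y, vx, vy):.3f}")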

    A key parameter in this type of simulation is the total mass of the solar nebula at the start of the process. To arrive at this number, Izidoro and Raymond used a model proposed in the early 1970s that was based on the estimated masses of all the objects currently observed in the Solar System.

    To compensate for losses due to matter ejection during the formation of the system, the model corrects the current masses of the different objects such that the proportions of heavy elements (oxygen, carbon, etc.) and light elements (hydrogen, helium, etc.) are equal to those of the Sun. The rationale for this is the hypothesis that the compositions of the gas disk and the Sun were the same. Following these alterations, the estimated mass of the primitive cloud is obtained.
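
    The correction amounts to "reconstituting" each surviving body to solar composition: a rocky object made almost entirely of heavy elements is scaled up by the inverse of the Sun's heavy-element mass fraction (the roughly 1.4 percent value and the one-Earth example below are illustrative assumptions, not numbers from the paper).

    Z_sun = 0.014              # approximate heavy-element mass fraction of the Sun
    earth_mass_kg = 5.97e24    # Earth is made almost entirely of heavy elements

    # Hydrogen and helium are "added back" so that the reconstituted material has
    # the same proportions of heavy and light elements as the Sun.
    reconstituted_kg = earth_mass_kg / Z_sun
    print(f"nebular material implied by one Earth of rock: {reconstituted_kg:.2e} kg")
    print(f"that is roughly {reconstituted_kg / earth_mass_kg:.0f} Earth masses of solar-composition gas and dust")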

    The researchers created a simulation from these parameters, available in the link. A graph is shown in the video; the horizontal axis shows the distance to the Sun in AU. The objects' orbital eccentricities are plotted along the vertical axis. As the animation progresses, it illustrates how the system evolved during the formative stage. The two black dots, located at just under 5.5 AU and a little past 7.0 AU, correspond to Jupiter and Saturn respectively. During the animation, these bodies grow as they accrete gas from the protoplanetary cloud, and their growth destabilizes planetesimals, scattering them in various directions. The different colors assigned to the planetesimals serve merely to show where they were to begin with and how they were scattered. The gray area marks the current position of the Asteroid Belt. Time passes in thousands of years, as shown at the top of the chart.

    A second animation adds a key ingredient: the migrations of Jupiter and Saturn to positions nearer the Sun during their growth processes.

    All calculations of the gravitational interactions among the bodies were based on Newton's law. Numeric integrators enabled the researchers to calculate the positions of each body at different times, which would be impossible to do for some 10,000 bodies without a supercomputer.

  10. The first astronomers had a limited toolkit: their eyes. They could only observe those stars, planets and celestial events bright enough to pick up unassisted. But today's astronomers use increasingly sensitive and sophisticated instruments to view and track a bevy of cosmic wonders, including objects and events that were too dim or distant for their sky-gazing forebears.

    On Nov. 14, scientists with the California Institute of Technology, the University of Washington and eight additional partner institutions announced that the Zwicky Transient Facility, the latest sensitive tool for astrophysical observations in the Northern Hemisphere, has seen "first light" and taken its first detailed image of the night sky.

    When fully operational in 2018, the ZTF will scan almost the entire northern sky every night. Based at the Palomar Observatory in southern California and operated by Caltech, the ZTF aims to use these nightly images to identify "transient" objects that vary between observations -- identifying events ranging from supernovae millions of light years away to near-Earth asteroids.

    In 2016, the UW Department of Astronomy formally joined the ZTF team and will help develop new methods to identify the most "interesting" of the millions of changes in the sky -- including new objects -- that the ZTF will detect each night, and to alert scientists to them. That way, these high-priority transient objects can be followed up in detail by larger telescopes, including the UW's share of the Apache Point Observatory 3.5-meter telescope.

    "UW is a world leader in survey astronomy, and joining the ZTF will deepen our ability to perform cutting-edge science on the ZTF's massive, real-time data stream," said Eric Bellm, a UW assistant professor of astronomy and the ZTF's survey scientist. "One of the strengths of the ZTF is its global collaboration, consisting of experts in the field of time-domain astronomy from institutions around the world."

    Identifying, cataloguing and classifying these celestial objects will impact studies of stars, our solar system and the evolution of our universe. The ZTF could also help detect electromagnetic counterparts to gravitational wave sources discovered by Advanced LIGO and Virgo, as other observatories did in August when these detectors picked up gravitational waves from the merger of two neutron stars.

    But to unlock this promise, the ZTF requires massive data collection and real-time analysis -- and UW astronomers have a history of meeting such "big data" challenges.

    The roots of big data astronomy at the UW stretch back to the Sloan Digital Sky Survey, which used a telescope at the Apache Point Observatory in New Mexico to gather precise data on the "redshift" -- or increasing wavelength -- of galaxies as they move away from each other in the expanding universe. Once properly analyzed, the data helped astronomers create a more accurate 3-D "map" of the observable universe. The UW's survey astronomy group is gathered within the Data Intensive Research in Astrophysics and Cosmology (DIRAC) Institute, which includes scientists in the Department of Astronomy as well as the eScience Institute and the Paul G. Allen School of Computer Science & Engineering.

    "It was natural for the UW astronomy department to join the ZTF team, because we have assembled a dedicated team and expertise for 'big data' astronomy, and we have much to learn from ZTF's partnerships and potential discoveries," said UW associate professor of astronomy Mario Juric.

    From Earth, the sky is essentially a giant sphere surrounding our planet. That whole sphere has an area of more than 40,000 square degrees. The ZTF utilizes a new high-resolution camera mounted on the Palomar Observatory's existing Samuel Oschin 48-inch Schmidt Telescope. Together these instruments make up the duet that saw first light recently, and after months of fine-tuning they will be able to capture images of 3,750 square degrees each hour.
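
    The quoted numbers give a feel for the survey speed; the 25,000 square degrees assumed below for the northern sky accessible from Palomar on a given night is an illustrative estimate, not a figure from the announcement.

    full_sky_sq_deg = 41_253           # area of the whole celestial sphere
    ztf_rate_sq_deg_per_hour = 3_750   # ZTF imaging rate quoted above
    northern_sky_sq_deg = 25_000       # rough accessible northern sky (assumption)

    print(f"whole sphere:  {full_sky_sq_deg / ztf_rate_sq_deg_per_hour:.1f} hours of imaging")
    print(f"northern sky:  {northern_sky_sq_deg / ztf_rate_sq_deg_per_hour:.1f} hours of imaging")
    # About 11 hours and 6.7 hours respectively, which is why the ZTF can cover
    # almost the entire northern sky in a single night of observing.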

    These images will be an order of magnitude more numerous than those produced by the ZTF's predecessor survey at Palomar. But since these transient objects might fade or change position in the sky, analysis tools must run in near real time as images come in.

    "We'll be looking for anything subtle that changes over time," said Bellm. "And given how much of the sky ZTF will image each night, that could be tens of thousands of objects of potential interest identified every few days."

    From a data analysis standpoint, these are no easy tasks. But, they're precisely the sorts of tasks that UW astronomers have been working on in preparation for the Large Synoptic Survey Telescope, which is expected to see first light in the next decade. The LSST, located in northern Chile, is another big data project in astrophysics, and is expected to capture images of almost the entire night sky every few days.

    "Data from the ZTF surveys will impact nearly all fields of astrophysics, as well as prepare us for the LSST down the line," said Juric.