JPL machine-learning methods lead to discovery of rare quadruply imaged quasars that can help solve cosmological puzzles

With the help of machine-learning techniques, a team of astronomers has discovered a dozen quasars that have been warped by a naturally occurring cosmic "lens" and split into four similar images. Quasars are extremely luminous cores of distant galaxies that are powered by supermassive black holes.

Over the past four decades, astronomers have found about 50 of these "quadruply imaged quasars," or quads for short, which occur when the gravity of a massive galaxy that happens to sit in front of a quasar splits its single image into four. The latest study, which spanned only a year and a half, increases the number of known quads by about 25 percent and demonstrates the power of machine learning to assist astronomers in their search for these cosmic oddities.

"The quads are gold mines for all sorts of questions. They can help determine the expansion rate of the universe, and help address other mysteries, such as dark matter and quasar 'central engines,'" says Daniel Stern, lead author of the new study and a research scientist at the Jet Propulsion Laboratory, which is managed by Caltech for NASA. "They are not just needles in a haystack but Swiss Army knives because they have so many uses." Four of the newfound quadruply imaged quasars are shown here: From top left and moving clockwise, the objects are: GraL J1537-3010 or "Wolf's Paw;" GraL J0659+1629 or "Gemini's Crossbow;" GraL J1651-0417 or "Dragon's Kite;" GraL J2038-4008 or "Microscope Lens." The fuzzy dot in the middle of the images is the lensing galaxy, the gravity of which is splitting the light from the quasar behind it in such a way to produce four quasar images. By modeling these systems and monitoring how the different images vary in brightness over time, astronomers can determine the expansion rate of the universe and help solve cosmological problems. The pictures of Wolf's Paw, Gemini's Crossbow, and Dragon's Kite were taken by the Pan-STARRS1 Sky Survey; and the picture of Microscope Lens was captured by Dark Energy Survey.

The findings, to be published in The Astrophysical Journal, were made by combining machine-learning tools with data from several ground- and space-based telescopes, including the European Space Agency's Gaia mission; NASA's Wide-field Infrared Survey Explorer (or WISE); the W. M. Keck Observatory on Maunakea, Hawaii; Caltech's Palomar Observatory; the European Southern Observatory's New Technology Telescope in Chile; and the Gemini South telescope in Chile.

Cosmological Dilemma

In recent years, a discrepancy has emerged over the precise value of the universe's expansion rate, also known as Hubble's constant. Two primary means can be used to determine this number: one relies on measurements of the distance and speed of objects in our local universe, and the other extrapolates the rate from models based on distant radiation left over from the birth of our universe, called the cosmic microwave background. The problem is that the numbers do not match.
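The "local" route mentioned above rests on Hubble's law, v = H0 × d: a galaxy's recession velocity is proportional to its distance, and the slope is the Hubble constant. As a minimal sketch of the idea, the code below fits that slope to a handful of invented distance and velocity values; the numbers are illustrative only and are not real measurements from the study.

```python
# Toy illustration of the "local" route to the Hubble constant:
# fit Hubble's law v = H0 * d to hypothetical galaxy data.
distances = [10, 20, 40, 80, 160]            # distances in Mpc (made-up values)
velocities = [700, 1450, 2800, 5700, 11300]  # recession velocities in km/s (made-up values)

# Least-squares slope of a line through the origin: H0 = sum(d*v) / sum(d*d)
h0 = sum(d * v for d, v in zip(distances, velocities)) / sum(d * d for d in distances)
print(round(h0, 1), "km/s/Mpc")
```

With these toy numbers the fit lands near 70 km/s/Mpc, between the roughly 67 (cosmic microwave background) and 73 (local distance ladder) values at the heart of the real discrepancy.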

"There are potentially systematic errors in the measurements, but that is looking less and less likely," says Stern. "More enticingly, the discrepancy in the values could mean that something about our model of the universe is wrong and there is new physics to discover."

The new quasar quads, which the team gave nicknames such as Wolf's Paw and Dragon's Kite, will help in future calculations of Hubble's constant and may illuminate why the two primary measurements are not in alignment. The quasars lie in between the local and distant targets used for the previous calculations, so they give astronomers a way to probe the intermediate range of the universe. A quasar-based determination of Hubble's constant could indicate which of the two values is correct, or, perhaps more interestingly, could show that the constant lies somewhere between the locally determined and distant values, a possible sign of previously unknown physics.

Gravitational Illusions

The multiplication of quasar images and other objects in the cosmos occurs when the gravity of a foreground object, such as a galaxy, bends and magnifies the light of objects behind it. The phenomenon, called gravitational lensing, has been seen many times before. Sometimes quasars are lensed into two similar images; less commonly, they are lensed into four.

"Quads are better than the doubly imaged quasars for cosmology studies, such as measuring the distance to objects, because they can be exquisitely well modeled," says co-author George Djorgovski, professor of astronomy and data science at Caltech. "They are relatively clean laboratories for making these cosmological measurements."

In the new study, the researchers used data from WISE, which has relatively coarse resolution, to find likely quasars, and then used the sharp resolution of Gaia to identify which of the WISE quasars were associated with possible quadruply imaged quasars. The researchers then applied machine-learning tools to pick out which candidates were most likely multiply imaged sources and not just different stars sitting close to each other in the sky. Follow-up observations by Keck, Palomar, the New Technology Telescope, and Gemini-South confirmed which of the objects were indeed quadruply imaged quasars lying billions of light-years away.
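The first step of this pipeline, cross-matching coarse WISE positions against sharper Gaia detections, can be sketched in miniature. The code below is a hypothetical illustration with made-up catalogs, not the team's actual software: it flags a WISE quasar candidate when four or more Gaia detections fall within a small matching radius (in the real study, machine-learning classifiers then ranked such candidates before telescope follow-up).

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Approximate angular separation in arcseconds (flat-sky approximation,
    valid for the tiny separations involved here)."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

def quad_candidates(wise_sources, gaia_sources, radius_arcsec=3.0):
    """For each WISE quasar candidate, count Gaia detections within
    radius_arcsec; four or more detections flags a possible quad."""
    candidates = []
    for w in wise_sources:
        matches = [g for g in gaia_sources
                   if angular_sep(w["ra"], w["dec"], g["ra"], g["dec"]) <= radius_arcsec]
        if len(matches) >= 4:
            candidates.append((w["name"], len(matches)))
    return candidates

# Hypothetical toy catalogs (coordinates in degrees): one WISE source with
# four nearby Gaia detections, and one isolated source with a single match.
wise = [{"name": "W1", "ra": 150.0000, "dec": 2.0000},
        {"name": "W2", "ra": 200.0000, "dec": -10.0000}]
gaia = [{"ra": 150.0002, "dec": 2.0001},
        {"ra": 149.9998, "dec": 1.9999},
        {"ra": 150.0001, "dec": 1.9998},
        {"ra": 149.9999, "dec": 2.0002},
        {"ra": 200.0001, "dec": -10.0001}]

print(quad_candidates(wise, gaia))  # only W1 has four Gaia counterparts
```

A real cross-match would use proper spherical geometry and the catalogs' astrometric uncertainties; the sketch only shows why Gaia's sharp resolution is what separates a genuine quad from a single blurred WISE detection.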

Humans and Machines Working Together

The first quad found with the help of machine learning, nicknamed Centaurus' Victory, was confirmed during an all-nighter at Caltech, recalls co-author Alberto Krone-Martins of UC Irvine. The team, joined by collaborators in Belgium, France, and Germany and aided by a dedicated supercomputer in Brazil, had been remotely observing its targets with the Keck Observatory.

"Machine learning was key to our study but it is not meant to replace human decisions," explains Krone-Martins. "We continuously train and update the models in an ongoing learning loop, such that humans and the human expertise are an essential part of the loop. When we talk about 'AI' in reference to machine-learning tools like these, it stands for Augmented Intelligence, not Artificial Intelligence."

"Alberto not only initially came up with the clever machine-learning algorithms for this project, but it was his idea to use the Gaia data, something that had not been done before for this type of project," says Djorgovski.

"This story is not just about finding interesting gravitational lenses," he says, "but also about how a combination of big data and machine learning can lead to new discoveries."

St John's College, University of Cambridge's artificial intelligence cracks the language of cancer, Alzheimer's

Powerful algorithms used by Netflix, Amazon and Facebook can 'predict' the biological language of cancer and neurodegenerative diseases like Alzheimer's, scientists have found.

Big data produced during decades of research was fed into a computer language model to see if artificial intelligence can make more advanced discoveries than humans.

Academics based at St John's College, University of Cambridge, found the machine-learning technology could decipher the 'biological language' of cancer, Alzheimer's, and other neurodegenerative diseases.

Their ground-breaking study has been published in the scientific journal PNAS today (April 8 2021) and could be used in the future to "correct the grammatical mistakes inside cells that cause disease."

Image caption: Fluorescence microscopy image of protein condensates forming inside living cells. Credit: Weitz lab, Harvard University

Professor Tuomas Knowles, lead author of the paper and a Fellow at St John's College, said: "Bringing machine-learning technology into research into neurodegenerative diseases and cancer is an absolute game-changer. Ultimately, the aim will be to use artificial intelligence to develop targeted drugs to dramatically ease symptoms or to prevent dementia happening at all."

Every time Netflix recommends a series to watch or Facebook suggests someone to befriend, the platforms are using powerful machine-learning algorithms to make highly educated guesses about what people will do next. Voice assistants like Alexa and Siri can even recognise individual people and instantly 'talk' back to you.

Dr Kadi Liis Saar, first author of the paper and a Research Fellow at St John's College, used similar machine-learning technology to train a large-scale language model to look at what happens when something goes wrong with proteins inside the body to cause disease.

She said: "The human body is home to thousands and thousands of proteins and scientists don't yet know the function of many of them. We asked a neural network based language model to learn the language of proteins.

"We specifically asked the program to learn the language of shapeshifting biomolecular condensates - droplets of proteins found in cells - that scientists really need to understand to crack the language of biological function and malfunction that cause cancer and neurodegenerative diseases like Alzheimer's. We found it could learn, without being explicitly told, what scientists have already discovered about the language of proteins over decades of research."

Proteins are large, complex molecules that play many critical roles in the body. They do most of the work in cells and are required for the structure, function and regulation of the body's tissues and organs - antibodies, for example, are proteins that function to protect the body.

Alzheimer's, Parkinson's and Huntington's diseases are three of the most common neurodegenerative diseases, but scientists believe there are several hundred.

In Alzheimer's disease, which affects 50 million people worldwide, proteins go rogue, form clumps and kill healthy nerve cells. A healthy brain has a quality control system that effectively disposes of these potentially dangerous masses of proteins, known as aggregates.

Scientists now think that some disordered proteins also form liquid-like droplets of proteins called condensates that don't have a membrane and merge freely with each other. Unlike protein aggregates, which are irreversible, protein condensates can form and re-form and are often compared to blobs of shapeshifting wax in lava lamps.

Professor Knowles said: "Protein condensates have recently attracted a lot of attention in the scientific world because they control key events in the cell such as gene expression - how our DNA is converted into proteins - and protein synthesis - how the cells make proteins.

"Any defects connected with these protein droplets can lead to diseases such as cancer. This is why bringing natural language processing technology into research into the molecular origins of protein malfunction is vital if we want to be able to correct the grammatical mistakes inside cells that cause disease."

Dr Saar said: "We fed the algorithm all of data held on the known proteins so it could learn and predict the language of proteins in the same way these models learn about human language and how WhatsApp knows how to suggest words for you to use.

"Then we were able ask it about the specific grammar that leads only some proteins to form condensates inside cells. It is a very challenging problem and unlocking it will help us learn the rules of the language of disease."

The machine-learning technology is developing at a rapid pace due to the growing availability of data, increased computing power, and technical advances which have created more powerful algorithms.

Further use of machine learning could transform future cancer and neurodegenerative disease research. Discoveries could be made beyond what scientists currently know or speculate about diseases, and potentially even beyond what the human brain can understand without the help of machine learning.

Dr Saar explained: "Machine-learning can be free of the limitations of what researchers think are the targets for scientific exploration and it will mean new connections will be found that we have not even conceived of yet. It is really very exciting indeed."

The network the team developed has now been made freely available to researchers around the world, so that more scientists can build on these advances.

Real estate software firm Propelio selects Woolpert to provide Google Maps platform, APIs

Woolpert’s geospatial mapping and survey experience cited as key differentiators for the Texas-based company.

Woolpert has been contracted by Propelio, a real estate software company, to provide Google Maps Platform services, Google Workspace, application programming interfaces (APIs) and technical support. Propelio helps real estate agents analyze properties, identify investment opportunities and locate motivated sellers.

Woolpert Senior Account Executive Jeremy Quam said Propelio has used the Google Maps Platform for the last few years. A pricing model switch prompted the nationwide firm to pursue a Google service provider that could support its extensive location intelligence and infrastructure-related needs at a better price point. Woolpert has been a Google Premier Partner since 2016.

“Propelio uses Google to supplement their imagery, using Google’s Snap to Roads and Street View, and aggregates their data into one tool,” Quam said. “We’re a global mapping company that also is an architecture and engineering company. We have dedicated surveyors that support municipalities across the country and are familiar with property lines, land ownership, parcel information and related data.”

Quam said Propelio benefits from the fact that Woolpert is a longtime Google service provider who understands the mechanics of the real estate industry and the latest geospatial technology.

“Propelio plans to integrate additional GIS layers into its platform to further enrich capabilities for customers, and we are ready and able to help whenever needed,” he said. “We’re excited to work with Propelio and support their current and future location intelligence needs.”