CNRS researcher Pierron's supercomputer-simulated genetic data links human expansion 1,000 years ago to Madagascar’s loss of large vertebrates

Current anthropized landscape of Madagascar. Credit: MAGE Consortium

The island of Madagascar—one of the last large land masses colonized by humans—sits about 250 miles (400 kilometers) off the coast of East Africa. While it’s still regarded as a place of unique biodiversity, Madagascar long ago lost all its large-bodied vertebrates, including giant lemurs, elephant birds, giant tortoises, and hippopotami. A human genetic study reported in the journal Current Biology on November 4 links these losses in time with the first major expansion of humans on the island, around 1,000 years ago.

“This human demographic expansion was simultaneous with a cultural and ecological transition on the island,” says Denis Pierron, French National Centre for Scientific Research (CNRS) researcher in Toulouse, France. “Around the same period, cities appeared in Madagascar and all the vertebrates of more than 10 kilograms disappeared.”

The origins of humans in Madagascar have long been an enigma, Pierron explained. Madagascar is home to 25 million people who speak an Asian language despite the island’s proximity to East Africa; other groups who speak similar languages live more than 4,000 miles away. The people who live in Madagascar are known to trace their roots back to two small populations: one Bantu-speaking from Africa and another Austronesian-speaking from Asia. But beyond that, the history has remained rather murky.

To retrace this history and understand more about the origins of the Malagasy people, a multi-disciplinary consortium launched a project in 2007 known as Madagascar Genetic and Ethnolinguistic (MAGE). Over 10 years, Malagasy and international researchers visited more than 250 villages across the island to sample its cultural and human genetic diversity.

In the new study, Pierron and his colleagues took a close look at the human genetic evidence. More specifically, they studied how various segments of human chromosomes were shared among individuals, combining that evidence with local ancestry information and supercomputer-simulated genetic data. Together, these analyses indicate that the Malagasy ancestral Asian population was isolated for more than 1,000 years with an effective population size of just a few hundred individuals.
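
The logic of this kind of simulation-based inference can be sketched in a few lines of Python. The toy below uses approximate Bayesian computation: draw candidate effective population sizes, simulate genetic drift under each, and keep the candidates whose simulated diversity matches an observed summary statistic. Everything here, from the Wright-Fisher model down to the “observed” heterozygosity value, is invented for illustration; the study’s actual pipeline compares chromosome-segment sharing against far richer coalescent simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_heterozygosity(ne, generations=400, n_loci=200):
    """Wright-Fisher drift at independent loci in a population of constant
    effective size `ne`; returns the mean heterozygosity 2p(1-p) at the end."""
    p = np.full(n_loci, 0.5)                    # every locus starts at 50/50
    for _ in range(generations):
        p = rng.binomial(2 * ne, p) / (2 * ne)  # resample 2*ne gene copies
    return float(np.mean(2 * p * (1 - p)))

# Hypothetical "observed" heterozygosity standing in for real summary data.
observed = 0.21

# ABC rejection step: keep candidate sizes whose simulated summary statistic
# lands within a tolerance of the observed value.
candidates = rng.integers(50, 5000, size=500)
accepted = [int(ne) for ne in candidates
            if abs(simulate_heterozygosity(ne) - observed) < 0.02]

if accepted:
    print(f"{len(accepted)} candidates accepted; "
          f"posterior effective size ≈ {np.mean(accepted):.0f}")
```

Small effective sizes lose diversity quickly under drift, so only a narrow band of candidates reproduces the target heterozygosity; with these made-up numbers the accepted values cluster around a few hundred individuals, the same order as the study’s inference.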

Their isolation ended about 1,000 years ago when a small group of Bantu-speaking African people came to Madagascar. Afterward, the population continued to expand rapidly over generations. The growing human population led to extensive changes to the Madagascar landscape and the loss of all large-bodied vertebrates that once lived there, they suggest.

The findings have important implications that may now be applied to studies of other human populations. For instance, the work shows that it’s possible to untangle the demographic history of ancient populations, even well after two or more groups have mixed, by using genetic data and supercomputer simulations to test the likelihood of different scenarios. The findings also offer new insights into how past changes in human populations led to changes in whole ecosystems.

“Our study supports the theory that it was not directly the arrival of humans on the island that caused the disappearance of the megafauna, but rather a change in lifestyle that caused both a human population expansion and a reduction in biodiversity in Madagascar,” Pierron says.

While these efforts have led to a much better understanding of Madagascar’s history, many intriguing questions remain. For instance, Pierron asks, “If the ancestral Asian population was isolated for more than a millennium before mixing with the African population, where was this population? Already in Madagascar or in Asia? Why did the Asian population isolate itself over 2,000 years ago? Around 1,000 years ago, what triggered the observed cultural and demographic transition?” 

Prof Koyama offers insights into gravity on cosmological scales

a) The implicit correlation prior, as a function of redshift, induced by using the cubic spline to connect the 11 redshift nodes. All three functions, ΩX, µ, and Σ, are subject to the same implicit prior, with no cross-correlation between different functions. b) The Horndeski prior correlating the nodes of ΩX, µ, and Σ. The correlation between the nodes of each function is much stronger than that introduced by the cubic spline. The Horndeski prior also introduces a strong correlation between µ and Σ. c) The correlation obtained from our “Baseline” data posterior covariance of the nodes, i.e. that determined by the data and the implicit prior correlation in Panel (a). d) The correlation corresponding to the posterior covariance derived from the Baseline data with the help of the Horndeski prior in Panel (b).

Scientists from around the world have reconstructed the laws of gravity to help get a more precise picture of the Universe and its constitution.

The standard model of cosmology is based on General Relativity, which describes gravity as the curving or warping of space and time. While the Einstein equations have been proven to work very well in our solar system, they had not been observationally confirmed to work over the entire Universe.

An international team of cosmologists, including scientists from the University of Portsmouth in England, has now been able to test Einstein's theory of gravity in the outer reaches of space.

They did this by examining new observational data from space and ground-based telescopes that measure the expansion of the Universe, as well as the shapes and the distribution of distant galaxies.

The study, published in Nature Astronomy, explored whether modifying General Relativity could help resolve some of the open problems faced by the standard model of cosmology. 

Professor Kazuya Koyama, from the Institute of Cosmology and Gravitation at the University of Portsmouth, said: “We know the expansion of the universe is accelerating, but for Einstein’s theory to work we need this mysterious cosmological constant.

“Different measurements of the rate of cosmic expansion give us different answers, also known as the Hubble tension. To try and combat this, we altered the relationship between matter and spacetime and studied how well we can constrain deviations from the prediction of General Relativity. The results were promising, but we’re still a long way off a solution.”

An earlier version of the code used in this work, MGCosmoMC, is publicly available on GitHub. 

Possible modifications to the equations of General Relativity are encoded in three phenomenological functions describing the expansion of the Universe, the effects of gravity on light, and the effects on matter. Using a statistical method known as Bayesian inference, the team reconstructed all three functions simultaneously for the first time.
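
The full analysis runs through pipelines like MGCosmoMC, but the core idea of reconstructing a function at redshift nodes under a correlated prior can be illustrated with a toy Gaussian model. In the sketch below, the node placement, the smoothness kernel, the noise level, and the fake measurements are all invented; real analyses constrain the nodes indirectly through CMB, supernova, and galaxy-survey likelihoods rather than by direct measurement.

```python
import numpy as np

# Redshift nodes at which a toy mu(z) is reconstructed (the paper uses 11).
z = np.linspace(0.0, 2.0, 11)

# Correlated prior: a squared-exponential kernel ties neighbouring nodes
# together, loosely playing the role of the paper's theory-informed prior.
corr_len, prior_var = 0.5, 0.04
C_prior = prior_var * np.exp(-0.5 * ((z[:, None] - z[None, :]) / corr_len) ** 2)

# Fake data: noisy direct measurements of mu at each node, with the truth
# set to mu = 1 everywhere (the General Relativity value).
rng = np.random.default_rng(1)
noise_sd = 0.3
data = 1.0 + rng.normal(0.0, noise_sd, size=z.size)
N_inv = np.eye(z.size) / noise_sd**2

# Conjugate Gaussian update around the GR baseline: with a linear model and
# Gaussian noise, the posterior mean and covariance are exact.
C_post = np.linalg.inv(np.linalg.inv(C_prior) + N_inv)
mu_post = 1.0 + C_post @ N_inv @ (data - 1.0)

print("posterior mean:", np.round(mu_post, 3))
print("posterior sd  :", np.round(np.sqrt(np.diag(C_post)), 3))
```

The posterior standard deviations come out well below the raw per-node noise because the prior lets neighbouring nodes borrow strength from one another, which is the effect the figure's panels (c) and (d) contrast.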

“Partial reconstructions of these functions have been done in the last 5 to 10 years, but we didn't have enough data to accurately reconstruct all three at the same time,” added Professor Koyama.

“What we found was that current observations are getting good enough to get a limit on deviations from General Relativity. But at the same time, we find it very difficult to solve this problem we have in the standard model even by extending our theory of gravity.

“One exciting prospect is that in a few years we’ll have a lot more data from new probes. This means that we will be able to continue improving the limits on modifications to General Relativity using these statistical methods.”

Upcoming missions will deliver a highly accurate 3D map of the clustered matter in the Universe, which cosmologists call large-scale structure. These maps will offer unprecedented insight into gravity at large distances.

Professor Levon Pogosian, from Simon Fraser University in Canada, said: “As the era of precision cosmology is unfolding, we are on the brink of learning about gravity on cosmological scales with high precision. Current data already draws an interesting picture, which, if confirmed with higher constraining power, could pave the way to resolving some of the open challenges in cosmology.”

Intel reports sharp sales drop, more bad news ahead

Intel has reported a 20% decline in third-quarter sales to $15.3 billion, and a shocking 85% decline in profit to $1 billion for the quarter. In the previous quarter, Intel’s revenue declined by 22%.

The chipmaker also lowered its annual revenue guidance for the second time this year, to $63 billion, down from the $65 billion–$68 billion range it expected at the end of last quarter, which was itself lower than the original guidance of $76 billion.

Revenue from the company's data center chips declined 27% during the quarter, to $4.21 billion.

Intel plans up to $10 billion in cost reductions and efficiency improvements in the next three years.

“We are planning for the economic uncertainty to persist into 2023,” said Pat Gelsinger, Intel’s CEO, on a conference call. “Inclusive in our efforts will be steps to optimize our headcount. These are difficult decisions affecting our loyal Intel family.”

“Despite the worsening economic conditions, we delivered solid results and made significant progress with our product and process execution during the quarter,” said Gelsinger. “To position ourselves for this business cycle, we are aggressively addressing costs and driving efficiencies across the business to accelerate our IDM 2.0 flywheel for the digital future.”

“As we usher in the next phase of IDM 2.0, we are focused on embracing an internal foundry model to allow our manufacturing group and business units to be more agile, make better decisions and establish a leadership cost structure,” said David Zinsner, Intel CFO. “We remain committed to the strategy and long-term financial model communicated at our Investor Meeting.”

Japanese professor Ishimoto predicts where wear will occur in engines

A research group has created an analysis method to predict wear and seizure locations in the sliding parts of engine piston pins. The breakthrough will help limit wear and tear on transportation and industrial machinery components and make them more fuel efficient.

Improvements to the efficiency of internal combustion engines are necessary if we are to overcome their environmental and sustainability problems. Reciprocating engines use reciprocating pistons to extract power from combustion and convert it into rotational motion. They are commonly used in automobiles.

The most common cause of reciprocating engine failure occurs when the oil film of the lubricating oil breaks down, allowing metal parts to come into contact and resulting in scratching and sticking. When such a seizure happens, it is impossible to start the engine.

A fluid lubrication calculation model between piston pin and connecting rod. © Jun Ishimoto

Piston pins and connecting rods in constant reciprocating and rotating motion require fluid lubrication. However, verifying wear and seizure locations under fluid lubrication requires long-term loading tests, and predicting or calculating them was long thought to be unattainable.

That changed when a group led by Professor Jun Ishimoto at Tohoku University's Institute of Fluid Science, working with Honda Motor Co., Ltd., established a multiphase fluid-structure coupled analysis method. It not only simulated and predicted tribological properties under severe loading conditions but also identified the piston pin's bow-like deformation as the cause of mechanical contact and seizure at the connecting rod edge.
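
The group's coupled solver itself is not public, but the thin-film physics it builds on starts from the Reynolds lubrication equation, d/dx(h³ dp/dx) = 6µU dh/dx, which relates film thickness h to oil pressure p under a sliding surface. The minimal 1D finite-difference sketch below solves it for an invented converging-diverging film; the real analysis adds elastic deformation of the pin and rod, cavitation, and unsteady flow, none of which appear here.

```python
import numpy as np

# Steady 1D Reynolds equation, d/dx(h^3 dp/dx) = 6*mu*U*dh/dx, discretised
# with central differences; gauge pressure is fixed to zero at both ends.
# All geometry and operating values are illustrative, not the study's.
n = 201
L = 0.02                                      # film length [m]
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
mu = 0.01                                     # oil viscosity [Pa s]
U = 5.0                                       # sliding speed [m/s]
h = 40e-6 - 20e-6 * np.sin(np.pi * x / L)     # converging-diverging film [m]

A = np.zeros((n, n))
b = np.zeros(n)
h3 = h ** 3
for i in range(1, n - 1):
    h3_e = 0.5 * (h3[i] + h3[i + 1])          # face-averaged h^3, east side
    h3_w = 0.5 * (h3[i] + h3[i - 1])          # face-averaged h^3, west side
    A[i, i - 1] = h3_w / dx**2
    A[i, i + 1] = h3_e / dx**2
    A[i, i] = -(h3_e + h3_w) / dx**2
    b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) / (2.0 * dx)
A[0, 0] = A[-1, -1] = 1.0                     # p = 0 at both boundaries

p = np.linalg.solve(A, b)
# The converging half builds load-carrying pressure; the sub-ambient region
# in the diverging half is where a fuller model would switch to a cavitation
# treatment, as the study's thin-film cavitation lubrication model does.
print(f"peak film pressure: {p.max() / 1e6:.2f} MPa "
      f"at x = {x[p.argmax()] * 1e3:.1f} mm")
```

Where such a computed film pressure collapses, metal-to-metal contact becomes possible, which is the link between the lubrication solution and the predicted wear and seizure locations.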

"Proper safety guidelines that help prevent unnecessary damage to automobile engines and other industrial machinery will be easier to create thanks to this prediction method," said Ishimoto.The researchers succeed in computationally predicting the wear and seizure locations in sliding parts of engine piston pins. Results show the coupled 3D multiphase fluid-structure analyses, factoring in the elastic deformation of both the piston-pin and connecting rod, and also the thin-film cavitation lubrication with an unsteady flow channel variation. ©Jun Ishimoto

Australian-made AI may improve suicide prevention in the future

The loss of any life can be devastating, but the loss of life from suicide is especially tragic. 

Around nine Australians take their own life each day, and it is the leading cause of death for Australians aged 15–44. Suicide attempts are more common, with some estimates stating that they occur up to 30 times as often as deaths.

“Suicide has large effects when it happens. It impacts many people and has far-reaching consequences for family, friends, and communities,” says Karen Kusuma, a UNSW Sydney Ph.D. candidate in psychiatry at the Black Dog Institute, who investigates suicide prevention in adolescents.

Ms. Kusuma and a team of researchers from the Black Dog Institute and the Centre for Big Data Research in Health recently investigated the evidence base of machine learning models and their ability to predict future suicidal behaviors and thoughts. They evaluated the performance of 54 machine learning algorithms previously developed by researchers to predict suicide-related outcomes of ideation, attempt, and death.

The meta-analysis, published in the Journal of Psychiatric Research, found machine learning models outperformed traditional risk prediction models, which have historically performed poorly, in predicting suicide-related outcomes.

“Overall, the findings show there is a preliminary but compelling evidence base that machine learning can be used to predict future suicide-related outcomes with very good performance,” Ms. Kusuma says. 

Traditional suicide risk assessment models 

Identifying individuals at risk of suicide is essential for preventing and managing suicidal behaviors. However, risk prediction is difficult.

In emergency departments (EDs), risk assessment tools such as questionnaires and rating scales are commonly used by clinicians to identify patients at elevated risk of suicide. However, evidence suggests they are ineffective in accurately predicting suicide risk in practice.

“While there are some common factors shown to be associated with suicide attempts, what the risks look like for one person may look very different in another,” Ms. Kusuma says. “But suicide is complex, with many dynamic factors that make it difficult to assess a risk profile using this assessment process.”

A post-mortem analysis of people who died by suicide in Queensland found that, of those who received a formal suicide risk assessment, 75 percent were classified as low risk and none was classified as high risk. Previous research examining the past 50 years of quantitative suicide risk prediction models also found they were only slightly better than chance in predicting future suicide risk.

“Suicide is a leading cause of years of life lost in many parts of the world, including Australia. But the way suicide risk assessment is done hasn’t developed recently, and we haven’t seen substantial decreases in suicide deaths. In some years, we’ve seen increases,” Ms. Kusuma says. 

Despite the shortage of evidence in favor of traditional suicide risk assessments, their administration remains a standard practice in healthcare settings to determine a patient’s level of care and support. Those identified as having a high risk typically receive the highest level of care, while those identified as low risk are discharged. 

“Using this approach, unfortunately, the high-level interventions aren’t being given to the people who really need help. So we must look to reform the process and explore ways we can improve suicide prevention,” Ms. Kusuma says. 

Machine learning suicide screening 

Ms. Kusuma says there is a need for more innovation in suicidology and a re-evaluation of standard suicide risk prediction models. Efforts to improve risk prediction have led to her research using artificial intelligence (AI) to develop suicide risk algorithms. 

“Having AI that could take in a lot more data than a clinician would be able to better recognize which patterns are associated with suicide risk,” Ms. Kusuma says. 

In the meta-analysis study, machine learning models outperformed the benchmarks set previously by traditional clinical, theoretical and statistical suicide risk prediction models. They correctly predicted 66 percent of people who would experience a suicide outcome and correctly predicted 87 percent of people who would not experience a suicide outcome. 
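
To make those two percentages concrete, the short calculation below shows what a 66 percent sensitivity and an 87 percent specificity would mean in a hypothetical screened cohort. The cohort size and prevalence are invented for illustration and are not drawn from the meta-analysis.

```python
# Back-of-the-envelope confusion-matrix arithmetic for the pooled figures
# quoted above; the cohort size and prevalence are hypothetical.
n, prevalence = 100_000, 0.01          # screened people, fraction with outcome
cases = n * prevalence
non_cases = n - cases

sensitivity, specificity = 0.66, 0.87
tp = sensitivity * cases               # at-risk people correctly flagged
fn = cases - tp                        # at-risk people missed
tn = specificity * non_cases           # not-at-risk people correctly cleared
fp = non_cases - tn                    # false alarms

ppv = tp / (tp + fp)                   # chance a flagged person is a true case
print(f"flagged: {tp + fp:,.0f} of {n:,}; true cases among them: {tp:,.0f} "
      f"(positive predictive value = {ppv:.1%})")
```

As with any screening tool, how those rates translate into day-to-day clinical value depends heavily on how common the outcome is in the screened population, which is one reason further validation studies matter.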

“Machine learning models can predict suicide deaths well relative to traditional prediction models and could become an efficient and effective alternative to conventional risk assessments,” Ms. Kusuma says. 

The strict assumptions of traditional statistical models do not bind machine learning models. Instead, they can be flexibly applied to large datasets to model complex relationships between many risk factors and suicidal outcomes. They can also incorporate responsive data sources, including social media, to identify peaks of suicide risk and flag times where interventions are most needed. 
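
As a toy illustration of that flexibility, the sketch below fits a gradient-boosted classifier to synthetic data with scikit-learn; such tree ensembles are a common choice in this literature because they capture non-linear interactions between risk factors without the strict assumptions of traditional statistical models. The features, the ground-truth rule, and the sample size are all made up, and none of the 54 reviewed algorithms is reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))               # 20 made-up risk factors
# Ground truth with a non-linear interaction a linear model would miss.
logit = X[:, 0] + 0.5 * X[:, 1] * X[:, 2]
y = (logit + rng.normal(size=5000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]      # predicted risk per person
print("held-out AUC:", round(roc_auc_score(y_te, scores), 3))
```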

“Over time, machine learning models could be configured to take in more complex and larger data to better identify patterns associated with suicide risk,” Ms. Kusuma says. 

The use of machine learning algorithms to predict suicide-related outcomes is still an emerging research area, with 80 percent of the identified studies published in the past five years. Ms. Kusuma says future research will also help address the risk of aggregation bias found in algorithmic models to date.

“More research is necessary to improve and validate these algorithms, which will then help progress the application of machine learning in suicidology,” Ms. Kusuma says. “While we’re still a way off implementation in a clinical setting, research suggests this is a promising avenue for improving suicide risk screening accuracy in the future.”