Intel and the Gordon and Betty Moore Foundation announced that company co-founder Gordon Moore died on March 24, 2023, at the age of 94. (Credit: Intel Corporation)

Gordon Moore, Intel co-founder, dies at 94

Moore, who set the course for the future of the semiconductor industry, devoted his later years to philanthropy.

Intel and the Gordon and Betty Moore Foundation have announced that company co-founder Gordon Moore has passed away at the age of 94. The foundation reported he died peacefully on Friday, March 24, 2023, surrounded by family at his home in Hawaii.

Moore and his longtime colleague Robert Noyce founded Intel in July 1968. Moore initially served as executive vice president until 1975, when he became president. In 1979, Moore was named chairman of the board and chief executive officer, posts he held until 1987, when he gave up the CEO position and continued as chairman. In 1997, Moore became chairman emeritus, stepping down in 2006.

Andy Grove (from left), Gordon Moore and Robert Noyce at Intel Corporation in a photo from the 1970s. (Credit: Intel Corporation)

During his lifetime, Moore also dedicated his focus and energy to philanthropy, particularly environmental conservation, science, and patient care improvements. Along with his wife of 72 years, he established the Gordon and Betty Moore Foundation, which has donated more than $5.1 billion to charitable causes since its founding in 2000.

“Those of us who have met and worked with Gordon will forever be inspired by his wisdom, humility, and generosity,” reflected foundation president Harvey Fineberg. “Though he never aspired to be a household name, Gordon’s vision and his life’s work enabled the phenomenal innovation and technological developments that shape our everyday lives. Yet those historic achievements are only part of his legacy. His and Betty’s generosity as philanthropists will shape the world for generations to come.”

Pat Gelsinger, Intel CEO, said, “Gordon Moore defined the technology industry through his insight and vision. He was instrumental in revealing the power of transistors and inspired technologists and entrepreneurs across the decades. We at Intel remain inspired by Moore’s Law and intend to pursue it until the periodic table is exhausted. Gordon’s vision lives on as our true north as we use the power of technology to improve the lives of every person on Earth. My career and much of my life took shape within the possibilities fueled by Gordon’s leadership at the helm of Intel, and I am humbled by the honor and responsibility to carry his legacy forward.”

Frank D. Yeary, chairman of Intel’s board of directors, said, “Gordon was a brilliant scientist and one of America’s leading entrepreneurs and business leaders. It is impossible to imagine the world we live in today, with computing so essential to our lives, without the contributions of Gordon Moore. He will always be an inspiration to our Intel family and his thinking is at the core of our innovation culture.”

Andy Bryant, former chairman of Intel’s board of directors, said, “I will remember Gordon as a brilliant scientist, a straight-talker, and an astute businessperson who sought to make the world better and always do the right thing. It was a privilege to know him, and I am grateful that his legacy lives on in the culture of the company he helped to create.”

Before establishing Intel, Moore and Noyce participated in the founding of Fairchild Semiconductor, where they played central roles in the first commercial production of diffused silicon transistors and later the world’s first commercially viable integrated circuits. The two had previously worked together under William Shockley, the co-inventor of the transistor and founder of Shockley Semiconductor, which was the first semiconductor company established in what would become Silicon Valley. Upon striking out on their own, Moore and Noyce hired future Intel CEO Andy Grove as the third employee, and the three of them built Intel into one of the world’s great companies. Together they became known as the “Intel Trinity,” and their legacy continues today.

In addition to Moore’s seminal role in founding two of the world’s pioneering technology companies, he famously forecast in 1965 that the number of transistors on an integrated circuit would double every year – a prediction that came to be known as Moore’s Law.

“All I was trying to do was get that message across, that by putting more and more stuff on a chip we were going to make all electronics cheaper,” Moore said in a 2008 interview.

With his 1965 prediction proven correct, in 1975 Moore revised his estimate to the doubling of transistors on an integrated circuit every two years for the next 10 years. Regardless, the idea of chip technology growing at an exponential rate, continually making electronics faster, smaller, and cheaper, became the driving force behind the semiconductor industry and paved the way for the ubiquitous use of chips in millions of everyday products.
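The compounding implied by Moore's revised prediction is easy to illustrate in a few lines of Python. The sketch below is purely illustrative: it uses the roughly 2,300-transistor Intel 4004 of 1971 as a convenient round baseline, not as a claim about any specific product roadmap.

```python
def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count assuming doubling every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Projecting the ~2,300-transistor 4004 (1971) forward 20 years at a
# two-year doubling period gives ten doublings, i.e. a factor of 1,024:
print(round(transistors(2300, 1971, 1991)))  # 2355200, i.e. ~2.4 million
```

The point of the exercise is the one Moore himself made: exponential doubling turns thousands of transistors into millions within two decades.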

In 2022, Gelsinger announced the renaming of the Ronler Acres campus in Oregon – where Intel teams develop future process technologies – to Gordon Moore Park at Ronler Acres. The RA4 building that’s home to much of Intel’s Technology Development Group was also renamed The Moore Center along with its café, The Gordon.

“I can think of no better way to honor Gordon and the profound impact he’s had on this company than by bestowing his name on this campus,” Gelsinger said at the event. “I hope we did you proud today, Gordon. And the world thanks you.”

Gordon Earle Moore was born in San Francisco on Jan. 3, 1929, to Walter Harold and Florence Almira “Mira” (Williamson) Moore. Moore was educated at San Jose State University, the University of California at Berkeley, and the California Institute of Technology, where he was awarded a Ph.D. in chemistry in 1954.

He started his research career at the Johns Hopkins Applied Physics Laboratory in Maryland. He returned to California in 1956 to join Shockley Semiconductor. In 1957, Moore co-founded Fairchild Semiconductor, a division of Fairchild Camera and Instrument, along with Robert Noyce and six other colleagues from Shockley Semiconductor. Eleven years later, Moore and Noyce co-founded Intel.

With Fairchild and Intel came financial success. Beginning with individual gifts, many of them anonymous, then forming the Moore Family Foundation, and eventually, in 2000, creating the Gordon and Betty Moore Foundation, Moore and his wife sought philanthropy to make the world a better place for future generations. His passion for impact and measurement were hallmarks of his philanthropic work and aspirations.

He received the National Medal of Technology from President George H.W. Bush in 1990, and the Presidential Medal of Freedom, the nation’s highest civilian honor, from President George W. Bush in 2002.

After retiring from Intel in 2006, Moore divided his time between California and Hawaii, serving as chairman of the board for the Gordon and Betty Moore Foundation until transitioning to chairman emeritus in 2018. Moore also served as a member of the board of directors of Conservation International and Gilead Sciences, Inc. He was a member of the National Academy of Engineering, a Fellow of the Royal Society of Engineers, and a Fellow of the Institute of Electrical and Electronics Engineers. He served as chairman of the board of trustees of the California Institute of Technology from 1995 until the beginning of 2001 and continued as a Life Trustee.

In 1950, Moore married Betty Irene Whitaker, who survives him. Moore is also survived by sons Kenneth and Steven and four grandchildren.

 

A schematic illustration of the first stars’ supernovae and observed spectra of extremely metal-poor stars. Ejecta from the supernovae enrich pristine hydrogen and helium gas with heavy elements in the universe (cyan, green, and purple objects surrounded by clouds of ejected material). If the first stars are born as a multiple stellar system rather than as isolated single stars, elements ejected by the supernovae are mixed together and incorporated into the next generation of stars. The characteristic chemical abundances produced by such a mechanism are preserved in the atmospheres of the long-lived low-mass stars observed in our Milky Way Galaxy. The team developed a machine learning algorithm to distinguish whether the observed stars were formed out of the ejecta of a single (small red stars) or multiple (small blue stars) previous supernovae, based on measured elemental abundances from the spectra of the stars. (Credit: Kavli IPMU)

Japanese-built AI discovers the first stars were not alone

By using machine learning and state-of-the-art supernova nucleosynthesis, a team of researchers has found the majority of observed second-generation stars in the universe were enriched by multiple supernovae.

Nuclear astrophysics research has shown that elements heavier than carbon in the universe are produced in stars. But the first stars, stars born soon after the Big Bang, did not contain such heavy elements, which astronomers call ‘metals’. The next generation of stars contained only a tiny amount of heavy elements produced by the first stars. Understanding the universe in its infancy requires researchers to study these metal-poor stars.

Figure 3. (from left) Visiting Senior Scientist Ken’ichi Nomoto, Visiting Associate Scientist Miho Ishigaki, Kavli IPMU Visiting Associate Scientist Tilman Hartwig, Visiting Senior Scientist Chiaki Kobayashi, and Visiting Senior Scientist Nozomu Tominaga. (Credit: Kavli IPMU, Nozomu Tominaga)

Luckily, these second-generation metal-poor stars are observed in our Milky Way Galaxy and have been studied by a team of Affiliate Members of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) to close in on the physical properties of the first stars in the universe.

The team, led by Kavli IPMU Visiting Associate Scientist and The University of Tokyo Institute for Physics of Intelligence Assistant Professor Tilman Hartwig, including Visiting Associate Scientist and National Astronomical Observatory of Japan Assistant Professor Miho Ishigaki, Visiting Senior Scientist and the University of Hertfordshire Professor Chiaki Kobayashi, Visiting Senior Scientist and National Astronomical Observatory of Japan Professor Nozomu Tominaga, and Visiting Senior Scientist and The University of Tokyo Professor Emeritus Ken’ichi Nomoto, used artificial intelligence to analyze elemental abundances in more than 450 extremely metal-poor stars observed to date. Based on the newly developed supervised machine learning algorithm trained on theoretical supernova nucleosynthesis models, they found that 68 percent of the observed extremely metal-poor stars have a chemical fingerprint consistent with enrichment by multiple previous supernovae.

The team’s results give the first quantitative constraint based on observations on the diversity of the first stars.
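To make the classification idea concrete, here is a deliberately simplified Python sketch. It is not the team's actual algorithm: the abundance values are invented, and a nearest-centroid rule stands in for the trained supervised model. Only the workflow follows the approach described above: train on labeled data drawn from nucleosynthesis models, then classify observed stars by their abundance ratios.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "training set" standing in for theoretical nucleosynthesis models.
# Features are placeholder abundance ratios, e.g. [C/Fe] and [Mg/Fe].
mono = rng.normal(loc=[1.0, 0.4], scale=0.2, size=(200, 2))   # single-SN ejecta
multi = rng.normal(loc=[0.2, 0.3], scale=0.2, size=(200, 2))  # well-mixed, multi-SN ejecta
X = np.vstack([mono, multi])
y = np.array([0] * 200 + [1] * 200)  # 0 = mono-enriched, 1 = multi-enriched

# Minimal nearest-centroid classifier as a stand-in for the trained model.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(abundances):
    """Return the class whose training centroid is closest to the star."""
    d = np.linalg.norm(centroids - np.asarray(abundances), axis=1)
    return int(np.argmin(d))

# In this toy setup, a carbon-enhanced star falls in the mono-enriched class:
print(classify([1.1, 0.5]))  # 0
```

In the published work the classifier was trained on theoretical supernova yields and applied to the measured abundances of more than 450 extremely metal-poor stars; the sketch above only conveys the shape of that pipeline.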

“Multiplicity of the first stars had only been predicted from numerical simulations, and until now there was no way to examine the theoretical prediction observationally,” said lead author Hartwig. “Our result suggests that most first stars formed in small clusters, so that multiple of their supernovae could contribute to the metal enrichment of the early interstellar medium,” he said.

Figure 2. Carbon vs. iron abundance of extremely metal-poor (EMP) stars. The colour bar shows the probability of mono-enrichment from our machine learning algorithm. Stars above the dashed lines (at [C/Fe] = 0.7) are called carbon-enhanced metal-poor (CEMP) stars, and most of them are mono-enriched. (Credit: Hartwig et al.)

“Our new algorithm provides an excellent tool to interpret the big data we will have in the next decade from ongoing and future astronomical surveys across the world,” said Kobayashi, a Leverhulme Research Fellow.

“At the moment, the available data of old stars are the tip of the iceberg within the solar neighborhood. The Prime Focus Spectrograph, a cutting-edge multi-object spectrograph on the Subaru Telescope developed by the international collaboration led by Kavli IPMU, is the best instrument to discover ancient stars in the outer regions of the Milky Way, far beyond the solar neighborhood,” said Ishigaki.

The new algorithm invented in this study opens the door to making the most of diverse chemical fingerprints in metal-poor stars discovered by the Prime Focus Spectrograph.

“The theory of the first stars tells us that the first stars should be more massive than the Sun. The natural expectation was that the first star was born in a gas cloud containing a mass a million times more than the Sun. However, our new finding strongly suggests that the first stars were not born alone, but instead formed as a part of a star cluster or a binary or multiple-star system. This also means that we can expect gravitational waves from the first binary stars soon after the Big Bang, which could be detected in future missions in space or on the Moon,” said Kobayashi.

Hartwig has made the code developed in this study publicly available at https://gitlab.com/thartwig/emu-c.

Università di Trento prof Passerini wins €8M grant to revolutionize human-centric AI systems

The new EU-funded project, which will kick off in autumn 2023 with 21 partner organizations from 9 countries across Europe, is set to develop a new generation of human-centric AI systems and to strengthen Europe's leadership in this area

Artificial Intelligence (AI) holds tremendous potential to enhance human decisions and avoid cognitive overload and bias in high-stakes scenarios. To date, however, the adoption of AI-based support systems has been minimal in settings such as hospitals, tribunals, and public administrations.

The EU recognizes the need to foster research and innovation in this field, and on 13 March, TANGO secured €8M to develop the theoretical foundations and the computational framework for synergistic human-machine decision-making, paving the way for the next generation of human-centric AI systems. The project, which will kick off in autumn 2023 with 21 partner organizations from 9 countries across Europe, is set to strengthen Europe's leadership in this area.

TANGO argues that for AI to fully develop its enormous potential in terms of positive impact on individuals, society, and the economy, we need to completely rethink how AI systems are conceived. People should feel they can trust the systems they interact with, in terms of the reliability of their predictions and decisions, the capacity of the systems to understand their needs, and guarantees that they are genuinely aiming at supporting them rather than some undisclosed third party. In other words, a symbiosis should be established between humans and machines, in which all parties are aligned in terms of values, goals, and beliefs, and support and complement each other to reach objectives beyond what each would be able to do by itself.

“It takes two to TANGO! Our perspective is that a deep mutual understanding between humans and machines is essential for the development of truly effective and innovative AI systems that can expand human reasoning and decision-making capabilities,” says project coordinator Andrea Passerini.

The potential impact on individuals and society of the TANGO framework will be evaluated on a pool of real-world use cases of extremely high social impact, namely supporting women during pregnancy and postpartum, supporting surgical teams in intraoperative decision-making, supporting loan officers and applicants in credit lending decision processes, and helping public policymakers in designing incentives and allocating funds. The success of these case studies will foster the adoption of TANGO as the framework of reference for developing a new generation of synergistic AI systems and will strengthen the leadership of Europe in human-centric AI.

Schematic diagram showing the research flow of constructing the mathematical model and the realization of high-speed computation by making full use of it. The graph in the figure indicates the high accuracy and wide applicability of the new mathematical model developed in this study.

Japanese prof Nakata designs simplified calculations that reproduce complex plasma flows approximately 1,500 times faster

Advances in theoretical studies on turbulence-driven heat transport in fusion plasmas

Accurate and fast calculation of heat flow (heat transport) due to fluctuations and turbulence in plasmas is an important issue in elucidating the physical mechanisms and in predicting and controlling the performance of fusion reactors. 

A research group led by Associate Professor Motoki Nakata of the National Institute for Fusion Science in Japan and Tomonari Nakayama, a Ph.D. student at the Graduate University for Advanced Studies, has successfully developed a high-precision mathematical model to predict the heat transport level. This was achieved by applying a mathematical optimization method to a large body of turbulence and heat-transport data obtained from large-scale numerical calculations on a supercomputer. The new mathematical model makes it possible to predict turbulence and heat transport in fusion plasmas using only simplified, small-scale numerical calculations, which are approximately 1,500 times faster than conventional large-scale ones. This result will not only accelerate research on fusion plasma turbulence but also contribute to the study of various complex phenomena involving fluctuations, turbulence, and flows.

A paper summarizing this research result will be published in the online edition of Scientific Reports, an open-access scientific journal, on March 16.

Research Background

In general, large-scale numerical calculations using supercomputers are indispensable to elucidate the physical mechanisms of complex structures and motions, such as atmospheric and ocean currents, neuronal signal transduction in the brain, and the molecular dynamics of proteins.

In a fusion reactor, high-temperature plasmas (high-temperature gaseous material in which electrons and ions move separately) are confined by magnetic fields, and a complex state called turbulence can occur in the plasma. The complex motion of vortices of various sizes causes heat flow (heat transport) in the turbulence. If the heat confined in the plasma is lost due to turbulence, the performance of the fusion reactor is degraded, and thus plasma turbulence is one of the most important issues in fusion research.

Large-scale numerical calculations on supercomputers have been used to investigate the generation mechanism of plasma turbulence, how to suppress it, and the heat transport it causes. Nonlinear calculations are used to solve the equations of motion of the plasma. However, since turbulence varies depending on the plasma state, an enormous amount of computational resources is required to carry out large-scale nonlinear calculations for the entire plasma region across a variety of states. There has been much research attempting to reproduce the results of nonlinear calculations with simplified theoretical models or small-scale numerical calculations, but degraded accuracy under different plasma conditions and a limited range of applicability remain open problems. A new mathematical model that resolves these issues has therefore been sought.

Research Results

A research group led by Associate Professor Motoki Nakata of the National Institute for Fusion Science and Tomonari Nakayama, a Ph.D. student at the Graduate University for Advanced Studies, together with Professor Mitsuru Honda of Kyoto University, Dr. Emi Narita of the National Institute for Quantum Science and Technology, and Associate Professor Masanori Nunami and Assistant Professor Seikichi Matsuoka of the National Institute for Fusion Science, has developed a novel method to reproduce the nonlinear calculation results for turbulence and heat transport using “linear” calculations – small-scale calculations based on a simplified equation of motion. As a result, high-speed, high-accuracy prediction with wider applicability has been achieved.

First, Prof. Nakata and his colleagues performed a number of large-scale nonlinear calculations to analyze turbulence at multiple locations in the plasma and for many temperature-distribution states, obtaining data on the turbulence intensity and the heat transport level. They then proposed a simplified mathematical model, based on physical considerations, to reproduce these data. The model contains eight tuning parameters, and it was necessary to find the values that best reproduce the data from the large-scale nonlinear calculations. Mr. Nakayama searched for the optimal values among a huge number of combinations by applying mathematical optimization techniques used in path finding, machine learning, and related fields. As a result, he succeeded in constructing a new mathematical model that maintains high accuracy and greatly expands the range of applicability compared with the models used in previous research.

By combining this mathematical model with linear calculations for plasma instabilities, it is now possible to predict plasma turbulence and heat transport level with high accuracy, about 1,500 times faster than conventional large-scale nonlinear calculations (Figure).
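The parameter-tuning step can be illustrated with a toy example. The team's actual model has eight tuning parameters calibrated against large-scale nonlinear simulation data; the Python sketch below fits just two parameters of a hypothetical power-law model for the heat flux to synthetic "simulation" points, with SciPy's least-squares fitter standing in for the optimization methods the team applied.

```python
import numpy as np
from scipy.optimize import curve_fit

def reduced_model(gamma, c0, c1):
    """Hypothetical reduced model: heat flux as a power law of a linear growth rate."""
    return c0 * gamma ** c1

# Synthetic stand-in for large-scale nonlinear simulation results
# (true parameters c0 = 2.0, c1 = 1.5, plus 5% noise; units are arbitrary).
rng = np.random.default_rng(1)
gamma = np.linspace(0.5, 3.0, 30)
q_sim = 2.0 * gamma ** 1.5 * (1 + 0.05 * rng.normal(size=gamma.size))

# Least-squares fit of the tuning parameters to the "simulation" data.
params, _ = curve_fit(reduced_model, gamma, q_sim, p0=[1.0, 1.0])
print(np.round(params, 2))  # close to [2.0, 1.5]
```

Once such a calibrated reduced model is in hand, evaluating it requires only cheap small-scale (here, closed-form) computations rather than rerunning the expensive nonlinear simulations, which is the source of the roughly 1,500-fold speed-up reported above.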

Significance of the Results and Future Developments

The newly constructed fast and accurate mathematical model will greatly accelerate research on turbulence in fusion plasmas. It will also advance research on integrated simulations that combine the mathematical model of turbulence with numerical simulations of other phenomena (e.g., temporal variations in temperature and density distributions, and the confining magnetic field) in order to analyze the fusion plasma as a whole. Furthermore, the model is expected to contribute to elucidating the mechanism by which turbulence-driven heat transport is suppressed, and to make a significant contribution to research towards innovative fusion reactors based on such a mechanism.

The challenge of predicting "complexity" from "simplicity" is a common issue in various sciences and technologies that deal with complex structures and dynamics. In the future, we will apply the modeling methods developed in this research to the study of complex flows, not limited to fusion plasmas.

Massive galaxy cluster Abell S1063 as captured by NASA's Hubble Space Telescope. (Credit: NASA, ESA, and M. Montes, University of New South Wales)

Institute for Advanced Study astrophysicists deploy AI to show how to 'weigh' galaxy clusters

Scholars from the Institute for Advanced Study in Princeton, New Jersey, have used a machine learning algorithm known as “symbolic regression” to generate new equations that help solve a fundamental problem in astrophysics: inferring the mass of galaxy clusters.

Galaxy clusters are the most massive objects in the Universe: a single cluster contains anything from a hundred to many thousands of galaxies, alongside collections of plasma, hot X-ray emitting gas, and dark matter. These components are held together by the cluster’s gravity. Understanding such galaxy clusters is crucial to pinning down the origin and continuing evolution of our universe.

The performance of the new equation from symbolic regression is shown in the middle panel, whereas that of the traditional method is shown in the top panel. The lower panel explicitly quantifies the reduction in the scatter. (Credit: Digvijay Wadekar)

Perhaps the most crucial quantity determining the properties of a galaxy cluster is its total mass. But measuring this quantity is difficult—galaxies cannot be “weighed” by placing them on a scale. The problem is further complicated by the fact that the dark matter that makes up much of a cluster’s mass is invisible. Instead, scientists infer the mass of a cluster from other observable quantities.

Previously, scholars considered a cluster’s mass to be roughly proportional to another, more easily measurable quantity called the “integrated electron pressure” (or the Sunyaev-Zel’dovich flux, often abbreviated to YSZ). The theoretical foundations of the Sunyaev-Zel’dovich flux were laid in the early 1970s by Rashid Sunyaev, a current Distinguished Visiting Professor in the Institute’s School of Natural Sciences, and his collaborator Yakov B. Zel’dovich.

However, the integrated electron pressure is not a reliable proxy for mass because it can behave inconsistently across different galaxy clusters. The outskirts of clusters tend to exhibit very similar YSZ, but their cores are much more variable. The YSZ/mass equivalence was problematic in that it gave equal weight to all parts of the cluster. As a result, a lot of “scatter” was observed, meaning that the error bars on the mass inferences were large.

Digvijay Wadekar, a current Member of the Institute’s School of Natural Sciences, has worked with collaborators across ten different institutions to develop an AI program to improve the understanding of the relationship between the mass and the YSZ.

The trade-offs between different machine learning techniques. Symbolic regression is much less powerful than deep neural networks on high-dimensional datasets, but it is much more interpretable as it provides mathematical equations as output. (Credit: Digvijay Wadekar)

Wadekar and his collaborators “fed” their AI program with state-of-the-art cosmological simulations that have been developed by groups at the Harvard & Smithsonian Center for Astrophysics, and at the Flatiron Institute's Center for Computational Astrophysics (CCA) in New York. Their program searched for and identified additional variables that might make inferring the mass from the YSZ more accurate.

AI is useful for identifying new parameter combinations that human analysts could overlook. While it is easy for human analysts to identify two significant parameters in a data set, AI is better able to parse through high volumes of data, often revealing unexpected influencing factors.

More specifically, the AI method that Wadekar and his collaborators employed is known as symbolic regression. “Right now, a lot of the machine learning community focuses on deep neural networks,” Wadekar explained. “These are very powerful but the drawback is that they are almost like a black box. We cannot understand what goes on in them. In physics, if something is giving good results, we want to know why it is doing so. Symbolic regression is beneficial because it searches a given dataset and generates simple, mathematical expressions in the form of simple equations that you can understand. It provides an easily interpretable model.”

Their symbolic regression program (called PySR) handed them a new equation, which was able to better predict the mass of the galaxy cluster by augmenting YSZ with information about the cluster’s gas concentration. Wadekar and his collaborators then worked backward from this AI-generated equation and tried to find a physical explanation for it. They realized that gas concentration is correlated with the noisy areas of clusters where mass inferences are less reliable. Their new equation, therefore, improved mass inferences by providing a way for these noisy areas of the cluster to be “down-weighted”. In a sense, the galaxy cluster can be compared to a spherical doughnut. The new equation extracts the jelly at the center of the doughnut (that introduces larger errors), and concentrates on the doughy outskirts for more reliable mass inferences.
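To give a flavor of how symbolic regression produces interpretable equations, the toy Python sketch below brute-forces a tiny expression space over two invented cluster observables and recovers a hidden relation exactly. This is not the team's pipeline: the real analysis used the far more capable PySR package on simulated cluster data, and both the variable names and the "true" relation here are made up for illustration.

```python
import itertools
import numpy as np

# Invented observables standing in for cluster properties.
rng = np.random.default_rng(2)
ysz = rng.uniform(1.0, 5.0, 100)    # stand-in for integrated electron pressure
cgas = rng.uniform(0.1, 0.9, 100)   # stand-in for gas concentration
mass = ysz * np.sqrt(cgas)          # hidden "true" relation to recover

# A tiny operator library; real tools search far richer expression spaces.
unary = {"id": lambda x: x, "sq": np.square, "sqrt": np.sqrt}
binary = {"+": np.add, "*": np.multiply, "-": np.subtract}

# Exhaustive search: score every candidate expression by mean squared error.
best = (np.inf, None)
for f, g, op in itertools.product(unary, unary, binary):
    pred = binary[op](unary[f](ysz), unary[g](cgas))
    err = np.mean((pred - mass) ** 2)
    if err < best[0]:
        best = (err, f"{op}({f}(ysz), {g}(cgas))")

print(best[1])  # *(id(ysz), sqrt(cgas)) -- the hidden relation, as a readable formula
```

The output is a human-readable formula rather than a black-box model, which is exactly the property Wadekar highlights: one can take the recovered equation and reason backward about its physics.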

The new equations can provide observational astronomers engaged in upcoming galaxy cluster surveys with better insights into the mass of the objects that they observe. “There are quite a few surveys targeting galaxy clusters which are planned shortly,” Wadekar stated. “Examples include the Simons Observatory (SO), the Stage 4 CMB experiment (CMB-S4), and an X-ray survey called eROSITA. The new equations can help us in maximizing the scientific return from these surveys.”

He also hopes that this publication will be just the tip of the iceberg when it comes to using symbolic regression in astrophysics. “We think that symbolic regression is highly applicable to answering many astrophysical questions,” Wadekar added. “In a lot of cases in astronomy, people make a linear fit between two parameters and ignore everything else. But nowadays, with these tools, you can go further. Symbolic regression and other artificial intelligence tools can help us go beyond existing two-parameter power laws in a variety of different ways, ranging from investigating small astrophysical systems like exoplanets to galaxy clusters, the biggest things in the universe.”