The new platform technology, modeled after the brain, consists of a mesh of silver wires on a bed of electrodes.

Experimental brain-like supercomputing system: Unlocking the potential of neuromorphic nanowire networks

Researchers have long drawn inspiration from the intricacies of the human brain when designing advanced computing systems. The California NanoSystems Institute (CNSI) at UCLA has been at the forefront of developing a brain-inspired platform technology for computation. Its experimental computing system learned to identify handwritten numbers with an accuracy of 93.4%, a result attributed to a novel training algorithm that provides continuous real-time feedback to the system, surpassing a conventional machine-learning approach.

The Brain-Like System: A Tangled Network of Nanowires

The brain-like computing system developed by researchers at UCLA and the University of Sydney is composed of a network of nanowires made of silver and selenium. These nanoscale wires self-organize into a complex, tangled structure on top of an array of electrodes. Unlike traditional computers, where memory and processing modules are separate entities, this nanowire network physically reconfigures itself in response to stimuli, with memory distributed throughout its atomic structure. The connections between wires can form or break, similar to the behavior of synapses in the biological brain.

Training the Brain-Like System with Handwritten Numbers

To train and test the brain-like system, the researchers used a dataset of handwritten numbers provided by the National Institute of Standards and Technology. The images were communicated to the system pixel by pixel using pulses of electricity, with varying voltages representing light and dark pixels. A streamlined algorithm developed at the University of Sydney allowed the system to process multiple streams of data simultaneously and adapt dynamically, leveraging its brain-like capabilities.
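
The encoding described above can be sketched in software. The specific voltage levels and the linear pixel-to-voltage mapping below are illustrative assumptions, not the published parameters:

```python
import numpy as np

def pixels_to_voltage_pulses(image, v_dark=0.1, v_light=1.0):
    """Map each pixel of a grayscale image (values 0-255) to a voltage
    pulse amplitude, streamed pixel by pixel in raster order. The two
    voltage levels and the linear mapping are illustrative choices."""
    flat = image.astype(float).ravel() / 255.0      # normalize to [0, 1]
    return v_dark + flat * (v_light - v_dark)       # linear map to volts

# A tiny 2x2 "image": one black pixel, one mid-gray, two white
demo = np.array([[0, 128], [255, 255]])
pulses = pixels_to_voltage_pulses(demo)
print(pulses)   # one pulse amplitude per pixel, in raster order
```

Streaming amplitudes one pixel at a time mirrors how the images were delivered to the nanowire network as a temporal sequence rather than a static array.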

Real-Time Feedback: A Key to Enhanced Learning

The groundbreaking aspect of this experimental system lies in the real-time feedback provided during training. Unlike conventional approaches, where training is performed after processing a batch of data, the brain-like system received continuous information about its success at the task as it learned. This constant feedback loop proved highly effective: the system identified handwritten numbers with 93.4% accuracy, compared with 91.4% for a conventional machine-learning approach.
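
The distinction between continuous per-sample feedback and batch-style training can be illustrated with a generic linear classifier. This is not the team's algorithm, just a minimal sketch of the two feedback schedules on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, linearly separable two-class data (a stand-in for digit features)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

def train(X, y, online, epochs=20, lr=0.1):
    """Perceptron-style trainer. online=True updates after every sample
    (continuous feedback); online=False applies one update per full pass."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        if online:
            for xi, yi in zip(X, y):
                err = yi - (xi @ w > 0)     # immediate success/failure signal
                w += lr * err * xi
        else:
            err = y - (X @ w > 0)           # feedback only after the whole batch
            w += lr * (err @ X) / len(X)
    return w

accs = {}
for mode in (True, False):
    w = train(X, y, online=mode)
    accs["online" if mode else "batch"] = np.mean((X @ w > 0) == y)

print(accs)  # per-sample feedback typically reaches higher accuracy
```

The per-sample schedule corrects the weights immediately after every example, which is the software analogue of the continuous feedback the nanowire system received.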

The brain-like computing system has some unique features that distinguish it from other computing approaches. One of these features is the system's distributed memory, which stores past inputs within the nanowire network. This embedded memory enhances the system's learning capabilities, making it highly accurate in identifying handwritten numbers.

Compared to silicon-based artificial intelligence systems, the brain-like system has the potential to operate with significantly lower power consumption. It can perform tasks that current AI systems find difficult, such as analyzing patterns in weather, traffic, and other dynamic systems.

The research team employed a co-design approach, developing both the hardware and software simultaneously. This approach ensures optimal integration between the brain-like system and its custom algorithm, resulting in enhanced performance. The combination of brain-like memory and processing capabilities embedded in physical systems with continuous adaptation and learning opens up new possibilities for edge computing.

Edge computing processes complex data on the spot without relying on communication with remote servers, making it suitable for various applications in robotics, autonomous navigation, smart devices, health monitoring, and more.

The brain-like computing system is still in the development phase, but its potential impact on various industries is immense. Its ability to perform complex tasks with lower energy consumption makes it an attractive alternative to traditional AI systems. The neuromorphic nanowire networks can unlock new opportunities in fields such as robotics, autonomous vehicles, the Internet of Things, and multi-location sensor coordination.

The experimental brain-like computing system represents a significant advancement in the field of neuromorphic computing. By harnessing the unique properties of nanowire networks and integrating them with custom algorithms, researchers have shown the potential for creating highly efficient and adaptable computing systems. As this technology continues to evolve, we can expect to see further breakthroughs in AI, edge computing, and various other domains, transforming the way we process and analyze complex data.

The study's corresponding authors include James Gimzewski, a distinguished professor of chemistry and member of the California NanoSystems Institute at UCLA, Adam Stieg, a research scientist and associate director of CNSI, Zdenka Kuncic, a professor of physics at the University of Sydney, and Ruomin Zhu, a doctoral student at the University of Sydney and first author of the study. Other co-authors include Sam Lilak, Alon Loeffler, and Joseph Lizier, all contributing to the research at UCLA and the University of Sydney.


The research was supported by the University of Sydney and the Australian-American Fulbright Commission.

The future of supercomputing: Harnessing the power of chiral magnets

In the pursuit of more energy-efficient supercomputing, researchers from UCL and Imperial College London have made significant strides by utilizing the unique properties of chiral magnets. Their groundbreaking study explores the potential of physical reservoir computing, a brain-inspired approach that uses the intrinsic physical properties of materials. By applying an external magnetic field and manipulating temperature, the researchers demonstrate how chiral magnets can be reconfigured to adapt to different machine-learning tasks, paving the way for more efficient and adaptable supercomputing systems.

Limitations of Traditional Computing

Traditional computing systems, with their separate units for data storage and processing, consume vast amounts of electricity. This architecture necessitates constant shuffling of information between the two units, resulting in energy waste and excessive heat generation. This inefficiency is particularly problematic for machine learning applications that require large datasets for processing. Training a single AI model can generate hundreds of tons of carbon dioxide, highlighting the urgent need for more sustainable computing solutions.

Introduction of Physical Reservoir Computing

Physical reservoir computing is a promising neuromorphic approach that aims to eliminate the need for distinct memory and processing units, leading to more efficient data processing. In addition to its potential as a sustainable alternative to conventional computing, physical reservoir computing can be seamlessly integrated into existing circuitry, offering energy-efficient additional capabilities.
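
In software, reservoir computing is often prototyped as an echo state network: a fixed random recurrent "reservoir" performs the nonlinear transformation (standing in here for the material's physics), and only a linear readout is trained. A minimal sketch, with all sizes and scalings chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Task: predict the next value of a sine wave from its history
t = np.linspace(0, 20 * np.pi, 2000)
u = np.sin(t)

n_res = 100
W_in = rng.uniform(-0.5, 0.5, size=n_res)         # input weights (fixed)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 for stability

# Drive the reservoir; its state plays the role of the physical medium
x = np.zeros(n_res)
states = []
for ut in u[:-1]:
    x = np.tanh(W @ x + W_in * ut)
    states.append(x.copy())
S = np.array(states)

# Train ONLY the linear readout (ridge regression), as in reservoir computing
target = u[1:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ target)

pred = S @ W_out
mse = np.mean((pred[500:] - target[500:]) ** 2)   # skip the initial transient
print(f"one-step prediction MSE: {mse:.2e}")
```

Because the reservoir itself is never trained, the same fixed medium can be reused for different tasks by fitting a new readout, which is what makes physical substrates such as chiral magnets attractive.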

Role of Chiral Magnets

In their study, the international team of researchers focused on chiral magnets as a computational medium. Chiral magnets possess unique physical properties that make them well-suited for specific computing tasks. By leveraging an external magnetic field and temperature variations, the researchers were able to adapt the characteristics of chiral magnets to different machine-learning tasks.

Exploring the Phases of Chiral Magnets

The team discovered that different magnetic phases of chiral magnets excelled at different types of computing tasks. For example, the skyrmion phase, characterized by swirling magnetized particles in a vortex-like pattern, exhibited a remarkable memory capacity ideal for forecasting tasks. On the other hand, the conical phase, with its non-linearity, proved to be excellent for transformation tasks and classification, such as identifying whether an image contains a cat or a dog.

Implications for Energy Efficiency 

By harnessing the unique properties of chiral magnets and their ability to adapt to specific computing tasks, physical reservoir computing holds the potential to revolutionize energy efficiency in computing. With its integration into existing circuitry, this approach could significantly reduce energy consumption while increasing computational capabilities.

Future Directions and Challenges

While the findings of this study are promising, the researchers acknowledge that further research is needed to identify commercially viable and scalable materials and device architectures. The development of practical applications using chiral magnets as a computational medium requires careful consideration of various factors, including cost-effectiveness and manufacturing processes.

Collaborative Efforts and Funding

This groundbreaking study was the result of collaboration between researchers from UCL, Imperial College London, the University of Tokyo, and Technische Universität München. The project received support from esteemed organizations such as the Leverhulme Trust, the Engineering and Physical Sciences Research Council (EPSRC), the Royal Academy of Engineering, the Japan Science and Technology Agency, the Katsu Research Encouragement Award, Asahi Glass Foundation, and the DFG (German Research Foundation).

Conclusion

The study led by UCL and Imperial College London researchers has brought us closer to realizing the full potential of physical reservoir computing. By leveraging the unique properties of chiral magnets and their adaptability to different machine-learning tasks, this brain-inspired approach offers the promise of significantly reducing energy consumption in supercomputing systems. As researchers continue to explore the possibilities and overcome challenges, the integration of physical reservoir computing into existing circuitry holds tremendous potential for creating a more sustainable and efficient future of supercomputing.

Carichino's LEAPS-MPS award highlights the significance of her research in modeling the interaction between the eye and contact lenses

Ortho-k lenses are a unique type of contact lens that can help reduce myopic progression in children and young adults. Worn overnight, they gently reshape the cornea, providing clear daytime vision without glasses or traditional contact lenses. Finding the right fit, however, can be challenging because of the different lens shapes available. Lucia Carichino's research aims to minimize the trial and error involved in this process by developing a computational tool that can predict how a particular lens will interact with an individual's eye.

Carichino's research involves working with Riley Supple, a mathematical modeling Ph.D. student, and Kara Maki, an associate professor in the School of Mathematics and Statistics at RIT. They are developing a mathematical model that can anticipate the shape of the eye based on the use of a specific contact lens. By analyzing the interaction between the eye and the lens, their computational tool aims to help eye doctors and contact lens manufacturers select the most appropriate lens for each patient, improving the overall fitting process.

Carichino's research project highlights how mathematics and science intersect in biomedical mathematics. Mathematical modeling plays an important role in predicting and simulating experimental outcomes in biology. By incorporating mathematical components into the study of biology, Carichino aims to optimize the fitting process of ortho-k lenses, benefiting millions of people worldwide who rely on contact lenses.

Carichino's research excellence is recognized by the LEAPS-MPS award program, which also supports increasing diversity and inclusion in the mathematical and physical science fields. She collaborates with RIT's diversity, equity, and inclusion initiatives for the College of Science, and the Undergraduates Research Training Initiative for Scientific Enhancement (U-RISE) program for deaf and hard-of-hearing students offered at the National Technical Institute for the Deaf. Carichino is committed to involving underrepresented groups in her research endeavors, promoting diversity and inclusion in the scientific community.

In conclusion, Lucia Carichino's receipt of the LEAPS-MPS award from the NSF highlights the significance of her research in computational modeling of the interaction between the eye and contact lenses. By developing a mathematical model to predict the fit and performance of ortho-k lenses, Carichino aims to revolutionize the fitting process and enhance the comfort of contact lens wearers. This interdisciplinary research project showcases the importance of mathematics in the field of biology and highlights the commitment to diversity and inclusion in the scientific community. With this award, Carichino is taking a significant step towards advancing eye care and positively impacting the lives of millions of contact lens users worldwide.

Researchers can now use a new statistical technique called prediction-powered inference to safely test scientific hypotheses using machine learning predictions. DALL-E, an AI system, has generated an artistic interpretation of the technique, as shown in the image. This breakthrough was made possible due to the efforts of Michael Jordan.

How to use AI for discovery without leading science astray

In the last decade, artificial intelligence (AI) has played a significant role in scientific research. Machine learning models have been used to predict protein structures, estimate deforestation in the Amazon rainforest, and classify distant galaxies in the search for exoplanets. But while AI has the potential to accelerate scientific discovery, it also carries the risk of misleading or false results: much as chatbots sometimes fabricate responses, machine learning models can produce inaccurate predictions.

To address this issue, researchers at the University of California, Berkeley, have introduced a new statistical technique called prediction-powered inference (PPI). This technique allows scientists to use predictions obtained from machine learning models while correcting for potential errors and biases. In this article, we will explore how PPI works, its applications in various scientific domains, and its significance in data-intensive research.

When conducting scientific experiments, researchers aim to obtain a range of plausible answers rather than a single definitive one. They achieve this by calculating confidence intervals, which capture the variability of results across repeated experiments. Machine learning systems, however, output individual point predictions and cannot by themselves supply the uncertainty assessments scientists require.
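
For a concrete sense of what such an interval looks like, here is a standard normal-approximation confidence interval for a mean, computed from simulated repeated measurements:

```python
import numpy as np

def mean_confidence_interval(samples, z=1.96):
    """95% normal-approximation confidence interval for the mean."""
    samples = np.asarray(samples, dtype=float)
    m = samples.mean()
    se = samples.std(ddof=1) / np.sqrt(len(samples))   # standard error
    return m - z * se, m + z * se

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=200)   # simulated repeated measurements
lo, hi = mean_confidence_interval(data)
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```

The interval shrinks as more measurements accumulate; it is exactly this kind of uncertainty statement that a raw model prediction does not come with.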

For instance, AlphaFold, a popular machine learning model used for predicting protein structures, can only offer a single structure prediction without any measure of confidence. Scientists may be tempted to treat these predictions as data and compute classical confidence intervals, disregarding the fact that machine learning models have hidden biases that come from the training data. Such biases can skew the results, especially when exploring phenomena at the boundaries between known and unknown realms.

To address these limitations, researchers at UC Berkeley developed a technique called Prediction-Powered Inference (PPI). PPI leverages a small amount of unbiased real-world data to correct the output of large, general models like AlphaFold, specifically in the context of scientific inquiries. By combining these two sources of evidence, PPI enables the formation of valid confidence intervals.

The central idea behind PPI is to combine the predictions from machine learning models with unbiased data related to the specific hypothesis being investigated. This approach allows scientists to leverage the benefits of machine learning models while correcting for potential errors and biases. The key advantage of PPI is its ability to provide reliable confidence intervals even when the nature of errors in the machine learning model is unknown at the outset.
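
For the simplest case, estimating a mean, the idea can be sketched as follows: average the model's predictions over a large unlabeled set, then subtract the average prediction error (the "rectifier") measured on a small gold-standard sample. This is a simplified illustration of the approach; the published method covers more general estimands:

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean = 10.0

def model(y):
    """Stand-in ML predictor: correlated with the truth but biased and noisy."""
    return y + 1.5 + rng.normal(0, 1.0, size=np.shape(y))

# Large unlabeled set: in practice only the model's predictions are observed
Y_big_hidden = rng.normal(true_mean, 2.0, size=10_000)
preds_big = model(Y_big_hidden)

# Small labeled set: both predictions and gold-standard labels available
Y_small = rng.normal(true_mean, 2.0, size=200)
preds_small = model(Y_small)

# Rectifier: average prediction error on the labeled data
rect = preds_small - Y_small
theta_pp = preds_big.mean() - rect.mean()      # bias-corrected estimate

# Normal-approximation CI combining both sources of sampling variance
var = preds_big.var(ddof=1) / preds_big.size + rect.var(ddof=1) / rect.size
lo, hi = theta_pp - 1.96 * np.sqrt(var), theta_pp + 1.96 * np.sqrt(var)
print(f"PPI estimate: {theta_pp:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
print(f"naive estimate (ignoring bias): {preds_big.mean():.2f}")
```

Note how the naive estimate inherits the model's bias, while the rectified estimate recovers the true mean with an honest interval.
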

Applying PPI in Scientific Research

PPI has proven to be effective in various scientific domains, ranging from environmental studies to astrophysics and genetics. Let's delve into some notable applications of PPI in these fields:

  • Environmental Studies: Estimating Deforestation Levels in the Amazon

One of the uses of PPI is in estimating deforestation levels in the Amazon rainforest by utilizing satellite imagery. Machine learning models trained on satellite data can accurately identify deforestation in specific regions of the forest. However, when combined to estimate deforestation across the entire Amazon, these models can yield skewed confidence intervals due to their inability to recognize newer patterns of deforestation. PPI helps to correct this bias by incorporating a small number of human-labeled regions of deforestation.

  • Protein Folding: Predicting Protein Structures with Confidence

Protein folding is a critical process in comprehending protein function and designing therapeutics. AlphaFold, a machine learning model, has shown remarkable success in predicting protein structures. However, it cannot provide confidence intervals for its predictions. By applying PPI, scientists can integrate additional unbiased data to obtain valid confidence intervals for protein structures predicted by AlphaFold.

  • Astrophysics: Classifying Distant Galaxies

Machine learning models have been utilized to classify distant galaxies based on their characteristics, aiding in the search for exoplanets. However, these models might generate unrealistic output when faced with phenomena at the edge of our current knowledge. PPI offers a solution by allowing scientists to correct potential errors and biases in the models using unbiased data.

  • Genetics: Exploring Gene Expression Levels

Understanding gene expression levels is crucial in deciphering genetic mechanisms and their role in various biological processes. Machine learning models can be used to predict gene expression levels based on diverse factors. PPI enables scientists to incorporate additional unbiased data to obtain valid confidence intervals for gene expression predictions.

Other Applications of PPI

The potential applications of PPI are vast and not limited to the examples mentioned above. The technique can be utilized in diverse research fields, including but not limited to plankton counting, investigating the relationship between income and private health insurance, and exploring other scientific phenomena.

AI has transformed scientific research by speeding up processes and enabling predictions in various domains. However, it is crucial to use AI models with caution, considering the potential for misleading or false results. UC Berkeley researchers have developed the prediction-powered inference (PPI) technique to overcome this challenge, enabling scientists to use machine learning models while correcting for errors and biases.

PPI incorporates a small amount of unbiased data to form valid confidence intervals, providing scientists with the necessary uncertainty assessments. By applying PPI, scientists can leverage the power of AI models in diverse scientific inquiries, ranging from environmental studies to astrophysics and genetics. This technique contributes to the advancement of modern data-intensive, model-intensive, and collaborative science.

With the advent of PPI, researchers can confidently embrace AI for scientific discovery, knowing that they can mitigate potential errors and biases. The future of AI in scientific research appears promising, with PPI becoming an integral component of data analysis, hypothesis testing, and decision-making processes. As scientists continue to unlock the mysteries of the natural world, PPI will play a crucial role in ensuring the reliability and validity of AI-driven insights and predictions.

NASA's Juno mission has discovered that Jupiter's atmospheric winds penetrate the planet in cylindrical layers parallel to its spin axis. The most prominent jet observed by Juno sits at 21 degrees north latitude at cloud level but shifts to 13 degrees north latitude at a depth of 1,800 miles (3,000 kilometers) below the clouds.

Juno Mission finds Jupiter's winds in cylindrical layers, revealing internal structure

NASA's Juno mission has made a groundbreaking discovery about Jupiter's internal structure. By studying the planet's atmosphere, scientists have found that the atmospheric winds of Jupiter operate in a cylindrical manner parallel to its spin axis. This discovery provides deeper insights into the gas giant's long-debated internal structure.

Juno, the spacecraft that entered Jupiter's orbit in 2016, has been closely observing Jupiter's atmosphere. Its scientific instruments have delved beneath Jupiter's turbulent cloud deck during its 55 flybys. The mission aims to unravel the mysteries of Jupiter's internal workings.

One of the key ways the Juno mission investigates the planet's interior is through radio science. Scientists track Juno's radio signal as it passes by Jupiter at incredible speeds, using the Deep Space Network antennas. These measurements allow researchers to detect tiny variations in Juno's velocity, caused by fluctuations in Jupiter's gravity field.
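
The sensitivity involved can be appreciated from the basic Doppler relation: a line-of-sight velocity change Δv shifts the received frequency by Δf = f·Δv/c. A back-of-envelope calculation (the downlink frequency below is a typical X-band value, not a quoted mission parameter):

```python
# Back-of-envelope Doppler tracking sensitivity
c = 299_792_458.0          # speed of light, m/s
f_downlink = 8.4e9         # typical X-band downlink frequency, Hz (illustrative)

def doppler_shift(delta_v):
    """One-way frequency shift (Hz) for a line-of-sight velocity change (m/s)."""
    return f_downlink * delta_v / c

# A gravity-field fluctuation perturbing the spacecraft's speed by 0.01 mm/s
dv = 1e-5                  # m/s
print(f"{doppler_shift(dv) * 1e3:.3f} mHz shift for a 0.01 mm/s change")
```

Shifts this small are what make ground-station frequency measurements a probe of sub-millimeter-per-second velocity variations, and hence of Jupiter's gravity field.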

The high-precision data collected by Juno has led to a series of significant discoveries, including the identification of a dilute core deep within Jupiter and the determination of the depth of the planet's zones and belts, which extend approximately 1,860 miles (3,000 kilometers) below the cloud tops.

To pinpoint the location and characteristics of Jupiter's winds, scientists applied mathematical techniques commonly used to model gravitational variations and surface elevations on terrestrial planets such as Earth. Leveraging Juno's precise measurements, they achieved a four-fold increase in resolution over previous models, which relied on data from NASA's earlier pioneering Jupiter missions, Voyager and Galileo.

The study revealed that Jupiter's dominant jet streams, known as zonal flows, extend inward cylindrically from the cloud-level white and red zones and belts, rather than radiating in every direction like a sphere. This finding confirms a two-decade-old model while providing valuable insights into the orientation and structure of these powerful east-west zonal flows.

Ryan Park, a Juno scientist and lead of the mission's gravity science investigation at NASA's Jet Propulsion Laboratory in Southern California, expressed excitement about the application of this constraining technique to outer planets. He stated, "This is the first time such a technique has been applied to an outer planet."

The findings represent a major milestone in our understanding of Jupiter's complex dynamics and internal processes. By unraveling the mysteries of its atmospheric winds, Juno continues to transform our knowledge of this gas giant, shedding light on the mechanisms that drive its extreme weather patterns.

While there is still much to learn about Jupiter, these new revelations bring us one step closer to comprehending the inner workings of this awe-inspiring celestial giant. NASA's Juno mission reinforces the agency's commitment to pushing the boundaries of scientific exploration and unlocking the secrets of our solar system's most mysterious and captivating planets.