USC Medicine Crump lab develops the Constellations algorithm for understanding head development

Cranial neural crest cells, or CNCCs, contribute to many more body parts than their humble name suggests. These remarkable stem cells not only form most of the skull and facial skeleton in all vertebrates, from fish to humans, but can also generate everything from gills to the cornea. To understand this versatility, scientists in the lab of Gage Crump created a series of atlases charting, over time, the molecular decisions by which CNCCs commit to forming specific tissues in developing zebrafish. Their findings may provide new insights into normal head development, as well as craniofacial birth defects.

Image: Confocal microscopy image of an adult zebrafish head with neural crest-derived cells in red. The Crump lab has used single-cell sequencing to understand how these cells build and repair the head skeleton, with implications for understanding human craniofacial birth defects and improving repair of skeletal tissues. Credit: Peter Fabian

“CNCCs have long fascinated biologists by the incredible diversity of cell types they can generate. By studying this process in the genetically tractable zebrafish, we have identified many of the potential switches that allow CNCCs to form these very different cell types,” said Gage Crump, professor of stem cell biology and regenerative medicine at the Keck School of Medicine of USC.

Led by postdoc Peter Fabian and Ph.D. students Kuo-Chang Tseng, Mathi Thiruppathy, and Claire Arata, the team permanently labeled CNCCs with a red fluorescent protein to keep track of which cell types came from CNCCs throughout the lifetime of zebrafish. They then used a powerful approach known as “single-cell genomics” to identify the complete set of active genes and the organization of the DNA across hundreds of thousands of individual CNCCs. The massive quantity of data generated required the scientists to develop a new computational tool to make sense of it.

“We created a type of computational analysis that we called ‘Constellations,’ because the final visual output of the technique is reminiscent of constellations of stars in the sky,” said Fabian. “In contrast to astrology, our Constellations algorithm really can predict the future of cells and reveal the key genes that likely control their development.”

Through this new bioinformatic approach, the team discovered that CNCCs do not start with all the information required to make the huge diversity of cell types. Instead, only after they disperse throughout the embryo do CNCCs begin reorganizing their genetic material in preparation for becoming specific tissues. Constellations accurately identified genetic signs that point to these specific destinies for CNCCs. Real-life experiments confirmed that Constellations correctly pinpointed the role of a family of “FOX” genes in facial cartilage formation and a previously unappreciated function for “GATA” genes in the formation of gill respiratory cell types that allow fish to breathe.

“By conducting one of the most comprehensive single-cell studies of a vertebrate cell population to date, we not only gained significant insights into the development of the vertebrate head but also created a broadly useful computational tool for studying the development and regeneration of organ systems throughout the body,” said Crump.

University of Tokyo researchers find public trust in AI varies greatly depending on the app

Prompted by the increasing prominence of artificial intelligence (AI) in society, University of Tokyo researchers investigated public attitudes toward the ethics of AI. Their findings quantify how different demographics and ethical scenarios affect these attitudes. As part of this study, the team developed an octagonal visual metric, analogous to a rating system, which could be useful to AI researchers who wish to know how their work may be perceived by the public.

Image: An example octagon chart showing one respondent’s ratings of the eight themes for each of the four ethical scenarios, each covering a different application of AI. © 2021 Yokoyama et al.

Many people feel the rapid development of technology often outpaces that of the social structures that implicitly guide and regulate it, such as law or ethics. AI in particular exemplifies this as it has become so pervasive in everyday life for so many, seemingly overnight. This proliferation, coupled with the relative complexity of AI compared to more familiar technology, can breed fear and mistrust of this key component of modern living. Who distrusts AI and in what ways are matters that would be useful to know for developers and regulators of AI technology, but these kinds of questions are not easy to quantify.

Researchers at the University of Tokyo, led by Professor Hiromi Yokoyama from the Kavli Institute for the Physics and Mathematics of the Universe, set out to quantify public attitudes toward ethical issues around AI. Through an analysis of surveys, the team sought to answer two questions in particular: how attitudes change depending on the scenario presented to a respondent, and how a respondent’s demographics affect those attitudes.

Ethics cannot be quantified directly, so to measure attitudes toward the ethics of AI, the team employed eight themes common to many AI applications that raise ethical questions: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and promotion of human values. These, which the group has termed “octagon measurements,” were inspired by a 2020 paper by Harvard University researcher Jessica Fjeld and her team.

Survey respondents were given a series of four scenarios to judge according to these eight criteria. Each scenario looked at a different application of AI. They were: AI-generated art, customer service AI, autonomous weapons, and crime prediction.
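As a concrete sketch, the survey design amounts to a small ratings table: eight theme scores per scenario, which an octagon chart then displays radially. The scores below are invented for illustration and are not the study's data:

```python
# Toy illustration (not the authors' code): representing "octagon
# measurements" -- eight ethics themes rated per AI scenario -- and
# summarizing them the way an octagon chart would.
THEMES = [
    "privacy", "accountability", "safety and security",
    "transparency and explainability", "fairness and non-discrimination",
    "human control of technology", "professional responsibility",
    "promotion of human values",
]

# Hypothetical ratings from one respondent (1 = low concern, 5 = high),
# one dict per scenario; these numbers are made up for illustration.
ratings = {
    "AI-generated art":   dict(zip(THEMES, [2, 2, 1, 3, 2, 2, 3, 2])),
    "autonomous weapons": dict(zip(THEMES, [4, 5, 5, 4, 4, 5, 5, 5])),
}

def mean_concern(scenario):
    """Average of the eight theme scores for one scenario."""
    scores = ratings[scenario].values()
    return sum(scores) / len(scores)

for scenario in ratings:
    print(f"{scenario}: mean concern {mean_concern(scenario):.2f}")
```

Averaging across respondents per theme, rather than per scenario as here, would give the radial values plotted on one octagon.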

The survey respondents also gave the researchers information about themselves such as age, gender, occupation, and level of education, as well as a measure of their level of interest in science and technology by way of an additional set of questions. This information was essential for the researchers to see what characteristics of people would correspond to certain attitudes.

“Prior studies have shown that risk is perceived more negatively by women, older people, and those with more subject knowledge. I was expecting to see something different in this survey given how commonplace AI has become, but surprisingly we saw similar trends here,” said Yokoyama. “Something we saw that was expected, however, was how the different scenarios were perceived, with the idea of AI weapons being met with far more skepticism than the other three scenarios.”

Image: The octagon measurements, the eight themes common to a wide range of AI scenarios on which the public has pressing ethical concerns. © 2021 Yokoyama et al.

The team hopes the results could lead to the creation of a sort of universal scale to measure and compare ethical issues around AI. This survey was limited to Japan, but the team has already begun gathering data in several other countries.

“With a universal scale, researchers, developers, and regulators could better measure the acceptance of specific AI applications or impacts and act accordingly,” said Assistant Professor Tilman Hartwig. “One thing I discovered while developing the scenarios and questionnaire is that many topics within AI require significant explanation, more so than we realized. This goes to show there is a huge gap between perception and reality when it comes to AI.”

Seeking a way to prevent AI audio models from being fooled

Warnings have emerged about the unreliability of the metrics used to detect whether an audio perturbation designed to fool AI models can be perceived by humans

Artificial intelligence (AI) is increasingly based on machine learning models trained using large datasets. Likewise, human-computer interaction is increasingly dependent on speech communication, mainly due to the remarkable performance of machine learning models in speech recognition tasks.

Image: Jon Vadillo in his office at the University of the Basque Country. Credit: Nagore Iraola / UPV/EHU

However, these models can be fooled by "adversarial" examples, in other words, inputs intentionally perturbed to produce a wrong prediction without the changes being noticed by humans. "Suppose we have a model that classifies audio (e.g. voice command recognition) and we want to deceive it, in other words, generate a perturbation that maliciously prevents the model from working properly. If a signal is heard properly, a person can notice whether a signal says 'yes', for example. When we add an adversarial perturbation we will still hear 'yes', but the model will start to hear 'no', or 'turn right' instead of left or any other command we don't want to execute," explained Jon Vadillo, a researcher in the UPV/EHU’s Department of Computer Science and Artificial Intelligence.
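The idea Vadillo describes can be sketched with a toy example. The "classifier" below is a deliberately fragile stand-in, not a real speech-recognition model: it thresholds the mean amplitude of a signal, and a perturbation too small to change what a listener would hear is enough to flip its prediction.

```python
# Toy illustration only (no real speech model): a fragile "classifier"
# that thresholds the mean amplitude of an audio signal, and a small
# adversarial perturbation that flips its prediction.
def classify(signal):
    # Pretend "yes" means the mean amplitude exceeds the threshold.
    return "yes" if sum(signal) / len(signal) > 0.10 else "no"

clean = [0.12, 0.11, 0.13, 0.12]        # classified as "yes"
delta = [-0.03] * len(clean)            # small, uniform perturbation
adversarial = [s + d for s, d in zip(clean, delta)]

print(classify(clean))        # "yes"
print(classify(adversarial))  # "no" -- the model is fooled
```

A real attack perturbs thousands of waveform samples against a deep network, but the principle is the same: a change negligible to a human listener crosses the model's decision boundary.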

This could have "very serious implications at the level of applying these technologies to real-world or highly sensitive problems", added Vadillo. It remains unclear why this happens. Why would a model that behaves so intelligently suddenly stop working properly when it receives even slightly altered signals?

Deceiving the model by using an undetectable perturbation

“It is important to know whether a model or a program has vulnerabilities," added the researcher from the Faculty of Informatics. “Firstly, we investigate these vulnerabilities, to check that they exist and because that is the first step in eventually fixing them.” While much research has focused on developing new techniques for generating adversarial perturbations, less attention has been paid to the factors that determine whether these perturbations can be perceived by humans. This issue matters, because the proposed adversarial perturbation strategies only pose a threat if the perturbations cannot be detected by humans.

This study has investigated the extent to which the distortion metrics proposed in the literature for audio adversarial examples can reliably measure the human perception of perturbations. In an experiment in which 36 people evaluated adversarial examples or audio perturbations according to various factors, the researchers showed that "the metrics that are being used by convention in the literature are not completely robust or reliable. In other words, they do not adequately represent the auditory perception of humans; they may tell you that a perturbation cannot be detected, but then when we evaluate it with humans, it turns out to be detectable. So we want to issue a warning that due to the lack of reliability of these metrics, the study of these audio attacks is not being conducted very well," said the researcher.
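One common family of distortion metrics in this literature expresses the perturbation's peak amplitude relative to the signal's, in decibels: the more negative the value, the quieter the perturbation is supposed to be. A minimal sketch follows; the waveform numbers are made up, and this is only one of several metrics of the kind the study scrutinizes:

```python
import math

# Sketch of a peak-amplitude distortion metric from the audio
# adversarial literature: relative loudness of the perturbation,
#   20 * log10(max|delta| / max|x|)  [decibels].
# The study's point is that such metrics do not always match what
# human listeners actually detect.
def relative_loudness_db(signal, perturbation):
    peak_signal = max(abs(v) for v in signal)
    peak_pert = max(abs(v) for v in perturbation)
    return 20 * math.log10(peak_pert / peak_signal)

signal = [0.5, -0.8, 0.3, 0.6]               # hypothetical waveform
perturbation = [0.008, -0.005, 0.006, 0.004]  # hypothetical perturbation
print(f"{relative_loudness_db(signal, perturbation):.1f} dB")
```

A metric like this can report the same number for a perturbation hidden in a loud passage and one sitting in near-silence, which is exactly the kind of mismatch with human perception the experiment with 36 listeners exposed.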

In addition, the researchers have proposed a more robust evaluation method that is the outcome of the "analysis of certain properties or factors in the audio that are relevant when assessing detectability, for example, the parts of the audio in which a perturbation is most detectable". Even so, "this problem remains open because it is very difficult to come up with a mathematical metric that is capable of modeling auditory perception. Depending on the type of audio signal, different metrics will probably be required or different factors will need to be considered. Achieving general audio metrics that are representative is a complex task," concluded Vadillo.

MIT student Mathews streamlines turbulence theory by combining ML, physics to model complex plasma phenomena

To make fusion energy a viable resource for the world’s energy grid, researchers need to understand the turbulent motion of plasmas: a mix of ions and electrons swirling around in reactor vessels. The plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for fusion devices to produce significant gains in net energy, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is just centimeters away from the much cooler solid walls of the vessel.

Abhilash Mathews, a Ph.D. candidate in the Department of Nuclear Science and Engineering working at MIT’s Plasma Science and Fusion Center (PSFC), believes this plasma edge to be a particularly rich source of unanswered questions. A turbulent boundary is central to understanding plasma confinement, fueling, and the potentially damaging heat fluxes that can strike material surfaces, all factors that impact fusion reactor designs.

Image: Two-dimensional pressure fluctuations visualized within a larger three-dimensional simulation of a magnetically confined fusion plasma. With recent advances in machine-learning techniques, these types of partial observations provide new ways to test reduced turbulence models in both theory and experiment. Credit: Plasma Science and Fusion Center

To better understand edge conditions, scientists focus on modeling turbulence at this boundary using numerical simulations that help predict the plasma's behavior. However, “first principles” simulations of this region are among the most challenging and time-consuming computations in fusion research. Progress could be accelerated if researchers could develop “reduced” models that run much faster, but with quantified levels of accuracy.

For decades, tokamak physicists have regularly used a reduced “two-fluid theory” rather than higher-fidelity models to simulate boundary plasmas in experiments, despite uncertainty about accuracy. In a pair of recent publications, Mathews begins directly testing the accuracy of this reduced plasma turbulence model in a new way: he combines physics with machine learning.

“A successful theory is supposed to predict what you're going to observe,” explains Mathews, “for example, the temperature, the density, the electric potential, the flows. And it’s the relationships between these variables that fundamentally define a turbulence theory. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure.”

Mathews employs a novel deep-learning technique that uses artificial neural networks to build representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to compute the turbulent electric field from an electron pressure fluctuation in the plasma, consistent with the reduced fluid theory. Models commonly used to relate the electric field to pressure break down when applied to turbulent plasmas, but this one remains robust even for noisy pressure measurements.
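The general recipe behind physics-informed learning can be illustrated with a schematic loss function: a data term that fits sparse observations plus a physics term that penalizes the residual of a governing equation. Everything below is a placeholder for illustration; the linear relation E = -k · dp/dx and all numbers are invented and are not the reduced two-fluid theory Mathews actually tests:

```python
# Schematic sketch (not Mathews' model): a physics-informed loss.
# A candidate electric-field profile is scored by (a) its mismatch with
# a few sparse "observations" and (b) the residual of a placeholder
# governing equation, E + k * dp/dx = 0, via finite differences.
K = 2.0                                 # placeholder coupling constant
DX = 0.1                                # grid spacing

pressure = [1.0, 1.2, 1.5, 1.9, 2.4]    # hypothetical p(x) samples
observed_E = [-5.0, -9.0]               # sparse "measurements" of E
observed_at = [1, 3]                    # grid indices of those data

def physics_informed_loss(candidate_E):
    # Data term: match the sparse observations.
    data = sum((candidate_E[i] - e) ** 2
               for i, e in zip(observed_at, observed_E))
    # Physics term: squared residual of the governing equation at
    # interior grid points, using central differences for dp/dx.
    physics = 0.0
    for i in range(1, len(pressure) - 1):
        dpdx = (pressure[i + 1] - pressure[i - 1]) / (2 * DX)
        physics += (candidate_E[i] + K * dpdx) ** 2
    return data + physics

# A field consistent with E = -k * dp/dx scores far lower than a
# flat guess that ignores the physics.
consistent = [-4.0, -5.0, -7.0, -9.0, -10.0]
flat = [0.0] * 5
print(physics_informed_loss(consistent), physics_informed_loss(flat))
```

In the real work, a neural network plays the role of the candidate field and the residual comes from the reduced two-fluid equations, but the structure of the objective, observation fit plus equation residual, is the same.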

Then, Mathews further investigates this connection, contrasting it against higher-fidelity turbulence simulations. This first-of-its-kind comparison of turbulence across models has previously been difficult — if not impossible — to evaluate precisely. Mathews finds that in plasmas relevant to existing fusion devices, the reduced fluid model's predicted turbulent fields are consistent with high-fidelity calculations. In this sense, the reduced turbulence theory works. But to fully validate it, “one should check every connection between every variable,” says Mathews.

Mathews’ advisor, Principal Research Scientist Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. “This work shows that, under the right set of conditions, physics-informed machine-learning techniques can paint a very full picture of the rapidly fluctuating edge plasma, beginning from a limited set of observations. I’m excited to see how we can apply this to new experiments, in which we essentially never observe every quantity we want.”

These physics-informed deep-learning methods pave new ways in testing old theories and expanding what can be observed from new experiments. David Hatch, a research scientist at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.

“Abhi’s work is a major achievement with the potential for broad application,” he says. “For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby augmenting the information provided by a given diagnostic. The technique also opens new strategies for model validation.”

Mathews sees exciting research ahead.

“Translating these techniques into fusion experiments for real edge plasmas is one goal we have insight into, and work is currently underway,” he says. “But this is just the beginning.”

Terahertz light-driven spin-lattice control can open up a new path to faster storage

An international team of researchers from the University of Cologne (Germany), Radboud University Nijmegen (The Netherlands), the Ioffe Institute, and the Prokhorov General Physics Institute (Russia) has discovered a new mechanism to control spin-lattice interaction using ultrashort terahertz (THz) pulses (1 terahertz is 10¹² hertz). This mechanism can open up new and elegant ways to control the propagation of spin waves, an important step toward conceptually new data-processing technologies. The results have been published in a recent Science paper entitled ‘Terahertz light-driven coupling of antiferromagnetic spins to lattice’.

Currently, magnetic recording dominates data storage technology. It is estimated that soon more than 7% of the world’s energy production will be consumed by data centers. Hence there is an urgent demand for new technologies that process and store data through ultrafast, energy-efficient processes.

Spin-lattice interaction plays a decisive role in magnetic recording processes. A spin is the elementary magnetic moment of an electron, and controlling its orientation (up or down) is the basis of modern binary computing. The scientists used special antiferromagnets in their study: materials in which the ordered spins of electrons align in a regular pattern, with neighboring spins pointing in opposite directions. The collective motions of spins in these materials, so-called spin waves, are typically 10 times faster than their counterparts in traditional ferromagnetic materials. In contrast to electrons, such spin waves barely interact with the crystal lattice and can thus propagate over macroscopic distances without losses.

In the future, spintronics could replace traditional electronics, with spin waves serving as carriers of information in a magnetic material. This brings the potential for much faster and more efficient data processing. At the same time, the weak interaction makes control over the propagation of the spin waves challenging. The scientists therefore ‘drive’ the spin-lattice coupling by applying an ultrashort terahertz pulse.

Dr. Evgeny Mashkovich, Senior Researcher at the Optical Condensed Matter Science group at the University of Cologne’s Institute for Experimental Physics said: "We showed that we can now control the interaction between lattice and spin waves and make it a strong interaction. I believe that this discovery is an important step towards conceptually new technologies for ultra-fast data processing and efficient data storage in the future."