ELSI researchers use biological evolution to inspire machine learning
As Charles Darwin wrote at the end of his seminal 1859 book On the Origin of Species, "whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved." Scientists have long believed that the diversity and range of forms of life on Earth provide evidence that biological evolution spontaneously innovates in an open-ended way, constantly inventing new things. However, attempts to construct artificial simulations of evolutionary systems tend to run into limits on the complexity and novelty they can produce, a difficulty sometimes referred to as "the problem of open-endedness." Because of it, scientists cannot yet easily build artificial systems that exhibit the richness and diversity of biological systems.
In a new study published in the journal Artificial Life, a research team led by Nicholas Guttenberg and Nathaniel Virgo of the Earth-Life Science Institute (ELSI) at Tokyo Institute of Technology, Japan, and Alexandra Penn of the Centre for Evaluation of Complexity Across the Nexus (CECAN) and CRESS at the University of Surrey, UK, examines the connection between biological evolutionary open-endedness and recent studies in machine learning. By connecting ideas from artificial life and machine learning, the researchers hope it will become possible to combine neural networks with the motivations and ideas of artificial life to create new forms of open-endedness.
One source of open-endedness in evolving biological systems is an "arms race" for survival. For example, faster foxes may evolve to catch faster rabbits, which in turn may evolve to become faster still to escape the faster foxes. This idea is mirrored in recent developments that place neural networks in competition with each other, for example to produce realistic images using generative adversarial networks (GANs), or to discover strategies in games such as Go, where such systems can now easily beat top human players. In biological evolution, factors such as mutation can limit how far an arms race can go. As neural networks have been scaled up, however, no such limitation seems to exist: the networks continue to improve as additional data is fed to them.
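To make the adversarial "arms race" concrete, the sketch below shows a minimal generative adversarial setup in PyTorch. It is purely illustrative and not taken from the paper: the tiny generator and discriminator networks, the one-dimensional toy "data" distribution, and all hyperparameters are assumptions chosen for brevity.

```python
# Minimal GAN sketch (illustrative only, not the networks from the paper):
# a generator and a discriminator are trained in competition, mirroring the
# fox-and-rabbit "arms race" described above. The toy "real data" is a 1-D
# Gaussian; every architectural and hyperparameter choice is an assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

# Generator: maps random noise to a 1-D sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: outputs the probability that a sample is "real".
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Toy target distribution: Gaussian with mean 3.0, std 0.5.
    return torch.randn(n, 1) * 0.5 + 3.0

for step in range(2000):
    # Discriminator step: get better at telling real from fake.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = (bce(discriminator(real), torch.ones(real.size(0), 1)) +
              bce(discriminator(fake), torch.zeros(fake.size(0), 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: get better at fooling the discriminator.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("Generated sample mean:",
      generator(torch.randn(1000, latent_dim)).mean().item())
```

Each side's improvement creates a harder problem for the other, which is the sense in which such systems resemble a co-evolutionary arms race.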
Guttenberg had been studying evolutionary open-endedness since graduate school, but it was only in the last few years that his focus shifted to artificial intelligence and neural networks. Around that time, methods such as GANs were invented, which struck him as very similar to the open-ended co-evolutionary systems he had previously worked on. Suddenly, he saw an opportunity to tear down a barrier between the communities to help make progress on something which had for him been a persistently important and interesting problem.
The researchers showed that while scaling analyses can demonstrate open-endedness in both evolutionary and cognitive contexts, there is a difference between making something which, for example, becomes infinitely good at making cat pictures and something which, having tired of making cat pictures, decides to go on to make music instead. In artificial evolutionary systems, these sorts of major qualitative leaps have to be anticipated by the programmer, who would need to build an artificial world in which music is possible before the "organisms" could decide to become musicians. In systems such as neural networks, concepts such as abstraction are more easily captured, so one can start to imagine ways in which populations of interacting agents could create new problems to be solved among themselves.
This work raises some deep and interesting questions. For example, if the drive for qualitatively different novelty in a computational system arises internally from abstraction, what determines the "meaning" of the novelty artificial systems generate? Machine learning has been shown to sometimes lead to the creation of artificial languages by interacting computational agents, but these languages are still grounded in the task the agents are cooperating to solve. If the agents really do rely on the interactions within the system to drive open-endedness far from whatever was provided as starting material, would it even be possible to recognize or interpret the things that come out, or would one have to be an organism living in such a system in order to understand its richness?
Ultimately, this study suggests it may be possible to make artificial systems that autonomously and continuously invent or discover new things, which would constitute a significant advance in artificial intelligence, and may help in understanding the evolution and origin of life.