Research Overview
Current research at the Nitschke Lab focuses on several topics, including: Swarm Robotics, Hybrid Artificial Life, Neural Complexity, Evolutionary Design, and Collective Behaviour Transfer.
Swarm robotics work focuses on devising new methods for the evolution of robotic controllers that are robust to damage (e.g., sensor-actuator damage to individual robots, destroyed or disabled robots) and, more generally, to changing environments (e.g., new tasks and changes to the physical space). The key notion is that, given such internal (swarm damage) or external (environmental) changes, the swarm can adapt on-the-fly, re-allocating behavioural roles and re-organising itself so that it continues to function effectively in its environment and accomplish its tasks. Future applications include automated search and rescue in disaster zones, environmental clean-up, and reconnaissance and surveying of remote and hostile environments.
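As an illustration only, the Python sketch below shows one way robustness to internal damage could be folded into an evolutionary fitness measure: a controller genome is scored as its average task performance across randomly sampled damage scenarios (disabled robots), so evolution favours controllers that keep the swarm functioning after failures. The genome encoding, the toy simulate_task stand-in, and all parameters are hypothetical placeholders rather than the lab's actual framework.

```python
import random

SWARM_SIZE = 10
GENOME_LEN = 16  # weights of a hypothetical shared neural controller


def simulate_task(genome, failed_robots):
    """Toy stand-in for a swarm simulation: returns task performance,
    discounted by how many robots are disabled in this scenario."""
    base = sum(w * w for w in genome) / GENOME_LEN        # placeholder task score
    working_fraction = 1.0 - len(failed_robots) / SWARM_SIZE
    return base * working_fraction


def robust_fitness(genome, n_scenarios=5, max_failures=3):
    """Average performance over randomly sampled internal-damage scenarios."""
    scores = []
    for _ in range(n_scenarios):
        n_failed = random.randint(0, max_failures)
        failed = random.sample(range(SWARM_SIZE), n_failed)
        scores.append(simulate_task(genome, failed))
    return sum(scores) / n_scenarios


if __name__ == "__main__":
    random.seed(0)
    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(20)]
    for generation in range(10):
        population.sort(key=robust_fitness, reverse=True)
        parents = population[:10]                         # truncation selection
        offspring = [[w + random.gauss(0, 0.1) for w in random.choice(parents)]
                     for _ in range(10)]
        population = parents + offspring
    best = max(population, key=robust_fitness)
    print("best robust fitness:", round(robust_fitness(best), 3))
```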
Hybrid Artificial Life research focuses on the problem of how to synthesize, on demand, (biological and digital) artificial life problem solvers for given applications. Directed evolution for optimal design implies sufficient knowledge of an organism's underlying evolutionary (fitness) landscape. However, mapping the fitness landscapes of biological organisms has proven to be an intractable problem. One solution is to use computational tools from the Artificial Life (ALIFE) research field. Using ALIFE simulations, one can execute adaptive (fitness landscape) walks in silico to discover new organism designs, where such designs are then verified experimentally in vitro with synthesized versions of the digital organisms. A core approach of this work is deriving suitable fitness functions and fitness landscape mappings for biological data-sets such that adaptive (artificial evolution) walks can be executed to discover novel artificial life designs (“life as it could be”). Future applications include synthetic organisms (artificial life) that clean up environmental pollution, or recycle organic waste into bio-fuels or plant-based food products.
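The following is a minimal sketch, under assumed details, of an in-silico adaptive walk: a greedy hill climb over bit-string genotypes on a toy additive fitness landscape. In the research described above, the fitness function would instead be derived from biological data-sets; here CONTRIB is a random stand-in for such a mapping.

```python
import random

GENOME_LEN = 20
random.seed(1)
# Hypothetical landscape: each (locus, allele) pair gets a random fitness contribution.
CONTRIB = {(i, a): random.random() for i in range(GENOME_LEN) for a in (0, 1)}


def fitness(genotype):
    return sum(CONTRIB[(i, a)] for i, a in enumerate(genotype))


def adaptive_walk(genotype, max_steps=1000):
    """Greedy adaptive walk: accept single-locus mutations only if fitness improves,
    ending at a local optimum of the (toy) landscape."""
    current, f_current = list(genotype), fitness(genotype)
    for _ in range(max_steps):
        locus = random.randrange(GENOME_LEN)
        neighbour = list(current)
        neighbour[locus] = 1 - neighbour[locus]           # flip one allele
        f_neighbour = fitness(neighbour)
        if f_neighbour > f_current:
            current, f_current = neighbour, f_neighbour
    return current, f_current


if __name__ == "__main__":
    start = [random.randint(0, 1) for _ in range(GENOME_LEN)]
    best, f_best = adaptive_walk(start)
    print("start fitness:", round(fitness(start), 3), "walk end:", round(f_best, 3))
```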
Neural Complexity work focuses on using agent-based (computational) and evolutionary-robotics (simulation) experimental platforms to run models of neural networks (controllers) within dynamic environments that change during an agent’s lifetime as well as over successive (agent) generations. One goal is to examine the impact of environmental changes on agent lifetime and generational adaptation, where such adaptation directly results from structural (complexity) changes in the neural controller. For example: what level of neural complexity is best suited for agent adaptation to an environment where food resources change from edible to poisonous across seasons? Future applications include using the computational agent-based and evolutionary robotics experimental platforms to test hypotheses about the evolution of neural complexity in biological organisms, and as a means to automate robot controller design for dynamic environments.
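A minimal sketch of the kind of lifetime-evaluation loop such experiments involve is given below, with made-up parameters: a single-neuron controller forages in an environment where a food type flips between edible and poisonous each season, and a controller that exploits a seasonal cue input outperforms one that ignores it. This is only an illustrative toy, not the lab's experimental platform.

```python
import math
import random


def controller(weights, inputs):
    """Single-neuron controller: eat if the weighted sum of inputs is positive."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(activation) > 0.0


def lifetime_fitness(weights, seasons=4, steps_per_season=50):
    score = 0
    for season in range(seasons):
        edible = season % 2 == 0            # food toggles edible/poisonous each season
        season_cue = 1.0 if edible else -1.0
        for _ in range(steps_per_season):
            food_present = random.random() < 0.5
            inputs = (1.0 if food_present else 0.0, season_cue, 1.0)  # food, cue, bias
            if food_present and controller(weights, inputs):
                score += 1 if edible else -2  # eating poisonous food is costly
    return score


if __name__ == "__main__":
    random.seed(2)
    naive = (1.0, 0.0, 0.0)      # ignores the seasonal cue: eats everything
    adaptive = (1.0, 1.0, -0.5)  # eating is gated by the seasonal cue
    print("naive:", lifetime_fitness(naive), "adaptive:", lifetime_fitness(adaptive))
```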
Evolutionary design research focuses on using evolutionary computation as a design tool that automates the synthesis of novel solutions across various problem domains. One example is the evolutionary design of anti-malware software agents that automatically adapt to (classify and neutralize) dynamically changing malware operating within computational experimental sandboxes. Another example is the evolutionary design of building facades that satisfy multiple user-specified design objectives (e.g., maximizing internal heating in the winter and cooling in the summer, while minimizing material cost). Evolutionary design applications include automated anti-malware agents that roam computer networks, classifying and neutralizing emerging malware, and computational multi-objective product design toolkits that, given a set of user-defined minimisation-maximisation objectives, automate the design of a product that best satisfies the objectives – where such a product can subsequently be built to the simulated specifications.
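As a hedged illustration of the multi-objective design idea, the sketch below applies simple Pareto-dominance selection to a facade design encoded as (window_ratio, wall_thickness), trading off a winter heat-gain surrogate against material cost. Both objective functions are invented toy surrogates, not building-physics models, and the encoding is hypothetical.

```python
import random


def objectives(design):
    window_ratio, wall_thickness = design
    heat_gain = window_ratio * 10.0 - wall_thickness * 0.5    # toy surrogate (maximise)
    material_cost = wall_thickness * 3.0 + window_ratio * 4.0  # toy surrogate (minimise)
    return heat_gain, material_cost


def dominates(a, b):
    """True if design a is at least as good as b on both objectives and better on one."""
    ha, ca = objectives(a)
    hb, cb = objectives(b)
    return (ha >= hb and ca <= cb) and (ha > hb or ca < cb)


def pareto_front(population):
    return [d for d in population
            if not any(dominates(other, d) for other in population if other is not d)]


if __name__ == "__main__":
    random.seed(3)
    population = [(random.uniform(0.1, 0.9), random.uniform(0.1, 1.0))
                  for _ in range(50)]
    for _ in range(30):
        front = pareto_front(population)
        # mutate copies of non-dominated designs to refill the population
        offspring = [(max(0.1, min(0.9, w + random.gauss(0, 0.05))),
                      max(0.1, min(1.0, t + random.gauss(0, 0.05))))
                     for w, t in (random.choice(front) for _ in range(50 - len(front)))]
        population = front + offspring
    for design in pareto_front(population)[:5]:
        print("design:", tuple(round(x, 2) for x in design),
              "objectives:", tuple(round(x, 2) for x in objectives(design)))
```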
Collective behaviour transfer work focuses on deriving novel transfer learning methods (traditionally applied to reinforcement learning agents) that operate within groups of evolutionary agents – i.e., simple agent group behaviours (solving low-difficulty tasks) learned in one generation are passed to successive generations, built upon, and adapted into more complex group behaviours (solving high-difficulty tasks), enabling agents to solve progressively more complex tasks. Future applications include solving incremental or layered (cooperative) tasks in collective (few robots) or swarm robotic (many robots) systems – e.g., first: searching for resources in an unknown environment (low-difficulty task); second: cooperatively gathering resources (medium-difficulty task); and third: using gathered resources to cooperatively construct or repair buildings (high-difficulty task).
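A minimal sketch of layered behaviour transfer, under assumed encodings, is shown below: the population evolved on an easier task seeds evolution on the next, harder task, so later layers build on earlier ones. The three toy fitness functions stand in for the search, gathering, and construction tasks and are not the lab's actual task definitions.

```python
import random

GENOME_LEN = 12


def task_search(g):            # low difficulty: only the first genes matter
    return sum(g[:4])


def task_gather(g):            # medium difficulty: builds on the search genes
    return task_search(g) + sum(g[4:8])


def task_construct(g):         # high difficulty: builds on both previous layers
    return task_gather(g) + sum(g[8:])


def evolve(fitness, population, generations=30, mutation=0.1):
    """Simple truncation-selection evolutionary loop on a fixed-size population."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: len(population) // 2]
        offspring = [[min(1.0, max(0.0, gene + random.gauss(0, mutation)))
                      for gene in random.choice(parents)]
                     for _ in range(len(population) - len(parents))]
        population = parents + offspring
    return population


if __name__ == "__main__":
    random.seed(4)
    population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(40)]
    for task in (task_search, task_gather, task_construct):
        # transfer: each layer starts from the population evolved on the previous one
        population = evolve(task, population)
        best = max(population, key=task)
        print(task.__name__, "best:", round(task(best), 2))
```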