Catalogue of Artificial Intelligence Techniques

   


Evolutionary Robotics

Keywords: evolutionary robotics

Categories: Robotics


Author(s): Timothy Skelton

Evolutionary robotics is a technique within Artificial Intelligence for creating autonomous robots automatically, through a “survival of the fittest” process similar to natural evolution; a fitness function is defined to measure how well each robot performs. It draws on tools such as genetic algorithms, dynamical systems, and, most commonly, neural networks, with the aim of producing robots that are robust and simple, yet flexible, like their biological counterparts. During a robot's evolution there should be no human intervention and no a priori information supplied: the robot must instead interact with its environment in order to evolve. One advantage of this is that it does not require in-depth human knowledge or hand-coding, so robots can potentially be produced that function in situations we do not fully understand, such as those arising from unknown emergent properties of interaction with the environment. No training examples are required; the fitness function need only describe success or failure, rather than the precise actions desired.
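As a concrete illustration of what an evolved control system might look like, the sketch below (a hypothetical example, not drawn from any particular ER system) shows a minimal feedforward neural controller mapping sensor readings to motor commands; its flat weight vector plays the role of the chromosome that evolution operates on.

```python
import random

class NeuralController:
    """A minimal feedforward controller: sensor inputs -> motor outputs.

    The flat weight vector is the 'chromosome' a genetic algorithm
    would encode, recombine, and mutate."""

    def __init__(self, n_sensors, n_motors, weights=None):
        self.n_sensors = n_sensors
        self.n_motors = n_motors
        n = (n_sensors + 1) * n_motors  # +1 for a bias input per motor
        self.weights = weights if weights is not None else [
            random.uniform(-1.0, 1.0) for _ in range(n)
        ]

    def act(self, sensors):
        """Compute one motor command per output from the sensor readings."""
        inputs = list(sensors) + [1.0]  # append constant bias input
        motors = []
        for m in range(self.n_motors):
            row = self.weights[m * len(inputs):(m + 1) * len(inputs)]
            s = sum(w * x for w, x in zip(row, inputs))
            motors.append(max(-1.0, min(1.0, s)))  # clamp to motor range
        return motors
```

A robot with two proximity sensors and two wheel motors would use `NeuralController(2, 2)`, giving six weights (including biases) for evolution to search over.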

The process begins with an initial population of random artificial chromosomes, each encoding a robot's control system. A robot running each control system is placed in the environment and left free to interact with it for a period of time, after which its performance is automatically evaluated. Those that best satisfy the fitness function reproduce, swapping parts of their genetic code with one another and undergoing small random mutations, while the less successful are culled. This cycle is repeated over many generations until a robot is found that fully satisfies the fitness function. Artificial evolution requires a large number of evaluations over a large population. When physical robots are used this is known as embodied evolution, and it is very time-consuming. The alternative is to evolve controllers in software, using simulated robots in simulated environments; this allows robots to be destroyed, so fitness can be based on actual survival, lets evolution proceed much faster, and prevents the initial random controllers (which may exhibit potentially harmful behaviour, such as repeatedly crashing into a wall) from damaging a real robot. However, care must be taken in designing the simulation, because evolution freely explores all possibilities it is offered, including any inaccuracies of the simulation. Transferring evolved controllers to physical robots can therefore be very difficult, and remains a major challenge for the ER approach. More recently it has become common to begin evolution in simulation and then continue it on real robots.
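The generational cycle just described can be sketched in outline. This is a minimal, generic genetic algorithm, not any particular ER system: the real-valued chromosome, one-point crossover, selection scheme, and all parameter values are illustrative choices, and the user-supplied fitness function would in practice score a (simulated) robot's behaviour over a trial period.

```python
import random

def evolve(pop_size, chromo_len, fitness, generations,
           mutation_rate=0.05, elite=2):
    """Minimal generational GA: evaluate, select, crossover, mutate.

    `fitness` scores one chromosome, e.g. by decoding it into a robot
    controller and running it in a simulated environment."""
    # Initial population of random chromosomes (real-valued genes here)
    pop = [[random.uniform(-1, 1) for _ in range(chromo_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:elite]  # keep the best few unchanged
        while len(next_pop) < pop_size:
            # Pick two parents from the fitter half of the population
            p1, p2 = random.sample(scored[:pop_size // 2], 2)
            cut = random.randrange(1, chromo_len)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Apply small random mutations to some genes
            child = [g + random.gauss(0, 0.1)
                     if random.random() < mutation_rate else g
                     for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)
```

Note that `fitness` need only measure overall success, for example distance travelled without collisions; evolution never needs a specification of the exact motor commands.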

As well as evolving controllers, it is possible to encode a robot's physical structure and evolve that too. Recently this has been accomplished by formulating a set of easily fabricated modular building units, configurable into an almost unlimited variety of non-trivial robot bodies. There have also been several examples of evolvable hardware circuits being evolved for robot control. Tasks in which there has already been some success include locomotion, obstacle avoidance, and moving toward light sources. More complex tasks, such as those involving multiple robots with some acting as predators and others as prey, have also produced interesting results, and the technique has been applied in other ways: for example, investigating biological neural networks through the study of artificial ones, exploring the intricacies of evolutionary theory, and reproducing psychological phenomena.

Related fields: behaviour-based robotics, robot learning, artificial life.


