Catalogue of Artificial Intelligence Techniques





Keywords: learning, neural networks, supervised learning, training

Categories: Neural Networks

Author(s): Tom Whatmore

Backpropagation is the name given to a particular method of training a neural network. It is a form of supervised learning, in which the network is given a series of training examples, each consisting of inputs and the desired outputs. For supervised learning to work, the network needs a way of improving its ability to respond correctly to new inputs; in other words, it needs an algorithmic process to handle the ‘learning’. Backpropagation is one such process, suitable for multi-layer networks.
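As a concrete illustration of what such training data looks like, here is a minimal supervised training set in Python. The XOR task is our own illustrative choice, not part of the catalogue entry; it is a classic problem that cannot be solved without a hidden layer:

```python
# A minimal supervised training set: each example pairs inputs with the
# desired output. The XOR task is an illustrative assumption; a network
# with no hidden layer cannot learn it, which is why backpropagation
# (a multi-layer training method) is relevant.
training_data = [
    ([0.0, 0.0], 0.0),
    ([0.0, 1.0], 1.0),
    ([1.0, 0.0], 1.0),
    ([1.0, 1.0], 0.0),
]
```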

This method is called backpropagation because it calculates the error at the output first, and then works backwards through the network, distributing the ‘blame’ for this error one layer at a time. At each layer, it works out how far each neuron is from the ideal result and adjusts that neuron’s weights to minimise this local error, usually with the stochastic gradient descent algorithm. The remaining error is then passed back to the preceding layer’s neurons, divided up in proportion to the connection weights: a strongly weighted connection is ‘blamed’ for a higher proportion of the error. This process repeats back through the layers of the network.
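The layer-by-layer procedure described above can be sketched in plain Python. This is a minimal illustration under our own assumptions (a small 2-4-1 sigmoid network trained on XOR, with per-example gradient descent updates); all names here are invented for the example, and a practical implementation would use a numerical library rather than hand-written loops:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: XOR inputs and desired outputs (an assumed example).
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# A 2-4-1 network; the last element of each weight vector is a bias term.
W_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W_o = [random.uniform(-1, 1) for _ in range(5)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W_h]
    o = sigmoid(sum(W_o[i] * h[i] for i in range(4)) + W_o[4])
    return h, o

def train_step(x, target, lr=0.5):
    h, o = forward(x)
    # 1. Calculate the error at the output first.
    delta_o = (o - target) * o * (1 - o)
    # 2. Work backwards: each hidden neuron's share of the blame is
    #    proportional to the weight of its connection to the output.
    for i in range(4):
        delta_h = delta_o * W_o[i] * h[i] * (1 - h[i])
        W_h[i][0] -= lr * delta_h * x[0]
        W_h[i][1] -= lr * delta_h * x[1]
        W_h[i][2] -= lr * delta_h
    # 3. Adjust the output weights by gradient descent.
    for i in range(4):
        W_o[i] -= lr * delta_o * h[i]
    W_o[4] -= lr * delta_o

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

before = loss()
for _ in range(5000):
    for x, t in DATA:
        train_step(x, t)
after = loss()
print(before, after)  # the mean squared error should shrink substantially
```

Note that the hidden-layer deltas are computed before the output weights are updated, so the ‘blame’ is divided up using the weights as they were during the forward pass.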

Backpropagation is fairly efficient, and usually converges on a satisfactorily low error quite quickly, though the speed of convergence depends on the learning rate and the structure of the network.

For a graphical, in-depth explanation of how the algorithm works, see here (external link).


