Catalogue of Artificial Intelligence Techniques



Multi-layer Perceptrons

Keywords: back-propagation, learning

Categories: Neural Networks


Author(s): Steve Renals

Multi-layer perceptrons are feed-forward networks built in a layered structure, with an input layer (directly linked to the environment), one or more hidden layers, and an output layer for which there is a desired pattern of activation, corresponding to a classification or prediction. Unlike single-layer perceptrons, which are restricted to linearly separable mappings, multi-layer perceptrons are capable of learning arbitrary input-output mappings. This is achieved through the use of hidden units, which have no direct link to the environment. Single-layer perceptrons may be trained trivially, since each weight in the network relates directly to the input-output mapping under investigation. In multi-layer perceptrons these dependencies are nested, so a means of adjusting the weights leading into hidden units is required. A solution to this credit assignment problem is the back-propagation of error algorithm, which propagates the required weight changes backwards through the network by repeated application of the chain rule of differentiation.
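
As an illustrative sketch (not part of the original entry), the Python code below trains a small multi-layer perceptron by back-propagation. It assumes one sigmoid hidden layer, a squared-error loss, and plain gradient descent; the network size, learning rate, epoch count, and the XOR task are all assumptions chosen for the demonstration.

# Minimal sketch of an MLP trained by back-propagation (illustrative only).
# Assumptions: one sigmoid hidden layer, squared-error loss, batch gradient
# descent on the XOR problem. Names and hyperparameters are hypothetical.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, y, n_hidden=4, lr=1.0, epochs=10000, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input -> hidden
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))           # hidden -> output
    b2 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass: hidden and output activations.
        h = sigmoid(X @ W1 + b1)
        o = sigmoid(h @ W2 + b2)
        # Backward pass: propagate error derivatives through the network
        # with the chain rule. Each layer's delta is the next layer's delta
        # pushed back through the weights, scaled by the local
        # sigmoid derivative a * (1 - a).
        delta_o = (o - y) * o * (1.0 - o)            # output-layer error term
        delta_h = (delta_o @ W2.T) * h * (1.0 - h)   # hidden-layer error term
        # Gradient-descent weight updates.
        W2 -= lr * h.T @ delta_o
        b2 -= lr * delta_o.sum(axis=0)
        W1 -= lr * X.T @ delta_h
        b1 -= lr * delta_h.sum(axis=0)
    return W1, b1, W2, b2

# Usage: learn XOR, a mapping no single-layer perceptron can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
W1, b1, W2, b2 = train_mlp(X, y)
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))

The delta terms make the credit assignment explicit: the weight changes for the hidden layer are obtained by propagating the output-layer error backwards through the output weights, exactly the nested chain-rule dependency described above.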

