Catalogue of Artificial Intelligence Techniques

   


Supervised Learning

Keywords: AQ11, back-propagation, learning, supervised learning

Categories: Learning


Author(s): Jeremy Wyatt

In supervised learning the learner is presented with pairs of input-output patterns. Its task is to infer a more compact representation of the mapping between these sets. Supervised learning methods are usually incremental (with the notable exception of ID3): each input-output pair is presented in turn, and the whole set of pairs, called the training set, may be presented several times before learning is complete. Compare this technique with Unsupervised Learning.

An incremental learning procedure may get stuck by converging to a fundamentally incorrect approximation to the true function. This phenomenon is sometimes described as becoming stuck in a local minimum within the space of possible approximations to the function.

Some procedures use gradient descent on an error surface (e.g., the delta rule and its generalisation, back-propagation). Others start with a space of possible descriptions for a concept and reduce that space according to the training pairs presented (Focussing, AQ11). ID3 uses information theory to build the most compact representation of the mapping it can, in the form of a decision tree.
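As a concrete illustration of gradient descent on an error surface, the following Python sketch applies the delta rule to a single linear unit. The learning rate, epoch count and example pairs are invented for illustration and are not part of the catalogue entry.

    # A minimal sketch of the delta rule for a single linear unit.
    # The data, learning rate and epoch count here are hypothetical.
    def train_delta_rule(training_set, n_inputs, lr=0.1, epochs=50):
        """Incrementally fit weights w so that w . x approximates t."""
        w = [0.0] * (n_inputs + 1)          # one extra weight for the bias
        for _ in range(epochs):             # training set presented several times
            for x, t in training_set:       # one input-output pair at a time
                x = [1.0] + list(x)         # prepend the constant bias input
                y = sum(wi * xi for wi, xi in zip(w, x))
                # Gradient descent step on the squared-error surface:
                # w_i <- w_i + lr * (t - y) * x_i
                w = [wi + lr * (t - y) * xi for wi, xi in zip(w, x)]
        return w

    # Usage: learn the mapping t = 2*x1 - x2 from four example pairs.
    pairs = [((1, 0), 2), ((0, 1), -1), ((1, 1), 1), ((2, 1), 3)]
    print(train_delta_rule(pairs, n_inputs=2))

Each pass over the training set nudges the weights downhill on the squared-error surface; for a linearly realisable target like this one the procedure converges, but on harder error surfaces the same incremental updates can settle into a local minimum, as noted above.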

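The reduction of a description space (as in Focussing or AQ11) can be sketched in the same spirit: enumerate candidate concept descriptions, then discard any that misclassify a presented pair. The conjunctive encoding with a "?" wildcard below is a simplification invented for this example; neither algorithm is literally this naive enumeration.

    # A toy illustration of reducing a space of concept descriptions.
    # The attribute encoding and wildcard convention are hypothetical.
    from itertools import product

    def matches(hyp, example):
        """A hypothesis matches if every non-wildcard attribute agrees."""
        return all(h == "?" or h == v for h, v in zip(hyp, example))

    def reduce_space(attribute_values, training_pairs):
        """Enumerate all conjunctive hypotheses, then discard any that
        misclassify a presented input-output pair."""
        space = list(product(*[vals + ["?"] for vals in attribute_values]))
        for example, label in training_pairs:   # pairs presented in turn
            space = [h for h in space if matches(h, example) == label]
        return space

    # Usage: two attributes, pairs labelled True/False for the target concept.
    attrs = [["red", "blue"], ["small", "large"]]
    pairs = [(("red", "small"), True), (("blue", "small"), False)]
    print(reduce_space(attrs, pairs))   # only "red"-requiring hypotheses survive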

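ID3's attribute choice is information-theoretic: it splits on the attribute whose outcomes yield the greatest reduction in class entropy, which is what drives the compactness of the resulting decision tree. The weather-style dataset below is hypothetical.

    # A sketch of the information-gain calculation at the heart of ID3.
    # The dataset and attribute values are invented for illustration.
    from collections import Counter
    from math import log2

    def entropy(labels):
        counts = Counter(labels)
        total = len(labels)
        return -sum(c / total * log2(c / total) for c in counts.values())

    def information_gain(rows, labels, attr_index):
        """Expected reduction in entropy from splitting on one attribute."""
        base = entropy(labels)
        remainder = 0.0
        for value in set(r[attr_index] for r in rows):
            subset = [lab for r, lab in zip(rows, labels) if r[attr_index] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        return base - remainder

    # Usage: pick the attribute that best discriminates the output class.
    rows = [("sunny", "hot"), ("sunny", "mild"), ("rainy", "mild"), ("rainy", "hot")]
    labels = ["no", "no", "yes", "yes"]
    best = max(range(2), key=lambda i: information_gain(rows, labels, i))
    print("split on attribute", best)   # attribute 0 separates the classes perfectly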