Catalogue of Artificial Intelligence Techniques


Probably Approximately Correct Learning

Aliases: PAC Learning

Keywords: approximately, computational learning theory, correct, learning, pac, probably, valiant

Categories: Learning


Author(s): Michael Beaton

Originally proposed by Leslie Valiant in 1984, Probably Approximately Correct (PAC) Learning aims to learn an unknown concept by finding a hypothesis that, with high probability, is a good approximation of it.

Generally, the instance space is assumed to be {0,1}^n, although real-valued instances are also possible. Both the concept and the generated hypothesis are subsets of this space. There must also exist a probability distribution D giving the probability of each instance occurring. For a hypothesis h to be a good approximation of a concept c, its error

error(h) = ∑_{x ∈ h Δ c} D(x),

where Δ denotes symmetric difference, must be small.
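As a concrete illustration of this error measure, the following sketch sums D over the symmetric difference of h and c on a tiny instance space. The particular concept, hypothesis, and uniform distribution are illustrative assumptions, not from the source:

```python
# Sketch: computing error(h) = sum of D(x) over x in h Δ c,
# for the instance space {0,1}^3 under a uniform distribution D.
from itertools import product

instances = list(product([0, 1], repeat=3))      # instance space {0,1}^3
D = {x: 1 / len(instances) for x in instances}   # uniform distribution

c = {x for x in instances if x[0] == 1}                 # assumed unknown concept
h = {x for x in instances if x[0] == 1 and x[2] == 1}   # assumed learned hypothesis

# Instances on which h and c disagree, weighted by D.
error = sum(D[x] for x in c.symmetric_difference(h))
print(error)  # → 0.25
```

Here h misclassifies the two instances with x[0] = 1 and x[2] = 0, each carrying probability 1/8, so error(h) = 0.25.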

In the simplest case, a good hypothesis is found by drawing random examples of the concept to be approximated and keeping the 'positive' ones. The learning algorithm then computes and returns a hypothesis consistent with these examples. For learning to count as computationally efficient, this must take at worst polynomial time.
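One classic instance of this scheme is Valiant's elimination algorithm for monotone conjunctions over {0,1}^n: start from the conjunction of all variables and delete any variable set to 0 in a positive example. The target concept, variable count, and sample size below are illustrative assumptions; the hypothesis is computed from the sample in polynomial time:

```python
# Sketch: PAC-learning a monotone conjunction from positive examples
# (Valiant-style elimination). Target concept and sizes are assumptions.
import random

random.seed(0)
n = 6
target = {0, 3}  # assumed unknown concept: x0 AND x3

def label(x):
    """The concept c: true iff every target variable is 1."""
    return all(x[i] == 1 for i in target)

# Draw random instances; keep only positive examples of the concept.
sample = []
while len(sample) < 50:
    x = tuple(random.randint(0, 1) for _ in range(n))
    if label(x):
        sample.append(x)

# Start with the conjunction of all n variables; each positive example
# eliminates the variables it sets to 0, which cannot be in the target.
hypothesis = set(range(n))
for x in sample:
    hypothesis = {i for i in hypothesis if x[i] == 1}

print(sorted(hypothesis))
```

Every target variable is 1 in every positive example, so the target always survives elimination; with enough samples, each spurious variable is eliminated with high probability, matching the "probably approximately correct" guarantee.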

