# Catalogue of Artificial Intelligence Techniques

## Probably Approximately Correct Learning

**Aliases:**
PAC Learning

**Keywords:**
approximately, computational learning theory, correct, learning, pac, probably, valiant

### Categories: Learning

Author(s): **Michael Beaton**

Originally proposed by Leslie Valiant in 1984, Probably Approximately Correct (PAC) Learning is a framework for learning an unknown concept by finding a hypothesis that, with high probability, is a good approximation of it.

Generally, the instance space is assumed to be {0,1}^n, although real-valued instances are also possible. Both the target concept and the generated hypothesis are subsets of this instance space. There must also exist a probability distribution D giving the probability of each instance occurring. A hypothesis h is a good approximation of a concept c when the error

$error(h) = \sum_{x \in h \,\Delta\, c} D(x)$,

where Δ denotes symmetric difference, is small.
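When the instance space is small enough to enumerate, this error can be computed directly as the probability mass of the symmetric difference. A minimal sketch in Python; the particular distribution `D`, concept `c`, and hypothesis `h` below are illustrative assumptions, not from the source:

```python
# Sketch: error(h) = sum of D(x) over the symmetric difference h Δ c,
# shown on a tiny instance space {0,1}^2. D, c and h are made up for
# the demo.

def pac_error(h, c, D):
    """Probability mass (under D) of instances where h and c disagree."""
    return sum(p for x, p in D.items() if (x in h) != (x in c))

# Instance space {0,1}^2 with an explicit distribution D.
D = {(0, 0): 0.4, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.1}
c = {(0, 1), (1, 1)}   # target concept: instances whose second bit is 1
h = {(0, 1), (1, 0)}   # hypothesis: disagrees with c on (1,0) and (1,1)

print(pac_error(h, c, D))   # mass of {(1,0), (1,1)} = 0.2 + 0.1
```

Here h Δ c = {(1,0), (1,1)}, so the error is 0.3: the probability that a randomly drawn instance is classified differently by h and c.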

In the simplest case, a good hypothesis is found by drawing random 'positive' examples of the target concept according to D. The learning algorithm then computes and returns a hypothesis consistent with these examples. The concept class is PAC-learnable when such an algorithm, with probability at least 1 − δ, returns a hypothesis whose error is at most ε, using time and examples polynomial in 1/ε, 1/δ and the size of the problem, and so it is computationally efficient.
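One concrete instance of this simple case, treated in Valiant's original paper, is learning a monotone conjunction of Boolean variables from positive examples alone: start with the conjunction of all n variables and delete any variable that is 0 in some positive example. The sketch below follows that scheme; the target concept and the example sampler are made-up assumptions for the demo:

```python
import random

def learn_conjunction(positive_examples, n):
    """Valiant-style learner for monotone conjunctions over {0,1}^n:
    keep variable i in the hypothesis only if it equals 1 in every
    positive example seen."""
    kept = set(range(n))
    for x in positive_examples:
        kept = {i for i in kept if x[i] == 1}
    return kept  # hypothesis: the conjunction of the variables in `kept`

# Demo with an assumed target concept x0 AND x2 over {0,1}^4.
random.seed(0)
target = {0, 2}

def sample_positive(n=4):
    # Draw a random instance, then force it to satisfy the target
    # conjunction, so every sample is a positive example.
    x = [random.randint(0, 1) for _ in range(n)]
    for i in target:
        x[i] = 1
    return tuple(x)

examples = [sample_positive() for _ in range(50)]
hypothesis = learn_conjunction(examples, 4)
print(sorted(hypothesis))  # converges to [0, 2] given enough samples
```

Because the target variables are 1 in every positive example, the learned conjunction always includes them, so the hypothesis can only err by rejecting positives; with enough samples, irrelevant variables are eliminated with high probability, matching the "probably approximately correct" guarantee.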

### References:

- Leslie Valiant, *A Theory of the Learnable*, 1984.
- David Haussler, *Probably Approximately Correct Learning*.
