Catalogue of Artificial Intelligence Techniques
Aliases: Neural Networks, Neuro-computation, Parallel Distributed Processing
Keywords: learning, reinforcement learning
Categories: Computer Architecture, Neural Networks
Author(s): Ashley Walker
Connectionism is an alternative computing paradigm to the von Neumann model. It is inspired by ideas from neuroscience and draws much of its methodology from statistical physics. Connectionist architectures exploit the biologically proven principle that sophisticated computation can arise from a network of appropriately connected simple processing units (e.g., neurons), each of which receives simple excitatory and inhibitory messages from other units and applies some function to these inputs in order to calculate its output. Some of the earliest work in the field is due to McCulloch and Pitts, who proposed a simple model of a neuron as a binary threshold unit. They proved that a synchronous assembly of these artificial neurons is capable, in principle, of universal computation.

The operation of a particular network depends upon the pattern of interconnectivity (i.e., the `synaptic strengths' or `weights') between the elemental units. Although these connections may be programmed directly from model equations, much of the utility of connectionist networks lies in their ability to learn, from presentations of example data, the connections necessary to solve a problem. Moreover, networks can generalise over examples so as to interpolate and extrapolate relationships in the data which are not explicitly represented. Learning in connectionist networks may be guided by a supervisor or teacher, in which case a direct comparison is made between the network's response to input data and a known correct response. (Reinforcement Learning is a specialised case of supervised learning in which the feedback signal describes only whether each output is correct or incorrect.) However, if the learning goal is not defined, Unsupervised Learning algorithms can be used to discover categories from the correlations of patterns in the input data.
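The ideas above can be illustrated with a minimal sketch: a McCulloch-Pitts binary threshold unit, together with a perceptron-style supervised learning rule that adjusts the weights and threshold by comparing the unit's output with a known correct response. The function names, learning rate, and epoch count are illustrative assumptions, not part of the catalogue entry.

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts unit: fires (1) if the weighted sum of its
    excitatory/inhibitory inputs reaches the threshold, else 0."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

def train_perceptron(examples, n_inputs, rate=0.1, epochs=20):
    """Supervised learning: compare the unit's response with the known
    correct response and nudge the weights in proportion to the error.
    (rate and epochs are illustrative choices.)"""
    weights = [0.0] * n_inputs
    threshold = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            output = threshold_unit(inputs, weights, threshold)
            error = target - output
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            threshold -= rate * error  # an error also shifts the firing threshold
    return weights, threshold

# Learn logical AND from examples (a linearly separable problem).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, threshold = train_perceptron(data, n_inputs=2)
```

After training, the unit reproduces the AND function; a single threshold unit can only learn such linearly separable mappings, which is one motivation for networks of many connected units.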
- Hertz, J., Krogh, A. and Palmer, R.G., Introduction to the Theory of Neural Computation, Addison-Wesley, 1991.