Catalogue of Artificial Intelligence Techniques
Keywords: Travelling Salesman Problem, learning, neural networks
Categories: Neural Networks
Author(s): Ashley Walker
Self-organising (or unsupervised learning) algorithms train a neural network to find patterns, categories or correlations in data without a feedback signal from a teacher. In competitive networks, this information is represented by a single output unit or a class of units. The technique is useful for data encoding and compression, combinatorial optimisation (e.g. the Travelling Salesman Problem), function approximation, image processing and statistical analysis. Simple competitive networks contain one layer of output units, each fully connected to a layer of (N-dimensional) inputs by excitatory connections. The response of an output unit is simply the input signal amplified (or diminished) in proportion to the strength of the weighted connections along which it is carried. The output unit most excited by an input is said to `win' and, as a reward, has the efficacy of its weighted connections adjusted so that it encodes a stronger description of the current input. (Self-Organising Feature Mapping algorithms are a special case of competitive learning in which the geometrical arrangement of output units, or groups of units, preserves topological information about the input space.) There is no guarantee that competitive learning will converge to the best solution. Some stability can be imposed by decreasing the size of weight changes over time, thus freezing the learned categories, but this reduces the network's plasticity (its ability to react to new data). Carpenter and Grossberg's Adaptive Resonance Theory seeks a solution to this stability-plasticity dilemma.
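The winner-take-all scheme described above can be sketched in a few lines. This is a minimal illustration, not a canonical implementation: it assumes Euclidean distance is used to select the winning unit, a fixed learning rate, and weight vectors nudged toward each winning input; the function name and parameters are invented for the example.

```python
import random

def train_competitive(data, n_units, epochs=20, lr=0.3, seed=0):
    """Simple competitive (winner-take-all) learning sketch."""
    rng = random.Random(seed)
    # Initialise each output unit's weights to a distinct input vector.
    weights = [list(x) for x in rng.sample(data, n_units)]
    for _ in range(epochs):
        for x in data:
            # The unit whose weight vector best matches the input `wins'.
            winner = min(range(n_units),
                         key=lambda u: sum((w - xi) ** 2
                                           for w, xi in zip(weights[u], x)))
            # Reward: move the winner's weights toward the current input,
            # so it encodes a stronger description of that input.
            weights[winner] = [w + lr * (xi - w)
                               for w, xi in zip(weights[winner], x)]
    return weights

# Two well-separated clusters; each unit should settle near one centre.
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
        (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
weights = train_competitive(data, n_units=2)
```

After training, each unit's weight vector sits near one cluster centre, i.e. each unit has come to represent one category of inputs. Decreasing `lr` over the epochs would freeze these categories, illustrating the stability-plasticity trade-off discussed above.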
- Carpenter, G.A. and Grossberg, S. (1987), A massively parallel architecture for a self-organising neural pattern recognition machine, Computer Vision, Graphics and Image Processing 37, 54--115.