Backpropagation is a technique used for training artificial neural networks. It is useful only for feed-forward networks (networks that have no feedback, or simply, that have no connections that loop).
Backpropagation also requires that the transfer function used for the neurons be differentiable.
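As an illustration of such a transfer function, here is a minimal sketch (in Python) of the logistic sigmoid, a common differentiable choice, along with its derivative, which backpropagation needs for the error calculations:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: a smooth, differentiable transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """The sigmoid's derivative can be written in terms of the sigmoid itself:
    s'(x) = s(x) * (1 - s(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)
```

Other differentiable functions (such as tanh) work equally well; the key requirement is only that the derivative exists everywhere.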
The gist of the technique is as follows:
- Present a training sample to the neural network.
- Compare the network's output to the required output for that sample, and calculate the error in each output neuron.
- For each neuron, use the error, the actual output, and a scaling factor to calculate how much higher or lower its output should have been. This is the local error.
- Using the neuron's weights on its incoming connections, assign "blame" for the local error to the neurons at the previous level.
- Repeat the steps above on the neurons at the previous level, using each one's "blame" as its error.
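The steps above can be sketched in Python for a small network with one hidden layer. This is an illustrative sketch, not a reference implementation; the network shape, the squared-error measure, and the learning rate `lr` are assumptions for the example:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, w_hidden, w_out):
    """Present a sample: compute hidden activations, then output activations."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    out = [sigmoid(sum(w * h for w, h in zip(ws, hidden))) for ws in w_out]
    return hidden, out

def backprop_step(x, target, w_hidden, w_out, lr=0.5):
    """One pass of the algorithm for a single training sample."""
    hidden, out = forward(x, w_hidden, w_out)

    # Local error at each output neuron: the difference from the target,
    # scaled by the sigmoid's derivative o * (1 - o).
    delta_out = [(t - o) * o * (1.0 - o) for t, o in zip(target, out)]

    # Assign "blame" to each hidden neuron through the weights that
    # connect it to the output layer, then scale by its own derivative.
    delta_hidden = []
    for j, h in enumerate(hidden):
        blame = sum(w_out[k][j] * delta_out[k] for k in range(len(out)))
        delta_hidden.append(blame * h * (1.0 - h))

    # Adjust the weights, layer by layer, in proportion to the local errors.
    for k, ws in enumerate(w_out):
        for j in range(len(ws)):
            ws[j] += lr * delta_out[k] * hidden[j]
    for j, ws in enumerate(w_hidden):
        for i in range(len(ws)):
            ws[i] += lr * delta_hidden[j] * x[i]
```

In practice this step is repeated over many samples (and usually with bias terms added to each neuron) until the error is acceptably small.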
As the algorithm's name implies, the errors (and therefore the learning) propagate backwards from the output nodes to the inner nodes.
Backpropagation usually allows quick convergence to a local minimum of the error in the kinds of networks to which it is suited.