from The Free On-line Dictionary of Computing (8 July 2008)
back-propagation

   (Or "backpropagation") A learning {algorithm} for modifying a
   {feed-forward} {neural network} which minimises a continuous
   "{error function}" or "{objective function}."
   Back-propagation is a "{gradient descent}" method of training
   in that it uses gradient information to modify the network
   weights to decrease the value of the error function on
   subsequent tests of the inputs.  Other gradient-based methods
   from {numerical analysis} can be used to train networks more
   efficiently.
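
   As a minimal sketch of one such update, here is a single
   gradient descent step on one weight, written in Python; the
   weight, gradient, and learning rate values are illustrative
   assumptions, not part of this entry:

      # One gradient descent step on a single connection weight.
      # All numeric values here are assumed for illustration.
      w = 0.8               # current connection weight
      dE_dw = 0.25          # gradient of the error function E at w
      eta = 0.1             # learning rate (step size)
      w = w - eta * dE_dw   # step against the gradient; w is now 0.775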

   Back-propagation makes use of a mathematical trick, the chain
   rule of calculus, when the network is simulated on a digital
   computer, yielding in just two traversals of the network (once
   forward, and once back) both the difference between the desired
   and actual output, and the derivatives of this difference with
   respect to all the connection weights.
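
   The following sketch (in Python with the NumPy library) shows
   these two traversals for an assumed one-hidden-layer network
   with sigmoid units and squared error; the layer sizes, inputs,
   and learning rate are illustrative, not part of this entry:

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      rng = np.random.default_rng(0)
      W1 = rng.normal(size=(3, 2))    # input -> hidden weights
      W2 = rng.normal(size=(1, 3))    # hidden -> output weights
      x = np.array([[0.5], [-0.2]])   # one input (column vector)
      t = np.array([[1.0]])           # desired output

      # Forward traversal: compute the actual output.
      h = sigmoid(W1 @ x)             # hidden activations
      y = sigmoid(W2 @ h)             # actual output
      e = y - t                       # actual minus desired output

      # Backward traversal: the chain rule yields the derivatives
      # of the error E = 0.5 * e**2 with respect to every weight.
      delta2 = e * y * (1 - y)                # error signal at the output
      dW2 = delta2 @ h.T                      # dE/dW2
      delta1 = (W2.T @ delta2) * h * (1 - h)  # error signal at hidden layer
      dW1 = delta1 @ x.T                      # dE/dW1

      # One gradient descent step on both weight matrices.
      eta = 0.1
      W2 -= eta * dW2
      W1 -= eta * dW1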