Almeida–Pineda recurrent backpropagation

Almeida–Pineda recurrent backpropagation is an extension to the backpropagation algorithm that is applicable to recurrent neural networks. It is a type of supervised learning. It was described somewhat cryptically in Richard Feynman's senior thesis, and rediscovered independently in the context of artificial neural networks by both Fernando Pineda and Luis B. Almeida.[1][2][3]

A recurrent neural network for this algorithm consists of some input units, some output units and possibly some hidden units.

For a given set of (input, target) state pairs, the network is trained to settle into a stable activation state in which the output units take on the target state, with the corresponding input state clamped on the input units.
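Concretely, training alternates two relaxations for each (input, target) pair: the activations are first relaxed to a fixed point, an error is formed at the output units, a second linear relaxation propagates that error backwards through the settled network, and the weights are then updated by gradient descent. The NumPy sketch below illustrates this under assumed choices (a logistic activation, external input added to the net input rather than hard-clamping the input units, and illustrative names such as relax, ap_gradient_step and the learning rate eta); it follows the gradient formulas of the Pineda and Almeida papers but is not their reference implementation.

    # Minimal sketch of Almeida–Pineda recurrent backpropagation (assumptions:
    # logistic activation, input presented as an external drive I, illustrative
    # hyperparameters). Not a reference implementation.
    import numpy as np

    def sigma(u):
        return 1.0 / (1.0 + np.exp(-u))       # logistic activation

    def sigma_prime(u):
        s = sigma(u)
        return s * (1.0 - s)

    def relax(update, state, tol=1e-6, max_steps=10_000):
        """Iterate state <- update(state) until it settles into a fixed point."""
        for _ in range(max_steps):
            new_state = update(state)
            if np.max(np.abs(new_state - state)) < tol:
                return new_state
            state = new_state
        return state

    def ap_gradient_step(W, I, target, output_idx, eta=0.1):
        """One supervised update for a single (input, target) pair.

        W          : (n, n) recurrent weight matrix
        I          : (n,) external input, nonzero only on the input units
        target     : desired activations of the output units
        output_idx : indices of the output units
        """
        n = W.shape[0]

        # 1. Forward relaxation to a stable activation state:
        #    x_i = sigma(sum_j W_ij x_j + I_i)
        x = relax(lambda x: sigma(W @ x + I), np.zeros(n))
        u = W @ x + I                          # net input at the fixed point

        # 2. Error, defined only at the output units
        J = np.zeros(n)
        J[output_idx] = target - x[output_idx]

        # 3. Backward (adjoint) relaxation:
        #    y_r = J_r + sum_i sigma'(u_i) W_ir y_i
        y = relax(lambda y: J + W.T @ (sigma_prime(u) * y), np.zeros(n))

        # 4. Gradient-descent weight update:
        #    dW_rs = eta * sigma'(u_r) * y_r * x_s
        W += eta * np.outer(sigma_prime(u) * y, x)
        return W, x

Both relaxations are assumed to settle into stable fixed points; if the forward dynamics do not converge, the gradient computed by this sketch is not meaningful.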

References

  1. ^ Feynman, Richard P. (August 1939). "Forces in Molecules". Physical Review. American Physical Society. 56 (4): 340–3. Bibcode:1939PhRv...56..340F. doi:10.1103/PhysRev.56.340.
  2. ^ Pineda, Fernando (9 November 1987). "Generalization of Back-Propagation to Recurrent Neural Networks". Physical Review Letters. 59 (19): 2229–32. Bibcode:1987PhRvL..59.2229P. doi:10.1103/PhysRevLett.59.2229. PMID 10035458.
  3. ^ Almeida, Luis B. (June 1987). A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. IEEE First International Conference on Neural Networks. San Diego, CA, USA. pp. 608–18.

