Peer-reviewed article

A Subgrouping Strategy that Reduces Complexity and Speeds Up Learning in Recurrent Networks

1989; The MIT Press; Volume: 1; Issue: 4; Language: English

DOI

10.1162/neco.1989.1.4.552

ISSN

1530-888X

Authors

David Zipser

Abstract

An algorithm, called RTRL, for training fully recurrent neural networks has recently been studied by Williams and Zipser (1989a, b). Although RTRL has been shown to have great power and generality, it has the disadvantage of requiring a great deal of computation time. A technique is described here for reducing the amount of computation required by RTRL without changing the connectivity of the networks. This is accomplished by dividing the original network into subnets for the purpose of error propagation while leaving it undivided for activity propagation. An example is given of a 12-unit network that learns to be the finite-state part of a Turing machine and runs 10 times faster with the subgrouping strategy than with the original algorithm.
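The abstract's description can be illustrated with a small sketch. The code below is not taken from the paper; the class name SubgroupedRTRL, the use of sigmoid units, the omission of external inputs, and the simple squared-error teacher signal are all illustrative assumptions. It shows the core idea: forward activity propagation uses the full recurrent weight matrix, while the RTRL sensitivity tensors are maintained separately within each subgroup, so errors are propagated only through within-subgroup recurrent weights. For g equal subgroups this shrinks the dominant sensitivity-update cost from roughly O(n^4) to roughly O(n^4 / g^2).

```python
# Minimal sketch of subgrouped RTRL (illustrative, not the paper's original code).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SubgroupedRTRL:
    def __init__(self, n_units, n_groups, seed=0):
        assert n_units % n_groups == 0
        rng = np.random.default_rng(seed)
        self.n, self.g = n_units, n_groups
        self.s = n_units // n_groups                   # units per subgroup
        self.W = rng.normal(0.0, 0.1, (self.n, self.n))  # full connectivity for activity
        self.y = np.zeros(self.n)
        # One sensitivity tensor per subgroup: p[k, i, j] = d y_k / d W[i, j],
        # with k and i restricted to the subgroup and j ranging over all units.
        self.p = [np.zeros((self.s, self.s, self.n)) for _ in range(n_groups)]

    def step(self, target, lr=0.1):
        y_prev = self.y.copy()
        self.y = sigmoid(self.W @ y_prev)              # activity uses the whole network
        err = target - self.y                          # simple squared-error teacher signal
        dW = np.zeros_like(self.W)
        for grp in range(self.g):
            lo, hi = grp * self.s, (grp + 1) * self.s
            fprime = self.y[lo:hi] * (1.0 - self.y[lo:hi])
            # Error is propagated only through within-subgroup recurrent weights.
            W_sub = self.W[lo:hi, lo:hi]
            recur = np.einsum('kl,lij->kij', W_sub, self.p[grp])
            direct = np.zeros((self.s, self.s, self.n))
            direct[np.arange(self.s), np.arange(self.s), :] = y_prev  # delta_{ki} * y_j(t-1)
            self.p[grp] = fprime[:, None, None] * (recur + direct)
            # Gradient contribution for the weights feeding this subgroup.
            dW[lo:hi, :] += np.einsum('k,kij->ij', err[lo:hi], self.p[grp])
        self.W += lr * dW
        return float(0.5 * np.sum(err ** 2))
```

As a usage example under these assumptions, SubgroupedRTRL(12, 3) tracks each weight's sensitivities over only 4 units instead of 12; a reduction of roughly g squared in sensitivity-update work is consistent with the order-of-magnitude speedup the abstract reports for the 12-unit Turing-machine example.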
