Learning and generalization in a two-layer neural network: The role of the Vapnik-Chervonenkis dimension
1994; American Physical Society; Volume: 72; Issue: 13; Language: English
DOI: 10.1103/physrevlett.72.2113
ISSN: 1092-0145
Topic(s): Model Reduction and Neural Networks
Abstract: Bounds for the generalization ability of neural networks based on Vapnik-Chervonenkis (VC) theory are compared with statistical mechanics results for the case of the parity machine. At fixed phase space dimension, the VC dimension can grow arbitrarily large as the number K of hidden units is increased. Generalization is impossible up to a critical number of training examples that grows with the VC dimension. The asymptotic decrease of the generalization error $\varepsilon_G$ turns out to be independent of K, and the VC bounds strongly overestimate $\varepsilon_G$. This shows that the phase space dimension and the VC dimension can play independent and different roles in the generalization process.
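For context, a minimal sketch (not taken from the record itself) of the tree parity machine architecture referred to in the abstract and of a generic VC-type bound, with assumed notation: the N input components are split into K disjoint receptive fields $\mathbf{x}_k$, P is the number of training examples, and $d_{VC}$ is the VC dimension.

% Output of a tree parity machine: product of the signs of K hidden perceptrons
$$\sigma(\mathbf{x}) \;=\; \prod_{k=1}^{K} \operatorname{sign}\!\left(\mathbf{w}_k \cdot \mathbf{x}_k\right)$$

% Generic realizable-case uniform-convergence (VC-type) bound on the generalization error
$$\varepsilon_G \;\lesssim\; O\!\left(\frac{d_{VC}\,\ln(P/d_{VC})}{P}\right)$$

The contrast drawn in the abstract is between a bound of this type, which grows with $d_{VC}$ (and hence with K at fixed phase space dimension), and the statistical mechanics result that the asymptotic decay of $\varepsilon_G$ is independent of K.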