Peer-reviewed article

Some new results on neural network approximation

1993; Elsevier BV; Volume: 6; Issue: 8; Language: English

10.1016/s0893-6080(09)80018-x

ISSN

1879-2782

Authors

Kurt Hornik

Topic(s)

Stochastic Gradient Optimization Techniques

Abstract

We show that standard feedforward networks with as few as a single hidden layer can uniformly approximate continuous functions on compacta provided that the activation function ψ is locally Riemann integrable and nonpolynomial, and have universal L^p(μ) approximation capabilities for finite and compactly supported input environment measures μ provided that ψ is locally bounded and nonpolynomial. In both cases, the input-to-hidden weights and hidden layer biases can be constrained to arbitrarily small sets; if in addition ψ is locally analytic, a single universal bias will do.
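
The following is a minimal numerical illustration of the kind of approximation the abstract describes, not a reconstruction of the paper's proofs: a single-hidden-layer network with a nonpolynomial activation (tanh) fit to a continuous target on a compact interval. The target function, hidden-layer width, random weight scheme, and least-squares fit of the output layer are all assumptions chosen for the demo.

```python
# Sketch: single-hidden-layer tanh network approximating a continuous
# function on the compact interval [-1, 1]. Hidden weights and biases are
# drawn at random (an assumed scheme, chosen for simplicity); only the
# output-layer weights are fit, by linear least squares.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Hypothetical continuous target on [-1, 1]
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

n_hidden = 200
W = rng.normal(size=n_hidden)           # input-to-hidden weights
b = rng.uniform(-1, 1, size=n_hidden)   # hidden-layer biases

x = np.linspace(-1.0, 1.0, 500)         # grid over the compact domain
H = np.tanh(np.outer(x, W) + b)         # hidden activations, shape (500, n_hidden)
c, *_ = np.linalg.lstsq(H, target(x), rcond=None)  # output weights

# Grid estimate of the uniform (sup-norm) approximation error
err = np.max(np.abs(H @ c - target(x)))
print(f"max |f - net| on grid: {err:.2e}")
```

Increasing n_hidden drives the sup-norm error on the grid toward zero, which is the empirical counterpart of the uniform approximation property; the theorem's stronger claims (arbitrarily small weight sets, a single universal bias for locally analytic ψ) are not exercised by this sketch.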
