Article Open access Peer-reviewed

Convergence of Estimates Under Dimensionality Restrictions

1973; Institute of Mathematical Statistics; Volume: 1; Issue: 1; Language: English

DOI

10.1214/aos/1193342380

ISSN

2168-8966

Authors

Lucien LeCam

Topic(s)

Markov Chains and Monte Carlo Methods

Abstract

Consider independent identically distributed observations whose distribution depends on a parameter $\theta$. Measure the distance between two parameter points $\theta_1, \theta_2$ by the Hellinger distance $h(\theta_1, \theta_2)$. Suppose that for $n$ observations there is a good but not perfect test of $\theta_0$ against $\theta_n$. Then $n^{\frac{1}{2}}h(\theta_0, \theta_n)$ stays away from zero and infinity. The usual parametric examples, regular or irregular, also have the property that there are estimates $\hat{\theta}_n$ such that $n^{\frac{1}{2}}h(\hat{\theta}_n, \theta_0)$ stays bounded in probability, so that the rates of separation for tests and estimates are essentially the same. The present paper shows that this need not be true in general, but that it is correct under certain metric dimensionality assumptions on the parameter set. It is then shown that these assumptions imply convergence at the required rate of the Bayes estimates or maximum probability estimates.
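As an illustration of the $n^{\frac{1}{2}}$ scaling (a standard textbook computation, not taken from the paper), fix the common normalization $h^2(\theta_1, \theta_2) = 1 - \int \sqrt{p_{\theta_1} p_{\theta_2}}\, d\mu$; the paper's convention may differ by a constant factor. For the unit-variance normal location family $N(\theta, 1)$ the affinity has a closed form, giving

$$h^2(\theta_1, \theta_2) = 1 - \exp\!\left(-\frac{(\theta_1 - \theta_2)^2}{8}\right) \approx \frac{(\theta_1 - \theta_2)^2}{8} \quad \text{as } \theta_2 \to \theta_1,$$

so $n^{\frac{1}{2}}h(\theta_0, \theta_n)$ stays away from zero and infinity precisely when $|\theta_n - \theta_0| \asymp n^{-\frac{1}{2}}$, recovering the usual parametric rate.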
