Asymptotic Optimality of the Fast Randomized Versions of GCV and $C_L$ in Ridge Regression and Regularization
1991; Institute of Mathematical Statistics; Volume: 19; Issue: 4; Language: English
DOI: 10.1214/aos/1176348380
ISSN: 2168-8966
Topic(s): Probabilistic and Robust Engineering Design
Abstract: Ridge regression is a well-known technique for estimating the coefficients of a linear model. The method of regularization is a similar approach, commonly used to solve underdetermined linear equations with discrete noisy data. When applying such a technique, the choice of the smoothing (or regularization) parameter $h$ is crucial. Generalized cross-validation (GCV) and Mallows' $C_L$ are two popular methods for estimating a good value of $h$ from the data. Their asymptotic properties, such as consistency and asymptotic optimality, have been widely studied [Craven and Wahba (1979); Golub, Heath and Wahba (1979); Speckman (1985)]. Very interesting convergence results for the actual (random) parameter given by GCV and $C_L$ have been shown by Li (1985, 1986). Recently, Girard (1987, 1989) has proposed fast randomized versions of GCV and $C_L$. The purpose of this paper is to show that the above convergence results also hold for these new methods.
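To make the objects concrete, the following is a minimal NumPy sketch of the exact GCV criterion for ridge regression and of a one-probe randomized variant in the spirit of Girard's proposal, in which the trace of the influence matrix A(h) is replaced by an unbiased single-vector estimate w'A(h)w (a Hutchinson-type estimator with w ~ N(0, I), drawn once and reused across all h). The function names, the grid search, and the normalization are illustrative assumptions, not the paper's exact construction; Girard's fast versions are designed so that each evaluation costs only extra linear solves rather than an explicit trace computation.

    import numpy as np

    def ridge_hat_apply(X, h, v):
        # Apply the ridge influence matrix A(h) = X (X'X + n h I)^{-1} X' to v.
        n, p = X.shape
        return X @ np.linalg.solve(X.T @ X + n * h * np.eye(p), X.T @ v)

    def gcv(X, y, h):
        # Exact GCV score: (1/n)||(I - A(h)) y||^2 / [(1/n) tr(I - A(h))]^2.
        n, p = X.shape
        resid = y - ridge_hat_apply(X, h, y)
        # tr A(h) = tr[(X'X + n h I)^{-1} X'X], a p x p computation.
        tr_A = np.trace(np.linalg.solve(X.T @ X + n * h * np.eye(p), X.T @ X))
        return (resid @ resid / n) / (1.0 - tr_A / n) ** 2

    def randomized_gcv(X, y, h, w):
        # Randomized GCV sketch: tr A(h) is replaced by w' A(h) w, which is
        # unbiased for the trace when w has i.i.d. standard-normal entries.
        # The same probe w must be reused for every h so the criterion stays
        # a smooth function of h. (Normalization conventions may differ from
        # Girard's papers.)
        n = len(y)
        resid = y - ridge_hat_apply(X, h, y)
        tr_A_hat = w @ ridge_hat_apply(X, h, w)
        return (resid @ resid / n) / (1.0 - tr_A_hat / n) ** 2

    # Hypothetical usage on simulated data: both criteria are minimized
    # over a grid of candidate smoothing parameters.
    rng = np.random.default_rng(0)
    n, p = 200, 30
    X = rng.standard_normal((n, p))
    y = X @ rng.standard_normal(p) + rng.standard_normal(n)
    w = rng.standard_normal(n)                # single probe vector, fixed across h
    grid = np.logspace(-4, 1, 40)
    h_exact = min(grid, key=lambda h: gcv(X, y, h))
    h_fast = min(grid, key=lambda h: randomized_gcv(X, y, h, w))

The paper's result, in these terms, is that the minimizer of the randomized criterion inherits the same asymptotic optimality that Li (1985, 1986) established for the minimizer of the exact GCV and $C_L$ criteria.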