A Conspiracy of Random Predictors and Model Violations against Classical Inference in Regression

2014; Cornell University; Language: English

Authors

Andreas Buja, Richard A. Berk, Lawrence Brown, Edward I. George, Emil Pitkin, Mikhail Traskin, K. Zhang, Liwei Zhao

Topic(s)

Bayesian Modeling and Causal Inference

Abstract

[…] fixed, White permits models to be "misspecified" and predictors to be random. Careful reading of his theory shows that it is a synergistic effect, a "conspiracy", of nonlinearity and randomness of the predictors that has the deepest consequences for statistical inference. It will be seen that the synonym "heteroskedasticity-consistent" for the sandwich estimator is misleading because nonlinearity is a more consequential form of model deviation than heteroskedasticity, and both forms are handled asymptotically correctly by the sandwich estimator. The same analysis shows that a valid alternative to the sandwich estimator is given by the "pairs bootstrap", for which we establish a direct connection to the sandwich estimator. We continue with an asymptotic comparison of the sandwich estimator and the standard error estimator from classical linear models theory. The comparison shows that when standard errors from linear models theory deviate from their sandwich analogs, they are usually too liberal, but occasionally they can be too conservative as well. We conclude by answering questions that would occur to statisticians acculturated to the assumption of model correctness and conditionality on the predictors: (1) Why should we be interested in inference for models that are not correct? (2) What are the arguments for conditioning on predictors, and why might they not be valid? In this review we limit ourselves to linear least squares regression as the demonstration object, but the qualitative insights hold for all forms of regression.
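The three standard-error estimators compared in the abstract can be sketched numerically. The following is a minimal illustration, not code from the paper: the simulated setup (a quadratic truth fit by a straight line, so the model is misspecified and the predictors are random) and all variable names are assumptions chosen for demonstration. It computes classical linear-models-theory standard errors, the HC0 sandwich estimator, and pairs-bootstrap standard errors; the sandwich and pairs-bootstrap values should roughly agree, while the classical slope standard error comes out too liberal in this setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Misspecified setting: the truth is quadratic, but we fit a line;
# predictors are random (drawn afresh), as in White's framework.
n = 500
x = rng.uniform(-1.0, 2.0, size=n)
y = x**2 + rng.normal(scale=0.3, size=n)
X = np.column_stack([np.ones(n), x])            # design matrix with intercept

# OLS fit
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
p = X.shape[1]

# Classical standard errors from linear models theory: sigma^2 (X'X)^{-1}
sigma2 = resid @ resid / (n - p)
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# Sandwich (HC0) estimator: (X'X)^{-1} [X' diag(r_i^2) X] (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
se_sandwich = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

# Pairs bootstrap: resample (x_i, y_i) pairs jointly and refit,
# so the randomness of the predictors is reflected in the resampling.
B = 2000
boot = np.empty((B, p))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
se_boot = boot.std(axis=0, ddof=1)
```

Under this nonlinearity, the squared residuals are largest at the extremes of the predictor range, exactly where leverage is highest, which is why the sandwich slope standard error exceeds the classical one and why the pairs bootstrap, which tracks the sandwich estimator, agrees with it.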
