Recursive least squares estimation
2006; Cambridge University Press; Language: English
10.1017/cbo9780511526480.009
Authors: John M. Lewis, S. Lakshmivarahan, Sudarshan Dhall
Topic(s): Machine Learning and Algorithms
Abstract: So far, in Chapters 5 through 7, it was assumed that the number m of observations is fixed and known in advance. This treatment has come to be known as the fixed-sample or off-line version of the least squares problem. In this chapter, we introduce the rudiments of the dual problem, wherein the observations are not known in advance but arrive sequentially in time. The challenge is to keep updating the optimal estimate as each new observation arrives. A naive approach would be to repeatedly solve a sequence of least squares problems after the arrival of every new observation, using the methods described in Chapters 5 through 7. A little reflection reveals, however, that this is inefficient and computationally very expensive. The real question is: knowing the optimal estimate x(m) based on the first m samples, can we compute x(m + 1), the optimal estimate for (m + 1) samples, recursively, by computing an increment or correction to x(m) that reflects the new information contained in the (m + 1)th observation? The answer is indeed “yes”, and it leads to the sequential or recursive method for least squares estimation that is the subject of this chapter.
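The chapter's own derivation is not reproduced in this record, but the update it describes can be illustrated with a minimal sketch. The following Python fragment assumes a linear observation model with scalar measurements z = hᵀx + noise; the function name `rls_update` and all variable names are choices made here for illustration, not notation from the book. Each step forms a gain-weighted correction to the current estimate x(m) rather than re-solving the full least squares problem.

```python
import numpy as np

def rls_update(x, P, h, z):
    """One recursive least squares step: fold a single new observation
    z into the current estimate x without re-solving from scratch.

    x : current estimate x(m), shape (n,)
    P : covariance-like matrix, plays the role of (H^T H)^{-1}, shape (n, n)
    h : regressor vector of the new observation, shape (n,)
    z : new scalar observation
    """
    Ph = P @ h
    gain = Ph / (1.0 + h @ Ph)       # gain vector for the new sample
    innovation = z - h @ x           # new information in the (m+1)th observation
    x_new = x + gain * innovation    # correction to x(m)
    P_new = P - np.outer(gain, Ph)   # rank-one downdate of P
    return x_new, P_new

# Tiny demonstration: recover the coefficients of z = 2a - 3b sequentially.
rng = np.random.default_rng(0)
x = np.zeros(2)          # initial guess
P = 1e6 * np.eye(2)      # large P encodes an uninformative prior
for _ in range(500):
    h = rng.normal(size=2)
    z = h @ np.array([2.0, -3.0]) + 0.01 * rng.normal()
    x, P = rls_update(x, P, h, z)
print(x)  # approaches [2, -3]
```

The design point the sketch is meant to convey is the one the abstract makes: because P is maintained incrementally by a rank-one correction, each update costs O(n²) operations, whereas re-solving the fixed-sample problem after every arrival would repeat an O(mn²) computation each time.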