Lessons learned from reading “A Fresh Look at the Kalman Filter” by Jeffrey Humpherys, Preston Redd, and Jeremy West

While Rudolf Kálmán was among the first innovators in the field of state estimation, and his filter’s applications in global positioning, tracking, robotics, radar, fault detection, computer vision, stabilization, automotive control systems, and voice recognition all point to one thing, the mathematical precision of state estimation, this particular paper on filtering for state estimation in noisy linear discrete-time dynamical systems is unique in that it offers a broader base for understanding the concept as well as the mathematical applications and computations. From this paper, I learned the foundation of state estimation and the use of Newton’s method for root finding. It is also clear that Newton’s method for root finding is an effective approach for deriving a range of variations, such as recursive estimation for prediction and smoothing, as well as estimates with fading memory. Newton’s method for root finding is also useful when deriving the extended Kalman filter for nonlinear systems (Pearson & Stear, 1974).

Background of the study

The paper is founded on basic linear least squares estimation and Newton’s method. The measurement model is b = Ax + ε, where A is an m × n matrix of rank n and ε is an m-dimensional random variable with zero mean and known positive definite covariance (Humpherys, Redd, & West, 2012).
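To make the model concrete, here is a small numpy sketch that generates data from b = Ax + ε; the dimensions m and n, the state, and the covariance Q are made-up illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))          # m x n design matrix, rank n
x_true = np.array([1.0, -2.0, 0.5])      # unknown state to be recovered
Q = np.diag(rng.uniform(0.1, 1.0, m))    # known positive definite covariance
eps = rng.multivariate_normal(np.zeros(m), Q)  # zero-mean noise
b = A @ x_true + eps                     # noisy observations
```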

The state estimation approach also proves that the Gauss–Markov estimator minimizes the mean-squared error. For this reason, the minimum variance linear unbiased estimator is also called the Gauss–Markov estimator.

In the same way, I learned that by applying root finding to the gradient of the system’s objective function, it becomes easy to find the minimizer, while the inverse of the Hessian equals the covariance of the estimator. This shows that Newton’s method can be iterated to reach the minimizing solution.
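The point can be checked numerically: for the quadratic least-squares objective, one Newton step from any starting point lands exactly on the weighted least-squares minimizer. The data below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 3
A = rng.standard_normal((m, n))
Q = np.diag(rng.uniform(0.5, 1.5, m))
b = rng.standard_normal(m)
Qinv = np.linalg.inv(Q)

def grad(x):
    # gradient of f(x) = 0.5 * (b - Ax)^T Q^{-1} (b - Ax)
    return -A.T @ Qinv @ (b - A @ x)

H = A.T @ Qinv @ A                      # Hessian: constant for a quadratic
x0 = np.zeros(n)                        # arbitrary starting point
x1 = x0 - np.linalg.solve(H, grad(x0))  # a single Newton iteration

# x1 coincides with the weighted least-squares minimizer,
# and inv(H) is the covariance of the Gauss-Markov estimator.
x_wls = np.linalg.solve(H, A.T @ Qinv @ b)
```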

The Kalman Filter

In a discrete-time linear system, the stacked noise term is a zero-mean random variable, and its inverse covariance is a positive definite block diagonal matrix.

I also learned that the best way to find the best linear unbiased estimate is by solving the normal equations derived from the linear model. It is also important to note that the best linear unbiased estimator is given by the weighted least squares solution.
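A minimal sketch of those normal equations, (AᵀQ⁻¹A)x = AᵀQ⁻¹b, whose solution is the weighted least squares estimate; A, Q, and the true state here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 3
A = rng.standard_normal((m, n))
x_true = np.array([2.0, 0.0, -1.0])
Q = np.diag(rng.uniform(0.2, 1.0, m))
b = A @ x_true + rng.multivariate_normal(np.zeros(m), Q)

# Normal equations of the weighted problem: (A^T Q^{-1} A) x = A^T Q^{-1} b
Qinv = np.linalg.inv(Q)
x_hat = np.linalg.solve(A.T @ Qinv @ A, A.T @ Qinv @ b)
```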

The Kalman filter is presented in a two-step form. The two-step Kalman filter is founded on the fact that the number of observations matches the number of inputs.

The two-step Kalman filter is useful in special cases, especially when estimating the current state. For example, thanks to its recursive algorithm, the Kalman filter can provide the best linear unbiased estimate (Humpherys, Redd, & West, 2012).

The two-step filter is composed of a predictive step, which propagates the prior estimate-covariance pair forward, and a second step, called the update step, in which the previous estimate-covariance pair is corrected with the new observation.
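The two steps can be sketched as a pair of short functions. This is a minimal illustration for x_{k+1} = Fx_k + w_k, y_k = Hx_k + v_k; the matrices and noise covariances are assumptions chosen for a toy position/velocity tracking example, not values from the paper.

```python
import numpy as np

def predict(x, P, F, W):
    """Predictive step: propagate the estimate-covariance pair forward."""
    return F @ x, F @ P @ F.T + W

def update(x, P, y, H, V):
    """Update step: correct the prior pair with the new observation y."""
    S = H @ P @ H.T + V                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x + K @ (y - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: track position and velocity from noisy position readings.
F = np.array([[1.0, 1.0], [0.0, 1.0]])       # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                   # observe position only
W = 0.01 * np.eye(2)                         # process noise covariance
V = np.array([[0.25]])                       # measurement noise covariance
x, P = np.zeros(2), np.eye(2)
for y in [1.1, 2.0, 2.9, 4.2]:
    x, P = predict(x, P, F, W)
    x, P = update(x, P, np.array([y]), H, V)
```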

I also learned that because the Hessian is positive definite, a single iteration of Newton’s method is enough to arrive at the minimizer. This Newton step, built on the Hessian, is more specifically the foundation of the filter.

The Kalman filter also comes in several variants, and these are explained in the paper. For example, one can use the Kalman filter to determine the most recent state, as it focuses on using the most recent observation.

Both the predictive estimates and the smoothed estimates are based on recursive algorithms. It is also clear that while the standard Kalman filter can only find estimates of the current state, it can be modified to find the best linear unbiased estimates of a number of previous states as new outputs are observed. By estimating the entire vector of states, one sees that Newton’s method underlies both the smoothed and the predictive estimates (Humpherys, Redd, & West, 2012).

Fixed-lag smoothing is the process by which a recursive iterative method is followed to find estimates of a few previous states. Fixed-lag smoothing is especially useful when the upstream data are problematic and caution is required. There is also the fading-memory filter, which does not require all previous inputs and outputs to update the current state; it uses the latest inputs and measurements, discounting older ones, to update the current state variable. It is important to understand that this too is a recursive algorithm, and the current state is updated by a least-squares-based estimation. The motivation for this technique is the risk of errors in older measurements (Humpherys, Redd, & West, 2012).
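The fading-memory idea can be illustrated in a few lines: older measurements are discounted geometrically, so the estimate is dominated by recent data. The decay factor lam and the scalar constant-level model below are illustrative assumptions, not the paper’s formulation.

```python
import numpy as np

def fading_memory_mean(ys, lam=0.9):
    """Weighted least-squares estimate of a constant level, where a
    measurement of age k gets weight lam**k (newest has age 0)."""
    ys = np.asarray(ys, dtype=float)
    ages = np.arange(len(ys))[::-1]   # age 0 for the newest measurement
    w = lam ** ages
    return float(np.sum(w * ys) / np.sum(w))

# The true level jumps from 0 to 5; the fading-memory estimate tracks
# the recent level far more closely than the plain average does.
ys = [0.0] * 10 + [5.0] * 5
print(fading_memory_mean(ys, lam=0.7))   # -> about 4.18
print(np.mean(ys))                       # -> about 1.67
```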

Gauss–Markov theorem

The Gauss–Markov theorem proves that the linear unbiased estimator x̂ = Kb with minimum mean squared error is the one that solves the following optimization problem. The objective function is convex, and this is what makes it a convex optimization problem.

minimize over K ∈ R^(n×m):  tr(KQK^T)

subject to KA = I.
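A quick numeric check, with made-up A and Q, that the weighted least-squares gain K = (AᵀQ⁻¹A)⁻¹AᵀQ⁻¹, which the Gauss–Markov theorem identifies as optimal, is feasible for this problem, i.e. satisfies KA = I.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 7, 3
A = rng.standard_normal((m, n))
Q = np.diag(rng.uniform(0.5, 2.0, m))
Qinv = np.linalg.inv(Q)

K = np.linalg.solve(A.T @ Qinv @ A, A.T @ Qinv)   # n x m gain matrix
assert np.allclose(K @ A, np.eye(n))              # constraint KA = I holds
cost = np.trace(K @ Q @ K.T)                      # objective tr(K Q K^T)
```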

The standard Kalman filter is useful for solving basic optimization and state estimation problems. However, many estimation problems are nonlinear, and for these even a biased algorithm is acceptable. This creates the need for an extended Kalman filter. I learned from the paper that the extended Kalman filter is a recursive state estimation algorithm for noisy nonlinear systems. I also learned that the extended Kalman filter is not the unbiased minimum mean squared error estimator and cannot be used as a minimum variance unbiased estimator of any state. According to Humpherys, Redd, & West (2012), the extended Kalman filter nonetheless offers a good approach to estimating nonlinear systems. Its performance therefore depends on how well the local linear dynamics match the nonlinear dynamics.
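One extended-Kalman-filter step can be sketched for a scalar nonlinear system x' = f(x) + w, y = h(x) + v: the dynamics are linearized at the current estimate, and then the usual two steps run on the linearization. The functions f and h and the noise variances below are illustrative assumptions, not from the paper.

```python
import numpy as np

f  = lambda x: np.sin(x)          # nonlinear dynamics
df = lambda x: np.cos(x)          # its derivative (scalar Jacobian)
h  = lambda x: x ** 2             # nonlinear measurement
dh = lambda x: 2.0 * x
W, V = 0.01, 0.1                  # process / measurement noise variances

def ekf_step(x, P, y):
    # Predict through the local linearization F = df(x).
    F = df(x)
    x_pred, P_pred = f(x), F * P * F + W
    # Update with the measurement linearization H = dh(x_pred).
    H = dh(x_pred)
    S = H * P_pred * H + V            # innovation variance
    K = P_pred * H / S                # Kalman gain
    return x_pred + K * (y - h(x_pred)), (1 - K * H) * P_pred

x, P = 1.0, 1.0
x, P = ekf_step(x, P, y=0.7)
```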

Conclusion

From the paper it is quite clear that the Kalman filter and all its variants are founded on Newton’s method. The main advantage of this approach is its simplicity: only multivariable calculus and linear algebra are used for state estimation.

References

Humpherys, J., Redd, P., & West, J. (2012). A fresh look at the Kalman filter. SIAM Review, 54(4), 801–823.

Pearson, J. B., & Stear, E. B. (1974). Kalman filter applications in airborne radar tracking. IEEE Transactions on Aerospace and Electronic Systems, 10, 319–329.
