# Particle Filtering

Particle filters, also known as sequential Monte Carlo (SMC) methods, are simulation-based techniques for estimating the state of a dynamical model.

They are usually used to estimate Bayesian models and are the sequential (online) analogue of Markov chain Monte Carlo (MCMC) batch methods; they are closely related to importance sampling. If well designed, particle filters can be much faster than MCMC. They are often used as an alternative to the extended Kalman filter (EKF), with the advantage that, given sufficient samples, they approach the Bayesian optimal estimate and can therefore be made more accurate than the EKF. The two approaches can also be combined, using a Kalman filter as the proposal distribution for the particle filter.

## Goal

The particle filter aims to estimate the sequence of hidden states x_k, for k = 0, 1, 2, ..., based only on the observed data y_k, for k = 0, 1, 2, .... All Bayesian estimates of x_k follow from the posterior distribution. Rather than targeting the full posterior p(x_0, ..., x_k | y_0, ..., y_k), which would be the usual MCMC or importance sampling approach, particle methods estimate the filtering distribution p(x_k | y_0, ..., y_k).

## Model

Particle methods assume that the states x_k and the observations y_k can be modelled in this form:

• x_k is a first-order Markov process, so that

x_k | x_{k-1} ~ p(x_k | x_{k-1}),

with an initial distribution p(x_0).

• The observations y_k are conditionally independent given the states, so that

y_k | x_k ~ p(y_k | x_k).

In other words, each y_k depends only on x_k.

Particle filters are only an approximation, but with enough particles they can be made very accurate.
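The model assumptions above can be sketched in code. The following is a minimal illustration, not from the original text: it assumes a hypothetical 1-D Gaussian random-walk transition and a Gaussian observation likelihood, with arbitrary noise levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(x_prev):
    # p(x_k | x_{k-1}): first-order Markov dynamics (assumed random walk)
    return x_prev + rng.normal(0.0, 1.0, size=x_prev.shape)

def likelihood(y, x):
    # p(y_k | x_k) up to a constant: each y_k depends only on x_k
    return np.exp(-0.5 * (y - x) ** 2)

# Draw an initial particle set from the initial distribution p(x_0).
x0 = rng.normal(0.0, 1.0, size=100)
```

Any model specified by a transition density, an observation likelihood, and an initial distribution fits this template.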

## Monte Carlo approximation

Particle methods, like all sampling-based approaches (e.g., MCMC), generate a set of samples that approximate the filtering distribution p(x_k | y_0, ..., y_k). So, with P samples, expectations with respect to the filtering distribution are approximated by

E[f(x_k)] ≈ (1/P) Σ_{i=1}^P f(x_k^{(i)}),

and f, in the usual way for Monte Carlo, can give all the moments etc. of the distribution up to some degree of approximation.

Generally, the algorithm is repeated iteratively for a specific number of time steps k (call this number N). Initializing x_0 = 0 for all particles provides a starting place from which to generate x_1, which can then be used to generate x_2, and so on up to k = N. When done, the mean of x_k over all the particles (or the weighted mean) is approximately the actual value of x_k.
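The iteration described above can be sketched as a simple bootstrap-style filter. This is an illustrative example under assumed parameters (a random-walk model, Gaussian observation noise of standard deviation 0.5, and arbitrary particle and step counts), not a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
P, N = 500, 20    # number of particles and number of time steps (arbitrary)
OBS_STD = 0.5     # assumed observation-noise standard deviation

# Simulate a trajectory and noisy observations from the assumed model.
true_x = np.cumsum(rng.normal(size=N))
ys = true_x + rng.normal(0.0, OBS_STD, size=N)

particles = np.zeros(P)  # initialize x_0 = 0 for every particle
estimates = []
for k in range(N):
    particles = particles + rng.normal(0.0, 1.0, size=P)     # propagate to x_k
    w = np.exp(-0.5 * ((ys[k] - particles) / OBS_STD) ** 2)  # weight by p(y_k | x_k)
    w /= w.sum()                                             # normalize weights
    particles = particles[rng.choice(P, size=P, p=w)]        # resample
    estimates.append(particles.mean())  # particle mean ≈ E[x_k | y_0..y_k]
```

After the loop, `estimates[k]` approximates the filtering mean of x_k at each step.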

## Sampling Importance Resampling (SIR)

Sampling importance resampling (SIR) is a very commonly used particle filtering algorithm, which approximates the filtering distribution p(x_k | y_0, ..., y_k) by a weighted set of particles

{(w_k^{(i)}, x_k^{(i)}) : i = 1, ..., P}.

The importance weights w_k^{(i)} are approximations to the relative posterior probabilities (or densities) of the particles, normalized such that Σ_{i=1}^P w_k^{(i)} = 1.

SIR is a sequential (i.e., recursive) version of importance sampling. As in importance sampling, the expectation of a function f can be approximated as a weighted average

E[f(x_k)] ≈ Σ_{i=1}^P w_k^{(i)} f(x_k^{(i)}).

Sampling importance resampling (SIR) filters that use the transition prior p(x_k | x_{k-1}) as the importance function are commonly known as the bootstrap filter and the condensation algorithm.

Resampling is used to avoid the degeneracy problem, that is, the situation in which all but one of the importance weights are close to zero. The performance of the algorithm can also be affected by the choice of resampling method; stratified resampling is optimal in terms of variance.
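A sketch of stratified resampling follows. The function name and interface are illustrative assumptions; the idea, drawing one uniform sample per equal-probability stratum, is the standard one.

```python
import numpy as np

def stratified_resample(weights, rng):
    # Stratified resampling: one uniform draw per stratum [i/P, (i+1)/P).
    # Each stratum contributes exactly one sample, which reduces the
    # variance of the resampled particle counts compared with plain
    # multinomial resampling.
    P = len(weights)
    positions = (np.arange(P) + rng.uniform(size=P)) / P
    return np.searchsorted(np.cumsum(weights), positions)
```

For example, with uniform weights every particle is copied exactly once, whereas multinomial resampling would duplicate and drop particles at random.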

A single step of sequential importance resampling is as follows:

1) For i = 1, ..., P, draw samples from the importance distribution:

x_k^{(i)} ~ π(x_k | x_{0:k-1}^{(i)}, y_{0:k}).

2) For i = 1, ..., P, evaluate the importance weights up to a normalizing constant:

ŵ_k^{(i)} = w_{k-1}^{(i)} · p(y_k | x_k^{(i)}) p(x_k^{(i)} | x_{k-1}^{(i)}) / π(x_k^{(i)} | x_{0:k-1}^{(i)}, y_{0:k}).

3) For i = 1, ..., P, compute the normalized importance weights:

w_k^{(i)} = ŵ_k^{(i)} / Σ_{j=1}^P ŵ_k^{(j)}.

4) Compute an estimate of the effective number of particles as

N_eff = 1 / Σ_{i=1}^P (w_k^{(i)})².

5) If the effective number of particles is less than a given threshold N_thr, then perform resampling:

a) Draw P particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one.

b) For i = 1, ..., P, set w_k^{(i)} = 1/P.
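The five steps can be sketched as one function. This assumes the same hypothetical random-walk model as before and uses the transition prior as the importance distribution (the bootstrap choice), so the incremental weight in step 2 reduces to the likelihood; the names and noise levels are illustrative assumptions.

```python
import numpy as np

def neff(weights):
    # Step 4: effective number of particles, 1 / sum_i (w_i)^2
    return 1.0 / np.sum(weights ** 2)

def sir_step(particles, weights, y, rng, n_threshold):
    P = len(particles)
    # Step 1: sample from the importance distribution; here the transition
    # prior of an assumed random-walk model (bootstrap filter).
    particles = particles + rng.normal(0.0, 1.0, size=P)
    # Step 2: with the transition prior as proposal, the unnormalized
    # weight is the previous weight times the likelihood p(y_k | x_k).
    w = weights * np.exp(-0.5 * (y - particles) ** 2)
    # Step 3: normalize the weights.
    w /= w.sum()
    # Step 5: resample if the effective particle count drops too low.
    if neff(w) < n_threshold:
        particles = particles[rng.choice(P, size=P, p=w)]  # step 5a
        w = np.full(P, 1.0 / P)                            # step 5b
    return particles, w
```

Calling `sir_step` once per observation, carrying `particles` and `weights` forward, implements the recursion.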

The term Sequential Importance Resampling is also sometimes used when referring to SIR filters.

## Sequential Importance Sampling (SIS)

Sequential importance sampling (SIS) is the same as sampling importance resampling (SIR), but without the resampling stage.