Build your own (Bayes) Filter
Correspondence with Kalman Filtering?
A frequent discussion point is the correspondence between Kalman/particle/log-flow filtering strategies and factor graph formulations. This section aims to shed light on that relationship, and to show that the factor graph interpretation is a powerful generalization of existing filtering techniques. The discussion follows a build-your-own-filter style, combining the Approximate Convolution and Multiplying Densities pages as the required prediction and update cycle steps, respectively. Using the steps described here, the user will be able to build fully functional, i.e. non-Gaussian, (Bayes) filters.
A very simple 1D predict-correct Bayesian filtering example (using the underlying convolution and product operations of the mmisam algorithm) can be used as a rough template to familiarize yourself with the correspondence between filters and the newer graph-based operations.
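As a rough, self-contained sketch of such a 1D predict-correct cycle with sample-based (non-Gaussian capable) beliefs, consider the following; the function names `predict`, `correct`, and the noise models are illustrative assumptions, not the mmisam API:

```julia
# Minimal sketch of one predict-correct (Bayes) filter cycle where the belief
# over a scalar state is represented nonparametrically as a set of samples.
using Random, Statistics

# predict: push each belief sample through the motion model x' = x + u + η,
# a sample-based stand-in for the prediction convolution
predict(samples, u, noise) = [x + u + noise() for x in samples]

# correct: reweight the predicted samples by the measurement likelihood
# p(y | x) and resample, a stand-in for the product of densities
function correct(samples, y, meas_sigma)
    w = [exp(-0.5 * ((y - x) / meas_sigma)^2) for x in samples]
    c = cumsum(w ./ sum(w))
    return [samples[searchsortedfirst(c, rand() * c[end])] for _ in samples]
end

# usage: one cycle with a unit motion step and a direct measurement y = 1.2
belief = randn(500)                                 # prior belief samples
belief = predict(belief, 1.0, () -> 0.1 * randn())  # prediction step
belief = correct(belief, 1.2, 0.25)                 # measurement update
println(mean(belief), " +/- ", std(belief))
```

Because the belief is carried as samples rather than a mean and covariance, the same two operations continue to work when the noise or measurement models are non-Gaussian.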
Two other major topics are discussed separately: first, how out-marginalization works under a factor graph inference formulation, which is covered on the Bayes tree description page; second, see the FAQ on why we even care about non-Gaussian signal processing. The steps described on this page are accessible via several multi-language interfaces (middleware).
The conclusion at the end of the page tries to summarize why using a factor graph approach (with Bayes/junction tree inference) in an incremental/fixed-lag/federated sense, i.e. simultaneous localization and mapping (SLAM), has merit. The described steps form part of the core operations used by the multimodal incremental smoothing and mapping (mmisam) algorithm.
The Target Tracking Problem (Conventional Filtering)
Consider a common example, two dimensional target tracking, where a projectile transits over a tracking station using various sensing technologies [Zarchan 2013]. Position and velocity estimates of the target are to be maintained as it transits over the station.
Prediction Step using a Factor Graph
Assume a constant velocity model from which the estimate will be updated through the measurement model described in the next section. A constant velocity model is taken as (cartesian)

```math
\frac{d}{dt} \begin{bmatrix} x \\ \dot{x} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ \dot{x} \end{bmatrix} + \eta
```
or polar coordinates (r, \phi)

```math
\ddot{r} = r \dot{\phi}^2 + \eta_r , \qquad \ddot{\phi} = -\frac{2 \dot{r} \dot{\phi}}{r} + \eta_\phi .
```
In this example, noise is introduced as an affine slack variable \eta, but could be added to any part of the process model:

```math
\eta \sim p(\cdot)
```

where p is any allowable noise probability density/distribution model, discussed more in the next section.
After integration (assume a zeroth order hold over a time step \Delta t) the associated residual function can be constructed:

```math
\delta (\theta_k, \theta_{k+1}, \eta) = \theta_{k+1} - \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \theta_k - \eta
```

where \theta_k = [x_k, \dot{x}_k]^T.
Filter prediction steps are synonymous with a binary factor (conditional likelihood) between two variables, where a prior estimate from one variable is projected (by means of a convolution) onto the next variable. The convolutional principle page describes in more detail how such a convolution can be computed.
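As a sketch under the zeroth-order-hold model above, the prediction convolution can be approximated by pushing belief samples for \theta_k through the process model; the names `F` and `propagate` are illustrative, not the mmisam implementation:

```julia
# Sketch: approximate the prediction convolution for θ = [x, ẋ] by sampling.
dt = 0.1
F = [1.0 dt; 0.0 1.0]    # zeroth-order-hold state transition matrix

# each column of `samples` is one sample of θ_k; η is drawn per sample
function propagate(samples, noise_sigma)
    return hcat((F * samples[:, i] .+ noise_sigma .* randn(2) for i in 1:size(samples, 2))...)
end

# usage: project 500 samples of the belief on θ_k to θ_{k+1}
prior = vcat(randn(1, 500), 2.0 .+ 0.1 .* randn(1, 500))  # x ~ N(0,1), ẋ ~ N(2,0.1²)
predicted = propagate(prior, 0.05)
```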
Factor graphs are constructed along with the evolution of time, which allows the mmisam inference algorithm to resolve variable marginal estimates both forward and backward in time, whereas conventional filtering only allows for forward-backward "smoothing" as two separate processes. When inferring over a factor graph, all variables and factors are considered simultaneously according to the topological connectivity, irrespective of when and where measurements were made or communicated, as long as the factor graph (probabilistic model) captures the stochastics of the situation with sufficient accuracy.
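In the Caesar.jl / IncrementalInference.jl ecosystem, chaining such binary prediction factors into a graph and solving all variables at once might look roughly like the following sketch; the factor and function names (e.g. `LinearRelative`) follow the IncrementalInference.jl tutorials but may differ between package versions:

```julia
using IncrementalInference

fg = initfg()

# scalar state variables for three consecutive time steps
addVariable!(fg, :x0, ContinuousScalar)
addVariable!(fg, :x1, ContinuousScalar)
addVariable!(fg, :x2, ContinuousScalar)

# prior belief on the first state
addFactor!(fg, [:x0], Prior(Normal(0.0, 1.0)))

# binary prediction factors (conditional likelihoods) between time steps
addFactor!(fg, [:x0, :x1], LinearRelative(Normal(1.0, 0.1)))
addFactor!(fg, [:x1, :x2], LinearRelative(Normal(1.0, 0.1)))

# solve all variables simultaneously over the Bayes (junction) tree
tree = solveTree!(fg)
```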
Measurement Step using a Factor Graph
The measurement update is a product operation on infinite-dimensional functional objects (probability densities)

```math
p(\theta \mid Z_a, Z_b) \propto p(\theta \mid Z_a) \, p(\theta \mid Z_b)
```

where Z_{\cdot} represents the conditional information from two beliefs on the same variable. The product of the two functional estimates (beliefs) is computed with a stochastic algorithm described in more detail on the multiplying functions page.
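A sample-based sketch of such a product, weighting the samples of one belief by a naive kernel density estimate of the other, might look as follows (a stand-in for the actual stochastic product algorithm, not its implementation):

```julia
# Sketch: multiply two sample-based beliefs on the same variable.
using Statistics

# naive Gaussian kernel density estimate of `samples` evaluated at x
kde_eval(x, samples, bw) = mean(exp.(-0.5 .* ((x .- samples) ./ bw) .^ 2)) / (bw * sqrt(2π))

function belief_product(a, b; bw = 0.2)
    w = [kde_eval(x, b, bw) for x in a]  # weight a's samples by b's density
    c = cumsum(w ./ sum(w))
    return [a[searchsortedfirst(c, rand() * c[end])] for _ in a]  # resample
end

# usage: fuse a predicted belief with a measurement-induced belief
pred = 1.0 .+ 0.5 .* randn(500)
meas = 1.4 .+ 0.3 .* randn(500)
post = belief_product(pred, meas)
```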
Direct or indirect measurements of the state variables should be modeled with the most sensible function y = h(\theta, \eta), i.e. linear or nonlinear functions that approximate the underlying stochastics and physics of the process at hand.
Mmisam allows for parametric, non-parametric, or intensity noise models which can be incorporated into any differentiable residual function.
Direct state observations can be added to the factor graph as prior factors directly on the variables. Alternatively, measurement models can be used to project belief through a measurement function, via the associated residual:

```math
\delta (\theta, \eta) = y - h(\theta, \eta) .
```
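With sample-based beliefs, the indirect case might be sketched as follows; `h` here is an assumed range-style measurement function chosen purely for illustration:

```julia
# Sketch: a direct observation acts like a prior density on the variable,
# while an indirect observation projects belief through y = h(θ, η).
h(θ) = sqrt(θ[1]^2 + θ[2]^2)   # e.g. range-only observation of 2D position

# likelihood weights for position-belief samples given a range reading y
function measurement_weights(samples, y, sigma)
    w = [exp(-0.5 * ((y - h(samples[:, i])) / sigma)^2) for i in 1:size(samples, 2)]
    return w ./ sum(w)
end

# usage: weight 500 position samples against a range measurement y = 5.0
positions = 3.0 .+ randn(2, 500)
w = measurement_weights(positions, 5.0, 0.2)
```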
An illustration of both predictions (binary likelihood process model) and direct observations (measurements) is presented:
Target Tracking (Beyond Filtering)
Consider a multi-sensor system with data transmission delays, variable sampling rates, etc.; when designing a filtering solution to track one or multiple targets, it quickly becomes difficult to augment state vectors with the required state and measurement histories. In contrast, the factor graph as a language allows heterogeneous data streams to be combined in a common inference framework, as discussed further in the building distributed factor graphs section.
TODO: Multi-modal (belief) vs. multi-hypothesis – see the thesis work on multimodal solutions in the meantime.