# Factor Graph as a Whole

Assuming a factor graph object has been constructed by hand or by automation, it is often useful to store that factor graph to file for later loading, solving, analysis, etc. Caesar.jl provides this through easy saving and loading. To save a factor graph, simply do:

```julia
saveDFG("/somewhere/myfg", fg)
```

`DistributedFactorGraphs.saveDFG` (Function)

```julia
saveDFG(folder, dfg)
```

Save a DFG to a folder. Will create/overwrite folder if it exists.

DevNotes:

• TODO remove compress kwarg.

Example

```julia
using DistributedFactorGraphs, IncrementalInference
# Create a DFG - can make one directly, e.g. LightDFG{NoSolverParams}() or use IIF:
dfg = initfg()
# ... Add stuff to graph using either IIF or DFG:
v1 = addVariable!(dfg, :a, ContinuousScalar, tags = [:POSE], solvable=0)
# Now save it:
saveDFG(dfg, "/tmp/saveDFG.tar.gz")
```

Similarly, in the same or a new Julia context, you can load a factor graph object:

```julia
# using Caesar
```

Convenience wrapper to DFG.loadDFG! taking only one argument, the file name, to load a DFG object in standard format.

Load a DFG from a saved folder. Always provide the IIF module as the second parameter.
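The docstring above describes `loadDFG` as a one-argument convenience wrapper, so a minimal round-trip sketch (reusing the `/somewhere/myfg` path from the save example earlier) might look like:

```julia
using Caesar

# Restore a graph previously stored with saveDFG("/somewhere/myfg", fg):
fg_ = loadDFG("/somewhere/myfg")

# Confirm the variables survived the round trip:
ls(fg_)
```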

Example

```julia
using DistributedFactorGraphs, IncrementalInference
# Create a DFG - can make one directly, e.g. LightDFG{NoSolverParams}() or use IIF:
dfg = initfg()
# Use the DFG as you do normally.
ls(dfg)
```
Note

Julia natively provides an in-memory `deepcopy` function for duplicating objects, should you wish to keep a backup of the factor graph, e.g.:

```julia
fg2 = deepcopy(fg)
```

### Adding an Entry=>Data Blob store

A later part of the documentation will show how to include an Entry=>Data blob store.

## Querying the FactorGraph

### List Variables:

A quick summary of the variables in the factor graph can be retrieved with:

```julia
# List variables
ls(fg)
# List factors attached to x0
ls(fg, :x0)
# TODO: Provide an overview of getVal, getVert, getBW, getBelief, etc.
```

It is possible to filter the listing with a regular expression:

```julia
ls(fg, r"x\d")
```

`DistributedFactorGraphs.ls` (Function)

List the DFGVariables in the DFG. Optionally specify a label regular expression to retrieve a subset of the variables. `tags` is a list of tags of which a node must have at least one.

Notes:

• Returns Vector{Symbol}

```julia
ls(dfg)
ls(dfg, node; solvable)
```

Retrieve a list of labels of the immediate neighbors around a given variable or factor.

```julia
unsorted = intersect(ls(fg, r"x"), ls(fg, Pose2))  # by regex

# sorting in most natural way (as defined by DFG)
sorted = sortDFG(unsorted)
```

`DistributedFactorGraphs.sortDFG` (Function)

```julia
sortDFG(vars; by, kwargs...)
```

Convenience wrapper for Base.sort. Sorts variable (factor) lists in a meaningful way (by timestamp, label, etc.), for example `[:april; :x1_3; :x1_6]`. Defaults to sorting by timestamp for variables and factors, and to `natural_lt` for Symbols. See Base.sort for more detail.

Notes

• Not foolproof, but does better than the native sort.

Example

```julia
sortDFG(ls(dfg))
sortDFG(ls(dfg), by=getLabel, lt=natural_lt)
```

Related

ls, lsf

### List Factors:

```julia
unsorted = lsf(fg)
unsorted = ls(fg, Pose2Point2BearingRange)
```

or using the tags (works for variables too):

```julia
lsf(fg, tags=[:APRILTAGS;])
```

`DistributedFactorGraphs.lsf` (Function)

List the DFGFactors in the DFG. Optionally specify a label regular expression to retrieve a subset of the factors.

Notes

• Returns Vector{Symbol}

There are a variety of functions to query the factor graph, please refer to Function Reference for details and note that many functions still need to be added to this documentation.

### Extracting a Subgraph

Sometimes it is useful to make a deepcopy of a segment of the factor graph for some purpose:

```julia
sfg = buildSubgraph(fg, [:x1;:x2;:l7], 1)
```
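The extracted subgraph is itself a factor graph object, so the query functions from the previous section apply directly; a minimal sketch:

```julia
# Extract a depth-1 neighborhood around selected variables, as above:
sfg = buildSubgraph(fg, [:x1; :x2; :l7], 1)

# The result is a regular DFG object, so the usual queries work:
ls(sfg)   # variables copied into the subgraph
lsf(sfg)  # factors copied into the subgraph
```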

# Solving Graphs

When you have built the graph, you can call the solver to perform inference with the following:

```julia
# Perform inference
tree, smt, hist = solveTree!(fg)
```

The returned Bayes (junction) tree object is described in more detail on a dedicated documentation page, while the `smt` and `hist` return values relate mostly to development and debugging and can be ignored during general use. Should an error occur during the solve, the exception information is easily accessible in the `smt` object (as well as in the file logs, which default to /tmp/caesar/).

`IncrementalInference.solveTree!` (Function)

```julia
solveTree!(dfgl)
solveTree!(dfgl, oldtree; timeout, storeOld, verbose, verbosefid, delaycliqs, recordcliqs, limititercliqs, injectDelayBefore, skipcliqids, eliminationOrder, variableOrder, eliminationConstraints, variableConstraints, smtasks, dotreedraw, runtaskmonitor, algorithm, multithread)
```

Perform inference over the Bayes tree according to opt::SolverParams.

Notes

• Variety of options, including fixed-lag solving – see getSolverParams(fg) for details.
• See online Documentation for more details: https://juliarobotics.org/Caesar.jl/latest/
• Latest result always stored in solvekey=:default.
• Experimental storeOld::Bool=true will duplicate the current result as supersolve :default_k.
• Based on solvable==1 assumption.
• limititercliqs allows user to limit the number of iterations a specific CSM does.
• keywords verbose and verbosefid::IOStream can be used together to send output to a file or to the default stdout.
• keyword recordcliqs=[:x0; :x7...] identifies by frontals which cliques to record CSM steps.

Example

```julia
# pass in old tree to enable compute recycling -- see online Documentation for more details
tree, smt, hist = solveTree!(fg [,tree])
```

## Using Incremental Updates (Clique Recycling I)

One of the major features of the MM-iSAMv2 algorithm (implemented by IncrementalInference.jl) is reducing computational load by recycling and marginalizing different (usually older) parts of the factor graph. In order to utilize the benefits of recycling, the previous Bayes (junction) tree should also be provided as input (see fixed-lag examples for more details):

```julia
tree, smt, hist = solveTree!(fg, tree)
```

## Using Clique out-marginalization (Clique Recycling II)

When building systems with limited computational resources, out-marginalization of cliques on the Bayes tree can be used. This approach limits the number of variables that are inferred on each solution of the graph. The method complements the incremental recycling above, and the two can work in tandem. There is a default setting for a FIFO out-marginalization strategy (with some additional tricks):

```julia
defaultFixedLagOnTree!(fg, 50, limitfixeddown=true)
```

This call will keep the latest 50 variables fluid for inference during Bayes tree inference. The keyword limitfixeddown=true in this case will also prevent downward message passing on the Bayes tree from propagating into the out-marginalized branches on the tree. A later page in this documentation will discuss how the inference algorithm and Bayes tree aspects are put together.
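A sketch of how this fits together with the incremental recycling above; the `isfixedlag` and `qfl` field names are assumptions about IncrementalInference's SolverParams layout, so check `getSolverParams(fg)` in your own session:

```julia
# Sketch: enable fixed-lag out-marginalization through the solver parameters.
opts = getSolverParams(fg)
opts.isfixedlag = true   # assumed field: turn on fixed-lag operation
opts.qfl = 50            # assumed field: keep the latest 50 variables fluid

# Then solve as usual, passing the previous tree to also recycle cliques:
tree, smt, hist = solveTree!(fg, tree)
```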

# Extracting Belief Results (and PPE)

Once you have solved the graph, you can review the full marginal with:

```julia
X0 = getBelief(fg, :x0)
# Evaluate the marginal density function just for fun at [0.0, 0, 0].
X0(zeros(3,1))
```

This object is currently a kernel density estimate, which contains kernels at specific points on the associated manifold. These kernel locations can be retrieved with:

```julia
X0pts = getPoints(X0)
```

## Parametric Point Estimates (PPE)

Since Caesar.jl is built around estimating each variable's state as a full marginal posterior belief, it is often useful to extract the equivalent parametric point estimate from that belief. Many of these computations are already done by the inference library and are available via the various getPPE methods, e.g.:

```julia
getPPE(fg, :l3)
getPPESuggested(fg, :l5)
```

There are values for mean, max, or hybrid combinations.
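A minimal sketch of reading those values off the returned estimate, assuming the MeanMaxPPE layout mentioned in the docstrings below (field names are an assumption, so inspect the returned object in your own session):

```julia
# Sketch: a MeanMaxPPE carries several parametric summaries of one belief.
ppe = getPPE(fg, :x0)
ppe.suggested   # assumed field: the solver's recommended point estimate
ppe.mean        # assumed field: mean of the marginal belief
ppe.max         # assumed field: mode of the marginal belief
```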

`DistributedFactorGraphs.getPPE` (Function)

```julia
getPPE(vari)
getPPE(vari, solveKey)
```

Get the parametric point estimate (PPE) for a variable in the factor graph.

Notes

• Defaults on keywords solveKey and method

Related

getMeanPPE, getMaxPPE, getKDEMean, getKDEFit, getPPEs, getVariablePPEs

```julia
getPPE(dfg, variablekey)
getPPE(dfg, variablekey, ppekey)
```

Get the parametric point estimate (PPE) for a variable in the factor graph for a given solve key.

Notes

• Defaults on keywords solveKey and method

Related

getMeanPPE, getMaxPPE, getKDEMean, getKDEFit, getPPEs, getVariablePPEs

`IncrementalInference.calcPPE` (Function)

```julia
calcPPE(var, varType; method, solveKey)
```

Get the parametric point estimates, based on full marginal belief estimates, of a variable in the distributed factor graph.

DevNotes

• TODO update for manifold subgroups.
• TODO standardize after AMP3D

```julia
calcPPE(dfg::AbstractDFG, label::Symbol; solveKey, method) -> MeanMaxPPE
```

Calculate new Parametric Point Estimates for a given variable.

Notes

• Different methods are possible, currently MeanMaxPPE <: AbstractPointParametricEst.

Aliases

• calcVariablePPE

Related

setPPE!

## Getting Many Marginal Samples

It is also possible to sample the above belief objects for more samples:

```julia
pts = rand(X0, 200)
```

## Building On-Manifold KDEs

These kernel density belief objects can be constructed from points as follows:

```julia
X0_ = manikde!(pts, Pose2)
```

## Logging Output (Unique Folder)

Each new factor graph is designated a unique folder in /tmp/caesar. This is usually used for debugging or large-scale test analysis. Sometimes it may be useful for the user to also use this temporary location. The location is stored in the SolverParams:

```julia
getSolverParams(fg).logpath
```
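Since the folder is unique per graph, it is a convenient place to drop your own artifacts next to the solver's logs; a sketch, where the `"userdata.txt"` file name is purely illustrative:

```julia
# Sketch: reuse the solver's unique log folder for user outputs.
logdir = getSolverParams(fg).logpath
open(joinpath(logdir, "userdata.txt"), "w") do io
    # Record something simple about the graph alongside the solver logs.
    println(io, "number of variables: ", length(ls(fg)))
end
```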

The functions of interest are:

`IncrementalInference.getLogPath` (Function)

```julia
getLogPath(opt)
```

Get the folder location where debug and solver information is recorded for a particular factor graph.

Note

A useful tip for large-scale processing is to reduce the number of write operations hitting a solid-state drive at the default location /tmp/caesar, by simply adding a symbolic link to a USB drive or SD card, perhaps similar to:

```shell
cd /tmp
mkdir -p /media/MYFLASHDRIVE/caesar
ln -s /media/MYFLASHDRIVE/caesar caesar
```

## Other Useful Functions

`DistributedFactorGraphs.getManifolds` (Function)

Interface function to return the variableType manifolds of an InferenceVariable; extend this function for all types T <: InferenceVariable.