Pose Fusion with Chain Pose Graphs for Automated Driving




1 Resources


2 Basic Information

2.1 Authors

2.2 Conference

2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)

2.3 Abstract

2.4 Keywords


3 Introduction

3.1 Problem & Solution

A single localization system is not sufficient on its own; combining orthogonal localization systems is more robust.

3.2 Objective

This paper presents an approach to multi-sensor data fusion that decouples localization from the fusion task, which enables the incorporation of third-party localization modules for which no source code is available.

3.3 Formulation

A coarse localization (red triangles), a precise but only temporarily available localization (blue triangles), and odometry as a dead reckoning trajectory (blue) are used to estimate the true trajectory (red) of a vehicle. The estimated poses are shown as black triangles: the goal is to approximate the unknown red line as closely as possible with the black triangles.

image

3.4 Contributions


4 Related Work

4.1 Multi-sensor data fusion for navigation systems

4.2 Methodical origin


5 Method: Pose Graph Fusion

5.1 Nonlinear least squares problem

This paper builds on the state-of-the-art graph optimization framework g2o [3].

The key idea is that given the state vector \(x=(x_1^T, ..., x_m^T)^T\) and a set of measurements, where \(z_{ij}\) is the mean and \(\Omega_{ij}\) is the information matrix [4] of a single measurement relating \(x_i\) to \(x_j\), least squares estimation seeks the state

\[x^*=\arg\min_x{\sum_{i,j}e_{ij}^T\Omega_{ij}e_{ij}} \label{eq:state}\]

that best explains all measurements given the \(\ell_2\) norm. The vector error function \(e_{ij}=e(x_i,x_j,z_{ij})\) measures how well the constraint from the measurement \(z_{ij}\) is satisfied. Solving \((\ref{eq:state})\) requires iteratively solving a linear system with the system matrix [5] \(H\) and the right-hand side vector \(b\) such that \[H=\sum_{i,j}{J_{ij}(x)^T\Omega_{ij}J_{ij}(x)}\] \[b^T=\sum_{i,j}e_{ij}^T\Omega_{ij}J_{ij}(x)\] where \(J_{ij}(x)\) refers to the Jacobian [6] of the error function computed in state \(x\).
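As a concrete toy illustration of this buildup (not the paper's implementation), the sketch below runs one Gauss-Newton step on a 1-D pose chain with relative measurements. The error model \(e_{ij}=(x_j-x_i)-z_{ij}\), the prior on the first pose (to fix the gauge), and all numeric values are assumptions for the example.

```python
import numpy as np

def gauss_newton_step(x, measurements, prior):
    """One Gauss-Newton step: build H and b from all constraints, solve H dx = -b."""
    n = len(x)
    H = np.zeros((n, n))
    b = np.zeros(n)
    # Prior on x_0 fixes the gauge: e = x_0 - prior, J = 1, Omega = 1.
    H[0, 0] += 1.0
    b[0] += x[0] - prior
    for i, j, z, omega in measurements:
        e = (x[j] - x[i]) - z             # residual of this constraint
        J = np.zeros(n)
        J[i], J[j] = -1.0, 1.0            # Jacobian w.r.t. (x_i, x_j)
        H += omega * np.outer(J, J)       # H = sum J^T Omega J
        b += omega * e * J                # b = sum J^T Omega e
    return x + np.linalg.solve(H, -b)     # solve the linear system H dx = -b

x = np.array([0.0, 0.0, 0.0])
measurements = [(0, 1, 1.0, 2.0), (1, 2, 1.0, 2.0)]  # (i, j, z_ij, Omega_ij)
x = gauss_newton_step(x, measurements, prior=0.0)
print(x)  # -> [0. 1. 2.]
```

Because the error model is linear in \(x\), a single step already reaches the minimum here; a real pose graph with rotations needs several iterations.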

5.2 Sliding window chain pose graph fusion

5.2.1 About the online state estimation system

5.2.2 About the graph structure

image

5.2.3 About the algorithm working frequency

5.2.4 About the system matrix \(H\)

5.3 Time behavior

5.4 Marginalization in the form of a prior node

5.4.1 Effect of the Schur complement on chain pose graphs

image

\[H^{small} \Delta x^{small} =\left[\begin{array}{cc} H_{bb}^{small} & H_{bc}^{small} \\ H_{cb}^{small} & H_{cc}^{small} \\ \end{array}\right] \Delta x^{small} = - \left[\begin{array}{c} b_{b}^{small} \\ b_{c}^{small} \\ \end{array}\right]\]

\[\begin{array}{rcl} H^{full} \Delta x^{full} & = & \left[\begin{array}{ccc} \bar{H}_{aa} + \hat{H}_{aa} & \hat{H}_{ab} & 0 \\ \hat{H}_{ba} & H_{bb}^{small} + \hat{H}_{bb} & H_{bc}^{small} \\ 0 & H_{cb}^{small} & H_{cc}^{small} \\ \end{array}\right] \Delta x^{full} \\ & = & \left(\left[\begin{array}{ccc} 0 & 0 & 0 \\ 0 & H_{bb}^{small} & H_{bc}^{small} \\ 0 & H_{cb}^{small} & H_{cc}^{small} \\ \end{array}\right]+\left[\begin{array}{ccc} \bar{H}_{aa} + \hat{H}_{aa} & \hat{H}_{ab} & 0 \\ \hat{H}_{ba} & \hat{H}_{bb} & 0 \\ 0 & 0 & 0 \\ \end{array}\right]\right) \Delta x^{full} \\ & = & -\left[\begin{array}{c} 0 \\ b_b^{small} \\ b_c^{small} \\ \end{array}\right] - \left[\begin{array}{c} \bar{b}_a+\hat{b}_a \\ \hat{b}_b \\ 0 \\ \end{array}\right] = -\left[\begin{array}{c} \bar{b}_a + \hat{b}_a \\ b_b^{small} + \hat{b}_b \\ b_c^{small} \\ \end{array}\right]\\ \end{array}\]

Use the Schur complement [8] of \(H^{full}\) and \(b^{full}\) to marginalize \(x_a\) [9]:

\[\begin{array}{rcl} \left[\begin{array}{cc} \bar{H}_{aa} + \hat{H}_{aa} & \hat{H}_{ab} \\ \hat{H}_{ba} & \hat{H}_{bb} \\ \end{array}\right] & \Rightarrow & H^{schur}=\hat{H}_{bb}-\hat{H}_{ba}(\bar{H}_{aa}+\hat{H}_{aa})^{-1}\hat{H}_{ab} \\ b^{schur}=H^{schur}\Delta x_b & = & \hat{b}_b - \hat{H}_{ba}(\bar{H}_{aa}+\hat{H}_{aa})^{-1}(\bar{b}_a+\hat{b}_a) \\ \end{array}\]

The marginalized system matrix \(H^{marg}\) and coefficient vector \(b^{marg}\)

\[\begin{array}{rcl} H^{marg} & = & \left[\begin{array}{cc} H_{bb}^{small} + H^{schur} & H_{bc}^{small} \\ H_{cb}^{small} & H_{cc}^{small} \\ \end{array}\right] \\ b^{marg} & = & \left[\begin{array}{c} b_b^{small} + b^{schur} \\ b_c^{small} \\ \end{array}\right] \end{array}\]

After the marginalization, the structure of the system matrix is still block-tridiagonal due to the particular design of the chain pose graph, meaning that the sparsity pattern of \(H\) is retained after marginalization without fill-in.
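This can be checked numerically. The sketch below (an assumed toy setup, not the paper's code) builds a small SPD block-tridiagonal system for a chain a-b-c, marginalizes \(x_a\) via the Schur complement, and confirms that the reduced system yields the same solution for \(x_b, x_c\) while the (b, c) coupling block is untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
B = np.eye(9)                             # unit lower block-bidiagonal factor
B[3:6, 0:3] = rng.normal(size=(3, 3))     # coupling a--b
B[6:9, 3:6] = rng.normal(size=(3, 3))     # coupling b--c
H = B @ B.T                               # SPD, block-tridiagonal (H_ac = 0)
b = rng.normal(size=9)

dx_full = np.linalg.solve(H, -b)          # solve the full system

# Eliminate x_a (indices 0..2) with the Schur complement.
Haa, Har = H[0:3, 0:3], H[0:3, 3:9]
Hra, Hrr = H[3:9, 0:3], H[3:9, 3:9]
H_marg = Hrr - Hra @ np.linalg.solve(Haa, Har)
b_marg = b[3:9] - Hra @ np.linalg.solve(Haa, b[0:3])

dx_marg = np.linalg.solve(H_marg, -b_marg)
print(np.allclose(dx_marg, dx_full[3:9]))            # -> True

# No fill-in outside the b-block: since H_ac = 0 in the chain, the (b, c)
# coupling block is unchanged by the marginalization.
print(np.allclose(H_marg[0:3, 3:6], H[3:6, 6:9]))    # -> True
```

The second check is the key chain-pose-graph property: because \(x_a\) touches only \(x_b\), the Schur correction affects only the (b, b) block, so the sparsity pattern survives.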

5.4.2 Derivation of the prior node

An alternative marginalization method replaces \(x_a\) and its connected observed nodes and edges with a prior node \(\bar{x}_p\), which acts as a measurement. The result is the same, but the marginalized part of the graph can be visualized as a single observed node.

\[\begin{array}{rcl} H^{prior} & = & \left[\begin{array}{cc} H_{bb}^{small} + \bar{H}^{prior} & H_{bc}^{small} \\ H_{cb}^{small} & H_{cc}^{small} \\ \end{array}\right] \\ b^{prior} & = & \left[\begin{array}{c} b_b^{small} + \bar{b}^{prior} \\ b_c^{small} \\ \end{array}\right] \end{array}\]

The prior node \(\bar{x}_p\) can be defined like the mean estimate of an observed node attached to \(x_b\). The error function \(e_{pb}\) and consequently its Jacobian \(J_{pb}(x)\) are then defined as

\[\begin{array}{rcl} e_{pb} & = & \left[\begin{array}{cc} R & 0 \\ 0 & 1 \\ \end{array}\right] (x_b-\bar{x}_p) \\ J_{pb}(x) & = & \left[\begin{array}{cc} R & 0 \\ 0 & 1 \\ \end{array}\right] \end{array}\]

Therefore

\[\begin{array}{rcl} \bar{H}_p & = & J_{pb}(x)^T \bar{\Omega}_p J_{pb}(x) = H^{schur}\\ \Rightarrow \bar{\Omega}_p & = & (J_{pb}(x)^T)^{-1} H^{schur} (J_{pb}(x))^{-1} \\ \bar{b}_p & = & J_{pb}(x)^T \bar{\Omega}_p e_{pb} = J_{pb}(x)^T \bar{\Omega}_p J_{pb}(x) (x_b-\bar{x}_p) = \bar{H}_p(x_b-\bar{x}_p) \\ \Rightarrow \bar{x}_p & = & x_b-(H^{schur})^{-1} b^{schur} \\ \end{array}\]
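A quick numerical check of this construction, assuming \(J_{pb}=I\) for simplicity (i.e. the rotation \(R\) in the error function is the identity) and stand-in values for \(H^{schur}\), \(b^{schur}\), and \(x_b\): the synthetic measurement \((\bar{x}_p, \bar{\Omega}_p)\) reproduces the Schur-complement terms exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
H_schur = A @ A.T + np.eye(3)                   # stand-in Schur complement (SPD)
b_schur = rng.normal(size=3)                    # stand-in coefficient vector
x_b = rng.normal(size=3)                        # current estimate of node x_b

Omega_p = H_schur                               # Omega_p = J^{-T} H_schur J^{-1} = H_schur for J = I
x_p = x_b - np.linalg.solve(H_schur, b_schur)   # mean of the prior node

e_pb = x_b - x_p                                # prior edge error evaluated at x_b
print(np.allclose(Omega_p @ e_pb, b_schur))     # b_p = J^T Omega_p e_pb -> True
```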

5.5 Assessing the uncertainty of the fused estimate

The optimization assigns to the hidden nodes the global poses which best satisfy all constraints. Solving the nonlinear least squares problem of the pose fusion with a sparse solver typically involves computing the Cholesky factorization [7] \(H=R^TR\), where \(R\) is an upper triangular matrix with entries \(r_{ij}\). Following Kaess and Dellaert [10], the uncertainty matrix of the hidden nodes \(H^{-1}\) with entries \(H_{ij}^{-1}\) is obtained by the recursive formula:

\[\begin{array}{rcl} H_{ii}^{-1} & = & \frac{1}{r_{ii}}\left[\frac{1}{r_{ii}}-\sum_{\begin{array}{c} k=i+1 \\ r_{ik} \neq 0 \\ \end{array}}^{n}{r_{ik}H_{ki}^{-1}}\right] \\ H_{ij}^{-1} & = & \frac{1}{r_{ii}}\left[-\sum_{\begin{array}{c} k=i+1 \\ r_{ik} \neq 0 \\ \end{array}}^{j}{r_{ik}H_{kj}^{-1}}-\sum_{\begin{array}{c} k=j+1 \\ r_{ik} \neq 0 \\ \end{array}}^{n}{r_{ik}H_{jk}^{-1}}\right] \end{array}\]

The formula yields an \(\mathcal{O}(n)\) time complexity with \(n\) being the number of state variables. It becomes constant time for sliding window pose graphs as the number of state variables is upper bounded by a constant value.
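The recursion can be sketched as a dense implementation (the sparsity-exploiting skips of the \(r_{ik}=0\) terms are omitted for clarity) and validated against a direct inverse; the test matrix here is an arbitrary SPD stand-in.

```python
import numpy as np

def recover_cov(R):
    """Recover H^{-1} from the upper-triangular Cholesky factor R (H = R^T R).

    Dense version of the recursive formula; a sparse solver would skip the
    r_ik = 0 terms and recover only the entries it needs.
    """
    n = R.shape[0]
    S = np.zeros((n, n))                  # S will hold H^{-1} (symmetric)
    for i in range(n - 1, -1, -1):        # sweep rows bottom-up
        for j in range(n - 1, i, -1):     # off-diagonal entries of row i
            s = -sum(R[i, k] * S[k, j] for k in range(i + 1, j + 1))
            s -= sum(R[i, k] * S[j, k] for k in range(j + 1, n))
            S[i, j] = S[j, i] = s / R[i, i]
        d = 1.0 / R[i, i] - sum(R[i, k] * S[k, i] for k in range(i + 1, n))
        S[i, i] = d / R[i, i]
    return S

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
H = A @ A.T + 5 * np.eye(5)               # SPD test matrix
R = np.linalg.cholesky(H).T               # upper triangular, so H = R^T R
S = recover_cov(R)
print(np.allclose(S, np.linalg.inv(H)))   # -> True
```

Note that `np.linalg.cholesky` returns the lower-triangular factor, hence the transpose to match the \(H=R^TR\) convention used above.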

5.6 Noise correlations between input sources

6 Evaluation

Baseline: the online ML (maximum likelihood) estimate, scaling from a filtering to a batch least squares solution.

6.1 Experiment on a real prototype vehicle

6.1.1 Settings

6.1.2 Pose estimate quality

Baseline: the batch solution.
The noise is not independently distributed with zero mean, and the uncertainty models are unknown.

image

6.1.3 Latency and availability

Latency: \(\leq 10ms\).
Availability:

6.1.4 Runtime performance

a single core of a laptop with an Intel i7-4800QM processor

image image

6.2 Simulated input data

6.2.1 Number of hidden nodes vs. accuracy

Larger sliding window size \(\Rightarrow\) more accurate result

image

6.2.2 Number of pose sources vs. accuracy

More pose sources \(\Rightarrow\) more accurate result

image

7 Conclusion


8 Note


  1. M. Kaess, H. Johannsson, R. Roberts, V. Ila, J. Leonard, and F. Dellaert, ``iSAM2: Incremental Smoothing and Mapping using the Bayes tree," Int. Journal of Robotics Research, pp. 216–235, 2012.

  2. G. Sibley, L. Matthies, and G. Sukhatme, ``Sliding Window Filter with Application to Planetary Landing," Journal of Field Robotics, vol. 27, no. 5, pp. 587–608, 2010.

  3. R. Kümmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, ``g2o: A General Framework for Graph Optimization," in Proc. IEEE Int. Conf. Robotics and Automation (ICRA), 2011, pp. 3607–3613.

  4. \(\Omega=\Sigma^{-1}\), \(b=\Omega \mu\)

  5. https://en.wikipedia.org/wiki/State-space_representation

  6. https://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant

  7. https://en.wikipedia.org/wiki/Cholesky_decomposition

  8. https://en.wikipedia.org/wiki/Schur_complement

  9. \(\hat{H}_{ab}\Delta x_b=(\bar{b}_a+\hat{b}_a)-(\bar{H}_{aa}+\hat{H}_{aa})\Delta x_a,\text{~where~}\Delta x_a=0\)

  10. M. Kaess and F. Dellaert, ``Covariance Recovery from a Square Root Information Matrix for Data Association," Robotics and Autonomous Systems, pp. 1198–1210, 2009.

  11. S. J. Julier and J. K. Uhlmann, ``A non-divergent estimation algorithm in the presence of unknown correlations," in Proc. Amer. Control Conf. (ACC), 1997, pp. 2369–2373.