Covariance Estimation

Introduction
One way to assess the quality of the solution returned by a non-linear least squares solver is to analyze the covariance of the solution.

Let us consider the non-linear regression problem

\[y = f(x) + N(0, I)\]

i.e., the observation \(y\) is a random non-linear function of the independent variable \(x\) with mean \(f(x)\) and identity covariance. Then the maximum likelihood estimate of \(x\) given observations \(y\) is the solution to the non-linear least squares problem:

\[x^* = \arg \min_x \|f(x)\|^2\]

And the covariance of \(x^*\) is given by

\[C(x^*) = \left(J'(x^*)J(x^*)\right)^{-1}\]

Here \(J(x^*)\) is the Jacobian of \(f\) at \(x^*\). The above formula assumes that \(J(x^*)\) has full column rank.

If \(J(x^*)\) is rank deficient, then the covariance matrix \(C(x^*)\) is also rank deficient and is given by the Moore-Penrose pseudo inverse:

\[C(x^*) = \left(J'(x^*)J(x^*)\right)^{\dagger}\]

Note that in the above, we assumed that the covariance matrix for \(y\) was identity. This is an important assumption. If this is not the case and we have

\[y = f(x) + N(0, S)\]

where \(S\) is a positive semi-definite matrix denoting the covariance of \(y\), then the maximum likelihood problem to be solved is

\[x^* = \arg \min_x f'(x) S^{-1} f(x)\]

and the corresponding covariance estimate of \(x^*\) is given by

\[C(x^*) = \left(J'(x^*) S^{-1} J(x^*)\right)^{-1}\]

So, if the observations being fitted have a covariance matrix that is not the identity, it is the user's responsibility to ensure that the corresponding cost functions are correctly scaled, e.g. in the above case the cost function for this problem should evaluate \(S^{-1/2} f(x)\) instead of just \(f(x)\), where \(S^{-1/2}\) is the inverse square root of the covariance matrix \(S\).
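As a concrete illustration of this scaling, here is a minimal sketch (not part of the Ceres API) of an auto-differentiable cost functor that whitens a hypothetical two dimensional measurement \(y\) with known covariance \(S\) by multiplying the raw residual with \(S^{-1/2}\); the model \(f(x) = (x_0, x_0 x_1)\), the struct name and the member names are all made up for illustration.

#include <Eigen/Dense>

struct WhitenedResidual {
  WhitenedResidual(const Eigen::Vector2d& y, const Eigen::Matrix2d& S)
      : y_(y),
        // S^{-1/2} via an eigendecomposition of the symmetric covariance S.
        sqrt_inv_S_(Eigen::SelfAdjointEigenSolver<Eigen::Matrix2d>(S)
                        .operatorInverseSqrt()) {}

  template <typename T>
  bool operator()(const T* x, T* residual) const {
    // Raw residual y - f(x) for the hypothetical model f(x) = (x0, x0 * x1).
    Eigen::Matrix<T, 2, 1> r;
    r(0) = T(y_(0)) - x[0];
    r(1) = T(y_(1)) - x[0] * x[1];
    // Whitened residual: evaluate S^{-1/2} (y - f(x)) instead of y - f(x).
    Eigen::Map<Eigen::Matrix<T, 2, 1>>(residual) = sqrt_inv_S_.cast<T>() * r;
    return true;
  }

  Eigen::Vector2d y_;
  Eigen::Matrix2d sqrt_inv_S_;
};

Such a functor can then be wrapped in a ceres::AutoDiffCostFunction<WhitenedResidual, 2, 2> and added to the problem as usual.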
Gauge Invariance

In structure from motion (3D reconstruction) problems, the reconstruction is ambiguous up to a similarity transform. This is known as a Gauge Ambiguity. Handling Gauges correctly requires the use of SVD or custom inversion algorithms. For small problems the user can use the dense algorithm. For more details see the work of Kanatani & Morris [KanataniMorris].
Covariance

Covariance allows the user to evaluate the covariance for a non-linear least squares problem and provides random access to its blocks. The computation assumes that the cost functions compute residuals such that their covariance is identity.

Since the computation of the covariance matrix requires computing the inverse of a potentially large matrix, this can involve a rather large amount of time and memory. However, it is usually the case that the user is only interested in a small part of the covariance matrix, quite often just the block diagonal. Covariance allows the user to specify the parts of the covariance matrix that she is interested in and then uses this information to only compute and store those parts of the covariance matrix.
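For instance, to request only the block diagonal of the covariance matrix for two hypothetical parameter blocks x and y (the full workflow is shown in the Example Usage section at the end of this chapter), one would list just the diagonal pairs:

// x and y point to parameter blocks that have already been added to the problem.
std::vector<std::pair<const double*, const double*>> covariance_blocks;
covariance_blocks.push_back(std::make_pair(x, x));
covariance_blocks.push_back(std::make_pair(y, y));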
Rank of the Jacobian

As we noted above, if the Jacobian is rank deficient, then the inverse of \(J'J\) is not defined and instead a pseudo inverse needs to be computed.

The rank deficiency in \(J\) can be structural – columns which are always known to be zero – or numerical, depending on the exact values in the Jacobian.

Structural rank deficiency occurs when the problem contains parameter blocks that are constant. This class correctly handles structural rank deficiency like that.

Numerical rank deficiency, where the rank of the matrix cannot be predicted by its sparsity structure and requires looking at its numerical values, is more complicated. Here again there are two cases.

1. The rank deficiency arises from overparameterization, e.g., a four dimensional quaternion used to parameterize \(SO(3)\), which is a three dimensional manifold. In cases like this, the user should use an appropriate LocalParameterization (see the sketch after this list). Not only will this lead to better numerical behaviour of the Solver, it will also expose the rank deficiency to the Covariance object so that it can handle it correctly.

2. More general numerical rank deficiency in the Jacobian requires the computation of the so called Singular Value Decomposition (SVD) of \(J'J\). We do not know how to do this for large sparse matrices efficiently. For small and moderate sized problems this is done using dense linear algebra.
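A minimal sketch of the first case, assuming a ceres::Problem named problem and a parameter block q holding a unit quaternion: attaching a QuaternionParameterization exposes the three dimensional tangent space to both the solver and the Covariance object.

double q[4] = {1.0, 0.0, 0.0, 0.0};  // unit quaternion, 4 ambient parameters
problem.AddParameterBlock(q, 4, new ceres::QuaternionParameterization);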
class Covariance::Options
int Covariance::Options::num_threads

Default: 1

Number of threads to be used for evaluating the Jacobian and estimation of covariance.
SparseLinearAlgebraLibraryType Covariance::Options::sparse_linear_algebra_library_type

Default: SUITE_SPARSE if Ceres Solver is built with support for SuiteSparse, and EIGEN_SPARSE otherwise. Note that EIGEN_SPARSE is always available.
CovarianceAlgorithmType Covariance::Options::algorithm_type

Default: SPARSE_QR

Ceres supports two different algorithms for covariance estimation, which represent different tradeoffs in speed, accuracy and reliability.

1. SPARSE_QR uses the sparse QR factorization algorithm to compute the decomposition

\[\begin{split}QR &= J\\
\left(J^\top J\right)^{-1} &= \left(R^\top R\right)^{-1}\end{split}\]

The speed of this algorithm depends on the sparse linear algebra library being used. Eigen's sparse QR factorization is a moderately fast algorithm suitable for small to medium sized matrices. For best performance we recommend using SuiteSparseQR, which is enabled by setting Covariance::Options::sparse_linear_algebra_library_type to SUITE_SPARSE.

SPARSE_QR cannot compute the covariance if the Jacobian is rank deficient.

2. DENSE_SVD uses Eigen's JacobiSVD to perform the computations. It computes the singular value decomposition

\[U S V^\top = J\]

and then uses it to compute the pseudo inverse of \(J'J\) as

\[(J'J)^{\dagger} = V S^{\dagger} V^\top\]

It is an accurate but slow method and should only be used for small to moderate sized problems. It can handle full-rank as well as rank deficient Jacobians.
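For example, a problem that is known (or suspected) to have a rank deficient Jacobian can be pointed at the dense estimator; a minimal sketch:

ceres::Covariance::Options options;
// DENSE_SVD is slower but tolerates rank deficient Jacobians.
options.algorithm_type = ceres::DENSE_SVD;
ceres::Covariance covariance(options);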
double Covariance::Options::min_reciprocal_condition_number

Default: \(10^{-14}\)

If the Jacobian matrix is near singular, then inverting \(J'J\) will result in unreliable results, e.g., if

\[\begin{split}J = \begin{bmatrix}
1.0 & 1.0 \\
1.0 & 1.0000001
\end{bmatrix}\end{split}\]

which is essentially a rank deficient matrix, we have

\[\begin{split}(J'J)^{-1} = \begin{bmatrix}
2.0471e+14 & -2.0471e+14 \\
-2.0471e+14 & 2.0471e+14
\end{bmatrix}\end{split}\]

This is not a useful result. Therefore, by default Covariance::Compute() will return false if a rank deficient Jacobian is encountered. How rank deficiency is detected depends on the algorithm being used.

1. DENSE_SVD

\[\frac{\sigma_{\text{min}}}{\sigma_{\text{max}}} < \sqrt{\text{min_reciprocal_condition_number}}\]

where \(\sigma_{\text{min}}\) and \(\sigma_{\text{max}}\) are the minimum and maximum singular values of \(J\) respectively.

2. SPARSE_QR

\[\operatorname{rank}(J) < \operatorname{num\_col}(J)\]

Here \(\operatorname{rank}(J)\) is the estimate of the rank of \(J\) returned by the sparse QR factorization algorithm. It is a fairly reliable indication of rank deficiency.
int Covariance::Options::null_space_rank

When using DENSE_SVD, the user has more control in dealing with singular and near singular covariance matrices.

As mentioned above, when the covariance matrix is near singular, instead of computing the inverse of \(J'J\), the Moore-Penrose pseudo inverse of \(J'J\) should be computed.

If \(J'J\) has the eigen decomposition \((\lambda_i, e_i)\), where \(\lambda_i\) is the \(i^\textrm{th}\) eigenvalue and \(e_i\) is the corresponding eigenvector, then the inverse of \(J'J\) is

\[(J'J)^{-1} = \sum_i \frac{1}{\lambda_i} e_i e_i'\]

and computing the pseudo inverse involves dropping terms from this sum that correspond to small eigenvalues.

How terms are dropped is controlled by min_reciprocal_condition_number and null_space_rank.

If null_space_rank is non-negative, then the smallest null_space_rank eigenvalue/eigenvector pairs are dropped irrespective of the magnitude of \(\lambda_i\). If the ratio of the smallest non-zero eigenvalue to the largest eigenvalue in the truncated matrix is still below min_reciprocal_condition_number, then Covariance::Compute() will fail and return false.

Setting null_space_rank = -1 drops all terms for which

\[\frac{\lambda_i}{\lambda_{\textrm{max}}} < \textrm{min_reciprocal_condition_number}\]

This option has no effect on SPARSE_QR.
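A minimal sketch of this configuration, which lets the eigenvalue truncation be driven entirely by min_reciprocal_condition_number:

ceres::Covariance::Options options;
options.algorithm_type = ceres::DENSE_SVD;
// Drop every term whose eigenvalue ratio falls below min_reciprocal_condition_number.
options.null_space_rank = -1;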
bool Covariance::Options::apply_loss_function

Default: true

Even though the residual blocks in the problem may contain loss functions, setting apply_loss_function to false will turn off the application of the loss function to the output of the cost function and in turn its effect on the covariance.
class Covariance

Covariance::Options, as the name implies, is used to control the covariance estimation algorithm. Covariance estimation is a complicated and numerically sensitive procedure. Please read the entire documentation for Covariance::Options before using Covariance.
bool Covariance::Compute(const vector<pair<const double*, const double*>>& covariance_blocks, Problem* problem)

Compute a part of the covariance matrix.

The vector covariance_blocks indexes into the covariance matrix block-wise using pairs of parameter blocks. This allows the covariance estimation algorithm to only compute and store these blocks.

Since the covariance matrix is symmetric, if the user passes <block1, block2>, then GetCovarianceBlock can be called with block1, block2 as well as block2, block1.

covariance_blocks cannot contain duplicates. Bad things will happen if they do.

Note that the list of covariance_blocks is only used to determine what parts of the covariance matrix are computed. The full Jacobian is used to do the computation, i.e. it does not have an impact on what part of the Jacobian is used for computation.

The return value indicates the success or failure of the covariance computation. Please see the documentation for Covariance::Options for more on the conditions under which this function returns false.
bool Covariance::GetCovarianceBlock(const double* parameter_block1, const double* parameter_block2, double* covariance_block) const

Return the block of the cross-covariance matrix corresponding to parameter_block1 and parameter_block2.

Compute must be called before the first call to GetCovarianceBlock, and the pair <parameter_block1, parameter_block2> OR the pair <parameter_block2, parameter_block1> must have been present in the vector covariance_blocks when Compute was called. Otherwise GetCovarianceBlock will return false.

covariance_block must point to a memory location that can store a parameter_block1_size x parameter_block2_size matrix. The returned covariance will be a row-major matrix.
bool Covariance::GetCovarianceBlockInTangentSpace(const double* parameter_block1, const double* parameter_block2, double* covariance_block) const

Return the block of the cross-covariance matrix corresponding to parameter_block1 and parameter_block2. Returns the cross-covariance in the tangent space if a local parameterization is associated with either parameter block; else returns the cross-covariance in the ambient space.

Compute must be called before the first call to GetCovarianceBlockInTangentSpace, and the pair <parameter_block1, parameter_block2> OR the pair <parameter_block2, parameter_block1> must have been present in the vector covariance_blocks when Compute was called. Otherwise GetCovarianceBlockInTangentSpace will return false.

covariance_block must point to a memory location that can store a parameter_block1_local_size x parameter_block2_local_size matrix. The returned covariance will be a row-major matrix.
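For example, for a quaternion block q with a QuaternionParameterization attached (4 ambient parameters, 3 tangent dimensions), and assuming covariance.Compute was called with the pair (q, q), the tangent space block is 3x3; a sketch:

double covariance_qq[3 * 3];
covariance.GetCovarianceBlockInTangentSpace(q, q, covariance_qq);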
Example Usage

double x[3];
double y[2];

Problem problem;
problem.AddParameterBlock(x, 3);
problem.AddParameterBlock(y, 2);
<Build Problem>
<Solve Problem>

Covariance::Options options;
Covariance covariance(options);

vector<pair<const double*, const double*> > covariance_blocks;
covariance_blocks.push_back(make_pair(x, x));
covariance_blocks.push_back(make_pair(y, y));
covariance_blocks.push_back(make_pair(x, y));

CHECK(covariance.Compute(covariance_blocks, &problem));

double covariance_xx[3 * 3];
double covariance_yy[2 * 2];
double covariance_xy[3 * 2];
covariance.GetCovarianceBlock(x, x, covariance_xx);
covariance.GetCovarianceBlock(y, y, covariance_yy);
covariance.GetCovarianceBlock(x, y, covariance_xy);
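The returned blocks are plain row-major arrays. If Eigen is available (#include <Eigen/Core>), an optional and convenient way to view them is via Eigen::Map; a sketch using the arrays from the example above:

Eigen::Map<Eigen::Matrix<double, 3, 3, Eigen::RowMajor>> cov_xx(covariance_xx);
Eigen::Map<Eigen::Matrix<double, 3, 2, Eigen::RowMajor>> cov_xy(covariance_xy);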