.. highlight:: c++

.. default-domain:: cpp

.. _chapter-gradient_problem_solver:

==================================
General Unconstrained Minimization
==================================

Modeling
========

:class:`FirstOrderFunction`
---------------------------

.. class:: FirstOrderFunction

  Instances of :class:`FirstOrderFunction` implement the evaluation of
  a function and its gradient.

  .. code-block:: c++

   class FirstOrderFunction {
     public:
      virtual ~FirstOrderFunction() {}
      virtual bool Evaluate(const double* const parameters,
                            double* cost,
                            double* gradient) const = 0;
      virtual int NumParameters() const = 0;
   };

.. function:: bool FirstOrderFunction::Evaluate(const double* const parameters, double* cost, double* gradient) const

   Evaluate the cost/value of the function. If ``gradient`` is not
   ``NULL`` then evaluate the gradient too. If the evaluation is
   successful return ``true``, else return ``false``.

   ``cost`` is guaranteed to never be ``NULL``; ``gradient`` can be
   ``NULL``.

.. function:: int FirstOrderFunction::NumParameters() const

   Number of parameters in the domain of the function.

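For concreteness, here is a minimal implementation of this interface
for the two dimensional Rosenbrock function. The snippet is a
self-contained sketch: it restates the abstract interface locally so
that it compiles on its own; in real code you would derive from
``ceres::FirstOrderFunction`` directly, and the class name
``Rosenbrock`` is purely illustrative.

.. code-block:: c++

  #include <cstdio>

  // Abstract interface, restated locally so the sketch is
  // self-contained. In real code, derive from ceres::FirstOrderFunction.
  class FirstOrderFunction {
   public:
    virtual ~FirstOrderFunction() {}
    virtual bool Evaluate(const double* const parameters,
                          double* cost,
                          double* gradient) const = 0;
    virtual int NumParameters() const = 0;
  };

  // f(x, y) = (1 - x)^2 + 100 (y - x^2)^2
  class Rosenbrock : public FirstOrderFunction {
   public:
    virtual ~Rosenbrock() {}
    virtual bool Evaluate(const double* const parameters,
                          double* cost,
                          double* gradient) const {
      const double x = parameters[0];
      const double y = parameters[1];
      *cost = (1.0 - x) * (1.0 - x) + 100.0 * (y - x * x) * (y - x * x);
      if (gradient != NULL) {
        gradient[0] = -2.0 * (1.0 - x) - 400.0 * x * (y - x * x);
        gradient[1] = 200.0 * (y - x * x);
      }
      return true;
    }
    virtual int NumParameters() const { return 2; }
  };

  int main() {
    Rosenbrock f;
    double parameters[2] = {0.0, 0.0};
    double cost = 0.0;
    double gradient[2] = {0.0, 0.0};
    f.Evaluate(parameters, &cost, gradient);
    std::printf("cost = %f gradient = (%f, %f)\n",
                cost, gradient[0], gradient[1]);
    return 0;
  }

Note that ``Evaluate`` must handle a ``NULL`` ``gradient``, since the
solver evaluates the cost alone during the line search.
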
:class:`GradientProblem`
------------------------

.. class:: GradientProblem

.. code-block:: c++

  class GradientProblem {
   public:
    explicit GradientProblem(FirstOrderFunction* function);
    GradientProblem(FirstOrderFunction* function,
                    LocalParameterization* parameterization);
    int NumParameters() const;
    int NumLocalParameters() const;
    bool Evaluate(const double* parameters, double* cost, double* gradient) const;
    bool Plus(const double* x, const double* delta, double* x_plus_delta) const;
  };

Instances of :class:`GradientProblem` represent general non-linear
optimization problems that must be solved using just the value of the
objective function and its gradient. Unlike the :class:`Problem`
class, which can only be used to model non-linear least squares
problems, instances of :class:`GradientProblem` are not restricted in
the form of the objective function.

Structurally :class:`GradientProblem` is a composition of a
:class:`FirstOrderFunction` and optionally a
:class:`LocalParameterization`.

The :class:`FirstOrderFunction` is responsible for evaluating the cost
and gradient of the objective function.

The :class:`LocalParameterization` is responsible for going back and
forth between the ambient space and the local tangent space. When a
:class:`LocalParameterization` is not provided, then the tangent space
is assumed to coincide with the ambient Euclidean space that the
gradient vector lives in.

The constructor takes ownership of the :class:`FirstOrderFunction` and
:class:`LocalParameterization` objects passed to it.

.. function:: void Solve(const GradientProblemSolver::Options& options, const GradientProblem& problem, double* parameters, GradientProblemSolver::Summary* summary)

   Solve the given :class:`GradientProblem` using the values in
   ``parameters`` as the initial guess of the solution.

Solving
=======

:class:`GradientProblemSolver::Options`
---------------------------------------

.. class:: GradientProblemSolver::Options

   :class:`GradientProblemSolver::Options` controls the overall
   behavior of the solver. We list the various settings and their
   default values below.

.. function:: bool GradientProblemSolver::Options::IsValid(string* error) const

   Validate the values in the options struct and return true on
   success. If there is a problem, the method returns false with
   ``error`` containing a textual description of the cause.

.. member:: LineSearchDirectionType GradientProblemSolver::Options::line_search_direction_type

   Default: ``LBFGS``

   Choices are ``STEEPEST_DESCENT``, ``NONLINEAR_CONJUGATE_GRADIENT``,
   ``BFGS`` and ``LBFGS``.

.. member:: LineSearchType GradientProblemSolver::Options::line_search_type

   Default: ``WOLFE``

   Choices are ``ARMIJO`` and ``WOLFE`` (strong Wolfe conditions).
   Note that in order for the assumptions underlying the ``BFGS`` and
   ``LBFGS`` line search direction algorithms to be guaranteed to be
   satisfied, the ``WOLFE`` line search should be used.

.. member:: NonlinearConjugateGradientType GradientProblemSolver::Options::nonlinear_conjugate_gradient_type

   Default: ``FLETCHER_REEVES``

   Choices are ``FLETCHER_REEVES``, ``POLAK_RIBIERE`` and
   ``HESTENES_STIEFEL``.

.. member:: int GradientProblemSolver::Options::max_lbfgs_rank

   Default: 20

   The L-BFGS hessian approximation is a low rank approximation to the
   inverse of the Hessian matrix. The rank of the approximation
   determines (linearly) the space and time complexity of using the
   approximation. The higher the rank, the better the quality of the
   approximation. The increase in quality is however bounded for a
   number of reasons.

     1. The method only uses secant information and not actual
        derivatives.

     2. The Hessian approximation is constrained to be positive
        definite.

   So increasing this rank to a large number will cost time and space
   complexity without the corresponding increase in solution
   quality. There are no hard and fast rules for choosing the maximum
   rank. The best choice usually requires some problem specific
   experimentation.

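To make the role of the stored correction pairs concrete, the
following self-contained sketch implements the classic L-BFGS
two-loop recursion ([NocedalWright]_, Algorithm 7.4) that such a
low rank approximation is built on: the implicit inverse Hessian is
applied to the current gradient using at most ``max_lbfgs_rank``
recent ``(s, y)`` pairs. All names are illustrative; this is not the
Ceres implementation.

.. code-block:: c++

  #include <cassert>
  #include <cstddef>
  #include <vector>

  // Two-loop recursion: apply the implicit inverse Hessian
  // approximation, built from the stored correction pairs
  // s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i, to the gradient g,
  // and negate the result to obtain a search direction.
  std::vector<double> LbfgsDirection(
      const std::vector<double>& g,
      const std::vector<std::vector<double> >& s,
      const std::vector<std::vector<double> >& y) {
    const size_t n = g.size();
    const size_t m = s.size();  // At most max_lbfgs_rank pairs.
    std::vector<double> q = g;
    std::vector<double> alpha(m, 0.0);
    std::vector<double> rho(m, 0.0);
    // First loop: newest pair to oldest.
    for (size_t i = m; i-- > 0;) {
      double sy = 0.0, sq = 0.0;
      for (size_t j = 0; j < n; ++j) {
        sy += s[i][j] * y[i][j];
        sq += s[i][j] * q[j];
      }
      rho[i] = 1.0 / sy;
      alpha[i] = rho[i] * sq;
      for (size_t j = 0; j < n; ++j) q[j] -= alpha[i] * y[i][j];
    }
    // Initial inverse Hessian approximation: the identity.
    std::vector<double> r = q;
    // Second loop: oldest pair to newest.
    for (size_t i = 0; i < m; ++i) {
      double yr = 0.0;
      for (size_t j = 0; j < n; ++j) yr += y[i][j] * r[j];
      const double beta = rho[i] * yr;
      for (size_t j = 0; j < n; ++j) r[j] += (alpha[i] - beta) * s[i][j];
    }
    for (size_t j = 0; j < n; ++j) r[j] = -r[j];
    return r;
  }

  int main() {
    // One correction pair from f(x) = 0.5 * (x1^2 + 10 x2^2),
    // moving from (1, 1) to (0.9, 0.5); gradient is (x1, 10 x2).
    std::vector<std::vector<double> > s(1, std::vector<double>(2));
    std::vector<std::vector<double> > y(1, std::vector<double>(2));
    s[0][0] = -0.1; s[0][1] = -0.5;
    y[0][0] = -0.1; y[0][1] = -5.0;
    std::vector<double> g(2);
    g[0] = 0.9; g[1] = 5.0;  // Gradient at (0.9, 0.5).
    std::vector<double> d = LbfgsDirection(g, s, y);
    assert(g[0] * d[0] + g[1] * d[1] < 0.0);  // d is a descent direction.
    return 0;
  }

Since each stored pair adds one vector pair of storage and one pass
per loop, both memory and per-iteration cost grow linearly in the
rank, as described above.
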
.. member:: bool GradientProblemSolver::Options::use_approximate_eigenvalue_bfgs_scaling

   Default: ``false``

   As part of the ``BFGS`` update step / ``LBFGS`` right-multiply
   step, the initial inverse Hessian approximation is taken to be the
   Identity.  However, [Oren]_ showed that using instead :math:`I *
   \gamma`, where :math:`\gamma` is a scalar chosen to approximate an
   eigenvalue of the true inverse Hessian can result in improved
   convergence in a wide variety of cases.  Setting
   ``use_approximate_eigenvalue_bfgs_scaling`` to true enables this
   scaling in ``BFGS`` (before first iteration) and ``LBFGS`` (at each
   iteration).

   Precisely, approximate eigenvalue scaling equates to

   .. math:: \gamma = \frac{y_k' s_k}{y_k' y_k}

   With:

   .. math:: y_k = \nabla f_{k+1} - \nabla f_k
   .. math:: s_k = x_{k+1} - x_k

   Where :math:`f()` is the line search objective and :math:`x` the
   vector of parameter values [NocedalWright]_.

   It is important to note that approximate eigenvalue scaling does
   **not** *always* improve convergence, and that it can in fact
   *significantly* degrade performance for certain classes of problem,
   which is why it is disabled by default.  In particular it can
   degrade performance when the sensitivity of the problem to different
   parameters varies significantly, as in this case a single scalar
   factor fails to capture this variation and detrimentally downscales
   parts of the Jacobian approximation which correspond to
   low-sensitivity parameters. It can also reduce the robustness of the
   solution to errors in the Jacobians.

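As a quick numeric illustration: for a quadratic objective with
Hessian :math:`2I`, the gradient change satisfies :math:`y_k = 2
s_k`, so the formula above recovers :math:`\gamma = 1/2`, the exact
eigenvalue of the inverse Hessian. A minimal sketch (the function
name is illustrative, not a Ceres API):

.. code-block:: c++

  #include <cassert>

  // gamma = (y_k' s_k) / (y_k' y_k), computed from the most recent
  // step s_k and gradient change y_k.
  double ApproximateEigenvalueScale(const double* y, const double* s,
                                    int n) {
    double ys = 0.0;
    double yy = 0.0;
    for (int i = 0; i < n; ++i) {
      ys += y[i] * s[i];
      yy += y[i] * y[i];
    }
    return ys / yy;
  }

  int main() {
    // For f(x) = x'x (Hessian 2I): y_k = 2 s_k, so gamma = 1/2.
    const double s[2] = {1.0, 1.0};
    const double y[2] = {2.0, 2.0};
    assert(ApproximateEigenvalueScale(y, s, 2) == 0.5);
    return 0;
  }
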
.. member:: LineSearchInterpolationType GradientProblemSolver::Options::line_search_interpolation_type

   Default: ``CUBIC``

   Degree of the polynomial used to approximate the objective
   function. Valid values are ``BISECTION``, ``QUADRATIC`` and
   ``CUBIC``.

.. member:: double GradientProblemSolver::Options::min_line_search_step_size

   The line search terminates if:

   .. math:: \|\Delta x_k\|_\infty < \text{min_line_search_step_size}

   where :math:`\|\cdot\|_\infty` refers to the max norm, and
   :math:`\Delta x_k` is the step change in the parameter values at
   the :math:`k`-th iteration.

.. member:: double GradientProblemSolver::Options::line_search_sufficient_function_decrease

   Default: ``1e-4``

   Solving the line search problem exactly is computationally
   prohibitive. Fortunately, line search based optimization algorithms
   can still guarantee convergence if instead of an exact solution,
   the line search algorithm returns a solution which decreases the
   value of the objective function sufficiently. More precisely, we
   are looking for a step size s.t.

   .. math:: f(\text{step_size}) \le f(0) + \text{sufficient_decrease} * [f'(0) * \text{step_size}]

   This condition is known as the Armijo condition.

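The backtracking that the Armijo condition drives can be sketched as
follows. This is a self-contained illustration, not the Ceres
implementation: the quadratic ``Phi`` and the constants are made up,
and the real solver contracts the step via polynomial interpolation
rather than plain halving.

.. code-block:: c++

  #include <cstdio>

  // Backtracking line search: shrink step_size until the Armijo
  // sufficient decrease condition holds. phi(t) is the objective
  // restricted to the search direction, phi(t) = f(x + t * d).
  double ArmijoBacktrack(double (*phi)(double), double phi0,
                         double dphi0, double sufficient_decrease,
                         double contraction, double step_size,
                         int max_iterations) {
    for (int i = 0; i < max_iterations; ++i) {
      if (phi(step_size) <=
          phi0 + sufficient_decrease * dphi0 * step_size) {
        return step_size;  // Armijo condition satisfied.
      }
      step_size *= contraction;
    }
    return 0.0;  // No acceptable step found.
  }

  // Illustrative objective along the search direction: phi(t) = (t - 1)^2,
  // with phi(0) = 1 and phi'(0) = -2.
  double Phi(double t) { return (t - 1.0) * (t - 1.0); }

  int main() {
    const double step =
        ArmijoBacktrack(Phi, Phi(0.0), -2.0, 1e-4, 0.5, 4.0, 20);
    std::printf("accepted step size = %f\n", step);
    return 0;
  }

Starting from a trial step of 4, the search rejects 4 and 2 and
accepts 1, the exact minimizer of ``Phi``.
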
.. member:: double GradientProblemSolver::Options::max_line_search_step_contraction

   Default: ``1e-3``

   In each iteration of the line search,

   .. math:: \text{new_step_size} \geq \text{max_line_search_step_contraction} * \text{step_size}

   Note that by definition, for contraction:

   .. math:: 0 < \text{max_step_contraction} < \text{min_step_contraction} < 1

.. member:: double GradientProblemSolver::Options::min_line_search_step_contraction

   Default: ``0.6``

   In each iteration of the line search,

   .. math:: \text{new_step_size} \leq \text{min_line_search_step_contraction} * \text{step_size}

   Note that by definition, for contraction:

   .. math:: 0 < \text{max_step_contraction} < \text{min_step_contraction} < 1

.. member:: int GradientProblemSolver::Options::max_num_line_search_step_size_iterations

   Default: ``20``

   Maximum number of trial step size iterations during each line
   search. If a step size satisfying the search conditions cannot be
   found within this number of trials, the line search will stop.

   As this is an 'artificial' constraint (one imposed by the user, not
   the underlying math), if ``WOLFE`` line search is being used, *and*
   points satisfying the Armijo sufficient (function) decrease
   condition have been found during the current search (in :math:`\leq`
   ``max_num_line_search_step_size_iterations``), then the step size
   with the lowest function value which satisfies the Armijo condition
   will be returned as the new valid step, even though it does *not*
   satisfy the strong Wolfe conditions.  This behaviour protects
   against early termination of the optimizer at a sub-optimal point.

.. member:: int GradientProblemSolver::Options::max_num_line_search_direction_restarts

   Default: ``5``

   Maximum number of restarts of the line search direction algorithm
   before terminating the optimization. Restarts of the line search
   direction algorithm occur when the current algorithm fails to
   produce a new descent direction. This typically indicates a
   numerical failure, or a breakdown in the validity of the
   approximations used.

.. member:: double GradientProblemSolver::Options::line_search_sufficient_curvature_decrease

   Default: ``0.9``

   The strong Wolfe conditions consist of the Armijo sufficient
   decrease condition, and an additional requirement that the
   step size be chosen s.t. the *magnitude* ('strong' Wolfe
   conditions) of the gradient along the search direction
   decreases sufficiently. Precisely, this second condition
   is that we seek a step size s.t.

   .. math:: \|f'(\text{step_size})\| \leq \text{sufficient_curvature_decrease} * \|f'(0)\|

   Where :math:`f()` is the line search objective and :math:`f'()` is the derivative
   of :math:`f` with respect to the step size: :math:`\frac{d f}{d~\text{step size}}`.

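Numerically, the two strong Wolfe tests can be checked together. The
sketch below does so for an illustrative 1-D objective
:math:`\phi(t) = (t - 2)^2`; the function names are made up for this
example.

.. code-block:: c++

  #include <cassert>
  #include <cmath>

  // Check the strong Wolfe conditions for a candidate step t, given
  // phi(t), phi'(t), phi(0) and phi'(0) along the search direction.
  bool SatisfiesStrongWolfe(double t, double phi_t, double dphi_t,
                            double phi0, double dphi0,
                            double sufficient_decrease,      // e.g. 1e-4
                            double sufficient_curvature) {   // e.g. 0.9
    const bool armijo = phi_t <= phi0 + sufficient_decrease * dphi0 * t;
    const bool curvature =
        std::fabs(dphi_t) <= sufficient_curvature * std::fabs(dphi0);
    return armijo && curvature;
  }

  // phi(t) = (t - 2)^2, so phi'(t) = 2 (t - 2).
  double Phi(double t) { return (t - 2.0) * (t - 2.0); }
  double DPhi(double t) { return 2.0 * (t - 2.0); }

  int main() {
    const double phi0 = Phi(0.0);    // 4
    const double dphi0 = DPhi(0.0);  // -4
    // The exact minimizer t = 2 satisfies both conditions.
    assert(SatisfiesStrongWolfe(2.0, Phi(2.0), DPhi(2.0),
                                phi0, dphi0, 1e-4, 0.9));
    // t = 0.1 decreases phi enough for Armijo, but the slope there,
    // |phi'(0.1)| = 3.8, still exceeds 0.9 * |phi'(0)| = 3.6.
    assert(!SatisfiesStrongWolfe(0.1, Phi(0.1), DPhi(0.1),
                                 phi0, dphi0, 1e-4, 0.9));
    return 0;
  }
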
.. member:: double GradientProblemSolver::Options::max_line_search_step_expansion

   Default: ``10.0``

   During the bracketing phase of a Wolfe line search, the step size
   is increased until either a point satisfying the Wolfe conditions
   is found, or an upper bound for a bracket containing a point
   satisfying the conditions is found.  Precisely, at each iteration
   of the expansion:

   .. math:: \text{new_step_size} \leq \text{max_step_expansion} * \text{step_size}

   By definition for expansion:

   .. math:: \text{max_step_expansion} > 1.0

.. member:: int GradientProblemSolver::Options::max_num_iterations

   Default: ``50``

   Maximum number of iterations for which the solver should run.

.. member:: double GradientProblemSolver::Options::max_solver_time_in_seconds

   Default: ``1e6``

   Maximum amount of time for which the solver should run.

.. member:: double GradientProblemSolver::Options::function_tolerance

   Default: ``1e-6``

   Solver terminates if

   .. math:: \frac{|\Delta \text{cost}|}{\text{cost}} \leq \text{function_tolerance}

   where, :math:`\Delta \text{cost}` is the change in objective
   function value (up or down) in the current iteration of the line search.

.. member:: double GradientProblemSolver::Options::gradient_tolerance

   Default: ``1e-10``

   Solver terminates if

   .. math:: \|x - \Pi \boxplus(x, -g(x))\|_\infty \leq \text{gradient_tolerance}

   where :math:`\|\cdot\|_\infty` refers to the max norm, :math:`\Pi`
   is projection onto the bounds constraints and :math:`\boxplus` is
   Plus operation for the overall local parameterization associated
   with the parameter vector.

.. member:: double GradientProblemSolver::Options::parameter_tolerance

   Default: ``1e-8``

   Solver terminates if

   .. math:: \|\Delta x\| \leq (\|x\| + \text{parameter_tolerance}) * \text{parameter_tolerance}

   where :math:`\Delta x` is the step computed by the linear solver in
   the current iteration of the line search.

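How these termination tolerances interact can be seen in a toy
steepest descent loop. This is a deliberately simplified sketch
(fixed step size, 1-D quadratic objective, illustrative names); the
Ceres minimizer applies the same three tests alongside its line
search.

.. code-block:: c++

  #include <cmath>
  #include <cstdio>

  // Minimize f(x) = (x - 3)^2 by steepest descent, stopping on the
  // gradient, parameter or function tolerance, or on the iteration cap.
  double Minimize(double x0, int max_num_iterations) {
    const double function_tolerance = 1e-6;
    const double gradient_tolerance = 1e-10;
    const double parameter_tolerance = 1e-8;
    const double step = 0.4;  // A fixed step stands in for the line search.
    double x = x0;
    double cost = (x - 3.0) * (x - 3.0);
    for (int i = 0; i < max_num_iterations; ++i) {
      const double gradient = 2.0 * (x - 3.0);
      if (std::fabs(gradient) <= gradient_tolerance) break;
      const double dx = -step * gradient;
      if (std::fabs(dx) <=
          (std::fabs(x) + parameter_tolerance) * parameter_tolerance) {
        break;
      }
      x += dx;
      const double new_cost = (x - 3.0) * (x - 3.0);
      if (std::fabs(new_cost - cost) <= function_tolerance * cost) break;
      cost = new_cost;
    }
    return x;
  }

  int main() {
    std::printf("x = %.12f\n", Minimize(0.0, 50));
    return 0;
  }

On this problem the parameter tolerance fires first, well before the
iteration cap, with the iterate already within about :math:`10^{-8}`
of the minimizer at :math:`x = 3`.
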
.. member:: LoggingType GradientProblemSolver::Options::logging_type

   Default: ``PER_MINIMIZER_ITERATION``

.. member:: bool GradientProblemSolver::Options::minimizer_progress_to_stdout

   Default: ``false``

   By default the :class:`Minimizer` progress is logged to ``STDERR``
   depending on the ``vlog`` level. If this flag is set to true, and
   :member:`GradientProblemSolver::Options::logging_type` is not
   ``SILENT``, the logging output is sent to ``STDOUT``.

   The progress display looks like

   .. code-block:: bash

      0: f: 2.317806e+05 d: 0.00e+00 g: 3.19e-01 h: 0.00e+00 s: 0.00e+00 e:  0 it: 2.98e-02 tt: 8.50e-02
      1: f: 2.312019e+05 d: 5.79e+02 g: 3.18e-01 h: 2.41e+01 s: 1.00e+00 e:  1 it: 4.54e-02 tt: 1.31e-01
      2: f: 2.300462e+05 d: 1.16e+03 g: 3.17e-01 h: 4.90e+01 s: 2.54e-03 e:  1 it: 4.96e-02 tt: 1.81e-01

   Here

   #. ``f`` is the value of the objective function.
   #. ``d`` is the change in the value of the objective function if
      the step computed in this iteration is accepted.
   #. ``g`` is the max norm of the gradient.
   #. ``h`` is the change in the parameter vector.
   #. ``s`` is the optimal step length computed by the line search.
   #. ``it`` is the time taken by the current iteration.
   #. ``tt`` is the total time taken by the minimizer.

.. member:: vector<IterationCallback*> GradientProblemSolver::Options::callbacks

   Callbacks that are executed at the end of each iteration of the
   :class:`Minimizer`. They are executed in the order that they are
   specified in this vector. See the documentation for
   :class:`IterationCallback` for more details.

   The solver does NOT take ownership of these pointers.

:class:`GradientProblemSolver::Summary`
---------------------------------------

.. class:: GradientProblemSolver::Summary

   Summary of the various stages of the solver after termination.

.. function:: string GradientProblemSolver::Summary::BriefReport() const

   A brief one line description of the state of the solver after
   termination.

.. function:: string GradientProblemSolver::Summary::FullReport() const

   A full multiline description of the state of the solver after
   termination.

.. function:: bool GradientProblemSolver::Summary::IsSolutionUsable() const

   Whether the solution returned by the optimization algorithm can be
   relied on to be numerically sane. This will be the case if
   `GradientProblemSolver::Summary::termination_type` is set to `CONVERGENCE`,
   `USER_SUCCESS` or `NO_CONVERGENCE`, i.e., either the solver
   converged by meeting one of the convergence tolerances or because
   the user indicated that it had converged or it ran to the maximum
   number of iterations or time.

.. member:: TerminationType GradientProblemSolver::Summary::termination_type

   The cause of the minimizer terminating.

.. member:: string GradientProblemSolver::Summary::message

   Reason why the solver terminated.

.. member:: double GradientProblemSolver::Summary::initial_cost

   Cost of the problem (value of the objective function) before the
   optimization.

.. member:: double GradientProblemSolver::Summary::final_cost

   Cost of the problem (value of the objective function) after the
   optimization.

.. member:: vector<IterationSummary> GradientProblemSolver::Summary::iterations

   :class:`IterationSummary` for each minimizer iteration in order.

.. member:: int GradientProblemSolver::Summary::num_cost_evaluations

   Number of times the cost (and not the gradient) was evaluated.

.. member:: int GradientProblemSolver::Summary::num_gradient_evaluations

   Number of times the gradient (and the cost) were evaluated.

.. member:: double GradientProblemSolver::Summary::total_time_in_seconds

   Time (in seconds) spent in the solver.

.. member:: double GradientProblemSolver::Summary::cost_evaluation_time_in_seconds

   Time (in seconds) spent evaluating the cost vector.

.. member:: double GradientProblemSolver::Summary::gradient_evaluation_time_in_seconds

   Time (in seconds) spent evaluating the gradient vector.

.. member:: int GradientProblemSolver::Summary::num_parameters

   Number of parameters in the problem.

.. member:: int GradientProblemSolver::Summary::num_local_parameters

   Dimension of the tangent space of the problem. This is different
   from :member:`GradientProblemSolver::Summary::num_parameters` if a
   :class:`LocalParameterization` object is used.

.. member:: LineSearchDirectionType GradientProblemSolver::Summary::line_search_direction_type

   Type of line search direction used.

.. member:: LineSearchType GradientProblemSolver::Summary::line_search_type

   Type of the line search algorithm used.

.. member:: LineSearchInterpolationType GradientProblemSolver::Summary::line_search_interpolation_type

   When performing line search, the degree of the polynomial used to
   approximate the objective function.

.. member:: NonlinearConjugateGradientType GradientProblemSolver::Summary::nonlinear_conjugate_gradient_type

   If the line search direction is `NONLINEAR_CONJUGATE_GRADIENT`,
   then this indicates the particular variant of non-linear conjugate
   gradient used.

.. member:: int GradientProblemSolver::Summary::max_lbfgs_rank

   If the type of the line search direction is `LBFGS`, then this
   indicates the rank of the Hessian approximation.