
%!TEX root = ceres-solver.tex
Effective use of Ceres requires some familiarity with the basic components of a nonlinear least squares solver, so before we describe how to configure the solver, we will begin by taking a brief look at how some of the core optimization algorithms in Ceres work and the various linear solvers and preconditioners that power it.
$F(x) = \left[f_1(x), \hdots, f_{m}(x) \right]^{\top}$ be an $m$-dimensional function of $x$. We are interested in solving the following optimization problem~\footnote{At the level of the non-linear solver, the block and residual structure is not relevant, therefore our discussion here is in terms of an optimization problem defined over a state vector of size $n$.},
An inexact Newton method requires two ingredients. First, a cheap method for approximately solving systems of linear equations. Typically an iterative linear solver like the Conjugate Gradients method is used for this purpose~\cite{nocedal2000numerical}. Second, a termination rule for the iterative solver. A typical termination rule is of the form
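To make such a termination rule concrete, here is a self-contained (non-Ceres) C++ sketch of conjugate gradients applied to a small SPD system $H \Delta x = -g$, stopping as soon as $\|H \Delta x + g\| \leq \eta \|g\|$. The matrix, gradient, and forcing parameter $\eta$ are all invented for illustration:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// y = H * v for a fixed 3x3 matrix.
Vec3 MatVec(const double H[3][3], const Vec3& v) {
  Vec3 y{};
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j) y[i] += H[i][j] * v[j];
  return y;
}

double Dot(const Vec3& a, const Vec3& b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Approximately solve H * dx = -g by conjugate gradients, terminating
// when the residual satisfies ||H * dx + g|| <= eta * ||g||.
Vec3 TruncatedCG(const double H[3][3], const Vec3& g, double eta) {
  Vec3 dx = {0, 0, 0};
  Vec3 r = {-g[0], -g[1], -g[2]};  // residual of H*dx = -g at dx = 0
  Vec3 p = r;
  const double g_norm = std::sqrt(Dot(g, g));
  double rr = Dot(r, r);
  for (int iter = 0; iter < 100 && std::sqrt(rr) > eta * g_norm; ++iter) {
    const Vec3 Hp = MatVec(H, p);
    const double alpha = rr / Dot(p, Hp);
    for (int i = 0; i < 3; ++i) {
      dx[i] += alpha * p[i];
      r[i] -= alpha * Hp[i];
    }
    const double rr_new = Dot(r, r);
    const double beta = rr_new / rr;
    rr = rr_new;
    for (int i = 0; i < 3; ++i) p[i] = r[i] + beta * p[i];
  }
  return dx;
}
```

The loop exits as soon as the forcing condition holds, so the returned step is only as accurate as $\eta$ demands; a tighter $\eta$ buys a more exact Newton step at the cost of more CG iterations.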
Ceres supports both exact and inexact step solution strategies. When the user chooses a factorization-based linear solver, the exact step Levenberg-Marquardt algorithm is used. When the user chooses an iterative linear solver, the inexact step Levenberg-Marquardt algorithm is used.
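In other words, the exact versus inexact step choice is made implicitly through the linear solver selection. A minimal sketch, assuming the option and enum names of current Ceres releases (they may differ in the version this document describes):

```cpp
#include "ceres/ceres.h"

// Sketch only: the linear solver type determines the step strategy.
void ChooseStepStrategy(ceres::Solver::Options* options, bool exact) {
  if (exact) {
    // Factorization-based solver => exact step Levenberg-Marquardt.
    options->linear_solver_type = ceres::DENSE_QR;
  } else {
    // Iterative solver => inexact step Levenberg-Marquardt.
    options->linear_solver_type = ceres::CGNR;
  }
}
```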
Setting \texttt{Solver::Options::use\_inner\_iterations} to true
Setting \texttt{Solver::Options::num\_threads} to the maximum number
Setting \texttt{Solver::Options::use\_nonmonotonic\_steps} to \texttt{true}
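These trust region refinements can be enabled together; a hedged sketch, assuming the member names used by current Ceres releases:

```cpp
#include <thread>
#include "ceres/ceres.h"

// Sketch only: member names assume recent Ceres releases.
void ConfigureTrustRegion(ceres::Solver::Options* options) {
  options->use_inner_iterations = true;    // extra coordinate-descent passes
  options->use_nonmonotonic_steps = true;  // allow occasional cost increases
  options->num_threads =
      static_cast<int>(std::thread::hardware_concurrency());
}
```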
cameras. Ceres implements this strategy as the \texttt{DENSE\_SCHUR} solver.
factorization. Ceres implements this strategy as the \texttt{SPARSE\_SCHUR} solver.
For general sparse problems, if the problem is too large for \texttt{CHOLMOD} or a sparse linear algebra library is not linked into Ceres, another option is the \texttt{CGNR} solver. This solver uses the Conjugate Gradients solver on the {\em normal equations}, but without forming the normal equations explicitly. It exploits the relation
When the user chooses \texttt{CGNR} as the linear solver, Ceres automatically switches from the exact step algorithm to an inexact step algorithm.
Another option for bundle adjustment problems is to apply PCG to the reduced camera matrix $S$ instead of $H$. One reason to do this is that $S$ is a much smaller matrix than $H$, but more importantly, it can be shown that $\kappa(S)\leq \kappa(H)$. Ceres implements PCG on $S$ as the \texttt{ITERATIVE\_SCHUR} solver. When the user chooses \texttt{ITERATIVE\_SCHUR} as the linear solver, Ceres automatically switches from the exact step algorithm to an inexact step algorithm.
The cost of forming and storing the Schur complement $S$ can be prohibitive for large problems. Indeed, for an inexact Newton solver that computes $S$ and runs PCG on it, almost all of its time is spent in constructing $S$; the time spent inside the PCG algorithm is negligible in comparison. Because PCG only needs access to $S$ via its product with a vector, one way to evaluate $Sx$ is to observe that
The order in which variables are eliminated in a linear solver can
solver about the variable elimination ordering to use. This can range
from no hints, where the solver is free to decide the best ordering
based on the user's choices like the linear solver being used, to an
\item $\{0: x, y\}$: Solver gets to decide the elimination order.
If the user leaves the choice to Ceres, then the solver uses an
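In code, the group notation above maps onto an ordering object; the sketch below uses the \texttt{ParameterBlockOrdering} API of current Ceres releases, which may differ from the interface this document describes, and the parameter blocks \texttt{x} and \texttt{y} are hypothetical:

```cpp
#include "ceres/ceres.h"

// Sketch only: eliminate x before y by placing it in a lower group.
// Group 0 is eliminated first; blocks in the same group are unordered
// relative to each other.
void SetEliminationOrdering(ceres::Solver::Options* options,
                            double* x, double* y) {
  auto* ordering = new ceres::ParameterBlockOrdering;
  ordering->AddElementToGroup(x, 0);  // eliminated first
  ordering->AddElementToGroup(y, 1);  // eliminated after group 0
  options->linear_solver_ordering.reset(ordering);
}
```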
\section{\texttt{Solver::Options}}
\texttt{Solver::Options} controls the overall behavior of the
solver. We list the various settings and their default values below.
amount of time for which the solver should run.
\item{\texttt{min\_trust\_region\_radius } ($10^{-32}$)} The solver
Solver terminates if
\item \texttt{Solver::Options::gradient\_tolerance } Solver terminates if
\item{\texttt{parameter\_tolerance }}($10^{-8}$) Solver terminates if
where $\Delta x$ is the step computed by the linear solver in the current iteration of Levenberg-Marquardt.
linear solver used to compute the solution to the linear least
preconditioner used by the iterative linear solver. The default is
threads used by the linear solver.
\texttt{Solver::Options::inner\_iterations} is true, then the user
\item Let the solver heuristically decide which parameter blocks to
of the ordering object informs the solver about the desired order in
If \texttt{NULL}, the solver is free to choose an ordering that it
\texttt{SPARSE\_NORMAL\_CHOLESKY} solver is forthcoming.
Minimum number of iterations used by the linear solver. This only
makes sense when the linear solver is an iterative solver, e.g.,
Maximum number of iterations used by the linear solver. This only
makes sense when the linear solver is an iterative solver, e.g.,
Forcing sequence parameter. The truncated Newton solver uses this
passed to the linear solver. This improves the numerical
If true, the vectors \texttt{Solver::Summary::initial\_residuals } and
\texttt{Solver::Summary::final\_residuals } are filled with the
If true, the vectors \texttt{Solver::Summary::initial\_gradient } and
\texttt{Solver::Summary::final\_gradient } are filled with the
returned in \texttt{Solver::Summary::initial\_jacobian } and
\texttt{Solver::Summary::final\_jacobian } respectively.
\texttt{protobuf} be linked into Ceres Solver.
The solver does NOT take ownership of these pointers.
Normally the parameter blocks are only updated when the solver
\item{\texttt{solver\_log}} If non-empty, a summary of the execution of the solver is
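Pulling several of the settings above together, a typical configuration might look like the following sketch; the member names follow current Ceres releases and are assumptions relative to the exact version this document describes:

```cpp
#include "ceres/ceres.h"

// Sketch only: member names assume recent Ceres releases.
ceres::Solver::Options MakeOptions() {
  ceres::Solver::Options options;
  options.max_solver_time_in_seconds = 30.0;      // wall-clock budget
  options.function_tolerance = 1e-6;
  options.gradient_tolerance = 1e-10;
  options.parameter_tolerance = 1e-8;             // default per the text
  options.linear_solver_type = ceres::SPARSE_SCHUR;
  options.preconditioner_type = ceres::JACOBI;    // used by iterative solvers
  options.minimizer_progress_to_stdout = true;
  return options;
}
```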
\section{\texttt{Solver::Summary}}