

Here, the Jacobian $J(x)$ of $F(x)$ is an $m\times n$ matrix, where $J_{ij}(x) = \partial_j f_i(x)$, and the gradient vector is $g(x) = \nabla \frac{1}{2}\|F(x)\|^2 = J(x)^\top F(x)$. Since the efficient global optimization of~\eqref{eq:nonlinsq} for general $F(x)$ is an intractable problem, we will have to settle for finding a local minimum.
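For example, with $m = n = 2$ and $F(x) = (x_1 + x_2^2,\; x_2)^\top$ (a toy instance chosen purely for illustration),
\begin{equation}
J(x) = \begin{bmatrix} 1 & 2x_2 \\ 0 & 1 \end{bmatrix},
\qquad
g(x) = J(x)^\top F(x) = \begin{bmatrix} x_1 + x_2^2 \\ 2x_2\,(x_1 + x_2^2) + x_2 \end{bmatrix}.
\end{equation}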
of the step $\Delta x$. And this is where the idea of a trust-region
where $\lambda$ is a Lagrange multiplier that is inversely related to $\mu$. In Ceres, we solve for
This idea can be generalized to cases where the residual is not
For small problems (a couple of hundred parameters and a few thousand residuals) with relatively dense Jacobians, \texttt{DENSE\_QR} is the method of choice~\cite{bjorck1996numerical}. Let $J = QR$ be the QR-decomposition of $J$, where $Q$ is an orthonormal matrix and $R$ is an upper triangular matrix~\cite{trefethen1997numerical}. Then it can be shown that the solution to~\eqref{eq:normal} is given by
\begin{equation}
\Delta x^* = -R^{-1}Q^\top F(x).
\end{equation}
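To see why, assume (as is standard) that \eqref{eq:normal} denotes the normal equations $J^\top J\,\Delta x = -g(x) = -J^\top F(x)$. Substituting $J = QR$ and using $Q^\top Q = I$,
\begin{align}
R^\top Q^\top Q R\,\Delta x &= -R^\top Q^\top F(x)\\
R\,\Delta x &= -Q^\top F(x),
\end{align}
an upper triangular system that is solved by back substitution.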
Large non-linear least squares problems are usually sparse. In such cases, using a dense QR factorization is inefficient. Let $H = R^\top R$ be the Cholesky factorization of the normal equations, where $R$ is an upper triangular matrix; then the solution to~\eqref{eq:normal} is given by
\begin{equation}
\Delta x^* = -R^{-1}R^{-\top} g.
\end{equation}
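In the Ceres API, these two choices correspond to the \texttt{DENSE\_QR} and \texttt{SPARSE\_NORMAL\_CHOLESKY} linear solver types. A minimal sketch, assuming a \texttt{ceres::Problem} has already been populated with residual blocks:
\begin{verbatim}
#include "ceres/ceres.h"

// Solve `problem`, picking the factorization discussed above.
void SolveProblem(ceres::Problem* problem, bool jacobian_is_sparse) {
  ceres::Solver::Options options;
  options.linear_solver_type = jacobian_is_sparse
                                   ? ceres::SPARSE_NORMAL_CHOLESKY
                                   : ceres::DENSE_QR;
  ceres::Solver::Summary summary;
  ceres::Solve(options, problem, &summary);
}
\end{verbatim}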
Suppose that the SfM problem consists of $p$ cameras and $q$ points, and the variable vector $x$ has the block structure $x = [y_{1},\hdots,y_{p},z_{1},\hdots,z_{q}]$, where $y$ and $z$ correspond to camera and point parameters, respectively. Further, let the camera blocks be of size $c$ and the point blocks be of size $s$ (for most problems $c = 6$--$9$ and $s = 3$). Ceres does not impose any constancy requirement on these block sizes, but choosing them to be constant simplifies the exposition.
The matrix $H$ of the normal equations~\eqref{eq:normal} then has the block structure
\begin{equation}
H = \begin{bmatrix} B & E \\ E^\top & C \end{bmatrix},
\end{equation}
where $B \in \reals^{pc\times pc}$ is a block sparse matrix with $p$ blocks of size $c\times c$, and $C \in \reals^{qs\times qs}$ is a block diagonal matrix with $q$ blocks of size $s\times s$. $E \in \reals^{pc\times qs}$ is a general block sparse matrix, with a block of size $c\times s$ for each observation. Let us now block partition $\Delta x = [\Delta y,\Delta z]$ and $g=[v,w]$ to restate~\eqref{eq:normal} as the block structured linear system
\begin{equation}
\begin{bmatrix} B & E \\ E^\top & C \end{bmatrix}
\begin{bmatrix} \Delta y \\ \Delta z \end{bmatrix}
= -\begin{bmatrix} v \\ w \end{bmatrix}.
\end{equation}
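Because $C$ is block diagonal, it is cheap to invert blockwise, and $\Delta z$ can be eliminated from the second block row. Under the sign convention assumed above, Gaussian elimination gives
\begin{align}
\Delta z &= -C^{-1}\left(w + E^\top \Delta y\right),\\
\left(B - E C^{-1} E^\top\right)\Delta y &= E C^{-1} w - v,
\end{align}
where $S = B - E C^{-1} E^\top$ is the Schur complement of $C$ in $H$, also known as the reduced camera matrix. Solving the small system in $\Delta y$ first and then back substituting for $\Delta z$ is the basis of the Schur-based solvers described below.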
is direct factorization, where we store and factor $S$ as a dense
The convergence rate of Conjugate Gradients for solving~\eqref{eq:normal} depends on the distribution of eigenvalues of $H$~\cite{saad2003iterative}. A useful upper bound is $\sqrt{\kappa(H)}$, where $\kappa(H)$ is the condition number of the matrix $H$. For most bundle adjustment problems, $\kappa(H)$ is high, and a direct application of Conjugate Gradients to~\eqref{eq:normal} results in extremely poor performance.
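The classical bound behind this statement (see, e.g.,~\cite{saad2003iterative}) controls the error in the $H$-norm after $k$ iterations,
\begin{equation}
\|x_k - x^*\|_H \le 2\left(\frac{\sqrt{\kappa(H)}-1}{\sqrt{\kappa(H)}+1}\right)^{k} \|x_0 - x^*\|_H,
\end{equation}
so the number of iterations needed to reach a fixed relative accuracy grows like $\sqrt{\kappa(H)}$.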
O(n) storage, whereas a bad ordering will result in a completely
from no hints, where the solver is free to decide the best ordering
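At the other end of that spectrum, the user can specify an elimination ordering explicitly. A minimal sketch using \texttt{ceres::ParameterBlockOrdering}, with hypothetical \texttt{camera} and \texttt{point} parameter blocks standing in for real ones:
\begin{verbatim}
double camera[9];  // hypothetical camera parameter block
double point[3];   // hypothetical point parameter block
// Both blocks are assumed to have been added to the problem.

// Blocks in group 0 are eliminated first; placing the points
// there produces the Schur complement ordering discussed above.
auto* ordering = new ceres::ParameterBlockOrdering;
ordering->AddElementToGroup(point, 0);
ordering->AddElementToGroup(camera, 1);

ceres::Solver::Options options;
options.linear_solver_ordering.reset(ordering);
\end{verbatim}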
where $\Delta \text{cost}$ is the change in objective function value
where $\|\cdot\|_\infty$ refers to the max norm, and $x_0$ is the vector of initial parameter values.
where $\Delta x$ is the step computed by the linear solver in the current iteration of Levenberg-Marquardt.
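These criteria are exposed as tolerances on \texttt{ceres::Solver::Options}; a sketch with illustrative (not prescriptive) values:
\begin{verbatim}
ceres::Solver::Options options;
// Relative decrease in the objective function value.
options.function_tolerance = 1e-6;
// Max norm of the gradient, relative to its starting value.
options.gradient_tolerance = 1e-10;
// Size of the step relative to the parameter vector.
options.parameter_tolerance = 1e-8;
\end{verbatim}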
that this way, numeric differentiation works for functions where
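For context, numeric differentiation is requested through \texttt{ceres::NumericDiffCostFunction}; the residual below is invented for illustration:
\begin{verbatim}
#include "ceres/ceres.h"

// Illustrative residual: r(x) = 10 - x.
struct NumericResidual {
  bool operator()(const double* const x, double* residual) const {
    residual[0] = 10.0 - x[0];
    return true;
  }
};

// Central differences; 1 residual, 1-dimensional parameter block.
ceres::CostFunction* cost_function =
    new ceres::NumericDiffCostFunction<NumericResidual, ceres::CENTRAL, 1, 1>(
        new NumericResidual);
\end{verbatim}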