    Searched refs:kappa (Results 1 - 3 of 3)

  /external/v8/src/
fast-dtoa.cc 56 // Input: * buffer containing the digits of too_high / 10^kappa
60 // * rest = (too_high - buffer * 10^kappa).f() * unit
61 // * ten_kappa = 10^kappa * unit
177 // The rounding might shift the whole buffer in which case the kappa is
178 // adjusted. For example "99", kappa = 3 might become "10", kappa = 4.
191 int* kappa) {
198 // 10^kappa == 40 then there is no way to tell which way to round.
200 // Even if unit is just half the size of 10^kappa we are already completely
204 // If 2 * (rest + unit) <= 10^kappa we can safely round down
652 int kappa; local
701 int kappa; local
    [all...]
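
The excerpt above describes how rounding up the digit buffer can overflow every digit, shifting the whole buffer and adjusting kappa (the decimal exponent). The following is a minimal sketch of that carry propagation written from the comment alone; the function name and signature are hypothetical and not taken from fast-dtoa.cc:

// Hypothetical sketch: increment the last digit of a decimal digit buffer and
// propagate the carry. If every digit was '9', the buffer renormalizes to
// "10...0" and kappa grows by one, matching the "99", kappa = 3 -> "10",
// kappa = 4 example quoted above.
static void RoundUp(char* buffer, int length, int* kappa) {
  for (int i = length - 1; i >= 0; --i) {
    if (buffer[i] != '9') {
      buffer[i] += 1;   // carry absorbed; kappa unchanged
      return;
    }
    buffer[i] = '0';    // carry ripples toward the most significant digit
  }
  // Carry propagated past the first digit: all digits are now '0'.
  buffer[0] = '1';      // buffer becomes "10...0"
  *kappa += 1;          // one extra order of magnitude
}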
  /external/dropbear/libtomcrypt/src/ciphers/
anubis.c 901 ulong32 kappa[MAX_N]; local
    [all...]
  /external/ceres-solver/docs/
solving.tex 330 Another option for bundle adjustment problems is to apply PCG to the reduced camera matrix $S$ instead of $H$. One reason to do this is that $S$ is a much smaller matrix than $H$, but more importantly, it can be shown that $\kappa(S)\leq \kappa(H)$. Ceres implements PCG on $S$ as the \texttt{ITERATIVE\_SCHUR} solver. When the user chooses \texttt{ITERATIVE\_SCHUR} as the linear solver, Ceres automatically switches from the exact step algorithm to an inexact step algorithm.
345 The convergence rate of Conjugate Gradients for solving~\eqref{eq:normal} depends on the distribution of eigenvalues of $H$~\cite{saad2003iterative}. A useful upper bound is $\sqrt{\kappa(H)}$, where $\kappa(H)$ is the condition number of the matrix $H$. For most bundle adjustment problems, $\kappa(H)$ is high and a direct application of Conjugate Gradients to~\eqref{eq:normal} results in extremely poor performance.
347 The solution to this problem is to replace~\eqref{eq:normal} with a {\em preconditioned} system. Given a linear system $Ax = b$ and a preconditioner $M$, the preconditioned system is given by $M^{-1}Ax = M^{-1}b$. The resulting algorithm is known as the Preconditioned Conjugate Gradients (PCG) algorithm, and its worst-case complexity now depends on the condition number of the {\em preconditioned} matrix, $\kappa(M^{-1}A)$.
349 The computational cost of using a preconditioner $M$ is the cost of computing $M$ and evaluating the product $M^{-1}y$ for arbitrary vectors $y$. Thus, there are two competing factors to consider: how much of $H$'s structure is captured by $M$ so that the condition number $\kappa(HM^{-1})$ is low, and the computational cost of constructing and using $M$. The ideal preconditioner would be one for which $\kappa(M^{-1}A) = 1$. $M = A$ achieves this, but it is not a practical choice, as applying this preconditioner would require solving a linear system equivalent to the unpreconditioned problem. It is usually the case that the more information $M$ has about $H$, the more expensive it is to use. For example, Incomplete Cholesky factorization based preconditioners have much better convergence behavior than the Jacobi preconditioner, but are also much more expensive.
    [all...]
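
As a usage note for the \texttt{ITERATIVE\_SCHUR} excerpt above: with the public ceres::Solver::Options API, switching to the inexact-step Schur solver amounts to setting a couple of options. The preconditioner choice and iteration cap below are illustrative, and `problem` is assumed to be a ceres::Problem built elsewhere:

#include "ceres/ceres.h"

// Sketch only; assumes a ceres::Problem named `problem` already exists.
ceres::Solver::Options options;
options.linear_solver_type = ceres::ITERATIVE_SCHUR;   // PCG on the reduced camera matrix S
options.preconditioner_type = ceres::SCHUR_JACOBI;     // block-diagonal preconditioner for S
options.max_linear_solver_iterations = 500;            // cap on the inner CG iterations

ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);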
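To make the preconditioning discussion concrete, here is a minimal, self-contained Preconditioned Conjugate Gradients loop with a Jacobi preconditioner $M = \text{diag}(A)$ on a small dense SPD system. It only illustrates the $M^{-1}Ax = M^{-1}b$ idea: the matrix, tolerance, and dense storage are invented for the example, and Ceres applies PCG to the reduced camera matrix $S$ with far more elaborate preconditioners.

// Jacobi-preconditioned CG on a tiny dense SPD system (illustrative only).
#include <cmath>
#include <cstdio>
#include <vector>

using Vec = std::vector<double>;

static double Dot(const Vec& a, const Vec& b) {
  double s = 0.0;
  for (size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
  return s;
}

// y = A * x for a dense, row-major matrix A.
static Vec MatVec(const std::vector<Vec>& A, const Vec& x) {
  Vec y(A.size(), 0.0);
  for (size_t i = 0; i < A.size(); ++i) y[i] = Dot(A[i], x);
  return y;
}

int main() {
  const std::vector<Vec> A = {{4, 1, 0}, {1, 3, 1}, {0, 1, 5}};  // SPD by construction
  const Vec b = {1, 2, 3};
  Vec x(3, 0.0);

  Vec r = b;                                           // r = b - A*x with x = 0
  Vec z(3);
  for (int i = 0; i < 3; ++i) z[i] = r[i] / A[i][i];   // z = M^{-1} r, M = diag(A)
  Vec p = z;
  double rz = Dot(r, z);

  for (int iter = 0; iter < 50 && std::sqrt(Dot(r, r)) > 1e-12; ++iter) {
    const Vec Ap = MatVec(A, p);
    const double alpha = rz / Dot(p, Ap);
    for (int i = 0; i < 3; ++i) { x[i] += alpha * p[i]; r[i] -= alpha * Ap[i]; }
    for (int i = 0; i < 3; ++i) z[i] = r[i] / A[i][i];  // applying M^{-1} is O(n); cheapness is the point
    const double rz_new = Dot(r, z);
    const double beta = rz_new / rz;
    rz = rz_new;
    for (int i = 0; i < 3; ++i) p[i] = z[i] + beta * p[i];
  }
  std::printf("x = (%g, %g, %g)\n", x[0], x[1], x[2]);
  return 0;
}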
