Lines Matching full:matrix
31 n` matrix, where :math:`J_{ij}(x) = \partial_j f_i(x)` and the
95 matrix used to define a metric on the domain of :math:`F(x)` and
146 The matrix :math:`D(x)` is a non-negative diagonal matrix, typically
147 the square root of the diagonal of the matrix :math:`J(x)^\top J(x)`.
150 will assume that the matrix :math:`\sqrt{\mu} D` has been concatenated
151 at the bottom of the matrix :math:`J` and similarly a vector of zeros
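The hits above describe the Levenberg-Marquardt augmentation: stacking :math:`\sqrt{\mu} D` below :math:`J` (and zeros below :math:`f`) so that an ordinary least-squares solve of the augmented system reproduces the damped normal equations :math:`(J^\top J + \mu D^\top D)\Delta x = -J^\top f`. A minimal pure-Python sketch of that identity; all names here are illustrative, not Ceres API:

```python
# Sketch: appending sqrt(mu) * D below J reproduces the damped
# normal-equations matrix J^T J + mu * D^T D.  Illustrative only.

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

mu = 0.5
J = [[1.0, 2.0],
     [3.0, 4.0]]

# D: square roots of the diagonal of J^T J, as in the text above.
JtJ = matmul(transpose(J), J)
D = [[JtJ[0][0] ** 0.5, 0.0],
     [0.0, JtJ[1][1] ** 0.5]]

# Stack sqrt(mu) * D at the bottom of J.
sqrt_mu_D = [[mu ** 0.5 * d for d in row] for row in D]
J_aug = J + sqrt_mu_D

# (J_aug)^T (J_aug) equals J^T J + mu * D^T D, term by term.
DtD = matmul(transpose(D), D)
lhs = matmul(transpose(J_aug), J_aug)
rhs = [[JtJ[i][j] + mu * DtD[i][j] for j in range(2)] for i in range(2)]
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```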
270 Similar structure can be found in the matrix factorization with
300 Hessian matrix's sparsity structure into a collection of
378 be the identity matrix. This is not a good search direction for
437 :math:`J`, where :math:`Q` is an orthonormal matrix and :math:`R` is
438 an upper triangular matrix [TrefethenBau]_. Then it can be shown that
454 :math:`R` is an upper triangular matrix, then the solution to
463 factorization of :math:`H` is the same upper triangular matrix
465 orthonormal matrix, :math:`J=QR` implies that :math:`J^\top J = R^\top
501 turn implies that the matrix :math:`H` is of the form
503 .. math:: H = \left[ \begin{matrix} B & E\\ E^\top & C \end{matrix} \right]\ ,
506 where :math:`B \in \mathbb{R}^{pc\times pc}` is a block sparse matrix
508 \mathbb{R}^{qs\times qs}` is a block diagonal matrix with :math:`q` blocks
510 general block sparse matrix, with a block of size :math:`c\times s`
515 .. math:: \left[ \begin{matrix} B & E\\ E^\top & C \end{matrix}
516 \right]\left[ \begin{matrix} \Delta y \\ \Delta z
517 \end{matrix} \right] = \left[ \begin{matrix} v\\ w
518 \end{matrix} \right]\ ,
522 a block diagonal matrix, with small diagonal blocks of size
531 The matrix
536 the *reduced camera matrix*, because the only variables
539 symmetric positive definite matrix, with blocks of size :math:`c\times
549 inversion of the block diagonal matrix :math:`C`, a few matrix-matrix
550 and matrix-vector multiplies, and the solution of block sparse
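The hits around source lines 503-550 outline Schur elimination: form the reduced camera matrix :math:`S = B - E C^{-1} E^\top`, solve for :math:`\Delta y`, then back-substitute for :math:`\Delta z`. A scalar-"block" sketch of that algebra in pure Python (Ceres does this block-wise on sparse matrices; these values are made up):

```python
# Schur elimination on [[B, E], [E, C]] [dy, dz]^T = [v, w]^T,
# with 1x1 "blocks" so the algebra is easy to follow.
B, E, C = 4.0, 1.0, 2.0
v, w = 6.0, 4.0

S = B - E * (1.0 / C) * E      # reduced camera matrix S = B - E C^{-1} E^T
rhs = v - E * (1.0 / C) * w    # reduced right-hand side v - E C^{-1} w
dy = rhs / S                   # solve S dy = rhs for the camera update
dz = (w - E * dy) / C          # back-substitute: dz = C^{-1}(w - E^T dy)

# Verify against the full 2x2 system.
assert abs(B * dy + E * dz - v) < 1e-12
assert abs(E * dy + C * dz - w) < 1e-12
```

Inverting :math:`C` is cheap because it is block diagonal with small blocks, which is what makes this elimination attractive for bundle adjustment.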
560 depending upon the structure of the matrix, there are, in general, two
562 :math:`S` as a dense matrix [TrefethenBau]_. This method has
568 But :math:`S` is typically a fairly sparse matrix, as most images
571 sparse matrix, use row and column re-ordering algorithms to maximize
604 reduced camera matrix :math:`S` instead of :math:`H`. One reason to do
605 this is that :math:`S` is a much smaller matrix than :math:`H`, but
651 number of the matrix :math:`H`. For most bundle adjustment problems,
661 matrix :math:`\kappa(M^{-1}A)`.
686 matrix :math:`B` [Mandel]_ and the block diagonal :math:`S`, i.e., the
818 inverse of the Hessian matrix. The rank of the approximation
1073 The ``LEVENBERG_MARQUARDT`` strategy uses a diagonal matrix to
1075 the values of this diagonal matrix.
1081 The ``LEVENBERG_MARQUARDT`` strategy uses a diagonal matrix to
1083 the values of this diagonal matrix.
1155 dense matrix factorizations. Currently ``EIGEN`` and ``LAPACK`` are
1205 ordering to permute the columns of the Jacobian matrix. There are
1208 1. Compute the Jacobian matrix in some order and then have the
1215 matrix, thus Ceres pre-permutes the columns of the Jacobian matrix
1221 expense of an extra copy of the Jacobian matrix. Setting
1390 dense matrix. The vectors :math:`D`, :math:`x` and :math:`f` are
1716 A compressed row sparse matrix used primarily for communicating the
1717 Jacobian matrix to the user.
1719 A compressed row matrix stores its contents in three arrays,
1732 non-zeros in the matrix.
1734 e.g., consider the 3x4 sparse matrix
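The compressed-row hits above (source lines 1716-1734) describe the three-array layout (``rows``, ``cols``, ``values``). A small sketch of building that layout; the 3x4 matrix here is made up for illustration, not the example from the Ceres manual:

```python
# Compressed row (CSR) storage sketch: rows holds per-row offsets into
# cols/values and has length num_rows + 1.  Illustrative matrix only.
dense = [[1, 0, 0, 2],
         [0, 3, 0, 0],
         [0, 0, 4, 5]]

rows, cols, values = [0], [], []
for row in dense:
    for j, x in enumerate(row):
        if x != 0:
            cols.append(j)
            values.append(x)
    rows.append(len(values))

print(rows)    # [0, 2, 3, 5]: row i's entries are values[rows[i]:rows[i+1]]
print(cols)    # [0, 3, 1, 2, 3]: column index of each stored entry
print(values)  # [1, 2, 3, 4, 5]: the non-zero entries, row by row
```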
1894 If :math:`J(x^*)` is rank deficient, then the covariance matrix :math:`C(x^*)`
1899 Note that in the above, we assumed that the covariance matrix for
1905 Where :math:`S` is a positive semi-definite matrix denoting the
1916 covariance matrix not equal to identity, then it is the user's
1921 matrix :math:`S`.
1942 Since the computation of the covariance matrix requires computing the
1943 inverse of a potentially large matrix, this can involve a rather large
1946 matrix. Quite often only the block diagonal is needed. :class:`Covariance`
1947 allows the user to specify the parts of the covariance matrix that she
1949 store those parts of the covariance matrix.
1966 Numerical rank deficiency, where the rank of the matrix cannot be
2029 the Jacobian matrix J is well conditioned. For ill-conditioned
2032 factorization, i.e., it cannot reliably detect when the matrix
2035 checking if the matrix is rank deficient (cholmod_rcond), but it
2041 matrix. Therefore, if you are doing ``SPARSE_CHOLESKY``, we strongly
2055 i.e., it can reliably detect when the Jacobian matrix is rank
2065 If the Jacobian matrix is near singular, then inverting :math:`J'J`
2075 which is essentially a rank deficient matrix, we have
2127 As mentioned above, when the covariance matrix is near singular,
2148 truncated matrix is still below min_reciprocal_condition_number,
2177 Compute a part of the covariance matrix.
2180 matrix block-wise using pairs of parameter blocks. This allows the
2184 Since the covariance matrix is symmetric, if the user passes
2192 determine what parts of the covariance matrix are computed. The
2203 Return the block of the covariance matrix corresponding to
2213 a ``parameter_block1_size x parameter_block2_size`` matrix. The
2214 returned covariance will be a row-major matrix.