namespace Eigen {

/** \page TutorialMatrixArithmetic Tutorial page 2 - %Matrix and vector arithmetic
    \ingroup Tutorial

\li \b Previous: \ref TutorialMatrixClass
\li \b Next: \ref TutorialArrayClass

This tutorial aims to provide an overview and some details on how to perform arithmetic
between matrices, vectors and scalars with Eigen.

\b Table \b of \b contents
  - \ref TutorialArithmeticIntroduction
  - \ref TutorialArithmeticAddSub
  - \ref TutorialArithmeticScalarMulDiv
  - \ref TutorialArithmeticMentionXprTemplates
  - \ref TutorialArithmeticTranspose
  - \ref TutorialArithmeticMatrixMul
  - \ref TutorialArithmeticDotAndCross
  - \ref TutorialArithmeticRedux
  - \ref TutorialArithmeticValidity

\section TutorialArithmeticIntroduction Introduction

Eigen offers matrix/vector arithmetic operations either through overloads of common C++ arithmetic operators such as +, -, *,
or through special methods such as dot(), cross(), etc.
For the Matrix class (matrices and vectors), operators are only overloaded to support
linear-algebraic operations. For example, \c matrix1 \c * \c matrix2 means matrix-matrix product,
and \c vector \c + \c scalar is simply not allowed. If you want to perform all kinds of array operations,
not linear algebra, see the \ref TutorialArrayClass "next page".

\section TutorialArithmeticAddSub Addition and subtraction

The left-hand side and right-hand side must, of course, have the same number of rows and the same number of columns. They must
also have the same \c Scalar type, as Eigen doesn't do automatic type promotion. The operators at hand here are:
\li binary operator + as in \c a+b
\li binary operator - as in \c a-b
\li unary operator - as in \c -a
\li compound operator += as in \c a+=b
\li compound operator -= as in \c a-=b

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_add_sub.cpp
</td>
<td>
\verbinclude tut_arithmetic_add_sub.out
</td></tr></table>
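
Since the included example file is only visible in the rendered documentation, here is a minimal, self-contained sketch (not one of the shipped examples; the sizes and values are arbitrary) of what these operators look like in practice:
\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix2d a, b;
  a << 1, 2,
       3, 4;
  b << 5, 6,
       7, 8;
  std::cout << "a + b =\n" << a + b << std::endl;   // coefficient-wise sum
  std::cout << "a - b =\n" << a - b << std::endl;   // coefficient-wise difference
  a += b;                                           // in-place addition
  std::cout << "after a += b, a =\n" << a << std::endl;
  std::cout << "-a =\n" << -a << std::endl;         // unary negation
}
\endcode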

\section TutorialArithmeticScalarMulDiv Scalar multiplication and division

Multiplication and division by a scalar are just as simple. The operators at hand here are:
\li binary operator * as in \c matrix*scalar
\li binary operator * as in \c scalar*matrix
\li binary operator / as in \c matrix/scalar
\li compound operator *= as in \c matrix*=scalar
\li compound operator /= as in \c matrix/=scalar

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_scalar_mul_div.cpp
</td>
<td>
\verbinclude tut_arithmetic_scalar_mul_div.out
</td></tr></table>
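
As another rough illustration (again not one of the shipped examples), the scalar operators can be used like this:
\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Vector3d v(1.0, 2.0, 3.0);
  std::cout << "v * 2.5 =\n" << v * 2.5 << std::endl;   // scalar on the right
  std::cout << "0.5 * v =\n" << 0.5 * v << std::endl;   // scalar on the left
  std::cout << "v / 2 =\n"   << v / 2.0 << std::endl;   // division by a scalar
  v *= 10.0;                                            // in-place scaling
  std::cout << "after v *= 10, v =\n" << v << std::endl;
}
\endcode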


\section TutorialArithmeticMentionXprTemplates A note about expression templates

This is an advanced topic that we explain on \ref TopicEigenExpressionTemplates "this page",
but it is useful to mention it briefly now. In Eigen, arithmetic operators such as \c operator+ don't
perform any computation by themselves; they just return an "expression object" describing the computation to be
performed. The actual computation happens later, when the whole expression is evaluated, typically in \c operator=.
While this might sound heavy, any modern optimizing compiler is able to optimize away that abstraction and
the result is perfectly optimized code. For example, when you do:
\code
VectorXf a(50), b(50), c(50), d(50);
...
a = 3*b + 4*c + 5*d;
\endcode
Eigen compiles it to just one for loop, so that the arrays are traversed only once. Simplifying (e.g. ignoring
SIMD optimizations), this loop looks like this:
\code
for(int i = 0; i < 50; ++i)
  a[i] = 3*b[i] + 4*c[i] + 5*d[i];
\endcode
Thus, you should not be afraid of using relatively large arithmetic expressions with Eigen: it only gives Eigen
more opportunities for optimization.

\section TutorialArithmeticTranspose Transposition and conjugation

The transpose \f$ a^T \f$, conjugate \f$ \bar{a} \f$, and adjoint (i.e., conjugate transpose) \f$ a^* \f$ of a matrix or vector \f$ a \f$ are obtained by the member functions \link DenseBase::transpose() transpose()\endlink, \link MatrixBase::conjugate() conjugate()\endlink, and \link MatrixBase::adjoint() adjoint()\endlink, respectively.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_transpose_conjugate.cpp
</td>
<td>
\verbinclude tut_arithmetic_transpose_conjugate.out
</td></tr></table>

For real matrices, \c conjugate() is a no-operation, and so \c adjoint() is equivalent to \c transpose().
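
For a complex-valued matrix, the three operations differ. The following sketch (not one of the shipped examples; the coefficients are arbitrary) shows all three side by side:
\code
#include <complex>
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix2cf m;
  m << std::complex<float>(1, 2), std::complex<float>(3, 4),
       std::complex<float>(5, 6), std::complex<float>(7, 8);
  std::cout << "m^T =\n"     << m.transpose() << std::endl;  // swaps rows and columns
  std::cout << "conj(m) =\n" << m.conjugate() << std::endl;  // conjugates each coefficient
  std::cout << "m^* =\n"     << m.adjoint()   << std::endl;  // conjugate transpose
}
\endcode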

As with the basic arithmetic operators, \c transpose() and \c adjoint() simply return a proxy object without doing the actual transposition. If you do <tt>b = a.transpose()</tt>, then the transpose is evaluated at the same time as the result is written into \c b. However, there is a complication here. If you do <tt>a = a.transpose()</tt>, then Eigen starts writing the result into \c a before the evaluation of the transpose is finished. Therefore, the instruction <tt>a = a.transpose()</tt> does not replace \c a with its transpose, as one would expect:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_transpose_aliasing.cpp
</td>
<td>
\verbinclude tut_arithmetic_transpose_aliasing.out
</td></tr></table>
This is the so-called \ref TopicAliasing "aliasing issue". In "debug mode", i.e., when \ref TopicAssertions "assertions" have not been disabled, such common pitfalls are automatically detected.

For \em in-place transposition, as in <tt>a = a.transpose()</tt>, simply use the \link DenseBase::transposeInPlace() transposeInPlace()\endlink function:
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_transpose_inplace.cpp
</td>
<td>
\verbinclude tut_arithmetic_transpose_inplace.out
</td></tr></table>
There is also the \link MatrixBase::adjointInPlace() adjointInPlace()\endlink function for complex matrices.
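
As a rough sketch of both in-place variants (not one of the shipped examples; square matrices are used here so the shape does not change):
\code
#include <complex>
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf a(2, 2);
  a << 1, 2,
       3, 4;
  a.transposeInPlace();            // replaces a with its transpose, without a temporary
  std::cout << "a^T =\n" << a << std::endl;

  Eigen::Matrix2cf m;
  m << std::complex<float>(1, 2), std::complex<float>(3, 4),
       std::complex<float>(5, 6), std::complex<float>(7, 8);
  m.adjointInPlace();              // replaces m with its conjugate transpose
  std::cout << "m^* =\n" << m << std::endl;
}
\endcode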

\section TutorialArithmeticMatrixMul Matrix-matrix and matrix-vector multiplication

Matrix-matrix multiplication is again done with \c operator*. Since vectors are a special
case of matrices, they are implicitly handled there too, so matrix-vector product is really just a special
case of matrix-matrix product, and so is vector-vector outer product. Thus, all these cases are handled by just
two operators:
\li binary operator * as in \c a*b
\li compound operator *= as in \c a*=b (this multiplies on the right: \c a*=b is equivalent to <tt>a = a*b</tt>)

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_matrix_mul.cpp
</td>
<td>
\verbinclude tut_arithmetic_matrix_mul.out
</td></tr></table>
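
To make the different cases concrete, here is a sketch (not one of the shipped examples; the sizes and values are arbitrary) mixing matrix-matrix, matrix-vector and vector-vector products:
\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix2d m;
  m << 1, 2,
       3, 4;
  Eigen::Vector2d u(-1, 1), v(2, 0);
  std::cout << "m * m =\n"   << m * m << std::endl;               // matrix-matrix product
  std::cout << "m * u =\n"   << m * u << std::endl;               // matrix-vector product
  std::cout << "u^T * m =\n" << u.transpose() * m << std::endl;   // row vector times matrix
  std::cout << "u^T * v = "  << u.transpose() * v << std::endl;   // inner product, a 1x1 matrix
  std::cout << "u * v^T =\n" << u * v.transpose() << std::endl;   // outer product, a 2x2 matrix
  m *= m;                                                         // equivalent to m = m * m
  std::cout << "after m *= m, m =\n" << m << std::endl;
}
\endcode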

Note: if you read the above paragraph on expression templates and are worried that doing \c m=m*m might cause
aliasing issues, be reassured for now: Eigen treats matrix multiplication as a special case and takes care of
introducing a temporary here, so it will compile \c m=m*m as:
\code
tmp = m*m;
m = tmp;
\endcode
If you know your matrix product can be safely evaluated into the destination matrix without aliasing issues, then you can use the \link MatrixBase::noalias() noalias()\endlink function to avoid the temporary, e.g.:
\code
c.noalias() += a * b;
\endcode
For more details on this topic, see the page on \ref TopicAliasing "aliasing".
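
In context, a sketch of \c noalias() (not one of the shipped examples; the sizes are arbitrary) could look like this. Since \c c is a distinct matrix, writing the product directly into it is safe:
\code
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXd a = Eigen::MatrixXd::Random(50, 50);
  Eigen::MatrixXd b = Eigen::MatrixXd::Random(50, 50);
  Eigen::MatrixXd c = Eigen::MatrixXd::Zero(50, 50);

  c.noalias() += a * b;   // safe: c does not alias a or b, so no temporary is needed
  // c += a * c;          // would alias; without noalias(), Eigen inserts a temporary for safety
}
\endcode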

\b Note: for BLAS users worried about performance, expressions such as <tt>c.noalias() -= 2 * a.adjoint() * b;</tt> are fully optimized and trigger a single gemm-like function call.

\section TutorialArithmeticDotAndCross Dot product and cross product

For dot product and cross product, you need the \link MatrixBase::dot() dot()\endlink and \link MatrixBase::cross() cross()\endlink methods. Of course, the dot product can also be obtained as a 1x1 matrix as <tt>u.adjoint()*v</tt>.
<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_dot_cross.cpp
</td>
<td>
\verbinclude tut_arithmetic_dot_cross.out
</td></tr></table>

Remember that the cross product is only defined for vectors of size 3, while the dot product works for vectors of any size.
When using complex numbers, Eigen's dot product is conjugate-linear in the first variable and linear in the
second variable.

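As a brief sketch (not one of the shipped examples; the values are arbitrary), both products can be used as follows:
\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Vector3d u(1, 2, 3), v(0, 1, 2);
  std::cout << "u . v = "  << u.dot(v)   << std::endl;        // dot product, a scalar
  std::cout << "u x v =\n" << u.cross(v) << std::endl;        // cross product, a Vector3d
  std::cout << "u^* v = "  << u.adjoint() * v << std::endl;   // same value as a 1x1 matrix
}
\endcode
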
\section TutorialArithmeticRedux Basic arithmetic reduction operations
Eigen also provides some reduction operations to reduce a given matrix or vector to a single value such as the sum (computed by \link DenseBase::sum() sum()\endlink), product (\link DenseBase::prod() prod()\endlink), or the maximum (\link DenseBase::maxCoeff() maxCoeff()\endlink) and minimum (\link DenseBase::minCoeff() minCoeff()\endlink) of all its coefficients.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_redux_basic.cpp
</td>
<td>
\verbinclude tut_arithmetic_redux_basic.out
</td></tr></table>

The \em trace of a matrix, as returned by the function \link MatrixBase::trace() trace()\endlink, is the sum of the diagonal coefficients and can also be computed as efficiently using <tt>a.diagonal().sum()</tt>, as we will see later on.
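
For instance, a sketch of these reductions (not one of the shipped examples; the values are arbitrary):
\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix2d m;
  m << 1, 2,
       3, 4;
  std::cout << "sum:   " << m.sum()      << std::endl;   // 10
  std::cout << "prod:  " << m.prod()     << std::endl;   // 24
  std::cout << "max:   " << m.maxCoeff() << std::endl;   // 4
  std::cout << "min:   " << m.minCoeff() << std::endl;   // 1
  std::cout << "trace: " << m.trace()    << std::endl;   // 5, same as m.diagonal().sum()
}
\endcode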

There also exist variants of the \c minCoeff and \c maxCoeff functions returning the coordinates of the respective coefficient via pointer arguments:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_redux_minmax.cpp
</td>
<td>
\verbinclude tut_arithmetic_redux_minmax.out
</td></tr></table>
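
A sketch of these index-returning variants (not one of the shipped examples; the values are arbitrary):
\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf m(2, 2);
  m << 1, 2,
       3, 4;
  Eigen::MatrixXf::Index maxRow, maxCol;
  float maxValue = m.maxCoeff(&maxRow, &maxCol);   // value and its (row, column) position
  std::cout << "max " << maxValue << " at (" << maxRow << "," << maxCol << ")" << std::endl;

  Eigen::VectorXf v(4);
  v << 7, -3, 5, 0;
  Eigen::VectorXf::Index minIndex;
  float minValue = v.minCoeff(&minIndex);          // for vectors, a single index is enough
  std::cout << "min " << minValue << " at index " << minIndex << std::endl;
}
\endcode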


\section TutorialArithmeticValidity Validity of operations
Eigen checks the validity of the operations that you perform. When possible,
it checks them at compile time, producing compilation errors. These error messages can be long and ugly,
but Eigen writes the important message in UPPERCASE_LETTERS_SO_IT_STANDS_OUT. For example:
\code
  Matrix3f m;
  Vector4f v;
  v = m*v;      // Compile-time error: YOU_MIXED_MATRICES_OF_DIFFERENT_SIZES
\endcode

Of course, in many cases, for example when checking dynamic sizes, the check cannot be performed at compile time.
Eigen then uses runtime assertions. This means that the program will abort with an error message when executing an illegal operation if it is run in "debug mode", and it will probably crash if assertions are turned off.

\code
  MatrixXf m(3,3);
  VectorXf v(4);
  v = m * v; // Run-time assertion failure here: "invalid matrix product"
\endcode

For more details on this topic, see \ref TopicAssertions "this page".

\li \b Next: \ref TutorialArrayClass

*/

}