      1 namespace Eigen {
      2 
      3 /** \eigenManualPage TutorialReductionsVisitorsBroadcasting Reductions, visitors and broadcasting
      4 
      5 This page explains Eigen's reductions, visitors and broadcasting and how they are used with
      6 \link MatrixBase matrices \endlink and \link ArrayBase arrays \endlink.
      7 
      8 \eigenAutoToc
      9 
     10 \section TutorialReductionsVisitorsBroadcastingReductions Reductions
     11 In Eigen, a reduction is a function taking a matrix or array, and returning a single
     12 scalar value. One of the most used reductions is \link DenseBase::sum() .sum() \endlink,
     13 returning the sum of all the coefficients inside a given matrix or array.
     14 
     15 <table class="example">
     16 <tr><th>Example:</th><th>Output:</th></tr>
     17 <tr><td>
     18 \include tut_arithmetic_redux_basic.cpp
     19 </td>
     20 <td>
     21 \verbinclude tut_arithmetic_redux_basic.out
     22 </td></tr></table>
     23 
The \em trace of a matrix, as returned by the function \c trace(), is the sum of the diagonal coefficients; it can equivalently be computed as <tt>a.diagonal().sum()</tt>.
     25 
     26 
     27 \subsection TutorialReductionsVisitorsBroadcastingReductionsNorm Norm computations
     28 
The (Euclidean a.k.a. \f$\ell^2\f$) squared norm of a vector can be obtained by calling \link MatrixBase::squaredNorm() squaredNorm() \endlink. It is equal to the dot product of the vector with itself, and equivalently to the sum of the squared absolute values of its coefficients.
     30 
     31 Eigen also provides the \link MatrixBase::norm() norm() \endlink method, which returns the square root of \link MatrixBase::squaredNorm() squaredNorm() \endlink.
     32 
These operations can also operate on matrices; in that case, an n-by-p matrix is seen as a vector of size (n*p), so for example the \link MatrixBase::norm() norm() \endlink method returns the "Frobenius" or "Hilbert-Schmidt" norm. We refrain from speaking of the \f$\ell^2\f$ norm of a matrix because that can mean different things.
     34 
If you want other \f$\ell^p\f$ norms, use the \link MatrixBase::lpNorm() lpNorm<p>() \endlink method. The template parameter \a p can take the special value \a Infinity if you want the \f$\ell^\infty\f$ norm, which is the maximum of the absolute values of the coefficients.
     36 
     37 The following example demonstrates these methods.
     38 
     39 <table class="example">
     40 <tr><th>Example:</th><th>Output:</th></tr>
     41 <tr><td>
     42 \include Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp
     43 </td>
     44 <td>
     45 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.out
     46 </td></tr></table>
     47 
     48 \subsection TutorialReductionsVisitorsBroadcastingReductionsBool Boolean reductions
     49 
     50 The following reductions operate on boolean values:
     51   - \link DenseBase::all() all() \endlink returns \b true if all of the coefficients in a given Matrix or Array evaluate to \b true .
     52   - \link DenseBase::any() any() \endlink returns \b true if at least one of the coefficients in a given Matrix or Array evaluates to \b true .
     53   - \link DenseBase::count() count() \endlink returns the number of coefficients in a given Matrix or Array that evaluate to  \b true.
     54 
     55 These are typically used in conjunction with the coefficient-wise comparison and equality operators provided by Array. For instance, <tt>array > 0</tt> is an %Array of the same size as \c array , with \b true at those positions where the corresponding coefficient of \c array is positive. Thus, <tt>(array > 0).all()</tt> tests whether all coefficients of \c array are positive. This can be seen in the following example:
     56 
     57 <table class="example">
     58 <tr><th>Example:</th><th>Output:</th></tr>
     59 <tr><td>
     60 \include Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp
     61 </td>
     62 <td>
     63 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.out
     64 </td></tr></table>
     65 
     66 \subsection TutorialReductionsVisitorsBroadcastingReductionsUserdefined User defined reductions
     67 
     68 TODO
     69 
     70 In the meantime you can have a look at the DenseBase::redux() function.
     71 
     72 \section TutorialReductionsVisitorsBroadcastingVisitors Visitors
     73 Visitors are useful when one wants to obtain the location of a coefficient inside 
     74 a Matrix or Array. The simplest examples are 
     75 \link MatrixBase::maxCoeff() maxCoeff(&x,&y) \endlink and 
     76 \link MatrixBase::minCoeff() minCoeff(&x,&y)\endlink, which can be used to find
     77 the location of the greatest or smallest coefficient in a Matrix or 
     78 Array.
     79 
     80 The arguments passed to a visitor are pointers to the variables where the
     81 row and column position are to be stored. These variables should be of type
     82 \link DenseBase::Index Index \endlink, as shown below:
     83 
     84 <table class="example">
     85 <tr><th>Example:</th><th>Output:</th></tr>
     86 <tr><td>
     87 \include Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp
     88 </td>
     89 <td>
     90 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_visitors.out
     91 </td></tr></table>
     92 
Note that both functions also return the value of the minimum or maximum coefficient if needed,
as if it were a typical reduction operation.
     95 
     96 \section TutorialReductionsVisitorsBroadcastingPartialReductions Partial reductions
     97 Partial reductions are reductions that can operate column- or row-wise on a Matrix or 
     98 Array, applying the reduction operation on each column or row and 
     99 returning a column or row-vector with the corresponding values. Partial reductions are applied 
    100 with \link DenseBase::colwise() colwise() \endlink or \link DenseBase::rowwise() rowwise() \endlink.
    101 
    102 A simple example is obtaining the maximum of the elements 
    103 in each column in a given matrix, storing the result in a row-vector:
    104 
    105 <table class="example">
    106 <tr><th>Example:</th><th>Output:</th></tr>
    107 <tr><td>
    108 \include Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp
    109 </td>
    110 <td>
    111 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_colwise.out
    112 </td></tr></table>
    113 
    114 The same operation can be performed row-wise:
    115 
    116 <table class="example">
    117 <tr><th>Example:</th><th>Output:</th></tr>
    118 <tr><td>
    119 \include Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp
    120 </td>
    121 <td>
    122 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_rowwise.out
    123 </td></tr></table>
    124 
<b>Note that column-wise operations return a 'row-vector' while row-wise operations
return a 'column-vector'.</b>
    127 
    128 \subsection TutorialReductionsVisitorsBroadcastingPartialReductionsCombined Combining partial reductions with other operations
    129 It is also possible to use the result of a partial reduction to do further processing.
    130 Here is another example that finds the column whose sum of elements is the maximum
    131  within a matrix. With column-wise partial reductions this can be coded as:
    132 
    133 <table class="example">
    134 <tr><th>Example:</th><th>Output:</th></tr>
    135 <tr><td>
    136 \include Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp
    137 </td>
    138 <td>
    139 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_maxnorm.out
    140 </td></tr></table>
    141 
The previous example applies the \link DenseBase::sum() sum() \endlink reduction to each column
through the \link DenseBase::colwise() colwise() \endlink expression, obtaining a new matrix whose
size is 1x4.
    145 
    146 Therefore, if
    147 \f[
    148 \mbox{m} = \begin{bmatrix} 1 & 2 & 6 & 9 \\
    149                     3 & 1 & 7 & 2 \end{bmatrix}
    150 \f]
    151 
    152 then
    153 
    154 \f[
    155 \mbox{m.colwise().sum()} = \begin{bmatrix} 4 & 3 & 13 & 11 \end{bmatrix}
    156 \f]
    157 
    158 The \link DenseBase::maxCoeff() maxCoeff() \endlink reduction is finally applied 
    159 to obtain the column index where the maximum sum is found, 
    160 which is the column index 2 (third column) in this case.
    161 
    162 
    163 \section TutorialReductionsVisitorsBroadcastingBroadcasting Broadcasting
    164 The concept behind broadcasting is similar to partial reductions, with the difference that broadcasting 
    165 constructs an expression where a vector (column or row) is interpreted as a matrix by replicating it in 
    166 one direction.
    167 
    168 A simple example is to add a certain column-vector to each column in a matrix. 
    169 This can be accomplished with:
    170 
    171 <table class="example">
    172 <tr><th>Example:</th><th>Output:</th></tr>
    173 <tr><td>
    174 \include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp
    175 </td>
    176 <td>
    177 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.out
    178 </td></tr></table>
    179 
We can interpret the instruction <tt>mat.colwise() += v</tt> in two equivalent ways. It adds the vector \c v
to every column of the matrix. Alternatively, it can be interpreted as repeating the vector \c v four times to
form a two-by-four matrix which is then added to \c mat:
    183 \f[
    184 \begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix}
    185 + \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}
    186 = \begin{bmatrix} 1 & 2 & 6 & 9 \\ 4 & 2 & 8 & 3 \end{bmatrix}.
    187 \f]
    188 The operators <tt>-=</tt>, <tt>+</tt> and <tt>-</tt> can also be used column-wise and row-wise. On arrays, we 
    189 can also use the operators <tt>*=</tt>, <tt>/=</tt>, <tt>*</tt> and <tt>/</tt> to perform coefficient-wise 
    190 multiplication and division column-wise or row-wise. These operators are not available on matrices because it
is not clear what they would do. If you want to multiply column 0 of a matrix \c mat with \c v(0), column 1 with
    192 \c v(1), and so on, then use <tt>mat = mat * v.asDiagonal()</tt>.
    193 
It is important to point out that the vector to be added column-wise or row-wise must be of type Vector,
and cannot be a Matrix. If this condition is not met then you will get a compile-time error. This also means that
broadcasting operations can only be applied with an object of type Vector, when operating with a Matrix.
The same applies for the Array class, where the equivalent for VectorXf is ArrayXf. As always, you should
not mix arrays and matrices in the same expression.
    199 
    200 To perform the same operation row-wise we can do:
    201 
    202 <table class="example">
    203 <tr><th>Example:</th><th>Output:</th></tr>
    204 <tr><td>
    205 \include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp
    206 </td>
    207 <td>
    208 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.out
    209 </td></tr></table>
    210 
    211 \subsection TutorialReductionsVisitorsBroadcastingBroadcastingCombined Combining broadcasting with other operations
    212 Broadcasting can also be combined with other operations, such as Matrix or Array operations, 
    213 reductions and partial reductions.
    214 
Now that broadcasting, reductions and partial reductions have been introduced, we can dive into a more advanced example that finds
the nearest neighbour of a vector <tt>v</tt> within the columns of a matrix <tt>m</tt>. The Euclidean distance is used in this example,
computing the squared Euclidean distance with the partial reduction \link MatrixBase::squaredNorm() squaredNorm() \endlink:
    218 
    219 <table class="example">
    220 <tr><th>Example:</th><th>Output:</th></tr>
    221 <tr><td>
    222 \include Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp
    223 </td>
    224 <td>
    225 \verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.out
    226 </td></tr></table>
    227 
    228 The line that does the job is 
    229 \code
    230   (m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
    231 \endcode
    232 
    233 We will go step by step to understand what is happening:
    234 
    235   - <tt>m.colwise() - v</tt> is a broadcasting operation, subtracting <tt>v</tt> from each column in <tt>m</tt>. The result of this operation
    236 is a new matrix whose size is the same as matrix <tt>m</tt>: \f[
    237   \mbox{m.colwise() - v} = 
    238   \begin{bmatrix}
    239     -1 & 21 & 4 & 7 \\
    240      0 & 8  & 4 & -1
    241   \end{bmatrix}
    242 \f]
    243 
    244   - <tt>(m.colwise() - v).colwise().squaredNorm()</tt> is a partial reduction, computing the squared norm column-wise. The result of
    245 this operation is a row-vector where each coefficient is the squared Euclidean distance between each column in <tt>m</tt> and <tt>v</tt>: \f[
    246   \mbox{(m.colwise() - v).colwise().squaredNorm()} =
    247   \begin{bmatrix}
    248      1 & 505 & 32 & 50
    249   \end{bmatrix}
    250 \f]
    251 
    252   - Finally, <tt>minCoeff(&index)</tt> is used to obtain the index of the column in <tt>m</tt> that is closest to <tt>v</tt> in terms of Euclidean
    253 distance.
    254 
    255 */
    256 
    257 }
    258