                        Kernels in gemmlowp
                        *******************


Kernels provide an inner-loop implementation, and a format
==========================================================

Here we assume familiarity with the concepts of kernels and of packing
as explained in doc/design.txt.

gemmlowp is designed to be easily extensible to different architectures and
other low-level details, while achieving high performance. Thus a line had to
be drawn between the generic GEMM code and the specific parts that need to
be manually designed for each architecture, etc. The design choice made in
gemmlowp is to have easily swappable GEMM kernels.

In itself, a GEMM kernel is just an implementation of the inner-most loop
in a GEMM (that inner-most loop has to be over the 'depth' dimension, so as
to accumulate into a small enough number of accumulators to fit in
registers).

Thus, by itself, a GEMM kernel should be just a function computing a block
of the GEMM result.

However, GEMM kernels may need to differ not just in how they implement this
computation, but also in the format of data that they operate on. Indeed,
in order to maximize the ratio of arithmetic instructions to memory access
instructions, GEMM kernels want to handle blocks as wide as possible given
the number of registers of the CPU architecture.

Thus, in order to allow efficient specialization to diverse architectures,
gemmlowp allows each GEMM kernel to dictate the format of data that it expects,
in addition to providing its inner-loop implementation.

The former is given by a 'Format' typedef, and the latter by a 'Run'
method.
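To make this pattern concrete before looking at a real NEON kernel, here is
a deliberately tiny scalar sketch: compile-time constants standing in for the
Format, and a Run method that is the inner loop over depth. All names and
layouts here (ToyKernel2x2, the array orders) are illustrative assumptions,
not gemmlowp's actual interface.

```cpp
#include <cassert>
#include <cstdint>

// Toy kernel illustrating the pattern: a compile-time 'format'
// (here just two constants) plus a Run method implementing the
// inner-most loop over depth. Illustrative only -- not gemmlowp's API.
struct ToyKernel2x2 {
  static const int kRows = 2;  // lhs rows handled per call
  static const int kCols = 2;  // rhs columns handled per call

  // acc is a kRows x kCols column-major block of 32-bit accumulators.
  // lhs/rhs hold 'depth' slices of kRows (resp. kCols) uint8 values each.
  static void Run(std::int32_t* acc, const std::uint8_t* lhs,
                  const std::uint8_t* rhs, int depth) {
    for (int d = 0; d < depth; d++) {
      for (int c = 0; c < kCols; c++) {
        for (int r = 0; r < kRows; r++) {
          acc[c * kRows + r] += lhs[d * kRows + r] * rhs[d * kCols + c];
        }
      }
    }
  }
};
```

A real kernel differs mainly in that it handles a much wider block, to
amortize each load over many multiply-accumulates, and in that lhs/rhs
arrive pre-packed in the cell layout that its Format declares.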

A good example is to look at internal/kernel_neon.h, and specifically at
the NEONKernel12x4Depth2 kernel, which specifies its format as

  typedef KernelFormat<KernelSideFormat<CellFormat<4, 2>, 3>,
                       KernelSideFormat<CellFormat<4, 2>, 1> > Format;

The meaning of these terms is explained in the lengthy comment at the
top of internal/kernel.h. Here, they mean that this kernel handles at
each iteration (along the depth dimension):
  - 3 'cells' of size 4x2 each of the lhs, so a total lhs block
    of size 12x2
  - 1 'cell' of size 2x4 of the rhs.
In other words, this kernel handles 12 rows of the lhs and 4 columns of the
rhs, and handles two levels of depth at once. The 'cells' and 'CellFormat'
template parameters detail the layout of these 12x2 and 2x4 blocks.
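As a sanity check on this arithmetic, the overall kernel dimensions follow
directly from the Format's template parameters; a minimal sketch (the
constant names below are ours, not gemmlowp's):

```cpp
#include <cassert>

// Constants read off the Format typedef above; the names are ours.
const int kCellWidth = 4;  // width of each CellFormat<4, 2> cell
const int kCellDepth = 2;  // depth of each CellFormat<4, 2> cell
const int kLhsCells = 3;   // KernelSideFormat<..., 3> on the lhs side
const int kRhsCells = 1;   // KernelSideFormat<..., 1> on the rhs side

// Each side's width is the cell width times the number of cells.
const int kKernelRows = kCellWidth * kLhsCells;  // 12 lhs rows per iteration
const int kKernelCols = kCellWidth * kRhsCells;  // 4 rhs columns per iteration
```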

This kernel then loads these 12x2 and 2x4 blocks and computes the corresponding
12x4 GEMM; for ease of reference let us paste the critical comment and code here:

      "loop_NEONKernel12x4Depth2_%=:\n"

        // Overview of register layout:
        //
        // A 2x4 cell of Rhs is stored in 16bit in d0--d1 (q0).
        // A 12x2 block of 3 4x2 cells Lhs is stored in 16bit in d2--d7
        // (q1--q3).
        // A 12x4 block of accumulators is stored in 32bit in q4--q15.
        //
        //                   +-----+-----+-----+-----+
        //                   |d0[0]|d0[1]|d0[2]|d0[3]|
        //              Rhs  +-----+-----+-----+-----+
        //                   |d1[0]|d1[1]|d1[2]|d1[3]|
        //                   +-----+-----+-----+-----+
        //
        //                   |     |     |     |     |
        //
        //    Lhs            |     |     |     |     |
        //
        //  +--+--+ - - - -  +-----+-----+-----+-----+
        //  |d2|d3|          | q4  | q5  | q6  | q7  |
        //  |d2|d3|          | q4  | q5  | q6  | q7  |
        //  |d2|d3|          | q4  | q5  | q6  | q7  |
        //  |d2|d3|          | q4  | q5  | q6  | q7  |
        //  +--+--+ - - - -  +-----+-----+-----+-----+
        //  |d4|d5|          | q8  | q9  | q10 | q11 |
        //  |d4|d5|          | q8  | q9  | q10 | q11 |
        //  |d4|d5|          | q8  | q9  | q10 | q11 |
        //  |d4|d5|          | q8  | q9  | q10 | q11 |
        //  +--+--+ - - - -  +-----+-----+-----+-----+
        //  |d6|d7|          | q12 | q13 | q14 | q15 |
        //  |d6|d7|          | q12 | q13 | q14 | q15 |
        //  |d6|d7|          | q12 | q13 | q14 | q15 |
        //  |d6|d7|          | q12 | q13 | q14 | q15 |
        //  +--+--+ - - - -  +-----+-----+-----+-----+
        //
        //                            Accumulator

        // Load 1 Rhs cell of size 2x4
        "vld1.8 {d0}, [%[rhs_ptr]:64]!\n"

        // Load 3 Lhs cells of size 4x2 each
        "vld1.8 {d2}, [%[lhs_ptr]:64]!\n"
        "vld1.8 {d4}, [%[lhs_ptr]:64]!\n"
        "vld1.8 {d6}, [%[lhs_ptr]:64]!\n"

        // Expand Lhs/Rhs cells to 16 bit.
        "vmovl.u8 q0, d0\n"
        "vmovl.u8 q1, d2\n"
        "vmovl.u8 q2, d4\n"
        "vmovl.u8 q3, d6\n"

        // Multiply-accumulate, level of depth 0
        "vmlal.u16 q4, d2, d0[0]\n"
        "vmlal.u16 q5, d2, d0[1]\n"
        "vmlal.u16 q6, d2, d0[2]\n"
        "vmlal.u16 q7, d2, d0[3]\n"
        "vmlal.u16 q8, d4, d0[0]\n"
        "vmlal.u16 q9, d4, d0[1]\n"
        "vmlal.u16 q10, d4, d0[2]\n"
        "vmlal.u16 q11, d4, d0[3]\n"
        "vmlal.u16 q12, d6, d0[0]\n"
        "vmlal.u16 q13, d6, d0[1]\n"
        "vmlal.u16 q14, d6, d0[2]\n"
        "vmlal.u16 q15, d6, d0[3]\n"

        // Multiply-accumulate, level of depth 1
        "vmlal.u16 q4, d3, d1[0]\n"
        "vmlal.u16 q5, d3, d1[1]\n"
        "vmlal.u16 q6, d3, d1[2]\n"
        "vmlal.u16 q7, d3, d1[3]\n"
        "vmlal.u16 q8, d5, d1[0]\n"
        "vmlal.u16 q9, d5, d1[1]\n"
        "vmlal.u16 q10, d5, d1[2]\n"
        "vmlal.u16 q11, d5, d1[3]\n"
        "vmlal.u16 q12, d7, d1[0]\n"
        "vmlal.u16 q13, d7, d1[1]\n"
        "vmlal.u16 q14, d7, d1[2]\n"
        "vmlal.u16 q15, d7, d1[3]\n"

        // Loop. Decrement loop index (depth) by 2, since we just handled 2
        // levels of depth (Kernel::kDepth=2).
        "subs %[run_depth], #2\n"
        "bne loop_NEONKernel12x4Depth2_%=\n"
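For readers less fluent in NEON assembly, the arithmetic that one loop
iteration performs can be sketched in plain scalar C++. The cell layouts are
flattened here into simple column-major/row-major arrays for clarity; this is
an illustrative sketch under that simplifying assumption, not gemmlowp's
actual reference kernel or packed storage order.

```cpp
#include <cassert>
#include <cstdint>

// Scalar sketch of the arithmetic performed by the assembly above:
// a 12x4 block of 32-bit accumulators updated along the depth dimension.
// Layout assumptions (ours, for clarity): acc is 12x4 column-major,
// lhs holds 12 rows per depth level, rhs holds 4 columns per depth level.
void ReferenceKernel12x4(std::int32_t* acc, const std::uint8_t* lhs,
                         const std::uint8_t* rhs, int depth) {
  for (int d = 0; d < depth; d++) {        // asm handles 2 levels per pass
    for (int col = 0; col < 4; col++) {    // the 4 rhs columns
      for (int row = 0; row < 12; row++) { // the 12 lhs rows
        acc[col * 12 + row] +=
            static_cast<std::int32_t>(lhs[d * 12 + row]) * rhs[d * 4 + col];
      }
    }
  }
}
```

The assembly gets its speed from doing all 48 multiply-accumulates of a
depth level as twelve vmlal.u16 instructions, with every operand already
resident in registers.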



Packing code adapts to the format chosen by the kernel
======================================================

As explained in doc/design.txt, gemmlowp starts by packing blocks of the
lhs and rhs matrices for optimally efficient traversal by the kernel. This
depends on fine details of the kernel format, in ways that can only be
efficiently handled by knowing these kernel format details at compile-time.

This is the reason why all the code in internal/pack.h is templated on
the corresponding kernel format.

The code in internal/pack.h isn't tightly optimized by itself, but it is
structured in such a way that the critical code is in a template,
  PackingRegisterBlock,
that can easily be specialized to override the slow generic code with
fast specific packing code for specific formats, on specific platforms.

See internal/pack_neon.h, which provides NEON specializations of the
packing code for the particular kernel formats that are used by the NEON
kernels in internal/kernel_neon.h.
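The specialization pattern itself can be sketched as follows. This is a
simplified illustration with made-up names (only PackingRegisterBlock is
real); the point is that the primary template is the slow generic path,
while a full specialization for one concrete kernel format can substitute
tightly optimized platform code without touching the generic machinery.

```cpp
#include <cassert>
#include <cstring>

// Stand-ins for kernel format types; names are ours, for illustration.
struct GenericFormat {};
struct NeonFormat12x4 {};

// Primary template: the slow, works-everywhere packing path.
template <typename KernelFormat>
struct PackBlock {
  static const char* Path() { return "generic"; }
};

// Full specialization for one concrete format. In real code (see
// internal/pack_neon.h) this would contain NEON intrinsics or assembly
// tuned to the exact cell layout that the kernel expects.
template <>
struct PackBlock<NeonFormat12x4> {
  static const char* Path() { return "specialized"; }
};
```

Because the kernel format is a compile-time type, the compiler statically
selects the fast path wherever a specialization exists, with no runtime
dispatch cost.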


Wrapping up: how to optimize gemmlowp for a CPU architecture
============================================================

In conclusion, the key feature of gemmlowp when it comes to efficiently
supporting a specific CPU architecture is that it allows one to freely replace
the inner loop of the GEMM by providing one's own GEMM kernel, which is
also free to dictate its required data layout; each data layout then also
needs optimized packing code. The steps are thus:
  1) Freely design a GEMM kernel with a freely chosen data layout.
  2) Implement the GEMM kernel, similar to internal/kernel_neon.h.
  3) Implement the optimized packing code, similar to internal/pack_neon.h.
    177