==============================================
LLVM Atomic Instructions and Concurrency Guide
==============================================

.. contents::
   :local:

Introduction
============

Historically, LLVM has not had very strong support for concurrency; some
minimal intrinsics were provided, and ``volatile`` was used in some cases to
achieve rough semantics in the presence of concurrency.  However, this is
changing; there are now new instructions which are well-defined in the presence
of threads and asynchronous signals, and the model for existing instructions
has been clarified in the IR.

The atomic instructions are designed specifically to provide readable IR and
optimized code generation for the following:

* The new C++0x ``<atomic>`` header.  (`C++0x draft available here
  <http://www.open-std.org/jtc1/sc22/wg21/>`_.) (`C1x draft available here
  <http://www.open-std.org/jtc1/sc22/wg14/>`_.)

* Proper semantics for Java-style memory, for both ``volatile`` and regular
  shared variables. (`Java Specification
  <http://java.sun.com/docs/books/jls/third_edition/html/memory.html>`_)

* gcc-compatible ``__sync_*`` builtins. (`Description
  <http://gcc.gnu.org/onlinedocs/gcc/Atomic-Builtins.html>`_)

* Other scenarios with atomic semantics, including ``static`` variables with
  non-trivial constructors in C++.

Atomic and volatile in the IR are orthogonal; "volatile" is the C/C++ volatile,
which ensures that every volatile load and store happens and is performed in
the stated order.  A couple of examples: if a SequentiallyConsistent store is
immediately followed by another SequentiallyConsistent store to the same
address, the first store can be erased. This transformation is not allowed for
a pair of volatile stores. On the other hand, a non-volatile non-atomic load
can be moved across a volatile load freely, but not an Acquire load.
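
The store-erasure case can be written directly in IR. The following is a
minimal sketch (``@g`` is a hypothetical global, and the exact IR syntax
varies slightly between LLVM releases):

.. code-block:: llvm

  @g = global i32 0

  define void @atomic_pair() {
    ; The first seq_cst store is immediately overwritten, so the
    ; optimizer may erase it.
    store atomic i32 1, i32* @g seq_cst, align 4
    store atomic i32 2, i32* @g seq_cst, align 4
    ret void
  }

  define void @volatile_pair() {
    ; Both volatile stores must be performed, in the stated order;
    ; neither may be removed.
    store volatile i32 1, i32* @g
    store volatile i32 2, i32* @g
    ret void
  }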

This document provides a guide for anyone writing a frontend for LLVM or
working on optimization passes for LLVM, explaining how to deal with
instructions with special semantics in the presence of concurrency.  This is
not intended to be a precise guide to the semantics; the details can get
extremely complicated and unreadable, and are not usually necessary.

.. _Optimization outside atomic:

Optimization outside atomic
===========================

The basic ``'load'`` and ``'store'`` allow a variety of optimizations, but can
lead to undefined results in a concurrent environment; see `NotAtomic`_. This
section covers the one optimizer restriction which applies in concurrent
environments; it gets an extended description because any optimization dealing
with stores needs to be aware of it.

From the optimizer's point of view, the rule is that if there are not any
instructions with atomic ordering involved, concurrency does not matter, with
one exception: if a variable might be visible to another thread or signal
handler, a store cannot be inserted along a path where it might not execute
otherwise.  Take the following example:

.. code-block:: c

  /* C code, for readability; run through clang -O2 -S -emit-llvm to get
     equivalent IR */
  int x;
  void f(int* a) {
    for (int i = 0; i < 100; i++) {
      if (a[i])
        x += 1;
    }
  }

The following is equivalent in non-concurrent situations:

.. code-block:: c

  int x;
  void f(int* a) {
    int xtemp = x;
    for (int i = 0; i < 100; i++) {
      if (a[i])
        xtemp += 1;
    }
    x = xtemp;
  }

However, LLVM is not allowed to transform the former to the latter: it could
indirectly introduce undefined behavior if another thread can access ``x`` at
the same time. (This example is particularly of interest because before the
concurrency model was implemented, LLVM would perform this transformation.)

Note that speculative loads are allowed; a load which is part of a race returns
``undef``, but does not have undefined behavior.
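
For example, hoisting a load above the branch that guards it is legal, because
the speculated load at worst returns ``undef``. A minimal sketch (``@x`` is a
hypothetical shared global):

.. code-block:: llvm

  @x = global i32 0

  ; Before: the load executes only when %cond is true.
  define i32 @before(i1 %cond) {
  entry:
    br i1 %cond, label %then, label %done
  then:
    %v = load i32, i32* @x
    br label %done
  done:
    %r = phi i32 [ %v, %then ], [ 0, %entry ]
    ret i32 %r
  }

  ; After: the load is speculated. Even if it races, the result is at
  ; worst undef, so the function still has defined behavior.
  define i32 @after(i1 %cond) {
  entry:
    %v = load i32, i32* @x
    %r = select i1 %cond, i32 %v, i32 0
    ret i32 %r
  }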

Atomic instructions
===================

For cases where simple loads and stores are not sufficient, LLVM provides
various atomic instructions. The exact guarantees provided depend on the
ordering; see `Atomic orderings`_.

``load atomic`` and ``store atomic`` provide the same basic functionality as
non-atomic loads and stores, but provide additional guarantees in situations
where threads and signals are involved.

``cmpxchg`` and ``atomicrmw`` are essentially like an atomic load followed by an
atomic store (where the store is conditional for ``cmpxchg``), but no other
memory operation can happen on any thread between the load and store.

A ``fence`` provides Acquire and/or Release ordering which is not part of
another operation; it is normally used along with Monotonic memory operations.
A Monotonic load followed by an Acquire fence is roughly equivalent to an
Acquire load.
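
As a rough sketch, the IR forms of these instructions look like the following
(the syntax shown is that of recent LLVM releases and differs slightly in
older ones; the values are placeholders):

.. code-block:: llvm

  define void @examples(i32* %ptr, i32 %old, i32 %new) {
    ; Atomic load and store, with explicit ordering and alignment.
    %v = load atomic i32, i32* %ptr acquire, align 4
    store atomic i32 %new, i32* %ptr release, align 4

    ; Atomic read-modify-write; returns the previous value.
    %prev = atomicrmw add i32* %ptr, i32 1 monotonic

    ; Compare-and-exchange; returns the loaded value and a success bit.
    %pair = cmpxchg i32* %ptr, i32 %old, i32 %new seq_cst seq_cst
    %loaded = extractvalue { i32, i1 } %pair, 0
    %success = extractvalue { i32, i1 } %pair, 1

    ; A fence which is not tied to any particular location.
    fence acquire
    ret void
  }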

Frontends generating atomic instructions generally need to be aware of the
target to some degree; atomic instructions are guaranteed to be lock-free, and
therefore an instruction which is wider than the target natively supports can
be impossible to generate.

.. _Atomic orderings:

Atomic orderings
================

In order to achieve a balance between performance and necessary guarantees,
there are six levels of atomicity. They are listed in order of strength; each
level includes all the guarantees of the previous level except for
Acquire/Release. (See also `LangRef Ordering <LangRef.html#ordering>`_.)

.. _NotAtomic:

NotAtomic
---------

NotAtomic is the obvious: a load or store which is not atomic. (This isn't
really a level of atomicity, but is listed here for comparison.) This is
essentially a regular load or store. If there is a race on a given memory
location, loads from that location return ``undef``.

Relevant standard
  This is intended to match shared variables in C/C++, and to be used in any
  other context where memory access is necessary and a race is impossible. (The
  precise definition is in `LangRef Memory Model <LangRef.html#memmodel>`_.)

Notes for frontends
  The rule is essentially that all memory accessed with basic loads and stores
  by multiple threads should be protected by a lock or other synchronization;
  otherwise, you are likely to run into undefined behavior. If your frontend is
  for a "safe" language like Java, use Unordered to load and store any shared
  variable.  Note that NotAtomic volatile loads and stores are not properly
  atomic; do not try to use them as a substitute. (Per the C/C++ standards,
  volatile does provide some limited guarantees around asynchronous signals, but
  atomics are generally a better solution.)

Notes for optimizers
  Introducing loads to shared variables along a codepath where they would not
  otherwise exist is allowed; introducing stores to shared variables is not. See
  `Optimization outside atomic`_.

Notes for code generation
  The one interesting restriction here is that it is not allowed to write to
  bytes outside of the bytes relevant to a store.  This is mostly relevant to
  unaligned stores: it is not allowed in general to convert an unaligned store
  into two aligned stores of the same width as the unaligned store. Backends are
  also expected to generate an i8 store as an i8 store, and not an instruction
  which writes to surrounding bytes.  (If you are writing a backend for an
  architecture which cannot satisfy these restrictions and cares about
  concurrency, please send an email to llvmdev.)
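
For example, given two adjacent ``i8`` fields, a store to one field must not
be implemented as a wider read-modify-write that also rewrites the neighboring
byte; another thread may be storing to that byte concurrently. A minimal
sketch of the constraint (the type and function are hypothetical):

.. code-block:: llvm

  %pair = type { i8, i8 }

  define void @set_first(%pair* %p) {
    ; This must be emitted as a single-byte store; a wider
    ; load-modify-store of both bytes could clobber a concurrent
    ; write to the second field.
    %f = getelementptr %pair, %pair* %p, i32 0, i32 0
    store i8 1, i8* %f
    ret void
  }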

Unordered
---------

Unordered is the lowest level of atomicity. It essentially guarantees that
races produce somewhat sane results instead of having undefined behavior.  It
also guarantees that the operation is lock-free, so it does not depend on the
data being part of a special atomic structure or on a separate per-process
global lock.  Note that code generation will fail for unsupported atomic
operations; if you need such an operation, use explicit locking.

Relevant standard
  This is intended to match the Java memory model for shared variables.

Notes for frontends
  This cannot be used for synchronization, but is useful for Java and other
  "safe" languages which need to guarantee that the generated code never
  exhibits undefined behavior. Note that this guarantee is cheap on common
  platforms for operations of a native width, but can be expensive or
  unavailable for wider operations, like a 64-bit store on ARM. (A frontend for
  Java or other "safe" languages would normally split a 64-bit store on ARM
  into two 32-bit unordered stores, as sketched at the end of this section.)

Notes for optimizers
  In terms of the optimizer, this prohibits any transformation that transforms a
  single load into multiple loads, transforms a store into multiple stores,
  narrows a store, or stores a value which would not be stored otherwise.  Some
  examples of unsafe optimizations are narrowing an assignment into a bitfield,
  rematerializing a load, and turning loads and stores into a memcpy
  call. Reordering unordered operations is safe, though, and optimizers should
  take advantage of that because unordered operations are common in languages
  that need them.

Notes for code generation
  These operations are required to be atomic in the sense that if you use
  unordered loads and unordered stores, a load cannot see a value which was
  never stored.  A normal load or store instruction is usually sufficient, but
  note that an unordered load or store cannot be split into multiple
  instructions (or an instruction which does multiple memory operations, like
  ``LDRD`` on ARM without LPAE, or not naturally-aligned ``LDRD`` on LPAE ARM).
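
As an illustration of the frontend note above, a Java-style 64-bit store on
32-bit ARM might be split along these lines. This is a hedged sketch assuming
a little-endian target; the function and the pointer manipulation are
illustrative only:

.. code-block:: llvm

  define void @store_long(i64* %p, i64 %v) {
    ; Split the 64-bit value into two 32-bit halves.
    %lo = trunc i64 %v to i32
    %hishift = lshr i64 %v, 32
    %hi = trunc i64 %hishift to i32

    ; Store each half with a 32-bit unordered store (little-endian
    ; layout assumed).
    %p32 = bitcast i64* %p to i32*
    %p32hi = getelementptr i32, i32* %p32, i32 1
    store atomic i32 %lo, i32* %p32 unordered, align 4
    store atomic i32 %hi, i32* %p32hi unordered, align 4
    ret void
  }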

Monotonic
---------

Monotonic is the weakest level of atomicity that can be used in synchronization
primitives, although it does not provide any general synchronization. It
essentially guarantees that if you take all the operations affecting a specific
address, a consistent ordering exists.

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_relaxed``; see those
  standards for the exact definition.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.  The
  guarantees in terms of synchronization are very weak, so make sure these are
  only used in a pattern which you know is correct.  Generally, these would
  either be used for atomic operations which do not protect other memory (like
  an atomic counter, as sketched at the end of this section), or along with a
  ``fence``.

Notes for optimizers
  In terms of the optimizer, this can be treated as a read+write on the relevant
  memory location (and alias analysis will take advantage of that). In addition,
  it is legal to reorder non-atomic and Unordered loads around Monotonic
  loads. CSE/DSE and a few other optimizations are allowed, but Monotonic
  operations are unlikely to be used in ways which would make those
  optimizations useful.

Notes for code generation
  Code generation is essentially the same as that for unordered for loads and
  stores.  No fences are required.  ``cmpxchg`` and ``atomicrmw`` are required
  to appear as a single operation.
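
A typical correct use of Monotonic is a simple event counter which does not
guard any other memory. A minimal sketch (``@counter`` and the function are
hypothetical):

.. code-block:: llvm

  @counter = global i32 0

  define void @count_event() {
    ; The increment is atomic, but establishes no ordering with any
    ; other memory location.
    %old = atomicrmw add i32* @counter, i32 1 monotonic
    ret void
  }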

Acquire
-------

Acquire provides a barrier of the sort necessary to acquire a lock to access
other memory with normal loads and stores.

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_acquire``. It should also be
  used for C++0x/C1x ``memory_order_consume``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Acquire only provides a semantic guarantee when paired with a Release
  operation; see the sketch at the end of this section.

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call.  It is
  also possible to move stores from before an Acquire load or read-modify-write
  operation to after it, and move non-Acquire loads from before an Acquire
  operation to after it.

Notes for code generation
  Architectures with weak memory ordering (essentially everything relevant today
  except x86 and SPARC) require some sort of fence to maintain the Acquire
  semantics.  The precise fences required vary widely by architecture, but for
  a simple implementation, most architectures provide a barrier which is strong
  enough for everything (``dmb`` on ARM, ``sync`` on PowerPC, etc.).  Putting
  such a fence after the equivalent Monotonic operation is sufficient to
  maintain Acquire semantics for a memory operation.
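
The canonical Acquire/Release pairing is a producer which publishes data with
a Release store of a flag, and a consumer which checks that flag with an
Acquire load. A minimal sketch (``@data`` and ``@flag`` are hypothetical
globals):

.. code-block:: llvm

  @data = global i32 0
  @flag = global i32 0

  define void @producer() {
    ; Plain store of the payload...
    store i32 42, i32* @data
    ; ...published by a Release store to the flag.
    store atomic i32 1, i32* @flag release, align 4
    ret void
  }

  define i32 @consumer() {
    ; The Acquire load pairs with the Release store above.
    %f = load atomic i32, i32* @flag acquire, align 4
    %ready = icmp eq i32 %f, 1
    br i1 %ready, label %read, label %not_ready
  read:
    ; If the flag was observed, the payload store is visible.
    %d = load i32, i32* @data
    ret i32 %d
  not_ready:
    ret i32 -1
  }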

Release
-------

Release is similar to Acquire, but with a barrier of the sort necessary to
release a lock.

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_release``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Release only provides a semantic guarantee when paired with an Acquire
  operation; see the sketch at the end of the Acquire section above.

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call.  It is
  also possible to move loads from after a Release store or read-modify-write
  operation to before it, and move non-Release stores from after a Release
  operation to before it.

Notes for code generation
  See the section on Acquire; a fence before the relevant operation is usually
  sufficient for Release. Note that a store-store fence is not sufficient to
  implement Release semantics; store-store fences are generally not exposed to
  IR because they are extremely difficult to use correctly.

AcquireRelease
--------------

AcquireRelease (``acq_rel`` in IR) provides both an Acquire and a Release
barrier (for fences and operations which both read and write memory).

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_acq_rel``.

Notes for frontends
  If you are writing a frontend which uses this directly, use with caution.
  Acquire only provides a semantic guarantee when paired with a Release
  operation, and vice versa.

Notes for optimizers
  In general, optimizers should treat this like a nothrow call; the possible
  optimizations are usually not interesting.

Notes for code generation
  This operation has Acquire and Release semantics; see the sections on Acquire
  and Release.

SequentiallyConsistent
----------------------

SequentiallyConsistent (``seq_cst`` in IR) provides Acquire semantics for loads
and Release semantics for stores. Additionally, it guarantees that a total
ordering exists between all SequentiallyConsistent operations.
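
For example, the classic "store buffering" pattern relies on this total order:
with SequentiallyConsistent operations, the two loads below cannot both
observe 0, an outcome that weaker orderings would permit. A minimal sketch
(``@x`` and ``@y`` are hypothetical globals, with one function run on each
thread):

.. code-block:: llvm

  @x = global i32 0
  @y = global i32 0

  define i32 @thread1() {
    store atomic i32 1, i32* @x seq_cst, align 4
    %ry = load atomic i32, i32* @y seq_cst, align 4
    ret i32 %ry
  }

  define i32 @thread2() {
    store atomic i32 1, i32* @y seq_cst, align 4
    %rx = load atomic i32, i32* @x seq_cst, align 4
    ret i32 %rx
  }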

Relevant standard
  This corresponds to the C++0x/C1x ``memory_order_seq_cst``, Java volatile, and
  the gcc-compatible ``__sync_*`` builtins which do not specify otherwise.

Notes for frontends
  If a frontend is exposing atomic operations, these are much easier to reason
  about for the programmer than other kinds of operations, and using them is
  generally a practical performance tradeoff.

Notes for optimizers
  Optimizers not aware of atomics can treat this like a nothrow call.  For
  SequentiallyConsistent loads and stores, the same reorderings are allowed as
  for Acquire loads and Release stores, except that SequentiallyConsistent
  operations may not be reordered.

Notes for code generation
  SequentiallyConsistent loads minimally require the same barriers as Acquire
  operations and SequentiallyConsistent stores require Release
  barriers. Additionally, the code generator must enforce ordering between
  SequentiallyConsistent stores followed by SequentiallyConsistent loads. This
  is usually done by emitting either a full fence before the loads or a full
  fence after the stores; which is preferred varies by architecture.

Atomics and IR optimization
===========================

Predicates for optimizer writers to query:

* ``isSimple()``: A load or store which is not volatile or atomic.  This is
  what, for example, memcpyopt would check for operations it might transform.

* ``isUnordered()``: A load or store which is not volatile and at most
  Unordered. This would be checked, for example, by LICM before hoisting an
  operation.

* ``mayReadFromMemory()``/``mayWriteToMemory()``: Existing predicates, but note
  that they return true for any operation which is volatile or at least
  Monotonic.

* Alias analysis: Note that AA will return ModRef for anything Acquire or
  Release, and for the address accessed by any Monotonic operation.

To support optimizing around atomic operations, make sure you are using the
right predicates; everything should work if that is done.  If your pass should
optimize some atomic operations (Unordered operations in particular), make sure
it doesn't replace an atomic load or store with a non-atomic operation.

Some examples of how optimizations interact with various kinds of atomic
operations:

* ``memcpyopt``: An atomic operation cannot be optimized into part of a
  memcpy/memset, including unordered loads/stores.  It can pull operations
  across some atomic operations.

* LICM: Unordered loads/stores can be moved out of a loop.  It just treats
  monotonic operations like a read+write to a memory location, and anything
  stricter than that like a nothrow call.

* DSE: Unordered stores can be DSE'ed like normal stores.  Monotonic stores can
  be DSE'ed in some cases, but it's tricky to reason about, and not especially
  important.

* Folding a load: Any atomic load from a constant global can be constant-folded,
  because it cannot be observed.  Similar reasoning allows scalarrepl with
  atomic loads and stores.

Atomics and Codegen
===================

Atomic operations are represented in the SelectionDAG with ``ATOMIC_*`` opcodes.
On architectures which use barrier instructions for all atomic ordering (like
ARM), appropriate fences are split out as the DAG is built.

The MachineMemOperand for all atomic operations is currently marked as volatile;
this is not correct in the IR sense of volatile, but CodeGen handles anything
marked volatile very conservatively.  This should get fixed at some point.

Common architectures have some way of representing at least a pointer-sized
lock-free ``cmpxchg``; such an operation can be used to implement all the other
atomic operations which can be represented in IR up to that size.  Backends are
expected to implement all those operations, but not operations which cannot be
implemented in a lock-free manner.  It is expected that backends will give an
error when given an operation which cannot be implemented.  (The LLVM code
generator is not very helpful here at the moment, but hopefully that will
change.)

The implementation of atomics on LL/SC architectures (like ARM) is currently a
bit of a mess; there is a lot of copy-pasted code across targets, and the
representation is relatively unsuited to optimization (it would be nice to be
able to optimize loops involving cmpxchg etc.).

On x86, all atomic loads generate a ``MOV``. SequentiallyConsistent stores
generate an ``XCHG``, other stores generate a ``MOV``. SequentiallyConsistent
fences generate an ``MFENCE``, other fences do not cause any code to be
generated.  ``cmpxchg`` uses the ``LOCK CMPXCHG`` instruction.  ``atomicrmw xchg``
uses ``XCHG``, ``atomicrmw add`` and ``atomicrmw sub`` use ``XADD``, and all
other ``atomicrmw`` operations generate a loop with ``LOCK CMPXCHG``.  Depending
on the users of the result, some ``atomicrmw`` operations can be translated into
operations like ``LOCK AND``, but that does not work in general.

On ARM (before v8), MIPS, and many other RISC architectures, Acquire, Release,
and SequentiallyConsistent semantics require barrier instructions for every such
operation. Loads and stores generate normal instructions.  ``cmpxchg`` and
``atomicrmw`` can be represented using a loop with LL/SC-style instructions
which take some sort of exclusive lock on a cache line (``LDREX`` and ``STREX``
on ARM, etc.).