      1 :mod:`multiprocessing` --- Process-based parallelism
      2 ====================================================
      3 
      4 .. module:: multiprocessing
      5    :synopsis: Process-based parallelism.
      6 
      7 **Source code:** :source:`Lib/multiprocessing/`
      8 
      9 --------------
     10 
     11 Introduction
     12 ------------
     13 
     14 :mod:`multiprocessing` is a package that supports spawning processes using an
     15 API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
     16 offers both local and remote concurrency, effectively side-stepping the
     17 :term:`Global Interpreter Lock` by using subprocesses instead of threads.  Due
     18 to this, the :mod:`multiprocessing` module allows the programmer to fully
     19 leverage multiple processors on a given machine.  It runs on both Unix and
     20 Windows.
     21 
     22 The :mod:`multiprocessing` module also introduces APIs which do not have
     23 analogs in the :mod:`threading` module.  A prime example of this is the
     24 :class:`~multiprocessing.pool.Pool` object which offers a convenient means of
     25 parallelizing the execution of a function across multiple input values,
     26 distributing the input data across processes (data parallelism).  The following
     27 example demonstrates the common practice of defining such functions in a module
     28 so that child processes can successfully import that module.  This basic example
     29 of data parallelism using :class:`~multiprocessing.pool.Pool`, ::
     30 
     31    from multiprocessing import Pool
     32 
     33    def f(x):
     34        return x*x
     35 
     36    if __name__ == '__main__':
     37        with Pool(5) as p:
     38            print(p.map(f, [1, 2, 3]))
     39 
     40 will print to standard output ::
     41 
     42    [1, 4, 9]
     43 
     44 
     45 The :class:`Process` class
     46 ~~~~~~~~~~~~~~~~~~~~~~~~~~
     47 
     48 In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
     49 object and then calling its :meth:`~Process.start` method.  :class:`Process`
     50 follows the API of :class:`threading.Thread`.  A trivial example of a
     51 multiprocess program is ::
     52 
     53    from multiprocessing import Process
     54 
     55    def f(name):
     56        print('hello', name)
     57 
     58    if __name__ == '__main__':
     59        p = Process(target=f, args=('bob',))
     60        p.start()
     61        p.join()
     62 
     63 To show the individual process IDs involved, here is an expanded example::
     64 
     65     from multiprocessing import Process
     66     import os
     67 
     68     def info(title):
     69         print(title)
     70         print('module name:', __name__)
     71         print('parent process:', os.getppid())
     72         print('process id:', os.getpid())
     73 
     74     def f(name):
     75         info('function f')
     76         print('hello', name)
     77 
     78     if __name__ == '__main__':
     79         info('main line')
     80         p = Process(target=f, args=('bob',))
     81         p.start()
     82         p.join()
     83 
     84 For an explanation of why the ``if __name__ == '__main__'`` part is
     85 necessary, see :ref:`multiprocessing-programming`.
     86 
     87 
     88 
     89 Contexts and start methods
     90 ~~~~~~~~~~~~~~~~~~~~~~~~~~
     91 
     92 .. _multiprocessing-start-methods:
     93 
     94 Depending on the platform, :mod:`multiprocessing` supports three ways
     95 to start a process.  These *start methods* are
     96 
     97   *spawn*
    The parent process starts a fresh Python interpreter process.  The
    child process will only inherit those resources necessary to run
    the process object's :meth:`~Process.run` method.  In particular,
    101     unnecessary file descriptors and handles from the parent process
    102     will not be inherited.  Starting a process using this method is
    103     rather slow compared to using *fork* or *forkserver*.
    104 
    105     Available on Unix and Windows.  The default on Windows.
    106 
    107   *fork*
    108     The parent process uses :func:`os.fork` to fork the Python
    109     interpreter.  The child process, when it begins, is effectively
    110     identical to the parent process.  All resources of the parent are
    111     inherited by the child process.  Note that safely forking a
    112     multithreaded process is problematic.
    113 
    114     Available on Unix only.  The default on Unix.
    115 
    116   *forkserver*
    117     When the program starts and selects the *forkserver* start method,
    118     a server process is started.  From then on, whenever a new process
    119     is needed, the parent process connects to the server and requests
    120     that it fork a new process.  The fork server process is single
    121     threaded so it is safe for it to use :func:`os.fork`.  No
    122     unnecessary resources are inherited.
    123 
    124     Available on Unix platforms which support passing file descriptors
    125     over Unix pipes.
    126 
    127 .. versionchanged:: 3.4
   *spawn* added on all Unix platforms, and *forkserver* added for
   some Unix platforms.
   Child processes no longer inherit all of the parent's inheritable
   handles on Windows.
    132 
On Unix, using the *spawn* or *forkserver* start methods will also
    134 start a *semaphore tracker* process which tracks the unlinked named
    135 semaphores created by processes of the program.  When all processes
    136 have exited the semaphore tracker unlinks any remaining semaphores.
    137 Usually there should be none, but if a process was killed by a signal
there may be some "leaked" semaphores.  (Unlinking the named semaphores
    139 is a serious matter since the system allows only a limited number, and
    140 they will not be automatically unlinked until the next reboot.)
    141 
To select a start method you use :func:`set_start_method` in
    143 the ``if __name__ == '__main__'`` clause of the main module.  For
    144 example::
    145 
    146        import multiprocessing as mp
    147 
    148        def foo(q):
    149            q.put('hello')
    150 
    151        if __name__ == '__main__':
    152            mp.set_start_method('spawn')
    153            q = mp.Queue()
    154            p = mp.Process(target=foo, args=(q,))
    155            p.start()
    156            print(q.get())
    157            p.join()
    158 
    159 :func:`set_start_method` should not be used more than once in the
    160 program.
    161 
    162 Alternatively, you can use :func:`get_context` to obtain a context
    163 object.  Context objects have the same API as the multiprocessing
    164 module, and allow one to use multiple start methods in the same
    165 program. ::
    166 
    167        import multiprocessing as mp
    168 
    169        def foo(q):
    170            q.put('hello')
    171 
    172        if __name__ == '__main__':
    173            ctx = mp.get_context('spawn')
    174            q = ctx.Queue()
    175            p = ctx.Process(target=foo, args=(q,))
    176            p.start()
    177            print(q.get())
    178            p.join()
    179 
    180 Note that objects related to one context may not be compatible with
    181 processes for a different context.  In particular, locks created using
the *fork* context cannot be passed to processes started using the
    183 *spawn* or *forkserver* start methods.
    184 
    185 A library which wants to use a particular start method should probably
    186 use :func:`get_context` to avoid interfering with the choice of the
    187 library user.
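
For instance, a library function might accept an optional context and fall
back to :func:`get_context` itself, rather than calling
:func:`set_start_method` and overriding the user's choice.  A minimal sketch
(the ``run_worker`` helper and ``_work`` target are hypothetical names)::

   import multiprocessing as mp

   def _work(q):
       q.put('done')

   def run_worker(ctx=None):
       # Pick a context explicitly instead of touching the global
       # start method chosen by the library user.
       ctx = ctx or mp.get_context('spawn')
       q = ctx.Queue()
       p = ctx.Process(target=_work, args=(q,))
       p.start()
       result = q.get()
       p.join()
       return result

   if __name__ == '__main__':
       print(run_worker())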
    188 
    189 
    190 Exchanging objects between processes
    191 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    192 
    193 :mod:`multiprocessing` supports two types of communication channel between
    194 processes:
    195 
    196 **Queues**
    197 
    198    The :class:`Queue` class is a near clone of :class:`queue.Queue`.  For
    199    example::
    200 
    201       from multiprocessing import Process, Queue
    202 
    203       def f(q):
    204           q.put([42, None, 'hello'])
    205 
    206       if __name__ == '__main__':
    207           q = Queue()
    208           p = Process(target=f, args=(q,))
    209           p.start()
    210           print(q.get())    # prints "[42, None, 'hello']"
    211           p.join()
    212 
    213    Queues are thread and process safe.
    214 
    215 **Pipes**
    216 
    217    The :func:`Pipe` function returns a pair of connection objects connected by a
    218    pipe which by default is duplex (two-way).  For example::
    219 
    220       from multiprocessing import Process, Pipe
    221 
    222       def f(conn):
    223           conn.send([42, None, 'hello'])
    224           conn.close()
    225 
    226       if __name__ == '__main__':
    227           parent_conn, child_conn = Pipe()
    228           p = Process(target=f, args=(child_conn,))
    229           p.start()
    230           print(parent_conn.recv())   # prints "[42, None, 'hello']"
    231           p.join()
    232 
    233    The two connection objects returned by :func:`Pipe` represent the two ends of
    234    the pipe.  Each connection object has :meth:`~Connection.send` and
    235    :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
    236    may become corrupted if two processes (or threads) try to read from or write
    237    to the *same* end of the pipe at the same time.  Of course there is no risk
    238    of corruption from processes using different ends of the pipe at the same
    239    time.
    240 
    241 
    242 Synchronization between processes
    243 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    244 
    245 :mod:`multiprocessing` contains equivalents of all the synchronization
    246 primitives from :mod:`threading`.  For instance one can use a lock to ensure
    247 that only one process prints to standard output at a time::
    248 
    249    from multiprocessing import Process, Lock
    250 
    251    def f(l, i):
    252        l.acquire()
    253        try:
    254            print('hello world', i)
    255        finally:
    256            l.release()
    257 
    258    if __name__ == '__main__':
    259        lock = Lock()
    260 
    261        for num in range(10):
    262            Process(target=f, args=(lock, num)).start()
    263 
Without using the lock, output from the different processes is liable to get all
    265 mixed up.
    266 
    267 
    268 Sharing state between processes
    269 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    270 
    271 As mentioned above, when doing concurrent programming it is usually best to
    272 avoid using shared state as far as possible.  This is particularly true when
    273 using multiple processes.
    274 
    275 However, if you really do need to use some shared data then
    276 :mod:`multiprocessing` provides a couple of ways of doing so.
    277 
    278 **Shared memory**
    279 
    280    Data can be stored in a shared memory map using :class:`Value` or
    281    :class:`Array`.  For example, the following code ::
    282 
    283       from multiprocessing import Process, Value, Array
    284 
    285       def f(n, a):
    286           n.value = 3.1415927
    287           for i in range(len(a)):
    288               a[i] = -a[i]
    289 
    290       if __name__ == '__main__':
    291           num = Value('d', 0.0)
    292           arr = Array('i', range(10))
    293 
    294           p = Process(target=f, args=(num, arr))
    295           p.start()
    296           p.join()
    297 
    298           print(num.value)
    299           print(arr[:])
    300 
    301    will print ::
    302 
    303       3.1415927
    304       [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
    305 
    306    The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
    307    typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
    308    double precision float and ``'i'`` indicates a signed integer.  These shared
    309    objects will be process and thread-safe.
    310 
    311    For more flexibility in using shared memory one can use the
    312    :mod:`multiprocessing.sharedctypes` module which supports the creation of
    313    arbitrary ctypes objects allocated from shared memory.
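
   For example, the following sketch stores a ctypes ``Structure`` in shared
   memory using :mod:`multiprocessing.sharedctypes` (a minimal illustration,
   not a complete treatment of that module)::

      from multiprocessing import Process
      from multiprocessing.sharedctypes import Value, Array
      from ctypes import Structure, c_double

      class Point(Structure):
          _fields_ = [('x', c_double), ('y', c_double)]

      def modify(n, a):
          # Both the wrapped double and the array of structures live in
          # shared memory, so the parent sees these changes after join().
          n.value **= 2
          for point in a:
              point.x **= 2
              point.y **= 2

      if __name__ == '__main__':
          n = Value(c_double, 1.5)
          a = Array(Point, [(1.875, -6.25), (-5.75, 2.0)], lock=True)

          p = Process(target=modify, args=(n, a))
          p.start()
          p.join()

          print(n.value)
          print([(point.x, point.y) for point in a])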
    314 
    315 **Server process**
    316 
    317    A manager object returned by :func:`Manager` controls a server process which
    318    holds Python objects and allows other processes to manipulate them using
    319    proxies.
    320 
    321    A manager returned by :func:`Manager` will support types
    322    :class:`list`, :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`,
    323    :class:`RLock`, :class:`Semaphore`, :class:`BoundedSemaphore`,
    324    :class:`Condition`, :class:`Event`, :class:`Barrier`,
    325    :class:`Queue`, :class:`Value` and :class:`Array`.  For example, ::
    326 
    327       from multiprocessing import Process, Manager
    328 
    329       def f(d, l):
    330           d[1] = '1'
    331           d['2'] = 2
    332           d[0.25] = None
    333           l.reverse()
    334 
    335       if __name__ == '__main__':
    336           with Manager() as manager:
    337               d = manager.dict()
    338               l = manager.list(range(10))
    339 
    340               p = Process(target=f, args=(d, l))
    341               p.start()
    342               p.join()
    343 
    344               print(d)
    345               print(l)
    346 
    347    will print ::
    348 
    349        {0.25: None, 1: '1', '2': 2}
    350        [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
    351 
    352    Server process managers are more flexible than using shared memory objects
    353    because they can be made to support arbitrary object types.  Also, a single
    354    manager can be shared by processes on different computers over a network.
    355    They are, however, slower than using shared memory.
    356 
    357 
    358 Using a pool of workers
    359 ~~~~~~~~~~~~~~~~~~~~~~~
    360 
    361 The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
    363 processes in a few different ways.
    364 
    365 For example::
    366 
    367    from multiprocessing import Pool, TimeoutError
    368    import time
    369    import os
    370 
    371    def f(x):
    372        return x*x
    373 
    374    if __name__ == '__main__':
    375        # start 4 worker processes
    376        with Pool(processes=4) as pool:
    377 
    378            # print "[0, 1, 4,..., 81]"
    379            print(pool.map(f, range(10)))
    380 
    381            # print same numbers in arbitrary order
    382            for i in pool.imap_unordered(f, range(10)):
    383                print(i)
    384 
    385            # evaluate "f(20)" asynchronously
    386            res = pool.apply_async(f, (20,))      # runs in *only* one process
    387            print(res.get(timeout=1))             # prints "400"
    388 
    389            # evaluate "os.getpid()" asynchronously
    390            res = pool.apply_async(os.getpid, ()) # runs in *only* one process
    391            print(res.get(timeout=1))             # prints the PID of that process
    392 
    393            # launching multiple evaluations asynchronously *may* use more processes
    394            multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
    395            print([res.get(timeout=1) for res in multiple_results])
    396 
    397            # make a single worker sleep for 10 secs
    398            res = pool.apply_async(time.sleep, (10,))
    399            try:
    400                print(res.get(timeout=1))
    401            except TimeoutError:
    402                print("We lacked patience and got a multiprocessing.TimeoutError")
    403 
    404            print("For the moment, the pool remains available for more work")
    405 
    406        # exiting the 'with'-block has stopped the pool
    407        print("Now the pool is closed and no longer available")
    408 
    409 Note that the methods of a pool should only ever be used by the
    410 process which created it.
    411 
    412 .. note::
    413 
    414    Functionality within this package requires that the ``__main__`` module be
   importable by the children. This is covered in :ref:`multiprocessing-programming`;
   however, it is worth pointing out here. This means that some examples, such
    417    as the :class:`multiprocessing.pool.Pool` examples will not work in the
    418    interactive interpreter. For example::
    419 
    420       >>> from multiprocessing import Pool
    421       >>> p = Pool(5)
    422       >>> def f(x):
    423       ...     return x*x
    424       ...
    425       >>> p.map(f, [1,2,3])
    426       Process PoolWorker-1:
    427       Process PoolWorker-2:
    428       Process PoolWorker-3:
    429       Traceback (most recent call last):
    430       Traceback (most recent call last):
    431       Traceback (most recent call last):
    432       AttributeError: 'module' object has no attribute 'f'
    433       AttributeError: 'module' object has no attribute 'f'
    434       AttributeError: 'module' object has no attribute 'f'
    435 
    436    (If you try this it will actually output three full tracebacks
    437    interleaved in a semi-random fashion, and then you may have to
    438    stop the master process somehow.)
    439 
    440 
    441 Reference
    442 ---------
    443 
    444 The :mod:`multiprocessing` package mostly replicates the API of the
    445 :mod:`threading` module.
    446 
    447 
    448 :class:`Process` and exceptions
    449 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    450 
    451 .. class:: Process(group=None, target=None, name=None, args=(), kwargs={}, \
    452                    *, daemon=None)
    453 
    454    Process objects represent activity that is run in a separate process. The
    455    :class:`Process` class has equivalents of all the methods of
    456    :class:`threading.Thread`.
    457 
    458    The constructor should always be called with keyword arguments. *group*
    459    should always be ``None``; it exists solely for compatibility with
    460    :class:`threading.Thread`.  *target* is the callable object to be invoked by
    461    the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
    462    called. *name* is the process name (see :attr:`name` for more details).
    463    *args* is the argument tuple for the target invocation.  *kwargs* is a
    464    dictionary of keyword arguments for the target invocation.  If provided,
    465    the keyword-only *daemon* argument sets the process :attr:`daemon` flag
    466    to ``True`` or ``False``.  If ``None`` (the default), this flag will be
    467    inherited from the creating process.
    468 
    469    By default, no arguments are passed to *target*.
    470 
    471    If a subclass overrides the constructor, it must make sure it invokes the
    472    base class constructor (:meth:`Process.__init__`) before doing anything else
    473    to the process.
    474 
    475    .. versionchanged:: 3.3
    476       Added the *daemon* argument.
    477 
    478    .. method:: run()
    479 
    480       Method representing the process's activity.
    481 
    482       You may override this method in a subclass.  The standard :meth:`run`
    483       method invokes the callable object passed to the object's constructor as
    484       the target argument, if any, with sequential and keyword arguments taken
    485       from the *args* and *kwargs* arguments, respectively.
    486 
    487    .. method:: start()
    488 
    489       Start the process's activity.
    490 
    491       This must be called at most once per process object.  It arranges for the
    492       object's :meth:`run` method to be invoked in a separate process.
    493 
    494    .. method:: join([timeout])
    495 
    496       If the optional argument *timeout* is ``None`` (the default), the method
    497       blocks until the process whose :meth:`join` method is called terminates.
    498       If *timeout* is a positive number, it blocks at most *timeout* seconds.
    499       Note that the method returns ``None`` if its process terminates or if the
    500       method times out.  Check the process's :attr:`exitcode` to determine if
    501       it terminated.
    502 
    503       A process can be joined many times.
    504 
    505       A process cannot join itself because this would cause a deadlock.  It is
    506       an error to attempt to join a process before it has been started.
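
      For example, a sketch of waiting briefly for a child and then checking
      whether it has finished::

         from multiprocessing import Process
         import time

         p = Process(target=time.sleep, args=(10,))
         p.start()
         p.join(timeout=1)
         if p.exitcode is None:
             print('child is still running after the timeout')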
    507 
    508    .. attribute:: name
    509 
    510       The process's name.  The name is a string used for identification purposes
    511       only.  It has no semantics.  Multiple processes may be given the same
    512       name.
    513 
    514       The initial name is set by the constructor.  If no explicit name is
    515       provided to the constructor, a name of the form
    516       'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' is constructed, where
    517       each N\ :sub:`k` is the N-th child of its parent.
    518 
    519    .. method:: is_alive
    520 
    521       Return whether the process is alive.
    522 
    523       Roughly, a process object is alive from the moment the :meth:`start`
    524       method returns until the child process terminates.
    525 
    526    .. attribute:: daemon
    527 
    528       The process's daemon flag, a Boolean value.  This must be set before
    529       :meth:`start` is called.
    530 
    531       The initial value is inherited from the creating process.
    532 
    533       When a process exits, it attempts to terminate all of its daemonic child
    534       processes.
    535 
    536       Note that a daemonic process is not allowed to create child processes.
    537       Otherwise a daemonic process would leave its children orphaned if it gets
    538       terminated when its parent process exits. Additionally, these are **not**
      Unix daemons or services; they are normal processes that will be
    540       terminated (and not joined) if non-daemonic processes have exited.
    541 
    542    In addition to the  :class:`threading.Thread` API, :class:`Process` objects
    543    also support the following attributes and methods:
    544 
    545    .. attribute:: pid
    546 
    547       Return the process ID.  Before the process is spawned, this will be
    548       ``None``.
    549 
    550    .. attribute:: exitcode
    551 
    552       The child's exit code.  This will be ``None`` if the process has not yet
    553       terminated.  A negative value *-N* indicates that the child was terminated
    554       by signal *N*.
    555 
    556    .. attribute:: authkey
    557 
    558       The process's authentication key (a byte string).
    559 
    560       When :mod:`multiprocessing` is initialized the main process is assigned a
    561       random string using :func:`os.urandom`.
    562 
    563       When a :class:`Process` object is created, it will inherit the
    564       authentication key of its parent process, although this may be changed by
    565       setting :attr:`authkey` to another byte string.
    566 
    567       See :ref:`multiprocessing-auth-keys`.
    568 
    569    .. attribute:: sentinel
    570 
    571       A numeric handle of a system object which will become "ready" when
    572       the process ends.
    573 
    574       You can use this value if you want to wait on several events at
    575       once using :func:`multiprocessing.connection.wait`.  Otherwise
    576       calling :meth:`join()` is simpler.
    577 
    578       On Windows, this is an OS handle usable with the ``WaitForSingleObject``
    579       and ``WaitForMultipleObjects`` family of API calls.  On Unix, this is
    580       a file descriptor usable with primitives from the :mod:`select` module.
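
      For example, a sketch of starting several children and blocking until
      the first of them exits (the ``targets`` list of callables is
      hypothetical)::

         from multiprocessing import Process
         from multiprocessing.connection import wait

         procs = [Process(target=t) for t in targets]
         for p in procs:
             p.start()

         # Blocks until at least one sentinel is ready, i.e. until at
         # least one of the children has ended.
         finished = wait([p.sentinel for p in procs])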
    581 
    582       .. versionadded:: 3.3
    583 
    584    .. method:: terminate()
    585 
    586       Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
    587       on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
    588       finally clauses, etc., will not be executed.
    589 
    590       Note that descendant processes of the process will *not* be terminated --
    591       they will simply become orphaned.
    592 
    593       .. warning::
    594 
    595          If this method is used when the associated process is using a pipe or
    596          queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
    598          acquired a lock or semaphore etc. then terminating it is liable to
    599          cause other processes to deadlock.
    600 
    601    Note that the :meth:`start`, :meth:`join`, :meth:`is_alive`,
    602    :meth:`terminate` and :attr:`exitcode` methods should only be called by
    603    the process that created the process object.
    604 
    605    Example usage of some of the methods of :class:`Process`:
    606 
    607    .. doctest::
    608 
    609        >>> import multiprocessing, time, signal
    610        >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
    611        >>> print(p, p.is_alive())
    612        <Process(Process-1, initial)> False
    613        >>> p.start()
    614        >>> print(p, p.is_alive())
    615        <Process(Process-1, started)> True
    616        >>> p.terminate()
    617        >>> time.sleep(0.1)
    618        >>> print(p, p.is_alive())
    619        <Process(Process-1, stopped[SIGTERM])> False
    620        >>> p.exitcode == -signal.SIGTERM
    621        True
    622 
    623 .. exception:: ProcessError
    624 
    625    The base class of all :mod:`multiprocessing` exceptions.
    626 
    627 .. exception:: BufferTooShort
    628 
    629    Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
    630    buffer object is too small for the message read.
    631 
    632    If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
    633    the message as a byte string.
    634 
    635 .. exception:: AuthenticationError
    636 
    637    Raised when there is an authentication error.
    638 
    639 .. exception:: TimeoutError
    640 
    641    Raised by methods with a timeout when the timeout expires.
    642 
    643 Pipes and Queues
    644 ~~~~~~~~~~~~~~~~
    645 
    646 When using multiple processes, one generally uses message passing for
    647 communication between processes and avoids having to use any synchronization
    648 primitives like locks.
    649 
    650 For passing messages one can use :func:`Pipe` (for a connection between two
    651 processes) or a queue (which allows multiple producers and consumers).
    652 
    653 The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types
    654 are multi-producer, multi-consumer :abbr:`FIFO (first-in, first-out)`
    655 queues modelled on the :class:`queue.Queue` class in the
    656 standard library.  They differ in that :class:`Queue` lacks the
    657 :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced
    658 into Python 2.5's :class:`queue.Queue` class.
    659 
    660 If you use :class:`JoinableQueue` then you **must** call
    661 :meth:`JoinableQueue.task_done` for each task removed from the queue or else the
    662 semaphore used to count the number of unfinished tasks may eventually overflow,
    663 raising an exception.
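
For example, a sketch of a consumer that keeps the unfinished-task count
balanced by calling :meth:`~JoinableQueue.task_done` once per item::

   from multiprocessing import Process, JoinableQueue

   def worker(q):
       while True:
           item = q.get()
           print('processing', item)
           q.task_done()     # one call for each item removed with get()

   if __name__ == '__main__':
       q = JoinableQueue()
       p = Process(target=worker, args=(q,), daemon=True)
       p.start()
       for item in range(5):
           q.put(item)
       q.join()              # returns once every item has been acknowledged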
    664 
    665 Note that one can also create a shared queue by using a manager object -- see
    666 :ref:`multiprocessing-managers`.
    667 
    668 .. note::
    669 
    670    :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and
    671    :exc:`queue.Full` exceptions to signal a timeout.  They are not available in
    672    the :mod:`multiprocessing` namespace so you need to import them from
    673    :mod:`queue`.
    674 
    675 .. note::
    676 
    677    When an object is put on a queue, the object is pickled and a
    678    background thread later flushes the pickled data to an underlying
    679    pipe.  This has some consequences which are a little surprising,
    680    but should not cause any practical difficulties -- if they really
    681    bother you then you can instead use a queue created with a
    682    :ref:`manager <multiprocessing-managers>`.
    683 
    684    (1) After putting an object on an empty queue there may be an
    685        infinitesimal delay before the queue's :meth:`~Queue.empty`
    686        method returns :const:`False` and :meth:`~Queue.get_nowait` can
    687        return without raising :exc:`queue.Empty`.
    688 
    689    (2) If multiple processes are enqueuing objects, it is possible for
    690        the objects to be received at the other end out-of-order.
    691        However, objects enqueued by the same process will always be in
    692        the expected order with respect to each other.
    693 
    694 .. warning::
    695 
    696    If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
    697    while it is trying to use a :class:`Queue`, then the data in the queue is
    698    likely to become corrupted.  This may cause any other process to get an
    699    exception when it tries to use the queue later on.
    700 
    701 .. warning::
    702 
    703    As mentioned above, if a child process has put items on a queue (and it has
    704    not used :meth:`JoinableQueue.cancel_join_thread
    705    <multiprocessing.Queue.cancel_join_thread>`), then that process will
    706    not terminate until all buffered items have been flushed to the pipe.
    707 
    708    This means that if you try joining that process you may get a deadlock unless
    709    you are sure that all items which have been put on the queue have been
    710    consumed.  Similarly, if the child process is non-daemonic then the parent
    711    process may hang on exit when it tries to join all its non-daemonic children.
    712 
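   As an illustration, the following sketch deadlocks: the child cannot exit
   until the large item has been flushed to the pipe and consumed, yet the
   parent tries to join it before reading from the queue (swapping the last
   two lines avoids the problem)::

      from multiprocessing import Process, Queue

      def f(q):
          q.put('X' * 1000000)    # large item, stays buffered in the feeder thread

      if __name__ == '__main__':
          queue = Queue()
          p = Process(target=f, args=(queue,))
          p.start()
          p.join()                # deadlocks here
          obj = queue.get()
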
    713    Note that a queue created using a manager does not have this issue.  See
    714    :ref:`multiprocessing-programming`.
    715 
    716 For an example of the usage of queues for interprocess communication see
    717 :ref:`multiprocessing-examples`.
    718 
    719 
    720 .. function:: Pipe([duplex])
    721 
    722    Returns a pair ``(conn1, conn2)`` of :class:`Connection` objects representing
    723    the ends of a pipe.
    724 
    725    If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
    726    *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
    727    used for receiving messages and ``conn2`` can only be used for sending
    728    messages.
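
   For example, a minimal sketch of a one-way pipe::

      from multiprocessing import Pipe

      receiver, sender = Pipe(duplex=False)
      sender.send('ping')
      print(receiver.recv())    # prints "ping"
      sender.close()
      receiver.close()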
    729 
    730 
    731 .. class:: Queue([maxsize])
    732 
    733    Returns a process shared queue implemented using a pipe and a few
    734    locks/semaphores.  When a process first puts an item on the queue a feeder
    735    thread is started which transfers objects from a buffer into the pipe.
    736 
    737    The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the
    738    standard library's :mod:`queue` module are raised to signal timeouts.
    739 
    740    :class:`Queue` implements all the methods of :class:`queue.Queue` except for
    741    :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`.
    742 
    743    .. method:: qsize()
    744 
    745       Return the approximate size of the queue.  Because of
    746       multithreading/multiprocessing semantics, this number is not reliable.
    747 
    748       Note that this may raise :exc:`NotImplementedError` on Unix platforms like
    749       Mac OS X where ``sem_getvalue()`` is not implemented.
    750 
    751    .. method:: empty()
    752 
    753       Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
    754       multithreading/multiprocessing semantics, this is not reliable.
    755 
    756    .. method:: full()
    757 
    758       Return ``True`` if the queue is full, ``False`` otherwise.  Because of
    759       multithreading/multiprocessing semantics, this is not reliable.
    760 
    761    .. method:: put(obj[, block[, timeout]])
    762 
      Put *obj* into the queue.  If the optional argument *block* is ``True``
    764       (the default) and *timeout* is ``None`` (the default), block if necessary until
    765       a free slot is available.  If *timeout* is a positive number, it blocks at
    766       most *timeout* seconds and raises the :exc:`queue.Full` exception if no
    767       free slot was available within that time.  Otherwise (*block* is
    768       ``False``), put an item on the queue if a free slot is immediately
    769       available, else raise the :exc:`queue.Full` exception (*timeout* is
    770       ignored in that case).
    771 
    772    .. method:: put_nowait(obj)
    773 
    774       Equivalent to ``put(obj, False)``.
    775 
    776    .. method:: get([block[, timeout]])
    777 
      Remove and return an item from the queue.  If the optional argument *block* is
    779       ``True`` (the default) and *timeout* is ``None`` (the default), block if
    780       necessary until an item is available.  If *timeout* is a positive number,
    781       it blocks at most *timeout* seconds and raises the :exc:`queue.Empty`
      exception if no item was available within that time.  Otherwise (*block* is
    783       ``False``), return an item if one is immediately available, else raise the
    784       :exc:`queue.Empty` exception (*timeout* is ignored in that case).
    785 
    786    .. method:: get_nowait()
    787 
    788       Equivalent to ``get(False)``.
    789 
    790    :class:`multiprocessing.Queue` has a few additional methods not found in
    791    :class:`queue.Queue`.  These methods are usually unnecessary for most
    792    code:
    793 
    794    .. method:: close()
    795 
    796       Indicate that no more data will be put on this queue by the current
    797       process.  The background thread will quit once it has flushed all buffered
    798       data to the pipe.  This is called automatically when the queue is garbage
    799       collected.
    800 
    801    .. method:: join_thread()
    802 
    803       Join the background thread.  This can only be used after :meth:`close` has
    804       been called.  It blocks until the background thread exits, ensuring that
    805       all data in the buffer has been flushed to the pipe.
    806 
    807       By default if a process is not the creator of the queue then on exit it
    808       will attempt to join the queue's background thread.  The process can call
    809       :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
    810 
    811    .. method:: cancel_join_thread()
    812 
    813       Prevent :meth:`join_thread` from blocking.  In particular, this prevents
    814       the background thread from being joined automatically when the process
    815       exits -- see :meth:`join_thread`.
    816 
    817       A better name for this method might be
    818       ``allow_exit_without_flush()``.  It is likely to cause enqueued
      data to be lost, and you almost certainly will not need to use it.
    820       It is really only there if you need the current process to exit
    821       immediately without waiting to flush enqueued data to the
    822       underlying pipe, and you don't care about lost data.
    823 
    824    .. note::
    825 
    826       This class's functionality requires a functioning shared semaphore
    827       implementation on the host operating system. Without one, the
    828       functionality in this class will be disabled, and attempts to
    829       instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
    830       :issue:`3770` for additional information.  The same holds true for any
    831       of the specialized queue types listed below.
    832 
    833 .. class:: SimpleQueue()
    834 
    835    It is a simplified :class:`Queue` type, very close to a locked :class:`Pipe`.
    836 
    837    .. method:: empty()
    838 
    839       Return ``True`` if the queue is empty, ``False`` otherwise.
    840 
    841    .. method:: get()
    842 
    843       Remove and return an item from the queue.
    844 
    845    .. method:: put(item)
    846 
    847       Put *item* into the queue.
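
   For example, a minimal sketch passing a single result back from a child
   process::

      from multiprocessing import Process, SimpleQueue

      def f(q):
          q.put('finished')

      if __name__ == '__main__':
          q = SimpleQueue()
          p = Process(target=f, args=(q,))
          p.start()
          print(q.get())    # prints "finished"
          p.join()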
    848 
    849 
    850 .. class:: JoinableQueue([maxsize])
    851 
    852    :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
    853    additionally has :meth:`task_done` and :meth:`join` methods.
    854 
    855    .. method:: task_done()
    856 
    857       Indicate that a formerly enqueued task is complete. Used by queue
    858       consumers.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
    859       call to :meth:`task_done` tells the queue that the processing on the task
    860       is complete.
    861 
    862       If a :meth:`~queue.Queue.join` is currently blocking, it will resume when all
    863       items have been processed (meaning that a :meth:`task_done` call was
    864       received for every item that had been :meth:`~Queue.put` into the queue).
    865 
    866       Raises a :exc:`ValueError` if called more times than there were items
    867       placed in the queue.
    868 
    869 
    870    .. method:: join()
    871 
    872       Block until all items in the queue have been gotten and processed.
    873 
    874       The count of unfinished tasks goes up whenever an item is added to the
    875       queue.  The count goes down whenever a consumer calls
    876       :meth:`task_done` to indicate that the item was retrieved and all work on
    877       it is complete.  When the count of unfinished tasks drops to zero,
    878       :meth:`~queue.Queue.join` unblocks.
    879 
    880 
    881 Miscellaneous
    882 ~~~~~~~~~~~~~
    883 
    884 .. function:: active_children()
    885 
    886    Return list of all live children of the current process.
    887 
    888    Calling this has the side effect of "joining" any processes which have
    889    already finished.
    890 
    891 .. function:: cpu_count()
    892 
    893    Return the number of CPUs in the system.
    894 
    895    This number is not equivalent to the number of CPUs the current process can
    896    use.  The number of usable CPUs can be obtained with
   ``len(os.sched_getaffinity(0))``.
    898 
    899    May raise :exc:`NotImplementedError`.
    900 
    901    .. seealso::
    902       :func:`os.cpu_count`
    903 
    904 .. function:: current_process()
    905 
    906    Return the :class:`Process` object corresponding to the current process.
    907 
    908    An analogue of :func:`threading.current_thread`.
    909 
    910 .. function:: freeze_support()
    911 
    912    Add support for when a program which uses :mod:`multiprocessing` has been
    913    frozen to produce a Windows executable.  (Has been tested with **py2exe**,
    914    **PyInstaller** and **cx_Freeze**.)
    915 
    916    One needs to call this function straight after the ``if __name__ ==
    917    '__main__'`` line of the main module.  For example::
    918 
    919       from multiprocessing import Process, freeze_support
    920 
    921       def f():
    922           print('hello world!')
    923 
    924       if __name__ == '__main__':
    925           freeze_support()
    926           Process(target=f).start()
    927 
    928    If the ``freeze_support()`` line is omitted then trying to run the frozen
    929    executable will raise :exc:`RuntimeError`.
    930 
    931    Calling ``freeze_support()`` has no effect when invoked on any operating
    932    system other than Windows.  In addition, if the module is being run
    933    normally by the Python interpreter on Windows (the program has not been
    934    frozen), then ``freeze_support()`` has no effect.
    935 
    936 .. function:: get_all_start_methods()
    937 
    938    Returns a list of the supported start methods, the first of which
    939    is the default.  The possible start methods are ``'fork'``,
    940    ``'spawn'`` and ``'forkserver'``.  On Windows only ``'spawn'`` is
    941    available.  On Unix ``'fork'`` and ``'spawn'`` are always
    942    supported, with ``'fork'`` being the default.
    943 
    944    .. versionadded:: 3.4
    945 
    946 .. function:: get_context(method=None)
    947 
    948    Return a context object which has the same attributes as the
    949    :mod:`multiprocessing` module.
    950 
    951    If *method* is ``None`` then the default context is returned.
   Otherwise *method* should be ``'fork'``, ``'spawn'`` or
   ``'forkserver'``.  :exc:`ValueError` is raised if the specified
    954    start method is not available.
    955 
    956    .. versionadded:: 3.4
    957 
    958 .. function:: get_start_method(allow_none=False)
    959 
   Return the name of the start method used for starting processes.
    961 
    962    If the start method has not been fixed and *allow_none* is false,
    963    then the start method is fixed to the default and the name is
    964    returned.  If the start method has not been fixed and *allow_none*
    965    is true then ``None`` is returned.
    966 
    967    The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'``
    968    or ``None``.  ``'fork'`` is the default on Unix, while ``'spawn'`` is
    969    the default on Windows.
    970 
    971    .. versionadded:: 3.4
    972 
    973 .. function:: set_executable()
    974 
    975    Sets the path of the Python interpreter to use when starting a child process.
    976    (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::
    978 
    979       set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
    980 
    981    before they can create child processes.
    982 
    983    .. versionchanged:: 3.4
    984       Now supported on Unix when the ``'spawn'`` start method is used.
    985 
    986 .. function:: set_start_method(method)
    987 
    988    Set the method which should be used to start child processes.
    989    *method* can be ``'fork'``, ``'spawn'`` or ``'forkserver'``.
    990 
    991    Note that this should be called at most once, and it should be
    992    protected inside the ``if __name__ == '__main__'`` clause of the
    993    main module.
    994 
    995    .. versionadded:: 3.4
    996 
    997 .. note::
    998 
    999    :mod:`multiprocessing` contains no analogues of
   1000    :func:`threading.active_count`, :func:`threading.enumerate`,
   1001    :func:`threading.settrace`, :func:`threading.setprofile`,
   1002    :class:`threading.Timer`, or :class:`threading.local`.
   1003 
   1004 
   1005 Connection Objects
   1006 ~~~~~~~~~~~~~~~~~~
   1007 
   1008 Connection objects allow the sending and receiving of picklable objects or
   1009 strings.  They can be thought of as message oriented connected sockets.
   1010 
   1011 Connection objects are usually created using :func:`Pipe` -- see also
   1012 :ref:`multiprocessing-listeners-clients`.
   1013 
   1014 .. class:: Connection
   1015 
   1016    .. method:: send(obj)
   1017 
   1018       Send an object to the other end of the connection which should be read
   1019       using :meth:`recv`.
   1020 
   1021       The object must be picklable.  Very large pickles (approximately 32 MB+,
   1022       though it depends on the OS) may raise a :exc:`ValueError` exception.
   1023 
   1024    .. method:: recv()
   1025 
   1026       Return an object sent from the other end of the connection using
      :meth:`send`.  Blocks until there is something to receive.  Raises
   1028       :exc:`EOFError` if there is nothing left to receive
   1029       and the other end was closed.
   1030 
   1031    .. method:: fileno()
   1032 
   1033       Return the file descriptor or handle used by the connection.
   1034 
   1035    .. method:: close()
   1036 
   1037       Close the connection.
   1038 
   1039       This is called automatically when the connection is garbage collected.
   1040 
   1041    .. method:: poll([timeout])
   1042 
   1043       Return whether there is any data available to be read.
   1044 
   1045       If *timeout* is not specified then it will return immediately.  If
   1046       *timeout* is a number then this specifies the maximum time in seconds to
   1047       block.  If *timeout* is ``None`` then an infinite timeout is used.
   1048 
   1049       Note that multiple connection objects may be polled at once by
   1050       using :func:`multiprocessing.connection.wait`.
   1051 
   1052    .. method:: send_bytes(buffer[, offset[, size]])
   1053 
   1054       Send byte data from a :term:`bytes-like object` as a complete message.
   1055 
   1056       If *offset* is given then data is read from that position in *buffer*.  If
   1057       *size* is given then that many bytes will be read from buffer.  Very large
   1058       buffers (approximately 32 MB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
   1060 
   1061    .. method:: recv_bytes([maxlength])
   1062 
   1063       Return a complete message of byte data sent from the other end of the
   1064       connection as a string.  Blocks until there is something to receive.
   1065       Raises :exc:`EOFError` if there is nothing left
   1066       to receive and the other end has closed.
   1067 
   1068       If *maxlength* is specified and the message is longer than *maxlength*
   1069       then :exc:`OSError` is raised and the connection will no longer be
   1070       readable.
   1071 
   1072       .. versionchanged:: 3.3
   1073          This function used to raise :exc:`IOError`, which is now an
   1074          alias of :exc:`OSError`.
   1075 
   1076 
   1077    .. method:: recv_bytes_into(buffer[, offset])
   1078 
   1079       Read into *buffer* a complete message of byte data sent from the other end
   1080       of the connection and return the number of bytes in the message.  Blocks
   1081       until there is something to receive.  Raises
   1082       :exc:`EOFError` if there is nothing left to receive and the other end was
   1083       closed.
   1084 
   1085       *buffer* must be a writable :term:`bytes-like object`.  If
   1086       *offset* is given then the message will be written into the buffer from
   1087       that position.  Offset must be a non-negative integer less than the
   1088       length of *buffer* (in bytes).
   1089 
   1090       If the buffer is too short then a :exc:`BufferTooShort` exception is
   1091       raised and the complete message is available as ``e.args[0]`` where ``e``
   1092       is the exception instance.
   1093 
   1094    .. versionchanged:: 3.3
   1095       Connection objects themselves can now be transferred between processes
   1096       using :meth:`Connection.send` and :meth:`Connection.recv`.
   1097 
   1098    .. versionadded:: 3.3
   1099       Connection objects now support the context management protocol -- see
   1100       :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
   1101       connection object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
   1102 
   1103 For example:
   1104 
   1105 .. doctest::
   1106 
   1107     >>> from multiprocessing import Pipe
   1108     >>> a, b = Pipe()
   1109     >>> a.send([1, 'hello', None])
   1110     >>> b.recv()
   1111     [1, 'hello', None]
   1112     >>> b.send_bytes(b'thank you')
   1113     >>> a.recv_bytes()
   1114     b'thank you'
   1115     >>> import array
   1116     >>> arr1 = array.array('i', range(5))
   1117     >>> arr2 = array.array('i', [0] * 10)
   1118     >>> a.send_bytes(arr1)
   1119     >>> count = b.recv_bytes_into(arr2)
   1120     >>> assert count == len(arr1) * arr1.itemsize
   1121     >>> arr2
   1122     array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
   1123 
   1124 
   1125 .. warning::
   1126 
   1127     The :meth:`Connection.recv` method automatically unpickles the data it
   1128     receives, which can be a security risk unless you can trust the process
   1129     which sent the message.
   1130 
   1131     Therefore, unless the connection object was produced using :func:`Pipe` you
   1132     should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
   1133     methods after performing some sort of authentication.  See
   1134     :ref:`multiprocessing-auth-keys`.
   1135 
   1136 .. warning::
   1137 
   1138     If a process is killed while it is trying to read or write to a pipe then
   1139     the data in the pipe is likely to become corrupted, because it may become
   1140     impossible to be sure where the message boundaries lie.
   1141 
   1142 
   1143 Synchronization primitives
   1144 ~~~~~~~~~~~~~~~~~~~~~~~~~~
   1145 
   1146 Generally synchronization primitives are not as necessary in a multiprocess
   1147 program as they are in a multithreaded program.  See the documentation for
the :mod:`threading` module.
   1149 
   1150 Note that one can also create synchronization primitives by using a manager
   1151 object -- see :ref:`multiprocessing-managers`.
   1152 
   1153 .. class:: Barrier(parties[, action[, timeout]])
   1154 
   1155    A barrier object: a clone of :class:`threading.Barrier`.
   1156 
   1157    .. versionadded:: 3.3
   1158 
   1159 .. class:: BoundedSemaphore([value])
   1160 
   1161    A bounded semaphore object: a close analog of
   1162    :class:`threading.BoundedSemaphore`.
   1163 
   1164    A solitary difference from its close analog exists: its ``acquire`` method's
   1165    first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
   1166 
   1167    .. note::
   1168       On Mac OS X, this is indistinguishable from :class:`Semaphore` because
   1169       ``sem_getvalue()`` is not implemented on that platform.
   1170 
   1171 .. class:: Condition([lock])
   1172 
   1173    A condition variable: an alias for :class:`threading.Condition`.
   1174 
   1175    If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
   1176    object from :mod:`multiprocessing`.
   1177 
   1178    .. versionchanged:: 3.3
   1179       The :meth:`~threading.Condition.wait_for` method was added.
   1180 
   1181 .. class:: Event()
   1182 
   1183    A clone of :class:`threading.Event`.
   1184 
   1185 
   1186 .. class:: Lock()
   1187 
   1188    A non-recursive lock object: a close analog of :class:`threading.Lock`.
   1189    Once a process or thread has acquired a lock, subsequent attempts to
   1190    acquire it from any process or thread will block until it is released;
   1191    any process or thread may release it.  The concepts and behaviors of
   1192    :class:`threading.Lock` as it applies to threads are replicated here in
   1193    :class:`multiprocessing.Lock` as it applies to either processes or threads,
   1194    except as noted.
   1195 
   1196    Note that :class:`Lock` is actually a factory function which returns an
   1197    instance of ``multiprocessing.synchronize.Lock`` initialized with a
   1198    default context.
   1199 
   1200    :class:`Lock` supports the :term:`context manager` protocol and thus may be
   1201    used in :keyword:`with` statements.
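
   For example, the earlier locking example can be written more compactly
   using a :keyword:`with` statement (a sketch)::

      from multiprocessing import Process, Lock

      def f(lock, i):
          with lock:
              print('hello world', i)

      if __name__ == '__main__':
          lock = Lock()
          for num in range(10):
              Process(target=f, args=(lock, num)).start()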
   1202 
   1203    .. method:: acquire(block=True, timeout=None)
   1204 
   1205       Acquire a lock, blocking or non-blocking.
   1206 
   1207       With the *block* argument set to ``True`` (the default), the method call
   1208       will block until the lock is in an unlocked state, then set it to locked
   1209       and return ``True``.  Note that the name of this first argument differs
   1210       from that in :meth:`threading.Lock.acquire`.
   1211 
   1212       With the *block* argument set to ``False``, the method call does not
   1213       block.  If the lock is currently in a locked state, return ``False``;
   1214       otherwise set the lock to a locked state and return ``True``.
   1215 
   1216       When invoked with a positive, floating-point value for *timeout*, block
   1217       for at most the number of seconds specified by *timeout* as long as
   1218       the lock can not be acquired.  Invocations with a negative value for
   1219       *timeout* are equivalent to a *timeout* of zero.  Invocations with a
   1220       *timeout* value of ``None`` (the default) set the timeout period to
   1221       infinite.  Note that the treatment of negative or ``None`` values for
   1222       *timeout* differs from the implemented behavior in
   1223       :meth:`threading.Lock.acquire`.  The *timeout* argument has no practical
   1224       implications if the *block* argument is set to ``False`` and is thus
   1225       ignored.  Returns ``True`` if the lock has been acquired or ``False`` if
   1226       the timeout period has elapsed.
   1227 
   1228 
   1229    .. method:: release()
   1230 
   1231       Release a lock.  This can be called from any process or thread, not only
   1232       the process or thread which originally acquired the lock.
   1233 
   1234       Behavior is the same as in :meth:`threading.Lock.release` except that
   1235       when invoked on an unlocked lock, a :exc:`ValueError` is raised.
   1236 
   1237 
   1238 .. class:: RLock()
   1239 
   1240    A recursive lock object: a close analog of :class:`threading.RLock`.  A
   1241    recursive lock must be released by the process or thread that acquired it.
   1242    Once a process or thread has acquired a recursive lock, the same process
   1243    or thread may acquire it again without blocking; that process or thread
   1244    must release it once for each time it has been acquired.
   1245 
   1246    Note that :class:`RLock` is actually a factory function which returns an
   1247    instance of ``multiprocessing.synchronize.RLock`` initialized with a
   1248    default context.
   1249 
   1250    :class:`RLock` supports the :term:`context manager` protocol and thus may be
   1251    used in :keyword:`with` statements.
   1252 
   1253 
   1254    .. method:: acquire(block=True, timeout=None)
   1255 
   1256       Acquire a lock, blocking or non-blocking.
   1257 
   1258       When invoked with the *block* argument set to ``True``, block until the
   1259       lock is in an unlocked state (not owned by any process or thread) unless
   1260       the lock is already owned by the current process or thread.  The current
   1261       process or thread then takes ownership of the lock (if it does not
   1262       already have ownership) and the recursion level inside the lock increments
   1263       by one, resulting in a return value of ``True``.  Note that there are
   1264       several differences in this first argument's behavior compared to the
   1265       implementation of :meth:`threading.RLock.acquire`, starting with the name
   1266       of the argument itself.
   1267 
   1268       When invoked with the *block* argument set to ``False``, do not block.
   1269       If the lock has already been acquired (and thus is owned) by another
   1270       process or thread, the current process or thread does not take ownership
   1271       and the recursion level within the lock is not changed, resulting in
   1272       a return value of ``False``.  If the lock is in an unlocked state, the
   1273       current process or thread takes ownership and the recursion level is
   1274       incremented, resulting in a return value of ``True``.
   1275 
   1276       Use and behaviors of the *timeout* argument are the same as in
   1277       :meth:`Lock.acquire`.  Note that some of these behaviors of *timeout*
   1278       differ from the implemented behaviors in :meth:`threading.RLock.acquire`.
   1279 
   1280 
   1281    .. method:: release()
   1282 
   1283       Release a lock, decrementing the recursion level.  If after the
   1284       decrement the recursion level is zero, reset the lock to unlocked (not
   1285       owned by any process or thread) and if any other processes or threads
   1286       are blocked waiting for the lock to become unlocked, allow exactly one
   1287       of them to proceed.  If after the decrement the recursion level is still
   1288       nonzero, the lock remains locked and owned by the calling process or
   1289       thread.
   1290 
   1291       Only call this method when the calling process or thread owns the lock.
   1292       An :exc:`AssertionError` is raised if this method is called by a process
   1293       or thread other than the owner or if the lock is in an unlocked (unowned)
   1294       state.  Note that the type of exception raised in this situation
   1295       differs from the implemented behavior in :meth:`threading.RLock.release`.
   1296 
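   A minimal sketch of the reentrant behaviour (the nested :keyword:`with`
   blocks merely stand in for nested function calls)::

      from multiprocessing import RLock

      rlock = RLock()
      with rlock:          # recursion level becomes 1
          with rlock:      # the same process may acquire it again; level 2
              pass
          # recursion level back to 1 here
      # lock fully released here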
   1297 
   1298 .. class:: Semaphore([value])
   1299 
   1300    A semaphore object: a close analog of :class:`threading.Semaphore`.
   1301 
   1302    A solitary difference from its close analog exists: its ``acquire`` method's
   1303    first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
   1304 
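   For illustration only (the worker function, the sleep and the limit of
   three are assumptions), a semaphore can bound how many processes run a
   section of code at the same time::

      from multiprocessing import Process, Semaphore
      import time

      def worker(sem, i):
          with sem:                      # at most three workers hold it at once
              time.sleep(0.1)
              print('worker', i, 'done')

      if __name__ == '__main__':
          sem = Semaphore(3)
          workers = [Process(target=worker, args=(sem, i)) for i in range(10)]
          for p in workers:
              p.start()
          for p in workers:
              p.join()
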
   1305 .. note::
   1306 
   1307    On Mac OS X, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
   1308    a timeout will emulate that function's behavior using a sleeping loop.
   1309 
   1310 .. note::
   1311 
   1312    If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
   1313    blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
   1314    :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
   1315    or :meth:`Condition.wait` then the call will be immediately interrupted and
   1316    :exc:`KeyboardInterrupt` will be raised.
   1317 
   1318    This differs from the behaviour of :mod:`threading` where SIGINT will be
   1319    ignored while the equivalent blocking calls are in progress.
   1320 
   1321 .. note::
   1322 
   1323    Some of this package's functionality requires a functioning shared semaphore
   1324    implementation on the host operating system. Without one, the
   1325    :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
   1326    import it will result in an :exc:`ImportError`. See
   1327    :issue:`3770` for additional information.
   1328 
   1329 
   1330 Shared :mod:`ctypes` Objects
   1331 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   1332 
   1333 It is possible to create shared objects using shared memory which can be
   1334 inherited by child processes.
   1335 
   1336 .. function:: Value(typecode_or_type, *args, lock=True)
   1337 
   1338    Return a :mod:`ctypes` object allocated from shared memory.  By default the
   1339    return value is actually a synchronized wrapper for the object.  The object
   1340    itself can be accessed via the *value* attribute of a :class:`Value`.
   1341 
   1342    *typecode_or_type* determines the type of the returned object: it is either a
   1343    ctypes type or a one character typecode of the kind used by the :mod:`array`
   1344    module.  *\*args* is passed on to the constructor for the type.
   1345 
   1346    If *lock* is ``True`` (the default) then a new recursive lock
   1347    object is created to synchronize access to the value.  If *lock* is
   1348    a :class:`Lock` or :class:`RLock` object then that will be used to
   1349    synchronize access to the value.  If *lock* is ``False`` then
   1350    access to the returned object will not be automatically protected
   1351    by a lock, so it will not necessarily be "process-safe".
   1352 
   1353    Operations like ``+=`` which involve a read and write are not
   1354    atomic.  So if, for instance, you want to atomically increment a
   1355    shared value it is insufficient to just do ::
   1356 
   1357        counter.value += 1
   1358 
   1359    Assuming the associated lock is recursive (which it is by default)
   1360    you can instead do ::
   1361 
   1362        with counter.get_lock():
   1363            counter.value += 1
   1364 
   1365    Note that *lock* is a keyword-only argument.
   1366 
   1367 .. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
   1368 
   1369    Return a ctypes array allocated from shared memory.  By default the return
   1370    value is actually a synchronized wrapper for the array.
   1371 
   1372    *typecode_or_type* determines the type of the elements of the returned array:
   1373    it is either a ctypes type or a one character typecode of the kind used by
   1374    the :mod:`array` module.  If *size_or_initializer* is an integer, then it
   1375    determines the length of the array, and the array will be initially zeroed.
   1376    Otherwise, *size_or_initializer* is a sequence which is used to initialize
   1377    the array and whose length determines the length of the array.
   1378 
   1379    If *lock* is ``True`` (the default) then a new lock object is created to
   1380    synchronize access to the value.  If *lock* is a :class:`Lock` or
   1381    :class:`RLock` object then that will be used to synchronize access to the
   1382    value.  If *lock* is ``False`` then access to the returned object will not be
   1383    automatically protected by a lock, so it will not necessarily be
   1384    "process-safe".
   1385 
   Note that *lock* is a keyword-only argument.
   1387 
   1388    Note that an array of :data:`ctypes.c_char` has *value* and *raw*
   1389    attributes which allow one to use it to store and retrieve strings.
   1390 
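   For instance (shown purely for illustration), a character array created
   from a bytes object exposes both attributes::

      from multiprocessing import Array

      s = Array('c', b'hello world')
      s.value     # b'hello world' -- the bytes up to the first NUL byte
      s.raw       # the full underlying buffer, here also b'hello world'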
   1391 
   1392 The :mod:`multiprocessing.sharedctypes` module
   1393 >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
   1394 
   1395 .. module:: multiprocessing.sharedctypes
   1396    :synopsis: Allocate ctypes objects from shared memory.
   1397 
   1398 The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
   1399 :mod:`ctypes` objects from shared memory which can be inherited by child
   1400 processes.
   1401 
   1402 .. note::
   1403 
   Although it is possible to store a pointer in shared memory, remember that
   it will refer to a location in the address space of a specific process.
   The pointer is quite likely to be invalid in the context of a second
   process, and trying to dereference it from that process may cause a crash.
   1409 
   1410 .. function:: RawArray(typecode_or_type, size_or_initializer)
   1411 
   1412    Return a ctypes array allocated from shared memory.
   1413 
   1414    *typecode_or_type* determines the type of the elements of the returned array:
   1415    it is either a ctypes type or a one character typecode of the kind used by
   1416    the :mod:`array` module.  If *size_or_initializer* is an integer then it
   1417    determines the length of the array, and the array will be initially zeroed.
   1418    Otherwise *size_or_initializer* is a sequence which is used to initialize the
   1419    array and whose length determines the length of the array.
   1420 
   1421    Note that setting and getting an element is potentially non-atomic -- use
   1422    :func:`Array` instead to make sure that access is automatically synchronized
   1423    using a lock.
   1424 
   1425 .. function:: RawValue(typecode_or_type, *args)
   1426 
   1427    Return a ctypes object allocated from shared memory.
   1428 
   1429    *typecode_or_type* determines the type of the returned object: it is either a
   1430    ctypes type or a one character typecode of the kind used by the :mod:`array`
   1431    module.  *\*args* is passed on to the constructor for the type.
   1432 
   1433    Note that setting and getting the value is potentially non-atomic -- use
   1434    :func:`Value` instead to make sure that access is automatically synchronized
   1435    using a lock.
   1436 
   1437    Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
   1438    attributes which allow one to use it to store and retrieve strings -- see
   1439    documentation for :mod:`ctypes`.
   1440 
   1441 .. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
   1442 
   1443    The same as :func:`RawArray` except that depending on the value of *lock* a
   1444    process-safe synchronization wrapper may be returned instead of a raw ctypes
   1445    array.
   1446 
   1447    If *lock* is ``True`` (the default) then a new lock object is created to
   1448    synchronize access to the value.  If *lock* is a
   1449    :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
   1450    then that will be used to synchronize access to the
   1451    value.  If *lock* is ``False`` then access to the returned object will not be
   1452    automatically protected by a lock, so it will not necessarily be
   1453    "process-safe".
   1454 
   1455    Note that *lock* is a keyword-only argument.
   1456 
   1457 .. function:: Value(typecode_or_type, *args, lock=True)
   1458 
   1459    The same as :func:`RawValue` except that depending on the value of *lock* a
   1460    process-safe synchronization wrapper may be returned instead of a raw ctypes
   1461    object.
   1462 
   1463    If *lock* is ``True`` (the default) then a new lock object is created to
   1464    synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
   1465    :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
   1466    value.  If *lock* is ``False`` then access to the returned object will not be
   1467    automatically protected by a lock, so it will not necessarily be
   1468    "process-safe".
   1469 
   1470    Note that *lock* is a keyword-only argument.
   1471 
   1472 .. function:: copy(obj)
   1473 
   1474    Return a ctypes object allocated from shared memory which is a copy of the
   1475    ctypes object *obj*.
   1476 
   1477 .. function:: synchronized(obj[, lock])
   1478 
   1479    Return a process-safe wrapper object for a ctypes object which uses *lock* to
   1480    synchronize access.  If *lock* is ``None`` (the default) then a
   1481    :class:`multiprocessing.RLock` object is created automatically.
   1482 
   1483    A synchronized wrapper will have two methods in addition to those of the
   1484    object it wraps: :meth:`get_obj` returns the wrapped object and
   1485    :meth:`get_lock` returns the lock object used for synchronization.
   1486 
   1487    Note that accessing the ctypes object through the wrapper can be a lot slower
   1488    than accessing the raw ctypes object.
   1489 
   1490    .. versionchanged:: 3.5
   1491       Synchronized objects support the :term:`context manager` protocol.
   1492 
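   A brief sketch (the raw array and the squaring loop are merely an
   example) of wrapping an unsynchronized object by hand::

      from multiprocessing.sharedctypes import RawArray, synchronized

      arr = synchronized(RawArray('i', 10))
      with arr:                 # acquires the automatically created RLock
          for i in range(10):
              arr[i] = i * i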
   1493 
   1494 The table below compares the syntax for creating shared ctypes objects from
   1495 shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
   1496 subclass of :class:`ctypes.Structure`.)
   1497 
   1498 ==================== ========================== ===========================
   1499 ctypes               sharedctypes using type    sharedctypes using typecode
   1500 ==================== ========================== ===========================
   1501 c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
   1502 MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
   1503 (c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
   1504 (c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
   1505 ==================== ========================== ===========================
   1506 
   1507 
   1508 Below is an example where a number of ctypes objects are modified by a child
   1509 process::
   1510 
   1511    from multiprocessing import Process, Lock
   1512    from multiprocessing.sharedctypes import Value, Array
   1513    from ctypes import Structure, c_double
   1514 
   1515    class Point(Structure):
   1516        _fields_ = [('x', c_double), ('y', c_double)]
   1517 
   1518    def modify(n, x, s, A):
   1519        n.value **= 2
   1520        x.value **= 2
   1521        s.value = s.value.upper()
   1522        for a in A:
   1523            a.x **= 2
   1524            a.y **= 2
   1525 
   1526    if __name__ == '__main__':
   1527        lock = Lock()
   1528 
   1529        n = Value('i', 7)
   1530        x = Value(c_double, 1.0/3.0, lock=False)
   1531        s = Array('c', b'hello world', lock=lock)
   1532        A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
   1533 
   1534        p = Process(target=modify, args=(n, x, s, A))
   1535        p.start()
   1536        p.join()
   1537 
   1538        print(n.value)
   1539        print(x.value)
   1540        print(s.value)
   1541        print([(a.x, a.y) for a in A])
   1542 
   1543 
   1544 .. highlight:: none
   1545 
   1546 The results printed are ::
   1547 
   1548     49
   1549     0.1111111111111111
   1550     HELLO WORLD
   1551     [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
   1552 
   1553 .. highlight:: python3
   1554 
   1555 
   1556 .. _multiprocessing-managers:
   1557 
   1558 Managers
   1559 ~~~~~~~~
   1560 
   1561 Managers provide a way to create data which can be shared between different
   1562 processes, including sharing over a network between processes running on
   1563 different machines. A manager object controls a server process which manages
   1564 *shared objects*.  Other processes can access the shared objects by using
   1565 proxies.
   1566 
   1567 .. function:: multiprocessing.Manager()
   1568 
   1569    Returns a started :class:`~multiprocessing.managers.SyncManager` object which
   1570    can be used for sharing objects between processes.  The returned manager
   1571    object corresponds to a spawned child process and has methods which will
   1572    create shared objects and return corresponding proxies.
   1573 
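   A small, illustrative sketch (the worker function ``f`` is an assumption)
   of sharing a dictionary and a list through a manager::

      from multiprocessing import Process, Manager

      def f(d, l):
          d[1] = '1'
          d['2'] = 2
          l.reverse()

      if __name__ == '__main__':
          with Manager() as manager:
              d = manager.dict()
              l = manager.list(range(10))

              p = Process(target=f, args=(d, l))
              p.start()
              p.join()

              print(d)    # {1: '1', '2': 2}
              print(l)    # [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
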
   1574 .. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
   1576 
Manager processes will be shut down as soon as they are garbage collected or
   1578 their parent process exits.  The manager classes are defined in the
   1579 :mod:`multiprocessing.managers` module:
   1580 
   1581 .. class:: BaseManager([address[, authkey]])
   1582 
   1583    Create a BaseManager object.
   1584 
   1585    Once created one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
   1586    that the manager object refers to a started manager process.
   1587 
   1588    *address* is the address on which the manager process listens for new
   1589    connections.  If *address* is ``None`` then an arbitrary one is chosen.
   1590 
   1591    *authkey* is the authentication key which will be used to check the
   1592    validity of incoming connections to the server process.  If
   1593    *authkey* is ``None`` then ``current_process().authkey`` is used.
   1594    Otherwise *authkey* is used and it must be a byte string.
   1595 
   1596    .. method:: start([initializer[, initargs]])
   1597 
      Start a subprocess which runs the manager.  If *initializer* is not ``None``
   1599       then the subprocess will call ``initializer(*initargs)`` when it starts.
   1600 
   1601    .. method:: get_server()
   1602 
   1603       Returns a :class:`Server` object which represents the actual server under
   1604       the control of the Manager. The :class:`Server` object supports the
   1605       :meth:`serve_forever` method::
   1606 
   1607       >>> from multiprocessing.managers import BaseManager
   1608       >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
   1609       >>> server = manager.get_server()
   1610       >>> server.serve_forever()
   1611 
   1612       :class:`Server` additionally has an :attr:`address` attribute.
   1613 
   1614    .. method:: connect()
   1615 
   1616       Connect a local manager object to a remote manager process::
   1617 
   1618       >>> from multiprocessing.managers import BaseManager
   1619       >>> m = BaseManager(address=('127.0.0.1', 5000), authkey=b'abc')
   1620       >>> m.connect()
   1621 
   1622    .. method:: shutdown()
   1623 
   1624       Stop the process used by the manager.  This is only available if
   1625       :meth:`start` has been used to start the server process.
   1626 
   1627       This can be called multiple times.
   1628 
   1629    .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
   1630 
   1631       A classmethod which can be used for registering a type or callable with
   1632       the manager class.
   1633 
   1634       *typeid* is a "type identifier" which is used to identify a particular
   1635       type of shared object.  This must be a string.
   1636 
   1637       *callable* is a callable used for creating objects for this type
   1638       identifier.  If a manager instance will be connected to the
   1639       server using the :meth:`connect` method, or if the
   1640       *create_method* argument is ``False`` then this can be left as
   1641       ``None``.
   1642 
   1643       *proxytype* is a subclass of :class:`BaseProxy` which is used to create
   1644       proxies for shared objects with this *typeid*.  If ``None`` then a proxy
   1645       class is created automatically.
   1646 
   1647       *exposed* is used to specify a sequence of method names which proxies for
   1648       this typeid should be allowed to access using
   1649       :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
   1650       :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
   1651       where no exposed list is specified, all "public methods" of the shared
   1652       object will be accessible.  (Here a "public method" means any attribute
   1653       which has a :meth:`~object.__call__` method and whose name does not begin
   1654       with ``'_'``.)
   1655 
   1656       *method_to_typeid* is a mapping used to specify the return type of those
   1657       exposed methods which should return a proxy.  It maps method names to
   1658       typeid strings.  (If *method_to_typeid* is ``None`` then
   1659       :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
   1660       method's name is not a key of this mapping or if the mapping is ``None``
   1661       then the object returned by the method will be copied by value.
   1662 
   1663       *create_method* determines whether a method should be created with name
   1664       *typeid* which can be used to tell the server process to create a new
   1665       shared object and return a proxy for it.  By default it is ``True``.
   1666 
   1667    :class:`BaseManager` instances also have one read-only property:
   1668 
   1669    .. attribute:: address
   1670 
   1671       The address used by the manager.
   1672 
   1673    .. versionchanged:: 3.3
   1674       Manager objects support the context management protocol -- see
   1675       :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` starts the
   1676       server process (if it has not already started) and then returns the
   1677       manager object.  :meth:`~contextmanager.__exit__` calls :meth:`shutdown`.
   1678 
   1679       In previous versions :meth:`~contextmanager.__enter__` did not start the
   1680       manager's server process if it was not already started.
   1681 
   1682 .. class:: SyncManager
   1683 
   1684    A subclass of :class:`BaseManager` which can be used for the synchronization
   1685    of processes.  Objects of this type are returned by
   1686    :func:`multiprocessing.Manager`.
   1687 
   1688    Its methods create and return :ref:`multiprocessing-proxy_objects` for a
   1689    number of commonly used data types to be synchronized across processes.
   1690    This notably includes shared lists and dictionaries.
   1691 
   1692    .. method:: Barrier(parties[, action[, timeout]])
   1693 
   1694       Create a shared :class:`threading.Barrier` object and return a
   1695       proxy for it.
   1696 
   1697       .. versionadded:: 3.3
   1698 
   1699    .. method:: BoundedSemaphore([value])
   1700 
   1701       Create a shared :class:`threading.BoundedSemaphore` object and return a
   1702       proxy for it.
   1703 
   1704    .. method:: Condition([lock])
   1705 
   1706       Create a shared :class:`threading.Condition` object and return a proxy for
   1707       it.
   1708 
   1709       If *lock* is supplied then it should be a proxy for a
   1710       :class:`threading.Lock` or :class:`threading.RLock` object.
   1711 
   1712       .. versionchanged:: 3.3
   1713          The :meth:`~threading.Condition.wait_for` method was added.
   1714 
   1715    .. method:: Event()
   1716 
   1717       Create a shared :class:`threading.Event` object and return a proxy for it.
   1718 
   1719    .. method:: Lock()
   1720 
   1721       Create a shared :class:`threading.Lock` object and return a proxy for it.
   1722 
   1723    .. method:: Namespace()
   1724 
   1725       Create a shared :class:`Namespace` object and return a proxy for it.
   1726 
   1727    .. method:: Queue([maxsize])
   1728 
   1729       Create a shared :class:`queue.Queue` object and return a proxy for it.
   1730 
   1731    .. method:: RLock()
   1732 
   1733       Create a shared :class:`threading.RLock` object and return a proxy for it.
   1734 
   1735    .. method:: Semaphore([value])
   1736 
   1737       Create a shared :class:`threading.Semaphore` object and return a proxy for
   1738       it.
   1739 
   1740    .. method:: Array(typecode, sequence)
   1741 
   1742       Create an array and return a proxy for it.
   1743 
   1744    .. method:: Value(typecode, value)
   1745 
   1746       Create an object with a writable ``value`` attribute and return a proxy
   1747       for it.
   1748 
   1749    .. method:: dict()
   1750                dict(mapping)
   1751                dict(sequence)
   1752 
   1753       Create a shared :class:`dict` object and return a proxy for it.
   1754 
   1755    .. method:: list()
   1756                list(sequence)
   1757 
   1758       Create a shared :class:`list` object and return a proxy for it.
   1759 
   1760    .. versionchanged:: 3.6
   1761       Shared objects are capable of being nested.  For example, a shared
   1762       container object such as a shared list can contain other shared objects
   1763       which will all be managed and synchronized by the :class:`SyncManager`.
   1764 
   1765 .. class:: Namespace
   1766 
   1767    A type that can register with :class:`SyncManager`.
   1768 
   1769    A namespace object has no public methods, but does have writable attributes.
   1770    Its representation shows the values of its attributes.
   1771 
   1772    However, when using a proxy for a namespace object, an attribute beginning
   1773    with ``'_'`` will be an attribute of the proxy and not an attribute of the
   1774    referent:
   1775 
   1776    .. doctest::
   1777 
   1778     >>> manager = multiprocessing.Manager()
   1779     >>> Global = manager.Namespace()
   1780     >>> Global.x = 10
   1781     >>> Global.y = 'hello'
   1782     >>> Global._z = 12.3    # this is an attribute of the proxy
   1783     >>> print(Global)
   1784     Namespace(x=10, y='hello')
   1785 
   1786 
   1787 Customized managers
   1788 >>>>>>>>>>>>>>>>>>>
   1789 
   1790 To create one's own manager, one creates a subclass of :class:`BaseManager` and
   1791 uses the :meth:`~BaseManager.register` classmethod to register new types or
   1792 callables with the manager class.  For example::
   1793 
   1794    from multiprocessing.managers import BaseManager
   1795 
   1796    class MathsClass:
   1797        def add(self, x, y):
   1798            return x + y
   1799        def mul(self, x, y):
   1800            return x * y
   1801 
   1802    class MyManager(BaseManager):
   1803        pass
   1804 
   1805    MyManager.register('Maths', MathsClass)
   1806 
   1807    if __name__ == '__main__':
   1808        with MyManager() as manager:
   1809            maths = manager.Maths()
   1810            print(maths.add(4, 3))         # prints 7
   1811            print(maths.mul(7, 8))         # prints 56
   1812 
   1813 
   1814 Using a remote manager
   1815 >>>>>>>>>>>>>>>>>>>>>>
   1816 
   1817 It is possible to run a manager server on one machine and have clients use it
   1818 from other machines (assuming that the firewalls involved allow it).
   1819 
   1820 Running the following commands creates a server for a single shared queue which
   1821 remote clients can access::
   1822 
   1823    >>> from multiprocessing.managers import BaseManager
   1824    >>> import queue
   1825    >>> queue = queue.Queue()
   1826    >>> class QueueManager(BaseManager): pass
   1827    >>> QueueManager.register('get_queue', callable=lambda:queue)
   1828    >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
   1829    >>> s = m.get_server()
   1830    >>> s.serve_forever()
   1831 
   1832 One client can access the server as follows::
   1833 
   1834    >>> from multiprocessing.managers import BaseManager
   1835    >>> class QueueManager(BaseManager): pass
   1836    >>> QueueManager.register('get_queue')
   1837    >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
   1838    >>> m.connect()
   1839    >>> queue = m.get_queue()
   1840    >>> queue.put('hello')
   1841 
   1842 Another client can also use it::
   1843 
   1844    >>> from multiprocessing.managers import BaseManager
   1845    >>> class QueueManager(BaseManager): pass
   1846    >>> QueueManager.register('get_queue')
   1847    >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
   1848    >>> m.connect()
   1849    >>> queue = m.get_queue()
   1850    >>> queue.get()
   1851    'hello'
   1852 
   1853 Local processes can also access that queue, using the code from above on the
   1854 client to access it remotely::
   1855 
   1856     >>> from multiprocessing import Process, Queue
   1857     >>> from multiprocessing.managers import BaseManager
   1858     >>> class Worker(Process):
   1859     ...     def __init__(self, q):
   1860     ...         self.q = q
   1861     ...         super(Worker, self).__init__()
   1862     ...     def run(self):
   1863     ...         self.q.put('local hello')
   1864     ...
   1865     >>> queue = Queue()
   1866     >>> w = Worker(queue)
   1867     >>> w.start()
   1868     >>> class QueueManager(BaseManager): pass
   1869     ...
   1870     >>> QueueManager.register('get_queue', callable=lambda: queue)
   1871     >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
   1872     >>> s = m.get_server()
   1873     >>> s.serve_forever()
   1874 
   1875 .. _multiprocessing-proxy_objects:
   1876 
   1877 Proxy Objects
   1878 ~~~~~~~~~~~~~
   1879 
   1880 A proxy is an object which *refers* to a shared object which lives (presumably)
   1881 in a different process.  The shared object is said to be the *referent* of the
   1882 proxy.  Multiple proxy objects may have the same referent.
   1883 
   1884 A proxy object has methods which invoke corresponding methods of its referent
   1885 (although not every method of the referent will necessarily be available through
   1886 the proxy).  In this way, a proxy can be used just like its referent can:
   1887 
   1888 .. doctest::
   1889 
   1890    >>> from multiprocessing import Manager
   1891    >>> manager = Manager()
   1892    >>> l = manager.list([i*i for i in range(10)])
   1893    >>> print(l)
   1894    [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
   1895    >>> print(repr(l))
   1896    <ListProxy object, typeid 'list' at 0x...>
   1897    >>> l[4]
   1898    16
   1899    >>> l[2:5]
   1900    [4, 9, 16]
   1901 
   1902 Notice that applying :func:`str` to a proxy will return the representation of
   1903 the referent, whereas applying :func:`repr` will return the representation of
   1904 the proxy.
   1905 
   1906 An important feature of proxy objects is that they are picklable so they can be
   1907 passed between processes.  As such, a referent can contain
   1908 :ref:`multiprocessing-proxy_objects`.  This permits nesting of these managed
   1909 lists, dicts, and other :ref:`multiprocessing-proxy_objects`:
   1910 
   1911 .. doctest::
   1912 
   1913    >>> a = manager.list()
   1914    >>> b = manager.list()
   1915    >>> a.append(b)         # referent of a now contains referent of b
   1916    >>> print(a, b)
   1917    [<ListProxy object, typeid 'list' at ...>] []
   1918    >>> b.append('hello')
   1919    >>> print(a[0], b)
   1920    ['hello'] ['hello']
   1921 
   1922 Similarly, dict and list proxies may be nested inside one another::
   1923 
   1924    >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
   1925    >>> d_first_inner = l_outer[0]
   1926    >>> d_first_inner['a'] = 1
   1927    >>> d_first_inner['b'] = 2
   1928    >>> l_outer[1]['c'] = 3
   1929    >>> l_outer[1]['z'] = 26
   1930    >>> print(l_outer[0])
   1931    {'a': 1, 'b': 2}
   1932    >>> print(l_outer[1])
   1933    {'c': 3, 'z': 26}
   1934 
   1935 If standard (non-proxy) :class:`list` or :class:`dict` objects are contained
   1936 in a referent, modifications to those mutable values will not be propagated
   1937 through the manager because the proxy has no way of knowing when the values
   1938 contained within are modified.  However, storing a value in a container proxy
   1939 (which triggers a ``__setitem__`` on the proxy object) does propagate through
   1940 the manager and so to effectively modify such an item, one could re-assign the
   1941 modified value to the container proxy::
   1942 
   1943    # create a list proxy and append a mutable object (a dictionary)
   1944    lproxy = manager.list()
   1945    lproxy.append({})
   1946    # now mutate the dictionary
   1947    d = lproxy[0]
   1948    d['a'] = 1
   1949    d['b'] = 2
   1950    # at this point, the changes to d are not yet synced, but by
   1951    # updating the dictionary, the proxy is notified of the change
   1952    lproxy[0] = d
   1953 
   1954 This approach is perhaps less convenient than employing nested
   1955 :ref:`multiprocessing-proxy_objects` for most use cases but also
   1956 demonstrates a level of control over the synchronization.
   1957 
   1958 .. note::
   1959 
   1960    The proxy types in :mod:`multiprocessing` do nothing to support comparisons
   1961    by value.  So, for instance, we have:
   1962 
   1963    .. doctest::
   1964 
   1965        >>> manager.list([1,2,3]) == [1,2,3]
   1966        False
   1967 
   1968    One should just use a copy of the referent instead when making comparisons.
   1969 
   1970 .. class:: BaseProxy
   1971 
   1972    Proxy objects are instances of subclasses of :class:`BaseProxy`.
   1973 
   1974    .. method:: _callmethod(methodname[, args[, kwds]])
   1975 
   1976       Call and return the result of a method of the proxy's referent.
   1977 
   1978       If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
   1979 
   1980          proxy._callmethod(methodname, args, kwds)
   1981 
   1982       will evaluate the expression ::
   1983 
   1984          getattr(obj, methodname)(*args, **kwds)
   1985 
   1986       in the manager's process.
   1987 
   1988       The returned value will be a copy of the result of the call or a proxy to
   1989       a new shared object -- see documentation for the *method_to_typeid*
   1990       argument of :meth:`BaseManager.register`.
   1991 
      If an exception is raised by the call, then it is re-raised by
   1993       :meth:`_callmethod`.  If some other exception is raised in the manager's
   1994       process then this is converted into a :exc:`RemoteError` exception and is
   1995       raised by :meth:`_callmethod`.
   1996 
   1997       Note in particular that an exception will be raised if *methodname* has
   1998       not been *exposed*.
   1999 
   2000       An example of the usage of :meth:`_callmethod`:
   2001 
   2002       .. doctest::
   2003 
   2004          >>> l = manager.list(range(10))
   2005          >>> l._callmethod('__len__')
   2006          10
   2007          >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
   2008          [2, 3, 4, 5, 6]
   2009          >>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
   2010          Traceback (most recent call last):
   2011          ...
   2012          IndexError: list index out of range
   2013 
   2014    .. method:: _getvalue()
   2015 
   2016       Return a copy of the referent.
   2017 
   2018       If the referent is unpicklable then this will raise an exception.
   2019 
   2020    .. method:: __repr__
   2021 
   2022       Return a representation of the proxy object.
   2023 
   2024    .. method:: __str__
   2025 
   2026       Return the representation of the referent.
   2027 
   2028 
   2029 Cleanup
   2030 >>>>>>>
   2031 
   2032 A proxy object uses a weakref callback so that when it gets garbage collected it
   2033 deregisters itself from the manager which owns its referent.
   2034 
   2035 A shared object gets deleted from the manager process when there are no longer
   2036 any proxies referring to it.
   2037 
   2038 
   2039 Process Pools
   2040 ~~~~~~~~~~~~~
   2041 
   2042 .. module:: multiprocessing.pool
   2043    :synopsis: Create pools of processes.
   2044 
   2045 One can create a pool of processes which will carry out tasks submitted to it
   2046 with the :class:`Pool` class.
   2047 
.. class:: Pool([processes[, initializer[, initargs[, maxtasksperchild[, context]]]]])
   2049 
   2050    A process pool object which controls a pool of worker processes to which jobs
   2051    can be submitted.  It supports asynchronous results with timeouts and
   2052    callbacks and has a parallel map implementation.
   2053 
   2054    *processes* is the number of worker processes to use.  If *processes* is
   2055    ``None`` then the number returned by :func:`os.cpu_count` is used.
   2056 
   2057    If *initializer* is not ``None`` then each worker process will call
   2058    ``initializer(*initargs)`` when it starts.
   2059 
   2060    *maxtasksperchild* is the number of tasks a worker process can complete
   2061    before it will exit and be replaced with a fresh worker process, to enable
   2062    unused resources to be freed. The default *maxtasksperchild* is ``None``, which
   2063    means worker processes will live as long as the pool.
   2064 
   2065    *context* can be used to specify the context used for starting
   2066    the worker processes.  Usually a pool is created using the
   2067    function :func:`multiprocessing.Pool` or the :meth:`Pool` method
   2068    of a context object.  In both cases *context* is set
   2069    appropriately.
   2070 
   2071    Note that the methods of the pool object should only be called by
   2072    the process which created the pool.
   2073 
   2074    .. versionadded:: 3.2
   2075       *maxtasksperchild*
   2076 
   2077    .. versionadded:: 3.4
   2078       *context*
   2079 
   2080    .. note::
   2081 
   2082       Worker processes within a :class:`Pool` typically live for the complete
   2083       duration of the Pool's work queue. A frequent pattern found in other
   2084       systems (such as Apache, mod_wsgi, etc) to free resources held by
      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new process
      being spawned to replace the old one.  The *maxtasksperchild*
   2088       argument to the :class:`Pool` exposes this ability to the end user.
   2089 
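   For illustration only (the initializer and the limit of 50 tasks per
   worker are arbitrary choices, not defaults), a pool whose workers are
   periodically replaced might be created like this::

      from multiprocessing import Pool

      def init_worker():
          print('starting a fresh worker process')

      if __name__ == '__main__':
          with Pool(processes=4, initializer=init_worker,
                    maxtasksperchild=50) as pool:
              print(pool.map(abs, [-1, -2, 3]))    # [1, 2, 3]
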
   2090    .. method:: apply(func[, args[, kwds]])
   2091 
   2092       Call *func* with arguments *args* and keyword arguments *kwds*.  It blocks
   2093       until the result is ready. Given this blocks, :meth:`apply_async` is
   2094       better suited for performing work in parallel. Additionally, *func*
   2095       is only executed in one of the workers of the pool.
   2096 
   2097    .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]])
   2098 
   2099       A variant of the :meth:`apply` method which returns a result object.
   2100 
   2101       If *callback* is specified then it should be a callable which accepts a
      single argument.  When the result becomes ready, *callback* is applied to
      it, unless the call failed, in which case the *error_callback* is applied
      instead.
   2105 
   2106       If *error_callback* is specified then it should be a callable which
   2107       accepts a single argument.  If the target function fails, then
   2108       the *error_callback* is called with the exception instance.
   2109 
   2110       Callbacks should complete immediately since otherwise the thread which
   2111       handles the results will get blocked.
   2112 
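      A short sketch of the callback pattern (the ``square`` function and the
      ``results`` list are illustrative, not part of the API)::

         from multiprocessing import Pool

         def square(x):
             return x * x

         if __name__ == '__main__':
             results = []
             with Pool(2) as pool:
                 r = pool.apply_async(square, (7,),
                                      callback=results.append,
                                      error_callback=print)
                 r.wait()
             print(results)      # [49]
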
   2113    .. method:: map(func, iterable[, chunksize])
   2114 
   2115       A parallel equivalent of the :func:`map` built-in function (it supports only
   2116       one *iterable* argument though).  It blocks until the result is ready.
   2117 
   2118       This method chops the iterable into a number of chunks which it submits to
   2119       the process pool as separate tasks.  The (approximate) size of these
   2120       chunks can be specified by setting *chunksize* to a positive integer.
   2121 
   2122    .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]])
   2123 
   2124       A variant of the :meth:`.map` method which returns a result object.
   2125 
   2126       If *callback* is specified then it should be a callable which accepts a
      single argument.  When the result becomes ready, *callback* is applied to
      it, unless the call failed, in which case the *error_callback* is applied
      instead.
   2130 
   2131       If *error_callback* is specified then it should be a callable which
   2132       accepts a single argument.  If the target function fails, then
   2133       the *error_callback* is called with the exception instance.
   2134 
   2135       Callbacks should complete immediately since otherwise the thread which
   2136       handles the results will get blocked.
   2137 
   2138    .. method:: imap(func, iterable[, chunksize])
   2139 
   2140       A lazier version of :meth:`map`.
   2141 
   2142       The *chunksize* argument is the same as the one used by the :meth:`.map`
   2143       method.  For very long iterables using a large value for *chunksize* can
   2144       make the job complete **much** faster than using the default value of
   2145       ``1``.
   2146 
   2147       Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
   2148       returned by the :meth:`imap` method has an optional *timeout* parameter:
   2149       ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
   2150       result cannot be returned within *timeout* seconds.
   2151 
   2152    .. method:: imap_unordered(func, iterable[, chunksize])
   2153 
   2154       The same as :meth:`imap` except that the ordering of the results from the
   2155       returned iterator should be considered arbitrary.  (Only when there is
   2156       only one worker process is the order guaranteed to be "correct".)
   2157 
   2158    .. method:: starmap(func, iterable[, chunksize])
   2159 
   2160       Like :meth:`map` except that the elements of the *iterable* are expected
   2161       to be iterables that are unpacked as arguments.
   2162 
   2163       Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2),
   2164       func(3,4)]``.
   2165 
   2166       .. versionadded:: 3.3
   2167 
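      For example (the pool size is arbitrary), using the built-in
      :func:`pow`::

         from multiprocessing import Pool

         if __name__ == '__main__':
             with Pool(2) as pool:
                 print(pool.starmap(pow, [(2, 5), (3, 2)]))    # [32, 9]
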
   .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
   2169 
   2170       A combination of :meth:`starmap` and :meth:`map_async` that iterates over
   2171       *iterable* of iterables and calls *func* with the iterables unpacked.
   2172       Returns a result object.
   2173 
   2174       .. versionadded:: 3.3
   2175 
   2176    .. method:: close()
   2177 
   2178       Prevents any more tasks from being submitted to the pool.  Once all the
   2179       tasks have been completed the worker processes will exit.
   2180 
   2181    .. method:: terminate()
   2182 
   2183       Stops the worker processes immediately without completing outstanding
   2184       work.  When the pool object is garbage collected :meth:`terminate` will be
   2185       called immediately.
   2186 
   2187    .. method:: join()
   2188 
   2189       Wait for the worker processes to exit.  One must call :meth:`close` or
   2190       :meth:`terminate` before using :meth:`join`.
   2191 
   2192    .. versionadded:: 3.3
   2193       Pool objects now support the context management protocol -- see
   2194       :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
   2195       pool object, and :meth:`~contextmanager.__exit__` calls :meth:`terminate`.
   2196 
   2197 
   2198 .. class:: AsyncResult
   2199 
   2200    The class of the result returned by :meth:`Pool.apply_async` and
   2201    :meth:`Pool.map_async`.
   2202 
   2203    .. method:: get([timeout])
   2204 
   2205       Return the result when it arrives.  If *timeout* is not ``None`` and the
   2206       result does not arrive within *timeout* seconds then
   2207       :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
   2208       an exception then that exception will be reraised by :meth:`get`.
   2209 
   2210    .. method:: wait([timeout])
   2211 
   2212       Wait until the result is available or until *timeout* seconds pass.
   2213 
   2214    .. method:: ready()
   2215 
   2216       Return whether the call has completed.
   2217 
   2218    .. method:: successful()
   2219 
   2220       Return whether the call completed without raising an exception.  Will
   2221       raise :exc:`AssertionError` if the result is not ready.
   2222 
   2223 The following example demonstrates the use of a pool::
   2224 
   2225    from multiprocessing import Pool
   2226    import time
   2227 
   2228    def f(x):
   2229        return x*x
   2230 
   2231    if __name__ == '__main__':
   2232        with Pool(processes=4) as pool:         # start 4 worker processes
   2233            result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
   2234            print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow
   2235 
   2236            print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"
   2237 
   2238            it = pool.imap(f, range(10))
   2239            print(next(it))                     # prints "0"
   2240            print(next(it))                     # prints "1"
   2241            print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow
   2242 
   2243            result = pool.apply_async(time.sleep, (10,))
   2244            print(result.get(timeout=1))        # raises multiprocessing.TimeoutError
   2245 
   2246 
   2247 .. _multiprocessing-listeners-clients:
   2248 
   2249 Listeners and Clients
   2250 ~~~~~~~~~~~~~~~~~~~~~
   2251 
   2252 .. module:: multiprocessing.connection
   2253    :synopsis: API for dealing with sockets.
   2254 
   2255 Usually message passing between processes is done using queues or by using
   2256 :class:`~multiprocessing.Connection` objects returned by
   2257 :func:`~multiprocessing.Pipe`.
   2258 
   2259 However, the :mod:`multiprocessing.connection` module allows some extra
   2260 flexibility.  It basically gives a high level message oriented API for dealing
flexibility.  It basically gives a high-level, message-oriented API for dealing
   2262 authentication* using the :mod:`hmac` module, and for polling
   2263 multiple connections at the same time.
   2264 
   2265 
   2266 .. function:: deliver_challenge(connection, authkey)
   2267 
   2268    Send a randomly generated message to the other end of the connection and wait
   2269    for a reply.
   2270 
   2271    If the reply matches the digest of the message using *authkey* as the key
   2272    then a welcome message is sent to the other end of the connection.  Otherwise
   2273    :exc:`~multiprocessing.AuthenticationError` is raised.
   2274 
   2275 .. function:: answer_challenge(connection, authkey)
   2276 
   2277    Receive a message, calculate the digest of the message using *authkey* as the
   2278    key, and then send the digest back.
   2279 
   2280    If a welcome message is not received, then
   2281    :exc:`~multiprocessing.AuthenticationError` is raised.
   2282 
   2283 .. function:: Client(address[, family[, authenticate[, authkey]]])
   2284 
   2285    Attempt to set up a connection to the listener which is using address
   2286    *address*, returning a :class:`~multiprocessing.Connection`.
   2287 
   The type of the connection is determined by the *family* argument, but this can
   2289    generally be omitted since it can usually be inferred from the format of
   2290    *address*. (See :ref:`multiprocessing-address-formats`)
   2291 
   2292    If *authenticate* is ``True`` or *authkey* is a byte string then digest
   2293    authentication is used.  The key used for authentication will be either
   2294    *authkey* or ``current_process().authkey`` if *authkey* is ``None``.
   2295    If authentication fails then
   2296    :exc:`~multiprocessing.AuthenticationError` is raised.  See
   2297    :ref:`multiprocessing-auth-keys`.
   2298 
   2299 .. class:: Listener([address[, family[, backlog[, authenticate[, authkey]]]]])
   2300 
   2301    A wrapper for a bound socket or Windows named pipe which is 'listening' for
   2302    connections.
   2303 
   2304    *address* is the address to be used by the bound socket or named pipe of the
   2305    listener object.
   2306 
   2307    .. note::
   2308 
   2309       If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows.  If you require a connectable end point,
   2311       you should use '127.0.0.1'.
   2312 
   2313    *family* is the type of socket (or named pipe) to use.  This can be one of
   2314    the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
   2315    domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
   2316    the first is guaranteed to be available.  If *family* is ``None`` then the
   2317    family is inferred from the format of *address*.  If *address* is also
   2318    ``None`` then a default is chosen.  This default is the family which is
   2319    assumed to be the fastest available.  See
   2320    :ref:`multiprocessing-address-formats`.  Note that if *family* is
   2321    ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
   2322    private temporary directory created using :func:`tempfile.mkstemp`.
   2323 
   2324    If the listener object uses a socket then *backlog* (1 by default) is passed
   2325    to the :meth:`~socket.socket.listen` method of the socket once it has been
   2326    bound.
   2327 
   2328    If *authenticate* is ``True`` (``False`` by default) or *authkey* is not
   2329    ``None`` then digest authentication is used.
   2330 
   2331    If *authkey* is a byte string then it will be used as the
   2332    authentication key; otherwise it must be ``None``.
   2333 
   2334    If *authkey* is ``None`` and *authenticate* is ``True`` then
   2335    ``current_process().authkey`` is used as the authentication key.  If
   2336    *authkey* is ``None`` and *authenticate* is ``False`` then no
   2337    authentication is done.  If authentication fails then
   2338    :exc:`~multiprocessing.AuthenticationError` is raised.
   2339    See :ref:`multiprocessing-auth-keys`.
   2340 
   2341    .. method:: accept()
   2342 
   2343       Accept a connection on the bound socket or named pipe of the listener
   2344       object and return a :class:`~multiprocessing.Connection` object.  If
   2345       authentication is attempted and fails, then
   2346       :exc:`~multiprocessing.AuthenticationError` is raised.
   2347 
   2348    .. method:: close()
   2349 
   2350       Close the bound socket or named pipe of the listener object.  This is
      called automatically when the listener is garbage collected.  However, it
   2352       is advisable to call it explicitly.
   2353 
   2354    Listener objects have the following read-only properties:
   2355 
   2356    .. attribute:: address
   2357 
   2358       The address which is being used by the Listener object.
   2359 
   2360    .. attribute:: last_accepted
   2361 
   2362       The address from which the last accepted connection came.  If this is
   2363       unavailable then it is ``None``.
   2364 
   2365    .. versionadded:: 3.3
   2366       Listener objects now support the context management protocol -- see
   2367       :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
   2368       listener object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
   2369 
   2370 .. function:: wait(object_list, timeout=None)
   2371 
   Wait until an object in *object_list* is ready.  Returns the list of
   2373    those objects in *object_list* which are ready.  If *timeout* is a
   2374    float then the call blocks for at most that many seconds.  If
   2375    *timeout* is ``None`` then it will block for an unlimited period.
   2376    A negative timeout is equivalent to a zero timeout.
   2377 
   2378    For both Unix and Windows, an object can appear in *object_list* if
   2379    it is
   2380 
   2381    * a readable :class:`~multiprocessing.Connection` object;
   2382    * a connected and readable :class:`socket.socket` object; or
   2383    * the :attr:`~multiprocessing.Process.sentinel` attribute of a
   2384      :class:`~multiprocessing.Process` object.
   2385 
   2386    A connection or socket object is ready when there is data available
   2387    to be read from it, or the other end has been closed.
   2388 
   **Unix**: ``wait(object_list, timeout)`` is almost equivalent to
   ``select.select(object_list, [], [], timeout)``.  The difference is
   2391    that, if :func:`select.select` is interrupted by a signal, it can
   2392    raise :exc:`OSError` with an error number of ``EINTR``, whereas
   2393    :func:`wait` will not.
   2394 
   2395    **Windows**: An item in *object_list* must either be an integer
   2396    handle which is waitable (according to the definition used by the
   2397    documentation of the Win32 function ``WaitForMultipleObjects()``)
   2398    or it can be an object with a :meth:`fileno` method which returns a
   2399    socket handle or pipe handle.  (Note that pipe handles and socket
   2400    handles are **not** waitable handles.)
   2401 
   2402    .. versionadded:: 3.3
   2403 
   2404 
   2405 **Examples**
   2406 
   2407 The following server code creates a listener which uses ``'secret password'`` as
   2408 an authentication key.  It then waits for a connection and sends some data to
   2409 the client::
   2410 
   2411    from multiprocessing.connection import Listener
   2412    from array import array
   2413 
   2414    address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
   2415 
   2416    with Listener(address, authkey=b'secret password') as listener:
   2417        with listener.accept() as conn:
   2418            print('connection accepted from', listener.last_accepted)
   2419 
   2420            conn.send([2.25, None, 'junk', float])
   2421 
   2422            conn.send_bytes(b'hello')
   2423 
   2424            conn.send_bytes(array('i', [42, 1729]))
   2425 
   2426 The following code connects to the server and receives some data from the
   2427 server::
   2428 
   2429    from multiprocessing.connection import Client
   2430    from array import array
   2431 
   2432    address = ('localhost', 6000)
   2433 
   2434    with Client(address, authkey=b'secret password') as conn:
   2435        print(conn.recv())                  # => [2.25, None, 'junk', float]
   2436 
   2437        print(conn.recv_bytes())            # => 'hello'
   2438 
   2439        arr = array('i', [0, 0, 0, 0, 0])
   2440        print(conn.recv_bytes_into(arr))    # => 8
   2441        print(arr)                          # => array('i', [42, 1729, 0, 0, 0])
   2442 
   2443 The following code uses :func:`~multiprocessing.connection.wait` to
   2444 wait for messages from multiple processes at once::
   2445 
   2446    import time, random
   2447    from multiprocessing import Process, Pipe, current_process
   2448    from multiprocessing.connection import wait
   2449 
   2450    def foo(w):
   2451        for i in range(10):
   2452            w.send((i, current_process().name))
   2453        w.close()
   2454 
   2455    if __name__ == '__main__':
   2456        readers = []
   2457 
   2458        for i in range(4):
   2459            r, w = Pipe(duplex=False)
   2460            readers.append(r)
   2461            p = Process(target=foo, args=(w,))
   2462            p.start()
   2463            # We close the writable end of the pipe now to be sure that
   2464            # p is the only process which owns a handle for it.  This
   2465            # ensures that when p closes its handle for the writable end,
   2466            # wait() will promptly report the readable end as being ready.
   2467            w.close()
   2468 
   2469        while readers:
   2470            for r in wait(readers):
   2471                try:
   2472                    msg = r.recv()
   2473                except EOFError:
   2474                    readers.remove(r)
   2475                else:
   2476                    print(msg)
   2477 
   2478 
   2479 .. _multiprocessing-address-formats:
   2480 
   2481 Address Formats
   2482 >>>>>>>>>>>>>>>
   2483 
   2484 * An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
   2485   *hostname* is a string and *port* is an integer.
   2486 
   2487 * An ``'AF_UNIX'`` address is a string representing a filename on the
   2488   filesystem.
   2489 
* An ``'AF_PIPE'`` address is a string of the form
  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
  pipe on a remote computer called *ServerName*, one should use an address of
  the form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
   2494 
   2495 Note that any string beginning with two backslashes is assumed by default to be
   2496 an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
   2497 
   2498 
   2499 .. _multiprocessing-auth-keys:
   2500 
   2501 Authentication keys
   2502 ~~~~~~~~~~~~~~~~~~~
   2503 
   2504 When one uses :meth:`Connection.recv <multiprocessing.Connection.recv>`, the
   2505 data received is automatically
   2506 unpickled.  Unfortunately unpickling data from an untrusted source is a security
   2507 risk.  Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
   2508 to provide digest authentication.
   2509 
   2510 An authentication key is a byte string which can be thought of as a
   2511 password: once a connection is established both ends will demand proof
   2512 that the other knows the authentication key.  (Demonstrating that both
   2513 ends are using the same key does **not** involve sending the key over
   2514 the connection.)
   2515 
   2516 If authentication is requested but no authentication key is specified then the
   2517 return value of ``current_process().authkey`` is used (see
   2518 :class:`~multiprocessing.Process`).  This value will be automatically inherited by
   2519 any :class:`~multiprocessing.Process` object that the current process creates.
   2520 This means that (by default) all processes of a multi-process program will share
   2521 a single authentication key which can be used when setting up connections
   2522 between themselves.
   2523 
   2524 Suitable authentication keys can also be generated by using :func:`os.urandom`.
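
For illustration, a key generated this way might be passed to a listener
explicitly (the address below is an arbitrary value chosen for this sketch;
omitting *authkey* would fall back to ``current_process().authkey``)::

   import os
   from multiprocessing.connection import Listener

   key = os.urandom(32)                 # a suitable random authentication key

   with Listener(('localhost', 6000), authkey=key) as listener:
       pass                             # a real server would call listener.accept()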
   2525 
   2526 
   2527 Logging
   2528 ~~~~~~~
   2529 
Some support for logging is available.  Note, however, that the :mod:`logging`
package does not use process-shared locks, so it is possible (depending on the
handler type) for messages from different processes to get mixed up.
   2533 
   2534 .. currentmodule:: multiprocessing
   2535 .. function:: get_logger()
   2536 
   2537    Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
   2538    will be created.
   2539 
   2540    When first created the logger has level :data:`logging.NOTSET` and no
   2541    default handler. Messages sent to this logger will not by default propagate
   2542    to the root logger.
   2543 
   2544    Note that on Windows child processes will only inherit the level of the
   2545    parent process's logger -- any other customization of the logger will not be
   2546    inherited.
   2547 
   2548 .. currentmodule:: multiprocessing
   2549 .. function:: log_to_stderr()
   2550 
   This function performs a call to :func:`get_logger` but in addition to
   returning the logger created by :func:`get_logger`, it adds a handler which
   sends output to :data:`sys.stderr` using the format
   ``'[%(levelname)s/%(processName)s] %(message)s'``.
   2555 
   2556 Below is an example session with logging turned on::
   2557 
   2558     >>> import multiprocessing, logging
   2559     >>> logger = multiprocessing.log_to_stderr()
   2560     >>> logger.setLevel(logging.INFO)
   2561     >>> logger.warning('doomed')
   2562     [WARNING/MainProcess] doomed
   2563     >>> m = multiprocessing.Manager()
   2564     [INFO/SyncManager-...] child process calling self.run()
   2565     [INFO/SyncManager-...] created temp directory /.../pymp-...
   2566     [INFO/SyncManager-...] manager serving at '/.../listener-...'
   2567     >>> del m
   2568     [INFO/MainProcess] sending shutdown message to manager
   2569     [INFO/SyncManager-...] manager exiting with exitcode 0
   2570 
   2571 For a full table of logging levels, see the :mod:`logging` module.
   2572 
   2573 
   2574 The :mod:`multiprocessing.dummy` module
   2575 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2576 
   2577 .. module:: multiprocessing.dummy
   2578    :synopsis: Dumb wrapper around threading.
   2579 
   2580 :mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
   2581 no more than a wrapper around the :mod:`threading` module.
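
For example, the following sketch drives the thread-backed pool through the
same interface that a process pool offers; the function ``f`` is just a
stand-in for an I/O-bound task::

   from multiprocessing.dummy import Pool    # threads behind the Pool API

   def f(x):
       return x * x

   if __name__ == '__main__':
       with Pool(4) as pool:
           print(pool.map(f, [1, 2, 3]))     # => [1, 4, 9]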
   2582 
   2583 
   2584 .. _multiprocessing-programming:
   2585 
   2586 Programming guidelines
   2587 ----------------------
   2588 
   2589 There are certain guidelines and idioms which should be adhered to when using
   2590 :mod:`multiprocessing`.
   2591 
   2592 
   2593 All start methods
   2594 ~~~~~~~~~~~~~~~~~
   2595 
   2596 The following applies to all start methods.
   2597 
   2598 Avoid shared state
   2599 
   2600     As far as possible one should try to avoid shifting large amounts of data
   2601     between processes.
   2602 
   2603     It is probably best to stick to using queues or pipes for communication
   2604     between processes rather than using the lower level synchronization
   2605     primitives.
   2606 
   2607 Picklability
   2608 
   2609     Ensure that the arguments to the methods of proxies are picklable.
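
    For instance, in this sketch the second assignment fails because a lambda
    cannot be pickled for transmission to the manager process::

        from multiprocessing import Manager

        if __name__ == '__main__':
            with Manager() as manager:
                d = manager.dict()
                d['good'] = [1, 2, 3]          # both key and value pickle fine
                try:
                    d['bad'] = lambda: None    # lambdas cannot be pickled
                except Exception as exc:
                    print(type(exc).__name__)  # => PicklingError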
   2610 
   2611 Thread safety of proxies
   2612 
   2613     Do not use a proxy object from more than one thread unless you protect it
   2614     with a lock.
   2615 
   2616     (There is never a problem with different processes using the *same* proxy.)
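
    For example, a minimal sketch of sharing one list proxy between several
    threads, serialising access with an ordinary ``threading.Lock``::

        import threading
        from multiprocessing import Manager

        if __name__ == '__main__':
            with Manager() as manager:
                shared = manager.list()
                proxy_lock = threading.Lock()      # guards use of the proxy

                def worker():
                    with proxy_lock:               # one thread at a time
                        shared.append(threading.get_ident())

                threads = [threading.Thread(target=worker) for _ in range(4)]
                for t in threads:
                    t.start()
                for t in threads:
                    t.join()
                print(len(shared))                 # => 4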
   2617 
   2618 Joining zombie processes
   2619 
   2620     On Unix when a process finishes but has not been joined it becomes a zombie.
   2621     There should never be very many because each time a new process starts (or
   2622     :func:`~multiprocessing.active_children` is called) all completed processes
   2623     which have not yet been joined will be joined.  Also calling a finished
   2624     process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
   2625     join the process.  Even so it is probably good
   2626     practice to explicitly join all the processes that you start.
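
    A minimal sketch of that explicit-join habit::

        from multiprocessing import Process, active_children

        def work():
            pass

        if __name__ == '__main__':
            workers = [Process(target=work) for _ in range(3)]
            for p in workers:
                p.start()
            for p in workers:
                p.join()                  # reap every child explicitly
            print(active_children())      # => [] -- no zombies left behind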
   2627 
   2628 Better to inherit than pickle/unpickle
   2629 
   2630     When using the *spawn* or *forkserver* start methods many types
   2631     from :mod:`multiprocessing` need to be picklable so that child
   2632     processes can use them.  However, one should generally avoid
   2633     sending shared objects to other processes using pipes or queues.
   2634     Instead you should arrange the program so that a process which
   2635     needs access to a shared resource created elsewhere can inherit it
   2636     from an ancestor process.
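
    A minimal sketch of the recommended pattern: the queue is handed to the
    child when it is constructed (and so inherited at start-up) rather than
    being sent to it through some other connection later::

        from multiprocessing import Process, Queue

        def worker(q):           # the queue arrives via the constructor,
            q.put('done')        # not through another pipe or queue

        if __name__ == '__main__':
            q = Queue()
            p = Process(target=worker, args=(q,))
            p.start()
            print(q.get())       # => 'done'
            p.join()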
   2637 
   2638 Avoid terminating processes
   2639 
   2640     Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
   2641     method to stop a process is liable to
   2642     cause any shared resources (such as locks, semaphores, pipes and queues)
   2643     currently being used by the process to become broken or unavailable to other
   2644     processes.
   2645 
   2646     Therefore it is probably best to only consider using
   2647     :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
   2648     which never use any shared resources.
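
    As a sketch, terminating a child that holds no shared resources is the
    comparatively safe case::

        import time
        from multiprocessing import Process

        def idle():
            while True:
                time.sleep(1)    # touches no shared locks, queues or pipes

        if __name__ == '__main__':
            p = Process(target=idle)
            p.start()
            p.terminate()        # safe here: the child owns no shared resources
            p.join()
            print(p.exitcode)    # e.g. -15 on Unix (killed by SIGTERM)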
   2649 
   2650 Joining processes that use queues
   2651 
   2652     Bear in mind that a process that has put items in a queue will wait before
   2653     terminating until all the buffered items are fed by the "feeder" thread to
   2654     the underlying pipe.  (The child process can call the
   2655     :meth:`Queue.cancel_join_thread <multiprocessing.Queue.cancel_join_thread>`
   2656     method of the queue to avoid this behaviour.)
   2657 
   2658     This means that whenever you use a queue you need to make sure that all
   2659     items which have been put on the queue will eventually be removed before the
   2660     process is joined.  Otherwise you cannot be sure that processes which have
   2661     put items on the queue will terminate.  Remember also that non-daemonic
   2662     processes will be joined automatically.
   2663 
   2664     An example which will deadlock is the following::
   2665 
   2666         from multiprocessing import Process, Queue
   2667 
   2668         def f(q):
   2669             q.put('X' * 1000000)
   2670 
   2671         if __name__ == '__main__':
   2672             queue = Queue()
   2673             p = Process(target=f, args=(queue,))
   2674             p.start()
   2675             p.join()                    # this deadlocks
   2676             obj = queue.get()
   2677 
   2678     A fix here would be to swap the last two lines (or simply remove the
   2679     ``p.join()`` line).
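
    Continuing the same ``f`` and imports, the repaired version looks like
    this::

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            obj = queue.get()           # drain the queue before joining
            p.join()                    # no deadlock once the item is consumed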
   2680 
   2681 Explicitly pass resources to child processes
   2682 
    On Unix, using the *fork* start method, a child process can make use of a
    shared resource created in a parent process simply by referring to it as a
    global.  However, it is better to pass the object as an argument to the
    constructor for the child process.
   2687 
   2688     Apart from making the code (potentially) compatible with Windows
   2689     and the other start methods this also ensures that as long as the
   2690     child process is still alive the object will not be garbage
   2691     collected in the parent process.  This might be important if some
   2692     resource is freed when the object is garbage collected in the
   2693     parent process.
   2694 
   2695     So for instance ::
   2696 
   2697         from multiprocessing import Process, Lock
   2698 
        def f():
            with lock:                    # relies on the global "lock"
                print('hello world')
   2701 
   2702         if __name__ == '__main__':
   2703             lock = Lock()
   2704             for i in range(10):
   2705                 Process(target=f).start()
   2706 
   2707     should be rewritten as ::
   2708 
   2709         from multiprocessing import Process, Lock
   2710 
        def f(l):
            with l:                       # uses the lock that was passed in
                print('hello world')
   2713 
   2714         if __name__ == '__main__':
   2715             lock = Lock()
   2716             for i in range(10):
   2717                 Process(target=f, args=(lock,)).start()
   2718 
   2719 Beware of replacing :data:`sys.stdin` with a "file like object"
   2720 
   2721     :mod:`multiprocessing` originally unconditionally called::
   2722 
   2723         os.close(sys.stdin.fileno())
   2724 
   2725     in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
   2726     in issues with processes-in-processes. This has been changed to::
   2727 
   2728         sys.stdin.close()
   2729         sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)
   2730 
    This solves the fundamental issue of processes colliding with each other
    and producing a bad file descriptor error, but it introduces a potential
    danger for applications which replace :data:`sys.stdin` with a "file-like
    object" that buffers output.  The danger is that if multiple processes call
    :meth:`~io.IOBase.close` on this file-like object, the same data could be
    flushed to the object multiple times, resulting in corruption.
   2737 
   2738     If you write a file-like object and implement your own caching, you can
   2739     make it fork-safe by storing the pid whenever you append to the cache,
   2740     and discarding the cache when the pid changes. For example::
   2741 
       @property
       def cache(self):
           pid = os.getpid()
           if pid != self._pid:      # a fork has happened since the last access
               self._pid = pid
               self._cache = []      # discard anything buffered by the parent
           return self._cache
   2749 
    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.
   2751 
   2752 The *spawn* and *forkserver* start methods
   2753 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   2754 
There are a few extra restrictions which don't apply to the *fork*
start method.
   2757 
   2758 More picklability
   2759 
   2760     Ensure that all arguments to :meth:`Process.__init__` are picklable.
   2761     Also, if you subclass :class:`~multiprocessing.Process` then make sure that
   2762     instances will be picklable when the :meth:`Process.start
   2763     <multiprocessing.Process.start>` method is called.
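
    For example, a sketch of a :class:`~multiprocessing.Process` subclass that
    stays picklable because it only stores simple attributes::

        from multiprocessing import Process

        class Worker(Process):
            def __init__(self, value):
                super().__init__()
                self.value = value        # a plain, picklable attribute

            def run(self):
                print('working on', self.value)

        if __name__ == '__main__':
            w = Worker(42)
            w.start()     # with spawn/forkserver the instance is pickled here
            w.join()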
   2764 
   2765 Global variables
   2766 
   2767     Bear in mind that if code run in a child process tries to access a global
   2768     variable, then the value it sees (if any) may not be the same as the value
   2769     in the parent process at the time that :meth:`Process.start
   2770     <multiprocessing.Process.start>` was called.
   2771 
   2772     However, global variables which are just module level constants cause no
   2773     problems.
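
    A sketch of the difference (the names ``value`` and ``show`` are arbitrary
    choices for this example)::

        from multiprocessing import Process, set_start_method

        value = 0                  # module-level global

        def show():
            # With spawn, the child re-imports this module, so it sees the
            # value bound at import time, not the parent's later update.
            print('child sees', value)

        if __name__ == '__main__':
            set_start_method('spawn')
            value = 99             # rebound in the parent only
            p = Process(target=show)
            p.start()              # the child will typically print 'child sees 0'
            p.join()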
   2774 
   2775 Safe importing of main module
   2776 
   2777     Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
   2779     process).
   2780 
   2781     For example, using the *spawn* or *forkserver* start method
   2782     running the following module would fail with a
   2783     :exc:`RuntimeError`::
   2784 
   2785         from multiprocessing import Process
   2786 
   2787         def foo():
   2788             print('hello')
   2789 
   2790         p = Process(target=foo)
   2791         p.start()
   2792 
   2793     Instead one should protect the "entry point" of the program by using ``if
   2794     __name__ == '__main__':`` as follows::
   2795 
   2796        from multiprocessing import Process, freeze_support, set_start_method
   2797 
   2798        def foo():
   2799            print('hello')
   2800 
   2801        if __name__ == '__main__':
   2802            freeze_support()
   2803            set_start_method('spawn')
   2804            p = Process(target=foo)
   2805            p.start()
   2806 
   2807     (The ``freeze_support()`` line can be omitted if the program will be run
   2808     normally instead of frozen.)
   2809 
   2810     This allows the newly spawned Python interpreter to safely import the module
   2811     and then run the module's ``foo()`` function.
   2812 
   2813     Similar restrictions apply if a pool or manager is created in the main
   2814     module.
   2815 
   2816 
   2817 .. _multiprocessing-examples:
   2818 
   2819 Examples
   2820 --------
   2821 
   2822 Demonstration of how to create and use customized managers and proxies:
   2823 
   2824 .. literalinclude:: ../includes/mp_newtype.py
   2825    :language: python3
   2826 
   2827 
   2828 Using :class:`~multiprocessing.pool.Pool`:
   2829 
   2830 .. literalinclude:: ../includes/mp_pool.py
   2831    :language: python3
   2832 
   2833 
   2834 An example showing how to use queues to feed tasks to a collection of worker
   2835 processes and collect the results:
   2836 
   2837 .. literalinclude:: ../includes/mp_workers.py
   2838