.. _logging-cookbook:

================
Logging Cookbook
================

:Author: Vinay Sajip <vinay_sajip at red-dove dot com>

This page contains a number of recipes related to logging, which have been found
useful in the past.

.. currentmodule:: logging

Using logging in multiple modules
---------------------------------

Multiple calls to ``logging.getLogger('someLogger')`` return a reference to the
same logger object.  This is true not only within the same module, but also
across modules as long as it is in the same Python interpreter process.  It is
true for references to the same object; additionally, application code can
define and configure a parent logger in one module and create (but not
configure) a child logger in a separate module, and all logger calls to the
child will pass up to the parent.  Here is a main module::

    import logging
    import auxiliary_module

    # create logger with 'spam_application'
    logger = logging.getLogger('spam_application')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    ch.setFormatter(formatter)
    # add the handlers to the logger
    logger.addHandler(fh)
    logger.addHandler(ch)

    logger.info('creating an instance of auxiliary_module.Auxiliary')
    a = auxiliary_module.Auxiliary()
    logger.info('created an instance of auxiliary_module.Auxiliary')
    logger.info('calling auxiliary_module.Auxiliary.do_something')
    a.do_something()
    logger.info('finished auxiliary_module.Auxiliary.do_something')
    logger.info('calling auxiliary_module.some_function()')
    auxiliary_module.some_function()
    logger.info('done with auxiliary_module.some_function()')

Here is the auxiliary module::

    import logging

    # create logger
    module_logger = logging.getLogger('spam_application.auxiliary')

    class Auxiliary:
        def __init__(self):
            self.logger = logging.getLogger('spam_application.auxiliary.Auxiliary')
            self.logger.info('creating an instance of Auxiliary')

        def do_something(self):
            self.logger.info('doing something')
            a = 1 + 1
            self.logger.info('done doing something')

    def some_function():
        module_logger.info('received a call to "some_function"')

The output looks like this:

.. code-block:: none

    2005-03-23 23:47:11,663 - spam_application - INFO -
       creating an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,665 - spam_application.auxiliary.Auxiliary - INFO -
       creating an instance of Auxiliary
    2005-03-23 23:47:11,665 - spam_application - INFO -
       created an instance of auxiliary_module.Auxiliary
    2005-03-23 23:47:11,668 - spam_application - INFO -
       calling auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,668 - spam_application.auxiliary.Auxiliary - INFO -
       doing something
    2005-03-23 23:47:11,669 - spam_application.auxiliary.Auxiliary - INFO -
       done doing something
    2005-03-23 23:47:11,670 - spam_application - INFO -
       finished auxiliary_module.Auxiliary.do_something
    2005-03-23 23:47:11,671 - spam_application - INFO -
       calling auxiliary_module.some_function()
    2005-03-23 23:47:11,672 - spam_application.auxiliary - INFO -
       received a call to "some_function"
    2005-03-23 23:47:11,673 - spam_application - INFO -
       done with auxiliary_module.some_function()

Logging from multiple threads
-----------------------------

Logging from multiple threads requires no special effort. The following example
shows logging from the main (initial) thread and another thread::

    import logging
    import threading
    import time

    def worker(arg):
        while not arg['stop']:
            logging.debug('Hi from myfunc')
            time.sleep(0.5)

    def main():
        logging.basicConfig(level=logging.DEBUG, format='%(relativeCreated)6d %(threadName)s %(message)s')
        info = {'stop': False}
        thread = threading.Thread(target=worker, args=(info,))
        thread.start()
        while True:
            try:
                logging.debug('Hello from main')
                time.sleep(0.75)
            except KeyboardInterrupt:
                info['stop'] = True
                break
        thread.join()

    if __name__ == '__main__':
        main()

When run, the script should print something like the following:

.. code-block:: none

     0 Thread-1 Hi from myfunc
     3 MainThread Hello from main
   505 Thread-1 Hi from myfunc
   755 MainThread Hello from main
  1007 Thread-1 Hi from myfunc
  1507 MainThread Hello from main
  1508 Thread-1 Hi from myfunc
  2010 Thread-1 Hi from myfunc
  2258 MainThread Hello from main
  2512 Thread-1 Hi from myfunc
  3009 MainThread Hello from main
  3013 Thread-1 Hi from myfunc
  3515 Thread-1 Hi from myfunc
  3761 MainThread Hello from main
  4017 Thread-1 Hi from myfunc
  4513 MainThread Hello from main
  4518 Thread-1 Hi from myfunc

This shows the logging output interspersed as one might expect. This approach
works for more threads than shown here, of course.

Multiple handlers and formatters
--------------------------------

Loggers are plain Python objects.  The :meth:`~Logger.addHandler` method has no
minimum or maximum quota for the number of handlers you may add.  Sometimes it
will be beneficial for an application to log all messages of all severities to a
text file while simultaneously logging errors or above to the console.  To set
this up, simply configure the appropriate handlers.  The logging calls in the
application code will remain unchanged.  Here is a slight modification to the
previous simple module-based configuration example::

    import logging

    logger = logging.getLogger('simple_example')
    logger.setLevel(logging.DEBUG)
    # create file handler which logs even debug messages
    fh = logging.FileHandler('spam.log')
    fh.setLevel(logging.DEBUG)
    # create console handler with a higher log level
    ch = logging.StreamHandler()
    ch.setLevel(logging.ERROR)
    # create formatter and add it to the handlers
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    ch.setFormatter(formatter)
    fh.setFormatter(formatter)
    # add the handlers to logger
    logger.addHandler(ch)
    logger.addHandler(fh)

    # 'application' code
    logger.debug('debug message')
    logger.info('info message')
    logger.warning('warn message')
    logger.error('error message')
    logger.critical('critical message')

Notice that the 'application' code does not care about multiple handlers.  All
that changed was the addition and configuration of a new handler named *fh*.
The ability to create new handlers with higher- or lower-severity filters can be
very helpful when writing and testing an application.  Instead of using many
``print`` statements for debugging, use ``logger.debug``: unlike the print
statements, which you will have to delete or comment out later, the
``logger.debug`` statements can remain intact in the source code and remain
dormant until you need them again.  At that time, the only change that needs to
happen is to modify the severity level of the logger and/or handler to debug.
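
For example, with the handlers configured as above, a one-line change is all
that's needed to surface the dormant debug output on the console::

    ch.setLevel(logging.DEBUG)  # the console handler was previously at ERROR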

.. _multiple-destinations:

Logging to multiple destinations
--------------------------------

Let's say you want to log to console and file with different message formats and
in differing circumstances. Say you want to log messages with levels of DEBUG
and higher to file, and those messages at level INFO and higher to the console.
Let's also assume that the file should contain timestamps, but the console
messages should not. Here's how you can achieve this::

   import logging

   # set up logging to file - see previous section for more details
   logging.basicConfig(level=logging.DEBUG,
                       format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
                       datefmt='%m-%d %H:%M',
                       filename='/tmp/myapp.log',
                       filemode='w')
   # define a Handler which writes INFO messages or higher to sys.stderr
   console = logging.StreamHandler()
   console.setLevel(logging.INFO)
   # set a format which is simpler for console use
   formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
   # tell the handler to use this format
   console.setFormatter(formatter)
   # add the handler to the root logger
   logging.getLogger('').addHandler(console)

   # Now, we can log to the root logger, or any other logger. First the root...
   logging.info('Jackdaws love my big sphinx of quartz.')

   # Now, define a couple of other loggers which might represent areas in your
   # application:

   logger1 = logging.getLogger('myapp.area1')
   logger2 = logging.getLogger('myapp.area2')

   logger1.debug('Quick zephyrs blow, vexing daft Jim.')
   logger1.info('How quickly daft jumping zebras vex.')
   logger2.warning('Jail zesty vixen who grabbed pay from quack.')
   logger2.error('The five boxing wizards jump quickly.')

When you run this, on the console you will see

.. code-block:: none

   root        : INFO     Jackdaws love my big sphinx of quartz.
   myapp.area1 : INFO     How quickly daft jumping zebras vex.
   myapp.area2 : WARNING  Jail zesty vixen who grabbed pay from quack.
   myapp.area2 : ERROR    The five boxing wizards jump quickly.

and in the file you will see something like

.. code-block:: none

   10-22 22:19 root         INFO     Jackdaws love my big sphinx of quartz.
   10-22 22:19 myapp.area1  DEBUG    Quick zephyrs blow, vexing daft Jim.
   10-22 22:19 myapp.area1  INFO     How quickly daft jumping zebras vex.
   10-22 22:19 myapp.area2  WARNING  Jail zesty vixen who grabbed pay from quack.
   10-22 22:19 myapp.area2  ERROR    The five boxing wizards jump quickly.

As you can see, the DEBUG message only shows up in the file. The other messages
are sent to both destinations.

This example uses console and file handlers, but you can use any number and
combination of handlers you choose.


Configuration server example
----------------------------

Here is an example of a module using the logging configuration server::

    import logging
    import logging.config
    import time
    import os

    # read initial config file
    logging.config.fileConfig('logging.conf')

    # create and start listener on port 9999
    t = logging.config.listen(9999)
    t.start()

    logger = logging.getLogger('simpleExample')

    try:
        # loop through logging calls to see the difference
        # new configurations make, until Ctrl+C is pressed
        while True:
            logger.debug('debug message')
            logger.info('info message')
            logger.warning('warn message')
            logger.error('error message')
            logger.critical('critical message')
            time.sleep(5)
    except KeyboardInterrupt:
        # cleanup
        logging.config.stopListening()
        t.join()

And here is a script that takes a filename and sends that file to the server,
properly preceded with the binary-encoded length, as the new logging
configuration::

    #!/usr/bin/env python
    import socket, sys, struct

    with open(sys.argv[1], 'rb') as f:
        data_to_send = f.read()

    HOST = 'localhost'
    PORT = 9999
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print('connecting...')
    s.connect((HOST, PORT))
    print('sending config...')
    s.send(struct.pack('>L', len(data_to_send)))
    s.send(data_to_send)
    s.close()
    print('complete')


Dealing with handlers that block
--------------------------------

.. currentmodule:: logging.handlers

Sometimes you have to get your logging handlers to do their work without
blocking the thread you're logging from. This is common in Web applications,
though of course it also occurs in other scenarios.

A common culprit which demonstrates sluggish behaviour is the
:class:`SMTPHandler`: sending emails can take a long time, for a
number of reasons outside the developer's control (for example, a poorly
performing mail or network infrastructure). But almost any network-based
handler can block: Even a :class:`SocketHandler` operation may do a
DNS query under the hood which is too slow (and this query can be deep in the
socket library code, below the Python layer, and outside your control).

One solution is to use a two-part approach. For the first part, attach only a
:class:`QueueHandler` to those loggers which are accessed from
performance-critical threads. They simply write to their queue, which can be
sized to a large capacity or initialized with no upper bound on its size.
The write to the queue will typically be accepted quickly, though you
will probably need to catch the :exc:`queue.Full` exception as a precaution
in your code. If you are a library developer whose code has
performance-critical threads, be sure to document this (together with a
suggestion to attach only ``QueueHandlers`` to your loggers) for the benefit
of other developers who will use your code.

The second part of the solution is :class:`QueueListener`, which has been
designed as the counterpart to :class:`QueueHandler`.  A
:class:`QueueListener` is very simple: it's passed a queue and some handlers,
and it fires up an internal thread which listens to its queue for LogRecords
sent from ``QueueHandlers`` (or any other source of ``LogRecords``, for that
matter). The ``LogRecords`` are removed from the queue and passed to the
handlers for processing.

The advantage of having a separate :class:`QueueListener` class is that you
can use the same instance to service multiple ``QueueHandlers``. This is more
resource-friendly than, say, having threaded versions of the existing handler
classes, which would eat up one thread per handler for no particular benefit.

An example of using these two classes follows (imports omitted)::

    que = queue.Queue(-1)  # no limit on size
    queue_handler = QueueHandler(que)
    handler = logging.StreamHandler()
    listener = QueueListener(que, handler)
    root = logging.getLogger()
    root.addHandler(queue_handler)
    formatter = logging.Formatter('%(threadName)s: %(message)s')
    handler.setFormatter(formatter)
    listener.start()
    # The log output will display the thread which generated
    # the event (the main thread) rather than the internal
    # thread which monitors the internal queue. This is what
    # you want to happen.
    root.warning('Look out!')
    listener.stop()

which, when run, will produce:

.. code-block:: none

    MainThread: Look out!

.. versionchanged:: 3.5
   Prior to Python 3.5, the :class:`QueueListener` always passed every message
   received from the queue to every handler it was initialized with. (This was
   because it was assumed that level filtering was all done on the other side,
   where the queue is filled.) From 3.5 onwards, this behaviour can be changed
   by passing a keyword argument ``respect_handler_level=True`` to the
   listener's constructor. When this is done, the listener compares the level
   of each message with the handler's level, and only passes a message to a
   handler if it's appropriate to do so.

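As a minimal sketch of that behaviour (imports omitted, as in the earlier
example), the listener below honours the handler's own level: the ``info()``
call is dropped and only the warning is printed::

    que = queue.Queue(-1)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(QueueHandler(que))
    handler = logging.StreamHandler()
    handler.setLevel(logging.WARNING)
    listener = QueueListener(que, handler, respect_handler_level=True)
    listener.start()
    root.info('Not shown - below the handler level')
    root.warning('Look out!')
    listener.stop()
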
.. _network-logging:

Sending and receiving logging events across a network
-----------------------------------------------------

Let's say you want to send logging events across a network, and handle them at
the receiving end. A simple way of doing this is attaching a
:class:`SocketHandler` instance to the root logger at the sending end::

   import logging, logging.handlers

   rootLogger = logging.getLogger('')
   rootLogger.setLevel(logging.DEBUG)
   socketHandler = logging.handlers.SocketHandler('localhost',
                       logging.handlers.DEFAULT_TCP_LOGGING_PORT)
   # don't bother with a formatter, since a socket handler sends the event as
   # an unformatted pickle
   rootLogger.addHandler(socketHandler)

   # Now, we can log to the root logger, or any other logger. First the root...
   logging.info('Jackdaws love my big sphinx of quartz.')

   # Now, define a couple of other loggers which might represent areas in your
   # application:

   logger1 = logging.getLogger('myapp.area1')
   logger2 = logging.getLogger('myapp.area2')

   logger1.debug('Quick zephyrs blow, vexing daft Jim.')
   logger1.info('How quickly daft jumping zebras vex.')
   logger2.warning('Jail zesty vixen who grabbed pay from quack.')
   logger2.error('The five boxing wizards jump quickly.')

At the receiving end, you can set up a receiver using the :mod:`socketserver`
module. Here is a basic working example::

   import pickle
   import logging
   import logging.handlers
   import socketserver
   import struct


   class LogRecordStreamHandler(socketserver.StreamRequestHandler):
       """Handler for a streaming logging request.

       This basically logs the record using whatever logging policy is
       configured locally.
       """

       def handle(self):
           """
           Handle multiple requests - each expected to be a 4-byte length,
           followed by the LogRecord in pickle format. Logs the record
           according to whatever policy is configured locally.
           """
           while True:
               chunk = self.connection.recv(4)
               if len(chunk) < 4:
                   break
               slen = struct.unpack('>L', chunk)[0]
               chunk = self.connection.recv(slen)
               while len(chunk) < slen:
                   chunk = chunk + self.connection.recv(slen - len(chunk))
               obj = self.unPickle(chunk)
               record = logging.makeLogRecord(obj)
               self.handleLogRecord(record)

       def unPickle(self, data):
           return pickle.loads(data)

       def handleLogRecord(self, record):
           # if a name is specified, we use the named logger rather than the one
           # implied by the record.
           if self.server.logname is not None:
               name = self.server.logname
           else:
               name = record.name
           logger = logging.getLogger(name)
           # N.B. EVERY record gets logged. This is because Logger.handle
           # is normally called AFTER logger-level filtering. If you want
           # to do filtering, do it at the client end to save wasting
           # cycles and network bandwidth!
           logger.handle(record)

   class LogRecordSocketReceiver(socketserver.ThreadingTCPServer):
       """
       Simple TCP socket-based logging receiver suitable for testing.
       """

       allow_reuse_address = True

       def __init__(self, host='localhost',
                    port=logging.handlers.DEFAULT_TCP_LOGGING_PORT,
                    handler=LogRecordStreamHandler):
           socketserver.ThreadingTCPServer.__init__(self, (host, port), handler)
           self.abort = 0
           self.timeout = 1
           self.logname = None

       def serve_until_stopped(self):
           import select
           abort = 0
           while not abort:
               rd, wr, ex = select.select([self.socket.fileno()],
                                          [], [],
                                          self.timeout)
               if rd:
                   self.handle_request()
               abort = self.abort

   def main():
       logging.basicConfig(
           format='%(relativeCreated)5d %(name)-15s %(levelname)-8s %(message)s')
       tcpserver = LogRecordSocketReceiver()
       print('About to start TCP server...')
       tcpserver.serve_until_stopped()

   if __name__ == '__main__':
       main()

First run the server, and then the client. On the client side, nothing is
printed on the console; on the server side, you should see something like:

.. code-block:: none

   About to start TCP server...
      59 root            INFO     Jackdaws love my big sphinx of quartz.
      59 myapp.area1     DEBUG    Quick zephyrs blow, vexing daft Jim.
      69 myapp.area1     INFO     How quickly daft jumping zebras vex.
      69 myapp.area2     WARNING  Jail zesty vixen who grabbed pay from quack.
      69 myapp.area2     ERROR    The five boxing wizards jump quickly.

Note that there are some security issues with pickle in some scenarios. If
these affect you, you can use an alternative serialization scheme by overriding
the :meth:`~handlers.SocketHandler.makePickle` method to implement your scheme
there, and by adapting the above receiver script to use it when deserializing.
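
As a rough sketch of what that might look like, here is a subclass (the name
``JSONSocketHandler`` is invented for this example) which sends the event
dictionary as JSON instead; the receiver's ``unPickle`` method would need a
matching change to use :func:`json.loads`::

    import json
    import struct
    import logging.handlers

    class JSONSocketHandler(logging.handlers.SocketHandler):
        def makePickle(self, record):
            d = dict(record.__dict__)
            # As the pickled form does, merge any arguments into the message
            # and drop attributes which may not serialize cleanly.
            d['msg'] = record.getMessage()
            d['args'] = None
            d['exc_info'] = None
            payload = json.dumps(d, default=str).encode('utf-8')
            # Same 4-byte big-endian length prefix as the pickled version.
            return struct.pack('>L', len(payload)) + payload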


.. _context-info:

Adding contextual information to your logging output
----------------------------------------------------

Sometimes you want logging output to contain contextual information in
addition to the parameters passed to the logging call. For example, in a
networked application, it may be desirable to log client-specific information
in the log (e.g. the remote client's username or IP address). Although you
could use the *extra* parameter to achieve this, it's not always convenient to
pass the information in this way. While it might be tempting to create
:class:`Logger` instances on a per-connection basis, this is not a good idea,
because these instances are not garbage collected. This is not normally a
problem, but if the number of :class:`Logger` instances depends on the level
of granularity you want to use in logging an application - one per connection,
say - the number of instances can become effectively unbounded and therefore
hard to manage.
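
For reference, here is what the *extra* approach looks like - the dict's keys
are merged into the :class:`LogRecord`'s attributes, so the format string must
anticipate them::

    import logging

    logging.basicConfig(format='%(ip)-15s %(user)-8s %(message)s')
    logging.warning('Protocol problem', extra={'ip': '192.168.0.1',
                                               'user': 'fred'})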


Using LoggerAdapters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

An easy way in which you can pass contextual information to be output along
with logging event information is to use the :class:`LoggerAdapter` class.
This class is designed to look like a :class:`Logger`, so that you can call
:meth:`debug`, :meth:`info`, :meth:`warning`, :meth:`error`,
:meth:`exception`, :meth:`critical` and :meth:`log`. These methods have the
same signatures as their counterparts in :class:`Logger`, so you can use the
two types of instances interchangeably.

When you create an instance of :class:`LoggerAdapter`, you pass it a
:class:`Logger` instance and a dict-like object which contains your contextual
information. When you call one of the logging methods on an instance of
:class:`LoggerAdapter`, it delegates the call to the underlying instance of
:class:`Logger` passed to its constructor, and arranges to pass the contextual
information in the delegated call. Here's a snippet from the code of
:class:`LoggerAdapter`::

    def debug(self, msg, *args, **kwargs):
        """
        Delegate a debug call to the underlying logger, after adding
        contextual information from this adapter instance.
        """
        msg, kwargs = self.process(msg, kwargs)
        self.logger.debug(msg, *args, **kwargs)

The :meth:`~LoggerAdapter.process` method of :class:`LoggerAdapter` is where the
contextual information is added to the logging output. It's passed the message
and keyword arguments of the logging call, and it passes back (potentially)
modified versions of these to use in the call to the underlying logger. The
default implementation of this method leaves the message alone, but inserts
an 'extra' key in the keyword arguments whose value is the dict-like object
passed to the constructor. Of course, if you pass an 'extra' keyword argument
in the call to the adapter, it will be silently overwritten.

The advantage of using 'extra' is that the values in the dict-like object are
merged into the :class:`LogRecord` instance's ``__dict__``, allowing you to use
customized strings with your :class:`Formatter` instances which know about
the keys of the dict-like object. If you need a different method, e.g. if you
want to prepend or append the contextual information to the message string,
you just need to subclass :class:`LoggerAdapter` and override
:meth:`~LoggerAdapter.process` to do what you need. Here is a simple example::

    class CustomAdapter(logging.LoggerAdapter):
        """
        This example adapter expects the passed in dict-like object to have a
        'connid' key, whose value in brackets is prepended to the log message.
        """
        def process(self, msg, kwargs):
            return '[%s] %s' % (self.extra['connid'], msg), kwargs

which you can use like this::

    logger = logging.getLogger(__name__)
    adapter = CustomAdapter(logger, {'connid': some_conn_id})

Then any events that you log to the adapter will have the value of
``some_conn_id`` prepended to the log messages.
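
For instance, if ``some_conn_id`` happened to be the string ``'1234'``, a call
such as ``adapter.info('Connection opened')`` would be logged as
``[1234] Connection opened``.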

Using objects other than dicts to pass contextual information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You don't need to pass an actual dict to a :class:`LoggerAdapter` - you could
pass an instance of a class which implements ``__getitem__`` and ``__iter__`` so
that it looks like a dict to logging. This would be useful if you want to
generate values dynamically (whereas the values in a dict would be constant).
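
As a minimal sketch of that idea, here is a hypothetical class (the
``DynamicContext`` name and its ``worker`` key are inventions of this example)
whose value is looked up afresh each time an event is logged::

    import logging
    import threading

    class DynamicContext:
        """Dict-like object whose values are computed at logging time,
        not fixed when the adapter is created."""
        def __getitem__(self, name):
            if name == 'worker':
                return threading.current_thread().name
            raise KeyError(name)

        def __iter__(self):
            return iter(['worker'])

    logging.basicConfig(format='%(worker)s: %(message)s')
    logger = logging.LoggerAdapter(logging.getLogger(__name__), DynamicContext())
    logger.warning('Hello')  # logged as 'MainThread: Hello'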


.. _filters-contextual:

Using Filters to impart contextual information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can also add contextual information to log output using a user-defined
:class:`Filter`. ``Filter`` instances are allowed to modify the ``LogRecords``
passed to them, including adding additional attributes which can then be output
using a suitable format string, or if needed a custom :class:`Formatter`.

For example, in a web application, the request being processed (or at least,
the interesting parts of it) can be stored in a threadlocal
(:class:`threading.local`) variable, and then accessed from a ``Filter`` to
add information from the request - say, the remote IP address and the remote
user's username - to the ``LogRecord``, using attribute names such as 'ip' and
'user'. A format string which references those attributes can then be used to
include the information in every logged message. Here's an example script::

    import logging
    from random import choice

    class ContextFilter(logging.Filter):
        """
        This is a filter which injects contextual information into the log.

        Rather than use actual contextual information, we just use random
        data in this demo.
        """

        USERS = ['jim', 'fred', 'sheila']
        IPS = ['123.231.231.123', '127.0.0.1', '192.168.0.1']

        def filter(self, record):
            record.ip = choice(ContextFilter.IPS)
            record.user = choice(ContextFilter.USERS)
            return True

    if __name__ == '__main__':
        levels = (logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL)
        logging.basicConfig(level=logging.DEBUG,
                            format='%(asctime)-15s %(name)-5s %(levelname)-8s IP: %(ip)-15s User: %(user)-8s %(message)s')
        a1 = logging.getLogger('a.b.c')
        a2 = logging.getLogger('d.e.f')

        f = ContextFilter()
        a1.addFilter(f)
        a2.addFilter(f)
        a1.debug('A debug message')
        a1.info('An info message with %s', 'some parameters')
        for x in range(10):
            lvl = choice(levels)
            lvlname = logging.getLevelName(lvl)
            a2.log(lvl, 'A message at %s level with %d %s', lvlname, 2, 'parameters')

which, when run, produces something like:

.. code-block:: none

    2010-09-06 22:38:15,292 a.b.c DEBUG    IP: 123.231.231.123 User: fred     A debug message
    2010-09-06 22:38:15,300 a.b.c INFO     IP: 192.168.0.1     User: sheila   An info message with some parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 127.0.0.1       User: jim      A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 127.0.0.1       User: sheila   A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f ERROR    IP: 123.231.231.123 User: fred     A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 192.168.0.1     User: jim      A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f CRITICAL IP: 127.0.0.1       User: sheila   A message at CRITICAL level with 2 parameters
    2010-09-06 22:38:15,300 d.e.f DEBUG    IP: 192.168.0.1     User: jim      A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f ERROR    IP: 127.0.0.1       User: sheila   A message at ERROR level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f DEBUG    IP: 123.231.231.123 User: fred     A message at DEBUG level with 2 parameters
    2010-09-06 22:38:15,301 d.e.f INFO     IP: 123.231.231.123 User: fred     A message at INFO level with 2 parameters


.. _multiple-processes:

Logging to a single file from multiple processes
------------------------------------------------

Although logging is thread-safe, and logging to a single file from multiple
threads in a single process *is* supported, logging to a single file from
*multiple processes* is *not* supported, because there is no standard way to
serialize access to a single file across multiple processes in Python. If you
need to log to a single file from multiple processes, one way of doing this is
to have all the processes log to a :class:`~handlers.SocketHandler`, and have a
separate process which implements a socket server which reads from the socket
and logs to file. (If you prefer, you can dedicate one thread in one of the
existing processes to perform this function.)
:ref:`This section <network-logging>` documents this approach in more detail and
includes a working socket receiver which can be used as a starting point for you
to adapt in your own applications.

You could also write your own handler which uses the
:class:`~multiprocessing.Lock` class from the :mod:`multiprocessing` module to
serialize access to the file from your processes. The existing
:class:`FileHandler` and subclasses do not make use of :mod:`multiprocessing`
at present, though they may do so in the future. Note that at present, the
:mod:`multiprocessing` module does not provide working lock functionality on
all platforms (see https://bugs.python.org/issue3770).
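
As a rough sketch of that idea (the class name and the requirement to pass in
a shared lock are inventions of this example, not part of the standard
library), such a handler might look like this::

    import logging
    import multiprocessing

    class LockingFileHandler(logging.FileHandler):
        """Serialize emit() across processes using a multiprocessing lock
        created in the parent and inherited by (or passed to) the children."""
        def __init__(self, filename, lock, mode='a'):
            super().__init__(filename, mode)
            self._mp_lock = lock

        def emit(self, record):
            # StreamHandler.emit flushes the stream, so the written bytes
            # reach the file before the lock is released.
            with self._mp_lock:
                super().emit(record)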

.. currentmodule:: logging.handlers

Alternatively, you can use a ``Queue`` and a :class:`QueueHandler` to send
all logging events to one of the processes in your multi-process application.
The following example script demonstrates how you can do this; in the example
a separate listener process listens for events sent by other processes and logs
them according to its own logging configuration. Although the example only
demonstrates one way of doing it (for example, you may want to use a listener
thread rather than a separate listener process -- the implementation would be
analogous) it does allow for completely different logging configurations for
the listener and the other processes in your application, and can be used as
the basis for code meeting your own specific requirements::

    # You'll need these imports in your own code
    import logging
    import logging.handlers
    import multiprocessing

    # Next two import lines for this demo only
    from random import choice, random
    import time

    #
    # Because you'll want to define the logging configurations for listener and workers, the
    # listener and worker process functions take a configurer parameter which is a callable
    # for configuring logging for that process. These functions are also passed the queue,
    # which they use for communication.
    #
    # In practice, you can configure the listener however you want, but note that in this
    # simple example, the listener does not apply level or filter logic to received records.
    # In practice, you would probably want to do this logic in the worker processes, to avoid
    # sending events which would be filtered out between processes.
    #
    # The size of the rotated files is made small so you can see the results easily.
    def listener_configurer():
        root = logging.getLogger()
        h = logging.handlers.RotatingFileHandler('mptest.log', 'a', 300, 10)
        f = logging.Formatter('%(asctime)s %(processName)-10s %(name)s %(levelname)-8s %(message)s')
        h.setFormatter(f)
        root.addHandler(h)

    # This is the listener process top-level loop: wait for logging events
    # (LogRecords) on the queue and handle them, quitting when you get a None
    # for a LogRecord.
    def listener_process(queue, configurer):
        configurer()
        while True:
            try:
                record = queue.get()
                if record is None:  # We send this as a sentinel to tell the listener to quit.
                    break
                logger = logging.getLogger(record.name)
                logger.handle(record)  # No level or filter logic applied - just do it!
            except Exception:
                import sys, traceback
                print('Whoops! Problem:', file=sys.stderr)
                traceback.print_exc(file=sys.stderr)

    # Arrays used for random selections in this demo

    LEVELS = [logging.DEBUG, logging.INFO, logging.WARNING,
              logging.ERROR, logging.CRITICAL]

    LOGGERS = ['a.b.c', 'd.e.f']

    MESSAGES = [
        'Random message #1',
        'Random message #2',
        'Random message #3',
    ]

    # The worker configuration is done at the start of the worker process run.
    # Note that on Windows you can't rely on fork semantics, so each process
    # will run the logging configuration code when it starts.
    def worker_configurer(queue):
        h = logging.handlers.QueueHandler(queue)  # Just the one handler needed
        root = logging.getLogger()
        root.addHandler(h)
        # send all messages, for demo; no other level or filter logic applied.
        root.setLevel(logging.DEBUG)

    # This is the worker process top-level loop, which just logs ten events with
    # random intervening delays before terminating.
    # The print messages are just so you know it's doing something!
    def worker_process(queue, configurer):
        configurer(queue)
        name = multiprocessing.current_process().name
        print('Worker started: %s' % name)
        for i in range(10):
            time.sleep(random())
            logger = logging.getLogger(choice(LOGGERS))
            level = choice(LEVELS)
            message = choice(MESSAGES)
            logger.log(level, message)
        print('Worker finished: %s' % name)

    # Here's where the demo gets orchestrated. Create the queue, create and start
    # the listener, create ten workers and start them, wait for them to finish,
    # then send a None to the queue to tell the listener to finish.
    def main():
        queue = multiprocessing.Queue(-1)
        listener = multiprocessing.Process(target=listener_process,
                                           args=(queue, listener_configurer))
        listener.start()
        workers = []
        for i in range(10):
            worker = multiprocessing.Process(target=worker_process,
                                             args=(queue, worker_configurer))
            workers.append(worker)
            worker.start()
        for w in workers:
            w.join()
        queue.put_nowait(None)
        listener.join()

    if __name__ == '__main__':
        main()

A variant of the above script keeps the logging in the main process, in a
separate thread::

    import logging
    import logging.config
    import logging.handlers
    from multiprocessing import Process, Queue
    import random
    import threading
    import time

    def logger_thread(q):
        while True:
            record = q.get()
            if record is None:
                break
            logger = logging.getLogger(record.name)
            logger.handle(record)


    def worker_process(q):
        qh = logging.handlers.QueueHandler(q)
        root = logging.getLogger()
        root.setLevel(logging.DEBUG)
        root.addHandler(qh)
        levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
                  logging.CRITICAL]
        loggers = ['foo', 'foo.bar', 'foo.bar.baz',
                   'spam', 'spam.ham', 'spam.ham.eggs']
        for i in range(100):
            lvl = random.choice(levels)
            logger = logging.getLogger(random.choice(loggers))
            logger.log(lvl, 'Message no. %d', i)

    if __name__ == '__main__':
        q = Queue()
        d = {
            'version': 1,
            'formatters': {
                'detailed': {
                    'class': 'logging.Formatter',
                    'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
                }
            },
            'handlers': {
                'console': {
                    'class': 'logging.StreamHandler',
                    'level': 'INFO',
                },
                'file': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'foofile': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-foo.log',
                    'mode': 'w',
                    'formatter': 'detailed',
                },
                'errors': {
                    'class': 'logging.FileHandler',
                    'filename': 'mplog-errors.log',
                    'mode': 'w',
                    'level': 'ERROR',
                    'formatter': 'detailed',
                },
            },
            'loggers': {
                'foo': {
                    'handlers': ['foofile']
                }
            },
            'root': {
                'level': 'DEBUG',
                'handlers': ['console', 'file', 'errors']
            },
        }
        workers = []
        for i in range(5):
            wp = Process(target=worker_process, name='worker %d' % (i + 1), args=(q,))
            workers.append(wp)
            wp.start()
        logging.config.dictConfig(d)
        lp = threading.Thread(target=logger_thread, args=(q,))
        lp.start()
        # At this point, the main process could do some useful work of its own
        # Once it's done that, it can wait for the workers to terminate...
        for wp in workers:
            wp.join()
        # And now tell the logging thread to finish up, too
        q.put(None)
        lp.join()

This variant shows how you can apply configuration for particular loggers -
for example, the ``foo`` logger has a special handler which stores all events
in the ``foo`` subsystem in a file ``mplog-foo.log``. This will be used by the
logging machinery in the main process (even though the logging events are
generated in the worker processes) to direct the messages to the appropriate
destinations.

Using file rotation
-------------------

.. sectionauthor:: Doug Hellmann, Vinay Sajip (changes)
.. (see <https://pymotw.com/3/logging/>)

Sometimes you want to let a log file grow to a certain size, then open a new
file and log to that. You may want to keep a certain number of these files, and
when that many files have been created, rotate the files so that the number of
files and the size of the files both remain bounded. For this usage pattern, the
logging package provides a :class:`~handlers.RotatingFileHandler`::

   import glob
   import logging
   import logging.handlers

   LOG_FILENAME = 'logging_rotatingfile_example.out'

   # Set up a specific logger with our desired output level
   my_logger = logging.getLogger('MyLogger')
   my_logger.setLevel(logging.DEBUG)

   # Add the log message handler to the logger
   handler = logging.handlers.RotatingFileHandler(
                 LOG_FILENAME, maxBytes=20, backupCount=5)

   my_logger.addHandler(handler)

   # Log some messages
   for i in range(20):
       my_logger.debug('i = %d', i)

   # See what files are created
   logfiles = glob.glob('%s*' % LOG_FILENAME)

   for filename in logfiles:
       print(filename)

The result should be 6 separate files, each with part of the log history for the
application:

.. code-block:: none

   logging_rotatingfile_example.out
   logging_rotatingfile_example.out.1
   logging_rotatingfile_example.out.2
   logging_rotatingfile_example.out.3
   logging_rotatingfile_example.out.4
   logging_rotatingfile_example.out.5

The most current file is always :file:`logging_rotatingfile_example.out`,
and each time it reaches the size limit it is renamed with the suffix
``.1``. Each of the existing backup files is renamed to increment the suffix
(``.1`` becomes ``.2``, etc.) and the oldest backup is removed, so at most
*backupCount* backups are kept.

This example sets the log size deliberately small, as an extreme demonstration
of rotation. In real use you would set *maxBytes* to an appropriate value.
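
For example, ``maxBytes=10*1024*1024`` would allow each log file to grow to
roughly 10 MB before being rolled over.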

.. _format-styles:

Use of alternative formatting styles
------------------------------------

When logging was added to the Python standard library, the only way of
formatting messages with variable content was to use the %-formatting
method. Since then, Python has gained two new formatting approaches:
:class:`string.Template` (added in Python 2.4) and :meth:`str.format`
(added in Python 2.6).

Logging (as of 3.2) provides improved support for these two additional
formatting styles. The :class:`Formatter` class has been enhanced to take an
additional, optional keyword parameter named ``style``. This defaults to
``'%'``, but other possible values are ``'{'`` and ``'$'``, which correspond
to the other two formatting styles. Backwards compatibility is maintained by
default (as you would expect), but by explicitly specifying a style parameter,
you get the ability to specify format strings which work with
:meth:`str.format` or :class:`string.Template`. Here's an example console
session to show the possibilities:

.. code-block:: pycon

    >>> import logging
    >>> root = logging.getLogger()
    >>> root.setLevel(logging.DEBUG)
    >>> handler = logging.StreamHandler()
    >>> bf = logging.Formatter('{asctime} {name} {levelname:8s} {message}',
    ...                        style='{')
    >>> handler.setFormatter(bf)
    >>> root.addHandler(handler)
    >>> logger = logging.getLogger('foo.bar')
    >>> logger.debug('This is a DEBUG message')
    2010-10-28 15:11:55,341 foo.bar DEBUG    This is a DEBUG message
    >>> logger.critical('This is a CRITICAL message')
    2010-10-28 15:12:11,526 foo.bar CRITICAL This is a CRITICAL message
    >>> df = logging.Formatter('$asctime $name ${levelname} $message',
    ...                        style='$')
    >>> handler.setFormatter(df)
    >>> logger.debug('This is a DEBUG message')
    2010-10-28 15:13:06,924 foo.bar DEBUG This is a DEBUG message
    >>> logger.critical('This is a CRITICAL message')
    2010-10-28 15:13:11,494 foo.bar CRITICAL This is a CRITICAL message
    >>>

Note that the formatting of logging messages for final output to logs is
completely independent of how an individual logging message is constructed.
That can still use %-formatting, as shown here::

    >>> logger.error('This is an%s %s %s', 'other,', 'ERROR,', 'message')
    2010-10-28 15:19:29,833 foo.bar ERROR This is another, ERROR, message
    >>>

Logging calls (``logger.debug()``, ``logger.info()`` etc.) only take
positional parameters for the actual logging message itself, with keyword
parameters used only for determining options for how to handle the actual
logging call (e.g. the ``exc_info`` keyword parameter to indicate that
traceback information should be logged, or the ``extra`` keyword parameter
to indicate additional contextual information to be added to the log). So
you cannot directly make logging calls using :meth:`str.format` or
:class:`string.Template` syntax, because internally the logging package
uses %-formatting to merge the format string and the variable arguments.
There would be no changing this while preserving backward compatibility, since
all logging calls which are out there in existing code will be using %-format
strings.

There is, however, a way that you can use {}- and $- formatting to construct
your individual log messages. Recall that for a message you can use an
arbitrary object as a message format string, and that the logging package will
call ``str()`` on that object to get the actual format string. Consider the
following two classes::

    class BraceMessage:
        def __init__(self, fmt, *args, **kwargs):
            self.fmt = fmt
            self.args = args
            self.kwargs = kwargs

        def __str__(self):
            return self.fmt.format(*self.args, **self.kwargs)

    class DollarMessage:
        def __init__(self, fmt, **kwargs):
            self.fmt = fmt
            self.kwargs = kwargs

        def __str__(self):
            from string import Template
            return Template(self.fmt).substitute(**self.kwargs)

Either of these can be used in place of a format string, to allow {}- or
$-formatting to be used to build the actual "message" part which appears in the
formatted log output in place of "%(message)s" or "{message}" or "$message".
It's a little unwieldy to use the class names whenever you want to log
something, but it's quite palatable if you use an alias such as ``__`` (double
underscore --- not to be confused with ``_``, the single underscore used as a
synonym/alias for :func:`gettext.gettext` or its brethren).

The above classes are not included in Python, though they're easy enough to
copy and paste into your own code. They can be used as follows (assuming that
they're declared in a module called ``wherever``):

.. code-block:: pycon

    >>> from wherever import BraceMessage as __
    >>> print(__('Message with {0} {name}', 2, name='placeholders'))
    Message with 2 placeholders
    >>> class Point: pass
    ...
    >>> p = Point()
    >>> p.x = 0.5
    >>> p.y = 0.5
    >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})',
    ...       point=p))
    Message with coordinates: (0.50, 0.50)
    >>> from wherever import DollarMessage as __
    >>> print(__('Message with $num $what', num=2, what='placeholders'))
    Message with 2 placeholders
    >>>

While the above examples use ``print()`` to show how the formatting works, you
would of course use ``logger.debug()`` or similar to actually log using this
approach.

One thing to note is that you pay no significant performance penalty with this
approach: the actual formatting happens not when you make the logging call, but
when (and if) the logged message is actually about to be output to a log by a
handler. So the only slightly unusual thing which might trip you up is that the
parentheses go around the format string and the arguments, not just the format
string. That's because the ``__`` notation is just syntactic sugar for a
constructor call to one of the ``XXXMessage`` classes.

If you prefer, you can use a :class:`LoggerAdapter` to achieve a similar effect
to the above, as in the following example::

    import logging

    class Message:
        def __init__(self, fmt, args):
            self.fmt = fmt
            self.args = args

        def __str__(self):
            return self.fmt.format(*self.args)

    class StyleAdapter(logging.LoggerAdapter):
        def __init__(self, logger, extra=None):
            super().__init__(logger, extra or {})

        def log(self, level, msg, *args, **kwargs):
            if self.isEnabledFor(level):
                msg, kwargs = self.process(msg, kwargs)
                self.logger._log(level, Message(msg, args), (), **kwargs)

    logger = StyleAdapter(logging.getLogger(__name__))

    def main():
        logger.debug('Hello, {}', 'world!')

    if __name__ == '__main__':
        logging.basicConfig(level=logging.DEBUG)
        main()

The above script should log the message ``Hello, world!`` when run with
Python 3.2 or later.
   1175 
   1176 
   1177 .. currentmodule:: logging
   1178 
   1179 .. _custom-logrecord:
   1180 
   1181 Customizing ``LogRecord``
   1182 -------------------------
   1183 
   1184 Every logging event is represented by a :class:`LogRecord` instance.
   1185 When an event is logged and not filtered out by a logger's level, a
   1186 :class:`LogRecord` is created, populated with information about the event and
   1187 then passed to the handlers for that logger (and its ancestors, up to and
   1188 including the logger where further propagation up the hierarchy is disabled).
   1189 Before Python 3.2, there were only two places where this creation was done:
   1190 
   1191 * :meth:`Logger.makeRecord`, which is called in the normal process of
   1192   logging an event. This invoked :class:`LogRecord` directly to create an
   1193   instance.
   1194 * :func:`makeLogRecord`, which is called with a dictionary containing
   1195   attributes to be added to the LogRecord. This is typically invoked when a
   1196   suitable dictionary has been received over the network (e.g. in pickle form
   1197   via a :class:`~handlers.SocketHandler`, or in JSON form via an
   1198   :class:`~handlers.HTTPHandler`).
   1199 
   1200 This has usually meant that if you need to do anything special with a
   1201 :class:`LogRecord`, you've had to do one of the following.
   1202 
   1203 * Create your own :class:`Logger` subclass, which overrides
   1204   :meth:`Logger.makeRecord`, and set it using :func:`~logging.setLoggerClass`
   1205   before any loggers that you care about are instantiated.
   1206 * Add a :class:`Filter` to a logger or handler, which does the
   1207   necessary special manipulation you need when its
   1208   :meth:`~Filter.filter` method is called.
   1209 
   1210 The first approach would be a little unwieldy in the scenario where (say)
   1211 several different libraries wanted to do different things. Each would attempt
   1212 to set its own :class:`Logger` subclass, and the one which did this last would
   1213 win.
   1214 
   1215 The second approach works reasonably well for many cases, but does not allow
   1216 you to e.g. use a specialized subclass of :class:`LogRecord`. Library
   1217 developers can set a suitable filter on their loggers, but they would have to
   1218 remember to do this every time they introduced a new logger (which they would
   1219 do simply by adding new packages or modules and doing ::
   1220 
   1221    logger = logging.getLogger(__name__)
   1222 
   1223 at module level). It's probably one too many things to think about. Developers
   1224 could also add the filter to a :class:`~logging.NullHandler` attached to their
   1225 top-level logger, but this would not be invoked if an application developer
   1226 attached a handler to a lower-level library logger --- so output from that
   1227 handler would not reflect the intentions of the library developer.
   1228 
   1229 In Python 3.2 and later, :class:`~logging.LogRecord` creation is done through a
   1230 factory, which you can specify. The factory is just a callable you can set with
   1231 :func:`~logging.setLogRecordFactory`, and interrogate with
   1232 :func:`~logging.getLogRecordFactory`. The factory is invoked with the same
   1233 signature as the :class:`~logging.LogRecord` constructor, as :class:`LogRecord`
   1234 is the default setting for the factory.
   1235 
   1236 This approach allows a custom factory to control all aspects of LogRecord
   1237 creation. For example, you could return a subclass, or just add some additional
   1238 attributes to the record once created, using a pattern similar to this::
   1239 
   1240     old_factory = logging.getLogRecordFactory()
   1241 
   1242     def record_factory(*args, **kwargs):
   1243         record = old_factory(*args, **kwargs)
   1244         record.custom_attribute = 0xdecafbad
   1245         return record
   1246 
   1247     logging.setLogRecordFactory(record_factory)
   1248 
   1249 This pattern allows different libraries to chain factories together, and as
   1250 long as they don't overwrite each other's attributes or unintentionally
   1251 overwrite the attributes provided as standard, there should be no surprises.
   1252 However, it should be borne in mind that each link in the chain adds run-time
   1253 overhead to all logging operations, and the technique should only be used when
   1254 the use of a :class:`Filter` does not provide the desired result.
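
As a sketch of how such chaining might look in practice, here are two
hypothetical libraries installing factories one after the other (the function
and attribute names are purely illustrative)::

    import logging

    def install_attribute_factory(attr_name, value):
        # Wrap whatever factory is currently installed, so that factories
        # chain rather than replace one another.
        wrapped = logging.getLogRecordFactory()

        def factory(*args, **kwargs):
            record = wrapped(*args, **kwargs)
            setattr(record, attr_name, value)
            return record

        logging.setLogRecordFactory(factory)

    install_attribute_factory('lib_one_tag', 1)   # e.g. done by one library
    install_attribute_factory('lib_two_tag', 2)   # e.g. done by another

Records created after both calls will carry both attributes.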
   1255 
   1256 
   1257 .. _zeromq-handlers:
   1258 
   1259 Subclassing QueueHandler - a ZeroMQ example
   1260 -------------------------------------------
   1261 
   1262 You can use a :class:`QueueHandler` subclass to send messages to other kinds
of queues, for example a ZeroMQ 'publish' socket. In the example below, the
   1264 socket is created separately and passed to the handler (as its 'queue')::
   1265 
   1266     import zmq   # using pyzmq, the Python binding for ZeroMQ
    import json  # for serializing records portably
    from logging.handlers import QueueHandler
   1268 
   1269     ctx = zmq.Context()
   1270     sock = zmq.Socket(ctx, zmq.PUB)  # or zmq.PUSH, or other suitable value
   1271     sock.bind('tcp://*:5556')        # or wherever
   1272 
   1273     class ZeroMQSocketHandler(QueueHandler):
   1274         def enqueue(self, record):
   1275             self.queue.send_json(record.__dict__)
   1276 
   1277 
   1278     handler = ZeroMQSocketHandler(sock)
   1279 
   1280 
   1281 Of course there are other ways of organizing this, for example passing in the
   1282 data needed by the handler to create the socket::
   1283 
   1284     class ZeroMQSocketHandler(QueueHandler):
   1285         def __init__(self, uri, socktype=zmq.PUB, ctx=None):
   1286             self.ctx = ctx or zmq.Context()
   1287             socket = zmq.Socket(self.ctx, socktype)
   1288             socket.bind(uri)
   1289             super().__init__(socket)
   1290 
   1291         def enqueue(self, record):
   1292             self.queue.send_json(record.__dict__)
   1293 
   1294         def close(self):
   1295             self.queue.close()
   1296 
   1297 
   1298 Subclassing QueueListener - a ZeroMQ example
   1299 --------------------------------------------
   1300 
   1301 You can also subclass :class:`QueueListener` to get messages from other kinds
   1302 of queues, for example a ZeroMQ 'subscribe' socket. Here's an example::
   1303 
    import logging
    import zmq
    from logging.handlers import QueueListener

    class ZeroMQSocketListener(QueueListener):
   1305         def __init__(self, uri, *handlers, **kwargs):
   1306             self.ctx = kwargs.get('ctx') or zmq.Context()
   1307             socket = zmq.Socket(self.ctx, zmq.SUB)
   1308             socket.setsockopt_string(zmq.SUBSCRIBE, '')  # subscribe to everything
   1309             socket.connect(uri)
   1310             super().__init__(socket, *handlers, **kwargs)
   1311 
   1312         def dequeue(self):
   1313             msg = self.queue.recv_json()
   1314             return logging.makeLogRecord(msg)
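
As a minimal usage sketch, assuming a publisher is sending records on port
5556 as in the handler example above::

    listener = ZeroMQSocketListener('tcp://localhost:5556',
                                    logging.StreamHandler())
    listener.start()
    # ... the application runs, and events received via the socket are
    # dispatched to the attached handlers ...
    listener.stop()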
   1315 
   1316 
   1317 .. seealso::
   1318 
   1319    Module :mod:`logging`
   1320       API reference for the logging module.
   1321 
   1322    Module :mod:`logging.config`
   1323       Configuration API for the logging module.
   1324 
   1325    Module :mod:`logging.handlers`
   1326       Useful handlers included with the logging module.
   1327 
   1328    :ref:`A basic logging tutorial <logging-basic-tutorial>`
   1329 
   1330    :ref:`A more advanced logging tutorial <logging-advanced-tutorial>`
   1331 
   1332 
   1333 An example dictionary-based configuration
   1334 -----------------------------------------
   1335 
   1336 Below is an example of a logging configuration dictionary - it's taken from
   1337 the `documentation on the Django project <https://docs.djangoproject.com/en/1.9/topics/logging/#configuring-logging>`_.
   1338 This dictionary is passed to :func:`~config.dictConfig` to put the configuration into effect::
   1339 
   1340     LOGGING = {
   1341         'version': 1,
   1342         'disable_existing_loggers': True,
   1343         'formatters': {
   1344             'verbose': {
   1345                 'format': '%(levelname)s %(asctime)s %(module)s %(process)d %(thread)d %(message)s'
   1346             },
   1347             'simple': {
   1348                 'format': '%(levelname)s %(message)s'
   1349             },
   1350         },
   1351         'filters': {
   1352             'special': {
   1353                 '()': 'project.logging.SpecialFilter',
   1354                 'foo': 'bar',
   1355             }
   1356         },
   1357         'handlers': {
   1358             'null': {
   1359                 'level':'DEBUG',
   1360                 'class':'django.utils.log.NullHandler',
   1361             },
   1362             'console':{
   1363                 'level':'DEBUG',
   1364                 'class':'logging.StreamHandler',
   1365                 'formatter': 'simple'
   1366             },
   1367             'mail_admins': {
   1368                 'level': 'ERROR',
   1369                 'class': 'django.utils.log.AdminEmailHandler',
   1370                 'filters': ['special']
   1371             }
   1372         },
   1373         'loggers': {
   1374             'django': {
   1375                 'handlers':['null'],
   1376                 'propagate': True,
   1377                 'level':'INFO',
   1378             },
   1379             'django.request': {
   1380                 'handlers': ['mail_admins'],
   1381                 'level': 'ERROR',
   1382                 'propagate': False,
   1383             },
   1384             'myproject.custom': {
   1385                 'handlers': ['console', 'mail_admins'],
   1386                 'level': 'INFO',
   1387                 'filters': ['special']
   1388             }
   1389         }
   1390     }
   1391 
   1392 For more information about this configuration, you can see the `relevant
   1393 section <https://docs.djangoproject.com/en/1.9/topics/logging/#configuring-logging>`_
   1394 of the Django documentation.
   1395 
   1396 .. _cookbook-rotator-namer:
   1397 
   1398 Using a rotator and namer to customize log rotation processing
   1399 --------------------------------------------------------------
   1400 
   1401 An example of how you can define a namer and rotator is given in the following
   1402 snippet, which shows zlib-based compression of the log file::
   1403 
    import logging.handlers
    import os
    import zlib

    def namer(name):
   1405         return name + ".gz"
   1406 
   1407     def rotator(source, dest):
   1408         with open(source, "rb") as sf:
   1409             data = sf.read()
   1410             compressed = zlib.compress(data, 9)
   1411             with open(dest, "wb") as df:
   1412                 df.write(compressed)
   1413         os.remove(source)
   1414 
   1415     rh = logging.handlers.RotatingFileHandler(...)
   1416     rh.rotator = rotator
   1417     rh.namer = namer
   1418 
   1419 These are not "true" .gz files, as they are bare compressed data, with no
   1420 "container" such as youd find in an actual gzip file. This snippet is just
   1421 for illustration purposes.
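
If you want the rotated files to be genuine gzip archives, you could use the
:mod:`gzip` module instead - a sketch along the same lines as the snippet
above::

    import gzip
    import os
    import shutil

    def gzip_rotator(source, dest):
        # Stream the data through gzip so the result is a real .gz file
        with open(source, 'rb') as sf, gzip.open(dest, 'wb') as df:
            shutil.copyfileobj(sf, df)
        os.remove(source)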
   1422 
   1423 A more elaborate multiprocessing example
   1424 ----------------------------------------
   1425 
   1426 The following working example shows how logging can be used with multiprocessing
   1427 using configuration files. The configurations are fairly simple, but serve to
   1428 illustrate how more complex ones could be implemented in a real multiprocessing
   1429 scenario.
   1430 
In the example, the main process spawns a listener process and some worker
processes. The main process, the listener and the workers each have their own
configuration (the workers all share the same one), so there are three separate
configurations in all. We can see how the main process logs, how the workers
log to a QueueHandler, and how the listener implements a QueueListener with a
more complex logging configuration, arranging to dispatch events received via
the queue to the handlers specified in that configuration. These configurations
are purely illustrative, but you should be able to adapt this example to your
own scenario.
   1440 
   1441 Here's the script - the docstrings and the comments hopefully explain how it
   1442 works::
   1443 
   1444     import logging
   1445     import logging.config
   1446     import logging.handlers
   1447     from multiprocessing import Process, Queue, Event, current_process
   1448     import os
   1449     import random
   1450     import time
   1451 
   1452     class MyHandler:
   1453         """
   1454         A simple handler for logging events. It runs in the listener process and
   1455         dispatches events to loggers based on the name in the received record,
   1456         which then get dispatched, by the logging system, to the handlers
   1457         configured for those loggers.
   1458         """
   1459         def handle(self, record):
   1460             logger = logging.getLogger(record.name)
   1461             # The process name is transformed just to show that it's the listener
   1462             # doing the logging to files and console
   1463             record.processName = '%s (for %s)' % (current_process().name, record.processName)
   1464             logger.handle(record)
   1465 
   1466     def listener_process(q, stop_event, config):
   1467         """
   1468         This could be done in the main process, but is just done in a separate
   1469         process for illustrative purposes.
   1470 
   1471         This initialises logging according to the specified configuration,
   1472         starts the listener and waits for the main process to signal completion
   1473         via the event. The listener is then stopped, and the process exits.
   1474         """
   1475         logging.config.dictConfig(config)
   1476         listener = logging.handlers.QueueListener(q, MyHandler())
   1477         listener.start()
   1478         if os.name == 'posix':
   1479             # On POSIX, the setup logger will have been configured in the
   1480             # parent process, but should have been disabled following the
   1481             # dictConfig call.
   1482             # On Windows, since fork isn't used, the setup logger won't
   1483             # exist in the child, so it would be created and the message
   1484             # would appear - hence the "if posix" clause.
   1485             logger = logging.getLogger('setup')
   1486             logger.critical('Should not appear, because of disabled logger ...')
   1487         stop_event.wait()
   1488         listener.stop()
   1489 
   1490     def worker_process(config):
   1491         """
   1492         A number of these are spawned for the purpose of illustration. In
   1493         practice, they could be a heterogeneous bunch of processes rather than
   1494         ones which are identical to each other.
   1495 
   1496         This initialises logging according to the specified configuration,
   1497         and logs a hundred messages with random levels to randomly selected
   1498         loggers.
   1499 
   1500         A small sleep is added to allow other processes a chance to run. This
   1501         is not strictly needed, but it mixes the output from the different
   1502         processes a bit more than if it's left out.
   1503         """
   1504         logging.config.dictConfig(config)
   1505         levels = [logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR,
   1506                   logging.CRITICAL]
   1507         loggers = ['foo', 'foo.bar', 'foo.bar.baz',
   1508                    'spam', 'spam.ham', 'spam.ham.eggs']
   1509         if os.name == 'posix':
   1510             # On POSIX, the setup logger will have been configured in the
   1511             # parent process, but should have been disabled following the
   1512             # dictConfig call.
   1513             # On Windows, since fork isn't used, the setup logger won't
   1514             # exist in the child, so it would be created and the message
   1515             # would appear - hence the "if posix" clause.
   1516             logger = logging.getLogger('setup')
   1517             logger.critical('Should not appear, because of disabled logger ...')
   1518         for i in range(100):
   1519             lvl = random.choice(levels)
   1520             logger = logging.getLogger(random.choice(loggers))
   1521             logger.log(lvl, 'Message no. %d', i)
   1522             time.sleep(0.01)
   1523 
   1524     def main():
   1525         q = Queue()
   1526         # The main process gets a simple configuration which prints to the console.
   1527         config_initial = {
   1528             'version': 1,
   1529             'formatters': {
   1530                 'detailed': {
   1531                     'class': 'logging.Formatter',
   1532                     'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
   1533                 }
   1534             },
   1535             'handlers': {
   1536                 'console': {
   1537                     'class': 'logging.StreamHandler',
   1538                     'level': 'INFO',
   1539                 },
   1540             },
   1541             'root': {
   1542                 'level': 'DEBUG',
   1543                 'handlers': ['console']
   1544             },
   1545         }
   1546         # The worker process configuration is just a QueueHandler attached to the
   1547         # root logger, which allows all messages to be sent to the queue.
   1548         # We disable existing loggers to disable the "setup" logger used in the
   1549         # parent process. This is needed on POSIX because the logger will
   1550         # be there in the child following a fork().
   1551         config_worker = {
   1552             'version': 1,
   1553             'disable_existing_loggers': True,
   1554             'handlers': {
   1555                 'queue': {
   1556                     'class': 'logging.handlers.QueueHandler',
   1557                     'queue': q,
   1558                 },
   1559             },
   1560             'root': {
   1561                 'level': 'DEBUG',
   1562                 'handlers': ['queue']
   1563             },
   1564         }
   1565         # The listener process configuration shows that the full flexibility of
   1566         # logging configuration is available to dispatch events to handlers however
   1567         # you want.
   1568         # We disable existing loggers to disable the "setup" logger used in the
   1569         # parent process. This is needed on POSIX because the logger will
   1570         # be there in the child following a fork().
   1571         config_listener = {
   1572             'version': 1,
   1573             'disable_existing_loggers': True,
   1574             'formatters': {
   1575                 'detailed': {
   1576                     'class': 'logging.Formatter',
   1577                     'format': '%(asctime)s %(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
   1578                 },
   1579                 'simple': {
   1580                     'class': 'logging.Formatter',
   1581                     'format': '%(name)-15s %(levelname)-8s %(processName)-10s %(message)s'
   1582                 }
   1583             },
   1584             'handlers': {
   1585                 'console': {
   1586                     'class': 'logging.StreamHandler',
   1587                     'level': 'INFO',
   1588                     'formatter': 'simple',
   1589                 },
   1590                 'file': {
   1591                     'class': 'logging.FileHandler',
   1592                     'filename': 'mplog.log',
   1593                     'mode': 'w',
   1594                     'formatter': 'detailed',
   1595                 },
   1596                 'foofile': {
   1597                     'class': 'logging.FileHandler',
   1598                     'filename': 'mplog-foo.log',
   1599                     'mode': 'w',
   1600                     'formatter': 'detailed',
   1601                 },
   1602                 'errors': {
   1603                     'class': 'logging.FileHandler',
   1604                     'filename': 'mplog-errors.log',
   1605                     'mode': 'w',
   1606                     'level': 'ERROR',
   1607                     'formatter': 'detailed',
   1608                 },
   1609             },
   1610             'loggers': {
   1611                 'foo': {
   1612                     'handlers': ['foofile']
   1613                 }
   1614             },
   1615             'root': {
   1616                 'level': 'DEBUG',
   1617                 'handlers': ['console', 'file', 'errors']
   1618             },
   1619         }
   1620         # Log some initial events, just to show that logging in the parent works
   1621         # normally.
   1622         logging.config.dictConfig(config_initial)
   1623         logger = logging.getLogger('setup')
   1624         logger.info('About to create workers ...')
   1625         workers = []
   1626         for i in range(5):
   1627             wp = Process(target=worker_process, name='worker %d' % (i + 1),
   1628                          args=(config_worker,))
   1629             workers.append(wp)
   1630             wp.start()
   1631             logger.info('Started worker: %s', wp.name)
   1632         logger.info('About to create listener ...')
   1633         stop_event = Event()
   1634         lp = Process(target=listener_process, name='listener',
   1635                      args=(q, stop_event, config_listener))
   1636         lp.start()
   1637         logger.info('Started listener')
   1638         # We now hang around for the workers to finish their work.
   1639         for wp in workers:
   1640             wp.join()
   1641         # Workers all done, listening can now stop.
   1642         # Logging in the parent still works normally.
   1643         logger.info('Telling listener to stop ...')
   1644         stop_event.set()
   1645         lp.join()
   1646         logger.info('All done.')
   1647 
   1648     if __name__ == '__main__':
   1649         main()
   1650 
   1651 
   1652 Inserting a BOM into messages sent to a SysLogHandler
   1653 -----------------------------------------------------
   1654 
   1655 :rfc:`5424` requires that a
   1656 Unicode message be sent to a syslog daemon as a set of bytes which have the
   1657 following structure: an optional pure-ASCII component, followed by a UTF-8 Byte
   1658 Order Mark (BOM), followed by Unicode encoded using UTF-8. (See the
   1659 :rfc:`relevant section of the specification <5424#section-6>`.)
   1660 
   1661 In Python 3.1, code was added to
   1662 :class:`~logging.handlers.SysLogHandler` to insert a BOM into the message, but
   1663 unfortunately, it was implemented incorrectly, with the BOM appearing at the
   1664 beginning of the message and hence not allowing any pure-ASCII component to
   1665 appear before it.
   1666 
   1667 As this behaviour is broken, the incorrect BOM insertion code is being removed
   1668 from Python 3.2.4 and later. However, it is not being replaced, and if you
   1669 want to produce :rfc:`5424`-compliant messages which include a BOM, an optional
   1670 pure-ASCII sequence before it and arbitrary Unicode after it, encoded using
   1671 UTF-8, then you need to do the following:
   1672 
   1673 #. Attach a :class:`~logging.Formatter` instance to your
   1674    :class:`~logging.handlers.SysLogHandler` instance, with a format string
   1675    such as::
   1676 
   1677       'ASCII section\ufeffUnicode section'
   1678 
   1679    The Unicode code point U+FEFF, when encoded using UTF-8, will be
   1680    encoded as a UTF-8 BOM -- the byte-string ``b'\xef\xbb\xbf'``.
   1681 
   1682 #. Replace the ASCII section with whatever placeholders you like, but make sure
   1683    that the data that appears in there after substitution is always ASCII (that
   1684    way, it will remain unchanged after UTF-8 encoding).
   1685 
   1686 #. Replace the Unicode section with whatever placeholders you like; if the data
   1687    which appears there after substitution contains characters outside the ASCII
   1688    range, that's fine -- it will be encoded using UTF-8.
   1689 
   1690 The formatted message *will* be encoded using UTF-8 encoding by
   1691 ``SysLogHandler``. If you follow the above rules, you should be able to produce
   1692 :rfc:`5424`-compliant messages. If you don't, logging may not complain, but your
   1693 messages will not be RFC 5424-compliant, and your syslog daemon may complain.
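
Putting these steps together, a minimal sketch might look like this (the
syslog socket address and the logger name are assumptions for illustration)::

    import logging
    from logging.handlers import SysLogHandler

    handler = SysLogHandler(address='/dev/log')  # assumed local syslog socket
    # The logger name expands to pure ASCII before the BOM; the message
    # after the BOM may contain arbitrary Unicode, encoded using UTF-8.
    handler.setFormatter(logging.Formatter('%(name)s\ufeff%(message)s'))
    logger = logging.getLogger('myapp')
    logger.addHandler(handler)
    logger.warning('message text, possibly with non-ASCII characters')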
   1694 
   1695 
   1696 Implementing structured logging
   1697 -------------------------------
   1698 
   1699 Although most logging messages are intended for reading by humans, and thus not
   1700 readily machine-parseable, there might be circumstances where you want to output
   1701 messages in a structured format which *is* capable of being parsed by a program
   1702 (without needing complex regular expressions to parse the log message). This is
   1703 straightforward to achieve using the logging package. There are a number of
   1704 ways in which this could be achieved, but the following is a simple approach
   1705 which uses JSON to serialise the event in a machine-parseable manner::
   1706 
   1707     import json
   1708     import logging
   1709 
   1710     class StructuredMessage(object):
   1711         def __init__(self, message, **kwargs):
   1712             self.message = message
   1713             self.kwargs = kwargs
   1714 
   1715         def __str__(self):
   1716             return '%s >>> %s' % (self.message, json.dumps(self.kwargs))
   1717 
   1718     _ = StructuredMessage   # optional, to improve readability
   1719 
   1720     logging.basicConfig(level=logging.INFO, format='%(message)s')
   1721     logging.info(_('message 1', foo='bar', bar='baz', num=123, fnum=123.456))
   1722 
   1723 If the above script is run, it prints:
   1724 
   1725 .. code-block:: none
   1726 
   1727     message 1 >>> {"fnum": 123.456, "num": 123, "bar": "baz", "foo": "bar"}
   1728 
   1729 Note that the order of items might be different according to the version of
   1730 Python used.
   1731 
   1732 If you need more specialised processing, you can use a custom JSON encoder,
   1733 as in the following complete example::
   1734 
   1735     from __future__ import unicode_literals
   1736 
   1737     import json
   1738     import logging
   1739 
   1740     # This next bit is to ensure the script runs unchanged on 2.x and 3.x
   1741     try:
   1742         unicode
   1743     except NameError:
   1744         unicode = str
   1745 
   1746     class Encoder(json.JSONEncoder):
   1747         def default(self, o):
   1748             if isinstance(o, set):
   1749                 return tuple(o)
   1750             elif isinstance(o, unicode):
   1751                 return o.encode('unicode_escape').decode('ascii')
   1752             return super(Encoder, self).default(o)
   1753 
   1754     class StructuredMessage(object):
   1755         def __init__(self, message, **kwargs):
   1756             self.message = message
   1757             self.kwargs = kwargs
   1758 
   1759         def __str__(self):
   1760             s = Encoder().encode(self.kwargs)
   1761             return '%s >>> %s' % (self.message, s)
   1762 
   1763     _ = StructuredMessage   # optional, to improve readability
   1764 
   1765     def main():
   1766         logging.basicConfig(level=logging.INFO, format='%(message)s')
   1767         logging.info(_('message 1', set_value={1, 2, 3}, snowman='\u2603'))
   1768 
   1769     if __name__ == '__main__':
   1770         main()
   1771 
   1772 When the above script is run, it prints:
   1773 
   1774 .. code-block:: none
   1775 
   1776     message 1 >>> {"snowman": "\u2603", "set_value": [1, 2, 3]}
   1777 
   1778 Note that the order of items might be different according to the version of
   1779 Python used.
   1780 
   1781 
   1782 .. _custom-handlers:
   1783 
   1784 .. currentmodule:: logging.config
   1785 
   1786 Customizing handlers with :func:`dictConfig`
   1787 --------------------------------------------
   1788 
   1789 There are times when you want to customize logging handlers in particular ways,
   1790 and if you use :func:`dictConfig` you may be able to do this without
   1791 subclassing. As an example, consider that you may want to set the ownership of a
   1792 log file. On POSIX, this is easily done using :func:`shutil.chown`, but the file
   1793 handlers in the stdlib don't offer built-in support. You can customize handler
   1794 creation using a plain function such as::
   1795 
   1796     def owned_file_handler(filename, mode='a', encoding=None, owner=None):
   1797         if owner:
   1798             if not os.path.exists(filename):
   1799                 open(filename, 'a').close()
   1800             shutil.chown(filename, *owner)
   1801         return logging.FileHandler(filename, mode, encoding)
   1802 
   1803 You can then specify, in a logging configuration passed to :func:`dictConfig`,
   1804 that a logging handler be created by calling this function::
   1805 
   1806     LOGGING = {
   1807         'version': 1,
   1808         'disable_existing_loggers': False,
   1809         'formatters': {
   1810             'default': {
   1811                 'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
   1812             },
   1813         },
   1814         'handlers': {
   1815             'file':{
   1816                 # The values below are popped from this dictionary and
   1817                 # used to create the handler, set the handler's level and
   1818                 # its formatter.
   1819                 '()': owned_file_handler,
   1820                 'level':'DEBUG',
   1821                 'formatter': 'default',
   1822                 # The values below are passed to the handler creator callable
   1823                 # as keyword arguments.
   1824                 'owner': ['pulse', 'pulse'],
   1825                 'filename': 'chowntest.log',
   1826                 'mode': 'w',
   1827                 'encoding': 'utf-8',
   1828             },
   1829         },
   1830         'root': {
   1831             'handlers': ['file'],
   1832             'level': 'DEBUG',
   1833         },
   1834     }
   1835 
   1836 In this example I am setting the ownership using the ``pulse`` user and group,
   1837 just for the purposes of illustration. Putting it together into a working
   1838 script, ``chowntest.py``::
   1839 
   1840     import logging, logging.config, os, shutil
   1841 
   1842     def owned_file_handler(filename, mode='a', encoding=None, owner=None):
   1843         if owner:
   1844             if not os.path.exists(filename):
   1845                 open(filename, 'a').close()
   1846             shutil.chown(filename, *owner)
   1847         return logging.FileHandler(filename, mode, encoding)
   1848 
   1849     LOGGING = {
   1850         'version': 1,
   1851         'disable_existing_loggers': False,
   1852         'formatters': {
   1853             'default': {
   1854                 'format': '%(asctime)s %(levelname)s %(name)s %(message)s'
   1855             },
   1856         },
   1857         'handlers': {
   1858             'file':{
   1859                 # The values below are popped from this dictionary and
   1860                 # used to create the handler, set the handler's level and
   1861                 # its formatter.
   1862                 '()': owned_file_handler,
   1863                 'level':'DEBUG',
   1864                 'formatter': 'default',
   1865                 # The values below are passed to the handler creator callable
   1866                 # as keyword arguments.
   1867                 'owner': ['pulse', 'pulse'],
   1868                 'filename': 'chowntest.log',
   1869                 'mode': 'w',
   1870                 'encoding': 'utf-8',
   1871             },
   1872         },
   1873         'root': {
   1874             'handlers': ['file'],
   1875             'level': 'DEBUG',
   1876         },
   1877     }
   1878 
   1879     logging.config.dictConfig(LOGGING)
   1880     logger = logging.getLogger('mylogger')
   1881     logger.debug('A debug message')
   1882 
   1883 To run this, you will probably need to run as ``root``:
   1884 
   1885 .. code-block:: shell-session
   1886 
   1887     $ sudo python3.3 chowntest.py
   1888     $ cat chowntest.log
   1889     2013-11-05 09:34:51,128 DEBUG mylogger A debug message
   1890     $ ls -l chowntest.log
   1891     -rw-r--r-- 1 pulse pulse 55 2013-11-05 09:34 chowntest.log
   1892 
   1893 Note that this example uses Python 3.3 because that's where :func:`shutil.chown`
   1894 makes an appearance. This approach should work with any Python version that
   1895 supports :func:`dictConfig` - namely, Python 2.7, 3.2 or later. With pre-3.3
   1896 versions, you would need to implement the actual ownership change using e.g.
   1897 :func:`os.chown`.
   1898 
   1899 In practice, the handler-creating function may be in a utility module somewhere
   1900 in your project. Instead of the line in the configuration::
   1901 
   1902     '()': owned_file_handler,
   1903 
   1904 you could use e.g.::
   1905 
   1906     '()': 'ext://project.util.owned_file_handler',
   1907 
   1908 where ``project.util`` can be replaced with the actual name of the package
   1909 where the function resides. In the above working script, using
   1910 ``'ext://__main__.owned_file_handler'`` should work. Here, the actual callable
   1911 is resolved by :func:`dictConfig` from the ``ext://`` specification.
   1912 
   1913 This example hopefully also points the way to how you could implement other
   1914 types of file change - e.g. setting specific POSIX permission bits - in the
   1915 same way, using :func:`os.chmod`.
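
For example, a sketch of a corresponding creator function which sets
permission bits (the default of ``0o600`` is just an assumption) might be::

    import logging
    import os

    def permissioned_file_handler(filename, mode='a', encoding=None,
                                  perms=0o600):
        # Create the file if needed, then set the requested permission bits
        if not os.path.exists(filename):
            open(filename, 'a').close()
        os.chmod(filename, perms)
        return logging.FileHandler(filename, mode, encoding)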
   1916 
   1917 Of course, the approach could also be extended to types of handler other than a
   1918 :class:`~logging.FileHandler` - for example, one of the rotating file handlers,
   1919 or a different type of handler altogether.
   1920 
   1921 
   1922 .. currentmodule:: logging
   1923 
   1924 .. _formatting-styles:
   1925 
   1926 Using particular formatting styles throughout your application
   1927 --------------------------------------------------------------
   1928 
   1929 In Python 3.2, the :class:`~logging.Formatter` gained a ``style`` keyword
   1930 parameter which, while defaulting to ``%`` for backward compatibility, allowed
   1931 the specification of ``{`` or ``$`` to support the formatting approaches
   1932 supported by :meth:`str.format` and :class:`string.Template`. Note that this
   1933 governs the formatting of logging messages for final output to logs, and is
   1934 completely orthogonal to how an individual logging message is constructed.
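
For example, a formatter for final output using ``{``-style placeholders might
be set up like this (a minimal sketch)::

    import logging

    handler = logging.StreamHandler()
    # The style argument governs only the *output* format string below
    handler.setFormatter(logging.Formatter('{asctime} {name} {message}',
                                           style='{'))
    logging.getLogger().addHandler(handler)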
   1935 
   1936 Logging calls (:meth:`~Logger.debug`, :meth:`~Logger.info` etc.) only take
   1937 positional parameters for the actual logging message itself, with keyword
   1938 parameters used only for determining options for how to handle the logging call
   1939 (e.g. the ``exc_info`` keyword parameter to indicate that traceback information
   1940 should be logged, or the ``extra`` keyword parameter to indicate additional
   1941 contextual information to be added to the log). So you cannot directly make
   1942 logging calls using :meth:`str.format` or :class:`string.Template` syntax,
   1943 because internally the logging package uses %-formatting to merge the format
string and the variable arguments. This cannot be changed while preserving
backward compatibility, since all logging calls which are out there in existing
code will be using %-format strings.
   1947 
   1948 There have been suggestions to associate format styles with specific loggers,
   1949 but that approach also runs into backward compatibility problems because any
   1950 existing code could be using a given logger name and using %-formatting.
   1951 
   1952 For logging to work interoperably between any third-party libraries and your
   1953 code, decisions about formatting need to be made at the level of the
   1954 individual logging call. This opens up a couple of ways in which alternative
   1955 formatting styles can be accommodated.
   1956 
   1957 
   1958 Using LogRecord factories
   1959 ^^^^^^^^^^^^^^^^^^^^^^^^^
   1960 
   1961 In Python 3.2, along with the :class:`~logging.Formatter` changes mentioned
   1962 above, the logging package gained the ability to allow users to set their own
   1963 :class:`LogRecord` subclasses, using the :func:`setLogRecordFactory` function.
   1964 You can use this to set your own subclass of :class:`LogRecord`, which does the
   1965 Right Thing by overriding the :meth:`~LogRecord.getMessage` method. The base
   1966 class implementation of this method is where the ``msg % args`` formatting
   1967 happens, and where you can substitute your alternate formatting; however, you
   1968 should be careful to support all formatting styles and allow %-formatting as
   1969 the default, to ensure interoperability with other code. Care should also be
   1970 taken to call ``str(self.msg)``, just as the base implementation does.
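
A sketch of such a factory, under the assumption that a crude check for
``{``-style placeholders is acceptable for your messages, might be::

    import logging

    class StyledRecord(logging.LogRecord):
        def getMessage(self):
            msg = str(self.msg)  # call str(), as the base implementation does
            if self.args:
                # Crude heuristic: use str.format for brace-style messages,
                # falling back to %-formatting (the default) otherwise
                if '{' in msg and '%' not in msg:
                    msg = msg.format(*self.args)
                else:
                    msg = msg % self.args
            return msg

    logging.setLogRecordFactory(StyledRecord)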
   1971 
   1972 Refer to the reference documentation on :func:`setLogRecordFactory` and
   1973 :class:`LogRecord` for more information.
   1974 
   1975 
   1976 Using custom message objects
   1977 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   1978 
   1979 There is another, perhaps simpler way that you can use {}- and $- formatting to
   1980 construct your individual log messages. You may recall (from
   1981 :ref:`arbitrary-object-messages`) that when logging you can use an arbitrary
   1982 object as a message format string, and that the logging package will call
   1983 :func:`str` on that object to get the actual format string. Consider the
   1984 following two classes::
   1985 
   1986     class BraceMessage(object):
   1987         def __init__(self, fmt, *args, **kwargs):
   1988             self.fmt = fmt
   1989             self.args = args
   1990             self.kwargs = kwargs
   1991 
   1992         def __str__(self):
   1993             return self.fmt.format(*self.args, **self.kwargs)
   1994 
   1995     class DollarMessage(object):
   1996         def __init__(self, fmt, **kwargs):
   1997             self.fmt = fmt
   1998             self.kwargs = kwargs
   1999 
   2000         def __str__(self):
   2001             from string import Template
   2002             return Template(self.fmt).substitute(**self.kwargs)
   2003 
   2004 Either of these can be used in place of a format string, to allow {}- or
   2005 $-formatting to be used to build the actual "message" part which appears in the
   2006 formatted log output in place of %(message)s or {message} or $message.
If you find it a little unwieldy to use the class names whenever you want to log
something, you can make it more palatable by using an alias such as ``M`` or
``_`` for the message (or perhaps ``__``, if you are using ``_`` for
localization).
   2011 
   2012 Examples of this approach are given below. Firstly, formatting with
   2013 :meth:`str.format`::
   2014 
   2015     >>> __ = BraceMessage
   2016     >>> print(__('Message with {0} {1}', 2, 'placeholders'))
   2017     Message with 2 placeholders
   2018     >>> class Point: pass
   2019     ...
   2020     >>> p = Point()
   2021     >>> p.x = 0.5
   2022     >>> p.y = 0.5
   2023     >>> print(__('Message with coordinates: ({point.x:.2f}, {point.y:.2f})', point=p))
   2024     Message with coordinates: (0.50, 0.50)
   2025 
   2026 Secondly, formatting with :class:`string.Template`::
   2027 
   2028     >>> __ = DollarMessage
   2029     >>> print(__('Message with $num $what', num=2, what='placeholders'))
   2030     Message with 2 placeholders
   2031     >>>
   2032 
   2033 One thing to note is that you pay no significant performance penalty with this
   2034 approach: the actual formatting happens not when you make the logging call, but
   2035 when (and if) the logged message is actually about to be output to a log by a
   2036 handler. So the only slightly unusual thing which might trip you up is that the
   2037 parentheses go around the format string and the arguments, not just the format
string. That's because the ``__`` notation is just syntactic sugar for a
constructor call to one of the ``XXXMessage`` classes shown above.
   2040 
   2041 
   2042 .. _filters-dictconfig:
   2043 
   2044 .. currentmodule:: logging.config
   2045 
   2046 Configuring filters with :func:`dictConfig`
   2047 -------------------------------------------
   2048 
   2049 You *can* configure filters using :func:`~logging.config.dictConfig`, though it
   2050 might not be obvious at first glance how to do it (hence this recipe). Since
   2051 :class:`~logging.Filter` is the only filter class included in the standard
   2052 library, and it is unlikely to cater to many requirements (it's only there as a
   2053 base class), you will typically need to define your own :class:`~logging.Filter`
   2054 subclass with an overridden :meth:`~logging.Filter.filter` method. To do this,
   2055 specify the ``()`` key in the configuration dictionary for the filter,
   2056 specifying a callable which will be used to create the filter (a class is the
   2057 most obvious, but you can provide any callable which returns a
   2058 :class:`~logging.Filter` instance). Here is a complete example::
   2059 
   2060     import logging
   2061     import logging.config
   2062     import sys
   2063 
   2064     class MyFilter(logging.Filter):
   2065         def __init__(self, param=None):
   2066             self.param = param
   2067 
   2068         def filter(self, record):
   2069             if self.param is None:
   2070                 allow = True
   2071             else:
   2072                 allow = self.param not in record.msg
   2073             if allow:
   2074                 record.msg = 'changed: ' + record.msg
   2075             return allow
   2076 
   2077     LOGGING = {
   2078         'version': 1,
   2079         'filters': {
   2080             'myfilter': {
   2081                 '()': MyFilter,
   2082                 'param': 'noshow',
   2083             }
   2084         },
   2085         'handlers': {
   2086             'console': {
   2087                 'class': 'logging.StreamHandler',
   2088                 'filters': ['myfilter']
   2089             }
   2090         },
   2091         'root': {
   2092             'level': 'DEBUG',
   2093             'handlers': ['console']
   2094         },
   2095     }
   2096 
   2097     if __name__ == '__main__':
   2098         logging.config.dictConfig(LOGGING)
   2099         logging.debug('hello')
   2100         logging.debug('hello - noshow')
   2101 
   2102 This example shows how you can pass configuration data to the callable which
   2103 constructs the instance, in the form of keyword parameters. When run, the above
   2104 script will print:
   2105 
   2106 .. code-block:: none
   2107 
   2108     changed: hello
   2109 
   2110 which shows that the filter is working as configured.
   2111 
   2112 A couple of extra points to note:
   2113 
   2114 * If you can't refer to the callable directly in the configuration (e.g. if it
   2115   lives in a different module, and you can't import it directly where the
   2116   configuration dictionary is), you can use the form ``ext://...`` as described
   2117   in :ref:`logging-config-dict-externalobj`. For example, you could have used
   2118   the text ``'ext://__main__.MyFilter'`` instead of ``MyFilter`` in the above
   2119   example.
   2120 
   2121 * As well as for filters, this technique can also be used to configure custom
   2122   handlers and formatters. See :ref:`logging-config-dict-userdef` for more
   2123   information on how logging supports using user-defined objects in its
   2124   configuration, and see the other cookbook recipe :ref:`custom-handlers` above.
   2125 
   2126 
   2127 .. _custom-format-exception:
   2128 
   2129 Customized exception formatting
   2130 -------------------------------
   2131 
   2132 There might be times when you want to do customized exception formatting - for
   2133 argument's sake, let's say you want exactly one line per logged event, even
   2134 when exception information is present. You can do this with a custom formatter
   2135 class, as shown in the following example::
   2136 
   2137     import logging
   2138 
   2139     class OneLineExceptionFormatter(logging.Formatter):
   2140         def formatException(self, exc_info):
   2141             """
   2142             Format an exception so that it prints on a single line.
   2143             """
   2144             result = super(OneLineExceptionFormatter, self).formatException(exc_info)
   2145             return repr(result)  # or format into one line however you want to
   2146 
   2147         def format(self, record):
   2148             s = super(OneLineExceptionFormatter, self).format(record)
   2149             if record.exc_text:
   2150                 s = s.replace('\n', '') + '|'
   2151             return s
   2152 
   2153     def configure_logging():
   2154         fh = logging.FileHandler('output.txt', 'w')
   2155         f = OneLineExceptionFormatter('%(asctime)s|%(levelname)s|%(message)s|',
   2156                                       '%d/%m/%Y %H:%M:%S')
   2157         fh.setFormatter(f)
   2158         root = logging.getLogger()
   2159         root.setLevel(logging.DEBUG)
   2160         root.addHandler(fh)
   2161 
   2162     def main():
   2163         configure_logging()
   2164         logging.info('Sample message')
   2165         try:
   2166             x = 1 / 0
   2167         except ZeroDivisionError as e:
   2168             logging.exception('ZeroDivisionError: %s', e)
   2169 
   2170     if __name__ == '__main__':
   2171         main()
   2172 
   2173 When run, this produces a file with exactly two lines:
   2174 
   2175 .. code-block:: none
   2176 
   2177     28/01/2015 07:21:23|INFO|Sample message|
   2178     28/01/2015 07:21:23|ERROR|ZeroDivisionError: integer division or modulo by zero|'Traceback (most recent call last):\n  File "logtest7.py", line 30, in main\n    x = 1 / 0\nZeroDivisionError: integer division or modulo by zero'|
   2179 
   2180 While the above treatment is simplistic, it points the way to how exception
   2181 information can be formatted to your liking. The :mod:`traceback` module may be
   2182 helpful for more specialized needs.
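
For instance, a formatter which keeps only the final line of the traceback
could be sketched like this, using :func:`traceback.format_exception_only`::

    import logging
    import traceback

    class LastLineExceptionFormatter(logging.Formatter):
        def formatException(self, exc_info):
            # Keep only the final 'ExceptionType: message' line, dropping
            # the stack frames entirely
            lines = traceback.format_exception_only(*exc_info[:2])
            return ''.join(lines).rstrip()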
   2183 
   2184 .. _spoken-messages:
   2185 
   2186 Speaking logging messages
   2187 -------------------------
   2188 
   2189 There might be situations when it is desirable to have logging messages rendered
   2190 in an audible rather than a visible format. This is easy to do if you have
   2191 text-to-speech (TTS) functionality available in your system, even if it doesn't have
   2192 a Python binding. Most TTS systems have a command line program you can run, and
   2193 this can be invoked from a handler using :mod:`subprocess`. It's assumed here
   2194 that TTS command line programs won't expect to interact with users or take a
long time to complete, that the frequency of logged messages is not so high as
to swamp the user with messages, and that it's acceptable to have the messages
spoken one at a time rather than concurrently. The example implementation below
waits for one message to be spoken before the next is processed, and this
   2199 might cause other handlers to be kept waiting. Here is a short example showing
   2200 the approach, which assumes that the ``espeak`` TTS package is available::
   2201 
   2202     import logging
   2203     import subprocess
   2204     import sys
   2205 
   2206     class TTSHandler(logging.Handler):
   2207         def emit(self, record):
   2208             msg = self.format(record)
   2209             # Speak slowly in a female English voice
   2210             cmd = ['espeak', '-s150', '-ven+f3', msg]
   2211             p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
   2212                                  stderr=subprocess.STDOUT)
   2213             # wait for the program to finish
   2214             p.communicate()
   2215 
   2216     def configure_logging():
   2217         h = TTSHandler()
   2218         root = logging.getLogger()
   2219         root.addHandler(h)
   2220         # the default formatter just returns the message
   2221         root.setLevel(logging.DEBUG)
   2222 
   2223     def main():
   2224         logging.info('Hello')
   2225         logging.debug('Goodbye')
   2226 
   2227     if __name__ == '__main__':
   2228         configure_logging()
   2229         sys.exit(main())
   2230 
   2231 When run, this script should say "Hello" and then "Goodbye" in a female voice.
   2232 
   2233 The above approach can, of course, be adapted to other TTS systems and even
   2234 other systems altogether which can process messages via external programs run
   2235 from a command line.
   2236 
   2237 
   2238 .. _buffered-logging:
   2239 
   2240 Buffering logging messages and outputting them conditionally
   2241 ------------------------------------------------------------
   2242 
   2243 There might be situations where you want to log messages in a temporary area
   2244 and only output them if a certain condition occurs. For example, you may want to
   2245 start logging debug events in a function, and if the function completes without
   2246 errors, you don't want to clutter the log with the collected debug information,
   2247 but if there is an error, you want all the debug information to be output as well
   2248 as the error.
   2249 
   2250 Here is an example which shows how you could do this using a decorator for your
   2251 functions where you want logging to behave this way. It makes use of the
   2252 :class:`logging.handlers.MemoryHandler`, which allows buffering of logged events
   2253 until some condition occurs, at which point the buffered events are ``flushed``
   2254 - passed to another handler (the ``target`` handler) for processing. By default,
a ``MemoryHandler`` is flushed when its buffer fills up or when an event whose
level is greater than or equal to a specified threshold is seen. You can use this
   2257 recipe with a more specialised subclass of ``MemoryHandler`` if you want custom
   2258 flushing behavior.
   2259 
   2260 The example script has a simple function, ``foo``, which just cycles through
   2261 all the logging levels, writing to ``sys.stderr`` to say what level it's about
   2262 to log at, and then actually logging a message at that level. You can pass a
   2263 parameter to ``foo`` which, if true, will log at ERROR and CRITICAL levels -
   2264 otherwise, it only logs at DEBUG, INFO and WARNING levels.
   2265 
   2266 The script just arranges to decorate ``foo`` with a decorator which will do the
   2267 conditional logging that's required. The decorator takes a logger as a parameter
   2268 and attaches a memory handler for the duration of the call to the decorated
   2269 function. The decorator can be additionally parameterised using a target handler,
   2270 a level at which flushing should occur, and a capacity for the buffer. These
   2271 default to a :class:`~logging.StreamHandler` which writes to ``sys.stderr``,
   2272 ``logging.ERROR`` and ``100`` respectively.
   2273 
   2274 Here's the script::
   2275 
   2276     import logging
   2277     from logging.handlers import MemoryHandler
   2278     import sys
   2279 
   2280     logger = logging.getLogger(__name__)
   2281     logger.addHandler(logging.NullHandler())
   2282 
   2283     def log_if_errors(logger, target_handler=None, flush_level=None, capacity=None):
   2284         if target_handler is None:
   2285             target_handler = logging.StreamHandler()
   2286         if flush_level is None:
   2287             flush_level = logging.ERROR
   2288         if capacity is None:
   2289             capacity = 100
   2290         handler = MemoryHandler(capacity, flushLevel=flush_level, target=target_handler)
   2291 
   2292         def decorator(fn):
   2293             def wrapper(*args, **kwargs):
   2294                 logger.addHandler(handler)
   2295                 try:
   2296                     return fn(*args, **kwargs)
   2297                 except Exception:
   2298                     logger.exception('call failed')
   2299                     raise
   2300                 finally:
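                    # This calls BufferingHandler.flush() directly, which just
                    # empties the buffer - leftover events below the flush
                    # level are discarded rather than sent to the target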
   2301                     super(MemoryHandler, handler).flush()
   2302                     logger.removeHandler(handler)
   2303             return wrapper
   2304 
   2305         return decorator
   2306 
   2307     def write_line(s):
   2308         sys.stderr.write('%s\n' % s)
   2309 
   2310     def foo(fail=False):
   2311         write_line('about to log at DEBUG ...')
   2312         logger.debug('Actually logged at DEBUG')
   2313         write_line('about to log at INFO ...')
   2314         logger.info('Actually logged at INFO')
   2315         write_line('about to log at WARNING ...')
   2316         logger.warning('Actually logged at WARNING')
   2317         if fail:
   2318             write_line('about to log at ERROR ...')
   2319             logger.error('Actually logged at ERROR')
   2320             write_line('about to log at CRITICAL ...')
   2321             logger.critical('Actually logged at CRITICAL')
   2322         return fail
   2323 
   2324     decorated_foo = log_if_errors(logger)(foo)
   2325 
   2326     if __name__ == '__main__':
   2327         logger.setLevel(logging.DEBUG)
   2328         write_line('Calling undecorated foo with False')
   2329         assert not foo(False)
   2330         write_line('Calling undecorated foo with True')
   2331         assert foo(True)
   2332         write_line('Calling decorated foo with False')
   2333         assert not decorated_foo(False)
   2334         write_line('Calling decorated foo with True')
   2335         assert decorated_foo(True)
   2336 
When this script is run, the following output should be observed:

.. code-block:: none

    Calling undecorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling undecorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    about to log at CRITICAL ...
    Calling decorated foo with False
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    Calling decorated foo with True
    about to log at DEBUG ...
    about to log at INFO ...
    about to log at WARNING ...
    about to log at ERROR ...
    Actually logged at DEBUG
    Actually logged at INFO
    Actually logged at WARNING
    Actually logged at ERROR
    about to log at CRITICAL ...
    Actually logged at CRITICAL

As you can see, actual logging output only occurs when an event of severity
ERROR or greater is logged, but in that case, any previous events at lower
severities are also logged.

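The same buffering behaviour can also be set up without the decorator
machinery, by attaching a :class:`~logging.handlers.MemoryHandler` to a
logger directly. Here is a minimal sketch (the logger name is illustrative)::

    import logging
    from logging.handlers import MemoryHandler

    logger = logging.getLogger('buffered_example')
    logger.setLevel(logging.DEBUG)
    # Buffer up to 100 records, and send them all to the target handler
    # as soon as a record of severity ERROR or higher is seen.
    logger.addHandler(MemoryHandler(100, flushLevel=logging.ERROR,
                                    target=logging.StreamHandler()))

    logger.debug('buffered - nothing is shown yet')
    logger.error('this triggers a flush, so both messages now appear')
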
You can of course use the conventional means of decoration::

    @log_if_errors(logger)
    def foo(fail=False):
        ...

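The decorator's keyword arguments, whose defaults are shown in
``log_if_errors`` above, can also be overridden at the point of decoration.
For example, this variant buffers at most ten records and only flushes when
something is logged at ``CRITICAL``::

    @log_if_errors(logger, flush_level=logging.CRITICAL, capacity=10)
    def foo(fail=False):
        ...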

.. _utc-formatting:

Formatting times using UTC (GMT) via configuration
--------------------------------------------------

Sometimes you want to format times using UTC, which can be done using a class
such as ``UTCFormatter``, shown below::

    import logging
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

and you can then use the ``UTCFormatter`` in your code instead of
:class:`~logging.Formatter`. If you want to do that via configuration, you can
use the :func:`~logging.config.dictConfig` API with an approach illustrated by
the following complete example::

    import logging
    import logging.config
    import time

    class UTCFormatter(logging.Formatter):
        converter = time.gmtime

    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'utc': {
                '()': UTCFormatter,
                'format': '%(asctime)s %(message)s',
            },
            'local': {
                'format': '%(asctime)s %(message)s',
            }
        },
        'handlers': {
            'console1': {
                'class': 'logging.StreamHandler',
                'formatter': 'utc',
            },
            'console2': {
                'class': 'logging.StreamHandler',
                'formatter': 'local',
            },
        },
        'root': {
            'handlers': ['console1', 'console2'],
        }
    }

    if __name__ == '__main__':
        logging.config.dictConfig(LOGGING)
        logging.warning('The local time is %s', time.asctime())

When this script is run, it should print something like:

.. code-block:: none

    2015-10-17 12:53:29,501 The local time is Sat Oct 17 13:53:29 2015
    2015-10-17 13:53:29,501 The local time is Sat Oct 17 13:53:29 2015

showing how the time is formatted both as UTC (the first line, via
``console1``) and as local time (the second line, via ``console2``).

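As an aside, if you are configuring the formatter in code rather than via
:func:`~logging.config.dictConfig`, you don't necessarily need a subclass at
all: :meth:`~logging.Formatter.formatTime` looks up ``converter`` on the
instance, so you can set it there. A minimal sketch of that alternative::

    import logging
    import time

    handler = logging.StreamHandler()
    formatter = logging.Formatter('%(asctime)s %(message)s')
    # formatTime() calls self.converter, so assigning time.gmtime on the
    # instance makes just this formatter produce UTC timestamps.
    formatter.converter = time.gmtime
    handler.setFormatter(formatter)
    logging.getLogger().addHandler(handler)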

.. _context-manager:

Using a context manager for selective logging
---------------------------------------------

There are times when it would be useful to temporarily change the logging
configuration and revert it afterwards. For this, a context manager is the
most obvious way of saving and restoring the logging context. Here is a
simple example of such a context manager, which allows you to optionally
change the logging level and add a logging handler purely in the scope of
the context manager::

    import logging
    import sys

    class LoggingContext(object):
        def __init__(self, logger, level=None, handler=None, close=True):
            self.logger = logger
            self.level = level
            self.handler = handler
            self.close = close

        def __enter__(self):
            if self.level is not None:
                self.old_level = self.logger.level
                self.logger.setLevel(self.level)
            if self.handler:
                self.logger.addHandler(self.handler)

        def __exit__(self, et, ev, tb):
            if self.level is not None:
                self.logger.setLevel(self.old_level)
            if self.handler:
                self.logger.removeHandler(self.handler)
            if self.handler and self.close:
                self.handler.close()
            # implicit return of None => don't swallow exceptions

If you specify a level value, the logger's level is set to that value in the
scope of the ``with`` block covered by the context manager. If you specify a
handler, it is added to the logger on entry to the block and removed on exit
from the block. You can also ask the manager to close the handler for you on
block exit - you could do this if you don't need the handler any more.

To illustrate how it works, we can add the following block of code to the
above::

    if __name__ == '__main__':
        logger = logging.getLogger('foo')
        logger.addHandler(logging.StreamHandler())
        logger.setLevel(logging.INFO)
        logger.info('1. This should appear just once on stderr.')
        logger.debug('2. This should not appear.')
        with LoggingContext(logger, level=logging.DEBUG):
            logger.debug('3. This should appear once on stderr.')
        logger.debug('4. This should not appear.')
        h = logging.StreamHandler(sys.stdout)
        with LoggingContext(logger, level=logging.DEBUG, handler=h, close=True):
            logger.debug('5. This should appear twice - once on stderr and once on stdout.')
        logger.info('6. This should appear just once on stderr.')
        logger.debug('7. This should not appear.')

We initially set the logger's level to ``INFO``, so message #1 appears and
message #2 doesn't. We then change the level to ``DEBUG`` temporarily in the
following ``with`` block, and so message #3 appears. After the block exits, the
logger's level is restored to ``INFO`` and so message #4 doesn't appear. In the
next ``with`` block, we set the level to ``DEBUG`` again but also add a handler
writing to ``sys.stdout``. Thus, message #5 appears twice on the console (once
via ``stderr`` and once via ``stdout``). After the ``with`` statement's
completion, the status is as it was before so message #6 appears (like message
#1) whereas message #7 doesn't (just like message #2).

If we run the resulting script, the result is as follows:

.. code-block:: shell-session

    $ python logctx.py
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

If we run it again, but redirect ``stderr`` to ``/dev/null``, we see the
following, which is the only message written to ``stdout``:

.. code-block:: shell-session

    $ python logctx.py 2>/dev/null
    5. This should appear twice - once on stderr and once on stdout.

Once again, but redirecting ``stdout`` to ``/dev/null``, we get:

.. code-block:: shell-session

    $ python logctx.py >/dev/null
    1. This should appear just once on stderr.
    3. This should appear once on stderr.
    5. This should appear twice - once on stderr and once on stdout.
    6. This should appear just once on stderr.

In this case, message #5 printed to ``stdout`` doesn't appear, as expected.

Of course, the approach described here can be generalised, for example to
attach logging filters temporarily, as sketched below. Note that the above
code works in Python 2 as well as Python 3.

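As a sketch of that generalisation, here is an illustrative context manager
which attaches a filter to a logger just for the duration of a ``with``
block, using the same add-then-remove pattern as ``LoggingContext`` above::

    import logging
    from contextlib import contextmanager

    @contextmanager
    def temporary_filter(logger, filt):
        # Add the filter on entry, and always remove it on exit - even if
        # the body of the with block raises an exception.
        logger.addFilter(filt)
        try:
            yield logger
        finally:
            logger.removeFilter(filt)

which might be used like this, where ``some_filter`` is any filter object
acceptable to :meth:`~logging.Logger.addFilter`::

    with temporary_filter(logger, some_filter):
        ...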