Home | History | Annotate | Download | only in bionic
      1 /*
      2  * Copyright (C) 2008 The Android Open Source Project
      3  * All rights reserved.
      4  *
      5  * Redistribution and use in source and binary forms, with or without
      6  * modification, are permitted provided that the following conditions
      7  * are met:
      8  *  * Redistributions of source code must retain the above copyright
      9  *    notice, this list of conditions and the following disclaimer.
     10  *  * Redistributions in binary form must reproduce the above copyright
     11  *    notice, this list of conditions and the following disclaimer in
     12  *    the documentation and/or other materials provided with the
     13  *    distribution.
     14  *
     15  * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
     16  * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
     17  * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
     18  * FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
     19  * COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
     20  * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
     21  * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
     22  * OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
     23  * AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
     24  * OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
     25  * OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
     26  * SUCH DAMAGE.
     27  */
     28 /*
     29   This is a version (aka dlmalloc) of malloc/free/realloc written by
     30   Doug Lea and released to the public domain, as explained at
     31   http://creativecommons.org/licenses/publicdomain.  Send questions,
     32   comments, complaints, performance data, etc to dl (at) cs.oswego.edu
     33 
     34 * Version 2.8.3 Thu Sep 22 11:16:15 2005  Doug Lea  (dl at gee)
     35 
     36    Note: There may be an updated version of this malloc obtainable at
     37            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
     38          Check before installing!
     39 
     40 * Quickstart
     41 
     42   This library is all in one file to simplify the most common usage:
     43   ftp it, compile it (-O3), and link it into another program. All of
     44   the compile-time options default to reasonable values for use on
     45   most platforms.  You might later want to step through various
     46   compile-time and dynamic tuning options.
     47 
     48   For convenience, an include file for code using this malloc is at:
     49      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.3.h
     50   You don't really need this .h file unless you call functions not
     51   defined in your system include files.  The .h file contains only the
     52   excerpts from this file needed for using this malloc on ANSI C/C++
     53   systems, so long as you haven't changed compile-time options about
     54   naming and tuning parameters.  If you do, then you can create your
     55   own malloc.h that does include all settings by cutting at the point
     56   indicated below. Note that you may already by default be using a C
     57   library containing a malloc that is based on some version of this
     58   malloc (for example in linux). You might still want to use the one
     59   in this file to customize settings or to avoid overheads associated
     60   with library versions.
     61 
     62 * Vital statistics:
     63 
     64   Supported pointer/size_t representation:       4 or 8 bytes
     65        size_t MUST be an unsigned type of the same width as
     66        pointers. (If you are using an ancient system that declares
     67        size_t as a signed type, or need it to be a different width
     68        than pointers, you can use a previous release of this malloc
     69        (e.g. 2.7.2) supporting these.)
     70 
     71   Alignment:                                     8 bytes (default)
     72        This suffices for nearly all current machines and C compilers.
     73        However, you can define MALLOC_ALIGNMENT to be wider than this
     74        if necessary (up to 128bytes), at the expense of using more space.
     75 
     76   Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
     77                                           8 or 16 bytes (if 8byte sizes)
     78        Each malloced chunk has a hidden word of overhead holding size
     79        and status information, and additional cross-check word
     80        if FOOTERS is defined.
     81 
     82   Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
     83                           8-byte ptrs:  32 bytes    (including overhead)
     84 
     85        Even a request for zero bytes (i.e., malloc(0)) returns a
     86        pointer to something of the minimum allocatable size.
     87        The maximum overhead wastage (i.e., number of extra bytes
     88        allocated than were requested in malloc) is less than or equal
     89        to the minimum size, except for requests >= mmap_threshold that
     90        are serviced via mmap(), where the worst case wastage is about
     91        32 bytes plus the remainder from a system page (the minimal
     92        mmap unit); typically 4096 or 8192 bytes.
     93 
     94   Security: static-safe; optionally more or less
     95        The "security" of malloc refers to the ability of malicious
     96        code to accentuate the effects of errors (for example, freeing
     97        space that is not currently malloc'ed or overwriting past the
     98        ends of chunks) in code that calls malloc.  This malloc
     99        guarantees not to modify any memory locations below the base of
    100        heap, i.e., static variables, even in the presence of usage
    101        errors.  The routines additionally detect most improper frees
    102        and reallocs.  All this holds as long as the static bookkeeping
    103        for malloc itself is not corrupted by some other means.  This
    104        is only one aspect of security -- these checks do not, and
    105        cannot, detect all possible programming errors.
    106 
    107        If FOOTERS is defined nonzero, then each allocated chunk
    108        carries an additional check word to verify that it was malloced
    109        from its space.  These check words are the same within each
    110        execution of a program using malloc, but differ across
    111        executions, so externally crafted fake chunks cannot be
    112        freed. This improves security by rejecting frees/reallocs that
    113        could corrupt heap memory, in addition to the checks preventing
    114        writes to statics that are always on.  This may further improve
    115        security at the expense of time and space overhead.  (Note that
    116        FOOTERS may also be worth using with MSPACES.)
    117 
    118        By default detected errors cause the program to abort (calling
    119        "abort()"). You can override this to instead proceed past
    120        errors by defining PROCEED_ON_ERROR.  In this case, a bad free
    121        has no effect, and a malloc that encounters a bad address
    122        caused by user overwrites will ignore the bad address by
    123        dropping pointers and indices to all known memory. This may
    124        be appropriate for programs that should continue if at all
    125        possible in the face of programming errors, although they may
    126        run out of memory because dropped memory is never reclaimed.
    127 
    128        If you don't like either of these options, you can define
    129        CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
    130        else. And if if you are sure that your program using malloc has
    131        no errors or vulnerabilities, you can define INSECURE to 1,
    132        which might (or might not) provide a small performance improvement.
    133 
    134   Thread-safety: NOT thread-safe unless USE_LOCKS defined
    135        When USE_LOCKS is defined, each public call to malloc, free,
    136        etc is surrounded with either a pthread mutex or a win32
    137        spinlock (depending on WIN32). This is not especially fast, and
    138        can be a major bottleneck.  It is designed only to provide
    139        minimal protection in concurrent environments, and to provide a
    140        basis for extensions.  If you are using malloc in a concurrent
    141        program, consider instead using ptmalloc, which is derived from
    142        a version of this malloc. (See http://www.malloc.de).
    143 
    144   System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
    145        This malloc can use unix sbrk or any emulation (invoked using
    146        the CALL_MORECORE macro) and/or mmap/munmap or any emulation
    147        (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
    148        memory.  On most unix systems, it tends to work best if both
    149        MORECORE and MMAP are enabled.  On Win32, it uses emulations
    150        based on VirtualAlloc. It also uses common C library functions
    151        like memset.
    152 
    153   Compliance: I believe it is compliant with the Single Unix Specification
    154        (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
    155        others as well.
    156 
    157 * Overview of algorithms
    158 
    159   This is not the fastest, most space-conserving, most portable, or
    160   most tunable malloc ever written. However it is among the fastest
    161   while also being among the most space-conserving, portable and
    162   tunable.  Consistent balance across these factors results in a good
    163   general-purpose allocator for malloc-intensive programs.
    164 
    165   In most ways, this malloc is a best-fit allocator. Generally, it
    166   chooses the best-fitting existing chunk for a request, with ties
    167   broken in approximately least-recently-used order. (This strategy
    168   normally maintains low fragmentation.) However, for requests less
    169   than 256bytes, it deviates from best-fit when there is not an
    170   exactly fitting available chunk by preferring to use space adjacent
    171   to that used for the previous small request, as well as by breaking
    172   ties in approximately most-recently-used order. (These enhance
    173   locality of series of small allocations.)  And for very large requests
    174   (>= 256Kb by default), it relies on system memory mapping
    175   facilities, if supported.  (This helps avoid carrying around and
    176   possibly fragmenting memory used only for large chunks.)
    177 
    178   All operations (except malloc_stats and mallinfo) have execution
    179   times that are bounded by a constant factor of the number of bits in
    180   a size_t, not counting any clearing in calloc or copying in realloc,
    181   or actions surrounding MORECORE and MMAP that have times
    182   proportional to the number of non-contiguous regions returned by
    183   system allocation routines, which is often just 1.
    184 
    185   The implementation is not very modular and seriously overuses
    186   macros. Perhaps someday all C compilers will do as good a job
    187   inlining modular code as can now be done by brute-force expansion,
    188   but now, enough of them seem not to.
    189 
    190   Some compilers issue a lot of warnings about code that is
    191   dead/unreachable only on some platforms, and also about intentional
    192   uses of negation on unsigned types. All known cases of each can be
    193   ignored.
    194 
    195   For a longer but out of date high-level description, see
    196      http://gee.cs.oswego.edu/dl/html/malloc.html
    197 
    198 * MSPACES
    199   If MSPACES is defined, then in addition to malloc, free, etc.,
    200   this file also defines mspace_malloc, mspace_free, etc. These
    201   are versions of malloc routines that take an "mspace" argument
    202   obtained using create_mspace, to control all internal bookkeeping.
    203   If ONLY_MSPACES is defined, only these versions are compiled.
    204   So if you would like to use this allocator for only some allocations,
    205   and your system malloc for others, you can compile with
    206   ONLY_MSPACES and then do something like...
    207     static mspace mymspace = create_mspace(0,0); // for example
    208     #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
    209 
    210   (Note: If you only need one instance of an mspace, you can instead
    211   use "USE_DL_PREFIX" to relabel the global malloc.)
    212 
    213   You can similarly create thread-local allocators by storing
    214   mspaces as thread-locals. For example:
    215     static __thread mspace tlms = 0;
    216     void*  tlmalloc(size_t bytes) {
    217       if (tlms == 0) tlms = create_mspace(0, 0);
    218       return mspace_malloc(tlms, bytes);
    219     }
    220     void  tlfree(void* mem) { mspace_free(tlms, mem); }
    221 
    222   Unless FOOTERS is defined, each mspace is completely independent.
    223   You cannot allocate from one and free to another (although
    224   conformance is only weakly checked, so usage errors are not always
    225   caught). If FOOTERS is defined, then each chunk carries around a tag
    226   indicating its originating mspace, and frees are directed to their
    227   originating spaces.
    228 
    229  -------------------------  Compile-time options ---------------------------
    230 
    231 Be careful in setting #define values for numerical constants of type
    232 size_t. On some systems, literal values are not automatically extended
    233 to size_t precision unless they are explicitly casted.
    234 
    235 WIN32                    default: defined if _WIN32 defined
    236   Defining WIN32 sets up defaults for MS environment and compilers.
    237   Otherwise defaults are for unix.
    238 
    239 MALLOC_ALIGNMENT         default: (size_t)8
    240   Controls the minimum alignment for malloc'ed chunks.  It must be a
    241   power of two and at least 8, even on machines for which smaller
    242   alignments would suffice. It may be defined as larger than this
    243   though. Note however that code and data structures are optimized for
    244   the case of 8-byte alignment.
    245 
    246 MSPACES                  default: 0 (false)
    247   If true, compile in support for independent allocation spaces.
    248   This is only supported if HAVE_MMAP is true.
    249 
    250 ONLY_MSPACES             default: 0 (false)
    251   If true, only compile in mspace versions, not regular versions.
    252 
    253 USE_LOCKS                default: 0 (false)
    254   Causes each call to each public routine to be surrounded with
    255   pthread or WIN32 mutex lock/unlock. (If set true, this can be
    256   overridden on a per-mspace basis for mspace versions.)
    257 
    258 FOOTERS                  default: 0
    259   If true, provide extra checking and dispatching by placing
    260   information in the footers of allocated chunks. This adds
    261   space and time overhead.
    262 
    263 INSECURE                 default: 0
    264   If true, omit checks for usage errors and heap space overwrites.
    265 
    266 USE_DL_PREFIX            default: NOT defined
    267   Causes compiler to prefix all public routines with the string 'dl'.
    268   This can be useful when you only want to use this malloc in one part
    269   of a program, using your regular system malloc elsewhere.
    270 
    271 ABORT                    default: defined as abort()
    272   Defines how to abort on failed checks.  On most systems, a failed
    273   check cannot die with an "assert" or even print an informative
    274   message, because the underlying print routines in turn call malloc,
    275   which will fail again.  Generally, the best policy is to simply call
    276   abort(). It's not very useful to do more than this because many
    277   errors due to overwriting will show up as address faults (null, odd
    278   addresses etc) rather than malloc-triggered checks, so will also
    279   abort.  Also, most compilers know that abort() does not return, so
    280   can better optimize code conditionally calling it.
    281 
    282 PROCEED_ON_ERROR           default: defined as 0 (false)
    283   Controls whether detected bad addresses cause them to bypassed
    284   rather than aborting. If set, detected bad arguments to free and
    285   realloc are ignored. And all bookkeeping information is zeroed out
    286   upon a detected overwrite of freed heap space, thus losing the
    287   ability to ever return it from malloc again, but enabling the
    288   application to proceed. If PROCEED_ON_ERROR is defined, the
    289   static variable malloc_corruption_error_count is compiled in
    290   and can be examined to see if errors have occurred. This option
    291   generates slower code than the default abort policy.
    292 
    293 DEBUG                    default: NOT defined
    294   The DEBUG setting is mainly intended for people trying to modify
    295   this code or diagnose problems when porting to new platforms.
    296   However, it may also be able to better isolate user errors than just
    297   using runtime checks.  The assertions in the check routines spell
    298   out in more detail the assumptions and invariants underlying the
    299   algorithms.  The checking is fairly extensive, and will slow down
    300   execution noticeably. Calling malloc_stats or mallinfo with DEBUG
    301   set will attempt to check every non-mmapped allocated and free chunk
    302   in the course of computing the summaries.
    303 
    304 ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
    305   Debugging assertion failures can be nearly impossible if your
    306   version of the assert macro causes malloc to be called, which will
    307   lead to a cascade of further failures, blowing the runtime stack.
    308   ABORT_ON_ASSERT_FAILURE cause assertions failures to call abort(),
    309   which will usually make debugging easier.
    310 
    311 MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
    312   The action to take before "return 0" when malloc fails to be able to
    313   return memory because there is none available.
    314 
    315 HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
    316   True if this system supports sbrk or an emulation of it.
    317 
    318 MORECORE                  default: sbrk
    319   The name of the sbrk-style system routine to call to obtain more
    320   memory.  See below for guidance on writing custom MORECORE
    321   functions. The type of the argument to sbrk/MORECORE varies across
    322   systems.  It cannot be size_t, because it supports negative
    323   arguments, so it is normally the signed type of the same width as
    324   size_t (sometimes declared as "intptr_t").  It doesn't much matter
    325   though. Internally, we only call it with arguments less than half
    326   the max value of a size_t, which should work across all reasonable
    327   possibilities, although sometimes generating compiler warnings.  See
    328   near the end of this file for guidelines for creating a custom
    329   version of MORECORE.
    330 
    331 MORECORE_CONTIGUOUS       default: 1 (true)
    332   If true, take advantage of fact that consecutive calls to MORECORE
    333   with positive arguments always return contiguous increasing
    334   addresses.  This is true of unix sbrk. It does not hurt too much to
    335   set it true anyway, since malloc copes with non-contiguities.
    336   Setting it false when definitely non-contiguous saves time
    337   and possibly wasted space it would take to discover this though.
    338 
    339 MORECORE_CANNOT_TRIM      default: NOT defined
    340   True if MORECORE cannot release space back to the system when given
    341   negative arguments. This is generally necessary only if you are
    342   using a hand-crafted MORECORE function that cannot handle negative
    343   arguments.
    344 
    345 HAVE_MMAP                 default: 1 (true)
    346   True if this system supports mmap or an emulation of it.  If so, and
    347   HAVE_MORECORE is not true, MMAP is used for all system
    348   allocation. If set and HAVE_MORECORE is true as well, MMAP is
    349   primarily used to directly allocate very large blocks. It is also
    350   used as a backup strategy in cases where MORECORE fails to provide
    351   space from system. Note: A single call to MUNMAP is assumed to be
    352   able to unmap memory that may have be allocated using multiple calls
    353   to MMAP, so long as they are adjacent.
    354 
    355 HAVE_MREMAP               default: 1 on linux, else 0
    356   If true realloc() uses mremap() to re-allocate large blocks and
    357   extend or shrink allocation spaces.
    358 
    359 MMAP_CLEARS               default: 1 on unix
    360   True if mmap clears memory so calloc doesn't need to. This is true
    361   for standard unix mmap using /dev/zero.
    362 
    363 USE_BUILTIN_FFS            default: 0 (i.e., not used)
    364   Causes malloc to use the builtin ffs() function to compute indices.
    365   Some compilers may recognize and intrinsify ffs to be faster than the
    366   supplied C version. Also, the case of x86 using gcc is special-cased
    367   to an asm instruction, so is already as fast as it can be, and so
    368   this setting has no effect. (On most x86s, the asm version is only
    369   slightly faster than the C version.)
    370 
    371 malloc_getpagesize         default: derive from system includes, or 4096.
    372   The system page size. To the extent possible, this malloc manages
    373   memory from the system in page-size units.  This may be (and
    374   usually is) a function rather than a constant. This is ignored
    375   if WIN32, where page size is determined using getSystemInfo during
    376   initialization.
    377 
    378 USE_DEV_RANDOM             default: 0 (i.e., not used)
    379   Causes malloc to use /dev/random to initialize secure magic seed for
    380   stamping footers. Otherwise, the current time is used.
    381 
    382 NO_MALLINFO                default: 0
    383   If defined, don't compile "mallinfo". This can be a simple way
    384   of dealing with mismatches between system declarations and
    385   those in this file.
    386 
    387 MALLINFO_FIELD_TYPE        default: size_t
    388   The type of the fields in the mallinfo struct. This was originally
    389   defined as "int" in SVID etc, but is more usefully defined as
    390   size_t. The value is used only if  HAVE_USR_INCLUDE_MALLOC_H is not set
    391 
    392 REALLOC_ZERO_BYTES_FREES    default: not defined
    393   This should be set if a call to realloc with zero bytes should
    394   be the same as a call to free. Some people think it should. Otherwise,
    395   since this malloc returns a unique pointer for malloc(0), so does
    396   realloc(p, 0).
    397 
    398 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
    399 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
    400 LACKS_STDLIB_H                default: NOT defined unless on WIN32
    401   Define these if your system does not have these header files.
    402   You might need to manually insert some of the declarations they provide.
    403 
    404 DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
    405                                 system_info.dwAllocationGranularity in WIN32,
    406                                 otherwise 64K.
    407       Also settable using mallopt(M_GRANULARITY, x)
    408   The unit for allocating and deallocating memory from the system.  On
    409   most systems with contiguous MORECORE, there is no reason to
    410   make this more than a page. However, systems with MMAP tend to
    411   either require or encourage larger granularities.  You can increase
    412   this value to prevent system allocation functions to be called so
    413   often, especially if they are slow.  The value must be at least one
    414   page and must be a power of two.  Setting to 0 causes initialization
    415   to either page size or win32 region size.  (Note: In previous
    416   versions of malloc, the equivalent of this option was called
    417   "TOP_PAD")
    418 
    419 DEFAULT_TRIM_THRESHOLD    default: 2MB
    420       Also settable using mallopt(M_TRIM_THRESHOLD, x)
    421   The maximum amount of unused top-most memory to keep before
    422   releasing via malloc_trim in free().  Automatic trimming is mainly
    423   useful in long-lived programs using contiguous MORECORE.  Because
    424   trimming via sbrk can be slow on some systems, and can sometimes be
    425   wasteful (in cases where programs immediately afterward allocate
    426   more large chunks) the value should be high enough so that your
    427   overall system performance would improve by releasing this much
    428   memory.  As a rough guide, you might set to a value close to the
    429   average size of a process (program) running on your system.
    430   Releasing this much memory would allow such a process to run in
    431   memory.  Generally, it is worth tuning trim thresholds when a
    432   program undergoes phases where several large chunks are allocated
    433   and released in ways that can reuse each other's storage, perhaps
    434   mixed with phases where there are no such chunks at all. The trim
    435   value must be greater than page size to have any useful effect.  To
    436   disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
    437   some people use of mallocing a huge space and then freeing it at
    438   program startup, in an attempt to reserve system memory, doesn't
    439   have the intended effect under automatic trimming, since that memory
    440   will immediately be returned to the system.
    441 
    442 DEFAULT_MMAP_THRESHOLD       default: 256K
    443       Also settable using mallopt(M_MMAP_THRESHOLD, x)
    444   The request size threshold for using MMAP to directly service a
    445   request. Requests of at least this size that cannot be allocated
    446   using already-existing space will be serviced via mmap.  (If enough
    447   normal freed space already exists it is used instead.)  Using mmap
    448   segregates relatively large chunks of memory so that they can be
    449   individually obtained and released from the host system. A request
    450   serviced through mmap is never reused by any other request (at least
    451   not directly; the system may just so happen to remap successive
    452   requests to the same locations).  Segregating space in this way has
    453   the benefits that: Mmapped space can always be individually released
    454   back to the system, which helps keep the system level memory demands
    455   of a long-lived program low.  Also, mapped memory doesn't become
    456   `locked' between other chunks, as can happen with normally allocated
    457   chunks, which means that even trimming via malloc_trim would not
    458   release them.  However, it has the disadvantage that the space
    459   cannot be reclaimed, consolidated, and then used to service later
    460   requests, as happens with normal chunks.  The advantages of mmap
    461   nearly always outweigh disadvantages for "large" chunks, but the
    462   value of "large" may vary across systems.  The default is an
    463   empirically derived value that works well in most systems. You can
    464   disable mmap by setting to MAX_SIZE_T.
    465 
    466 */
    467 
    468 #ifndef WIN32
    469 #ifdef _WIN32
    470 #define WIN32 1
    471 #endif  /* _WIN32 */
    472 #endif  /* WIN32 */
    473 #ifdef WIN32
    474 #define WIN32_LEAN_AND_MEAN
    475 #include <windows.h>
    476 #define HAVE_MMAP 1
    477 #define HAVE_MORECORE 0
    478 #define LACKS_UNISTD_H
    479 #define LACKS_SYS_PARAM_H
    480 #define LACKS_SYS_MMAN_H
    481 #define LACKS_STRING_H
    482 #define LACKS_STRINGS_H
    483 #define LACKS_SYS_TYPES_H
    484 #define LACKS_ERRNO_H
    485 #define MALLOC_FAILURE_ACTION
    486 #define MMAP_CLEARS 0 /* WINCE and some others apparently don't clear */
    487 #endif  /* WIN32 */
    488 
    489 #if defined(DARWIN) || defined(_DARWIN)
    490 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
    491 #ifndef HAVE_MORECORE
    492 #define HAVE_MORECORE 0
    493 #define HAVE_MMAP 1
    494 #endif  /* HAVE_MORECORE */
    495 #endif  /* DARWIN */
    496 
    497 #ifndef LACKS_SYS_TYPES_H
    498 #include <sys/types.h>  /* For size_t */
    499 #endif  /* LACKS_SYS_TYPES_H */
    500 
    501 /* The maximum possible size_t value has all bits set */
    502 #define MAX_SIZE_T           (~(size_t)0)
    503 
    504 #ifndef ONLY_MSPACES
    505 #define ONLY_MSPACES 0
    506 #endif  /* ONLY_MSPACES */
    507 #ifndef MSPACES
    508 #if ONLY_MSPACES
    509 #define MSPACES 1
    510 #else   /* ONLY_MSPACES */
    511 #define MSPACES 0
    512 #endif  /* ONLY_MSPACES */
    513 #endif  /* MSPACES */
    514 #ifndef MALLOC_ALIGNMENT
    515 #define MALLOC_ALIGNMENT ((size_t)8U)
    516 #endif  /* MALLOC_ALIGNMENT */
    517 #ifndef FOOTERS
    518 #define FOOTERS 0
    519 #endif  /* FOOTERS */
    520 #ifndef USE_MAX_ALLOWED_FOOTPRINT
    521 #define USE_MAX_ALLOWED_FOOTPRINT 0
    522 #endif
    523 #ifndef ABORT
    524 #define ABORT  abort()
    525 #endif  /* ABORT */
    526 #ifndef ABORT_ON_ASSERT_FAILURE
    527 #define ABORT_ON_ASSERT_FAILURE 1
    528 #endif  /* ABORT_ON_ASSERT_FAILURE */
    529 #ifndef PROCEED_ON_ERROR
    530 #define PROCEED_ON_ERROR 0
    531 #endif  /* PROCEED_ON_ERROR */
    532 #ifndef USE_LOCKS
    533 #define USE_LOCKS 0
    534 #endif  /* USE_LOCKS */
    535 #ifndef INSECURE
    536 #define INSECURE 0
    537 #endif  /* INSECURE */
    538 #ifndef HAVE_MMAP
    539 #define HAVE_MMAP 1
    540 #endif  /* HAVE_MMAP */
    541 #ifndef MMAP_CLEARS
    542 #define MMAP_CLEARS 1
    543 #endif  /* MMAP_CLEARS */
    544 #ifndef HAVE_MREMAP
    545 #ifdef linux
    546 #define HAVE_MREMAP 1
    547 #else   /* linux */
    548 #define HAVE_MREMAP 0
    549 #endif  /* linux */
    550 #endif  /* HAVE_MREMAP */
    551 #ifndef MALLOC_FAILURE_ACTION
    552 #define MALLOC_FAILURE_ACTION  errno = ENOMEM;
    553 #endif  /* MALLOC_FAILURE_ACTION */
    554 #ifndef HAVE_MORECORE
    555 #if ONLY_MSPACES
    556 #define HAVE_MORECORE 0
    557 #else   /* ONLY_MSPACES */
    558 #define HAVE_MORECORE 1
    559 #endif  /* ONLY_MSPACES */
    560 #endif  /* HAVE_MORECORE */
    561 #if !HAVE_MORECORE
    562 #define MORECORE_CONTIGUOUS 0
    563 #else   /* !HAVE_MORECORE */
    564 #ifndef MORECORE
    565 #define MORECORE sbrk
    566 #endif  /* MORECORE */
    567 #ifndef MORECORE_CONTIGUOUS
    568 #define MORECORE_CONTIGUOUS 1
    569 #endif  /* MORECORE_CONTIGUOUS */
    570 #endif  /* HAVE_MORECORE */
    571 #ifndef DEFAULT_GRANULARITY
    572 #if MORECORE_CONTIGUOUS
    573 #define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
    574 #else   /* MORECORE_CONTIGUOUS */
    575 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
    576 #endif  /* MORECORE_CONTIGUOUS */
    577 #endif  /* DEFAULT_GRANULARITY */
    578 #ifndef DEFAULT_TRIM_THRESHOLD
    579 #ifndef MORECORE_CANNOT_TRIM
    580 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
    581 #else   /* MORECORE_CANNOT_TRIM */
    582 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
    583 #endif  /* MORECORE_CANNOT_TRIM */
    584 #endif  /* DEFAULT_TRIM_THRESHOLD */
    585 #ifndef DEFAULT_MMAP_THRESHOLD
    586 #if HAVE_MMAP
    587 #define DEFAULT_MMAP_THRESHOLD ((size_t)64U * (size_t)1024U)
    588 #else   /* HAVE_MMAP */
    589 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
    590 #endif  /* HAVE_MMAP */
    591 #endif  /* DEFAULT_MMAP_THRESHOLD */
    592 #ifndef USE_BUILTIN_FFS
    593 #define USE_BUILTIN_FFS 0
    594 #endif  /* USE_BUILTIN_FFS */
    595 #ifndef USE_DEV_RANDOM
    596 #define USE_DEV_RANDOM 0
    597 #endif  /* USE_DEV_RANDOM */
    598 #ifndef NO_MALLINFO
    599 #define NO_MALLINFO 0
    600 #endif  /* NO_MALLINFO */
    601 #ifndef MALLINFO_FIELD_TYPE
    602 #define MALLINFO_FIELD_TYPE size_t
    603 #endif  /* MALLINFO_FIELD_TYPE */
    604 
    605 /*
    606   mallopt tuning options.  SVID/XPG defines four standard parameter
    607   numbers for mallopt, normally defined in malloc.h.  None of these
    608   are used in this malloc, so setting them has no effect. But this
    609   malloc does support the following options.
    610 */
    611 
    612 #define M_TRIM_THRESHOLD     (-1)
    613 #define M_GRANULARITY        (-2)
    614 #define M_MMAP_THRESHOLD     (-3)
    615 
    616 /* ------------------------ Mallinfo declarations ------------------------ */
    617 
    618 #if !NO_MALLINFO
    619 /*
    620   This version of malloc supports the standard SVID/XPG mallinfo
    621   routine that returns a struct containing usage properties and
    622   statistics. It should work on any system that has a
    623   /usr/include/malloc.h defining struct mallinfo.  The main
    624   declaration needed is the mallinfo struct that is returned (by-copy)
    625   by mallinfo().  The malloinfo struct contains a bunch of fields that
    626   are not even meaningful in this version of malloc.  These fields are
    627   are instead filled by mallinfo() with other numbers that might be of
    628   interest.
    629 
    630   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
    631   /usr/include/malloc.h file that includes a declaration of struct
    632   mallinfo.  If so, it is included; else a compliant version is
    633   declared below.  These must be precisely the same for mallinfo() to
    634   work.  The original SVID version of this struct, defined on most
    635   systems with mallinfo, declares all fields as ints. But some others
    636   define as unsigned long. If your system defines the fields using a
    637   type of different width than listed here, you MUST #include your
    638   system version and #define HAVE_USR_INCLUDE_MALLOC_H.
    639 */
    640 
    641 /* #define HAVE_USR_INCLUDE_MALLOC_H */
    642 
    643 #if !ANDROID
    644 #ifdef HAVE_USR_INCLUDE_MALLOC_H
    645 #include "/usr/include/malloc.h"
    646 #else /* HAVE_USR_INCLUDE_MALLOC_H */
    647 
    648 struct mallinfo {
    649   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
    650   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
    651   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
    652   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
    653   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
    654   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
    655   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
    656   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
    657   MALLINFO_FIELD_TYPE fordblks; /* total free space */
    658   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
    659 };
    660 
    661 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
    662 #endif /* NO_MALLINFO */
    663 #endif /* ANDROID */
    664 
    665 #ifdef __cplusplus
    666 extern "C" {
    667 #endif /* __cplusplus */
    668 
    669 #if !ONLY_MSPACES
    670 
    671 /* ------------------- Declarations of public routines ------------------- */
    672 
    673 /* Check an additional macro for the five primary functions */
    674 #ifndef USE_DL_PREFIX
    675 #define dlcalloc               calloc
    676 #define dlfree                 free
    677 #define dlmalloc               malloc
    678 #define dlmemalign             memalign
    679 #define dlrealloc              realloc
    680 #endif
    681 
    682 #ifndef USE_DL_PREFIX
    683 #define dlvalloc               valloc
    684 #define dlpvalloc              pvalloc
    685 #define dlmallinfo             mallinfo
    686 #define dlmallopt              mallopt
    687 #define dlmalloc_trim          malloc_trim
    688 #define dlmalloc_walk_free_pages \
    689                                malloc_walk_free_pages
    690 #define dlmalloc_walk_heap \
    691                                malloc_walk_heap
    692 #define dlmalloc_stats         malloc_stats
    693 #define dlmalloc_usable_size   malloc_usable_size
    694 #define dlmalloc_footprint     malloc_footprint
    695 #define dlmalloc_max_allowed_footprint \
    696                                malloc_max_allowed_footprint
    697 #define dlmalloc_set_max_allowed_footprint \
    698                                malloc_set_max_allowed_footprint
    699 #define dlmalloc_max_footprint malloc_max_footprint
    700 #define dlindependent_calloc   independent_calloc
    701 #define dlindependent_comalloc independent_comalloc
    702 #endif /* USE_DL_PREFIX */
    703 
    704 
    705 /*
    706   malloc(size_t n)
    707   Returns a pointer to a newly allocated chunk of at least n bytes, or
    708   null if no space is available, in which case errno is set to ENOMEM
    709   on ANSI C systems.
    710 
    711   If n is zero, malloc returns a minimum-sized chunk. (The minimum
    712   size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
    713   systems.)  Note that size_t is an unsigned type, so calls with
    714   arguments that would be negative if signed are interpreted as
    715   requests for huge amounts of space, which will often fail. The
    716   maximum supported value of n differs across systems, but is in all
    717   cases less than the maximum representable value of a size_t.
    718 */
    719 void* dlmalloc(size_t);
    720 
    721 /*
    722   free(void* p)
    723   Releases the chunk of memory pointed to by p, that had been previously
    724   allocated using malloc or a related routine such as realloc.
    725   It has no effect if p is null. If p was not malloced or already
    726   freed, free(p) will by default cause the current program to abort.
    727 */
    728 void  dlfree(void*);
    729 
    730 /*
    731   calloc(size_t n_elements, size_t element_size);
    732   Returns a pointer to n_elements * element_size bytes, with all locations
    733   set to zero.
    734 */
    735 void* dlcalloc(size_t, size_t);
    736 
    737 /*
    738   realloc(void* p, size_t n)
    739   Returns a pointer to a chunk of size n that contains the same data
    740   as does chunk p up to the minimum of (n, p's size) bytes, or null
    741   if no space is available.
    742 
    743   The returned pointer may or may not be the same as p. The algorithm
    744   prefers extending p in most cases when possible, otherwise it
    745   employs the equivalent of a malloc-copy-free sequence.
    746 
    747   If p is null, realloc is equivalent to malloc.
    748 
    749   If space is not available, realloc returns null, errno is set (if on
    750   ANSI) and p is NOT freed.
    751 
    752   if n is for fewer bytes than already held by p, the newly unused
    753   space is lopped off and freed if possible.  realloc with a size
    754   argument of zero (re)allocates a minimum-sized chunk.
    755 
    756   The old unix realloc convention of allowing the last-free'd chunk
    757   to be used as an argument to realloc is not supported.
    758 */
    759 
    760 void* dlrealloc(void*, size_t);
    761 
    762 /*
    763   memalign(size_t alignment, size_t n);
    764   Returns a pointer to a newly allocated chunk of n bytes, aligned
    765   in accord with the alignment argument.
    766 
    767   The alignment argument should be a power of two. If the argument is
    768   not a power of two, the nearest greater power is used.
    769   8-byte alignment is guaranteed by normal malloc calls, so don't
    770   bother calling memalign with an argument of 8 or less.
    771 
    772   Overreliance on memalign is a sure way to fragment space.
    773 */
    774 void* dlmemalign(size_t, size_t);
    775 
    776 /*
    777   valloc(size_t n);
    778   Equivalent to memalign(pagesize, n), where pagesize is the page
    779   size of the system. If the pagesize is unknown, 4096 is used.
    780 */
    781 void* dlvalloc(size_t);
    782 
    783 /*
    784   mallopt(int parameter_number, int parameter_value)
    785   Sets tunable parameters The format is to provide a
    786   (parameter-number, parameter-value) pair.  mallopt then sets the
    787   corresponding parameter to the argument value if it can (i.e., so
    788   long as the value is meaningful), and returns 1 if successful else
    789   0.  SVID/XPG/ANSI defines four standard param numbers for mallopt,
    790   normally defined in malloc.h.  None of these are use in this malloc,
    791   so setting them has no effect. But this malloc also supports other
    792   options in mallopt. See below for details.  Briefly, supported
    793   parameters are as follows (listed defaults are for "typical"
    794   configurations).
    795 
    796   Symbol            param #  default    allowed param values
    797   M_TRIM_THRESHOLD     -1   2*1024*1024   any   (MAX_SIZE_T disables)
    798   M_GRANULARITY        -2     page size   any power of 2 >= page size
    799   M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
    800 */
    801 int dlmallopt(int, int);
    802 
    803 /*
    804   malloc_footprint();
    805   Returns the number of bytes obtained from the system.  The total
    806   number of bytes allocated by malloc, realloc etc., is less than this
    807   value. Unlike mallinfo, this function returns only a precomputed
    808   result, so can be called frequently to monitor memory consumption.
    809   Even if locks are otherwise defined, this function does not use them,
    810   so results might not be up to date.
    811 */
    812 size_t dlmalloc_footprint(void);
    813 
    814 #if USE_MAX_ALLOWED_FOOTPRINT
    815 /*
    816   malloc_max_allowed_footprint();
    817   Returns the number of bytes that the heap is allowed to obtain
    818   from the system.  malloc_footprint() should always return a
    819   size less than or equal to max_allowed_footprint, unless the
    820   max_allowed_footprint was set to a value smaller than the
    821   footprint at the time.
    822 */
    823 size_t dlmalloc_max_allowed_footprint();
    824 
    825 /*
    826   malloc_set_max_allowed_footprint();
    827   Set the maximum number of bytes that the heap is allowed to
    828   obtain from the system.  The size will be rounded up to a whole
    829   page, and the rounded number will be returned from future calls
    830   to malloc_max_allowed_footprint().  If the new max_allowed_footprint
    831   is larger than the current footprint, the heap will never grow
    832   larger than max_allowed_footprint.  If the new max_allowed_footprint
    833   is smaller than the current footprint, the heap will not grow
    834   further.
    835 
    836   TODO: try to force the heap to give up memory in the shrink case,
    837         and update this comment once that happens.
    838 */
    839 void dlmalloc_set_max_allowed_footprint(size_t bytes);
    840 #endif /* USE_MAX_ALLOWED_FOOTPRINT */
    841 
    842 /*
    843   malloc_max_footprint();
    844   Returns the maximum number of bytes obtained from the system. This
    845   value will be greater than current footprint if deallocated space
    846   has been reclaimed by the system. The peak number of bytes allocated
    847   by malloc, realloc etc., is less than this value. Unlike mallinfo,
    848   this function returns only a precomputed result, so can be called
    849   frequently to monitor memory consumption.  Even if locks are
    850   otherwise defined, this function does not use them, so results might
    851   not be up to date.
    852 */
    853 size_t dlmalloc_max_footprint(void);
    854 
    855 #if !NO_MALLINFO
    856 /*
    857   mallinfo()
    858   Returns (by copy) a struct containing various summary statistics:
    859 
    860   arena:     current total non-mmapped bytes allocated from system
    861   ordblks:   the number of free chunks
    862   smblks:    always zero.
    863   hblks:     current number of mmapped regions
    864   hblkhd:    total bytes held in mmapped regions
    865   usmblks:   the maximum total allocated space. This will be greater
    866                 than current total if trimming has occurred.
    867   fsmblks:   always zero
    868   uordblks:  current total allocated space (normal or mmapped)
    869   fordblks:  total free space
    870   keepcost:  the maximum number of bytes that could ideally be released
    871                back to system via malloc_trim. ("ideally" means that
    872                it ignores page restrictions etc.)
    873 
    874   Because these fields are ints, but internal bookkeeping may
    875   be kept as longs, the reported values may wrap around zero and
    876   thus be inaccurate.
    877 */
    878 struct mallinfo dlmallinfo(void);
    879 #endif /* NO_MALLINFO */
    880 
    881 /*
    882   independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
    883 
    884   independent_calloc is similar to calloc, but instead of returning a
    885   single cleared space, it returns an array of pointers to n_elements
    886   independent elements that can hold contents of size elem_size, each
    887   of which starts out cleared, and can be independently freed,
    888   realloc'ed etc. The elements are guaranteed to be adjacently
    889   allocated (this is not guaranteed to occur with multiple callocs or
    890   mallocs), which may also improve cache locality in some
    891   applications.
    892 
    893   The "chunks" argument is optional (i.e., may be null, which is
    894   probably the most typical usage). If it is null, the returned array
    895   is itself dynamically allocated and should also be freed when it is
    896   no longer needed. Otherwise, the chunks array must be of at least
    897   n_elements in length. It is filled in with the pointers to the
    898   chunks.
    899 
    900   In either case, independent_calloc returns this pointer array, or
    901   null if the allocation failed.  If n_elements is zero and "chunks"
    902   is null, it returns a chunk representing an array with zero elements
    903   (which should be freed if not wanted).
    904 
    905   Each element must be individually freed when it is no longer
    906   needed. If you'd like to instead be able to free all at once, you
    907   should instead use regular calloc and assign pointers into this
    908   space to represent elements.  (In this case though, you cannot
    909   independently free elements.)
    910 
    911   independent_calloc simplifies and speeds up implementations of many
    912   kinds of pools.  It may also be useful when constructing large data
    913   structures that initially have a fixed number of fixed-sized nodes,
    914   but the number is not known at compile time, and some of the nodes
    915   may later need to be freed. For example:
    916 
    917   struct Node { int item; struct Node* next; };
    918 
    919   struct Node* build_list() {
    920     struct Node** pool;
    921     int n = read_number_of_nodes_needed();
    922     if (n <= 0) return 0;
    923     pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0);
    924     if (pool == 0) die();
    925     // organize into a linked list...
    926     struct Node* first = pool[0];
    927     for (i = 0; i < n-1; ++i)
    928       pool[i]->next = pool[i+1];
    929     free(pool);     // Can now free the array (or not, if it is needed later)
    930     return first;
    931   }
    932 */
    933 void** dlindependent_calloc(size_t, size_t, void**);
    934 
    935 /*
    936   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
    937 
    938   independent_comalloc allocates, all at once, a set of n_elements
    939   chunks with sizes indicated in the "sizes" array.    It returns
    940   an array of pointers to these elements, each of which can be
    941   independently freed, realloc'ed etc. The elements are guaranteed to
    942   be adjacently allocated (this is not guaranteed to occur with
    943   multiple callocs or mallocs), which may also improve cache locality
    944   in some applications.
    945 
    946   The "chunks" argument is optional (i.e., may be null). If it is null
    947   the returned array is itself dynamically allocated and should also
    948   be freed when it is no longer needed. Otherwise, the chunks array
    949   must be of at least n_elements in length. It is filled in with the
    950   pointers to the chunks.
    951 
    952   In either case, independent_comalloc returns this pointer array, or
    953   null if the allocation failed.  If n_elements is zero and chunks is
    954   null, it returns a chunk representing an array with zero elements
    955   (which should be freed if not wanted).
    956 
    957   Each element must be individually freed when it is no longer
    958   needed. If you'd like to instead be able to free all at once, you
    959   should instead use a single regular malloc, and assign pointers at
    960   particular offsets in the aggregate space. (In this case though, you
    961   cannot independently free elements.)
    962 
    963   independent_comallac differs from independent_calloc in that each
    964   element may have a different size, and also that it does not
    965   automatically clear elements.
    966 
    967   independent_comalloc can be used to speed up allocation in cases
    968   where several structs or objects must always be allocated at the
    969   same time.  For example:
    970 
    971   struct Head { ... }
    972   struct Foot { ... }
    973 
    974   void send_message(char* msg) {
    975     int msglen = strlen(msg);
    976     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
    977     void* chunks[3];
    978     if (independent_comalloc(3, sizes, chunks) == 0)
    979       die();
    980     struct Head* head = (struct Head*)(chunks[0]);
    981     char*        body = (char*)(chunks[1]);
    982     struct Foot* foot = (struct Foot*)(chunks[2]);
    983     // ...
    984   }
    985 
    986   In general though, independent_comalloc is worth using only for
    987   larger values of n_elements. For small values, you probably won't
    988   detect enough difference from series of malloc calls to bother.
    989 
    990   Overuse of independent_comalloc can increase overall memory usage,
    991   since it cannot reuse existing noncontiguous small chunks that
    992   might be available for some of the elements.
    993 */
    994 void** dlindependent_comalloc(size_t, size_t*, void**);
    995 
    996 
    997 /*
    998   pvalloc(size_t n);
    999   Equivalent to valloc(minimum-page-that-holds(n)), that is,
   1000   round up n to nearest pagesize.
   1001  */
   1002 void*  dlpvalloc(size_t);
   1003 
   1004 /*
   1005   malloc_trim(size_t pad);
   1006 
   1007   If possible, gives memory back to the system (via negative arguments
   1008   to sbrk) if there is unused memory at the `high' end of the malloc
   1009   pool or in unused MMAP segments. You can call this after freeing
   1010   large blocks of memory to potentially reduce the system-level memory
   1011   requirements of a program. However, it cannot guarantee to reduce
   1012   memory. Under some allocation patterns, some large free blocks of
   1013   memory will be locked between two used chunks, so they cannot be
   1014   given back to the system.
   1015 
   1016   The `pad' argument to malloc_trim represents the amount of free
   1017   trailing space to leave untrimmed. If this argument is zero, only
   1018   the minimum amount of memory to maintain internal data structures
   1019   will be left. Non-zero arguments can be supplied to maintain enough
   1020   trailing space to service future expected allocations without having
   1021   to re-obtain memory from the system.
   1022 
   1023   Malloc_trim returns 1 if it actually released any memory, else 0.
   1024 */
   1025 int  dlmalloc_trim(size_t);
   1026 
   1027 /*
   1028   malloc_walk_free_pages(handler, harg)
   1029 
   1030   Calls the provided handler on each free region in the heap.  The
   1031   memory between start and end are guaranteed not to contain any
   1032   important data, so the handler is free to alter the contents
   1033   in any way.  This can be used to advise the OS that large free
   1034   regions may be swapped out.
   1035 
   1036   The value in harg will be passed to each call of the handler.
   1037  */
   1038 void dlmalloc_walk_free_pages(void(*)(void*, void*, void*), void*);
   1039 
   1040 /*
   1041   malloc_walk_heap(handler, harg)
   1042 
   1043   Calls the provided handler on each object or free region in the
   1044   heap.  The handler will receive the chunk pointer and length, the
   1045   object pointer and length, and the value in harg on each call.
   1046  */
   1047 void dlmalloc_walk_heap(void(*)(const void*, size_t,
   1048                                 const void*, size_t, void*),
   1049                         void*);
   1050 
   1051 /*
   1052   malloc_usable_size(void* p);
   1053 
   1054   Returns the number of bytes you can actually use in
   1055   an allocated chunk, which may be more than you requested (although
   1056   often not) due to alignment and minimum size constraints.
   1057   You can use this many bytes without worrying about
   1058   overwriting other allocated objects. This is not a particularly great
   1059   programming practice. malloc_usable_size can be more useful in
   1060   debugging and assertions, for example:
   1061 
   1062   p = malloc(n);
   1063   assert(malloc_usable_size(p) >= 256);
   1064 */
   1065 size_t dlmalloc_usable_size(void*);
   1066 
   1067 /*
   1068   malloc_stats();
   1069   Prints on stderr the amount of space obtained from the system (both
   1070   via sbrk and mmap), the maximum amount (which may be more than
   1071   current if malloc_trim and/or munmap got called), and the current
   1072   number of bytes allocated via malloc (or realloc, etc) but not yet
   1073   freed. Note that this is the number of bytes allocated, not the
   1074   number requested. It will be larger than the number requested
   1075   because of alignment and bookkeeping overhead. Because it includes
   1076   alignment wastage as being in use, this figure may be greater than
   1077   zero even when no user-level chunks are allocated.
   1078 
   1079   The reported current and maximum system memory can be inaccurate if
   1080   a program makes other calls to system memory allocation functions
   1081   (normally sbrk) outside of malloc.
   1082 
   1083   malloc_stats prints only the most commonly interesting statistics.
   1084   More information can be obtained by calling mallinfo.
   1085 */
   1086 void  dlmalloc_stats(void);
   1087 
   1088 #endif /* ONLY_MSPACES */
   1089 
   1090 #if MSPACES
   1091 
   1092 /*
   1093   mspace is an opaque type representing an independent
   1094   region of space that supports mspace_malloc, etc.
   1095 */
   1096 typedef void* mspace;
   1097 
   1098 /*
   1099   create_mspace creates and returns a new independent space with the
   1100   given initial capacity, or, if 0, the default granularity size.  It
   1101   returns null if there is no system memory available to create the
   1102   space.  If argument locked is non-zero, the space uses a separate
   1103   lock to control access. The capacity of the space will grow
   1104   dynamically as needed to service mspace_malloc requests.  You can
   1105   control the sizes of incremental increases of this space by
   1106   compiling with a different DEFAULT_GRANULARITY or dynamically
   1107   setting with mallopt(M_GRANULARITY, value).
   1108 */
   1109 mspace create_mspace(size_t capacity, int locked);
   1110 
   1111 /*
   1112   destroy_mspace destroys the given space, and attempts to return all
   1113   of its memory back to the system, returning the total number of
   1114   bytes freed. After destruction, the results of access to all memory
   1115   used by the space become undefined.
   1116 */
   1117 size_t destroy_mspace(mspace msp);
   1118 
   1119 /*
   1120   create_mspace_with_base uses the memory supplied as the initial base
   1121   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
   1122   space is used for bookkeeping, so the capacity must be at least this
   1123   large. (Otherwise 0 is returned.) When this initial space is
   1124   exhausted, additional memory will be obtained from the system.
   1125   Destroying this space will deallocate all additionally allocated
   1126   space (if possible) but not the initial base.
   1127 */
   1128 mspace create_mspace_with_base(void* base, size_t capacity, int locked);
   1129 
   1130 /*
   1131   mspace_malloc behaves as malloc, but operates within
   1132   the given space.
   1133 */
   1134 void* mspace_malloc(mspace msp, size_t bytes);
   1135 
   1136 /*
   1137   mspace_free behaves as free, but operates within
   1138   the given space.
   1139 
   1140   If compiled with FOOTERS==1, mspace_free is not actually needed.
   1141   free may be called instead of mspace_free because freed chunks from
   1142   any space are handled by their originating spaces.
   1143 */
   1144 void mspace_free(mspace msp, void* mem);
   1145 
   1146 /*
   1147   mspace_realloc behaves as realloc, but operates within
   1148   the given space.
   1149 
   1150   If compiled with FOOTERS==1, mspace_realloc is not actually
   1151   needed.  realloc may be called instead of mspace_realloc because
   1152   realloced chunks from any space are handled by their originating
   1153   spaces.
   1154 */
   1155 void* mspace_realloc(mspace msp, void* mem, size_t newsize);
   1156 
   1157 #if ANDROID /* Added for Android, not part of dlmalloc as released */
   1158 /*
   1159   mspace_merge_objects will merge allocated memory mema and memb
   1160   together, provided memb immediately follows mema.  It is roughly as
   1161   if memb has been freed and mema has been realloced to a larger size.
   1162   On successfully merging, mema will be returned. If either argument
   1163   is null or memb does not immediately follow mema, null will be
   1164   returned.
   1165 
   1166   Both mema and memb should have been previously allocated using
   1167   malloc or a related routine such as realloc. If either mema or memb
   1168   was not malloced or was previously freed, the result is undefined,
   1169   but like mspace_free, the default is to abort the program.
   1170 */
   1171 void* mspace_merge_objects(mspace msp, void* mema, void* memb);
   1172 #endif
   1173 
   1174 /*
   1175   mspace_calloc behaves as calloc, but operates within
   1176   the given space.
   1177 */
   1178 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
   1179 
   1180 /*
   1181   mspace_memalign behaves as memalign, but operates within
   1182   the given space.
   1183 */
   1184 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
   1185 
   1186 /*
   1187   mspace_independent_calloc behaves as independent_calloc, but
   1188   operates within the given space.
   1189 */
   1190 void** mspace_independent_calloc(mspace msp, size_t n_elements,
   1191                                  size_t elem_size, void* chunks[]);
   1192 
   1193 /*
   1194   mspace_independent_comalloc behaves as independent_comalloc, but
   1195   operates within the given space.
   1196 */
   1197 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
   1198                                    size_t sizes[], void* chunks[]);
   1199 
   1200 /*
   1201   mspace_footprint() returns the number of bytes obtained from the
   1202   system for this space.
   1203 */
   1204 size_t mspace_footprint(mspace msp);
   1205 
   1206 /*
   1207   mspace_max_footprint() returns the peak number of bytes obtained from the
   1208   system for this space.
   1209 */
   1210 size_t mspace_max_footprint(mspace msp);
   1211 
   1212 
   1213 #if !NO_MALLINFO
   1214 /*
   1215   mspace_mallinfo behaves as mallinfo, but reports properties of
   1216   the given space.
   1217 */
   1218 struct mallinfo mspace_mallinfo(mspace msp);
   1219 #endif /* NO_MALLINFO */
   1220 
   1221 /*
   1222   mspace_malloc_stats behaves as malloc_stats, but reports
   1223   properties of the given space.
   1224 */
   1225 void mspace_malloc_stats(mspace msp);
   1226 
   1227 /*
   1228   mspace_trim behaves as malloc_trim, but
   1229   operates within the given space.
   1230 */
   1231 int mspace_trim(mspace msp, size_t pad);
   1232 
   1233 /*
   1234   An alias for mallopt.
   1235 */
   1236 int mspace_mallopt(int, int);
   1237 
   1238 #endif /* MSPACES */
   1239 
   1240 #ifdef __cplusplus
   1241 };  /* end of extern "C" */
   1242 #endif /* __cplusplus */
   1243 
   1244 /*
   1245   ========================================================================
   1246   To make a fully customizable malloc.h header file, cut everything
   1247   above this line, put into file malloc.h, edit to suit, and #include it
   1248   on the next line, as well as in programs that use this malloc.
   1249   ========================================================================
   1250 */
   1251 
   1252 /* #include "malloc.h" */
   1253 
   1254 /*------------------------------ internal #includes ---------------------- */
   1255 
   1256 #ifdef WIN32
   1257 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
   1258 #endif /* WIN32 */
   1259 
   1260 #include <stdio.h>       /* for printing in malloc_stats */
   1261 
   1262 #ifndef LACKS_ERRNO_H
   1263 #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
   1264 #endif /* LACKS_ERRNO_H */
   1265 #if FOOTERS
   1266 #include <time.h>        /* for magic initialization */
   1267 #endif /* FOOTERS */
   1268 #ifndef LACKS_STDLIB_H
   1269 #include <stdlib.h>      /* for abort() */
   1270 #endif /* LACKS_STDLIB_H */
   1271 #ifdef DEBUG
   1272 #if ABORT_ON_ASSERT_FAILURE
   1273 #define assert(x) if(!(x)) ABORT
   1274 #else /* ABORT_ON_ASSERT_FAILURE */
   1275 #include <assert.h>
   1276 #endif /* ABORT_ON_ASSERT_FAILURE */
   1277 #else  /* DEBUG */
   1278 #define assert(x)
   1279 #endif /* DEBUG */
   1280 #ifndef LACKS_STRING_H
   1281 #include <string.h>      /* for memset etc */
   1282 #endif  /* LACKS_STRING_H */
   1283 #if USE_BUILTIN_FFS
   1284 #ifndef LACKS_STRINGS_H
   1285 #include <strings.h>     /* for ffs */
   1286 #endif /* LACKS_STRINGS_H */
   1287 #endif /* USE_BUILTIN_FFS */
   1288 #if HAVE_MMAP
   1289 #ifndef LACKS_SYS_MMAN_H
   1290 #include <sys/mman.h>    /* for mmap */
   1291 #endif /* LACKS_SYS_MMAN_H */
   1292 #ifndef LACKS_FCNTL_H
   1293 #include <fcntl.h>
   1294 #endif /* LACKS_FCNTL_H */
   1295 #endif /* HAVE_MMAP */
   1296 #if HAVE_MORECORE
   1297 #ifndef LACKS_UNISTD_H
   1298 #include <unistd.h>     /* for sbrk */
   1299 #else /* LACKS_UNISTD_H */
   1300 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
   1301 extern void*     sbrk(ptrdiff_t);
   1302 #endif /* FreeBSD etc */
   1303 #endif /* LACKS_UNISTD_H */
   1304 #endif /* HAVE_MMAP */
   1305 
   1306 #ifndef WIN32
   1307 #ifndef malloc_getpagesize
   1308 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
   1309 #    ifndef _SC_PAGE_SIZE
   1310 #      define _SC_PAGE_SIZE _SC_PAGESIZE
   1311 #    endif
   1312 #  endif
   1313 #  ifdef _SC_PAGE_SIZE
   1314 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
   1315 #  else
   1316 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
   1317        extern size_t getpagesize();
   1318 #      define malloc_getpagesize getpagesize()
   1319 #    else
   1320 #      ifdef WIN32 /* use supplied emulation of getpagesize */
   1321 #        define malloc_getpagesize getpagesize()
   1322 #      else
   1323 #        ifndef LACKS_SYS_PARAM_H
   1324 #          include <sys/param.h>
   1325 #        endif
   1326 #        ifdef EXEC_PAGESIZE
   1327 #          define malloc_getpagesize EXEC_PAGESIZE
   1328 #        else
   1329 #          ifdef NBPG
   1330 #            ifndef CLSIZE
   1331 #              define malloc_getpagesize NBPG
   1332 #            else
   1333 #              define malloc_getpagesize (NBPG * CLSIZE)
   1334 #            endif
   1335 #          else
   1336 #            ifdef NBPC
   1337 #              define malloc_getpagesize NBPC
   1338 #            else
   1339 #              ifdef PAGESIZE
   1340 #                define malloc_getpagesize PAGESIZE
   1341 #              else /* just guess */
   1342 #                define malloc_getpagesize ((size_t)4096U)
   1343 #              endif
   1344 #            endif
   1345 #          endif
   1346 #        endif
   1347 #      endif
   1348 #    endif
   1349 #  endif
   1350 #endif
   1351 #endif
   1352 
   1353 /* ------------------- size_t and alignment properties -------------------- */
   1354 
   1355 /* The byte and bit size of a size_t */
   1356 #define SIZE_T_SIZE         (sizeof(size_t))
   1357 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
   1358 
   1359 /* Some constants coerced to size_t */
   1360 /* Annoying but necessary to avoid errors on some plaftorms */
   1361 #define SIZE_T_ZERO         ((size_t)0)
   1362 #define SIZE_T_ONE          ((size_t)1)
   1363 #define SIZE_T_TWO          ((size_t)2)
   1364 #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
   1365 #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
   1366 #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
   1367 #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
   1368 
   1369 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
   1370 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
   1371 
   1372 /* True if address a has acceptable alignment */
   1373 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
   1374 
   1375 /* the number of bytes to offset an address to align it */
   1376 #define align_offset(A)\
   1377  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
   1378   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
   1379 
   1380 /* -------------------------- MMAP preliminaries ------------------------- */
   1381 
   1382 /*
   1383    If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and
   1384    checks to fail so compiler optimizer can delete code rather than
   1385    using so many "#if"s.
   1386 */
   1387 
   1388 
   1389 /* MORECORE and MMAP must return MFAIL on failure */
   1390 #define MFAIL                ((void*)(MAX_SIZE_T))
   1391 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
   1392 
   1393 #if !HAVE_MMAP
   1394 #define IS_MMAPPED_BIT       (SIZE_T_ZERO)
   1395 #define USE_MMAP_BIT         (SIZE_T_ZERO)
   1396 #define CALL_MMAP(s)         MFAIL
   1397 #define CALL_MUNMAP(a, s)    (-1)
   1398 #define DIRECT_MMAP(s)       MFAIL
   1399 
   1400 #else /* HAVE_MMAP */
   1401 #define IS_MMAPPED_BIT       (SIZE_T_ONE)
   1402 #define USE_MMAP_BIT         (SIZE_T_ONE)
   1403 
   1404 #ifndef WIN32
   1405 #define CALL_MUNMAP(a, s)    munmap((a), (s))
   1406 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
   1407 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
   1408 #define MAP_ANONYMOUS        MAP_ANON
   1409 #endif /* MAP_ANON */
   1410 #ifdef MAP_ANONYMOUS
   1411 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
   1412 #define CALL_MMAP(s)         mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
   1413 #else /* MAP_ANONYMOUS */
   1414 /*
   1415    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   1416    is unlikely to be needed, but is supplied just in case.
   1417 */
   1418 #define MMAP_FLAGS           (MAP_PRIVATE)
   1419 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
   1420 #define CALL_MMAP(s) ((dev_zero_fd < 0) ? \
   1421            (dev_zero_fd = open("/dev/zero", O_RDWR), \
   1422             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
   1423             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
   1424 #endif /* MAP_ANONYMOUS */
   1425 
   1426 #define DIRECT_MMAP(s)       CALL_MMAP(s)
   1427 #else /* WIN32 */
   1428 
   1429 /* Win32 MMAP via VirtualAlloc */
   1430 static void* win32mmap(size_t size) {
   1431   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
   1432   return (ptr != 0)? ptr: MFAIL;
   1433 }
   1434 
   1435 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
   1436 static void* win32direct_mmap(size_t size) {
   1437   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
   1438                            PAGE_READWRITE);
   1439   return (ptr != 0)? ptr: MFAIL;
   1440 }
   1441 
   1442 /* This function supports releasing coalesed segments */
   1443 static int win32munmap(void* ptr, size_t size) {
   1444   MEMORY_BASIC_INFORMATION minfo;
   1445   char* cptr = ptr;
   1446   while (size) {
   1447     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
   1448       return -1;
   1449     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
   1450         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
   1451       return -1;
   1452     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
   1453       return -1;
   1454     cptr += minfo.RegionSize;
   1455     size -= minfo.RegionSize;
   1456   }
   1457   return 0;
   1458 }
   1459 
   1460 #define CALL_MMAP(s)         win32mmap(s)
   1461 #define CALL_MUNMAP(a, s)    win32munmap((a), (s))
   1462 #define DIRECT_MMAP(s)       win32direct_mmap(s)
   1463 #endif /* WIN32 */
   1464 #endif /* HAVE_MMAP */
   1465 
   1466 #if HAVE_MMAP && HAVE_MREMAP
   1467 #define CALL_MREMAP(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
   1468 #else  /* HAVE_MMAP && HAVE_MREMAP */
   1469 #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
   1470 #endif /* HAVE_MMAP && HAVE_MREMAP */
   1471 
   1472 #if HAVE_MORECORE
   1473 #define CALL_MORECORE(S)     MORECORE(S)
   1474 #else  /* HAVE_MORECORE */
   1475 #define CALL_MORECORE(S)     MFAIL
   1476 #endif /* HAVE_MORECORE */
   1477 
   1478 /* mstate bit set if continguous morecore disabled or failed */
   1479 #define USE_NONCONTIGUOUS_BIT (4U)
   1480 
   1481 /* segment bit set in create_mspace_with_base */
   1482 #define EXTERN_BIT            (8U)
   1483 
   1484 
   1485 /* --------------------------- Lock preliminaries ------------------------ */
   1486 
   1487 #if USE_LOCKS
   1488 
   1489 /*
   1490   When locks are defined, there are up to two global locks:
   1491 
   1492   * If HAVE_MORECORE, morecore_mutex protects sequences of calls to
   1493     MORECORE.  In many cases sys_alloc requires two calls, that should
   1494     not be interleaved with calls by other threads.  This does not
   1495     protect against direct calls to MORECORE by other threads not
   1496     using this lock, so there is still code to cope the best we can on
   1497     interference.
   1498 
   1499   * magic_init_mutex ensures that mparams.magic and other
   1500     unique mparams values are initialized only once.
   1501 */
   1502 
   1503 #ifndef WIN32
   1504 /* By default use posix locks */
   1505 #include <pthread.h>
   1506 #define MLOCK_T pthread_mutex_t
   1507 #define INITIAL_LOCK(l)      pthread_mutex_init(l, NULL)
   1508 #define ACQUIRE_LOCK(l)      pthread_mutex_lock(l)
   1509 #define RELEASE_LOCK(l)      pthread_mutex_unlock(l)
   1510 
   1511 #if HAVE_MORECORE
   1512 static MLOCK_T morecore_mutex = PTHREAD_MUTEX_INITIALIZER;
   1513 #endif /* HAVE_MORECORE */
   1514 
   1515 static MLOCK_T magic_init_mutex = PTHREAD_MUTEX_INITIALIZER;
   1516 
   1517 #else /* WIN32 */
   1518 /*
   1519    Because lock-protected regions have bounded times, and there
   1520    are no recursive lock calls, we can use simple spinlocks.
   1521 */
   1522 
   1523 #define MLOCK_T long
   1524 static int win32_acquire_lock (MLOCK_T *sl) {
   1525   for (;;) {
   1526 #ifdef InterlockedCompareExchangePointer
   1527     if (!InterlockedCompareExchange(sl, 1, 0))
   1528       return 0;
   1529 #else  /* Use older void* version */
   1530     if (!InterlockedCompareExchange((void**)sl, (void*)1, (void*)0))
   1531       return 0;
   1532 #endif /* InterlockedCompareExchangePointer */
   1533     Sleep (0);
   1534   }
   1535 }
   1536 
   1537 static void win32_release_lock (MLOCK_T *sl) {
   1538   InterlockedExchange (sl, 0);
   1539 }
   1540 
   1541 #define INITIAL_LOCK(l)      *(l)=0
   1542 #define ACQUIRE_LOCK(l)      win32_acquire_lock(l)
   1543 #define RELEASE_LOCK(l)      win32_release_lock(l)
   1544 #if HAVE_MORECORE
   1545 static MLOCK_T morecore_mutex;
   1546 #endif /* HAVE_MORECORE */
   1547 static MLOCK_T magic_init_mutex;
   1548 #endif /* WIN32 */
   1549 
   1550 #define USE_LOCK_BIT               (2U)
   1551 #else  /* USE_LOCKS */
   1552 #define USE_LOCK_BIT               (0U)
   1553 #define INITIAL_LOCK(l)
   1554 #endif /* USE_LOCKS */
   1555 
   1556 #if USE_LOCKS && HAVE_MORECORE
   1557 #define ACQUIRE_MORECORE_LOCK()    ACQUIRE_LOCK(&morecore_mutex);
   1558 #define RELEASE_MORECORE_LOCK()    RELEASE_LOCK(&morecore_mutex);
   1559 #else /* USE_LOCKS && HAVE_MORECORE */
   1560 #define ACQUIRE_MORECORE_LOCK()
   1561 #define RELEASE_MORECORE_LOCK()
   1562 #endif /* USE_LOCKS && HAVE_MORECORE */
   1563 
   1564 #if USE_LOCKS
   1565 #define ACQUIRE_MAGIC_INIT_LOCK()  ACQUIRE_LOCK(&magic_init_mutex);
   1566 #define RELEASE_MAGIC_INIT_LOCK()  RELEASE_LOCK(&magic_init_mutex);
   1567 #else  /* USE_LOCKS */
   1568 #define ACQUIRE_MAGIC_INIT_LOCK()
   1569 #define RELEASE_MAGIC_INIT_LOCK()
   1570 #endif /* USE_LOCKS */
   1571 
   1572 
   1573 /* -----------------------  Chunk representations ------------------------ */
   1574 
   1575 /*
   1576   (The following includes lightly edited explanations by Colin Plumb.)
   1577 
   1578   The malloc_chunk declaration below is misleading (but accurate and
   1579   necessary).  It declares a "view" into memory allowing access to
   1580   necessary fields at known offsets from a given base.
   1581 
   1582   Chunks of memory are maintained using a `boundary tag' method as
   1583   originally described by Knuth.  (See the paper by Paul Wilson
   1584   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
   1585   techniques.)  Sizes of free chunks are stored both in the front of
   1586   each chunk and at the end.  This makes consolidating fragmented
   1587   chunks into bigger chunks fast.  The head fields also hold bits
   1588   representing whether chunks are free or in use.
   1589 
   1590   Here are some pictures to make it clearer.  They are "exploded" to
   1591   show that the state of a chunk can be thought of as extending from
   1592   the high 31 bits of the head field of its header through the
   1593   prev_foot and PINUSE_BIT bit of the following chunk header.
   1594 
   1595   A chunk that's in use looks like:
   1596 
   1597    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1598            | Size of previous chunk (if P = 1)                             |
   1599            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1600          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
   1601          | Size of this chunk                                         1| +-+
   1602    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1603          |                                                               |
   1604          +-                                                             -+
   1605          |                                                               |
   1606          +-                                                             -+
   1607          |                                                               :
   1608          +-      size - sizeof(size_t) available payload bytes          -+
   1609          :                                                               |
   1610  chunk-> +-                                                             -+
   1611          |                                                               |
   1612          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1613        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
   1614        | Size of next chunk (may or may not be in use)               | +-+
   1615  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1616 
   1617     And if it's free, it looks like this:
   1618 
   1619    chunk-> +-                                                             -+
   1620            | User payload (must be in use, or we would have merged!)       |
   1621            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1622          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
   1623          | Size of this chunk                                         0| +-+
   1624    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1625          | Next pointer                                                  |
   1626          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1627          | Prev pointer                                                  |
   1628          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1629          |                                                               :
   1630          +-      size - sizeof(struct chunk) unused bytes               -+
   1631          :                                                               |
   1632  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1633          | Size of this chunk                                            |
   1634          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1635        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
   1636        | Size of next chunk (must be in use, or we would have merged)| +-+
   1637  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1638        |                                                               :
   1639        +- User payload                                                -+
   1640        :                                                               |
   1641        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1642                                                                      |0|
   1643                                                                      +-+
   1644   Note that since we always merge adjacent free chunks, the chunks
   1645   adjacent to a free chunk must be in use.
   1646 
   1647   Given a pointer to a chunk (which can be derived trivially from the
   1648   payload pointer) we can, in O(1) time, find out whether the adjacent
   1649   chunks are free, and if so, unlink them from the lists that they
   1650   are on and merge them with the current chunk.
   1651 
   1652   Chunks always begin on even word boundaries, so the mem portion
   1653   (which is returned to the user) is also on an even word boundary, and
   1654   thus at least double-word aligned.
   1655 
   1656   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
   1657   chunk size (which is always a multiple of two words), is an in-use
   1658   bit for the *previous* chunk.  If that bit is *clear*, then the
   1659   word before the current chunk size contains the previous chunk
   1660   size, and can be used to find the front of the previous chunk.
   1661   The very first chunk allocated always has this bit set, preventing
   1662   access to non-existent (or non-owned) memory. If pinuse is set for
   1663   any given chunk, then you CANNOT determine the size of the
   1664   previous chunk, and might even get a memory addressing fault when
   1665   trying to do so.
   1666 
   1667   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
   1668   the chunk size redundantly records whether the current chunk is
   1669   inuse. This redundancy enables usage checks within free and realloc,
   1670   and reduces indirection when freeing and consolidating chunks.
   1671 
   1672   Each freshly allocated chunk must have both cinuse and pinuse set.
   1673   That is, each allocated chunk borders either a previously allocated
   1674   and still in-use chunk, or the base of its memory arena. This is
   1675   ensured by making all allocations from the the `lowest' part of any
   1676   found chunk.  Further, no free chunk physically borders another one,
   1677   so each free chunk is known to be preceded and followed by either
   1678   inuse chunks or the ends of memory.
   1679 
   1680   Note that the `foot' of the current chunk is actually represented
   1681   as the prev_foot of the NEXT chunk. This makes it easier to
   1682   deal with alignments etc but can be very confusing when trying
   1683   to extend or adapt this code.
   1684 
   1685   The exceptions to all this are
   1686 
   1687      1. The special chunk `top' is the top-most available chunk (i.e.,
   1688         the one bordering the end of available memory). It is treated
   1689         specially.  Top is never included in any bin, is used only if
   1690         no other chunk is available, and is released back to the
   1691         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
   1692         the top chunk is treated as larger (and thus less well
   1693         fitting) than any other available chunk.  The top chunk
   1694         doesn't update its trailing size field since there is no next
   1695         contiguous chunk that would have to index off it. However,
   1696         space is still allocated for it (TOP_FOOT_SIZE) to enable
   1697         separation or merging when space is extended.
   1698 
   1699      3. Chunks allocated via mmap, which have the lowest-order bit
   1700         (IS_MMAPPED_BIT) set in their prev_foot fields, and do not set
   1701         PINUSE_BIT in their head fields.  Because they are allocated
   1702         one-by-one, each must carry its own prev_foot field, which is
   1703         also used to hold the offset this chunk has within its mmapped
   1704         region, which is needed to preserve alignment. Each mmapped
   1705         chunk is trailed by the first two fields of a fake next-chunk
   1706         for sake of usage checks.
   1707 
   1708 */
   1709 
   1710 struct malloc_chunk {
   1711   size_t               prev_foot;  /* Size of previous chunk (if free).  */
   1712   size_t               head;       /* Size and inuse bits. */
   1713   struct malloc_chunk* fd;         /* double links -- used only if free. */
   1714   struct malloc_chunk* bk;
   1715 };
   1716 
   1717 typedef struct malloc_chunk  mchunk;
   1718 typedef struct malloc_chunk* mchunkptr;
   1719 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
   1720 typedef unsigned int bindex_t;         /* Described below */
   1721 typedef unsigned int binmap_t;         /* Described below */
   1722 typedef unsigned int flag_t;           /* The type of various bit flag sets */
   1723 
   1724 /* ------------------- Chunks sizes and alignments ----------------------- */
   1725 
   1726 #define MCHUNK_SIZE         (sizeof(mchunk))
   1727 
   1728 #if FOOTERS
   1729 #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
   1730 #else /* FOOTERS */
   1731 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
   1732 #endif /* FOOTERS */
   1733 
   1734 /* MMapped chunks need a second word of overhead ... */
   1735 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
   1736 /* ... and additional padding for fake next-chunk at foot */
   1737 #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
   1738 
   1739 /* The smallest size we can malloc is an aligned minimal chunk */
   1740 #define MIN_CHUNK_SIZE\
   1741   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
   1742 
   1743 /* conversion from malloc headers to user pointers, and back */
   1744 #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
   1745 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
   1746 /* chunk associated with aligned address A */
   1747 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
   1748 
   1749 /* Bounds on request (not chunk) sizes. */
   1750 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
   1751 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
   1752 
   1753 /* pad request bytes into a usable size */
   1754 #define pad_request(req) \
   1755    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
   1756 
   1757 /* pad request, checking for minimum (but not maximum) */
   1758 #define request2size(req) \
   1759   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
   1760 
   1761 
   1762 /* ------------------ Operations on head and foot fields ----------------- */
   1763 
   1764 /*
   1765   The head field of a chunk is or'ed with PINUSE_BIT when previous
   1766   adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in
   1767   use. If the chunk was obtained with mmap, the prev_foot field has
   1768   IS_MMAPPED_BIT set, otherwise holding the offset of the base of the
   1769   mmapped region to the base of the chunk.
   1770 */
   1771 
   1772 #define PINUSE_BIT          (SIZE_T_ONE)
   1773 #define CINUSE_BIT          (SIZE_T_TWO)
   1774 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
   1775 
   1776 /* Head value for fenceposts */
   1777 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
   1778 
   1779 /* extraction of fields from head words */
   1780 #define cinuse(p)           ((p)->head & CINUSE_BIT)
   1781 #define pinuse(p)           ((p)->head & PINUSE_BIT)
   1782 #define chunksize(p)        ((p)->head & ~(INUSE_BITS))
   1783 
   1784 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
   1785 #define clear_cinuse(p)     ((p)->head &= ~CINUSE_BIT)
   1786 
   1787 /* Treat space at ptr +/- offset as a chunk */
   1788 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
   1789 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
   1790 
   1791 /* Ptr to next or previous physical malloc_chunk. */
   1792 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~INUSE_BITS)))
   1793 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
   1794 
   1795 /* extract next chunk's pinuse bit */
   1796 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
   1797 
   1798 /* Get/set size at footer */
   1799 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
   1800 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
   1801 
   1802 /* Set size, pinuse bit, and foot */
   1803 #define set_size_and_pinuse_of_free_chunk(p, s)\
   1804   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
   1805 
   1806 /* Set size, pinuse bit, foot, and clear next pinuse */
   1807 #define set_free_with_pinuse(p, s, n)\
   1808   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
   1809 
   1810 #define is_mmapped(p)\
   1811   (!((p)->head & PINUSE_BIT) && ((p)->prev_foot & IS_MMAPPED_BIT))
   1812 
   1813 /* Get the internal overhead associated with chunk p */
   1814 #define overhead_for(p)\
   1815  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
   1816 
   1817 /* Return true if malloced space is not necessarily cleared */
   1818 #if MMAP_CLEARS
   1819 #define calloc_must_clear(p) (!is_mmapped(p))
   1820 #else /* MMAP_CLEARS */
   1821 #define calloc_must_clear(p) (1)
   1822 #endif /* MMAP_CLEARS */
   1823 
   1824 /* ---------------------- Overlaid data structures ----------------------- */
   1825 
   1826 /*
   1827   When chunks are not in use, they are treated as nodes of either
   1828   lists or trees.
   1829 
   1830   "Small"  chunks are stored in circular doubly-linked lists, and look
   1831   like this:
   1832 
   1833     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1834             |             Size of previous chunk                            |
   1835             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1836     `head:' |             Size of chunk, in bytes                         |P|
   1837       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1838             |             Forward pointer to next chunk in list             |
   1839             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1840             |             Back pointer to previous chunk in list            |
   1841             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1842             |             Unused space (may be 0 bytes long)                .
   1843             .                                                               .
   1844             .                                                               |
   1845 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1846     `foot:' |             Size of chunk, in bytes                           |
   1847             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1848 
   1849   Larger chunks are kept in a form of bitwise digital trees (aka
   1850   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
   1851   free chunks greater than 256 bytes, their size doesn't impose any
   1852   constraints on user chunk sizes.  Each node looks like:
   1853 
   1854     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1855             |             Size of previous chunk                            |
   1856             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1857     `head:' |             Size of chunk, in bytes                         |P|
   1858       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1859             |             Forward pointer to next chunk of same size        |
   1860             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1861             |             Back pointer to previous chunk of same size       |
   1862             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1863             |             Pointer to left child (child[0])                  |
   1864             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1865             |             Pointer to right child (child[1])                 |
   1866             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1867             |             Pointer to parent                                 |
   1868             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1869             |             bin index of this chunk                           |
   1870             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1871             |             Unused space                                      .
   1872             .                                                               |
   1873 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1874     `foot:' |             Size of chunk, in bytes                           |
   1875             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   1876 
   1877   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
   1878   of the same size are arranged in a circularly-linked list, with only
   1879   the oldest chunk (the next to be used, in our FIFO ordering)
   1880   actually in the tree.  (Tree members are distinguished by a non-null
   1881   parent pointer.)  If a chunk with the same size an an existing node
   1882   is inserted, it is linked off the existing node using pointers that
   1883   work in the same way as fd/bk pointers of small chunks.
   1884 
   1885   Each tree contains a power of 2 sized range of chunk sizes (the
   1886   smallest is 0x100 <= x < 0x180), which is is divided in half at each
   1887   tree level, with the chunks in the smaller half of the range (0x100
   1888   <= x < 0x140 for the top nose) in the left subtree and the larger
   1889   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
   1890   done by inspecting individual bits.
   1891 
   1892   Using these rules, each node's left subtree contains all smaller
   1893   sizes than its right subtree.  However, the node at the root of each
   1894   subtree has no particular ordering relationship to either.  (The
   1895   dividing line between the subtree sizes is based on trie relation.)
   1896   If we remove the last chunk of a given size from the interior of the
   1897   tree, we need to replace it with a leaf node.  The tree ordering
   1898   rules permit a node to be replaced by any leaf below it.
   1899 
   1900   The smallest chunk in a tree (a common operation in a best-fit
   1901   allocator) can be found by walking a path to the leftmost leaf in
   1902   the tree.  Unlike a usual binary tree, where we follow left child
   1903   pointers until we reach a null, here we follow the right child
   1904   pointer any time the left one is null, until we reach a leaf with
   1905   both child pointers null. The smallest chunk in the tree will be
   1906   somewhere along that path.
   1907 
   1908   The worst case number of steps to add, find, or remove a node is
   1909   bounded by the number of bits differentiating chunks within
   1910   bins. Under current bin calculations, this ranges from 6 up to 21
   1911   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
   1912   is of course much better.
   1913 */
   1914 
   1915 struct malloc_tree_chunk {
   1916   /* The first four fields must be compatible with malloc_chunk */
   1917   size_t                    prev_foot;
   1918   size_t                    head;
   1919   struct malloc_tree_chunk* fd;
   1920   struct malloc_tree_chunk* bk;
   1921 
   1922   struct malloc_tree_chunk* child[2];
   1923   struct malloc_tree_chunk* parent;
   1924   bindex_t                  index;
   1925 };
   1926 
   1927 typedef struct malloc_tree_chunk  tchunk;
   1928 typedef struct malloc_tree_chunk* tchunkptr;
   1929 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
   1930 
   1931 /* A little helper macro for trees */
   1932 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
   1933 
   1934 /* ----------------------------- Segments -------------------------------- */
   1935 
   1936 /*
   1937   Each malloc space may include non-contiguous segments, held in a
   1938   list headed by an embedded malloc_segment record representing the
   1939   top-most space. Segments also include flags holding properties of
   1940   the space. Large chunks that are directly allocated by mmap are not
   1941   included in this list. They are instead independently created and
   1942   destroyed without otherwise keeping track of them.
   1943 
   1944   Segment management mainly comes into play for spaces allocated by
   1945   MMAP.  Any call to MMAP might or might not return memory that is
   1946   adjacent to an existing segment.  MORECORE normally contiguously
   1947   extends the current space, so this space is almost always adjacent,
   1948   which is simpler and faster to deal with. (This is why MORECORE is
   1949   used preferentially to MMAP when both are available -- see
   1950   sys_alloc.)  When allocating using MMAP, we don't use any of the
   1951   hinting mechanisms (inconsistently) supported in various
   1952   implementations of unix mmap, or distinguish reserving from
   1953   committing memory. Instead, we just ask for space, and exploit
   1954   contiguity when we get it.  It is probably possible to do
   1955   better than this on some systems, but no general scheme seems
   1956   to be significantly better.
   1957 
   1958   Management entails a simpler variant of the consolidation scheme
   1959   used for chunks to reduce fragmentation -- new adjacent memory is
   1960   normally prepended or appended to an existing segment. However,
   1961   there are limitations compared to chunk consolidation that mostly
   1962   reflect the fact that segment processing is relatively infrequent
   1963   (occurring only when getting memory from system) and that we
   1964   don't expect to have huge numbers of segments:
   1965 
   1966   * Segments are not indexed, so traversal requires linear scans.  (It
   1967     would be possible to index these, but is not worth the extra
   1968     overhead and complexity for most programs on most platforms.)
   1969   * New segments are only appended to old ones when holding top-most
   1970     memory; if they cannot be prepended to others, they are held in
   1971     different segments.
   1972 
   1973   Except for the top-most segment of an mstate, each segment record
   1974   is kept at the tail of its segment. Segments are added by pushing
   1975   segment records onto the list headed by &mstate.seg for the
   1976   containing mstate.
   1977 
   1978   Segment flags control allocation/merge/deallocation policies:
   1979   * If EXTERN_BIT set, then we did not allocate this segment,
   1980     and so should not try to deallocate or merge with others.
   1981     (This currently holds only for the initial segment passed
   1982     into create_mspace_with_base.)
   1983   * If IS_MMAPPED_BIT set, the segment may be merged with
   1984     other surrounding mmapped segments and trimmed/de-allocated
   1985     using munmap.
   1986   * If neither bit is set, then the segment was obtained using
   1987     MORECORE so can be merged with surrounding MORECORE'd segments
   1988     and deallocated/trimmed using MORECORE with negative arguments.
   1989 */
   1990 
   1991 struct malloc_segment {
   1992   char*        base;             /* base address */
   1993   size_t       size;             /* allocated size */
   1994   struct malloc_segment* next;   /* ptr to next segment */
   1995   flag_t       sflags;           /* mmap and extern flag */
   1996 };
   1997 
   1998 #define is_mmapped_segment(S)  ((S)->sflags & IS_MMAPPED_BIT)
   1999 #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
   2000 
   2001 typedef struct malloc_segment  msegment;
   2002 typedef struct malloc_segment* msegmentptr;
   2003 
   2004 /* ---------------------------- malloc_state ----------------------------- */
   2005 
   2006 /*
   2007    A malloc_state holds all of the bookkeeping for a space.
   2008    The main fields are:
   2009 
   2010   Top
   2011     The topmost chunk of the currently active segment. Its size is
   2012     cached in topsize.  The actual size of topmost space is
   2013     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
   2014     fenceposts and segment records if necessary when getting more
   2015     space from the system.  The size at which to autotrim top is
   2016     cached from mparams in trim_check, except that it is disabled if
   2017     an autotrim fails.
   2018 
   2019   Designated victim (dv)
   2020     This is the preferred chunk for servicing small requests that
   2021     don't have exact fits.  It is normally the chunk split off most
   2022     recently to service another small request.  Its size is cached in
   2023     dvsize. The link fields of this chunk are not maintained since it
   2024     is not kept in a bin.
   2025 
   2026   SmallBins
   2027     An array of bin headers for free chunks.  These bins hold chunks
   2028     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
   2029     chunks of all the same size, spaced 8 bytes apart.  To simplify
   2030     use in double-linked lists, each bin header acts as a malloc_chunk
   2031     pointing to the real first node, if it exists (else pointing to
   2032     itself).  This avoids special-casing for headers.  But to avoid
   2033     waste, we allocate only the fd/bk pointers of bins, and then use
   2034     repositioning tricks to treat these as the fields of a chunk.
   2035 
   2036   TreeBins
   2037     Treebins are pointers to the roots of trees holding a range of
   2038     sizes. There are 2 equally spaced treebins for each power of two
   2039     from TREE_SHIFT to TREE_SHIFT+16. The last bin holds anything
   2040     larger.
   2041 
   2042   Bin maps
   2043     There is one bit map for small bins ("smallmap") and one for
   2044     treebins ("treemap).  Each bin sets its bit when non-empty, and
   2045     clears the bit when empty.  Bit operations are then used to avoid
   2046     bin-by-bin searching -- nearly all "search" is done without ever
   2047     looking at bins that won't be selected.  The bit maps
   2048     conservatively use 32 bits per map word, even if on 64bit system.
   2049     For a good description of some of the bit-based techniques used
   2050     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
   2051     supplement at http://hackersdelight.org/). Many of these are
   2052     intended to reduce the branchiness of paths through malloc etc, as
   2053     well as to reduce the number of memory locations read or written.
   2054 
   2055   Segments
   2056     A list of segments headed by an embedded malloc_segment record
   2057     representing the initial space.
   2058 
   2059   Address check support
   2060     The least_addr field is the least address ever obtained from
   2061     MORECORE or MMAP. Attempted frees and reallocs of any address less
   2062     than this are trapped (unless INSECURE is defined).
   2063 
   2064   Magic tag
   2065     A cross-check field that should always hold same value as mparams.magic.
   2066 
   2067   Flags
   2068     Bits recording whether to use MMAP, locks, or contiguous MORECORE
   2069 
   2070   Statistics
   2071     Each space keeps track of current and maximum system memory
   2072     obtained via MORECORE or MMAP.
   2073 
   2074   Locking
   2075     If USE_LOCKS is defined, the "mutex" lock is acquired and released
   2076     around every public call using this mspace.
   2077 */
   2078 
   2079 /* Bin types, widths and sizes */
   2080 #define NSMALLBINS        (32U)
   2081 #define NTREEBINS         (32U)
   2082 #define SMALLBIN_SHIFT    (3U)
   2083 #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
   2084 #define TREEBIN_SHIFT     (8U)
   2085 #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
   2086 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
   2087 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
   2088 
   2089 struct malloc_state {
   2090   binmap_t   smallmap;
   2091   binmap_t   treemap;
   2092   size_t     dvsize;
   2093   size_t     topsize;
   2094   char*      least_addr;
   2095   mchunkptr  dv;
   2096   mchunkptr  top;
   2097   size_t     trim_check;
   2098   size_t     magic;
   2099   mchunkptr  smallbins[(NSMALLBINS+1)*2];
   2100   tbinptr    treebins[NTREEBINS];
   2101   size_t     footprint;
   2102 #if USE_MAX_ALLOWED_FOOTPRINT
   2103   size_t     max_allowed_footprint;
   2104 #endif
   2105   size_t     max_footprint;
   2106   flag_t     mflags;
   2107 #if USE_LOCKS
   2108   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
   2109 #endif /* USE_LOCKS */
   2110   msegment   seg;
   2111 };
   2112 
   2113 typedef struct malloc_state*    mstate;
   2114 
   2115 /* ------------- Global malloc_state and malloc_params ------------------- */
   2116 
   2117 /*
   2118   malloc_params holds global properties, including those that can be
   2119   dynamically set using mallopt. There is a single instance, mparams,
   2120   initialized in init_mparams.
   2121 */
   2122 
   2123 struct malloc_params {
   2124   size_t magic;
   2125   size_t page_size;
   2126   size_t granularity;
   2127   size_t mmap_threshold;
   2128   size_t trim_threshold;
   2129   flag_t default_mflags;
   2130 };
   2131 
   2132 static struct malloc_params mparams;
   2133 
   2134 /* The global malloc_state used for all non-"mspace" calls */
   2135 static struct malloc_state _gm_
   2136 #if USE_MAX_ALLOWED_FOOTPRINT
   2137         = { .max_allowed_footprint = MAX_SIZE_T };
   2138 #else
   2139         ;
   2140 #endif
   2141 
   2142 #define gm                 (&_gm_)
   2143 #define is_global(M)       ((M) == &_gm_)
   2144 #define is_initialized(M)  ((M)->top != 0)
   2145 
   2146 /* -------------------------- system alloc setup ------------------------- */
   2147 
   2148 /* Operations on mflags */
   2149 
   2150 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
   2151 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
   2152 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
   2153 
   2154 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
   2155 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
   2156 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
   2157 
   2158 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
   2159 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
   2160 
   2161 #define set_lock(M,L)\
   2162  ((M)->mflags = (L)?\
   2163   ((M)->mflags | USE_LOCK_BIT) :\
   2164   ((M)->mflags & ~USE_LOCK_BIT))
   2165 
   2166 /* page-align a size */
   2167 #define page_align(S)\
   2168  (((S) + (mparams.page_size)) & ~(mparams.page_size - SIZE_T_ONE))
   2169 
   2170 /* granularity-align a size */
   2171 #define granularity_align(S)\
   2172   (((S) + (mparams.granularity)) & ~(mparams.granularity - SIZE_T_ONE))
   2173 
   2174 #define is_page_aligned(S)\
   2175    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
   2176 #define is_granularity_aligned(S)\
   2177    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
   2178 
   2179 /*  True if segment S holds address A */
   2180 #define segment_holds(S, A)\
   2181   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
   2182 
   2183 /* Return segment holding given address */
   2184 static msegmentptr segment_holding(mstate m, char* addr) {
   2185   msegmentptr sp = &m->seg;
   2186   for (;;) {
   2187     if (addr >= sp->base && addr < sp->base + sp->size)
   2188       return sp;
   2189     if ((sp = sp->next) == 0)
   2190       return 0;
   2191   }
   2192 }
   2193 
   2194 /* Return true if segment contains a segment link */
   2195 static int has_segment_link(mstate m, msegmentptr ss) {
   2196   msegmentptr sp = &m->seg;
   2197   for (;;) {
   2198     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
   2199       return 1;
   2200     if ((sp = sp->next) == 0)
   2201       return 0;
   2202   }
   2203 }
   2204 
   2205 #ifndef MORECORE_CANNOT_TRIM
   2206 #define should_trim(M,s)  ((s) > (M)->trim_check)
   2207 #else  /* MORECORE_CANNOT_TRIM */
   2208 #define should_trim(M,s)  (0)
   2209 #endif /* MORECORE_CANNOT_TRIM */
   2210 
   2211 /*
   2212   TOP_FOOT_SIZE is padding at the end of a segment, including space
   2213   that may be needed to place segment records and fenceposts when new
   2214   noncontiguous segments are added.
   2215 */
   2216 #define TOP_FOOT_SIZE\
   2217   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
   2218 
   2219 
   2220 /* -------------------------------  Hooks -------------------------------- */
   2221 
   2222 /*
   2223   PREACTION should be defined to return 0 on success, and nonzero on
   2224   failure. If you are not using locking, you can redefine these to do
   2225   anything you like.
   2226 */
   2227 
   2228 #if USE_LOCKS
   2229 
   2230 /* Ensure locks are initialized */
   2231 #define GLOBALLY_INITIALIZE() (mparams.page_size == 0 && init_mparams())
   2232 
   2233 #define PREACTION(M)  ((GLOBALLY_INITIALIZE() || use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
   2234 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
   2235 #else /* USE_LOCKS */
   2236 
   2237 #ifndef PREACTION
   2238 #define PREACTION(M) (0)
   2239 #endif  /* PREACTION */
   2240 
   2241 #ifndef POSTACTION
   2242 #define POSTACTION(M)
   2243 #endif  /* POSTACTION */
   2244 
   2245 #endif /* USE_LOCKS */
   2246 
   2247 /*
   2248   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
   2249   USAGE_ERROR_ACTION is triggered on detected bad frees and
   2250   reallocs. The argument p is an address that might have triggered the
   2251   fault. It is ignored by the two predefined actions, but might be
   2252   useful in custom actions that try to help diagnose errors.
   2253 */
   2254 
   2255 #if PROCEED_ON_ERROR
   2256 
   2257 /* A count of the number of corruption errors causing resets */
   2258 int malloc_corruption_error_count;
   2259 
   2260 /* default corruption action */
   2261 static void reset_on_error(mstate m);
   2262 
   2263 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
   2264 #define USAGE_ERROR_ACTION(m, p)
   2265 
   2266 #else /* PROCEED_ON_ERROR */
   2267 
   2268 /* The following Android-specific code is used to print an informative
   2269  * fatal error message to the log when we detect that a heap corruption
   2270  * was detected. We need to be careful about not using a log function
   2271  * that may require an allocation here!
   2272  */
   2273 #ifdef LOG_ON_HEAP_ERROR
   2274 
   2275 #  include <private/logd.h>
   2276 
   2277 static void __bionic_heap_error(const char* msg, const char* function)
   2278 {
   2279     /* We format the buffer explicitely, i.e. without using snprintf()
   2280      * which may use malloc() internally. Not something we can trust
   2281      * if we just detected a corrupted heap.
   2282      */
   2283     char buffer[256];
   2284     strlcpy(buffer, "@@@ ABORTING: ", sizeof(buffer));
   2285     strlcat(buffer, msg, sizeof(buffer));
   2286     if (function != NULL) {
   2287         strlcat(buffer, " IN ", sizeof(buffer));
   2288         strlcat(buffer, function, sizeof(buffer));
   2289     }
   2290     __libc_android_log_write(ANDROID_LOG_FATAL,"libc",buffer);
   2291     abort();
   2292 }
   2293 
   2294 #  ifndef CORRUPTION_ERROR_ACTION
   2295 #    define CORRUPTION_ERROR_ACTION(m)  \
   2296     __bionic_heap_error("HEAP MEMORY CORRUPTION", __FUNCTION__)
   2297 #  endif
   2298 #  ifndef USAGE_ERROR_ACTION
   2299 #    define USAGE_ERROR_ACTION(m,p)   \
   2300     __bionic_heap_error("INVALID HEAP ADDRESS", __FUNCTION__)
   2301 #  endif
   2302 
   2303 #else /* !LOG_ON_HEAP_ERROR */
   2304 
   2305 #  ifndef CORRUPTION_ERROR_ACTION
   2306 #    define CORRUPTION_ERROR_ACTION(m) ABORT
   2307 #  endif /* CORRUPTION_ERROR_ACTION */
   2308 
   2309 #  ifndef USAGE_ERROR_ACTION
   2310 #    define USAGE_ERROR_ACTION(m,p) ABORT
   2311 #  endif /* USAGE_ERROR_ACTION */
   2312 
   2313 #endif /* !LOG_ON_HEAP_ERROR */
   2314 
   2315 
   2316 #endif /* PROCEED_ON_ERROR */
   2317 
   2318 /* -------------------------- Debugging setup ---------------------------- */
   2319 
   2320 #if ! DEBUG
   2321 
   2322 #define check_free_chunk(M,P)
   2323 #define check_inuse_chunk(M,P)
   2324 #define check_malloced_chunk(M,P,N)
   2325 #define check_mmapped_chunk(M,P)
   2326 #define check_malloc_state(M)
   2327 #define check_top_chunk(M,P)
   2328 
   2329 #else /* DEBUG */
   2330 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
   2331 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
   2332 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
   2333 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
   2334 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
   2335 #define check_malloc_state(M)       do_check_malloc_state(M)
   2336 
   2337 static void   do_check_any_chunk(mstate m, mchunkptr p);
   2338 static void   do_check_top_chunk(mstate m, mchunkptr p);
   2339 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
   2340 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
   2341 static void   do_check_free_chunk(mstate m, mchunkptr p);
   2342 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
   2343 static void   do_check_tree(mstate m, tchunkptr t);
   2344 static void   do_check_treebin(mstate m, bindex_t i);
   2345 static void   do_check_smallbin(mstate m, bindex_t i);
   2346 static void   do_check_malloc_state(mstate m);
   2347 static int    bin_find(mstate m, mchunkptr x);
   2348 static size_t traverse_and_check(mstate m);
   2349 #endif /* DEBUG */
   2350 
   2351 /* ---------------------------- Indexing Bins ---------------------------- */
   2352 
   2353 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
   2354 #define small_index(s)      ((s)  >> SMALLBIN_SHIFT)
   2355 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
   2356 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
   2357 
   2358 /* addressing by index. See above about smallbin repositioning */
   2359 #define smallbin_at(M, i)   ((sbinptr)((char*)&((M)->smallbins[(i)<<1])))
   2360 #define treebin_at(M,i)     (&((M)->treebins[i]))
   2361 
   2362 /* assign tree index for size S to variable I */
   2363 #if defined(__GNUC__) && defined(i386)
   2364 #define compute_tree_index(S, I)\
   2365 {\
   2366   size_t X = S >> TREEBIN_SHIFT;\
   2367   if (X == 0)\
   2368     I = 0;\
   2369   else if (X > 0xFFFF)\
   2370     I = NTREEBINS-1;\
   2371   else {\
   2372     unsigned int K;\
   2373     __asm__("bsrl %1,%0\n\t" : "=r" (K) : "rm"  (X));\
   2374     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
   2375   }\
   2376 }
   2377 #else /* GNUC */
   2378 #define compute_tree_index(S, I)\
   2379 {\
   2380   size_t X = S >> TREEBIN_SHIFT;\
   2381   if (X == 0)\
   2382     I = 0;\
   2383   else if (X > 0xFFFF)\
   2384     I = NTREEBINS-1;\
   2385   else {\
   2386     unsigned int Y = (unsigned int)X;\
   2387     unsigned int N = ((Y - 0x100) >> 16) & 8;\
   2388     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
   2389     N += K;\
   2390     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
   2391     K = 14 - N + ((Y <<= K) >> 15);\
   2392     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
   2393   }\
   2394 }
   2395 #endif /* GNUC */
   2396 
   2397 /* Bit representing maximum resolved size in a treebin at i */
   2398 #define bit_for_tree_index(i) \
   2399    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
   2400 
   2401 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
   2402 #define leftshift_for_tree_index(i) \
   2403    ((i == NTREEBINS-1)? 0 : \
   2404     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
   2405 
   2406 /* The size of the smallest chunk held in bin with index i */
   2407 #define minsize_for_tree_index(i) \
   2408    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   2409    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
   2410 
   2411 
   2412 /* ------------------------ Operations on bin maps ----------------------- */
   2413 
   2414 /* bit corresponding to given index */
   2415 #define idx2bit(i)              ((binmap_t)(1) << (i))
   2416 
   2417 /* Mark/Clear bits with given index */
   2418 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
   2419 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
   2420 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
   2421 
   2422 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
   2423 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
   2424 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
   2425 
   2426 /* index corresponding to given bit */
   2427 
   2428 #if defined(__GNUC__) && defined(i386)
   2429 #define compute_bit2idx(X, I)\
   2430 {\
   2431   unsigned int J;\
   2432   __asm__("bsfl %1,%0\n\t" : "=r" (J) : "rm" (X));\
   2433   I = (bindex_t)J;\
   2434 }
   2435 
   2436 #else /* GNUC */
   2437 #if  USE_BUILTIN_FFS
   2438 #define compute_bit2idx(X, I) I = ffs(X)-1
   2439 
   2440 #else /* USE_BUILTIN_FFS */
   2441 #define compute_bit2idx(X, I)\
   2442 {\
   2443   unsigned int Y = X - 1;\
   2444   unsigned int K = Y >> (16-4) & 16;\
   2445   unsigned int N = K;        Y >>= K;\
   2446   N += K = Y >> (8-3) &  8;  Y >>= K;\
   2447   N += K = Y >> (4-2) &  4;  Y >>= K;\
   2448   N += K = Y >> (2-1) &  2;  Y >>= K;\
   2449   N += K = Y >> (1-0) &  1;  Y >>= K;\
   2450   I = (bindex_t)(N + Y);\
   2451 }
   2452 #endif /* USE_BUILTIN_FFS */
   2453 #endif /* GNUC */
   2454 
   2455 /* isolate the least set bit of a bitmap */
   2456 #define least_bit(x)         ((x) & -(x))
   2457 
   2458 /* mask with all bits to left of least bit of x on */
   2459 #define left_bits(x)         ((x<<1) | -(x<<1))
   2460 
   2461 /* mask with all bits to left of or equal to least bit of x on */
   2462 #define same_or_left_bits(x) ((x) | -(x))
   2463 
   2464 
   2465 /* ----------------------- Runtime Check Support ------------------------- */
   2466 
   2467 /*
   2468   For security, the main invariant is that malloc/free/etc never
   2469   writes to a static address other than malloc_state, unless static
   2470   malloc_state itself has been corrupted, which cannot occur via
   2471   malloc (because of these checks). In essence this means that we
   2472   believe all pointers, sizes, maps etc held in malloc_state, but
   2473   check all of those linked or offsetted from other embedded data
   2474   structures.  These checks are interspersed with main code in a way
   2475   that tends to minimize their run-time cost.
   2476 
   2477   When FOOTERS is defined, in addition to range checking, we also
   2478   verify footer fields of inuse chunks, which can be used guarantee
   2479   that the mstate controlling malloc/free is intact.  This is a
   2480   streamlined version of the approach described by William Robertson
   2481   et al in "Run-time Detection of Heap-based Overflows" LISA'03
   2482   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
   2483   of an inuse chunk holds the xor of its mstate and a random seed,
   2484   that is checked upon calls to free() and realloc().  This is
   2485   (probablistically) unguessable from outside the program, but can be
   2486   computed by any code successfully malloc'ing any chunk, so does not
   2487   itself provide protection against code that has already broken
   2488   security through some other means.  Unlike Robertson et al, we
   2489   always dynamically check addresses of all offset chunks (previous,
   2490   next, etc). This turns out to be cheaper than relying on hashes.
   2491 */
   2492 
   2493 #if !INSECURE
   2494 /* Check if address a is at least as high as any from MORECORE or MMAP */
   2495 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
   2496 /* Check if address of next chunk n is higher than base chunk p */
   2497 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
   2498 /* Check if p has its cinuse bit on */
   2499 #define ok_cinuse(p)     cinuse(p)
   2500 /* Check if p has its pinuse bit on */
   2501 #define ok_pinuse(p)     pinuse(p)
   2502 
   2503 #else /* !INSECURE */
   2504 #define ok_address(M, a) (1)
   2505 #define ok_next(b, n)    (1)
   2506 #define ok_cinuse(p)     (1)
   2507 #define ok_pinuse(p)     (1)
   2508 #endif /* !INSECURE */
   2509 
   2510 #if (FOOTERS && !INSECURE)
   2511 /* Check if (alleged) mstate m has expected magic field */
   2512 #define ok_magic(M)      ((M)->magic == mparams.magic)
   2513 #else  /* (FOOTERS && !INSECURE) */
   2514 #define ok_magic(M)      (1)
   2515 #endif /* (FOOTERS && !INSECURE) */
   2516 
   2517 
   2518 /* In gcc, use __builtin_expect to minimize impact of checks */
   2519 #if !INSECURE
   2520 #if defined(__GNUC__) && __GNUC__ >= 3
   2521 #define RTCHECK(e)  __builtin_expect(e, 1)
   2522 #else /* GNUC */
   2523 #define RTCHECK(e)  (e)
   2524 #endif /* GNUC */
   2525 #else /* !INSECURE */
   2526 #define RTCHECK(e)  (1)
   2527 #endif /* !INSECURE */
   2528 
   2529 /* macros to set up inuse chunks with or without footers */
   2530 
   2531 #if !FOOTERS
   2532 
   2533 #define mark_inuse_foot(M,p,s)
   2534 
   2535 /* Set cinuse bit and pinuse bit of next chunk */
   2536 #define set_inuse(M,p,s)\
   2537   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
   2538   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
   2539 
   2540 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
   2541 #define set_inuse_and_pinuse(M,p,s)\
   2542   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
   2543   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
   2544 
   2545 /* Set size, cinuse and pinuse bit of this chunk */
   2546 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
   2547   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
   2548 
   2549 #else /* FOOTERS */
   2550 
   2551 /* Set foot of inuse chunk to be xor of mstate and seed */
   2552 #define mark_inuse_foot(M,p,s)\
   2553   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
   2554 
   2555 #define get_mstate_for(p)\
   2556   ((mstate)(((mchunkptr)((char*)(p) +\
   2557     (chunksize(p))))->prev_foot ^ mparams.magic))
   2558 
   2559 #define set_inuse(M,p,s)\
   2560   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
   2561   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
   2562   mark_inuse_foot(M,p,s))
   2563 
   2564 #define set_inuse_and_pinuse(M,p,s)\
   2565   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
   2566   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
   2567  mark_inuse_foot(M,p,s))
   2568 
   2569 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
   2570   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
   2571   mark_inuse_foot(M, p, s))
   2572 
   2573 #endif /* !FOOTERS */
   2574 
   2575 /* ---------------------------- setting mparams -------------------------- */
   2576 
   2577 /* Initialize mparams */
   2578 static int init_mparams(void) {
   2579   if (mparams.page_size == 0) {
   2580     size_t s;
   2581 
   2582     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
   2583     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
   2584 #if MORECORE_CONTIGUOUS
   2585     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
   2586 #else  /* MORECORE_CONTIGUOUS */
   2587     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
   2588 #endif /* MORECORE_CONTIGUOUS */
   2589 
   2590 #if (FOOTERS && !INSECURE)
   2591     {
   2592 #if USE_DEV_RANDOM
   2593       int fd;
   2594       unsigned char buf[sizeof(size_t)];
   2595       /* Try to use /dev/urandom, else fall back on using time */
   2596       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
   2597           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
   2598         s = *((size_t *) buf);
   2599         close(fd);
   2600       }
   2601       else
   2602 #endif /* USE_DEV_RANDOM */
   2603         s = (size_t)(time(0) ^ (size_t)0x55555555U);
   2604 
   2605       s |= (size_t)8U;    /* ensure nonzero */
   2606       s &= ~(size_t)7U;   /* improve chances of fault for bad values */
   2607 
   2608     }
   2609 #else /* (FOOTERS && !INSECURE) */
   2610     s = (size_t)0x58585858U;
   2611 #endif /* (FOOTERS && !INSECURE) */
   2612     ACQUIRE_MAGIC_INIT_LOCK();
   2613     if (mparams.magic == 0) {
   2614       mparams.magic = s;
   2615       /* Set up lock for main malloc area */
   2616       INITIAL_LOCK(&gm->mutex);
   2617       gm->mflags = mparams.default_mflags;
   2618     }
   2619     RELEASE_MAGIC_INIT_LOCK();
   2620 
   2621 #ifndef WIN32
   2622     mparams.page_size = malloc_getpagesize;
   2623     mparams.granularity = ((DEFAULT_GRANULARITY != 0)?
   2624                            DEFAULT_GRANULARITY : mparams.page_size);
   2625 #else /* WIN32 */
   2626     {
   2627       SYSTEM_INFO system_info;
   2628       GetSystemInfo(&system_info);
   2629       mparams.page_size = system_info.dwPageSize;
   2630       mparams.granularity = system_info.dwAllocationGranularity;
   2631     }
   2632 #endif /* WIN32 */
   2633 
   2634     /* Sanity-check configuration:
   2635        size_t must be unsigned and as wide as pointer type.
   2636        ints must be at least 4 bytes.
   2637        alignment must be at least 8.
   2638        Alignment, min chunk size, and page size must all be powers of 2.
   2639     */
   2640     if ((sizeof(size_t) != sizeof(char*)) ||
   2641         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
   2642         (sizeof(int) < 4)  ||
   2643         (MALLOC_ALIGNMENT < (size_t)8U) ||
   2644         ((MALLOC_ALIGNMENT    & (MALLOC_ALIGNMENT-SIZE_T_ONE))    != 0) ||
   2645         ((MCHUNK_SIZE         & (MCHUNK_SIZE-SIZE_T_ONE))         != 0) ||
   2646         ((mparams.granularity & (mparams.granularity-SIZE_T_ONE)) != 0) ||
   2647         ((mparams.page_size   & (mparams.page_size-SIZE_T_ONE))   != 0))
   2648       ABORT;
   2649   }
   2650   return 0;
   2651 }
   2652 
   2653 /* support for mallopt */
   2654 static int change_mparam(int param_number, int value) {
   2655   size_t val = (size_t)value;
   2656   init_mparams();
   2657   switch(param_number) {
   2658   case M_TRIM_THRESHOLD:
   2659     mparams.trim_threshold = val;
   2660     return 1;
   2661   case M_GRANULARITY:
   2662     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
   2663       mparams.granularity = val;
   2664       return 1;
   2665     }
   2666     else
   2667       return 0;
   2668   case M_MMAP_THRESHOLD:
   2669     mparams.mmap_threshold = val;
   2670     return 1;
   2671   default:
   2672     return 0;
   2673   }
   2674 }
   2675 
   2676 #if DEBUG
   2677 /* ------------------------- Debugging Support --------------------------- */
   2678 
   2679 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
   2680 static void do_check_any_chunk(mstate m, mchunkptr p) {
   2681   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
   2682   assert(ok_address(m, p));
   2683 }
   2684 
   2685 /* Check properties of top chunk */
   2686 static void do_check_top_chunk(mstate m, mchunkptr p) {
   2687   msegmentptr sp = segment_holding(m, (char*)p);
   2688   size_t  sz = chunksize(p);
   2689   assert(sp != 0);
   2690   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
   2691   assert(ok_address(m, p));
   2692   assert(sz == m->topsize);
   2693   assert(sz > 0);
   2694   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
   2695   assert(pinuse(p));
   2696   assert(!next_pinuse(p));
   2697 }
   2698 
   2699 /* Check properties of (inuse) mmapped chunks */
   2700 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
   2701   size_t  sz = chunksize(p);
   2702   size_t len = (sz + (p->prev_foot & ~IS_MMAPPED_BIT) + MMAP_FOOT_PAD);
   2703   assert(is_mmapped(p));
   2704   assert(use_mmap(m));
   2705   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
   2706   assert(ok_address(m, p));
   2707   assert(!is_small(sz));
   2708   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
   2709   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
   2710   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
   2711 }
   2712 
   2713 /* Check properties of inuse chunks */
   2714 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
   2715   do_check_any_chunk(m, p);
   2716   assert(cinuse(p));
   2717   assert(next_pinuse(p));
   2718   /* If not pinuse and not mmapped, previous chunk has OK offset */
   2719   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
   2720   if (is_mmapped(p))
   2721     do_check_mmapped_chunk(m, p);
   2722 }
   2723 
   2724 /* Check properties of free chunks */
   2725 static void do_check_free_chunk(mstate m, mchunkptr p) {
   2726   size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
   2727   mchunkptr next = chunk_plus_offset(p, sz);
   2728   do_check_any_chunk(m, p);
   2729   assert(!cinuse(p));
   2730   assert(!next_pinuse(p));
   2731   assert (!is_mmapped(p));
   2732   if (p != m->dv && p != m->top) {
   2733     if (sz >= MIN_CHUNK_SIZE) {
   2734       assert((sz & CHUNK_ALIGN_MASK) == 0);
   2735       assert(is_aligned(chunk2mem(p)));
   2736       assert(next->prev_foot == sz);
   2737       assert(pinuse(p));
   2738       assert (next == m->top || cinuse(next));
   2739       assert(p->fd->bk == p);
   2740       assert(p->bk->fd == p);
   2741     }
   2742     else  /* markers are always of size SIZE_T_SIZE */
   2743       assert(sz == SIZE_T_SIZE);
   2744   }
   2745 }
   2746 
   2747 /* Check properties of malloced chunks at the point they are malloced */
   2748 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
   2749   if (mem != 0) {
   2750     mchunkptr p = mem2chunk(mem);
   2751     size_t sz = p->head & ~(PINUSE_BIT|CINUSE_BIT);
   2752     do_check_inuse_chunk(m, p);
   2753     assert((sz & CHUNK_ALIGN_MASK) == 0);
   2754     assert(sz >= MIN_CHUNK_SIZE);
   2755     assert(sz >= s);
   2756     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
   2757     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
   2758   }
   2759 }
   2760 
   2761 /* Check a tree and its subtrees.  */
   2762 static void do_check_tree(mstate m, tchunkptr t) {
   2763   tchunkptr head = 0;
   2764   tchunkptr u = t;
   2765   bindex_t tindex = t->index;
   2766   size_t tsize = chunksize(t);
   2767   bindex_t idx;
   2768   compute_tree_index(tsize, idx);
   2769   assert(tindex == idx);
   2770   assert(tsize >= MIN_LARGE_SIZE);
   2771   assert(tsize >= minsize_for_tree_index(idx));
   2772   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
   2773 
   2774   do { /* traverse through chain of same-sized nodes */
   2775     do_check_any_chunk(m, ((mchunkptr)u));
   2776     assert(u->index == tindex);
   2777     assert(chunksize(u) == tsize);
   2778     assert(!cinuse(u));
   2779     assert(!next_pinuse(u));
   2780     assert(u->fd->bk == u);
   2781     assert(u->bk->fd == u);
   2782     if (u->parent == 0) {
   2783       assert(u->child[0] == 0);
   2784       assert(u->child[1] == 0);
   2785     }
   2786     else {
   2787       assert(head == 0); /* only one node on chain has parent */
   2788       head = u;
   2789       assert(u->parent != u);
   2790       assert (u->parent->child[0] == u ||
   2791               u->parent->child[1] == u ||
   2792               *((tbinptr*)(u->parent)) == u);
   2793       if (u->child[0] != 0) {
   2794         assert(u->child[0]->parent == u);
   2795         assert(u->child[0] != u);
   2796         do_check_tree(m, u->child[0]);
   2797       }
   2798       if (u->child[1] != 0) {
   2799         assert(u->child[1]->parent == u);
   2800         assert(u->child[1] != u);
   2801         do_check_tree(m, u->child[1]);
   2802       }
   2803       if (u->child[0] != 0 && u->child[1] != 0) {
   2804         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
   2805       }
   2806     }
   2807     u = u->fd;
   2808   } while (u != t);
   2809   assert(head != 0);
   2810 }
   2811 
   2812 /*  Check all the chunks in a treebin.  */
   2813 static void do_check_treebin(mstate m, bindex_t i) {
   2814   tbinptr* tb = treebin_at(m, i);
   2815   tchunkptr t = *tb;
   2816   int empty = (m->treemap & (1U << i)) == 0;
   2817   if (t == 0)
   2818     assert(empty);
   2819   if (!empty)
   2820     do_check_tree(m, t);
   2821 }
   2822 
   2823 /*  Check all the chunks in a smallbin.  */
   2824 static void do_check_smallbin(mstate m, bindex_t i) {
   2825   sbinptr b = smallbin_at(m, i);
   2826   mchunkptr p = b->bk;
   2827   unsigned int empty = (m->smallmap & (1U << i)) == 0;
   2828   if (p == b)
   2829     assert(empty);
   2830   if (!empty) {
   2831     for (; p != b; p = p->bk) {
   2832       size_t size = chunksize(p);
   2833       mchunkptr q;
   2834       /* each chunk claims to be free */
   2835       do_check_free_chunk(m, p);
   2836       /* chunk belongs in bin */
   2837       assert(small_index(size) == i);
   2838       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
   2839       /* chunk is followed by an inuse chunk */
   2840       q = next_chunk(p);
   2841       if (q->head != FENCEPOST_HEAD)
   2842         do_check_inuse_chunk(m, q);
   2843     }
   2844   }
   2845 }
   2846 
   2847 /* Find x in a bin. Used in other check functions. */
   2848 static int bin_find(mstate m, mchunkptr x) {
   2849   size_t size = chunksize(x);
   2850   if (is_small(size)) {
   2851     bindex_t sidx = small_index(size);
   2852     sbinptr b = smallbin_at(m, sidx);
   2853     if (smallmap_is_marked(m, sidx)) {
   2854       mchunkptr p = b;
   2855       do {
   2856         if (p == x)
   2857           return 1;
   2858       } while ((p = p->fd) != b);
   2859     }
   2860   }
   2861   else {
   2862     bindex_t tidx;
   2863     compute_tree_index(size, tidx);
   2864     if (treemap_is_marked(m, tidx)) {
   2865       tchunkptr t = *treebin_at(m, tidx);
   2866       size_t sizebits = size << leftshift_for_tree_index(tidx);
   2867       while (t != 0 && chunksize(t) != size) {
   2868         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
   2869         sizebits <<= 1;
   2870       }
   2871       if (t != 0) {
   2872         tchunkptr u = t;
   2873         do {
   2874           if (u == (tchunkptr)x)
   2875             return 1;
   2876         } while ((u = u->fd) != t);
   2877       }
   2878     }
   2879   }
   2880   return 0;
   2881 }
   2882 
   2883 /* Traverse each chunk and check it; return total */
   2884 static size_t traverse_and_check(mstate m) {
   2885   size_t sum = 0;
   2886   if (is_initialized(m)) {
   2887     msegmentptr s = &m->seg;
   2888     sum += m->topsize + TOP_FOOT_SIZE;
   2889     while (s != 0) {
   2890       mchunkptr q = align_as_chunk(s->base);
   2891       mchunkptr lastq = 0;
   2892       assert(pinuse(q));
   2893       while (segment_holds(s, q) &&
   2894              q != m->top && q->head != FENCEPOST_HEAD) {
   2895         sum += chunksize(q);
   2896         if (cinuse(q)) {
   2897           assert(!bin_find(m, q));
   2898           do_check_inuse_chunk(m, q);
   2899         }
   2900         else {
   2901           assert(q == m->dv || bin_find(m, q));
   2902           assert(lastq == 0 || cinuse(lastq)); /* Not 2 consecutive free */
   2903           do_check_free_chunk(m, q);
   2904         }
   2905         lastq = q;
   2906         q = next_chunk(q);
   2907       }
   2908       s = s->next;
   2909     }
   2910   }
   2911   return sum;
   2912 }
   2913 
   2914 /* Check all properties of malloc_state. */
   2915 static void do_check_malloc_state(mstate m) {
   2916   bindex_t i;
   2917   size_t total;
   2918   /* check bins */
   2919   for (i = 0; i < NSMALLBINS; ++i)
   2920     do_check_smallbin(m, i);
   2921   for (i = 0; i < NTREEBINS; ++i)
   2922     do_check_treebin(m, i);
   2923 
   2924   if (m->dvsize != 0) { /* check dv chunk */
   2925     do_check_any_chunk(m, m->dv);
   2926     assert(m->dvsize == chunksize(m->dv));
   2927     assert(m->dvsize >= MIN_CHUNK_SIZE);
   2928     assert(bin_find(m, m->dv) == 0);
   2929   }
   2930 
   2931   if (m->top != 0) {   /* check top chunk */
   2932     do_check_top_chunk(m, m->top);
   2933     assert(m->topsize == chunksize(m->top));
   2934     assert(m->topsize > 0);
   2935     assert(bin_find(m, m->top) == 0);
   2936   }
   2937 
   2938   total = traverse_and_check(m);
   2939   assert(total <= m->footprint);
   2940   assert(m->footprint <= m->max_footprint);
   2941 #if USE_MAX_ALLOWED_FOOTPRINT
   2942   //TODO: change these assertions if we allow for shrinking.
   2943   assert(m->footprint <= m->max_allowed_footprint);
   2944   assert(m->max_footprint <= m->max_allowed_footprint);
   2945 #endif
   2946 }
   2947 #endif /* DEBUG */
   2948 
   2949 /* ----------------------------- statistics ------------------------------ */
   2950 
   2951 #if !NO_MALLINFO
   2952 static struct mallinfo internal_mallinfo(mstate m) {
   2953   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
   2954   if (!PREACTION(m)) {
   2955     check_malloc_state(m);
   2956     if (is_initialized(m)) {
   2957       size_t nfree = SIZE_T_ONE; /* top always free */
   2958       size_t mfree = m->topsize + TOP_FOOT_SIZE;
   2959       size_t sum = mfree;
   2960       msegmentptr s = &m->seg;
   2961       while (s != 0) {
   2962         mchunkptr q = align_as_chunk(s->base);
   2963         while (segment_holds(s, q) &&
   2964                q != m->top && q->head != FENCEPOST_HEAD) {
   2965           size_t sz = chunksize(q);
   2966           sum += sz;
   2967           if (!cinuse(q)) {
   2968             mfree += sz;
   2969             ++nfree;
   2970           }
   2971           q = next_chunk(q);
   2972         }
   2973         s = s->next;
   2974       }
   2975 
   2976       nm.arena    = sum;
   2977       nm.ordblks  = nfree;
   2978       nm.hblkhd   = m->footprint - sum;
   2979       nm.usmblks  = m->max_footprint;
   2980       nm.uordblks = m->footprint - mfree;
   2981       nm.fordblks = mfree;
   2982       nm.keepcost = m->topsize;
   2983     }
   2984 
   2985     POSTACTION(m);
   2986   }
   2987   return nm;
   2988 }
   2989 #endif /* !NO_MALLINFO */
   2990 
   2991 static void internal_malloc_stats(mstate m) {
   2992   if (!PREACTION(m)) {
   2993     size_t maxfp = 0;
   2994     size_t fp = 0;
   2995     size_t used = 0;
   2996     check_malloc_state(m);
   2997     if (is_initialized(m)) {
   2998       msegmentptr s = &m->seg;
   2999       maxfp = m->max_footprint;
   3000       fp = m->footprint;
   3001       used = fp - (m->topsize + TOP_FOOT_SIZE);
   3002 
   3003       while (s != 0) {
   3004         mchunkptr q = align_as_chunk(s->base);
   3005         while (segment_holds(s, q) &&
   3006                q != m->top && q->head != FENCEPOST_HEAD) {
   3007           if (!cinuse(q))
   3008             used -= chunksize(q);
   3009           q = next_chunk(q);
   3010         }
   3011         s = s->next;
   3012       }
   3013     }
   3014 
   3015     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
   3016     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
   3017     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
   3018 
   3019     POSTACTION(m);
   3020   }
   3021 }
   3022 
   3023 /* ----------------------- Operations on smallbins ----------------------- */
   3024 
   3025 /*
   3026   Various forms of linking and unlinking are defined as macros.  Even
   3027   the ones for trees, which are very long but have very short typical
   3028   paths.  This is ugly but reduces reliance on inlining support of
   3029   compilers.
   3030 */
   3031 
   3032 /* Link a free chunk into a smallbin  */
   3033 #define insert_small_chunk(M, P, S) {\
   3034   bindex_t I  = small_index(S);\
   3035   mchunkptr B = smallbin_at(M, I);\
   3036   mchunkptr F = B;\
   3037   assert(S >= MIN_CHUNK_SIZE);\
   3038   if (!smallmap_is_marked(M, I))\
   3039     mark_smallmap(M, I);\
   3040   else if (RTCHECK(ok_address(M, B->fd)))\
   3041     F = B->fd;\
   3042   else {\
   3043     CORRUPTION_ERROR_ACTION(M);\
   3044   }\
   3045   B->fd = P;\
   3046   F->bk = P;\
   3047   P->fd = F;\
   3048   P->bk = B;\
   3049 }
   3050 
   3051 /* Unlink a chunk from a smallbin
   3052  * Added check: if F->bk != P or B->fd != P, we have double linked list
   3053  * corruption, and abort.
   3054  */
   3055 #define unlink_small_chunk(M, P, S) {\
   3056   mchunkptr F = P->fd;\
   3057   mchunkptr B = P->bk;\
   3058   bindex_t I = small_index(S);\
   3059   if (__builtin_expect (F->bk != P || B->fd != P, 0))\
   3060     CORRUPTION_ERROR_ACTION(M);\
   3061   assert(P != B);\
   3062   assert(P != F);\
   3063   assert(chunksize(P) == small_index2size(I));\
   3064   if (F == B)\
   3065     clear_smallmap(M, I);\
   3066   else if (RTCHECK((F == smallbin_at(M,I) || ok_address(M, F)) &&\
   3067                    (B == smallbin_at(M,I) || ok_address(M, B)))) {\
   3068     F->bk = B;\
   3069     B->fd = F;\
   3070   }\
   3071   else {\
   3072     CORRUPTION_ERROR_ACTION(M);\
   3073   }\
   3074 }
   3075 
   3076 /* Unlink the first chunk from a smallbin
   3077  * Added check: if F->bk != P or B->fd != P, we have double linked list
   3078  * corruption, and abort.
   3079  */
   3080 #define unlink_first_small_chunk(M, B, P, I) {\
   3081   mchunkptr F = P->fd;\
   3082   if (__builtin_expect (F->bk != P || B->fd != P, 0))\
   3083     CORRUPTION_ERROR_ACTION(M);\
   3084   assert(P != B);\
   3085   assert(P != F);\
   3086   assert(chunksize(P) == small_index2size(I));\
   3087   if (B == F)\
   3088     clear_smallmap(M, I);\
   3089   else if (RTCHECK(ok_address(M, F))) {\
   3090     B->fd = F;\
   3091     F->bk = B;\
   3092   }\
   3093   else {\
   3094     CORRUPTION_ERROR_ACTION(M);\
   3095   }\
   3096 }
   3097 
   3098 /* Replace dv node, binning the old one */
   3099 /* Used only when dvsize known to be small */
   3100 #define replace_dv(M, P, S) {\
   3101   size_t DVS = M->dvsize;\
   3102   if (DVS != 0) {\
   3103     mchunkptr DV = M->dv;\
   3104     assert(is_small(DVS));\
   3105     insert_small_chunk(M, DV, DVS);\
   3106   }\
   3107   M->dvsize = S;\
   3108   M->dv = P;\
   3109 }
   3110 
   3111 /* ------------------------- Operations on trees ------------------------- */
   3112 
   3113 /* Insert chunk into tree */
   3114 #define insert_large_chunk(M, X, S) {\
   3115   tbinptr* H;\
   3116   bindex_t I;\
   3117   compute_tree_index(S, I);\
   3118   H = treebin_at(M, I);\
   3119   X->index = I;\
   3120   X->child[0] = X->child[1] = 0;\
   3121   if (!treemap_is_marked(M, I)) {\
   3122     mark_treemap(M, I);\
   3123     *H = X;\
   3124     X->parent = (tchunkptr)H;\
   3125     X->fd = X->bk = X;\
   3126   }\
   3127   else {\
   3128     tchunkptr T = *H;\
   3129     size_t K = S << leftshift_for_tree_index(I);\
   3130     for (;;) {\
   3131       if (chunksize(T) != S) {\
   3132         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
   3133         K <<= 1;\
   3134         if (*C != 0)\
   3135           T = *C;\
   3136         else if (RTCHECK(ok_address(M, C))) {\
   3137           *C = X;\
   3138           X->parent = T;\
   3139           X->fd = X->bk = X;\
   3140           break;\
   3141         }\
   3142         else {\
   3143           CORRUPTION_ERROR_ACTION(M);\
   3144           break;\
   3145         }\
   3146       }\
   3147       else {\
   3148         tchunkptr F = T->fd;\
   3149         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
   3150           T->fd = F->bk = X;\
   3151           X->fd = F;\
   3152           X->bk = T;\
   3153           X->parent = 0;\
   3154           break;\
   3155         }\
   3156         else {\
   3157           CORRUPTION_ERROR_ACTION(M);\
   3158           break;\
   3159         }\
   3160       }\
   3161     }\
   3162   }\
   3163 }
   3164 
   3165 /*
   3166   Unlink steps:
   3167 
   3168   1. If x is a chained node, unlink it from its same-sized fd/bk links
   3169      and choose its bk node as its replacement.
   3170   2. If x was the last node of its size, but not a leaf node, it must
   3171      be replaced with a leaf node (not merely one with an open left or
   3172      right), to make sure that lefts and rights of descendents
   3173      correspond properly to bit masks.  We use the rightmost descendent
   3174      of x.  We could use any other leaf, but this is easy to locate and
   3175      tends to counteract removal of leftmosts elsewhere, and so keeps
   3176      paths shorter than minimally guaranteed.  This doesn't loop much
   3177      because on average a node in a tree is near the bottom.
   3178   3. If x is the base of a chain (i.e., has parent links) relink
   3179      x's parent and children to x's replacement (or null if none).
   3180 
   3181   Added check: if F->bk != X or R->fd != X, we have double linked list
   3182   corruption, and abort.
   3183 */
   3184 
   3185 #define unlink_large_chunk(M, X) {\
   3186   tchunkptr XP = X->parent;\
   3187   tchunkptr R;\
   3188   if (X->bk != X) {\
   3189     tchunkptr F = X->fd;\
   3190     R = X->bk;\
   3191     if (__builtin_expect (F->bk != X || R->fd != X, 0))\
   3192       CORRUPTION_ERROR_ACTION(M);\
   3193     if (RTCHECK(ok_address(M, F))) {\
   3194       F->bk = R;\
   3195       R->fd = F;\
   3196     }\
   3197     else {\
   3198       CORRUPTION_ERROR_ACTION(M);\
   3199     }\
   3200   }\
   3201   else {\
   3202     tchunkptr* RP;\
   3203     if (((R = *(RP = &(X->child[1]))) != 0) ||\
   3204         ((R = *(RP = &(X->child[0]))) != 0)) {\
   3205       tchunkptr* CP;\
   3206       while ((*(CP = &(R->child[1])) != 0) ||\
   3207              (*(CP = &(R->child[0])) != 0)) {\
   3208         R = *(RP = CP);\
   3209       }\
   3210       if (RTCHECK(ok_address(M, RP)))\
   3211         *RP = 0;\
   3212       else {\
   3213         CORRUPTION_ERROR_ACTION(M);\
   3214       }\
   3215     }\
   3216   }\
   3217   if (XP != 0) {\
   3218     tbinptr* H = treebin_at(M, X->index);\
   3219     if (X == *H) {\
   3220       if ((*H = R) == 0) \
   3221         clear_treemap(M, X->index);\
   3222     }\
   3223     else if (RTCHECK(ok_address(M, XP))) {\
   3224       if (XP->child[0] == X) \
   3225         XP->child[0] = R;\
   3226       else \
   3227         XP->child[1] = R;\
   3228     }\
   3229     else\
   3230       CORRUPTION_ERROR_ACTION(M);\
   3231     if (R != 0) {\
   3232       if (RTCHECK(ok_address(M, R))) {\
   3233         tchunkptr C0, C1;\
   3234         R->parent = XP;\
   3235         if ((C0 = X->child[0]) != 0) {\
   3236           if (RTCHECK(ok_address(M, C0))) {\
   3237             R->child[0] = C0;\
   3238             C0->parent = R;\
   3239           }\
   3240           else\
   3241             CORRUPTION_ERROR_ACTION(M);\
   3242         }\
   3243         if ((C1 = X->child[1]) != 0) {\
   3244           if (RTCHECK(ok_address(M, C1))) {\
   3245             R->child[1] = C1;\
   3246             C1->parent = R;\
   3247           }\
   3248           else\
   3249             CORRUPTION_ERROR_ACTION(M);\
   3250         }\
   3251       }\
   3252       else\
   3253         CORRUPTION_ERROR_ACTION(M);\
   3254     }\
   3255   }\
   3256 }
   3257 
   3258 /* Relays to large vs small bin operations */
   3259 
   3260 #define insert_chunk(M, P, S)\
   3261   if (is_small(S)) insert_small_chunk(M, P, S)\
   3262   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
   3263 
   3264 #define unlink_chunk(M, P, S)\
   3265   if (is_small(S)) unlink_small_chunk(M, P, S)\
   3266   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
   3267 
   3268 
   3269 /* Relays to internal calls to malloc/free from realloc, memalign etc */
   3270 
   3271 #if ONLY_MSPACES
   3272 #define internal_malloc(m, b) mspace_malloc(m, b)
   3273 #define internal_free(m, mem) mspace_free(m,mem);
   3274 #else /* ONLY_MSPACES */
   3275 #if MSPACES
   3276 #define internal_malloc(m, b)\
   3277    (m == gm)? dlmalloc(b) : mspace_malloc(m, b)
   3278 #define internal_free(m, mem)\
   3279    if (m == gm) dlfree(mem); else mspace_free(m,mem);
   3280 #else /* MSPACES */
   3281 #define internal_malloc(m, b) dlmalloc(b)
   3282 #define internal_free(m, mem) dlfree(mem)
   3283 #endif /* MSPACES */
   3284 #endif /* ONLY_MSPACES */
   3285 
   3286 /* -----------------------  Direct-mmapping chunks ----------------------- */
   3287 
   3288 /*
   3289   Directly mmapped chunks are set up with an offset to the start of
   3290   the mmapped region stored in the prev_foot field of the chunk. This
   3291   allows reconstruction of the required argument to MUNMAP when freed,
   3292   and also allows adjustment of the returned chunk to meet alignment
   3293   requirements (especially in memalign).  There is also enough space
   3294   allocated to hold a fake next chunk of size SIZE_T_SIZE to maintain
   3295   the PINUSE bit so frees can be checked.
   3296 */
   3297 
   3298 /* Malloc using mmap */
   3299 static void* mmap_alloc(mstate m, size_t nb) {
   3300   size_t mmsize = granularity_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
   3301 #if USE_MAX_ALLOWED_FOOTPRINT
   3302   size_t new_footprint = m->footprint + mmsize;
   3303   if (new_footprint <= m->footprint ||  /* Check for wrap around 0 */
   3304       new_footprint > m->max_allowed_footprint)
   3305     return 0;
   3306 #endif
   3307   if (mmsize > nb) {     /* Check for wrap around 0 */
   3308     char* mm = (char*)(DIRECT_MMAP(mmsize));
   3309     if (mm != CMFAIL) {
   3310       size_t offset = align_offset(chunk2mem(mm));
   3311       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
   3312       mchunkptr p = (mchunkptr)(mm + offset);
   3313       p->prev_foot = offset | IS_MMAPPED_BIT;
   3314       (p)->head = (psize|CINUSE_BIT);
   3315       mark_inuse_foot(m, p, psize);
   3316       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
   3317       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
   3318 
   3319       if (mm < m->least_addr)
   3320         m->least_addr = mm;
   3321       if ((m->footprint += mmsize) > m->max_footprint)
   3322         m->max_footprint = m->footprint;
   3323       assert(is_aligned(chunk2mem(p)));
   3324       check_mmapped_chunk(m, p);
   3325       return chunk2mem(p);
   3326     }
   3327   }
   3328   return 0;
   3329 }
   3330 
   3331 /* Realloc using mmap */
   3332 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb) {
   3333   size_t oldsize = chunksize(oldp);
   3334   if (is_small(nb)) /* Can't shrink mmap regions below small size */
   3335     return 0;
   3336   /* Keep old chunk if big enough but not too big */
   3337   if (oldsize >= nb + SIZE_T_SIZE &&
   3338       (oldsize - nb) <= (mparams.granularity << 1))
   3339     return oldp;
   3340   else {
   3341     size_t offset = oldp->prev_foot & ~IS_MMAPPED_BIT;
   3342     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
   3343     size_t newmmsize = granularity_align(nb + SIX_SIZE_T_SIZES +
   3344                                          CHUNK_ALIGN_MASK);
   3345     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
   3346                                   oldmmsize, newmmsize, 1);
   3347     if (cp != CMFAIL) {
   3348       mchunkptr newp = (mchunkptr)(cp + offset);
   3349       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
   3350       newp->head = (psize|CINUSE_BIT);
   3351       mark_inuse_foot(m, newp, psize);
   3352       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
   3353       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
   3354 
   3355       if (cp < m->least_addr)
   3356         m->least_addr = cp;
   3357       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
   3358         m->max_footprint = m->footprint;
   3359       check_mmapped_chunk(m, newp);
   3360       return newp;
   3361     }
   3362   }
   3363   return 0;
   3364 }
   3365 
   3366 /* -------------------------- mspace management -------------------------- */
   3367 
   3368 /* Initialize top chunk and its size */
   3369 static void init_top(mstate m, mchunkptr p, size_t psize) {
   3370   /* Ensure alignment */
   3371   size_t offset = align_offset(chunk2mem(p));
   3372   p = (mchunkptr)((char*)p + offset);
   3373   psize -= offset;
   3374 
   3375   m->top = p;
   3376   m->topsize = psize;
   3377   p->head = psize | PINUSE_BIT;
   3378   /* set size of fake trailing chunk holding overhead space only once */
   3379   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
   3380   m->trim_check = mparams.trim_threshold; /* reset on each update */
   3381 }
   3382 
   3383 /* Initialize bins for a new mstate that is otherwise zeroed out */
   3384 static void init_bins(mstate m) {
   3385   /* Establish circular links for smallbins */
   3386   bindex_t i;
   3387   for (i = 0; i < NSMALLBINS; ++i) {
   3388     sbinptr bin = smallbin_at(m,i);
   3389     bin->fd = bin->bk = bin;
   3390   }
   3391 }
   3392 
   3393 #if PROCEED_ON_ERROR
   3394 
   3395 /* default corruption action */
   3396 static void reset_on_error(mstate m) {
   3397   int i;
   3398   ++malloc_corruption_error_count;
   3399   /* Reinitialize fields to forget about all memory */
   3400   m->smallbins = m->treebins = 0;
   3401   m->dvsize = m->topsize = 0;
   3402   m->seg.base = 0;
   3403   m->seg.size = 0;
   3404   m->seg.next = 0;
   3405   m->top = m->dv = 0;
   3406   for (i = 0; i < NTREEBINS; ++i)
   3407     *treebin_at(m, i) = 0;
   3408   init_bins(m);
   3409 }
   3410 #endif /* PROCEED_ON_ERROR */
   3411 
   3412 /* Allocate chunk and prepend remainder with chunk in successor base. */
   3413 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
   3414                            size_t nb) {
   3415   mchunkptr p = align_as_chunk(newbase);
   3416   mchunkptr oldfirst = align_as_chunk(oldbase);
   3417   size_t psize = (char*)oldfirst - (char*)p;
   3418   mchunkptr q = chunk_plus_offset(p, nb);
   3419   size_t qsize = psize - nb;
   3420   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
   3421 
   3422   assert((char*)oldfirst > (char*)q);
   3423   assert(pinuse(oldfirst));
   3424   assert(qsize >= MIN_CHUNK_SIZE);
   3425 
   3426   /* consolidate remainder with first chunk of old base */
   3427   if (oldfirst == m->top) {
   3428     size_t tsize = m->topsize += qsize;
   3429     m->top = q;
   3430     q->head = tsize | PINUSE_BIT;
   3431     check_top_chunk(m, q);
   3432   }
   3433   else if (oldfirst == m->dv) {
   3434     size_t dsize = m->dvsize += qsize;
   3435     m->dv = q;
   3436     set_size_and_pinuse_of_free_chunk(q, dsize);
   3437   }
   3438   else {
   3439     if (!cinuse(oldfirst)) {
   3440       size_t nsize = chunksize(oldfirst);
   3441       unlink_chunk(m, oldfirst, nsize);
   3442       oldfirst = chunk_plus_offset(oldfirst, nsize);
   3443       qsize += nsize;
   3444     }
   3445     set_free_with_pinuse(q, qsize, oldfirst);
   3446     insert_chunk(m, q, qsize);
   3447     check_free_chunk(m, q);
   3448   }
   3449 
   3450   check_malloced_chunk(m, chunk2mem(p), nb);
   3451   return chunk2mem(p);
   3452 }
   3453 
   3454 
   3455 /* Add a segment to hold a new noncontiguous region */
   3456 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
   3457   /* Determine locations and sizes of segment, fenceposts, old top */
   3458   char* old_top = (char*)m->top;
   3459   msegmentptr oldsp = segment_holding(m, old_top);
   3460   char* old_end = oldsp->base + oldsp->size;
   3461   size_t ssize = pad_request(sizeof(struct malloc_segment));
   3462   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
   3463   size_t offset = align_offset(chunk2mem(rawsp));
   3464   char* asp = rawsp + offset;
   3465   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
   3466   mchunkptr sp = (mchunkptr)csp;
   3467   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
   3468   mchunkptr tnext = chunk_plus_offset(sp, ssize);
   3469   mchunkptr p = tnext;
   3470   int nfences = 0;
   3471 
   3472   /* reset top to new space */
   3473   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
   3474 
   3475   /* Set up segment record */
   3476   assert(is_aligned(ss));
   3477   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
   3478   *ss = m->seg; /* Push current record */
   3479   m->seg.base = tbase;
   3480   m->seg.size = tsize;
   3481   m->seg.sflags = mmapped;
   3482   m->seg.next = ss;
   3483 
   3484   /* Insert trailing fenceposts */
   3485   for (;;) {
   3486     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
   3487     p->head = FENCEPOST_HEAD;
   3488     ++nfences;
   3489     if ((char*)(&(nextp->head)) < old_end)
   3490       p = nextp;
   3491     else
   3492       break;
   3493   }
   3494   assert(nfences >= 2);
   3495 
   3496   /* Insert the rest of old top into a bin as an ordinary free chunk */
   3497   if (csp != old_top) {
   3498     mchunkptr q = (mchunkptr)old_top;
   3499     size_t psize = csp - old_top;
   3500     mchunkptr tn = chunk_plus_offset(q, psize);
   3501     set_free_with_pinuse(q, psize, tn);
   3502     insert_chunk(m, q, psize);
   3503   }
   3504 
   3505   check_top_chunk(m, m->top);
   3506 }
   3507 
   3508 /* -------------------------- System allocation -------------------------- */
   3509 
   3510 /* Get memory from system using MORECORE or MMAP */
   3511 static void* sys_alloc(mstate m, size_t nb) {
   3512   char* tbase = CMFAIL;
   3513   size_t tsize = 0;
   3514   flag_t mmap_flag = 0;
   3515 
   3516   init_mparams();
   3517 
   3518   /* Directly map large chunks */
   3519   if (use_mmap(m) && nb >= mparams.mmap_threshold) {
   3520     void* mem = mmap_alloc(m, nb);
   3521     if (mem != 0)
   3522       return mem;
   3523   }
   3524 
   3525 #if USE_MAX_ALLOWED_FOOTPRINT
   3526   /* Make sure the footprint doesn't grow past max_allowed_footprint.
   3527    * This covers all cases except for where we need to page align, below.
   3528    */
   3529   {
   3530     size_t new_footprint = m->footprint +
   3531                            granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
   3532     if (new_footprint <= m->footprint ||  /* Check for wrap around 0 */
   3533         new_footprint > m->max_allowed_footprint)
   3534       return 0;
   3535   }
   3536 #endif
   3537 
   3538   /*
   3539     Try getting memory in any of three ways (in most-preferred to
   3540     least-preferred order):
   3541     1. A call to MORECORE that can normally contiguously extend memory.
   3542        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
   3543        or main space is mmapped or a previous contiguous call failed)
   3544     2. A call to MMAP new space (disabled if not HAVE_MMAP).
   3545        Note that under the default settings, if MORECORE is unable to
   3546        fulfill a request, and HAVE_MMAP is true, then mmap is
   3547        used as a noncontiguous system allocator. This is a useful backup
   3548        strategy for systems with holes in address spaces -- in this case
   3549        sbrk cannot contiguously expand the heap, but mmap may be able to
   3550        find space.
   3551     3. A call to MORECORE that cannot usually contiguously extend memory.
   3552        (disabled if not HAVE_MORECORE)
   3553   */
   3554 
   3555   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
   3556     char* br = CMFAIL;
   3557     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
   3558     size_t asize = 0;
   3559     ACQUIRE_MORECORE_LOCK();
   3560 
   3561     if (ss == 0) {  /* First time through or recovery */
   3562       char* base = (char*)CALL_MORECORE(0);
   3563       if (base != CMFAIL) {
   3564         asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
   3565         /* Adjust to end on a page boundary */
   3566         if (!is_page_aligned(base)) {
   3567           asize += (page_align((size_t)base) - (size_t)base);
   3568 #if USE_MAX_ALLOWED_FOOTPRINT
   3569           /* If the alignment pushes us over max_allowed_footprint,
   3570            * poison the upcoming call to MORECORE and continue.
   3571            */
   3572           {
   3573             size_t new_footprint = m->footprint + asize;
   3574             if (new_footprint <= m->footprint ||  /* Check for wrap around 0 */
   3575                 new_footprint > m->max_allowed_footprint) {
   3576               asize = HALF_MAX_SIZE_T;
   3577             }
   3578           }
   3579 #endif
   3580         }
   3581         /* Can't call MORECORE if size is negative when treated as signed */
   3582         if (asize < HALF_MAX_SIZE_T &&
   3583             (br = (char*)(CALL_MORECORE(asize))) == base) {
   3584           tbase = base;
   3585           tsize = asize;
   3586         }
   3587       }
   3588     }
   3589     else {
   3590       /* Subtract out existing available top space from MORECORE request. */
   3591       asize = granularity_align(nb - m->topsize + TOP_FOOT_SIZE + SIZE_T_ONE);
   3592       /* Use mem here only if it did continuously extend old space */
   3593       if (asize < HALF_MAX_SIZE_T &&
   3594           (br = (char*)(CALL_MORECORE(asize))) == ss->base+ss->size) {
   3595         tbase = br;
   3596         tsize = asize;
   3597       }
   3598     }
   3599 
   3600     if (tbase == CMFAIL) {    /* Cope with partial failure */
   3601       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
   3602         if (asize < HALF_MAX_SIZE_T &&
   3603             asize < nb + TOP_FOOT_SIZE + SIZE_T_ONE) {
   3604           size_t esize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE - asize);
   3605           if (esize < HALF_MAX_SIZE_T) {
   3606             char* end = (char*)CALL_MORECORE(esize);
   3607             if (end != CMFAIL)
   3608               asize += esize;
   3609             else {            /* Can't use; try to release */
   3610               CALL_MORECORE(-asize);
   3611               br = CMFAIL;
   3612             }
   3613           }
   3614         }
   3615       }
   3616       if (br != CMFAIL) {    /* Use the space we did get */
   3617         tbase = br;
   3618         tsize = asize;
   3619       }
   3620       else
   3621         disable_contiguous(m); /* Don't try contiguous path in the future */
   3622     }
   3623 
   3624     RELEASE_MORECORE_LOCK();
   3625   }
   3626 
   3627   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
   3628     size_t req = nb + TOP_FOOT_SIZE + SIZE_T_ONE;
   3629     size_t rsize = granularity_align(req);
   3630     if (rsize > nb) { /* Fail if wraps around zero */
   3631       char* mp = (char*)(CALL_MMAP(rsize));
   3632       if (mp != CMFAIL) {
   3633         tbase = mp;
   3634         tsize = rsize;
   3635         mmap_flag = IS_MMAPPED_BIT;
   3636       }
   3637     }
   3638   }
   3639 
   3640   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
   3641     size_t asize = granularity_align(nb + TOP_FOOT_SIZE + SIZE_T_ONE);
   3642     if (asize < HALF_MAX_SIZE_T) {
   3643       char* br = CMFAIL;
   3644       char* end = CMFAIL;
   3645       ACQUIRE_MORECORE_LOCK();
   3646       br = (char*)(CALL_MORECORE(asize));
   3647       end = (char*)(CALL_MORECORE(0));
   3648       RELEASE_MORECORE_LOCK();
   3649       if (br != CMFAIL && end != CMFAIL && br < end) {
   3650         size_t ssize = end - br;
   3651         if (ssize > nb + TOP_FOOT_SIZE) {
   3652           tbase = br;
   3653           tsize = ssize;
   3654         }
   3655       }
   3656     }
   3657   }
   3658 
   3659   if (tbase != CMFAIL) {
   3660 
   3661     if ((m->footprint += tsize) > m->max_footprint)
   3662       m->max_footprint = m->footprint;
   3663 
   3664     if (!is_initialized(m)) { /* first-time initialization */
   3665       m->seg.base = m->least_addr = tbase;
   3666       m->seg.size = tsize;
   3667       m->seg.sflags = mmap_flag;
   3668       m->magic = mparams.magic;
   3669       init_bins(m);
   3670       if (is_global(m))
   3671         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
   3672       else {
   3673         /* Offset top by embedded malloc_state */
   3674         mchunkptr mn = next_chunk(mem2chunk(m));
   3675         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
   3676       }
   3677     }
   3678 
   3679     else {
   3680       /* Try to merge with an existing segment */
   3681       msegmentptr sp = &m->seg;
   3682       while (sp != 0 && tbase != sp->base + sp->size)
   3683         sp = sp->next;
   3684       if (sp != 0 &&
   3685           !is_extern_segment(sp) &&
   3686           (sp->sflags & IS_MMAPPED_BIT) == mmap_flag &&
   3687           segment_holds(sp, m->top)) { /* append */
   3688         sp->size += tsize;
   3689         init_top(m, m->top, m->topsize + tsize);
   3690       }
   3691       else {
   3692         if (tbase < m->least_addr)
   3693           m->least_addr = tbase;
   3694         sp = &m->seg;
   3695         while (sp != 0 && sp->base != tbase + tsize)
   3696           sp = sp->next;
   3697         if (sp != 0 &&
   3698             !is_extern_segment(sp) &&
   3699             (sp->sflags & IS_MMAPPED_BIT) == mmap_flag) {
   3700           char* oldbase = sp->base;
   3701           sp->base = tbase;
   3702           sp->size += tsize;
   3703           return prepend_alloc(m, tbase, oldbase, nb);
   3704         }
   3705         else
   3706           add_segment(m, tbase, tsize, mmap_flag);
   3707       }
   3708     }
   3709 
   3710     if (nb < m->topsize) { /* Allocate from new or extended top space */
   3711       size_t rsize = m->topsize -= nb;
   3712       mchunkptr p = m->top;
   3713       mchunkptr r = m->top = chunk_plus_offset(p, nb);
   3714       r->head = rsize | PINUSE_BIT;
   3715       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
   3716       check_top_chunk(m, m->top);
   3717       check_malloced_chunk(m, chunk2mem(p), nb);
   3718       return chunk2mem(p);
   3719     }
   3720   }
   3721 
   3722   MALLOC_FAILURE_ACTION;
   3723   return 0;
   3724 }
   3725 
   3726 /* -----------------------  system deallocation -------------------------- */
   3727 
   3728 /* Unmap and unlink any mmapped segments that don't contain used chunks */
   3729 static size_t release_unused_segments(mstate m) {
   3730   size_t released = 0;
   3731   msegmentptr pred = &m->seg;
   3732   msegmentptr sp = pred->next;
   3733   while (sp != 0) {
   3734     char* base = sp->base;
   3735     size_t size = sp->size;
   3736     msegmentptr next = sp->next;
   3737     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
   3738       mchunkptr p = align_as_chunk(base);
   3739       size_t psize = chunksize(p);
   3740       /* Can unmap if first chunk holds entire segment and not pinned */
   3741       if (!cinuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
   3742         tchunkptr tp = (tchunkptr)p;
   3743         assert(segment_holds(sp, (char*)sp));
   3744         if (p == m->dv) {
   3745           m->dv = 0;
   3746           m->dvsize = 0;
   3747         }
   3748         else {
   3749           unlink_large_chunk(m, tp);
   3750         }
   3751         if (CALL_MUNMAP(base, size) == 0) {
   3752           released += size;
   3753           m->footprint -= size;
   3754           /* unlink obsoleted record */
   3755           sp = pred;
   3756           sp->next = next;
   3757         }
   3758         else { /* back out if cannot unmap */
   3759           insert_large_chunk(m, tp, psize);
   3760         }
   3761       }
   3762     }
   3763     pred = sp;
   3764     sp = next;
   3765   }
   3766   return released;
   3767 }
   3768 
   3769 static int sys_trim(mstate m, size_t pad) {
   3770   size_t released = 0;
   3771   if (pad < MAX_REQUEST && is_initialized(m)) {
   3772     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
   3773 
   3774     if (m->topsize > pad) {
   3775       /* Shrink top space in granularity-size units, keeping at least one */
   3776       size_t unit = mparams.granularity;
   3777       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
   3778                       SIZE_T_ONE) * unit;
   3779       msegmentptr sp = segment_holding(m, (char*)m->top);
   3780 
   3781       if (!is_extern_segment(sp)) {
   3782         if (is_mmapped_segment(sp)) {
   3783           if (HAVE_MMAP &&
   3784               sp->size >= extra &&
   3785               !has_segment_link(m, sp)) { /* can't shrink if pinned */
   3786             size_t newsize = sp->size - extra;
   3787             /* Prefer mremap, fall back to munmap */
   3788             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
   3789                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
   3790               released = extra;
   3791             }
   3792           }
   3793         }
   3794         else if (HAVE_MORECORE) {
   3795           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
   3796             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
   3797           ACQUIRE_MORECORE_LOCK();
   3798           {
   3799             /* Make sure end of memory is where we last set it. */
   3800             char* old_br = (char*)(CALL_MORECORE(0));
   3801             if (old_br == sp->base + sp->size) {
   3802               char* rel_br = (char*)(CALL_MORECORE(-extra));
   3803               char* new_br = (char*)(CALL_MORECORE(0));
   3804               if (rel_br != CMFAIL && new_br < old_br)
   3805                 released = old_br - new_br;
   3806             }
   3807           }
   3808           RELEASE_MORECORE_LOCK();
   3809         }
   3810       }
   3811 
   3812       if (released != 0) {
   3813         sp->size -= released;
   3814         m->footprint -= released;
   3815         init_top(m, m->top, m->topsize - released);
   3816         check_top_chunk(m, m->top);
   3817       }
   3818     }
   3819 
   3820     /* Unmap any unused mmapped segments */
   3821     if (HAVE_MMAP)
   3822       released += release_unused_segments(m);
   3823 
   3824     /* On failure, disable autotrim to avoid repeated failed future calls */
   3825     if (released == 0)
   3826       m->trim_check = MAX_SIZE_T;
   3827   }
   3828 
   3829   return (released != 0)? 1 : 0;
   3830 }
   3831 
   3832 /* ---------------------------- malloc support --------------------------- */
   3833 
   3834 /* allocate a large request from the best fitting chunk in a treebin */
   3835 static void* tmalloc_large(mstate m, size_t nb) {
   3836   tchunkptr v = 0;
   3837   size_t rsize = -nb; /* Unsigned negation */
   3838   tchunkptr t;
   3839   bindex_t idx;
   3840   compute_tree_index(nb, idx);
   3841 
   3842   if ((t = *treebin_at(m, idx)) != 0) {
   3843     /* Traverse tree for this bin looking for node with size == nb */
   3844     size_t sizebits = nb << leftshift_for_tree_index(idx);
   3845     tchunkptr rst = 0;  /* The deepest untaken right subtree */
   3846     for (;;) {
   3847       tchunkptr rt;
   3848       size_t trem = chunksize(t) - nb;
   3849       if (trem < rsize) {
   3850         v = t;
   3851         if ((rsize = trem) == 0)
   3852           break;
   3853       }
   3854       rt = t->child[1];
   3855       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
   3856       if (rt != 0 && rt != t)
   3857         rst = rt;
   3858       if (t == 0) {
   3859         t = rst; /* set t to least subtree holding sizes > nb */
   3860         break;
   3861       }
   3862       sizebits <<= 1;
   3863     }
   3864   }
   3865 
   3866   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
   3867     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
   3868     if (leftbits != 0) {
   3869       bindex_t i;
   3870       binmap_t leastbit = least_bit(leftbits);
   3871       compute_bit2idx(leastbit, i);
   3872       t = *treebin_at(m, i);
   3873     }
   3874   }
   3875 
   3876   while (t != 0) { /* find smallest of tree or subtree */
   3877     size_t trem = chunksize(t) - nb;
   3878     if (trem < rsize) {
   3879       rsize = trem;
   3880       v = t;
   3881     }
   3882     t = leftmost_child(t);
   3883   }
   3884 
   3885   /*  If dv is a better fit, return 0 so malloc will use it */
   3886   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
   3887     if (RTCHECK(ok_address(m, v))) { /* split */
   3888       mchunkptr r = chunk_plus_offset(v, nb);
   3889       assert(chunksize(v) == rsize + nb);
   3890       if (RTCHECK(ok_next(v, r))) {
   3891         unlink_large_chunk(m, v);
   3892         if (rsize < MIN_CHUNK_SIZE)
   3893           set_inuse_and_pinuse(m, v, (rsize + nb));
   3894         else {
   3895           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
   3896           set_size_and_pinuse_of_free_chunk(r, rsize);
   3897           insert_chunk(m, r, rsize);
   3898         }
   3899         return chunk2mem(v);
   3900       }
   3901     }
   3902     CORRUPTION_ERROR_ACTION(m);
   3903   }
   3904   return 0;
   3905 }
   3906 
   3907 /* allocate a small request from the best fitting chunk in a treebin */
   3908 static void* tmalloc_small(mstate m, size_t nb) {
   3909   tchunkptr t, v;
   3910   size_t rsize;
   3911   bindex_t i;
   3912   binmap_t leastbit = least_bit(m->treemap);
   3913   compute_bit2idx(leastbit, i);
   3914 
   3915   v = t = *treebin_at(m, i);
   3916   rsize = chunksize(t) - nb;
   3917 
   3918   while ((t = leftmost_child(t)) != 0) {
   3919     size_t trem = chunksize(t) - nb;
   3920     if (trem < rsize) {
   3921       rsize = trem;
   3922       v = t;
   3923     }
   3924   }
   3925 
   3926   if (RTCHECK(ok_address(m, v))) {
   3927     mchunkptr r = chunk_plus_offset(v, nb);
   3928     assert(chunksize(v) == rsize + nb);
   3929     if (RTCHECK(ok_next(v, r))) {
   3930       unlink_large_chunk(m, v);
   3931       if (rsize < MIN_CHUNK_SIZE)
   3932         set_inuse_and_pinuse(m, v, (rsize + nb));
   3933       else {
   3934         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
   3935         set_size_and_pinuse_of_free_chunk(r, rsize);
   3936         replace_dv(m, r, rsize);
   3937       }
   3938       return chunk2mem(v);
   3939     }
   3940   }
   3941 
   3942   CORRUPTION_ERROR_ACTION(m);
   3943   return 0;
   3944 }
   3945 
   3946 /* --------------------------- realloc support --------------------------- */
   3947 
   3948 static void* internal_realloc(mstate m, void* oldmem, size_t bytes) {
   3949   if (bytes >= MAX_REQUEST) {
   3950     MALLOC_FAILURE_ACTION;
   3951     return 0;
   3952   }
   3953   if (!PREACTION(m)) {
   3954     mchunkptr oldp = mem2chunk(oldmem);
   3955     size_t oldsize = chunksize(oldp);
   3956     mchunkptr next = chunk_plus_offset(oldp, oldsize);
   3957     mchunkptr newp = 0;
   3958     void* extra = 0;
   3959 
   3960     /* Try to either shrink or extend into top. Else malloc-copy-free */
   3961 
   3962     if (RTCHECK(ok_address(m, oldp) && ok_cinuse(oldp) &&
   3963                 ok_next(oldp, next) && ok_pinuse(next))) {
   3964       size_t nb = request2size(bytes);
   3965       if (is_mmapped(oldp))
   3966         newp = mmap_resize(m, oldp, nb);
   3967       else if (oldsize >= nb) { /* already big enough */
   3968         size_t rsize = oldsize - nb;
   3969         newp = oldp;
   3970         if (rsize >= MIN_CHUNK_SIZE) {
   3971           mchunkptr remainder = chunk_plus_offset(newp, nb);
   3972           set_inuse(m, newp, nb);
   3973           set_inuse(m, remainder, rsize);
   3974           extra = chunk2mem(remainder);
   3975         }
   3976       }
   3977       else if (next == m->top && oldsize + m->topsize > nb) {
   3978         /* Expand into top */
   3979         size_t newsize = oldsize + m->topsize;
   3980         size_t newtopsize = newsize - nb;
   3981         mchunkptr newtop = chunk_plus_offset(oldp, nb);
   3982         set_inuse(m, oldp, nb);
   3983         newtop->head = newtopsize |PINUSE_BIT;
   3984         m->top = newtop;
   3985         m->topsize = newtopsize;
   3986         newp = oldp;
   3987       }
   3988     }
   3989     else {
   3990       USAGE_ERROR_ACTION(m, oldmem);
   3991       POSTACTION(m);
   3992       return 0;
   3993     }
   3994 
   3995     POSTACTION(m);
   3996 
   3997     if (newp != 0) {
   3998       if (extra != 0) {
   3999         internal_free(m, extra);
   4000       }
   4001       check_inuse_chunk(m, newp);
   4002       return chunk2mem(newp);
   4003     }
   4004     else {
   4005       void* newmem = internal_malloc(m, bytes);
   4006       if (newmem != 0) {
   4007         size_t oc = oldsize - overhead_for(oldp);
   4008         memcpy(newmem, oldmem, (oc < bytes)? oc : bytes);
   4009         internal_free(m, oldmem);
   4010       }
   4011       return newmem;
   4012     }
   4013   }
   4014   return 0;
   4015 }
   4016 
   4017 /* --------------------------- memalign support -------------------------- */
   4018 
   4019 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
   4020   if (alignment <= MALLOC_ALIGNMENT)    /* Can just use malloc */
   4021     return internal_malloc(m, bytes);
   4022   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
   4023     alignment = MIN_CHUNK_SIZE;
   4024   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
   4025     size_t a = MALLOC_ALIGNMENT << 1;
   4026     while (a < alignment) a <<= 1;
   4027     alignment = a;
   4028   }
   4029 
   4030   if (bytes >= MAX_REQUEST - alignment) {
   4031     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
   4032       MALLOC_FAILURE_ACTION;
   4033     }
   4034   }
   4035   else {
   4036     size_t nb = request2size(bytes);
   4037     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
   4038     char* mem = (char*)internal_malloc(m, req);
   4039     if (mem != 0) {
   4040       void* leader = 0;
   4041       void* trailer = 0;
   4042       mchunkptr p = mem2chunk(mem);
   4043 
   4044       if (PREACTION(m)) return 0;
   4045       if ((((size_t)(mem)) % alignment) != 0) { /* misaligned */
   4046         /*
   4047           Find an aligned spot inside chunk.  Since we need to give
   4048           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
   4049           the first calculation places us at a spot with less than
   4050           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
   4051           We've allocated enough total room so that this is always
   4052           possible.
   4053         */
   4054         char* br = (char*)mem2chunk((size_t)(((size_t)(mem +
   4055                                                        alignment -
   4056                                                        SIZE_T_ONE)) &
   4057                                              -alignment));
   4058         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
   4059           br : br+alignment;
   4060         mchunkptr newp = (mchunkptr)pos;
   4061         size_t leadsize = pos - (char*)(p);
   4062         size_t newsize = chunksize(p) - leadsize;
   4063 
   4064         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
   4065           newp->prev_foot = p->prev_foot + leadsize;
   4066           newp->head = (newsize|CINUSE_BIT);
   4067         }
   4068         else { /* Otherwise, give back leader, use the rest */
   4069           set_inuse(m, newp, newsize);
   4070           set_inuse(m, p, leadsize);
   4071           leader = chunk2mem(p);
   4072         }
   4073         p = newp;
   4074       }
   4075 
   4076       /* Give back spare room at the end */
   4077       if (!is_mmapped(p)) {
   4078         size_t size = chunksize(p);
   4079         if (size > nb + MIN_CHUNK_SIZE) {
   4080           size_t remainder_size = size - nb;
   4081           mchunkptr remainder = chunk_plus_offset(p, nb);
   4082           set_inuse(m, p, nb);
   4083           set_inuse(m, remainder, remainder_size);
   4084           trailer = chunk2mem(remainder);
   4085         }
   4086       }
   4087 
   4088       assert (chunksize(p) >= nb);
   4089       assert((((size_t)(chunk2mem(p))) % alignment) == 0);
   4090       check_inuse_chunk(m, p);
   4091       POSTACTION(m);
   4092       if (leader != 0) {
   4093         internal_free(m, leader);
   4094       }
   4095       if (trailer != 0) {
   4096         internal_free(m, trailer);
   4097       }
   4098       return chunk2mem(p);
   4099     }
   4100   }
   4101   return 0;
   4102 }
   4103 
   4104 /* ------------------------ comalloc/coalloc support --------------------- */
   4105 
   4106 static void** ialloc(mstate m,
   4107                      size_t n_elements,
   4108                      size_t* sizes,
   4109                      int opts,
   4110                      void* chunks[]) {
   4111   /*
   4112     This provides common support for independent_X routines, handling
   4113     all of the combinations that can result.
   4114 
   4115     The opts arg has:
   4116     bit 0 set if all elements are same size (using sizes[0])
   4117     bit 1 set if elements should be zeroed
   4118   */
   4119 
   4120   size_t    element_size;   /* chunksize of each element, if all same */
   4121   size_t    contents_size;  /* total size of elements */
   4122   size_t    array_size;     /* request size of pointer array */
   4123   void*     mem;            /* malloced aggregate space */
   4124   mchunkptr p;              /* corresponding chunk */
   4125   size_t    remainder_size; /* remaining bytes while splitting */
   4126   void**    marray;         /* either "chunks" or malloced ptr array */
   4127   mchunkptr array_chunk;    /* chunk for malloced ptr array */
   4128   flag_t    was_enabled;    /* to disable mmap */
   4129   size_t    size;
   4130   size_t    i;
   4131 
   4132   /* compute array length, if needed */
   4133   if (chunks != 0) {
   4134     if (n_elements == 0)
   4135       return chunks; /* nothing to do */
   4136     marray = chunks;
   4137     array_size = 0;
   4138   }
   4139   else {
   4140     /* if empty req, must still return chunk representing empty array */
   4141     if (n_elements == 0)
   4142       return (void**)internal_malloc(m, 0);
   4143     marray = 0;
   4144     array_size = request2size(n_elements * (sizeof(void*)));
   4145   }
   4146 
   4147   /* compute total element size */
   4148   if (opts & 0x1) { /* all-same-size */
   4149     element_size = request2size(*sizes);
   4150     contents_size = n_elements * element_size;
   4151   }
   4152   else { /* add up all the sizes */
   4153     element_size = 0;
   4154     contents_size = 0;
   4155     for (i = 0; i != n_elements; ++i)
   4156       contents_size += request2size(sizes[i]);
   4157   }
   4158 
   4159   size = contents_size + array_size;
   4160 
   4161   /*
   4162      Allocate the aggregate chunk.  First disable direct-mmapping so
   4163      malloc won't use it, since we would not be able to later
   4164      free/realloc space internal to a segregated mmap region.
   4165   */
   4166   was_enabled = use_mmap(m);
   4167   disable_mmap(m);
   4168   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
   4169   if (was_enabled)
   4170     enable_mmap(m);
   4171   if (mem == 0)
   4172     return 0;
   4173 
   4174   if (PREACTION(m)) return 0;
   4175   p = mem2chunk(mem);
   4176   remainder_size = chunksize(p);
   4177 
   4178   assert(!is_mmapped(p));
   4179 
   4180   if (opts & 0x2) {       /* optionally clear the elements */
   4181     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
   4182   }
   4183 
   4184   /* If not provided, allocate the pointer array as final part of chunk */
   4185   if (marray == 0) {
   4186     size_t  array_chunk_size;
   4187     array_chunk = chunk_plus_offset(p, contents_size);
   4188     array_chunk_size = remainder_size - contents_size;
   4189     marray = (void**) (chunk2mem(array_chunk));
   4190     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
   4191     remainder_size = contents_size;
   4192   }
   4193 
   4194   /* split out elements */
   4195   for (i = 0; ; ++i) {
   4196     marray[i] = chunk2mem(p);
   4197     if (i != n_elements-1) {
   4198       if (element_size != 0)
   4199         size = element_size;
   4200       else
   4201         size = request2size(sizes[i]);
   4202       remainder_size -= size;
   4203       set_size_and_pinuse_of_inuse_chunk(m, p, size);
   4204       p = chunk_plus_offset(p, size);
   4205     }
   4206     else { /* the final element absorbs any overallocation slop */
   4207       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
   4208       break;
   4209     }
   4210   }
   4211 
   4212 #if DEBUG
   4213   if (marray != chunks) {
   4214     /* final element must have exactly exhausted chunk */
   4215     if (element_size != 0) {
   4216       assert(remainder_size == element_size);
   4217     }
   4218     else {
   4219       assert(remainder_size == request2size(sizes[i]));
   4220     }
   4221     check_inuse_chunk(m, mem2chunk(marray));
   4222   }
   4223   for (i = 0; i != n_elements; ++i)
   4224     check_inuse_chunk(m, mem2chunk(marray[i]));
   4225 
   4226 #endif /* DEBUG */
   4227 
   4228   POSTACTION(m);
   4229   return marray;
   4230 }
   4231 
   4232 
   4233 /* -------------------------- public routines ---------------------------- */
   4234 
   4235 #if !ONLY_MSPACES
   4236 
   4237 void* dlmalloc(size_t bytes) {
   4238   /*
   4239      Basic algorithm:
   4240      If a small request (< 256 bytes minus per-chunk overhead):
   4241        1. If one exists, use a remainderless chunk in associated smallbin.
   4242           (Remainderless means that there are too few excess bytes to
   4243           represent as a chunk.)
   4244        2. If it is big enough, use the dv chunk, which is normally the
   4245           chunk adjacent to the one used for the most recent small request.
   4246        3. If one exists, split the smallest available chunk in a bin,
   4247           saving remainder in dv.
   4248        4. If it is big enough, use the top chunk.
   4249        5. If available, get memory from system and use it
   4250      Otherwise, for a large request:
   4251        1. Find the smallest available binned chunk that fits, and use it
   4252           if it is better fitting than dv chunk, splitting if necessary.
   4253        2. If better fitting than any binned chunk, use the dv chunk.
   4254        3. If it is big enough, use the top chunk.
   4255        4. If request size >= mmap threshold, try to directly mmap this chunk.
   4256        5. If available, get memory from system and use it
   4257 
   4258      The ugly goto's here ensure that postaction occurs along all paths.
   4259   */
   4260 
   4261   if (!PREACTION(gm)) {
   4262     void* mem;
   4263     size_t nb;
   4264     if (bytes <= MAX_SMALL_REQUEST) {
   4265       bindex_t idx;
   4266       binmap_t smallbits;
   4267       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
   4268       idx = small_index(nb);
   4269       smallbits = gm->smallmap >> idx;
   4270 
   4271       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
   4272         mchunkptr b, p;
   4273         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
   4274         b = smallbin_at(gm, idx);
   4275         p = b->fd;
   4276         assert(chunksize(p) == small_index2size(idx));
   4277         unlink_first_small_chunk(gm, b, p, idx);
   4278         set_inuse_and_pinuse(gm, p, small_index2size(idx));
   4279         mem = chunk2mem(p);
   4280         check_malloced_chunk(gm, mem, nb);
   4281         goto postaction;
   4282       }
   4283 
   4284       else if (nb > gm->dvsize) {
   4285         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
   4286           mchunkptr b, p, r;
   4287           size_t rsize;
   4288           bindex_t i;
   4289           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
   4290           binmap_t leastbit = least_bit(leftbits);
   4291           compute_bit2idx(leastbit, i);
   4292           b = smallbin_at(gm, i);
   4293           p = b->fd;
   4294           assert(chunksize(p) == small_index2size(i));
   4295           unlink_first_small_chunk(gm, b, p, i);
   4296           rsize = small_index2size(i) - nb;
   4297           /* Fit here cannot be remainderless if 4byte sizes */
   4298           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
   4299             set_inuse_and_pinuse(gm, p, small_index2size(i));
   4300           else {
   4301             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
   4302             r = chunk_plus_offset(p, nb);
   4303             set_size_and_pinuse_of_free_chunk(r, rsize);
   4304             replace_dv(gm, r, rsize);
   4305           }
   4306           mem = chunk2mem(p);
   4307           check_malloced_chunk(gm, mem, nb);
   4308           goto postaction;
   4309         }
   4310 
   4311         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
   4312           check_malloced_chunk(gm, mem, nb);
   4313           goto postaction;
   4314         }
   4315       }
   4316     }
   4317     else if (bytes >= MAX_REQUEST)
   4318       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
   4319     else {
   4320       nb = pad_request(bytes);
   4321       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
   4322         check_malloced_chunk(gm, mem, nb);
   4323         goto postaction;
   4324       }
   4325     }
   4326 
   4327     if (nb <= gm->dvsize) {
   4328       size_t rsize = gm->dvsize - nb;
   4329       mchunkptr p = gm->dv;
   4330       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
   4331         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
   4332         gm->dvsize = rsize;
   4333         set_size_and_pinuse_of_free_chunk(r, rsize);
   4334         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
   4335       }
   4336       else { /* exhaust dv */
   4337         size_t dvs = gm->dvsize;
   4338         gm->dvsize = 0;
   4339         gm->dv = 0;
   4340         set_inuse_and_pinuse(gm, p, dvs);
   4341       }
   4342       mem = chunk2mem(p);
   4343       check_malloced_chunk(gm, mem, nb);
   4344       goto postaction;
   4345     }
   4346 
   4347     else if (nb < gm->topsize) { /* Split top */
   4348       size_t rsize = gm->topsize -= nb;
   4349       mchunkptr p = gm->top;
   4350       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
   4351       r->head = rsize | PINUSE_BIT;
   4352       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
   4353       mem = chunk2mem(p);
   4354       check_top_chunk(gm, gm->top);
   4355       check_malloced_chunk(gm, mem, nb);
   4356       goto postaction;
   4357     }
   4358 
   4359     mem = sys_alloc(gm, nb);
   4360 
   4361   postaction:
   4362     POSTACTION(gm);
   4363     return mem;
   4364   }
   4365 
   4366   return 0;
   4367 }
   4368 
   4369 void dlfree(void* mem) {
   4370   /*
   4371      Consolidate freed chunks with preceeding or succeeding bordering
   4372      free chunks, if they exist, and then place in a bin.  Intermixed
   4373      with special cases for top, dv, mmapped chunks, and usage errors.
   4374   */
   4375 
   4376   if (mem != 0) {
   4377     mchunkptr p  = mem2chunk(mem);
   4378 #if FOOTERS
   4379     mstate fm = get_mstate_for(p);
   4380     if (!ok_magic(fm)) {
   4381       USAGE_ERROR_ACTION(fm, p);
   4382       return;
   4383     }
   4384 #else /* FOOTERS */
   4385 #define fm gm
   4386 #endif /* FOOTERS */
   4387     if (!PREACTION(fm)) {
   4388       check_inuse_chunk(fm, p);
   4389       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
   4390         size_t psize = chunksize(p);
   4391         mchunkptr next = chunk_plus_offset(p, psize);
   4392         if (!pinuse(p)) {
   4393           size_t prevsize = p->prev_foot;
   4394           if ((prevsize & IS_MMAPPED_BIT) != 0) {
   4395             prevsize &= ~IS_MMAPPED_BIT;
   4396             psize += prevsize + MMAP_FOOT_PAD;
   4397             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
   4398               fm->footprint -= psize;
   4399             goto postaction;
   4400           }
   4401           else {
   4402             mchunkptr prev = chunk_minus_offset(p, prevsize);
   4403             psize += prevsize;
   4404             p = prev;
   4405             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
   4406               if (p != fm->dv) {
   4407                 unlink_chunk(fm, p, prevsize);
   4408               }
   4409               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
   4410                 fm->dvsize = psize;
   4411                 set_free_with_pinuse(p, psize, next);
   4412                 goto postaction;
   4413               }
   4414             }
   4415             else
   4416               goto erroraction;
   4417           }
   4418         }
   4419 
   4420         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
   4421           if (!cinuse(next)) {  /* consolidate forward */
   4422             if (next == fm->top) {
   4423               size_t tsize = fm->topsize += psize;
   4424               fm->top = p;
   4425               p->head = tsize | PINUSE_BIT;
   4426               if (p == fm->dv) {
   4427                 fm->dv = 0;
   4428                 fm->dvsize = 0;
   4429               }
   4430               if (should_trim(fm, tsize))
   4431                 sys_trim(fm, 0);
   4432               goto postaction;
   4433             }
   4434             else if (next == fm->dv) {
   4435               size_t dsize = fm->dvsize += psize;
   4436               fm->dv = p;
   4437               set_size_and_pinuse_of_free_chunk(p, dsize);
   4438               goto postaction;
   4439             }
   4440             else {
   4441               size_t nsize = chunksize(next);
   4442               psize += nsize;
   4443               unlink_chunk(fm, next, nsize);
   4444               set_size_and_pinuse_of_free_chunk(p, psize);
   4445               if (p == fm->dv) {
   4446                 fm->dvsize = psize;
   4447                 goto postaction;
   4448               }
   4449             }
   4450           }
   4451           else
   4452             set_free_with_pinuse(p, psize, next);
   4453           insert_chunk(fm, p, psize);
   4454           check_free_chunk(fm, p);
   4455           goto postaction;
   4456         }
   4457       }
   4458     erroraction:
   4459       USAGE_ERROR_ACTION(fm, p);
   4460     postaction:
   4461       POSTACTION(fm);
   4462     }
   4463   }
   4464 #if !FOOTERS
   4465 #undef fm
   4466 #endif /* FOOTERS */
   4467 }
   4468 
   4469 void* dlcalloc(size_t n_elements, size_t elem_size) {
   4470   void *mem;
   4471   if (n_elements && MAX_SIZE_T / n_elements < elem_size) {
   4472     /* Fail on overflow */
   4473     MALLOC_FAILURE_ACTION;
   4474     return NULL;
   4475   }
   4476   elem_size *= n_elements;
   4477   mem = dlmalloc(elem_size);
   4478   if (mem && calloc_must_clear(mem2chunk(mem)))
   4479     memset(mem, 0, elem_size);
   4480   return mem;
   4481 }
   4482 
   4483 void* dlrealloc(void* oldmem, size_t bytes) {
   4484   if (oldmem == 0)
   4485     return dlmalloc(bytes);
   4486 #ifdef REALLOC_ZERO_BYTES_FREES
   4487   if (bytes == 0) {
   4488     dlfree(oldmem);
   4489     return 0;
   4490   }
   4491 #endif /* REALLOC_ZERO_BYTES_FREES */
   4492   else {
   4493 #if ! FOOTERS
   4494     mstate m = gm;
   4495 #else /* FOOTERS */
   4496     mstate m = get_mstate_for(mem2chunk(oldmem));
   4497     if (!ok_magic(m)) {
   4498       USAGE_ERROR_ACTION(m, oldmem);
   4499       return 0;
   4500     }
   4501 #endif /* FOOTERS */
   4502     return internal_realloc(m, oldmem, bytes);
   4503   }
   4504 }
   4505 
   4506 void* dlmemalign(size_t alignment, size_t bytes) {
   4507   return internal_memalign(gm, alignment, bytes);
   4508 }
   4509 
   4510 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
   4511                                  void* chunks[]) {
   4512   size_t sz = elem_size; /* serves as 1-element array */
   4513   return ialloc(gm, n_elements, &sz, 3, chunks);
   4514 }
   4515 
   4516 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
   4517                                    void* chunks[]) {
   4518   return ialloc(gm, n_elements, sizes, 0, chunks);
   4519 }
   4520 
   4521 void* dlvalloc(size_t bytes) {
   4522   size_t pagesz;
   4523   init_mparams();
   4524   pagesz = mparams.page_size;
   4525   return dlmemalign(pagesz, bytes);
   4526 }
   4527 
   4528 void* dlpvalloc(size_t bytes) {
   4529   size_t pagesz;
   4530   init_mparams();
   4531   pagesz = mparams.page_size;
   4532   return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
   4533 }
   4534 
   4535 int dlmalloc_trim(size_t pad) {
   4536   int result = 0;
   4537   if (!PREACTION(gm)) {
   4538     result = sys_trim(gm, pad);
   4539     POSTACTION(gm);
   4540   }
   4541   return result;
   4542 }
   4543 
   4544 size_t dlmalloc_footprint(void) {
   4545   return gm->footprint;
   4546 }
   4547 
   4548 #if USE_MAX_ALLOWED_FOOTPRINT
   4549 size_t dlmalloc_max_allowed_footprint(void) {
   4550   return gm->max_allowed_footprint;
   4551 }
   4552 
   4553 void dlmalloc_set_max_allowed_footprint(size_t bytes) {
   4554   if (bytes > gm->footprint) {
   4555     /* Increase the size in multiples of the granularity,
   4556      * which is the smallest unit we request from the system.
   4557      */
   4558     gm->max_allowed_footprint = gm->footprint +
   4559                                 granularity_align(bytes - gm->footprint);
   4560   }
   4561   else {
   4562     //TODO: allow for reducing the max footprint
   4563     gm->max_allowed_footprint = gm->footprint;
   4564   }
   4565 }
   4566 #endif
   4567 
   4568 size_t dlmalloc_max_footprint(void) {
   4569   return gm->max_footprint;
   4570 }
   4571 
   4572 #if !NO_MALLINFO
   4573 struct mallinfo dlmallinfo(void) {
   4574   return internal_mallinfo(gm);
   4575 }
   4576 #endif /* NO_MALLINFO */
   4577 
   4578 void dlmalloc_stats() {
   4579   internal_malloc_stats(gm);
   4580 }
   4581 
   4582 size_t dlmalloc_usable_size(void* mem) {
   4583   if (mem != 0) {
   4584     mchunkptr p = mem2chunk(mem);
   4585     if (cinuse(p))
   4586       return chunksize(p) - overhead_for(p);
   4587   }
   4588   return 0;
   4589 }
   4590 
   4591 int dlmallopt(int param_number, int value) {
   4592   return change_mparam(param_number, value);
   4593 }
   4594 
   4595 #endif /* !ONLY_MSPACES */
   4596 
   4597 /* ----------------------------- user mspaces ---------------------------- */
   4598 
   4599 #if MSPACES
   4600 
   4601 static mstate init_user_mstate(char* tbase, size_t tsize) {
   4602   size_t msize = pad_request(sizeof(struct malloc_state));
   4603   mchunkptr mn;
   4604   mchunkptr msp = align_as_chunk(tbase);
   4605   mstate m = (mstate)(chunk2mem(msp));
   4606   memset(m, 0, msize);
   4607   INITIAL_LOCK(&m->mutex);
   4608   msp->head = (msize|PINUSE_BIT|CINUSE_BIT);
   4609   m->seg.base = m->least_addr = tbase;
   4610   m->seg.size = m->footprint = m->max_footprint = tsize;
   4611 #if USE_MAX_ALLOWED_FOOTPRINT
   4612   m->max_allowed_footprint = MAX_SIZE_T;
   4613 #endif
   4614   m->magic = mparams.magic;
   4615   m->mflags = mparams.default_mflags;
   4616   disable_contiguous(m);
   4617   init_bins(m);
   4618   mn = next_chunk(mem2chunk(m));
   4619   init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
   4620   check_top_chunk(m, m->top);
   4621   return m;
   4622 }
   4623 
   4624 mspace create_mspace(size_t capacity, int locked) {
   4625   mstate m = 0;
   4626   size_t msize = pad_request(sizeof(struct malloc_state));
   4627   init_mparams(); /* Ensure pagesize etc initialized */
   4628 
   4629   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
   4630     size_t rs = ((capacity == 0)? mparams.granularity :
   4631                  (capacity + TOP_FOOT_SIZE + msize));
   4632     size_t tsize = granularity_align(rs);
   4633     char* tbase = (char*)(CALL_MMAP(tsize));
   4634     if (tbase != CMFAIL) {
   4635       m = init_user_mstate(tbase, tsize);
   4636       m->seg.sflags = IS_MMAPPED_BIT;
   4637       set_lock(m, locked);
   4638     }
   4639   }
   4640   return (mspace)m;
   4641 }
   4642 
   4643 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
   4644   mstate m = 0;
   4645   size_t msize = pad_request(sizeof(struct malloc_state));
   4646   init_mparams(); /* Ensure pagesize etc initialized */
   4647 
   4648   if (capacity > msize + TOP_FOOT_SIZE &&
   4649       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
   4650     m = init_user_mstate((char*)base, capacity);
   4651     m->seg.sflags = EXTERN_BIT;
   4652     set_lock(m, locked);
   4653   }
   4654   return (mspace)m;
   4655 }
   4656 
   4657 size_t destroy_mspace(mspace msp) {
   4658   size_t freed = 0;
   4659   mstate ms = (mstate)msp;
   4660   if (ok_magic(ms)) {
   4661     msegmentptr sp = &ms->seg;
   4662     while (sp != 0) {
   4663       char* base = sp->base;
   4664       size_t size = sp->size;
   4665       flag_t flag = sp->sflags;
   4666       sp = sp->next;
   4667       if ((flag & IS_MMAPPED_BIT) && !(flag & EXTERN_BIT) &&
   4668           CALL_MUNMAP(base, size) == 0)
   4669         freed += size;
   4670     }
   4671   }
   4672   else {
   4673     USAGE_ERROR_ACTION(ms,ms);
   4674   }
   4675   return freed;
   4676 }
   4677 
   4678 /*
   4679   mspace versions of routines are near-clones of the global
   4680   versions. This is not so nice but better than the alternatives.
   4681 */
   4682 
   4683 
   4684 void* mspace_malloc(mspace msp, size_t bytes) {
   4685   mstate ms = (mstate)msp;
   4686   if (!ok_magic(ms)) {
   4687     USAGE_ERROR_ACTION(ms,ms);
   4688     return 0;
   4689   }
   4690   if (!PREACTION(ms)) {
   4691     void* mem;
   4692     size_t nb;
   4693     if (bytes <= MAX_SMALL_REQUEST) {
   4694       bindex_t idx;
   4695       binmap_t smallbits;
   4696       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
   4697       idx = small_index(nb);
   4698       smallbits = ms->smallmap >> idx;
   4699 
   4700       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
   4701         mchunkptr b, p;
   4702         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
   4703         b = smallbin_at(ms, idx);
   4704         p = b->fd;
   4705         assert(chunksize(p) == small_index2size(idx));
   4706         unlink_first_small_chunk(ms, b, p, idx);
   4707         set_inuse_and_pinuse(ms, p, small_index2size(idx));
   4708         mem = chunk2mem(p);
   4709         check_malloced_chunk(ms, mem, nb);
   4710         goto postaction;
   4711       }
   4712 
   4713       else if (nb > ms->dvsize) {
   4714         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
   4715           mchunkptr b, p, r;
   4716           size_t rsize;
   4717           bindex_t i;
   4718           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
   4719           binmap_t leastbit = least_bit(leftbits);
   4720           compute_bit2idx(leastbit, i);
   4721           b = smallbin_at(ms, i);
   4722           p = b->fd;
   4723           assert(chunksize(p) == small_index2size(i));
   4724           unlink_first_small_chunk(ms, b, p, i);
   4725           rsize = small_index2size(i) - nb;
   4726           /* Fit here cannot be remainderless if 4byte sizes */
   4727           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
   4728             set_inuse_and_pinuse(ms, p, small_index2size(i));
   4729           else {
   4730             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
   4731             r = chunk_plus_offset(p, nb);
   4732             set_size_and_pinuse_of_free_chunk(r, rsize);
   4733             replace_dv(ms, r, rsize);
   4734           }
   4735           mem = chunk2mem(p);
   4736           check_malloced_chunk(ms, mem, nb);
   4737           goto postaction;
   4738         }
   4739 
   4740         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
   4741           check_malloced_chunk(ms, mem, nb);
   4742           goto postaction;
   4743         }
   4744       }
   4745     }
   4746     else if (bytes >= MAX_REQUEST)
   4747       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
   4748     else {
   4749       nb = pad_request(bytes);
   4750       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
   4751         check_malloced_chunk(ms, mem, nb);
   4752         goto postaction;
   4753       }
   4754     }
   4755 
   4756     if (nb <= ms->dvsize) {
   4757       size_t rsize = ms->dvsize - nb;
   4758       mchunkptr p = ms->dv;
   4759       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
   4760         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
   4761         ms->dvsize = rsize;
   4762         set_size_and_pinuse_of_free_chunk(r, rsize);
   4763         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
   4764       }
   4765       else { /* exhaust dv */
   4766         size_t dvs = ms->dvsize;
   4767         ms->dvsize = 0;
   4768         ms->dv = 0;
   4769         set_inuse_and_pinuse(ms, p, dvs);
   4770       }
   4771       mem = chunk2mem(p);
   4772       check_malloced_chunk(ms, mem, nb);
   4773       goto postaction;
   4774     }
   4775 
   4776     else if (nb < ms->topsize) { /* Split top */
   4777       size_t rsize = ms->topsize -= nb;
   4778       mchunkptr p = ms->top;
   4779       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
   4780       r->head = rsize | PINUSE_BIT;
   4781       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
   4782       mem = chunk2mem(p);
   4783       check_top_chunk(ms, ms->top);
   4784       check_malloced_chunk(ms, mem, nb);
   4785       goto postaction;
   4786     }
   4787 
   4788     mem = sys_alloc(ms, nb);
   4789 
   4790   postaction:
   4791     POSTACTION(ms);
   4792     return mem;
   4793   }
   4794 
   4795   return 0;
   4796 }
   4797 
   4798 void mspace_free(mspace msp, void* mem) {
   4799   if (mem != 0) {
   4800     mchunkptr p  = mem2chunk(mem);
   4801 #if FOOTERS
   4802     mstate fm = get_mstate_for(p);
   4803 #else /* FOOTERS */
   4804     mstate fm = (mstate)msp;
   4805 #endif /* FOOTERS */
   4806     if (!ok_magic(fm)) {
   4807       USAGE_ERROR_ACTION(fm, p);
   4808       return;
   4809     }
   4810     if (!PREACTION(fm)) {
   4811       check_inuse_chunk(fm, p);
   4812       if (RTCHECK(ok_address(fm, p) && ok_cinuse(p))) {
   4813         size_t psize = chunksize(p);
   4814         mchunkptr next = chunk_plus_offset(p, psize);
   4815         if (!pinuse(p)) {
   4816           size_t prevsize = p->prev_foot;
   4817           if ((prevsize & IS_MMAPPED_BIT) != 0) {
   4818             prevsize &= ~IS_MMAPPED_BIT;
   4819             psize += prevsize + MMAP_FOOT_PAD;
   4820             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
   4821               fm->footprint -= psize;
   4822             goto postaction;
   4823           }
   4824           else {
   4825             mchunkptr prev = chunk_minus_offset(p, prevsize);
   4826             psize += prevsize;
   4827             p = prev;
   4828             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
   4829               if (p != fm->dv) {
   4830                 unlink_chunk(fm, p, prevsize);
   4831               }
   4832               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
   4833                 fm->dvsize = psize;
   4834                 set_free_with_pinuse(p, psize, next);
   4835                 goto postaction;
   4836               }
   4837             }
   4838             else
   4839               goto erroraction;
   4840           }
   4841         }
   4842 
   4843         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
   4844           if (!cinuse(next)) {  /* consolidate forward */
   4845             if (next == fm->top) {
   4846               size_t tsize = fm->topsize += psize;
   4847               fm->top = p;
   4848               p->head = tsize | PINUSE_BIT;
   4849               if (p == fm->dv) {
   4850                 fm->dv = 0;
   4851                 fm->dvsize = 0;
   4852               }
   4853               if (should_trim(fm, tsize))
   4854                 sys_trim(fm, 0);
   4855               goto postaction;
   4856             }
   4857             else if (next == fm->dv) {
   4858               size_t dsize = fm->dvsize += psize;
   4859               fm->dv = p;
   4860               set_size_and_pinuse_of_free_chunk(p, dsize);
   4861               goto postaction;
   4862             }
   4863             else {
   4864               size_t nsize = chunksize(next);
   4865               psize += nsize;
   4866               unlink_chunk(fm, next, nsize);
   4867               set_size_and_pinuse_of_free_chunk(p, psize);
   4868               if (p == fm->dv) {
   4869                 fm->dvsize = psize;
   4870                 goto postaction;
   4871               }
   4872             }
   4873           }
   4874           else
   4875             set_free_with_pinuse(p, psize, next);
   4876           insert_chunk(fm, p, psize);
   4877           check_free_chunk(fm, p);
   4878           goto postaction;
   4879         }
   4880       }
   4881     erroraction:
   4882       USAGE_ERROR_ACTION(fm, p);
   4883     postaction:
   4884       POSTACTION(fm);
   4885     }
   4886   }
   4887 }
   4888 
   4889 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
   4890   void *mem;
   4891   mstate ms = (mstate)msp;
   4892   if (!ok_magic(ms)) {
   4893     USAGE_ERROR_ACTION(ms,ms);
   4894     return 0;
   4895   }
   4896   if (n_elements && MAX_SIZE_T / n_elements < elem_size) {
   4897     /* Fail on overflow */
   4898     MALLOC_FAILURE_ACTION;
   4899     return NULL;
   4900   }
   4901   elem_size *= n_elements;
   4902   mem = internal_malloc(ms, elem_size);
   4903   if (mem && calloc_must_clear(mem2chunk(mem)))
   4904     memset(mem, 0, elem_size);
   4905   return mem;
   4906 }
   4907 
   4908 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
   4909   if (oldmem == 0)
   4910     return mspace_malloc(msp, bytes);
   4911 #ifdef REALLOC_ZERO_BYTES_FREES
   4912   if (bytes == 0) {
   4913     mspace_free(msp, oldmem);
   4914     return 0;
   4915   }
   4916 #endif /* REALLOC_ZERO_BYTES_FREES */
   4917   else {
   4918 #if FOOTERS
   4919     mchunkptr p  = mem2chunk(oldmem);
   4920     mstate ms = get_mstate_for(p);
   4921 #else /* FOOTERS */
   4922     mstate ms = (mstate)msp;
   4923 #endif /* FOOTERS */
   4924     if (!ok_magic(ms)) {
   4925       USAGE_ERROR_ACTION(ms,ms);
   4926       return 0;
   4927     }
   4928     return internal_realloc(ms, oldmem, bytes);
   4929   }
   4930 }
   4931 
   4932 #if ANDROID
   4933 void* mspace_merge_objects(mspace msp, void* mema, void* memb)
   4934 {
   4935   /* PREACTION/POSTACTION aren't necessary because we are only
   4936      modifying fields of inuse chunks owned by the current thread, in
   4937      which case no other malloc operations can touch them.
   4938    */
   4939   if (mema == NULL || memb == NULL) {
   4940     return NULL;
   4941   }
   4942   mchunkptr pa = mem2chunk(mema);
   4943   mchunkptr pb = mem2chunk(memb);
   4944 
   4945 #if FOOTERS
   4946   mstate fm = get_mstate_for(pa);
   4947 #else /* FOOTERS */
   4948   mstate fm = (mstate)msp;
   4949 #endif /* FOOTERS */
   4950   if (!ok_magic(fm)) {
   4951     USAGE_ERROR_ACTION(fm, pa);
   4952     return NULL;
   4953   }
   4954   check_inuse_chunk(fm, pa);
   4955   if (RTCHECK(ok_address(fm, pa) && ok_cinuse(pa))) {
   4956     if (next_chunk(pa) != pb) {
   4957       /* Since pb may not be in fm, we can't check ok_address(fm, pb);
   4958          since ok_cinuse(pb) would be unsafe before an address check,
   4959          return NULL rather than invoke USAGE_ERROR_ACTION if pb is not
   4960          in use or is a bogus address.
   4961        */
   4962       return NULL;
   4963     }
   4964     /* Since b follows a, they share the mspace. */
   4965 #if FOOTERS
   4966     assert(fm == get_mstate_for(pb));
   4967 #endif /* FOOTERS */
   4968     check_inuse_chunk(fm, pb);
   4969     if (RTCHECK(ok_address(fm, pb) && ok_cinuse(pb))) {
   4970       size_t sz = chunksize(pb);
   4971       pa->head += sz;
   4972       /* Make sure pa still passes. */
   4973       check_inuse_chunk(fm, pa);
   4974       return mema;
   4975     }
   4976     else {
   4977       USAGE_ERROR_ACTION(fm, pb);
   4978       return NULL;
   4979     }
   4980   }
   4981   else {
   4982     USAGE_ERROR_ACTION(fm, pa);
   4983     return NULL;
   4984   }
   4985 }
   4986 #endif /* ANDROID */
   4987 
   4988 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
   4989   mstate ms = (mstate)msp;
   4990   if (!ok_magic(ms)) {
   4991     USAGE_ERROR_ACTION(ms,ms);
   4992     return 0;
   4993   }
   4994   return internal_memalign(ms, alignment, bytes);
   4995 }
   4996 
   4997 void** mspace_independent_calloc(mspace msp, size_t n_elements,
   4998                                  size_t elem_size, void* chunks[]) {
   4999   size_t sz = elem_size; /* serves as 1-element array */
   5000   mstate ms = (mstate)msp;
   5001   if (!ok_magic(ms)) {
   5002     USAGE_ERROR_ACTION(ms,ms);
   5003     return 0;
   5004   }
   5005   return ialloc(ms, n_elements, &sz, 3, chunks);
   5006 }
   5007 
   5008 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
   5009                                    size_t sizes[], void* chunks[]) {
   5010   mstate ms = (mstate)msp;
   5011   if (!ok_magic(ms)) {
   5012     USAGE_ERROR_ACTION(ms,ms);
   5013     return 0;
   5014   }
   5015   return ialloc(ms, n_elements, sizes, 0, chunks);
   5016 }
   5017 
   5018 int mspace_trim(mspace msp, size_t pad) {
   5019   int result = 0;
   5020   mstate ms = (mstate)msp;
   5021   if (ok_magic(ms)) {
   5022     if (!PREACTION(ms)) {
   5023       result = sys_trim(ms, pad);
   5024       POSTACTION(ms);
   5025     }
   5026   }
   5027   else {
   5028     USAGE_ERROR_ACTION(ms,ms);
   5029   }
   5030   return result;
   5031 }
   5032 
   5033 void mspace_malloc_stats(mspace msp) {
   5034   mstate ms = (mstate)msp;
   5035   if (ok_magic(ms)) {
   5036     internal_malloc_stats(ms);
   5037   }
   5038   else {
   5039     USAGE_ERROR_ACTION(ms,ms);
   5040   }
   5041 }
   5042 
   5043 size_t mspace_footprint(mspace msp) {
   5044   size_t result;
   5045   mstate ms = (mstate)msp;
   5046   if (ok_magic(ms)) {
   5047     result = ms->footprint;
   5048   }
   5049   else {
   5050     USAGE_ERROR_ACTION(ms,ms);
   5051   }
   5052   return result;
   5053 }
   5054 
   5055 #if USE_MAX_ALLOWED_FOOTPRINT
   5056 size_t mspace_max_allowed_footprint(mspace msp) {
   5057   size_t result;
   5058   mstate ms = (mstate)msp;
   5059   if (ok_magic(ms)) {
   5060     result = ms->max_allowed_footprint;
   5061   }
   5062   else {
   5063     USAGE_ERROR_ACTION(ms,ms);
   5064   }
   5065   return result;
   5066 }
   5067 
   5068 void mspace_set_max_allowed_footprint(mspace msp, size_t bytes) {
   5069   mstate ms = (mstate)msp;
   5070   if (ok_magic(ms)) {
   5071     if (bytes > ms->footprint) {
   5072       /* Increase the size in multiples of the granularity,
   5073        * which is the smallest unit we request from the system.
   5074        */
   5075       ms->max_allowed_footprint = ms->footprint +
   5076                                   granularity_align(bytes - ms->footprint);
   5077     }
   5078     else {
   5079       //TODO: allow for reducing the max footprint
   5080       ms->max_allowed_footprint = ms->footprint;
   5081     }
   5082   }
   5083   else {
   5084     USAGE_ERROR_ACTION(ms,ms);
   5085   }
   5086 }
   5087 #endif
   5088 
   5089 size_t mspace_max_footprint(mspace msp) {
   5090   size_t result;
   5091   mstate ms = (mstate)msp;
   5092   if (ok_magic(ms)) {
   5093     result = ms->max_footprint;
   5094   }
   5095   else {
   5096     USAGE_ERROR_ACTION(ms,ms);
   5097   }
   5098   return result;
   5099 }
   5100 
   5101 
   5102 #if !NO_MALLINFO
   5103 struct mallinfo mspace_mallinfo(mspace msp) {
   5104   mstate ms = (mstate)msp;
   5105   if (!ok_magic(ms)) {
   5106     USAGE_ERROR_ACTION(ms,ms);
   5107   }
   5108   return internal_mallinfo(ms);
   5109 }
   5110 #endif /* NO_MALLINFO */
   5111 
   5112 int mspace_mallopt(int param_number, int value) {
   5113   return change_mparam(param_number, value);
   5114 }
   5115 
   5116 #endif /* MSPACES */
   5117 
   5118 #if MSPACES && ONLY_MSPACES
   5119 void mspace_walk_free_pages(mspace msp,
   5120     void(*handler)(void *start, void *end, void *arg), void *harg)
   5121 {
   5122   mstate m = (mstate)msp;
   5123   if (!ok_magic(m)) {
   5124     USAGE_ERROR_ACTION(m,m);
   5125     return;
   5126   }
   5127 #else
   5128 void dlmalloc_walk_free_pages(void(*handler)(void *start, void *end, void *arg),
   5129     void *harg)
   5130 {
   5131   mstate m = (mstate)gm;
   5132 #endif
   5133   if (!PREACTION(m)) {
   5134     if (is_initialized(m)) {
   5135       msegmentptr s = &m->seg;
   5136       while (s != 0) {
   5137         mchunkptr p = align_as_chunk(s->base);
   5138         while (segment_holds(s, p) &&
   5139                p != m->top && p->head != FENCEPOST_HEAD) {
   5140           void *chunkptr, *userptr;
   5141           size_t chunklen, userlen;
   5142           chunkptr = p;
   5143           chunklen = chunksize(p);
   5144           if (!cinuse(p)) {
   5145             void *start;
   5146             if (is_small(chunklen)) {
   5147               start = (void *)(p + 1);
   5148             }
   5149             else {
   5150               start = (void *)((tchunkptr)p + 1);
   5151             }
   5152             handler(start, next_chunk(p), harg);
   5153           }
   5154           p = next_chunk(p);
   5155         }
   5156         if (p == m->top) {
   5157           handler((void *)(p + 1), next_chunk(p), harg);
   5158         }
   5159         s = s->next;
   5160       }
   5161     }
   5162     POSTACTION(m);
   5163   }
   5164 }
   5165 
   5166 
   5167 #if MSPACES && ONLY_MSPACES
   5168 void mspace_walk_heap(mspace msp,
   5169                       void(*handler)(const void *chunkptr, size_t chunklen,
   5170                                      const void *userptr, size_t userlen,
   5171                                      void *arg),
   5172                       void *harg)
   5173 {
   5174   msegmentptr s;
   5175   mstate m = (mstate)msp;
   5176   if (!ok_magic(m)) {
   5177     USAGE_ERROR_ACTION(m,m);
   5178     return;
   5179   }
   5180 #else
   5181 void dlmalloc_walk_heap(void(*handler)(const void *chunkptr, size_t chunklen,
   5182                                        const void *userptr, size_t userlen,
   5183                                        void *arg),
   5184                         void *harg)
   5185 {
   5186   msegmentptr s;
   5187   mstate m = (mstate)gm;
   5188 #endif
   5189 
   5190   s = &m->seg;
   5191   while (s != 0) {
   5192     mchunkptr p = align_as_chunk(s->base);
   5193     while (segment_holds(s, p) &&
   5194            p != m->top && p->head != FENCEPOST_HEAD) {
   5195       void *chunkptr, *userptr;
   5196       size_t chunklen, userlen;
   5197       chunkptr = p;
   5198       chunklen = chunksize(p);
   5199       if (cinuse(p)) {
   5200         userptr = chunk2mem(p);
   5201         userlen = chunklen - overhead_for(p);
   5202       }
   5203       else {
   5204         userptr = NULL;
   5205         userlen = 0;
   5206       }
   5207       handler(chunkptr, chunklen, userptr, userlen, harg);
   5208       p = next_chunk(p);
   5209     }
   5210     if (p == m->top) {
   5211       /* The top chunk is just a big free chunk for our purposes.
   5212        */
   5213       handler(m->top, m->topsize, NULL, 0, harg);
   5214     }
   5215     s = s->next;
   5216   }
   5217 }
   5218 
   5219 /* -------------------- Alternative MORECORE functions ------------------- */
   5220 
   5221 /*
   5222   Guidelines for creating a custom version of MORECORE:
   5223 
   5224   * For best performance, MORECORE should allocate in multiples of pagesize.
   5225   * MORECORE may allocate more memory than requested. (Or even less,
   5226       but this will usually result in a malloc failure.)
   5227   * MORECORE must not allocate memory when given argument zero, but
   5228       instead return one past the end address of memory from previous
   5229       nonzero call.
   5230   * For best performance, consecutive calls to MORECORE with positive
   5231       arguments should return increasing addresses, indicating that
   5232       space has been contiguously extended.
   5233   * Even though consecutive calls to MORECORE need not return contiguous
   5234       addresses, it must be OK for malloc'ed chunks to span multiple
   5235       regions in those cases where they do happen to be contiguous.
   5236   * MORECORE need not handle negative arguments -- it may instead
   5237       just return MFAIL when given negative arguments.
   5238       Negative arguments are always multiples of pagesize. MORECORE
   5239       must not misinterpret negative args as large positive unsigned
   5240       args. You can suppress all such calls from even occurring by defining
   5241       MORECORE_CANNOT_TRIM,
   5242 
   5243   As an example alternative MORECORE, here is a custom allocator
   5244   kindly contributed for pre-OSX macOS.  It uses virtually but not
   5245   necessarily physically contiguous non-paged memory (locked in,
   5246   present and won't get swapped out).  You can use it by uncommenting
   5247   this section, adding some #includes, and setting up the appropriate
   5248   defines above:
   5249 
   5250       #define MORECORE osMoreCore
   5251 
   5252   There is also a shutdown routine that should somehow be called for
   5253   cleanup upon program exit.
   5254 
   5255   #define MAX_POOL_ENTRIES 100
   5256   #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
   5257   static int next_os_pool;
   5258   void *our_os_pools[MAX_POOL_ENTRIES];
   5259 
   5260   void *osMoreCore(int size)
   5261   {
   5262     void *ptr = 0;
   5263     static void *sbrk_top = 0;
   5264 
   5265     if (size > 0)
   5266     {
   5267       if (size < MINIMUM_MORECORE_SIZE)
   5268          size = MINIMUM_MORECORE_SIZE;
   5269       if (CurrentExecutionLevel() == kTaskLevel)
   5270          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
   5271       if (ptr == 0)
   5272       {
   5273         return (void *) MFAIL;
   5274       }
   5275       // save ptrs so they can be freed during cleanup
   5276       our_os_pools[next_os_pool] = ptr;
   5277       next_os_pool++;
   5278       ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
   5279       sbrk_top = (char *) ptr + size;
   5280       return ptr;
   5281     }
   5282     else if (size < 0)
   5283     {
   5284       // we don't currently support shrink behavior
   5285       return (void *) MFAIL;
   5286     }
   5287     else
   5288     {
   5289       return sbrk_top;
   5290     }
   5291   }
   5292 
   5293   // cleanup any allocated memory pools
   5294   // called as last thing before shutting down driver
   5295 
   5296   void osCleanupMem(void)
   5297   {
   5298     void **ptr;
   5299 
   5300     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
   5301       if (*ptr)
   5302       {
   5303          PoolDeallocate(*ptr);
   5304          *ptr = 0;
   5305       }
   5306   }
   5307 
   5308 */
   5309 
   5310 
   5311 /* -----------------------------------------------------------------------
   5312 History:
   5313     V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
   5314       * Add max_footprint functions
   5315       * Ensure all appropriate literals are size_t
   5316       * Fix conditional compilation problem for some #define settings
   5317       * Avoid concatenating segments with the one provided
   5318         in create_mspace_with_base
   5319       * Rename some variables to avoid compiler shadowing warnings
   5320       * Use explicit lock initialization.
   5321       * Better handling of sbrk interference.
   5322       * Simplify and fix segment insertion, trimming and mspace_destroy
   5323       * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
   5324       * Thanks especially to Dennis Flanagan for help on these.
   5325 
   5326     V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
   5327       * Fix memalign brace error.
   5328 
   5329     V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
   5330       * Fix improper #endif nesting in C++
   5331       * Add explicit casts needed for C++
   5332 
   5333     V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
   5334       * Use trees for large bins
   5335       * Support mspaces
   5336       * Use segments to unify sbrk-based and mmap-based system allocation,
   5337         removing need for emulation on most platforms without sbrk.
   5338       * Default safety checks
   5339       * Optional footer checks. Thanks to William Robertson for the idea.
   5340       * Internal code refactoring
   5341       * Incorporate suggestions and platform-specific changes.
   5342         Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
   5343         Aaron Bachmann,  Emery Berger, and others.
   5344       * Speed up non-fastbin processing enough to remove fastbins.
   5345       * Remove useless cfree() to avoid conflicts with other apps.
   5346       * Remove internal memcpy, memset. Compilers handle builtins better.
   5347       * Remove some options that no one ever used and rename others.
   5348 
   5349     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
   5350       * Fix malloc_state bitmap array misdeclaration
   5351 
   5352     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
   5353       * Allow tuning of FIRST_SORTED_BIN_SIZE
   5354       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
   5355       * Better detection and support for non-contiguousness of MORECORE.
   5356         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
   5357       * Bypass most of malloc if no frees. Thanks To Emery Berger.
   5358       * Fix freeing of old top non-contiguous chunk im sysmalloc.
   5359       * Raised default trim and map thresholds to 256K.
   5360       * Fix mmap-related #defines. Thanks to Lubos Lunak.
   5361       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
   5362       * Branch-free bin calculation
   5363       * Default trim and mmap thresholds now 256K.
   5364 
   5365     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
   5366       * Introduce independent_comalloc and independent_calloc.
   5367         Thanks to Michael Pachos for motivation and help.
   5368       * Make optional .h file available
   5369       * Allow > 2GB requests on 32bit systems.
   5370       * new WIN32 sbrk, mmap, munmap, lock code from <Walter (at) GeNeSys-e.de>.
   5371         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
   5372         and Anonymous.
   5373       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
   5374         helping test this.)
   5375       * memalign: check alignment arg
   5376       * realloc: don't try to shift chunks backwards, since this
   5377         leads to  more fragmentation in some programs and doesn't
   5378         seem to help in any others.
   5379       * Collect all cases in malloc requiring system memory into sysmalloc
   5380       * Use mmap as backup to sbrk
   5381       * Place all internal state in malloc_state
   5382       * Introduce fastbins (although similar to 2.5.1)
   5383       * Many minor tunings and cosmetic improvements
   5384       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
   5385       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
   5386         Thanks to Tony E. Bennett <tbennett (at) nvidia.com> and others.
   5387       * Include errno.h to support default failure action.
   5388 
   5389     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
   5390       * return null for negative arguments
   5391       * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
   5392          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
   5393           (e.g. WIN32 platforms)
   5394          * Cleanup header file inclusion for WIN32 platforms
   5395          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
   5396          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
   5397            memory allocation routines
   5398          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
   5399          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
   5400            usage of 'assert' in non-WIN32 code
   5401          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
   5402            avoid infinite loop
   5403       * Always call 'fREe()' rather than 'free()'
   5404 
   5405     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
   5406       * Fixed ordering problem with boundary-stamping
   5407 
   5408     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
   5409       * Added pvalloc, as recommended by H.J. Liu
   5410       * Added 64bit pointer support mainly from Wolfram Gloger
   5411       * Added anonymously donated WIN32 sbrk emulation
   5412       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
   5413       * malloc_extend_top: fix mask error that caused wastage after
   5414         foreign sbrks
   5415       * Add linux mremap support code from HJ Liu
   5416 
   5417     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
   5418       * Integrated most documentation with the code.
   5419       * Add support for mmap, with help from
   5420         Wolfram Gloger (Gloger (at) lrz.uni-muenchen.de).
   5421       * Use last_remainder in more cases.
   5422       * Pack bins using idea from  colin (at) nyx10.cs.du.edu
   5423       * Use ordered bins instead of best-fit threshhold
   5424       * Eliminate block-local decls to simplify tracing and debugging.
   5425       * Support another case of realloc via move into top
   5426       * Fix error occuring when initial sbrk_base not word-aligned.
   5427       * Rely on page size for units instead of SBRK_UNIT to
   5428         avoid surprises about sbrk alignment conventions.
   5429       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
   5430         (raymond (at) es.ele.tue.nl) for the suggestion.
   5431       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
   5432       * More precautions for cases where other routines call sbrk,
   5433         courtesy of Wolfram Gloger (Gloger (at) lrz.uni-muenchen.de).
   5434       * Added macros etc., allowing use in linux libc from
   5435         H.J. Lu (hjl (at) gnu.ai.mit.edu)
   5436       * Inverted this history list
   5437 
   5438     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
   5439       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
   5440       * Removed all preallocation code since under current scheme
   5441         the work required to undo bad preallocations exceeds
   5442         the work saved in good cases for most test programs.
   5443       * No longer use return list or unconsolidated bins since
   5444         no scheme using them consistently outperforms those that don't
   5445         given above changes.
   5446       * Use best fit for very large chunks to prevent some worst-cases.
   5447       * Added some support for debugging
   5448 
   5449     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
   5450       * Removed footers when chunks are in use. Thanks to
   5451         Paul Wilson (wilson (at) cs.texas.edu) for the suggestion.
   5452 
   5453     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
   5454       * Added malloc_trim, with help from Wolfram Gloger
   5455         (wmglo (at) Dent.MED.Uni-Muenchen.DE).
   5456 
   5457     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
   5458 
   5459     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
   5460       * realloc: try to expand in both directions
   5461       * malloc: swap order of clean-bin strategy;
   5462       * realloc: only conditionally expand backwards
   5463       * Try not to scavenge used bins
   5464       * Use bin counts as a guide to preallocation
   5465       * Occasionally bin return list chunks in first scan
   5466       * Add a few optimizations from colin (at) nyx10.cs.du.edu
   5467 
   5468     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
   5469       * faster bin computation & slightly different binning
   5470       * merged all consolidations to one part of malloc proper
   5471          (eliminating old malloc_find_space & malloc_clean_bin)
   5472       * Scan 2 returns chunks (not just 1)
   5473       * Propagate failure in realloc if malloc returns 0
   5474       * Add stuff to allow compilation on non-ANSI compilers
   5475           from kpv (at) research.att.com
   5476 
   5477     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
   5478       * removed potential for odd address access in prev_chunk
   5479       * removed dependency on getpagesize.h
   5480       * misc cosmetics and a bit more internal documentation
   5481       * anticosmetics: mangled names in macros to evade debugger strangeness
   5482       * tested on sparc, hp-700, dec-mips, rs6000
   5483           with gcc & native cc (hp, dec only) allowing
   5484           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
   5485 
   5486     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
   5487       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
   5488          structure of old version,  but most details differ.)
   5489 
   5490 */
   5491