      1 /*
      2   This is a version (aka dlmalloc) of malloc/free/realloc written by
      3   Doug Lea and released to the public domain, as explained at
      4   http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
      5   comments, complaints, performance data, etc to dl (at) cs.oswego.edu
      6 
      7 * Version 2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
      8    Note: There may be an updated version of this malloc obtainable at
      9            ftp://gee.cs.oswego.edu/pub/misc/malloc.c
     10          Check before installing!
     11 
     12 * Quickstart
     13 
     14   This library is all in one file to simplify the most common usage:
     15   ftp it, compile it (-O3), and link it into another program. All of
     16   the compile-time options default to reasonable values for use on
     17   most platforms.  You might later want to step through various
     18   compile-time and dynamic tuning options.
     19 
     20   For convenience, an include file for code using this malloc is at:
     21      ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
     22   You don't really need this .h file unless you call functions not
     23   defined in your system include files.  The .h file contains only the
     24   excerpts from this file needed for using this malloc on ANSI C/C++
     25   systems, so long as you haven't changed compile-time options about
     26   naming and tuning parameters.  If you do, then you can create your
     27   own malloc.h that does include all settings by cutting at the point
     28   indicated below. Note that you may already by default be using a C
     29   library containing a malloc that is based on some version of this
     30   malloc (for example in linux). You might still want to use the one
     31   in this file to customize settings or to avoid overheads associated
     32   with library versions.
     33 
     34 * Vital statistics:
     35 
     36   Supported pointer/size_t representation:       4 or 8 bytes
     37        size_t MUST be an unsigned type of the same width as
     38        pointers. (If you are using an ancient system that declares
     39        size_t as a signed type, or need it to be a different width
     40        than pointers, you can use a previous release of this malloc
     41        (e.g. 2.7.2) supporting these.)
     42 
     43   Alignment:                                     8 bytes (minimum)
     44        This suffices for nearly all current machines and C compilers.
     45        However, you can define MALLOC_ALIGNMENT to be wider than this
     46        if necessary (up to 128bytes), at the expense of using more space.
     47 
     48   Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
     49                                           8 or 16 bytes (if 8byte sizes)
     50        Each malloced chunk has a hidden word of overhead holding size
     51        and status information, and an additional cross-check word
     52        if FOOTERS is defined.
     53 
     54   Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
     55                           8-byte ptrs:  32 bytes    (including overhead)
     56 
     57        Even a request for zero bytes (i.e., malloc(0)) returns a
     58        pointer to something of the minimum allocatable size.
     59        The maximum overhead wastage (i.e., the number of extra bytes
     60        allocated beyond those requested in malloc) is less than or equal
     61        to the minimum size, except for requests >= mmap_threshold that
     62        are serviced via mmap(), where the worst case wastage is about
     63        32 bytes plus the remainder from a system page (the minimal
     64        mmap unit); typically 4096 or 8192 bytes.
     65 
     66   Security: static-safe; optionally more or less
     67        The "security" of malloc refers to the ability of malicious
     68        code to accentuate the effects of errors (for example, freeing
     69        space that is not currently malloc'ed or overwriting past the
     70        ends of chunks) in code that calls malloc.  This malloc
     71        guarantees not to modify any memory locations below the base of
     72        heap, i.e., static variables, even in the presence of usage
     73        errors.  The routines additionally detect most improper frees
     74        and reallocs.  All this holds as long as the static bookkeeping
     75        for malloc itself is not corrupted by some other means.  This
     76        is only one aspect of security -- these checks do not, and
     77        cannot, detect all possible programming errors.
     78 
     79        If FOOTERS is defined nonzero, then each allocated chunk
     80        carries an additional check word to verify that it was malloced
     81        from its space.  These check words are the same within each
     82        execution of a program using malloc, but differ across
     83        executions, so externally crafted fake chunks cannot be
     84        freed. This improves security by rejecting frees/reallocs that
     85        could corrupt heap memory, in addition to the checks preventing
     86        writes to statics that are always on.  This may further improve
     87        security at the expense of time and space overhead.  (Note that
     88        FOOTERS may also be worth using with MSPACES.)
     89 
     90        By default detected errors cause the program to abort (calling
     91        "abort()"). You can override this to instead proceed past
     92        errors by defining PROCEED_ON_ERROR.  In this case, a bad free
     93        has no effect, and a malloc that encounters a bad address
     94        caused by user overwrites will ignore the bad address by
     95        dropping pointers and indices to all known memory. This may
     96        be appropriate for programs that should continue if at all
     97        possible in the face of programming errors, although they may
     98        run out of memory because dropped memory is never reclaimed.
     99 
    100        If you don't like either of these options, you can define
    101        CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
    102        else. And if you are sure that your program using malloc has
    103        no errors or vulnerabilities, you can define INSECURE to 1,
    104        which might (or might not) provide a small performance improvement.
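               For illustration only (a sketch; the handler name my_heap_panic is
               hypothetical, and the argument lists assume the macro forms used
               later in this file), such overrides might look like:
                 #define CORRUPTION_ERROR_ACTION(m)  my_heap_panic("heap corrupt", 0)
                 #define USAGE_ERROR_ACTION(m, p)    my_heap_panic("bad free", (p))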
    105 
    106        It is also possible to limit the maximum total allocatable
    107        space, using malloc_set_footprint_limit. This is not
    108        designed as a security feature in itself (calls to set limits
    109        are not screened or privileged), but may be useful as one
    110        aspect of a secure implementation.
    111 
    112   Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
    113        When USE_LOCKS is defined, each public call to malloc, free,
    114        etc is surrounded with a lock. By default, this uses a plain
    115        pthread mutex, win32 critical section, or a spin-lock if
    116        available for the platform and not disabled by setting
    117        USE_SPIN_LOCKS=0.  However, if USE_RECURSIVE_LOCKS is defined,
    118        recursive versions are used instead (which are not required for
    119        base functionality but may be needed in layered extensions).
    120        Using a global lock is not especially fast, and can be a major
    121        bottleneck.  It is designed only to provide minimal protection
    122        in concurrent environments, and to provide a basis for
    123        extensions.  If you are using malloc in a concurrent program,
    124        consider instead using nedmalloc
    125        (http://www.nedprod.com/programs/portable/nedmalloc/) or
    126        ptmalloc (See http://www.malloc.de), which are derived from
    127        versions of this malloc.
    128 
    129   System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
    130        This malloc can use unix sbrk or any emulation (invoked using
    131        the CALL_MORECORE macro) and/or mmap/munmap or any emulation
    132        (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
    133        memory.  On most unix systems, it tends to work best if both
    134        MORECORE and MMAP are enabled.  On Win32, it uses emulations
    135        based on VirtualAlloc. It also uses common C library functions
    136        like memset.
    137 
    138   Compliance: I believe it is compliant with the Single Unix Specification
    139        (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
    140        others as well.
    141 
    142 * Overview of algorithms
    143 
    144   This is not the fastest, most space-conserving, most portable, or
    145   most tunable malloc ever written. However it is among the fastest
    146   while also being among the most space-conserving, portable and
    147   tunable.  Consistent balance across these factors results in a good
    148   general-purpose allocator for malloc-intensive programs.
    149 
    150   In most ways, this malloc is a best-fit allocator. Generally, it
    151   chooses the best-fitting existing chunk for a request, with ties
    152   broken in approximately least-recently-used order. (This strategy
    153   normally maintains low fragmentation.) However, for requests less
    154   than 256bytes, it deviates from best-fit when there is not an
    155   exactly fitting available chunk by preferring to use space adjacent
    156   to that used for the previous small request, as well as by breaking
    157   ties in approximately most-recently-used order. (These enhance
    158   locality of series of small allocations.)  And for very large requests
    159   (>= 256Kb by default), it relies on system memory mapping
    160   facilities, if supported.  (This helps avoid carrying around and
    161   possibly fragmenting memory used only for large chunks.)
    162 
    163   All operations (except malloc_stats and mallinfo) have execution
    164   times that are bounded by a constant factor of the number of bits in
    165   a size_t, not counting any clearing in calloc or copying in realloc,
    166   or actions surrounding MORECORE and MMAP that have times
    167   proportional to the number of non-contiguous regions returned by
    168   system allocation routines, which is often just 1. In real-time
    169   applications, you can optionally suppress segment traversals using
    170   NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
    171   system allocators return non-contiguous spaces, at the typical
    172   expense of carrying around more memory and increased fragmentation.
    173 
    174   The implementation is not very modular and seriously overuses
    175   macros. Perhaps someday all C compilers will do as good a job
    176   inlining modular code as can now be done by brute-force expansion,
    177   but now, enough of them seem not to.
    178   but for now, enough of them seem not to.
    179   Some compilers issue a lot of warnings about code that is
    180   dead/unreachable only on some platforms, and also about intentional
    181   uses of negation on unsigned types. All known cases of each can be
    182   ignored.
    183 
    184   For a longer but out of date high-level description, see
    185      http://gee.cs.oswego.edu/dl/html/malloc.html
    186 
    187 * MSPACES
    188   If MSPACES is defined, then in addition to malloc, free, etc.,
    189   this file also defines mspace_malloc, mspace_free, etc. These
    190   are versions of malloc routines that take an "mspace" argument
    191   obtained using create_mspace, to control all internal bookkeeping.
    192   If ONLY_MSPACES is defined, only these versions are compiled.
    193   So if you would like to use this allocator for only some allocations,
    194   and your system malloc for others, you can compile with
    195   ONLY_MSPACES and then do something like...
    196     static mspace mymspace = create_mspace(0,0); // for example
    197     #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)
    198 
    199   (Note: If you only need one instance of an mspace, you can instead
    200   use "USE_DL_PREFIX" to relabel the global malloc.)
    201 
    202   You can similarly create thread-local allocators by storing
    203   mspaces as thread-locals. For example:
    204     static __thread mspace tlms = 0;
    205     void*  tlmalloc(size_t bytes) {
    206       if (tlms == 0) tlms = create_mspace(0, 0);
    207       return mspace_malloc(tlms, bytes);
    208     }
    209     void  tlfree(void* mem) { mspace_free(tlms, mem); }
    210 
    211   Unless FOOTERS is defined, each mspace is completely independent.
    212   You cannot allocate from one and free to another (although
    213   conformance is only weakly checked, so usage errors are not always
    214   caught). If FOOTERS is defined, then each chunk carries around a tag
    215   indicating its originating mspace, and frees are directed to their
    216   originating spaces. Normally, this requires use of locks.
    217 
    218  -------------------------  Compile-time options ---------------------------
    219 
    220 Be careful in setting #define values for numerical constants of type
    221 size_t. On some systems, literal values are not automatically extended
    222 to size_t precision unless they are explicitly casted. You can also
    223 use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
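        For example (a sketch mirroring the style of the defaults below), an
        explicitly sized constant can be written as:
          #define DEFAULT_MMAP_THRESHOLD ((size_t)1024U * (size_t)1024U) /* 1MB */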
    224 
    225 WIN32                    default: defined if _WIN32 defined
    226   Defining WIN32 sets up defaults for MS environment and compilers.
    227   Otherwise defaults are for unix. Beware that there seem to be some
    228   cases where this malloc might not be a pure drop-in replacement for
    229   Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
    230   SetDIBits()) may be due to bugs in some video driver implementations
    231   when pixel buffers are malloc()ed, and the region spans more than
    232   one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
    233   default granularity, pixel buffers may straddle virtual allocation
    234   regions more often than when using the Microsoft allocator.  You can
    235   avoid this by using VirtualAlloc() and VirtualFree() for all pixel
    236   buffers rather than using malloc().  If this is not possible,
    237   recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
    238   in cases where MSC and gcc (cygwin) are known to differ on WIN32,
    239   conditions use _MSC_VER to distinguish them.
    240 
    241 DLMALLOC_EXPORT       default: extern
    242   Defines how public APIs are declared. If you want to export via a
    243   Windows DLL, you might define this as
    244     #define DLMALLOC_EXPORT extern  __declspec(dllexport)
    245   If you want a POSIX ELF shared object, you might use
    246     #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))
    247 
    248 MALLOC_ALIGNMENT         default: (size_t)(2 * sizeof(void *))
    249   Controls the minimum alignment for malloc'ed chunks.  It must be a
    250   power of two and at least 8, even on machines for which smaller
    251   alignments would suffice. It may be defined as larger than this
    252   though. Note however that code and data structures are optimized for
    253   the case of 8-byte alignment.
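          For example (a sketch, mirroring the Darwin default used later in
          this file), a 16-byte minimum alignment can be requested with:
            #define MALLOC_ALIGNMENT ((size_t)16U)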
    254 
    255 MSPACES                  default: 0 (false)
    256   If true, compile in support for independent allocation spaces.
    257   This is only supported if HAVE_MMAP is true.
    258 
    259 ONLY_MSPACES             default: 0 (false)
    260   If true, only compile in mspace versions, not regular versions.
    261 
    262 USE_LOCKS                default: 0 (false)
    263   Causes each call to each public routine to be surrounded with
    264   pthread or WIN32 mutex lock/unlock. (If set true, this can be
    265   overridden on a per-mspace basis for mspace versions.) If set to a
    266   non-zero value other than 1, locks are used, but their
    267   implementation is left out, so lock functions must be supplied manually,
    268   as described below.
    269 
    270 USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
    271   If true, uses custom spin locks for locking. This is currently
    272   supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
    273   MS compilers.  Otherwise, posix locks or win32 critical sections are
    274   used.
    275 
    276 USE_RECURSIVE_LOCKS      default: not defined
    277   If defined nonzero, uses recursive (aka reentrant) locks, otherwise
    278   uses plain mutexes. This is not required for malloc proper, but may
    279   be needed for layered allocators such as nedmalloc.
    280 
    281 LOCK_AT_FORK            default: not defined
    282   If defined nonzero, performs pthread_atfork upon initialization
    283   to initialize child lock while holding parent lock. The implementation
    284   assumes that pthread locks (not custom locks) are being used. In other
    285   cases, you may need to customize the implementation.
    286 
    287 FOOTERS                  default: 0
    288   If true, provide extra checking and dispatching by placing
    289   information in the footers of allocated chunks. This adds
    290   space and time overhead.
    291 
    292 INSECURE                 default: 0
    293   If true, omit checks for usage errors and heap space overwrites.
    294 
    295 USE_DL_PREFIX            default: NOT defined
    296   Causes compiler to prefix all public routines with the string 'dl'.
    297   This can be useful when you only want to use this malloc in one part
    298   of a program, using your regular system malloc elsewhere.
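          For example (a sketch; strdup simply stands in for any code that uses
          the system malloc), the two allocators can then coexist:
            void* p = dlmalloc(100);      /* from this allocator          */
            char* s = strdup("example");  /* from the system allocator    */
            dlfree(p);                    /* free with the matching one   */
            free(s);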
    299 
    300 MALLOC_INSPECT_ALL       default: NOT defined
    301   If defined, compiles malloc_inspect_all and mspace_inspect_all, that
    302   perform traversal of all heap space.  Unless access to these
    303   functions is otherwise restricted, you probably do not want to
    304   include them in secure implementations.
    305 
    306 ABORT                    default: defined as abort()
    307   Defines how to abort on failed checks.  On most systems, a failed
    308   check cannot die with an "assert" or even print an informative
    309   message, because the underlying print routines in turn call malloc,
    310   which will fail again.  Generally, the best policy is to simply call
    311   abort(). It's not very useful to do more than this because many
    312   errors due to overwriting will show up as address faults (null, odd
    313   addresses etc) rather than malloc-triggered checks, so will also
    314   abort.  Also, most compilers know that abort() does not return, so
    315   can better optimize code conditionally calling it.
    316 
    317 PROCEED_ON_ERROR           default: defined as 0 (false)
    318   Controls whether detected bad addresses cause them to be bypassed
    319   rather than aborting. If set, detected bad arguments to free and
    320   realloc are ignored. And all bookkeeping information is zeroed out
    321   upon a detected overwrite of freed heap space, thus losing the
    322   ability to ever return it from malloc again, but enabling the
    323   application to proceed. If PROCEED_ON_ERROR is defined, the
    324   static variable malloc_corruption_error_count is compiled in
    325   and can be examined to see if errors have occurred. This option
    326   generates slower code than the default abort policy.
    327 
    328 DEBUG                    default: NOT defined
    329   The DEBUG setting is mainly intended for people trying to modify
    330   this code or diagnose problems when porting to new platforms.
    331   However, it may also be able to better isolate user errors than just
    332   using runtime checks.  The assertions in the check routines spell
    333   out in more detail the assumptions and invariants underlying the
    334   algorithms.  The checking is fairly extensive, and will slow down
    335   execution noticeably. Calling malloc_stats or mallinfo with DEBUG
    336   set will attempt to check every non-mmapped allocated and free chunk
    337   in the course of computing the summaries.
    338 
    339 ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
    340   Debugging assertion failures can be nearly impossible if your
    341   version of the assert macro causes malloc to be called, which will
    342   lead to a cascade of further failures, blowing the runtime stack.
    343   ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
    344   which will usually make debugging easier.
    345 
    346 MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
    347   The action to take before "return 0" when malloc fails because no
    348   memory is available.
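          For example (a sketch; my_oom_hook is a hypothetical user-supplied
          function), the default action could be extended like this:
            #define MALLOC_FAILURE_ACTION  { errno = ENOMEM; my_oom_hook(); }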
    349 
    350 HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
    351   True if this system supports sbrk or an emulation of it.
    352 
    353 MORECORE                  default: sbrk
    354   The name of the sbrk-style system routine to call to obtain more
    355   memory.  See below for guidance on writing custom MORECORE
    356   functions. The type of the argument to sbrk/MORECORE varies across
    357   systems.  It cannot be size_t, because it supports negative
    358   arguments, so it is normally the signed type of the same width as
    359   size_t (sometimes declared as "intptr_t").  It doesn't much matter
    360   though. Internally, we only call it with arguments less than half
    361   the max value of a size_t, which should work across all reasonable
    362   possibilities, although sometimes generating compiler warnings.
    363 
    364 MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
    365   If true, take advantage of fact that consecutive calls to MORECORE
    366   with positive arguments always return contiguous increasing
    367   addresses.  This is true of unix sbrk. It does not hurt too much to
    368   set it true anyway, since malloc copes with non-contiguities.
    369   Setting it false when MORECORE is definitely non-contiguous saves the
    370   time and possibly wasted space it would otherwise take to discover this.
    371 
    372 MORECORE_CANNOT_TRIM      default: NOT defined
    373   True if MORECORE cannot release space back to the system when given
    374   negative arguments. This is generally necessary only if you are
    375   using a hand-crafted MORECORE function that cannot handle negative
    376   arguments.
    377 
    378 NO_SEGMENT_TRAVERSAL       default: 0
    379   If non-zero, suppresses traversals of memory segments
    380   returned by either MORECORE or CALL_MMAP. This disables
    381   merging of segments that are contiguous, and selectively
    382   releasing them to the OS if unused, but bounds execution times.
    383 
    384 HAVE_MMAP                 default: 1 (true)
    385   True if this system supports mmap or an emulation of it.  If so, and
    386   HAVE_MORECORE is not true, MMAP is used for all system
    387   allocation. If set and HAVE_MORECORE is true as well, MMAP is
    388   primarily used to directly allocate very large blocks. It is also
    389   used as a backup strategy in cases where MORECORE fails to provide
    390   space from system. Note: A single call to MUNMAP is assumed to be
    391   able to unmap memory that may have been allocated using multiple calls
    392   to MMAP, so long as they are adjacent.
    393 
    394 HAVE_MREMAP               default: 1 on linux, else 0
    395   If true realloc() uses mremap() to re-allocate large blocks and
    396   extend or shrink allocation spaces.
    397 
    398 MMAP_CLEARS               default: 1 except on WINCE.
    399   True if mmap clears memory so calloc doesn't need to. This is true
    400   for standard unix mmap using /dev/zero and on WIN32 except for WINCE.
    401 
    402 USE_BUILTIN_FFS            default: 0 (i.e., not used)
    403   Causes malloc to use the builtin ffs() function to compute indices.
    404   Some compilers may recognize and intrinsify ffs to be faster than the
    405   supplied C version. Also, the case of x86 using gcc is special-cased
    406   to an asm instruction, so is already as fast as it can be, and so
    407   this setting has no effect. Similarly for Win32 under recent MS compilers.
    408   (On most x86s, the asm version is only slightly faster than the C version.)
    409 
    410 malloc_getpagesize         default: derive from system includes, or 4096.
    411   The system page size. To the extent possible, this malloc manages
    412   memory from the system in page-size units.  This may be (and
    413   usually is) a function rather than a constant. This is ignored
    414   if WIN32, where page size is determined using getSystemInfo during
    415   initialization.
    416 
    417 USE_DEV_RANDOM             default: 0 (i.e., not used)
    418   Causes malloc to use /dev/random to initialize a secure magic seed for
    419   stamping footers. Otherwise, the current time is used.
    420 
    421 NO_MALLINFO                default: 0
    422   If defined, don't compile "mallinfo". This can be a simple way
    423   of dealing with mismatches between system declarations and
    424   those in this file.
    425 
    426 MALLINFO_FIELD_TYPE        default: size_t
    427   The type of the fields in the mallinfo struct. This was originally
    428   defined as "int" in SVID etc, but is more usefully defined as
    429   size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.
    430 
    431 NO_MALLOC_STATS            default: 0
    432   If defined, don't compile "malloc_stats". This avoids calls to
    433   fprintf and bringing in stdio dependencies you might not want.
    434 
    435 REALLOC_ZERO_BYTES_FREES    default: not defined
    436   This should be set if a call to realloc with zero bytes should
    437   be the same as a call to free. Some people think it should. Otherwise,
    438   since this malloc returns a unique pointer for malloc(0), so does
    439   because this malloc returns a unique pointer for malloc(0), realloc(p, 0)
    440   does so as well.
    441 LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H
    442 LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H,  LACKS_ERRNO_H
    443 LACKS_STDLIB_H, LACKS_SCHED_H, LACKS_TIME_H  default: NOT defined unless on WIN32
    444   Define these if your system does not have these header files.
    445   You might need to manually insert some of the declarations they provide.
    446 
    447 DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
    448                                 system_info.dwAllocationGranularity in WIN32,
    449                                 otherwise 64K.
    450       Also settable using mallopt(M_GRANULARITY, x)
    451   The unit for allocating and deallocating memory from the system.  On
    452   most systems with contiguous MORECORE, there is no reason to
    453   make this more than a page. However, systems with MMAP tend to
    454   either require or encourage larger granularities.  You can increase
    455   this value to prevent system allocation functions from being called so
    456   often, especially if they are slow.  The value must be at least one
    457   page and must be a power of two.  Setting to 0 causes initialization
    458   to either page size or win32 region size.  (Note: In previous
    459   versions of malloc, the equivalent of this option was called
    460   "TOP_PAD")
    461 
    462 DEFAULT_TRIM_THRESHOLD    default: 2MB
    463       Also settable using mallopt(M_TRIM_THRESHOLD, x)
    464   The maximum amount of unused top-most memory to keep before
    465   releasing via malloc_trim in free().  Automatic trimming is mainly
    466   useful in long-lived programs using contiguous MORECORE.  Because
    467   trimming via sbrk can be slow on some systems, and can sometimes be
    468   wasteful (in cases where programs immediately afterward allocate
    469   more large chunks) the value should be high enough so that your
    470   overall system performance would improve by releasing this much
    471   memory.  As a rough guide, you might set to a value close to the
    472   average size of a process (program) running on your system.
    473   Releasing this much memory would allow such a process to run in
    474   memory.  Generally, it is worth tuning trim thresholds when a
    475   program undergoes phases where several large chunks are allocated
    476   and released in ways that can reuse each other's storage, perhaps
    477   mixed with phases where there are no such chunks at all. The trim
    478   value must be greater than page size to have any useful effect.  To
    479   disable trimming completely, you can set to MAX_SIZE_T. Note that the trick
    480   some people use of mallocing a huge space and then freeing it at
    481   program startup, in an attempt to reserve system memory, doesn't
    482   have the intended effect under automatic trimming, since that memory
    483   will immediately be returned to the system.
    484 
    485 DEFAULT_MMAP_THRESHOLD       default: 256K
    486       Also settable using mallopt(M_MMAP_THRESHOLD, x)
    487   The request size threshold for using MMAP to directly service a
    488   request. Requests of at least this size that cannot be allocated
    489   using already-existing space will be serviced via mmap.  (If enough
    490   normal freed space already exists it is used instead.)  Using mmap
    491   segregates relatively large chunks of memory so that they can be
    492   individually obtained and released from the host system. A request
    493   serviced through mmap is never reused by any other request (at least
    494   not directly; the system may just so happen to remap successive
    495   requests to the same locations).  Segregating space in this way has
    496   the benefits that: Mmapped space can always be individually released
    497   back to the system, which helps keep the system level memory demands
    498   of a long-lived program low.  Also, mapped memory doesn't become
    499   `locked' between other chunks, as can happen with normally allocated
    500   chunks, which means that even trimming via malloc_trim would not
    501   release them.  However, it has the disadvantage that the space
    502   cannot be reclaimed, consolidated, and then used to service later
    503   requests, as happens with normal chunks.  The advantages of mmap
    504   nearly always outweigh disadvantages for "large" chunks, but the
    505   value of "large" may vary across systems.  The default is an
    506   empirically derived value that works well in most systems. You can
    507   disable mmap by setting to MAX_SIZE_T.
    508 
    509 MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
    510   The number of consolidated frees between checks to release
    511   unused segments when freeing. When using non-contiguous segments,
    512   especially with multiple mspaces, checking only for topmost space
    513   doesn't always suffice to trigger trimming. To compensate for this,
    514   free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
    515   current number of segments, if greater), try to release unused
    516   segments to the OS when freeing chunks that result in
    517   consolidation. The best value for this parameter is a compromise
    518   between slowing down frees with relatively costly checks that
    519   rarely trigger versus holding on to unused memory. To effectively
    520   disable, set to MAX_SIZE_T. This may lead to a very slight speed
    521   improvement at the expense of carrying around more memory.
    522 */
    523 
    524 /* Version identifier to allow people to support multiple versions */
    525 #ifndef DLMALLOC_VERSION
    526 #define DLMALLOC_VERSION 20806
    527 #endif /* DLMALLOC_VERSION */
    528 
    529 #ifndef DLMALLOC_EXPORT
    530 #define DLMALLOC_EXPORT extern
    531 #endif
    532 
    533 #ifndef WIN32
    534 #ifdef _WIN32
    535 #define WIN32 1
    536 #endif  /* _WIN32 */
    537 #ifdef _WIN32_WCE
    538 #define LACKS_FCNTL_H
    539 #define WIN32 1
    540 #endif /* _WIN32_WCE */
    541 #endif  /* WIN32 */
    542 #ifdef WIN32
    543 #define WIN32_LEAN_AND_MEAN
    544 #include <windows.h>
    545 #include <tchar.h>
    546 #define HAVE_MMAP 1
    547 #define HAVE_MORECORE 0
    548 #define LACKS_UNISTD_H
    549 #define LACKS_SYS_PARAM_H
    550 #define LACKS_SYS_MMAN_H
    551 #define LACKS_STRING_H
    552 #define LACKS_STRINGS_H
    553 #define LACKS_SYS_TYPES_H
    554 #define LACKS_ERRNO_H
    555 #define LACKS_SCHED_H
    556 #ifndef MALLOC_FAILURE_ACTION
    557 #define MALLOC_FAILURE_ACTION
    558 #endif /* MALLOC_FAILURE_ACTION */
    559 #ifndef MMAP_CLEARS
    560 #ifdef _WIN32_WCE /* WINCE reportedly does not clear */
    561 #define MMAP_CLEARS 0
    562 #else
    563 #define MMAP_CLEARS 1
    564 #endif /* _WIN32_WCE */
    565 #endif /*MMAP_CLEARS */
    566 #endif  /* WIN32 */
    567 
    568 #if defined(DARWIN) || defined(_DARWIN)
    569 /* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
    570 #ifndef HAVE_MORECORE
    571 #define HAVE_MORECORE 0
    572 #define HAVE_MMAP 1
    573 /* OSX allocators provide 16 byte alignment */
    574 #ifndef MALLOC_ALIGNMENT
    575 #define MALLOC_ALIGNMENT ((size_t)16U)
    576 #endif
    577 #endif  /* HAVE_MORECORE */
    578 #endif  /* DARWIN */
    579 
    580 #ifndef LACKS_SYS_TYPES_H
    581 #include <sys/types.h>  /* For size_t */
    582 #endif  /* LACKS_SYS_TYPES_H */
    583 
    584 /* The maximum possible size_t value has all bits set */
    585 #define MAX_SIZE_T           (~(size_t)0)
    586 
    587 #ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
    588 #define USE_LOCKS  ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
    589                     (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
    590 #endif /* USE_LOCKS */
    591 
    592 #if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
    593 #if ((defined(__GNUC__) &&                                              \
    594       ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) ||      \
    595        defined(__i386__) || defined(__x86_64__))) ||                    \
    596      (defined(_MSC_VER) && _MSC_VER>=1310))
    597 #ifndef USE_SPIN_LOCKS
    598 #define USE_SPIN_LOCKS 1
    599 #endif /* USE_SPIN_LOCKS */
    600 #elif USE_SPIN_LOCKS
    601 #error "USE_SPIN_LOCKS defined without implementation"
    602 #endif /* ... locks available... */
    603 #elif !defined(USE_SPIN_LOCKS)
    604 #define USE_SPIN_LOCKS 0
    605 #endif /* USE_LOCKS */
    606 
    607 #ifndef ONLY_MSPACES
    608 #define ONLY_MSPACES 0
    609 #endif  /* ONLY_MSPACES */
    610 #ifndef MSPACES
    611 #if ONLY_MSPACES
    612 #define MSPACES 1
    613 #else   /* ONLY_MSPACES */
    614 #define MSPACES 0
    615 #endif  /* ONLY_MSPACES */
    616 #endif  /* MSPACES */
    617 #ifndef MALLOC_ALIGNMENT
    618 #define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
    619 #endif  /* MALLOC_ALIGNMENT */
    620 #ifndef FOOTERS
    621 #define FOOTERS 0
    622 #endif  /* FOOTERS */
    623 #ifndef ABORT
    624 #define ABORT  abort()
    625 #endif  /* ABORT */
    626 #ifndef ABORT_ON_ASSERT_FAILURE
    627 #define ABORT_ON_ASSERT_FAILURE 1
    628 #endif  /* ABORT_ON_ASSERT_FAILURE */
    629 #ifndef PROCEED_ON_ERROR
    630 #define PROCEED_ON_ERROR 0
    631 #endif  /* PROCEED_ON_ERROR */
    632 
    633 #ifndef INSECURE
    634 #define INSECURE 0
    635 #endif  /* INSECURE */
    636 #ifndef MALLOC_INSPECT_ALL
    637 #define MALLOC_INSPECT_ALL 0
    638 #endif  /* MALLOC_INSPECT_ALL */
    639 #ifndef HAVE_MMAP
    640 #define HAVE_MMAP 1
    641 #endif  /* HAVE_MMAP */
    642 #ifndef MMAP_CLEARS
    643 #define MMAP_CLEARS 1
    644 #endif  /* MMAP_CLEARS */
    645 #ifndef HAVE_MREMAP
    646 #ifdef linux
    647 #define HAVE_MREMAP 1
    648 #define _GNU_SOURCE /* Turns on mremap() definition */
    649 #else   /* linux */
    650 #define HAVE_MREMAP 0
    651 #endif  /* linux */
    652 #endif  /* HAVE_MREMAP */
    653 #ifndef MALLOC_FAILURE_ACTION
    654 #define MALLOC_FAILURE_ACTION  errno = ENOMEM;
    655 #endif  /* MALLOC_FAILURE_ACTION */
    656 #ifndef HAVE_MORECORE
    657 #if ONLY_MSPACES
    658 #define HAVE_MORECORE 0
    659 #else   /* ONLY_MSPACES */
    660 #define HAVE_MORECORE 1
    661 #endif  /* ONLY_MSPACES */
    662 #endif  /* HAVE_MORECORE */
    663 #if !HAVE_MORECORE
    664 #define MORECORE_CONTIGUOUS 0
    665 #else   /* !HAVE_MORECORE */
    666 #define MORECORE_DEFAULT sbrk
    667 #ifndef MORECORE_CONTIGUOUS
    668 #define MORECORE_CONTIGUOUS 1
    669 #endif  /* MORECORE_CONTIGUOUS */
    670 #endif  /* HAVE_MORECORE */
    671 #ifndef DEFAULT_GRANULARITY
    672 #if (MORECORE_CONTIGUOUS || defined(WIN32))
    673 #define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
    674 #else   /* MORECORE_CONTIGUOUS */
    675 #define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
    676 #endif  /* MORECORE_CONTIGUOUS */
    677 #endif  /* DEFAULT_GRANULARITY */
    678 #ifndef DEFAULT_TRIM_THRESHOLD
    679 #ifndef MORECORE_CANNOT_TRIM
    680 #define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
    681 #else   /* MORECORE_CANNOT_TRIM */
    682 #define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
    683 #endif  /* MORECORE_CANNOT_TRIM */
    684 #endif  /* DEFAULT_TRIM_THRESHOLD */
    685 #ifndef DEFAULT_MMAP_THRESHOLD
    686 #if HAVE_MMAP
    687 #define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
    688 #else   /* HAVE_MMAP */
    689 #define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
    690 #endif  /* HAVE_MMAP */
    691 #endif  /* DEFAULT_MMAP_THRESHOLD */
    692 #ifndef MAX_RELEASE_CHECK_RATE
    693 #if HAVE_MMAP
    694 #define MAX_RELEASE_CHECK_RATE 4095
    695 #else
    696 #define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
    697 #endif /* HAVE_MMAP */
    698 #endif /* MAX_RELEASE_CHECK_RATE */
    699 #ifndef USE_BUILTIN_FFS
    700 #define USE_BUILTIN_FFS 0
    701 #endif  /* USE_BUILTIN_FFS */
    702 #ifndef USE_DEV_RANDOM
    703 #define USE_DEV_RANDOM 0
    704 #endif  /* USE_DEV_RANDOM */
    705 #ifndef NO_MALLINFO
    706 #define NO_MALLINFO 0
    707 #endif  /* NO_MALLINFO */
    708 #ifndef MALLINFO_FIELD_TYPE
    709 #define MALLINFO_FIELD_TYPE size_t
    710 #endif  /* MALLINFO_FIELD_TYPE */
    711 #ifndef NO_MALLOC_STATS
    712 #define NO_MALLOC_STATS 0
    713 #endif  /* NO_MALLOC_STATS */
    714 #ifndef NO_SEGMENT_TRAVERSAL
    715 #define NO_SEGMENT_TRAVERSAL 0
    716 #endif /* NO_SEGMENT_TRAVERSAL */
    717 
    718 /*
    719   mallopt tuning options.  SVID/XPG defines four standard parameter
    720   numbers for mallopt, normally defined in malloc.h.  None of these
    721   are used in this malloc, so setting them has no effect. But this
    722   malloc does support the following options.
    723 */
    724 
    725 #define M_TRIM_THRESHOLD     (-1)
    726 #define M_GRANULARITY        (-2)
    727 #define M_MMAP_THRESHOLD     (-3)
    728 
    729 /* ------------------------ Mallinfo declarations ------------------------ */
    730 
    731 #if !NO_MALLINFO
    732 /*
    733   This version of malloc supports the standard SVID/XPG mallinfo
    734   routine that returns a struct containing usage properties and
    735   statistics. It should work on any system that has a
    736   /usr/include/malloc.h defining struct mallinfo.  The main
    737   declaration needed is the mallinfo struct that is returned (by-copy)
    738   by mallinfo().  The malloinfo struct contains a bunch of fields that
    739   by mallinfo().  The mallinfo struct contains a bunch of fields that
    740   are not even meaningful in this version of malloc.  These fields are
    741   instead filled by mallinfo() with other numbers that might be of
    742 
    743   HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
    744   /usr/include/malloc.h file that includes a declaration of struct
    745   mallinfo.  If so, it is included; else a compliant version is
    746   declared below.  These must be precisely the same for mallinfo() to
    747   work.  The original SVID version of this struct, defined on most
    748   systems with mallinfo, declares all fields as ints. But some others
    749   define as unsigned long. If your system defines the fields using a
    750   define them as unsigned long. If your system defines the fields using a
    751   system version and #define HAVE_USR_INCLUDE_MALLOC_H.
    752 */
    753 
    754 /* #define HAVE_USR_INCLUDE_MALLOC_H */
    755 
    756 #ifdef HAVE_USR_INCLUDE_MALLOC_H
    757 #include "/usr/include/malloc.h"
    758 #else /* HAVE_USR_INCLUDE_MALLOC_H */
    759 #ifndef STRUCT_MALLINFO_DECLARED
    760 /* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
    761 #define _STRUCT_MALLINFO
    762 #define STRUCT_MALLINFO_DECLARED 1
    763 struct mallinfo {
    764   MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
    765   MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
    766   MALLINFO_FIELD_TYPE smblks;   /* always 0 */
    767   MALLINFO_FIELD_TYPE hblks;    /* always 0 */
    768   MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
    769   MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
    770   MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
    771   MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
    772   MALLINFO_FIELD_TYPE fordblks; /* total free space */
    773   MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
    774 };
    775 #endif /* STRUCT_MALLINFO_DECLARED */
    776 #endif /* HAVE_USR_INCLUDE_MALLOC_H */
    777 #endif /* NO_MALLINFO */
    778 
    779 /*
    780   Try to persuade compilers to inline. The most critical functions for
    781   inlining are defined as macros, so these aren't used for them.
    782 */
    783 
    784 #ifndef FORCEINLINE
    785   #if defined(__GNUC__)
    786     #define FORCEINLINE __inline __attribute__ ((always_inline))
    787   #elif defined(_MSC_VER)
    788     #define FORCEINLINE __forceinline
    789   #endif
    790 #endif
    791 #ifndef NOINLINE
    792   #if defined(__GNUC__)
    793     #define NOINLINE __attribute__ ((noinline))
    794   #elif defined(_MSC_VER)
    795     #define NOINLINE __declspec(noinline)
    796   #else
    797     #define NOINLINE
    798   #endif
    799 #endif
    800 
    801 #ifdef __cplusplus
    802 extern "C" {
    803 #ifndef FORCEINLINE
    804  #define FORCEINLINE inline
    805 #endif
    806 #endif /* __cplusplus */
    807 #ifndef FORCEINLINE
    808  #define FORCEINLINE
    809 #endif
    810 
    811 #if !ONLY_MSPACES
    812 
    813 /* ------------------- Declarations of public routines ------------------- */
    814 
    815 #ifndef USE_DL_PREFIX
    816 #define dlcalloc               calloc
    817 #define dlfree                 free
    818 #define dlmalloc               malloc
    819 #define dlmemalign             memalign
    820 #define dlposix_memalign       posix_memalign
    821 #define dlrealloc              realloc
    822 #define dlrealloc_in_place     realloc_in_place
    823 #define dlvalloc               valloc
    824 #define dlpvalloc              pvalloc
    825 #define dlmallinfo             mallinfo
    826 #define dlmallopt              mallopt
    827 #define dlmalloc_trim          malloc_trim
    828 #define dlmalloc_stats         malloc_stats
    829 #define dlmalloc_usable_size   malloc_usable_size
    830 #define dlmalloc_footprint     malloc_footprint
    831 #define dlmalloc_max_footprint malloc_max_footprint
    832 #define dlmalloc_footprint_limit malloc_footprint_limit
    833 #define dlmalloc_set_footprint_limit malloc_set_footprint_limit
    834 #define dlmalloc_inspect_all   malloc_inspect_all
    835 #define dlindependent_calloc   independent_calloc
    836 #define dlindependent_comalloc independent_comalloc
    837 #define dlbulk_free            bulk_free
    838 #endif /* USE_DL_PREFIX */
    839 
    840 /*
    841   malloc(size_t n)
    842   Returns a pointer to a newly allocated chunk of at least n bytes, or
    843   null if no space is available, in which case errno is set to ENOMEM
    844   on ANSI C systems.
    845 
    846   If n is zero, malloc returns a minimum-sized chunk. (The minimum
    847   size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
    848   systems.)  Note that size_t is an unsigned type, so calls with
    849   arguments that would be negative if signed are interpreted as
    850   requests for huge amounts of space, which will often fail. The
    851   maximum supported value of n differs across systems, but is in all
    852   cases less than the maximum representable value of a size_t.
    853 */
    854 DLMALLOC_EXPORT void* dlmalloc(size_t);
    855 
    856 /*
    857   free(void* p)
    858   Releases the chunk of memory pointed to by p, that had been previously
    859   allocated using malloc or a related routine such as realloc.
    860   It has no effect if p is null. If p was not malloced or already
    861   freed, free(p) will by default cause the current program to abort.
    862 */
    863 DLMALLOC_EXPORT void  dlfree(void*);
    864 
    865 /*
    866   calloc(size_t n_elements, size_t element_size);
    867   Returns a pointer to n_elements * element_size bytes, with all locations
    868   set to zero.
    869 */
    870 DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);
    871 
    872 /*
    873   realloc(void* p, size_t n)
    874   Returns a pointer to a chunk of size n that contains the same data
    875   as does chunk p up to the minimum of (n, p's size) bytes, or null
    876   if no space is available.
    877 
    878   The returned pointer may or may not be the same as p. The algorithm
    879   prefers extending p in most cases when possible, otherwise it
    880   employs the equivalent of a malloc-copy-free sequence.
    881 
    882   If p is null, realloc is equivalent to malloc.
    883 
    884   If space is not available, realloc returns null, errno is set (if on
    885   ANSI) and p is NOT freed.
    886 
    887   If n is for fewer bytes than already held by p, the newly unused
    888   space is lopped off and freed if possible.  realloc with a size
    889   argument of zero (re)allocates a minimum-sized chunk.
    890 
    891   The old unix realloc convention of allowing the last-free'd chunk
    892   to be used as an argument to realloc is not supported.
    893 */
    894 DLMALLOC_EXPORT void* dlrealloc(void*, size_t);
    895 
    896 /*
    897   realloc_in_place(void* p, size_t n)
    898   Resizes the space allocated for p to size n, only if this can be
    899   done without moving p (i.e., only if there is adjacent space
    900   available if n is greater than p's current allocated size, or n is
    901   less than or equal to p's size). This may be used instead of plain
    902   realloc if an alternative allocation strategy is needed upon failure
    903   to expand space; for example, reallocation of a buffer that must be
    904   memory-aligned or cleared. You can use realloc_in_place to trigger
    905   these alternatives only when needed.
    906 
    907   Returns p if successful; otherwise null.
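          For example (a sketch; buf, oldsize and newsize are assumed to be
          supplied by the caller):
            if (realloc_in_place(buf, newsize) == 0) {  /* could not resize in place */
              void* p = memalign(64, newsize);          /* any alternative strategy  */
              if (p != 0) { memcpy(p, buf, oldsize); free(buf); buf = p; }
            }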
    908 */
    909 DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
    910 
    911 /*
    912   memalign(size_t alignment, size_t n);
    913   Returns a pointer to a newly allocated chunk of n bytes, aligned
    914   in accord with the alignment argument.
    915 
    916   The alignment argument should be a power of two. If the argument is
    917   not a power of two, the nearest greater power is used.
    918   8-byte alignment is guaranteed by normal malloc calls, so don't
    919   bother calling memalign with an argument of 8 or less.
    920 
    921   Overreliance on memalign is a sure way to fragment space.
    922 */
    923 DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);
    924 
    925 /*
    926   int posix_memalign(void** pp, size_t alignment, size_t n);
    927   Allocates a chunk of n bytes, aligned in accord with the alignment
    928   argument. Differs from memalign only in that it (1) assigns the
    929   allocated memory to *pp rather than returning it, (2) fails and
    930   returns EINVAL if the alignment is not a power of two, and (3) fails and
    931   returns ENOMEM if memory cannot be allocated.
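          For example (a sketch):
            void* p;
            if (posix_memalign(&p, 4096, 8192) == 0) {
              /* p is 4096-byte aligned; release it later with free(p) */
            }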
    932 */
    933 DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);
    934 
    935 /*
    936   valloc(size_t n);
    937   Equivalent to memalign(pagesize, n), where pagesize is the page
    938   size of the system. If the pagesize is unknown, 4096 is used.
    939 */
    940 DLMALLOC_EXPORT void* dlvalloc(size_t);
    941 
    942 /*
    943   mallopt(int parameter_number, int parameter_value)
    944   Sets tunable parameters. The format is to provide a
    945   (parameter-number, parameter-value) pair.  mallopt then sets the
    946   corresponding parameter to the argument value if it can (i.e., so
    947   long as the value is meaningful), and returns 1 if successful else
    948   0.  To work around the fact that mallopt is specified to use int,
    949   not size_t parameters, the value -1 is specially treated as the
    950   maximum unsigned size_t value.
    951 
    952   SVID/XPG/ANSI defines four standard param numbers for mallopt,
    953   normally defined in malloc.h.  None of these are used in this malloc,
    954   so setting them has no effect. But this malloc also supports other
    955   options in mallopt. See below for details.  Briefly, supported
    956   parameters are as follows (listed defaults are for "typical"
    957   configurations).
    958 
    959   Symbol            param #  default    allowed param values
    960   M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
    961   M_GRANULARITY        -2     page size   any power of 2 >= page size
    962   M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
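          For example (a sketch):
            mallopt(M_TRIM_THRESHOLD, 1024*1024);  /* trim unused top beyond 1MB      */
            mallopt(M_MMAP_THRESHOLD, -1);         /* -1 means MAX_SIZE_T: never mmap */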
    963 */
    964 DLMALLOC_EXPORT int dlmallopt(int, int);
    965 
    966 /*
    967   malloc_footprint();
    968   Returns the number of bytes obtained from the system.  The total
    969   number of bytes allocated by malloc, realloc etc., is less than this
    970   value. Unlike mallinfo, this function returns only a precomputed
    971   result, so can be called frequently to monitor memory consumption.
    972   Even if locks are otherwise defined, this function does not use them,
    973   so results might not be up to date.
    974 */
    975 DLMALLOC_EXPORT size_t dlmalloc_footprint(void);
    976 
    977 /*
    978   malloc_max_footprint();
    979   Returns the maximum number of bytes obtained from the system. This
    980   value will be greater than current footprint if deallocated space
    981   has been reclaimed by the system. The peak number of bytes allocated
    982   by malloc, realloc etc., is less than this value. Unlike mallinfo,
    983   this function returns only a precomputed result, so can be called
    984   frequently to monitor memory consumption.  Even if locks are
    985   otherwise defined, this function does not use them, so results might
    986   not be up to date.
    987 */
    988 DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
    989 
    990 /*
    991   malloc_footprint_limit();
    992   Returns the number of bytes that the heap is allowed to obtain from
    993   the system: the last value returned by
    994   malloc_set_footprint_limit, or the maximum size_t value if
    995   never set. The returned value reflects a permission. There is no
    996   guarantee that this number of bytes can actually be obtained from
    997   the system.
    998 */
    999 DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();
   1000 
   1001 /*
   1002   malloc_set_footprint_limit();
   1003   Sets the maximum number of bytes to obtain from the system, causing
   1004   failure returns from malloc and related functions upon attempts to
   1005   exceed this value. The argument value may be subject to page
   1006   rounding to an enforceable limit; this actual value is returned.
   1007   Using an argument of the maximum possible size_t effectively
   1008   disables checks. If the argument is less than or equal to the
   1009   current malloc_footprint, then all future allocations that require
   1010   additional system memory will fail. However, invocation cannot
   1011   retroactively deallocate existing used memory.
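          For example (a sketch), to cap the heap at roughly 64MB:
            size_t actual = malloc_set_footprint_limit((size_t)64 * 1024 * 1024);
            /* 'actual' is the page-rounded limit that will actually be enforced */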
   1012 */
   1013 DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);
   1014 
   1015 #if MALLOC_INSPECT_ALL
   1016 /*
   1017   malloc_inspect_all(void(*handler)(void *start,
   1018                                     void *end,
   1019                                     size_t used_bytes,
   1020                                     void* callback_arg),
   1021                       void* arg);
   1022   Traverses the heap and calls the given handler for each managed
   1023   region, skipping all bytes that are (or may be) used for bookkeeping
   1024   purposes.  Traversal does not include chunks that have been
   1025   directly memory mapped. Each reported region begins at the start
   1026   address, and continues up to but not including the end address.  The
   1027   first used_bytes of the region contain allocated data. If
   1028   used_bytes is zero, the region is unallocated. The handler is
   1029   invoked with the given callback argument. If locks are defined, they
   1030   are held during the entire traversal. It is a bad idea to invoke
   1031   other malloc functions from within the handler.
   1032 
   1033   For example, to count the number of in-use chunks with size of at
   1034   least 1000, you could write:
   1035   static int count = 0;
   1036   void count_chunks(void* start, void* end, size_t used, void* arg) {
   1037     if (used >= 1000) ++count;
   1038   }
   1039   then:
   1040     malloc_inspect_all(count_chunks, NULL);
   1041 
   1042   malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
   1043 */
   1044 DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
   1045                            void* arg);
   1046 
   1047 #endif /* MALLOC_INSPECT_ALL */
   1048 
   1049 #if !NO_MALLINFO
   1050 /*
   1051   mallinfo()
   1052   Returns (by copy) a struct containing various summary statistics:
   1053 
   1054   arena:     current total non-mmapped bytes allocated from system
   1055   ordblks:   the number of free chunks
   1056   smblks:    always zero.
   1057   hblks:     current number of mmapped regions
   1058   hblkhd:    total bytes held in mmapped regions
   1059   usmblks:   the maximum total allocated space. This will be greater
   1060                 than current total if trimming has occurred.
   1061   fsmblks:   always zero
   1062   uordblks:  current total allocated space (normal or mmapped)
   1063   fordblks:  total free space
   1064   keepcost:  the maximum number of bytes that could ideally be released
   1065                back to system via malloc_trim. ("ideally" means that
   1066                it ignores page restrictions etc.)
   1067 
   1068   Because these fields are ints, but internal bookkeeping may
   1069   be kept as longs, the reported values may wrap around zero and
   1070   thus be inaccurate.
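          For example (a sketch):
            struct mallinfo mi = mallinfo();
            size_t in_use    = (size_t)mi.uordblks;  /* currently allocated bytes  */
            size_t free_left = (size_t)mi.fordblks;  /* free bytes inside the heap */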
   1071 */
   1072 DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
   1073 #endif /* NO_MALLINFO */
   1074 
   1075 /*
   1076   independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);
   1077 
   1078   independent_calloc is similar to calloc, but instead of returning a
   1079   single cleared space, it returns an array of pointers to n_elements
   1080   independent elements that can hold contents of size elem_size, each
   1081   of which starts out cleared, and can be independently freed,
   1082   realloc'ed etc. The elements are guaranteed to be adjacently
   1083   allocated (this is not guaranteed to occur with multiple callocs or
   1084   mallocs), which may also improve cache locality in some
   1085   applications.
   1086 
   1087   The "chunks" argument is optional (i.e., may be null, which is
   1088   probably the most typical usage). If it is null, the returned array
   1089   is itself dynamically allocated and should also be freed when it is
   1090   no longer needed. Otherwise, the chunks array must be of at least
   1091   n_elements in length. It is filled in with the pointers to the
   1092   chunks.
   1093 
   1094   In either case, independent_calloc returns this pointer array, or
   1095   null if the allocation failed.  If n_elements is zero and "chunks"
   1096   is null, it returns a chunk representing an array with zero elements
   1097   (which should be freed if not wanted).
   1098 
   1099   Each element must be freed when it is no longer needed. This can be
   1100   done all at once using bulk_free.
   1101 
   1102   independent_calloc simplifies and speeds up implementations of many
   1103   kinds of pools.  It may also be useful when constructing large data
   1104   structures that initially have a fixed number of fixed-sized nodes,
   1105   but the number is not known at compile time, and some of the nodes
   1106   may later need to be freed. For example:
   1107 
   1108   struct Node { int item; struct Node* next; };
   1109 
   1110   struct Node* build_list() {
   1111     struct Node** pool;
   1112     int n = read_number_of_nodes_needed();
   1113     if (n <= 0) return 0;
   1114     pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0));
   1115     if (pool == 0) die();
   1116     // organize into a linked list...
   1117     struct Node* first = pool[0];
   1118     for (int i = 0; i < n-1; ++i)
   1119       pool[i]->next = pool[i+1];
   1120     free(pool);     // Can now free the array (or not, if it is needed later)
   1121     return first;
   1122   }
   1123 */
   1124 DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);
   1125 
   1126 /*
   1127   independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);
   1128 
   1129   independent_comalloc allocates, all at once, a set of n_elements
   1130   chunks with sizes indicated in the "sizes" array.    It returns
   1131   an array of pointers to these elements, each of which can be
   1132   independently freed, realloc'ed etc. The elements are guaranteed to
   1133   be adjacently allocated (this is not guaranteed to occur with
   1134   multiple callocs or mallocs), which may also improve cache locality
   1135   in some applications.
   1136 
   1137   The "chunks" argument is optional (i.e., may be null). If it is null
   1138   the returned array is itself dynamically allocated and should also
   1139   be freed when it is no longer needed. Otherwise, the chunks array
   1140   must be of at least n_elements in length. It is filled in with the
   1141   pointers to the chunks.
   1142 
   1143   In either case, independent_comalloc returns this pointer array, or
   1144   null if the allocation failed.  If n_elements is zero and chunks is
   1145   null, it returns a chunk representing an array with zero elements
   1146   (which should be freed if not wanted).
   1147 
   1148   Each element must be freed when it is no longer needed. This can be
   1149   done all at once using bulk_free.
   1150 
   1151   independent_comalloc differs from independent_calloc in that each
   1152   element may have a different size, and also that it does not
   1153   automatically clear elements.
   1154 
   1155   independent_comalloc can be used to speed up allocation in cases
   1156   where several structs or objects must always be allocated at the
   1157   same time.  For example:
   1158 
   1159   struct Head { ... };
   1160   struct Foot { ... };
   1161 
   1162   void send_message(char* msg) {
   1163     int msglen = strlen(msg);
   1164     size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
   1165     void* chunks[3];
   1166     if (independent_comalloc(3, sizes, chunks) == 0)
   1167       die();
   1168     struct Head* head = (struct Head*)(chunks[0]);
   1169     char*        body = (char*)(chunks[1]);
   1170     struct Foot* foot = (struct Foot*)(chunks[2]);
   1171     // ...
   1172   }
   1173 
   1174   In general though, independent_comalloc is worth using only for
   1175   larger values of n_elements. For small values, you probably won't
   1176   detect enough difference from a series of malloc calls to bother.
   1177 
   1178   Overuse of independent_comalloc can increase overall memory usage,
   1179   since it cannot reuse existing noncontiguous small chunks that
   1180   might be available for some of the elements.
   1181 */
   1182 DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);
   1183 
   1184 /*
   1185   bulk_free(void* array[], size_t n_elements)
   1186   Frees and clears (sets to null) each non-null pointer in the given
   1187   array.  This is likely to be faster than freeing them one-by-one.
   1188   If footers are used, pointers that have been allocated in different
   1189   mspaces are not freed or cleared, and the count of all such pointers
   1190   is returned.  For large arrays of pointers with poor locality, it
   1191   may be worthwhile to sort this array before calling bulk_free.
   1192 */
   1193 DLMALLOC_EXPORT size_t  dlbulk_free(void**, size_t n_elements);
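
/*
  For illustration only (a sketch, not part of this malloc, using the
  dl-prefixed names declared in this file): allocating a batch of
  elements with independent_calloc and releasing them all in one call.

  void use_and_release(size_t n) {
    void** ptrs = dlindependent_calloc(n, 64, 0); // n cleared 64-byte elements
    if (ptrs == 0) return;
    // ... use ptrs[0] .. ptrs[n-1] ...
    dlbulk_free(ptrs, n); // frees each element and nulls the array entries
    dlfree(ptrs);         // the pointer array itself was dynamically allocated
  }
*/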
   1194 
   1195 /*
   1196   pvalloc(size_t n);
   1197   Equivalent to valloc(minimum-page-that-holds(n)), that is,
   1198   rounding up n to the nearest pagesize.
   1199  */
   1200 DLMALLOC_EXPORT void*  dlpvalloc(size_t);
   1201 
   1202 /*
   1203   malloc_trim(size_t pad);
   1204 
   1205   If possible, gives memory back to the system (via negative arguments
   1206   to sbrk) if there is unused memory at the `high' end of the malloc
   1207   pool or in unused MMAP segments. You can call this after freeing
   1208   large blocks of memory to potentially reduce the system-level memory
   1209   requirements of a program. However, it cannot guarantee to reduce
   1210   memory. Under some allocation patterns, some large free blocks of
   1211   memory will be locked between two used chunks, so they cannot be
   1212   given back to the system.
   1213 
   1214   The `pad' argument to malloc_trim represents the amount of free
   1215   trailing space to leave untrimmed. If this argument is zero, only
   1216   the minimum amount of memory to maintain internal data structures
   1217   will be left. Non-zero arguments can be supplied to maintain enough
   1218   trailing space to service future expected allocations without having
   1219   to re-obtain memory from the system.
   1220 
   1221   Malloc_trim returns 1 if it actually released any memory, else 0.
   1222 */
   1223 DLMALLOC_EXPORT int  dlmalloc_trim(size_t);
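
/*
  For illustration only (a sketch): after releasing a large working
  set, a program might ask the allocator to return unused memory to
  the system while keeping roughly 1MB of slack for expected future
  allocations.  Names assume the dl prefix used by this file.

  void release_phase_memory(void) {
    // ... free large, no-longer-needed blocks here ...
    if (dlmalloc_trim((size_t)(1024 * 1024)) == 0) {
      // nothing could be released, e.g. because remaining free space
      // is locked between in-use chunks as described above
    }
  }
*/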
   1224 
   1225 /*
   1226   malloc_stats();
   1227   Prints on stderr the amount of space obtained from the system (both
   1228   via sbrk and mmap), the maximum amount (which may be more than
   1229   current if malloc_trim and/or munmap got called), and the current
   1230   number of bytes allocated via malloc (or realloc, etc) but not yet
   1231   freed. Note that this is the number of bytes allocated, not the
   1232   number requested. It will be larger than the number requested
   1233   because of alignment and bookkeeping overhead. Because it includes
   1234   alignment wastage as being in use, this figure may be greater than
   1235   zero even when no user-level chunks are allocated.
   1236 
   1237   The reported current and maximum system memory can be inaccurate if
   1238   a program makes other calls to system memory allocation functions
   1239   (normally sbrk) outside of malloc.
   1240 
   1241   malloc_stats prints only the most commonly interesting statistics.
   1242   More information can be obtained by calling mallinfo.
   1243 */
   1244 DLMALLOC_EXPORT void  dlmalloc_stats(void);
   1245 
   1246 /*
   1247   malloc_usable_size(void* p);
   1248 
   1249   Returns the number of bytes you can actually use in
   1250   an allocated chunk, which may be more than you requested (although
   1251   often not) due to alignment and minimum size constraints.
   1252   You can use this many bytes without worrying about
   1253   overwriting other allocated objects. This is not a particularly great
   1254   programming practice. malloc_usable_size can be more useful in
   1255   debugging and assertions, for example:
   1256 
   1257   p = malloc(n);
   1258   assert(malloc_usable_size(p) >= 256);
   1259 */
   1260 /* BEGIN android-changed: added const */
   1261 size_t dlmalloc_usable_size(const void*);
   1262 /* END android-change */
   1263 
   1264 #endif /* ONLY_MSPACES */
   1265 
   1266 #if MSPACES
   1267 
   1268 /*
   1269   mspace is an opaque type representing an independent
   1270   region of space that supports mspace_malloc, etc.
   1271 */
   1272 typedef void* mspace;
   1273 
   1274 /*
   1275   create_mspace creates and returns a new independent space with the
   1276   given initial capacity, or, if 0, the default granularity size.  It
   1277   returns null if there is no system memory available to create the
   1278   space.  If argument locked is non-zero, the space uses a separate
   1279   lock to control access. The capacity of the space will grow
   1280   dynamically as needed to service mspace_malloc requests.  You can
   1281   control the sizes of incremental increases of this space by
   1282   compiling with a different DEFAULT_GRANULARITY or dynamically
   1283   setting with mallopt(M_GRANULARITY, value).
   1284 */
   1285 DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);
   1286 
   1287 /*
   1288   destroy_mspace destroys the given space, and attempts to return all
   1289   of its memory back to the system, returning the total number of
   1290   bytes freed. After destruction, the results of access to all memory
   1291   used by the space become undefined.
   1292 */
   1293 DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
   1294 
   1295 /*
   1296   create_mspace_with_base uses the memory supplied as the initial base
   1297   of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
   1298   space is used for bookkeeping, so the capacity must be at least this
   1299   large. (Otherwise 0 is returned.) When this initial space is
   1300   exhausted, additional memory will be obtained from the system.
   1301   Destroying this space will deallocate all additionally allocated
   1302   space (if possible) but not the initial base.
   1303 */
   1304 DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);
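
/*
  For illustration only (a sketch, not part of this malloc): carving an
  mspace out of a caller-supplied buffer.  The buffer must exceed the
  bookkeeping overhead noted above, or 0 is returned.

  static char arena_buf[1 << 20];   // 1MB initial region (example size)
  static mspace arena;

  int init_arena(void) {
    arena = create_mspace_with_base(arena_buf, sizeof(arena_buf), 1);
    return (arena != 0);
  }
*/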
   1305 
   1306 /*
   1307   mspace_track_large_chunks controls whether requests for large chunks
   1308   are allocated in their own untracked mmapped regions, separate from
   1309   others in this mspace. By default large chunks are not tracked,
   1310   which reduces fragmentation. However, such chunks are not
   1311   necessarily released to the system upon destroy_mspace.  Enabling
   1312   tracking by setting to true may increase fragmentation, but avoids
   1313   leakage when relying on destroy_mspace to release all memory
   1314   allocated using this space.  The function returns the previous
   1315   setting.
   1316 */
   1317 DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);
   1318 
   1319 
   1320 /*
   1321   mspace_malloc behaves as malloc, but operates within
   1322   the given space.
   1323 */
   1324 DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);
   1325 
   1326 /*
   1327   mspace_free behaves as free, but operates within
   1328   the given space.
   1329 
   1330   If compiled with FOOTERS==1, mspace_free is not actually needed.
   1331   free may be called instead of mspace_free because freed chunks from
   1332   any space are handled by their originating spaces.
   1333 */
   1334 DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);
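
/*
  For illustration only (a minimal sketch of the typical lifecycle of
  an independent space).  Enabling large-chunk tracking (see
  mspace_track_large_chunks above) lets destroy_mspace reclaim mmapped
  chunks as well.

  void mspace_demo(void) {
    void* p;
    mspace ms = create_mspace(0, 1);    // default capacity, locked
    if (ms == 0) return;
    mspace_track_large_chunks(ms, 1);
    p = mspace_malloc(ms, 1000);
    // ... use p ...
    mspace_free(ms, p);                 // per-chunk free (optional if the
    destroy_mspace(ms);                 //   whole space is destroyed anyway)
  }
*/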
   1335 
   1336 /*
   1337   mspace_realloc behaves as realloc, but operates within
   1338   the given space.
   1339 
   1340   If compiled with FOOTERS==1, mspace_realloc is not actually
   1341   needed.  realloc may be called instead of mspace_realloc because
   1342   realloced chunks from any space are handled by their originating
   1343   spaces.
   1344 */
   1345 DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);
   1346 
   1347 /*
   1348   mspace_calloc behaves as calloc, but operates within
   1349   the given space.
   1350 */
   1351 DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
   1352 
   1353 /*
   1354   mspace_memalign behaves as memalign, but operates within
   1355   the given space.
   1356 */
   1357 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
   1358 
   1359 /*
   1360   mspace_independent_calloc behaves as independent_calloc, but
   1361   operates within the given space.
   1362 */
   1363 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements,
   1364                                  size_t elem_size, void* chunks[]);
   1365 
   1366 /*
   1367   mspace_independent_comalloc behaves as independent_comalloc, but
   1368   operates within the given space.
   1369 */
   1370 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements,
   1371                                    size_t sizes[], void* chunks[]);
   1372 
   1373 /*
   1374   mspace_footprint() returns the number of bytes obtained from the
   1375   system for this space.
   1376 */
   1377 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp);
   1378 
   1379 /*
   1380   mspace_max_footprint() returns the peak number of bytes obtained from the
   1381   system for this space.
   1382 */
   1383 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp);
   1384 
   1385 
   1386 #if !NO_MALLINFO
   1387 /*
   1388   mspace_mallinfo behaves as mallinfo, but reports properties of
   1389   the given space.
   1390 */
   1391 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp);
   1392 #endif /* NO_MALLINFO */
   1393 
   1394 /*
   1395   mspace_usable_size(const void* p) behaves the same as malloc_usable_size.
   1396 */
   1397 DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem);
   1398 
   1399 /*
   1400   mspace_malloc_stats behaves as malloc_stats, but reports
   1401   properties of the given space.
   1402 */
   1403 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp);
   1404 
   1405 /*
   1406   mspace_trim behaves as malloc_trim, but
   1407   operates within the given space.
   1408 */
   1409 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad);
   1410 
   1411 /*
   1412   An alias for mallopt.
   1413 */
   1414 DLMALLOC_EXPORT int mspace_mallopt(int, int);
   1415 
   1416 #endif /* MSPACES */
   1417 
   1418 #ifdef __cplusplus
   1419 }  /* end of extern "C" */
   1420 #endif /* __cplusplus */
   1421 
   1422 /*
   1423   ========================================================================
   1424   To make a fully customizable malloc.h header file, cut everything
   1425   above this line, put into file malloc.h, edit to suit, and #include it
   1426   on the next line, as well as in programs that use this malloc.
   1427   ========================================================================
   1428 */
   1429 
   1430 /* #include "malloc.h" */
   1431 
   1432 /*------------------------------ internal #includes ---------------------- */
   1433 
   1434 #ifdef _MSC_VER
   1435 #pragma warning( disable : 4146 ) /* no "unsigned" warnings */
   1436 #endif /* _MSC_VER */
   1437 #if !NO_MALLOC_STATS
   1438 #include <stdio.h>       /* for printing in malloc_stats */
   1439 #endif /* NO_MALLOC_STATS */
   1440 #ifndef LACKS_ERRNO_H
   1441 #include <errno.h>       /* for MALLOC_FAILURE_ACTION */
   1442 #endif /* LACKS_ERRNO_H */
   1443 #ifdef DEBUG
   1444 #if ABORT_ON_ASSERT_FAILURE
   1445 #undef assert
   1446 #define assert(x) if(!(x)) ABORT
   1447 #else /* ABORT_ON_ASSERT_FAILURE */
   1448 #include <assert.h>
   1449 #endif /* ABORT_ON_ASSERT_FAILURE */
   1450 #else  /* DEBUG */
   1451 #ifndef assert
   1452 #define assert(x)
   1453 #endif
   1454 #define DEBUG 0
   1455 #endif /* DEBUG */
   1456 #if !defined(WIN32) && !defined(LACKS_TIME_H)
   1457 #include <time.h>        /* for magic initialization */
   1458 #endif /* WIN32 */
   1459 #ifndef LACKS_STDLIB_H
   1460 #include <stdlib.h>      /* for abort() */
   1461 #endif /* LACKS_STDLIB_H */
   1462 #ifndef LACKS_STRING_H
   1463 #include <string.h>      /* for memset etc */
   1464 #endif  /* LACKS_STRING_H */
   1465 #if USE_BUILTIN_FFS
   1466 #ifndef LACKS_STRINGS_H
   1467 #include <strings.h>     /* for ffs */
   1468 #endif /* LACKS_STRINGS_H */
   1469 #endif /* USE_BUILTIN_FFS */
   1470 #if HAVE_MMAP
   1471 #ifndef LACKS_SYS_MMAN_H
   1472 /* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
   1473 #if (defined(linux) && !defined(__USE_GNU))
   1474 #define __USE_GNU 1
   1475 #include <sys/mman.h>    /* for mmap */
   1476 #undef __USE_GNU
   1477 #else
   1478 #include <sys/mman.h>    /* for mmap */
   1479 #endif /* linux */
   1480 #endif /* LACKS_SYS_MMAN_H */
   1481 #ifndef LACKS_FCNTL_H
   1482 #include <fcntl.h>
   1483 #endif /* LACKS_FCNTL_H */
   1484 #endif /* HAVE_MMAP */
   1485 #ifndef LACKS_UNISTD_H
   1486 #include <unistd.h>     /* for sbrk, sysconf */
   1487 #else /* LACKS_UNISTD_H */
   1488 #if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
   1489 extern void*     sbrk(ptrdiff_t);
   1490 #endif /* FreeBSD etc */
   1491 #endif /* LACKS_UNISTD_H */
   1492 
   1493 /* Declarations for locking */
   1494 #if USE_LOCKS
   1495 #ifndef WIN32
   1496 #if defined (__SVR4) && defined (__sun)  /* solaris */
   1497 #include <thread.h>
   1498 #elif !defined(LACKS_SCHED_H)
   1499 #include <sched.h>
   1500 #endif /* solaris or LACKS_SCHED_H */
   1501 #if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
   1502 #include <pthread.h>
   1503 #endif /* USE_RECURSIVE_LOCKS ... */
   1504 #elif defined(_MSC_VER)
   1505 #ifndef _M_AMD64
   1506 /* These are already defined on AMD64 builds */
   1507 #ifdef __cplusplus
   1508 extern "C" {
   1509 #endif /* __cplusplus */
   1510 LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
   1511 LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
   1512 #ifdef __cplusplus
   1513 }
   1514 #endif /* __cplusplus */
   1515 #endif /* _M_AMD64 */
   1516 #pragma intrinsic (_InterlockedCompareExchange)
   1517 #pragma intrinsic (_InterlockedExchange)
   1518 #define interlockedcompareexchange _InterlockedCompareExchange
   1519 #define interlockedexchange _InterlockedExchange
   1520 #elif defined(WIN32) && defined(__GNUC__)
   1521 #define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
   1522 #define interlockedexchange __sync_lock_test_and_set
   1523 #endif /* Win32 */
   1524 #else /* USE_LOCKS */
   1525 #endif /* USE_LOCKS */
   1526 
   1527 #ifndef LOCK_AT_FORK
   1528 #define LOCK_AT_FORK 0
   1529 #endif
   1530 
   1531 /* Declarations for bit scanning on win32 */
   1532 #if defined(_MSC_VER) && _MSC_VER>=1300
   1533 #ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
   1534 #ifdef __cplusplus
   1535 extern "C" {
   1536 #endif /* __cplusplus */
   1537 unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
   1538 unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
   1539 #ifdef __cplusplus
   1540 }
   1541 #endif /* __cplusplus */
   1542 
   1543 #define BitScanForward _BitScanForward
   1544 #define BitScanReverse _BitScanReverse
   1545 #pragma intrinsic(_BitScanForward)
   1546 #pragma intrinsic(_BitScanReverse)
   1547 #endif /* BitScanForward */
   1548 #endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
   1549 
   1550 #ifndef WIN32
   1551 #ifndef malloc_getpagesize
   1552 #  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
   1553 #    ifndef _SC_PAGE_SIZE
   1554 #      define _SC_PAGE_SIZE _SC_PAGESIZE
   1555 #    endif
   1556 #  endif
   1557 #  ifdef _SC_PAGE_SIZE
   1558 #    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
   1559 #  else
   1560 #    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
   1561        extern size_t getpagesize();
   1562 #      define malloc_getpagesize getpagesize()
   1563 #    else
   1564 #      ifdef WIN32 /* use supplied emulation of getpagesize */
   1565 #        define malloc_getpagesize getpagesize()
   1566 #      else
   1567 #        ifndef LACKS_SYS_PARAM_H
   1568 #          include <sys/param.h>
   1569 #        endif
   1570 #        ifdef EXEC_PAGESIZE
   1571 #          define malloc_getpagesize EXEC_PAGESIZE
   1572 #        else
   1573 #          ifdef NBPG
   1574 #            ifndef CLSIZE
   1575 #              define malloc_getpagesize NBPG
   1576 #            else
   1577 #              define malloc_getpagesize (NBPG * CLSIZE)
   1578 #            endif
   1579 #          else
   1580 #            ifdef NBPC
   1581 #              define malloc_getpagesize NBPC
   1582 #            else
   1583 #              ifdef PAGESIZE
   1584 #                define malloc_getpagesize PAGESIZE
   1585 #              else /* just guess */
   1586 #                define malloc_getpagesize ((size_t)4096U)
   1587 #              endif
   1588 #            endif
   1589 #          endif
   1590 #        endif
   1591 #      endif
   1592 #    endif
   1593 #  endif
   1594 #endif
   1595 #endif
   1596 
   1597 /* ------------------- size_t and alignment properties -------------------- */
   1598 
   1599 /* The byte and bit size of a size_t */
   1600 #define SIZE_T_SIZE         (sizeof(size_t))
   1601 #define SIZE_T_BITSIZE      (sizeof(size_t) << 3)
   1602 
   1603 /* Some constants coerced to size_t */
   1604 /* Annoying but necessary to avoid errors on some platforms */
   1605 #define SIZE_T_ZERO         ((size_t)0)
   1606 #define SIZE_T_ONE          ((size_t)1)
   1607 #define SIZE_T_TWO          ((size_t)2)
   1608 #define SIZE_T_FOUR         ((size_t)4)
   1609 #define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
   1610 #define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
   1611 #define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
   1612 #define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)
   1613 
   1614 /* The bit mask value corresponding to MALLOC_ALIGNMENT */
   1615 #define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)
   1616 
   1617 /* True if address a has acceptable alignment */
   1618 #define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)
   1619 
   1620 /* the number of bytes to offset an address to align it */
   1621 #define align_offset(A)\
   1622  ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
   1623   ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))
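
/* For example, assuming MALLOC_ALIGNMENT is 8 (so CHUNK_ALIGN_MASK is 7):
   an address with (A & 7) == 5 yields an align_offset of (8 - 5) & 7 == 3,
   while an already-aligned address yields 0. */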
   1624 
   1625 /* -------------------------- MMAP preliminaries ------------------------- */
   1626 
   1627 /*
   1628    If HAVE_MORECORE or HAVE_MMAP is false, we just define calls and
   1629    checks to fail so the compiler optimizer can delete code rather than
   1630    using so many "#if"s.
   1631 */
   1632 
   1633 
   1634 /* MORECORE and MMAP must return MFAIL on failure */
   1635 #define MFAIL                ((void*)(MAX_SIZE_T))
   1636 #define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */
   1637 
   1638 #if HAVE_MMAP
   1639 
   1640 #ifndef WIN32
   1641 #define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
   1642 #define MMAP_PROT            (PROT_READ|PROT_WRITE)
   1643 #if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
   1644 #define MAP_ANONYMOUS        MAP_ANON
   1645 #endif /* MAP_ANON */
   1646 #ifdef MAP_ANONYMOUS
   1647 #define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
   1648 #define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
   1649 #else /* MAP_ANONYMOUS */
   1650 /*
   1651    Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   1652    is unlikely to be needed, but is supplied just in case.
   1653 */
   1654 #define MMAP_FLAGS           (MAP_PRIVATE)
   1655 static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
   1656 #define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
   1657            (dev_zero_fd = open("/dev/zero", O_RDWR), \
   1658             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
   1659             mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
   1660 #endif /* MAP_ANONYMOUS */
   1661 
   1662 #define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)
   1663 
   1664 #else /* WIN32 */
   1665 
   1666 /* Win32 MMAP via VirtualAlloc */
   1667 static FORCEINLINE void* win32mmap(size_t size) {
   1668   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
   1669   return (ptr != 0)? ptr: MFAIL;
   1670 }
   1671 
   1672 /* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
   1673 static FORCEINLINE void* win32direct_mmap(size_t size) {
   1674   void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
   1675                            PAGE_READWRITE);
   1676   return (ptr != 0)? ptr: MFAIL;
   1677 }
   1678 
   1679 /* This function supports releasing coalesced segments */
   1680 static FORCEINLINE int win32munmap(void* ptr, size_t size) {
   1681   MEMORY_BASIC_INFORMATION minfo;
   1682   char* cptr = (char*)ptr;
   1683   while (size) {
   1684     if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
   1685       return -1;
   1686     if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
   1687         minfo.State != MEM_COMMIT || minfo.RegionSize > size)
   1688       return -1;
   1689     if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
   1690       return -1;
   1691     cptr += minfo.RegionSize;
   1692     size -= minfo.RegionSize;
   1693   }
   1694   return 0;
   1695 }
   1696 
   1697 #define MMAP_DEFAULT(s)             win32mmap(s)
   1698 #define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
   1699 #define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
   1700 #endif /* WIN32 */
   1701 #endif /* HAVE_MMAP */
   1702 
   1703 #if HAVE_MREMAP
   1704 #ifndef WIN32
   1705 #define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
   1706 #endif /* WIN32 */
   1707 #endif /* HAVE_MREMAP */
   1708 
   1709 /**
   1710  * Define CALL_MORECORE
   1711  */
   1712 #if HAVE_MORECORE
   1713     #ifdef MORECORE
   1714         #define CALL_MORECORE(S)    MORECORE(S)
   1715     #else  /* MORECORE */
   1716         #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
   1717     #endif /* MORECORE */
   1718 #else  /* HAVE_MORECORE */
   1719     #define CALL_MORECORE(S)        MFAIL
   1720 #endif /* HAVE_MORECORE */
   1721 
   1722 /**
   1723  * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
   1724  */
   1725 #if HAVE_MMAP
   1726     #define USE_MMAP_BIT            (SIZE_T_ONE)
   1727 
   1728     #ifdef MMAP
   1729         #define CALL_MMAP(s)        MMAP(s)
   1730     #else /* MMAP */
   1731         #define CALL_MMAP(s)        MMAP_DEFAULT(s)
   1732     #endif /* MMAP */
   1733     #ifdef MUNMAP
   1734         #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
   1735     #else /* MUNMAP */
   1736         #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
   1737     #endif /* MUNMAP */
   1738     #ifdef DIRECT_MMAP
   1739         #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
   1740     #else /* DIRECT_MMAP */
   1741         #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
   1742     #endif /* DIRECT_MMAP */
   1743 #else  /* HAVE_MMAP */
   1744     #define USE_MMAP_BIT            (SIZE_T_ZERO)
   1745 
   1746     #define MMAP(s)                 MFAIL
   1747     #define MUNMAP(a, s)            (-1)
   1748     #define DIRECT_MMAP(s)          MFAIL
   1749     #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
   1750     #define CALL_MMAP(s)            MMAP(s)
   1751     #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
   1752 #endif /* HAVE_MMAP */
   1753 
   1754 /**
   1755  * Define CALL_MREMAP
   1756  */
   1757 #if HAVE_MMAP && HAVE_MREMAP
   1758     #ifdef MREMAP
   1759         #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
   1760     #else /* MREMAP */
   1761         #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
   1762     #endif /* MREMAP */
   1763 #else  /* HAVE_MMAP && HAVE_MREMAP */
   1764     #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
   1765 #endif /* HAVE_MMAP && HAVE_MREMAP */
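
/*
   For illustration only (a sketch of how a project might substitute
   its own mapping primitives): because the CALL_* macros above prefer
   user-supplied MMAP/MUNMAP/DIRECT_MMAP/MREMAP definitions when they
   exist, replacements can be injected at compile time, e.g.:

     cc -DHAVE_MMAP=1 "-DMMAP(s)=my_mmap(s)" "-DMUNMAP(a,s)=my_munmap((a),(s))" ...

   (my_mmap/my_munmap are hypothetical names.)  Replacements should
   follow the conventions of the defaults above: MMAP returns MFAIL on
   failure, and MUNMAP returns 0 on success.
*/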
   1766 
   1767 /* mstate bit set if contiguous morecore disabled or failed */
   1768 #define USE_NONCONTIGUOUS_BIT (4U)
   1769 
   1770 /* segment bit set in create_mspace_with_base */
   1771 #define EXTERN_BIT            (8U)
   1772 
   1773 
   1774 /* --------------------------- Lock preliminaries ------------------------ */
   1775 
   1776 /*
   1777   When locks are defined, there is one global lock, plus
   1778   one per-mspace lock.
   1779 
   1780   The global lock ensures that mparams.magic and other unique
   1781   mparams values are initialized only once. It also protects
   1782   sequences of calls to MORECORE.  In many cases sys_alloc requires
   1783   two calls that should not be interleaved with calls by other
   1784   threads.  This does not protect against direct calls to MORECORE
   1785   by other threads not using this lock, so there is still code to
   1786   cope as best we can with interference.
   1787 
   1788   Per-mspace locks surround calls to malloc, free, etc.
   1789   By default, locks are simple non-reentrant mutexes.
   1790 
   1791   Because lock-protected regions generally have bounded times, it is
   1792   OK to use the supplied simple spinlocks. Spinlocks are likely to
   1793   improve performance for lightly contended applications, but worsen
   1794   performance under heavy contention.
   1795 
   1796   If USE_LOCKS is > 1, the definitions of lock routines here are
   1797   bypassed, in which case you will need to define the type MLOCK_T,
   1798   and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
   1799   and TRY_LOCK.  You must also declare a
   1800     static MLOCK_T malloc_global_mutex = { initialization values };.
   1801 
   1802 */
   1803 
   1804 #if !USE_LOCKS
   1805 #define USE_LOCK_BIT               (0U)
   1806 #define INITIAL_LOCK(l)            (0)
   1807 #define DESTROY_LOCK(l)            (0)
   1808 #define ACQUIRE_MALLOC_GLOBAL_LOCK()
   1809 #define RELEASE_MALLOC_GLOBAL_LOCK()
   1810 
   1811 #else
   1812 #if USE_LOCKS > 1
   1813 /* -----------------------  User-defined locks ------------------------ */
   1814 /* Define your own lock implementation here */
   1815 /* #define INITIAL_LOCK(lk)  ... */
   1816 /* #define DESTROY_LOCK(lk)  ... */
   1817 /* #define ACQUIRE_LOCK(lk)  ... */
   1818 /* #define RELEASE_LOCK(lk)  ... */
   1819 /* #define TRY_LOCK(lk) ... */
   1820 /* static MLOCK_T malloc_global_mutex = ... */
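
/*
   For instance (a sketch only; the pthread-based section further below
   does essentially the same thing), the hooks above could be mapped
   onto POSIX mutexes:

     #define MLOCK_T           pthread_mutex_t
     #define INITIAL_LOCK(lk)  pthread_mutex_init(lk, NULL)
     #define DESTROY_LOCK(lk)  pthread_mutex_destroy(lk)
     #define ACQUIRE_LOCK(lk)  pthread_mutex_lock(lk)
     #define RELEASE_LOCK(lk)  pthread_mutex_unlock(lk)
     #define TRY_LOCK(lk)      (!pthread_mutex_trylock(lk))
     static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;

   As used in this file, ACQUIRE_LOCK and INITIAL_LOCK are expected to
   return 0 on success and TRY_LOCK nonzero on success, which these
   definitions satisfy.
*/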
   1821 
   1822 #elif USE_SPIN_LOCKS
   1823 
   1824 /* First, define CAS_LOCK and CLEAR_LOCK on ints */
   1825 /* Note CAS_LOCK defined to return 0 on success */
   1826 
   1827 #if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
   1828 #define CAS_LOCK(sl)     __sync_lock_test_and_set(sl, 1)
   1829 #define CLEAR_LOCK(sl)   __sync_lock_release(sl)
   1830 
   1831 #elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
   1832 /* Custom spin locks for older gcc on x86 */
   1833 static FORCEINLINE int x86_cas_lock(int *sl) {
   1834   int ret;
   1835   int val = 1;
   1836   int cmp = 0;
   1837   __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
   1838                          : "=a" (ret)
   1839                          : "r" (val), "m" (*(sl)), "0"(cmp)
   1840                          : "memory", "cc");
   1841   return ret;
   1842 }
   1843 
   1844 static FORCEINLINE void x86_clear_lock(int* sl) {
   1845   assert(*sl != 0);
   1846   int prev = 0;
   1847   int ret;
   1848   __asm__ __volatile__ ("lock; xchgl %0, %1"
   1849                         : "=r" (ret)
   1850                         : "m" (*(sl)), "0"(prev)
   1851                         : "memory");
   1852 }
   1853 
   1854 #define CAS_LOCK(sl)     x86_cas_lock(sl)
   1855 #define CLEAR_LOCK(sl)   x86_clear_lock(sl)
   1856 
   1857 #else /* Win32 MSC */
   1858 #define CAS_LOCK(sl)     interlockedexchange(sl, (LONG)1)
   1859 #define CLEAR_LOCK(sl)   interlockedexchange (sl, (LONG)0)
   1860 
   1861 #endif /* ... gcc spin locks ... */
   1862 
   1863 /* How to yield for a spin lock */
   1864 #define SPINS_PER_YIELD       63
   1865 #if defined(_MSC_VER)
   1866 #define SLEEP_EX_DURATION     50 /* delay for yield/sleep */
   1867 #define SPIN_LOCK_YIELD  SleepEx(SLEEP_EX_DURATION, FALSE)
   1868 #elif defined (__SVR4) && defined (__sun) /* solaris */
   1869 #define SPIN_LOCK_YIELD   thr_yield();
   1870 #elif !defined(LACKS_SCHED_H)
   1871 #define SPIN_LOCK_YIELD   sched_yield();
   1872 #else
   1873 #define SPIN_LOCK_YIELD
   1874 #endif /* ... yield ... */
   1875 
   1876 #if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
   1877 /* Plain spin locks use single word (embedded in malloc_states) */
   1878 static int spin_acquire_lock(int *sl) {
   1879   int spins = 0;
   1880   while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
   1881     if ((++spins & SPINS_PER_YIELD) == 0) {
   1882       SPIN_LOCK_YIELD;
   1883     }
   1884   }
   1885   return 0;
   1886 }
   1887 
   1888 #define MLOCK_T               int
   1889 #define TRY_LOCK(sl)          !CAS_LOCK(sl)
   1890 #define RELEASE_LOCK(sl)      CLEAR_LOCK(sl)
   1891 #define ACQUIRE_LOCK(sl)      (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
   1892 #define INITIAL_LOCK(sl)      (*sl = 0)
   1893 #define DESTROY_LOCK(sl)      (0)
   1894 static MLOCK_T malloc_global_mutex = 0;
   1895 
   1896 #else /* USE_RECURSIVE_LOCKS */
   1897 /* types for lock owners */
   1898 #ifdef WIN32
   1899 #define THREAD_ID_T           DWORD
   1900 #define CURRENT_THREAD        GetCurrentThreadId()
   1901 #define EQ_OWNER(X,Y)         ((X) == (Y))
   1902 #else
   1903 /*
   1904   Note: the following assume that pthread_t is a type that can be
   1905   initialized to (casted) zero. If this is not the case, you will need to
   1906   somehow redefine these or not use spin locks.
   1907 */
   1908 #define THREAD_ID_T           pthread_t
   1909 #define CURRENT_THREAD        pthread_self()
   1910 #define EQ_OWNER(X,Y)         pthread_equal(X, Y)
   1911 #endif
   1912 
   1913 struct malloc_recursive_lock {
   1914   int sl;
   1915   unsigned int c;
   1916   THREAD_ID_T threadid;
   1917 };
   1918 
   1919 #define MLOCK_T  struct malloc_recursive_lock
   1920 static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};
   1921 
   1922 static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
   1923   assert(lk->sl != 0);
   1924   if (--lk->c == 0) {
   1925     CLEAR_LOCK(&lk->sl);
   1926   }
   1927 }
   1928 
   1929 static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
   1930   THREAD_ID_T mythreadid = CURRENT_THREAD;
   1931   int spins = 0;
   1932   for (;;) {
   1933     if (*((volatile int *)(&lk->sl)) == 0) {
   1934       if (!CAS_LOCK(&lk->sl)) {
   1935         lk->threadid = mythreadid;
   1936         lk->c = 1;
   1937         return 0;
   1938       }
   1939     }
   1940     else if (EQ_OWNER(lk->threadid, mythreadid)) {
   1941       ++lk->c;
   1942       return 0;
   1943     }
   1944     if ((++spins & SPINS_PER_YIELD) == 0) {
   1945       SPIN_LOCK_YIELD;
   1946     }
   1947   }
   1948 }
   1949 
   1950 static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
   1951   THREAD_ID_T mythreadid = CURRENT_THREAD;
   1952   if (*((volatile int *)(&lk->sl)) == 0) {
   1953     if (!CAS_LOCK(&lk->sl)) {
   1954       lk->threadid = mythreadid;
   1955       lk->c = 1;
   1956       return 1;
   1957     }
   1958   }
   1959   else if (EQ_OWNER(lk->threadid, mythreadid)) {
   1960     ++lk->c;
   1961     return 1;
   1962   }
   1963   return 0;
   1964 }
   1965 
   1966 #define RELEASE_LOCK(lk)      recursive_release_lock(lk)
   1967 #define TRY_LOCK(lk)          recursive_try_lock(lk)
   1968 #define ACQUIRE_LOCK(lk)      recursive_acquire_lock(lk)
   1969 #define INITIAL_LOCK(lk)      ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
   1970 #define DESTROY_LOCK(lk)      (0)
   1971 #endif /* USE_RECURSIVE_LOCKS */
   1972 
   1973 #elif defined(WIN32) /* Win32 critical sections */
   1974 #define MLOCK_T               CRITICAL_SECTION
   1975 #define ACQUIRE_LOCK(lk)      (EnterCriticalSection(lk), 0)
   1976 #define RELEASE_LOCK(lk)      LeaveCriticalSection(lk)
   1977 #define TRY_LOCK(lk)          TryEnterCriticalSection(lk)
   1978 #define INITIAL_LOCK(lk)      (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
   1979 #define DESTROY_LOCK(lk)      (DeleteCriticalSection(lk), 0)
   1980 #define NEED_GLOBAL_LOCK_INIT
   1981 
   1982 static MLOCK_T malloc_global_mutex;
   1983 static volatile LONG malloc_global_mutex_status;
   1984 
   1985 /* Use spin loop to initialize global lock */
   1986 static void init_malloc_global_mutex() {
   1987   for (;;) {
   1988     long stat = malloc_global_mutex_status;
   1989     if (stat > 0)
   1990       return;
   1991     /* transition to < 0 while initializing, then to > 0 */
   1992     if (stat == 0 &&
   1993         interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
   1994       InitializeCriticalSection(&malloc_global_mutex);
   1995       interlockedexchange(&malloc_global_mutex_status, (LONG)1);
   1996       return;
   1997     }
   1998     SleepEx(0, FALSE);
   1999   }
   2000 }
   2001 
   2002 #else /* pthreads-based locks */
   2003 #define MLOCK_T               pthread_mutex_t
   2004 #define ACQUIRE_LOCK(lk)      pthread_mutex_lock(lk)
   2005 #define RELEASE_LOCK(lk)      pthread_mutex_unlock(lk)
   2006 #define TRY_LOCK(lk)          (!pthread_mutex_trylock(lk))
   2007 #define INITIAL_LOCK(lk)      pthread_init_lock(lk)
   2008 #define DESTROY_LOCK(lk)      pthread_mutex_destroy(lk)
   2009 
   2010 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
   2011 /* Cope with old-style linux recursive lock initialization by adding */
   2012 /* skipped internal declaration from pthread.h */
   2013 extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
   2014                                               int __kind));
   2015 #define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
   2016 #define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
   2017 #endif /* USE_RECURSIVE_LOCKS ... */
   2018 
   2019 static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;
   2020 
   2021 static int pthread_init_lock (MLOCK_T *lk) {
   2022   pthread_mutexattr_t attr;
   2023   if (pthread_mutexattr_init(&attr)) return 1;
   2024 #if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
   2025   if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
   2026 #endif
   2027   if (pthread_mutex_init(lk, &attr)) return 1;
   2028   if (pthread_mutexattr_destroy(&attr)) return 1;
   2029   return 0;
   2030 }
   2031 
   2032 #endif /* ... lock types ... */
   2033 
   2034 /* Common code for all lock types */
   2035 #define USE_LOCK_BIT               (2U)
   2036 
   2037 #ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
   2038 #define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
   2039 #endif
   2040 
   2041 #ifndef RELEASE_MALLOC_GLOBAL_LOCK
   2042 #define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
   2043 #endif
   2044 
   2045 #endif /* USE_LOCKS */
   2046 
   2047 /* -----------------------  Chunk representations ------------------------ */
   2048 
   2049 /*
   2050   (The following includes lightly edited explanations by Colin Plumb.)
   2051 
   2052   The malloc_chunk declaration below is misleading (but accurate and
   2053   necessary).  It declares a "view" into memory allowing access to
   2054   necessary fields at known offsets from a given base.
   2055 
   2056   Chunks of memory are maintained using a `boundary tag' method as
   2057   originally described by Knuth.  (See the paper by Paul Wilson
   2058   ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
   2059   techniques.)  Sizes of free chunks are stored both in the front of
   2060   each chunk and at the end.  This makes consolidating fragmented
   2061   chunks into bigger chunks fast.  The head fields also hold bits
   2062   representing whether chunks are free or in use.
   2063 
   2064   Here are some pictures to make it clearer.  They are "exploded" to
   2065   show that the state of a chunk can be thought of as extending from
   2066   the high 31 bits of the head field of its header through the
   2067   prev_foot and PINUSE_BIT bit of the following chunk header.
   2068 
   2069   A chunk that's in use looks like:
   2070 
   2071    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2072            | Size of previous chunk (if P = 0)                             |
   2073            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2074          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
   2075          | Size of this chunk                                         1| +-+
   2076    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2077          |                                                               |
   2078          +-                                                             -+
   2079          |                                                               |
   2080          +-                                                             -+
   2081          |                                                               :
   2082          +-      size - sizeof(size_t) available payload bytes          -+
   2083          :                                                               |
   2084  chunk-> +-                                                             -+
   2085          |                                                               |
   2086          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2087        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
   2088        | Size of next chunk (may or may not be in use)               | +-+
   2089  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2090 
   2091     And if it's free, it looks like this:
   2092 
   2093    chunk-> +-                                                             -+
   2094            | User payload (must be in use, or we would have merged!)       |
   2095            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2096          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
   2097          | Size of this chunk                                         0| +-+
   2098    mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2099          | Next pointer                                                  |
   2100          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2101          | Prev pointer                                                  |
   2102          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2103          |                                                               :
   2104          +-      size - sizeof(struct chunk) unused bytes               -+
   2105          :                                                               |
   2106  chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2107          | Size of this chunk                                            |
   2108          +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2109        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
   2110        | Size of next chunk (must be in use, or we would have merged)| +-+
   2111  mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2112        |                                                               :
   2113        +- User payload                                                -+
   2114        :                                                               |
   2115        +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2116                                                                      |0|
   2117                                                                      +-+
   2118   Note that since we always merge adjacent free chunks, the chunks
   2119   adjacent to a free chunk must be in use.
   2120 
   2121   Given a pointer to a chunk (which can be derived trivially from the
   2122   payload pointer) we can, in O(1) time, find out whether the adjacent
   2123   chunks are free, and if so, unlink them from the lists that they
   2124   are on and merge them with the current chunk.
   2125 
   2126   Chunks always begin on even word boundaries, so the mem portion
   2127   (which is returned to the user) is also on an even word boundary, and
   2128   thus at least double-word aligned.
   2129 
   2130   The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
   2131   chunk size (which is always a multiple of two words), is an in-use
   2132   bit for the *previous* chunk.  If that bit is *clear*, then the
   2133   word before the current chunk size contains the previous chunk
   2134   size, and can be used to find the front of the previous chunk.
   2135   The very first chunk allocated always has this bit set, preventing
   2136   access to non-existent (or non-owned) memory. If pinuse is set for
   2137   any given chunk, then you CANNOT determine the size of the
   2138   previous chunk, and might even get a memory addressing fault when
   2139   trying to do so.
   2140 
   2141   The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
   2142   the chunk size redundantly records whether the current chunk is
   2143   inuse (unless the chunk is mmapped). This redundancy enables usage
   2144   checks within free and realloc, and reduces indirection when freeing
   2145   and consolidating chunks.
   2146 
   2147   Each freshly allocated chunk must have both cinuse and pinuse set.
   2148   That is, each allocated chunk borders either a previously allocated
   2149   and still in-use chunk, or the base of its memory arena. This is
   2150   ensured by making all allocations from the `lowest' part of any
   2151   found chunk.  Further, no free chunk physically borders another one,
   2152   so each free chunk is known to be preceded and followed by either
   2153   inuse chunks or the ends of memory.
   2154 
   2155   Note that the `foot' of the current chunk is actually represented
   2156   as the prev_foot of the NEXT chunk. This makes it easier to
   2157   deal with alignments etc but can be very confusing when trying
   2158   to extend or adapt this code.
   2159 
   2160   The exceptions to all this are
   2161 
   2162      1. The special chunk `top' is the top-most available chunk (i.e.,
   2163         the one bordering the end of available memory). It is treated
   2164         specially.  Top is never included in any bin, is used only if
   2165         no other chunk is available, and is released back to the
   2166         system if it is very large (see M_TRIM_THRESHOLD).  In effect,
   2167         the top chunk is treated as larger (and thus less well
   2168         fitting) than any other available chunk.  The top chunk
   2169         doesn't update its trailing size field since there is no next
   2170         contiguous chunk that would have to index off it. However,
   2171         space is still allocated for it (TOP_FOOT_SIZE) to enable
   2172         separation or merging when space is extended.
   2173 
   2174      2. Chunks allocated via mmap have both cinuse and pinuse bits
   2175         cleared in their head fields.  Because they are allocated
   2176         one-by-one, each must carry its own prev_foot field, which is
   2177         also used to hold the offset this chunk has within its mmapped
   2178         region, which is needed to preserve alignment. Each mmapped
   2179         chunk is trailed by the first two fields of a fake next-chunk
   2180         for sake of usage checks.
   2181 
   2182 */
   2183 
   2184 struct malloc_chunk {
   2185   size_t               prev_foot;  /* Size of previous chunk (if free).  */
   2186   size_t               head;       /* Size and inuse bits. */
   2187   struct malloc_chunk* fd;         /* double links -- used only if free. */
   2188   struct malloc_chunk* bk;
   2189 };
   2190 
   2191 typedef struct malloc_chunk  mchunk;
   2192 typedef struct malloc_chunk* mchunkptr;
   2193 typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
   2194 typedef unsigned int bindex_t;         /* Described below */
   2195 typedef unsigned int binmap_t;         /* Described below */
   2196 typedef unsigned int flag_t;           /* The type of various bit flag sets */
   2197 
   2198 /* ------------------- Chunks sizes and alignments ----------------------- */
   2199 
   2200 #define MCHUNK_SIZE         (sizeof(mchunk))
   2201 
   2202 #if FOOTERS
   2203 #define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
   2204 #else /* FOOTERS */
   2205 #define CHUNK_OVERHEAD      (SIZE_T_SIZE)
   2206 #endif /* FOOTERS */
   2207 
   2208 /* MMapped chunks need a second word of overhead ... */
   2209 #define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
   2210 /* ... and additional padding for fake next-chunk at foot */
   2211 #define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)
   2212 
   2213 /* The smallest size we can malloc is an aligned minimal chunk */
   2214 #define MIN_CHUNK_SIZE\
   2215   ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
   2216 
   2217 /* conversion from malloc headers to user pointers, and back */
   2218 #define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
   2219 #define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
   2220 /* chunk associated with aligned address A */
   2221 #define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))
   2222 
   2223 /* Bounds on request (not chunk) sizes. */
   2224 #define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
   2225 #define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)
   2226 
   2227 /* pad request bytes into a usable size */
   2228 #define pad_request(req) \
   2229    (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)
   2230 
   2231 /* pad request, checking for minimum (but not maximum) */
   2232 #define request2size(req) \
   2233   (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))
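
/* For example, assuming 4-byte size_t and pointers, 8-byte
   MALLOC_ALIGNMENT, and FOOTERS off (so CHUNK_OVERHEAD == 4,
   CHUNK_ALIGN_MASK == 7, MIN_CHUNK_SIZE == 16, MIN_REQUEST == 11):
     request2size(1)  == 16   (below MIN_REQUEST, rounded up to MIN_CHUNK_SIZE)
     request2size(12) == 16   ((12 + 4 + 7) & ~7)
     request2size(13) == 24   ((13 + 4 + 7) & ~7)
*/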
   2234 
   2235 
   2236 /* ------------------ Operations on head and foot fields ----------------- */
   2237 
   2238 /*
   2239   The head field of a chunk is or'ed with PINUSE_BIT when the previous
   2240   adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is in
   2241   use, unless mmapped, in which case both bits are cleared.
   2242 
   2243   FLAG4_BIT is not used by this malloc, but might be useful in extensions.
   2244 */
   2245 
   2246 #define PINUSE_BIT          (SIZE_T_ONE)
   2247 #define CINUSE_BIT          (SIZE_T_TWO)
   2248 #define FLAG4_BIT           (SIZE_T_FOUR)
   2249 #define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
   2250 #define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)
   2251 
   2252 /* Head value for fenceposts */
   2253 #define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)
   2254 
   2255 /* extraction of fields from head words */
   2256 #define cinuse(p)           ((p)->head & CINUSE_BIT)
   2257 #define pinuse(p)           ((p)->head & PINUSE_BIT)
   2258 #define flag4inuse(p)       ((p)->head & FLAG4_BIT)
   2259 #define is_inuse(p)         (((p)->head & INUSE_BITS) != PINUSE_BIT)
   2260 #define is_mmapped(p)       (((p)->head & INUSE_BITS) == 0)
   2261 
   2262 #define chunksize(p)        ((p)->head & ~(FLAG_BITS))
   2263 
   2264 #define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
   2265 #define set_flag4(p)        ((p)->head |= FLAG4_BIT)
   2266 #define clear_flag4(p)      ((p)->head &= ~FLAG4_BIT)
   2267 
   2268 /* Treat space at ptr +/- offset as a chunk */
   2269 #define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
   2270 #define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))
   2271 
   2272 /* Ptr to next or previous physical malloc_chunk. */
   2273 #define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
   2274 #define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))
   2275 
   2276 /* extract next chunk's pinuse bit */
   2277 #define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)
   2278 
   2279 /* Get/set size at footer */
   2280 #define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
   2281 #define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))
   2282 
   2283 /* Set size, pinuse bit, and foot */
   2284 #define set_size_and_pinuse_of_free_chunk(p, s)\
   2285   ((p)->head = (s|PINUSE_BIT), set_foot(p, s))
   2286 
   2287 /* Set size, pinuse bit, foot, and clear next pinuse */
   2288 #define set_free_with_pinuse(p, s, n)\
   2289   (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))
   2290 
   2291 /* Get the internal overhead associated with chunk p */
   2292 #define overhead_for(p)\
   2293  (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
   2294 
   2295 /* Return true if malloced space is not necessarily cleared */
   2296 #if MMAP_CLEARS
   2297 #define calloc_must_clear(p) (!is_mmapped(p))
   2298 #else /* MMAP_CLEARS */
   2299 #define calloc_must_clear(p) (1)
   2300 #endif /* MMAP_CLEARS */
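
/*
  For illustration only (a sketch; the real traversal code later in
  this file also accounts for segment boundaries, fenceposts and the
  top chunk): the macros above support a chunk-by-chunk walk from any
  known chunk address up to a known limit.

  static size_t count_inuse_chunks(mchunkptr p, mchunkptr limit) {
    size_t n = 0;
    while (p < limit) {
      if (is_inuse(p))
        ++n;
      p = next_chunk(p);   // advance by this chunk's size (head & ~FLAG_BITS)
    }
    return n;
  }
*/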
   2301 
   2302 /* ---------------------- Overlaid data structures ----------------------- */
   2303 
   2304 /*
   2305   When chunks are not in use, they are treated as nodes of either
   2306   lists or trees.
   2307 
   2308   "Small"  chunks are stored in circular doubly-linked lists, and look
   2309   like this:
   2310 
   2311     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2312             |             Size of previous chunk                            |
   2313             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2314     `head:' |             Size of chunk, in bytes                         |P|
   2315       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2316             |             Forward pointer to next chunk in list             |
   2317             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2318             |             Back pointer to previous chunk in list            |
   2319             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2320             |             Unused space (may be 0 bytes long)                .
   2321             .                                                               .
   2322             .                                                               |
   2323 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2324     `foot:' |             Size of chunk, in bytes                           |
   2325             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2326 
   2327   Larger chunks are kept in a form of bitwise digital trees (aka
   2328   tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
   2329   free chunks greater than 256 bytes, their size doesn't impose any
   2330   constraints on user chunk sizes.  Each node looks like:
   2331 
   2332     chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2333             |             Size of previous chunk                            |
   2334             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2335     `head:' |             Size of chunk, in bytes                         |P|
   2336       mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2337             |             Forward pointer to next chunk of same size        |
   2338             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2339             |             Back pointer to previous chunk of same size       |
   2340             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2341             |             Pointer to left child (child[0])                  |
   2342             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2343             |             Pointer to right child (child[1])                 |
   2344             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2345             |             Pointer to parent                                 |
   2346             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2347             |             bin index of this chunk                           |
   2348             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2349             |             Unused space                                      .
   2350             .                                                               |
   2351 nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2352     `foot:' |             Size of chunk, in bytes                           |
   2353             +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   2354 
   2355   Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
   2356   of the same size are arranged in a circularly-linked list, with only
   2357   the oldest chunk (the next to be used, in our FIFO ordering)
   2358   actually in the tree.  (Tree members are distinguished by a non-null
    2359   parent pointer.)  If a chunk with the same size as an existing node
   2360   is inserted, it is linked off the existing node using pointers that
   2361   work in the same way as fd/bk pointers of small chunks.
   2362 
   2363   Each tree contains a power of 2 sized range of chunk sizes (the
    2364   smallest is 0x100 <= x < 0x180), which is divided in half at each
   2365   tree level, with the chunks in the smaller half of the range (0x100
    2366   <= x < 0x140 for the top node) in the left subtree and the larger
   2367   half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
   2368   done by inspecting individual bits.
   2369 
   2370   Using these rules, each node's left subtree contains all smaller
   2371   sizes than its right subtree.  However, the node at the root of each
   2372   subtree has no particular ordering relationship to either.  (The
   2373   dividing line between the subtree sizes is based on trie relation.)
   2374   If we remove the last chunk of a given size from the interior of the
   2375   tree, we need to replace it with a leaf node.  The tree ordering
   2376   rules permit a node to be replaced by any leaf below it.
   2377 
   2378   The smallest chunk in a tree (a common operation in a best-fit
   2379   allocator) can be found by walking a path to the leftmost leaf in
   2380   the tree.  Unlike a usual binary tree, where we follow left child
   2381   pointers until we reach a null, here we follow the right child
   2382   pointer any time the left one is null, until we reach a leaf with
   2383   both child pointers null. The smallest chunk in the tree will be
   2384   somewhere along that path.
   2385 
   2386   The worst case number of steps to add, find, or remove a node is
   2387   bounded by the number of bits differentiating chunks within
   2388   bins. Under current bin calculations, this ranges from 6 up to 21
   2389   (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
   2390   is of course much better.
   2391 */
   2392 
   2393 struct malloc_tree_chunk {
   2394   /* The first four fields must be compatible with malloc_chunk */
   2395   size_t                    prev_foot;
   2396   size_t                    head;
   2397   struct malloc_tree_chunk* fd;
   2398   struct malloc_tree_chunk* bk;
   2399 
   2400   struct malloc_tree_chunk* child[2];
   2401   struct malloc_tree_chunk* parent;
   2402   bindex_t                  index;
   2403 };
   2404 
   2405 typedef struct malloc_tree_chunk  tchunk;
   2406 typedef struct malloc_tree_chunk* tchunkptr;
   2407 typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */
   2408 
   2409 /* A little helper macro for trees */
   2410 #define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])
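
/*
  Illustrative sketch (hypothetical helper, not used by this malloc):
  as described above, the smallest chunk in a tree lies somewhere on
  the path produced by repeatedly taking leftmost_child, so a best-fit
  scan only needs to track the minimum size seen along that walk.
*/
#if 0
static tchunkptr smallest_in_tree(tchunkptr t) {
  tchunkptr best = t;
  while ((t = leftmost_child(t)) != 0) {
    if (chunksize(t) < chunksize(best))
      best = t;               /* remember the smallest chunk on the path */
  }
  return best;
}
#endif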
   2411 
   2412 /* ----------------------------- Segments -------------------------------- */
   2413 
   2414 /*
   2415   Each malloc space may include non-contiguous segments, held in a
   2416   list headed by an embedded malloc_segment record representing the
   2417   top-most space. Segments also include flags holding properties of
   2418   the space. Large chunks that are directly allocated by mmap are not
   2419   included in this list. They are instead independently created and
   2420   destroyed without otherwise keeping track of them.
   2421 
   2422   Segment management mainly comes into play for spaces allocated by
   2423   MMAP.  Any call to MMAP might or might not return memory that is
   2424   adjacent to an existing segment.  MORECORE normally contiguously
   2425   extends the current space, so this space is almost always adjacent,
   2426   which is simpler and faster to deal with. (This is why MORECORE is
   2427   used preferentially to MMAP when both are available -- see
   2428   sys_alloc.)  When allocating using MMAP, we don't use any of the
   2429   hinting mechanisms (inconsistently) supported in various
   2430   implementations of unix mmap, or distinguish reserving from
   2431   committing memory. Instead, we just ask for space, and exploit
   2432   contiguity when we get it.  It is probably possible to do
   2433   better than this on some systems, but no general scheme seems
   2434   to be significantly better.
   2435 
   2436   Management entails a simpler variant of the consolidation scheme
   2437   used for chunks to reduce fragmentation -- new adjacent memory is
   2438   normally prepended or appended to an existing segment. However,
   2439   there are limitations compared to chunk consolidation that mostly
   2440   reflect the fact that segment processing is relatively infrequent
    2441   (occurring only when getting memory from the system) and that we
   2442   don't expect to have huge numbers of segments:
   2443 
   2444   * Segments are not indexed, so traversal requires linear scans.  (It
   2445     would be possible to index these, but is not worth the extra
   2446     overhead and complexity for most programs on most platforms.)
   2447   * New segments are only appended to old ones when holding top-most
   2448     memory; if they cannot be prepended to others, they are held in
   2449     different segments.
   2450 
   2451   Except for the top-most segment of an mstate, each segment record
   2452   is kept at the tail of its segment. Segments are added by pushing
   2453   segment records onto the list headed by &mstate.seg for the
   2454   containing mstate.
   2455 
   2456   Segment flags control allocation/merge/deallocation policies:
   2457   * If EXTERN_BIT set, then we did not allocate this segment,
   2458     and so should not try to deallocate or merge with others.
   2459     (This currently holds only for the initial segment passed
   2460     into create_mspace_with_base.)
   2461   * If USE_MMAP_BIT set, the segment may be merged with
   2462     other surrounding mmapped segments and trimmed/de-allocated
   2463     using munmap.
   2464   * If neither bit is set, then the segment was obtained using
   2465     MORECORE so can be merged with surrounding MORECORE'd segments
   2466     and deallocated/trimmed using MORECORE with negative arguments.
   2467 */
   2468 
   2469 struct malloc_segment {
   2470   char*        base;             /* base address */
   2471   size_t       size;             /* allocated size */
   2472   struct malloc_segment* next;   /* ptr to next segment */
   2473   flag_t       sflags;           /* mmap and extern flag */
   2474 };
   2475 
   2476 #define is_mmapped_segment(S)  ((S)->sflags & USE_MMAP_BIT)
   2477 #define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)
   2478 
   2479 typedef struct malloc_segment  msegment;
   2480 typedef struct malloc_segment* msegmentptr;
   2481 
   2482 /* ---------------------------- malloc_state ----------------------------- */
   2483 
   2484 /*
   2485    A malloc_state holds all of the bookkeeping for a space.
   2486    The main fields are:
   2487 
   2488   Top
   2489     The topmost chunk of the currently active segment. Its size is
   2490     cached in topsize.  The actual size of topmost space is
   2491     topsize+TOP_FOOT_SIZE, which includes space reserved for adding
   2492     fenceposts and segment records if necessary when getting more
   2493     space from the system.  The size at which to autotrim top is
   2494     cached from mparams in trim_check, except that it is disabled if
   2495     an autotrim fails.
   2496 
   2497   Designated victim (dv)
   2498     This is the preferred chunk for servicing small requests that
   2499     don't have exact fits.  It is normally the chunk split off most
   2500     recently to service another small request.  Its size is cached in
   2501     dvsize. The link fields of this chunk are not maintained since it
   2502     is not kept in a bin.
   2503 
   2504   SmallBins
   2505     An array of bin headers for free chunks.  These bins hold chunks
   2506     with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
   2507     chunks of all the same size, spaced 8 bytes apart.  To simplify
   2508     use in double-linked lists, each bin header acts as a malloc_chunk
   2509     pointing to the real first node, if it exists (else pointing to
   2510     itself).  This avoids special-casing for headers.  But to avoid
   2511     waste, we allocate only the fd/bk pointers of bins, and then use
   2512     repositioning tricks to treat these as the fields of a chunk.
   2513 
   2514   TreeBins
   2515     Treebins are pointers to the roots of trees holding a range of
   2516     sizes. There are 2 equally spaced treebins for each power of two
    2517     from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
   2518     larger.
   2519 
   2520   Bin maps
   2521     There is one bit map for small bins ("smallmap") and one for
    2522     treebins ("treemap").  Each bin sets its bit when non-empty, and
   2523     clears the bit when empty.  Bit operations are then used to avoid
   2524     bin-by-bin searching -- nearly all "search" is done without ever
   2525     looking at bins that won't be selected.  The bit maps
    2526     conservatively use 32 bits per map word, even on a 64-bit system.
   2527     For a good description of some of the bit-based techniques used
   2528     here, see Henry S. Warren Jr's book "Hacker's Delight" (and
   2529     supplement at http://hackersdelight.org/). Many of these are
   2530     intended to reduce the branchiness of paths through malloc etc, as
   2531     well as to reduce the number of memory locations read or written.
   2532 
   2533   Segments
   2534     A list of segments headed by an embedded malloc_segment record
   2535     representing the initial space.
   2536 
   2537   Address check support
   2538     The least_addr field is the least address ever obtained from
   2539     MORECORE or MMAP. Attempted frees and reallocs of any address less
   2540     than this are trapped (unless INSECURE is defined).
   2541 
   2542   Magic tag
    2543     A cross-check field that should always hold the same value as mparams.magic.
   2544 
   2545   Max allowed footprint
   2546     The maximum allowed bytes to allocate from system (zero means no limit)
   2547 
   2548   Flags
   2549     Bits recording whether to use MMAP, locks, or contiguous MORECORE
   2550 
   2551   Statistics
   2552     Each space keeps track of current and maximum system memory
   2553     obtained via MORECORE or MMAP.
   2554 
   2555   Trim support
   2556     Fields holding the amount of unused topmost memory that should trigger
   2557     trimming, and a counter to force periodic scanning to release unused
   2558     non-topmost segments.
   2559 
   2560   Locking
   2561     If USE_LOCKS is defined, the "mutex" lock is acquired and released
   2562     around every public call using this mspace.
   2563 
   2564   Extension support
   2565     A void* pointer and a size_t field that can be used to help implement
   2566     extensions to this malloc.
   2567 */
   2568 
   2569 /* Bin types, widths and sizes */
   2570 #define NSMALLBINS        (32U)
   2571 #define NTREEBINS         (32U)
   2572 #define SMALLBIN_SHIFT    (3U)
   2573 #define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
   2574 #define TREEBIN_SHIFT     (8U)
   2575 #define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
   2576 #define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
   2577 #define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)
   2578 
   2579 struct malloc_state {
   2580   binmap_t   smallmap;
   2581   binmap_t   treemap;
   2582   size_t     dvsize;
   2583   size_t     topsize;
   2584   char*      least_addr;
   2585   mchunkptr  dv;
   2586   mchunkptr  top;
   2587   size_t     trim_check;
   2588   size_t     release_checks;
   2589   size_t     magic;
   2590   mchunkptr  smallbins[(NSMALLBINS+1)*2];
   2591   tbinptr    treebins[NTREEBINS];
   2592   size_t     footprint;
   2593   size_t     max_footprint;
   2594   size_t     footprint_limit; /* zero means no limit */
   2595   flag_t     mflags;
   2596 #if USE_LOCKS
   2597   MLOCK_T    mutex;     /* locate lock among fields that rarely change */
   2598 #endif /* USE_LOCKS */
   2599   msegment   seg;
   2600   void*      extp;      /* Unused but available for extensions */
   2601   size_t     exts;
   2602 };
   2603 
   2604 typedef struct malloc_state*    mstate;
   2605 
   2606 /* ------------- Global malloc_state and malloc_params ------------------- */
   2607 
   2608 /*
   2609   malloc_params holds global properties, including those that can be
   2610   dynamically set using mallopt. There is a single instance, mparams,
   2611   initialized in init_mparams. Note that the non-zeroness of "magic"
   2612   also serves as an initialization flag.
   2613 */
   2614 
   2615 struct malloc_params {
   2616   size_t magic;
   2617   size_t page_size;
   2618   size_t granularity;
   2619   size_t mmap_threshold;
   2620   size_t trim_threshold;
   2621   flag_t default_mflags;
   2622 };
   2623 
   2624 static struct malloc_params mparams;
   2625 
   2626 /* Ensure mparams initialized */
   2627 #define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())
   2628 
   2629 #if !ONLY_MSPACES
   2630 
   2631 /* The global malloc_state used for all non-"mspace" calls */
   2632 static struct malloc_state _gm_;
   2633 #define gm                 (&_gm_)
   2634 #define is_global(M)       ((M) == &_gm_)
   2635 
   2636 #endif /* !ONLY_MSPACES */
   2637 
   2638 #define is_initialized(M)  ((M)->top != 0)
   2639 
   2640 /* -------------------------- system alloc setup ------------------------- */
   2641 
   2642 /* Operations on mflags */
   2643 
   2644 #define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
   2645 #define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
   2646 #if USE_LOCKS
   2647 #define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
   2648 #else
   2649 #define disable_lock(M)
   2650 #endif
   2651 
   2652 #define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
   2653 #define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
   2654 #if HAVE_MMAP
   2655 #define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
   2656 #else
   2657 #define disable_mmap(M)
   2658 #endif
   2659 
   2660 #define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
   2661 #define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)
   2662 
   2663 #define set_lock(M,L)\
   2664  ((M)->mflags = (L)?\
   2665   ((M)->mflags | USE_LOCK_BIT) :\
   2666   ((M)->mflags & ~USE_LOCK_BIT))
   2667 
   2668 /* page-align a size */
   2669 #define page_align(S)\
   2670  (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))
   2671 
   2672 /* granularity-align a size */
   2673 #define granularity_align(S)\
   2674   (((S) + (mparams.granularity - SIZE_T_ONE))\
   2675    & ~(mparams.granularity - SIZE_T_ONE))
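
/*
  Worked example (illustrative): with mparams.page_size == 4096,
  page_align(1) == 4096 and page_align(4096) == 4096; similarly,
  granularity_align rounds up to a multiple of mparams.granularity.
*/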
   2676 
   2677 
   2678 /* For mmap, use granularity alignment on windows, else page-align */
   2679 #ifdef WIN32
   2680 #define mmap_align(S) granularity_align(S)
   2681 #else
   2682 #define mmap_align(S) page_align(S)
   2683 #endif
   2684 
    2685 /* For sys_alloc, enough padding to ensure we can malloc the request on success */
   2686 #define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)
   2687 
   2688 #define is_page_aligned(S)\
   2689    (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
   2690 #define is_granularity_aligned(S)\
   2691    (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)
   2692 
   2693 /*  True if segment S holds address A */
   2694 #define segment_holds(S, A)\
   2695   ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)
   2696 
   2697 /* Return segment holding given address */
   2698 static msegmentptr segment_holding(mstate m, char* addr) {
   2699   msegmentptr sp = &m->seg;
   2700   for (;;) {
   2701     if (addr >= sp->base && addr < sp->base + sp->size)
   2702       return sp;
   2703     if ((sp = sp->next) == 0)
   2704       return 0;
   2705   }
   2706 }
   2707 
   2708 /* Return true if segment contains a segment link */
   2709 static int has_segment_link(mstate m, msegmentptr ss) {
   2710   msegmentptr sp = &m->seg;
   2711   for (;;) {
   2712     if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
   2713       return 1;
   2714     if ((sp = sp->next) == 0)
   2715       return 0;
   2716   }
   2717 }
   2718 
   2719 #ifndef MORECORE_CANNOT_TRIM
   2720 #define should_trim(M,s)  ((s) > (M)->trim_check)
   2721 #else  /* MORECORE_CANNOT_TRIM */
   2722 #define should_trim(M,s)  (0)
   2723 #endif /* MORECORE_CANNOT_TRIM */
   2724 
   2725 /*
   2726   TOP_FOOT_SIZE is padding at the end of a segment, including space
   2727   that may be needed to place segment records and fenceposts when new
   2728   noncontiguous segments are added.
   2729 */
   2730 #define TOP_FOOT_SIZE\
   2731   (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)
   2732 
   2733 
   2734 /* -------------------------------  Hooks -------------------------------- */
   2735 
   2736 /*
   2737   PREACTION should be defined to return 0 on success, and nonzero on
   2738   failure. If you are not using locking, you can redefine these to do
   2739   anything you like.
   2740 */
   2741 
   2742 #if USE_LOCKS
   2743 #define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
   2744 #define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
   2745 #else /* USE_LOCKS */
   2746 
   2747 #ifndef PREACTION
   2748 #define PREACTION(M) (0)
   2749 #endif  /* PREACTION */
   2750 
   2751 #ifndef POSTACTION
   2752 #define POSTACTION(M)
   2753 #endif  /* POSTACTION */
   2754 
   2755 #endif /* USE_LOCKS */
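
/*
  Typical usage pattern (illustrative; this is how the routines below,
  e.g. internal_mallinfo, use these hooks): attempt to acquire via
  PREACTION, skip the operation if it fails, and release via POSTACTION:

      if (!PREACTION(m)) {
        ... operate on the mspace m ...
        POSTACTION(m);
      }
*/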
   2756 
   2757 /*
   2758   CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
   2759   USAGE_ERROR_ACTION is triggered on detected bad frees and
   2760   reallocs. The argument p is an address that might have triggered the
   2761   fault. It is ignored by the two predefined actions, but might be
   2762   useful in custom actions that try to help diagnose errors.
   2763 */
   2764 
   2765 #if PROCEED_ON_ERROR
   2766 
   2767 /* A count of the number of corruption errors causing resets */
   2768 int malloc_corruption_error_count;
   2769 
   2770 /* default corruption action */
   2771 static void reset_on_error(mstate m);
   2772 
   2773 #define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
   2774 #define USAGE_ERROR_ACTION(m, p)
   2775 
   2776 #else /* PROCEED_ON_ERROR */
   2777 
   2778 #ifndef CORRUPTION_ERROR_ACTION
   2779 #define CORRUPTION_ERROR_ACTION(m) ABORT
   2780 #endif /* CORRUPTION_ERROR_ACTION */
   2781 
   2782 #ifndef USAGE_ERROR_ACTION
   2783 #define USAGE_ERROR_ACTION(m,p) ABORT
   2784 #endif /* USAGE_ERROR_ACTION */
   2785 
   2786 #endif /* PROCEED_ON_ERROR */
   2787 
   2788 
   2789 /* -------------------------- Debugging setup ---------------------------- */
   2790 
   2791 #if ! DEBUG
   2792 
   2793 #define check_free_chunk(M,P)
   2794 #define check_inuse_chunk(M,P)
   2795 #define check_malloced_chunk(M,P,N)
   2796 #define check_mmapped_chunk(M,P)
   2797 #define check_malloc_state(M)
   2798 #define check_top_chunk(M,P)
   2799 
   2800 #else /* DEBUG */
   2801 #define check_free_chunk(M,P)       do_check_free_chunk(M,P)
   2802 #define check_inuse_chunk(M,P)      do_check_inuse_chunk(M,P)
   2803 #define check_top_chunk(M,P)        do_check_top_chunk(M,P)
   2804 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N)
   2805 #define check_mmapped_chunk(M,P)    do_check_mmapped_chunk(M,P)
   2806 #define check_malloc_state(M)       do_check_malloc_state(M)
   2807 
   2808 static void   do_check_any_chunk(mstate m, mchunkptr p);
   2809 static void   do_check_top_chunk(mstate m, mchunkptr p);
   2810 static void   do_check_mmapped_chunk(mstate m, mchunkptr p);
   2811 static void   do_check_inuse_chunk(mstate m, mchunkptr p);
   2812 static void   do_check_free_chunk(mstate m, mchunkptr p);
   2813 static void   do_check_malloced_chunk(mstate m, void* mem, size_t s);
   2814 static void   do_check_tree(mstate m, tchunkptr t);
   2815 static void   do_check_treebin(mstate m, bindex_t i);
   2816 static void   do_check_smallbin(mstate m, bindex_t i);
   2817 static void   do_check_malloc_state(mstate m);
   2818 static int    bin_find(mstate m, mchunkptr x);
   2819 static size_t traverse_and_check(mstate m);
   2820 #endif /* DEBUG */
   2821 
   2822 /* ---------------------------- Indexing Bins ---------------------------- */
   2823 
   2824 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
   2825 #define small_index(s)      (bindex_t)((s)  >> SMALLBIN_SHIFT)
   2826 #define small_index2size(i) ((i)  << SMALLBIN_SHIFT)
   2827 #define MIN_SMALL_INDEX     (small_index(MIN_CHUNK_SIZE))
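
/*
  Worked example (illustrative): with SMALLBIN_SHIFT == 3, a padded
  request (chunk size) of 24 bytes gives small_index(24) == 3, and
  small_index2size(3) == 24, so each smallbin holds chunks of exactly
  one size, SMALLBIN_WIDTH (8) bytes apart.
*/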
   2828 
   2829 /* addressing by index. See above about smallbin repositioning */
   2830 /* BEGIN android-changed: strict aliasing change: char* cast to void* */
   2831 #define smallbin_at(M, i)   ((sbinptr)((void*)&((M)->smallbins[(i)<<1])))
   2832 /* END android-changed */
   2833 #define treebin_at(M,i)     (&((M)->treebins[i]))
   2834 
   2835 /* assign tree index for size S to variable I. Use x86 asm if possible  */
   2836 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
   2837 #define compute_tree_index(S, I)\
   2838 {\
   2839   unsigned int X = S >> TREEBIN_SHIFT;\
   2840   if (X == 0)\
   2841     I = 0;\
   2842   else if (X > 0xFFFF)\
   2843     I = NTREEBINS-1;\
   2844   else {\
   2845     unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \
   2846     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
   2847   }\
   2848 }
   2849 
   2850 #elif defined (__INTEL_COMPILER)
   2851 #define compute_tree_index(S, I)\
   2852 {\
   2853   size_t X = S >> TREEBIN_SHIFT;\
   2854   if (X == 0)\
   2855     I = 0;\
   2856   else if (X > 0xFFFF)\
   2857     I = NTREEBINS-1;\
   2858   else {\
   2859     unsigned int K = _bit_scan_reverse (X); \
   2860     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
   2861   }\
   2862 }
   2863 
   2864 #elif defined(_MSC_VER) && _MSC_VER>=1300
   2865 #define compute_tree_index(S, I)\
   2866 {\
   2867   size_t X = S >> TREEBIN_SHIFT;\
   2868   if (X == 0)\
   2869     I = 0;\
   2870   else if (X > 0xFFFF)\
   2871     I = NTREEBINS-1;\
   2872   else {\
   2873     unsigned int K;\
   2874     _BitScanReverse((DWORD *) &K, (DWORD) X);\
   2875     I =  (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\
   2876   }\
   2877 }
   2878 
   2879 #else /* GNUC */
   2880 #define compute_tree_index(S, I)\
   2881 {\
   2882   size_t X = S >> TREEBIN_SHIFT;\
   2883   if (X == 0)\
   2884     I = 0;\
   2885   else if (X > 0xFFFF)\
   2886     I = NTREEBINS-1;\
   2887   else {\
   2888     unsigned int Y = (unsigned int)X;\
   2889     unsigned int N = ((Y - 0x100) >> 16) & 8;\
   2890     unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\
   2891     N += K;\
   2892     N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\
   2893     K = 14 - N + ((Y <<= K) >> 15);\
   2894     I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\
   2895   }\
   2896 }
   2897 #endif /* GNUC */
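
/*
  Worked example (illustrative): for a chunk size S == 768 (0x300),
  X == S >> TREEBIN_SHIFT == 3, whose highest set bit is K == 1, so
  I == (K << 1) + ((S >> (K + TREEBIN_SHIFT - 1)) & 1) == 2 + 1 == 3.
  Treebin 3 thus holds sizes in [768, 1024), consistent with
  minsize_for_tree_index(3) == 768 (defined below).
*/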
   2898 
   2899 /* Bit representing maximum resolved size in a treebin at i */
   2900 #define bit_for_tree_index(i) \
   2901    (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2)
   2902 
   2903 /* Shift placing maximum resolved bit in a treebin at i as sign bit */
   2904 #define leftshift_for_tree_index(i) \
   2905    ((i == NTREEBINS-1)? 0 : \
   2906     ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2)))
   2907 
   2908 /* The size of the smallest chunk held in bin with index i */
   2909 #define minsize_for_tree_index(i) \
   2910    ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) |  \
   2911    (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1)))
   2912 
   2913 
   2914 /* ------------------------ Operations on bin maps ----------------------- */
   2915 
   2916 /* bit corresponding to given index */
   2917 #define idx2bit(i)              ((binmap_t)(1) << (i))
   2918 
   2919 /* Mark/Clear bits with given index */
   2920 #define mark_smallmap(M,i)      ((M)->smallmap |=  idx2bit(i))
   2921 #define clear_smallmap(M,i)     ((M)->smallmap &= ~idx2bit(i))
   2922 #define smallmap_is_marked(M,i) ((M)->smallmap &   idx2bit(i))
   2923 
   2924 #define mark_treemap(M,i)       ((M)->treemap  |=  idx2bit(i))
   2925 #define clear_treemap(M,i)      ((M)->treemap  &= ~idx2bit(i))
   2926 #define treemap_is_marked(M,i)  ((M)->treemap  &   idx2bit(i))
   2927 
   2928 /* isolate the least set bit of a bitmap */
   2929 #define least_bit(x)         ((x) & -(x))
   2930 
   2931 /* mask with all bits to left of least bit of x on */
   2932 #define left_bits(x)         ((x<<1) | -(x<<1))
   2933 
   2934 /* mask with all bits to left of or equal to least bit of x on */
   2935 #define same_or_left_bits(x) ((x) | -(x))
   2936 
   2937 /* index corresponding to given bit. Use x86 asm if possible */
   2938 
   2939 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
   2940 #define compute_bit2idx(X, I)\
   2941 {\
   2942   unsigned int J;\
   2943   J = __builtin_ctz(X); \
   2944   I = (bindex_t)J;\
   2945 }
   2946 
   2947 #elif defined (__INTEL_COMPILER)
   2948 #define compute_bit2idx(X, I)\
   2949 {\
   2950   unsigned int J;\
   2951   J = _bit_scan_forward (X); \
   2952   I = (bindex_t)J;\
   2953 }
   2954 
   2955 #elif defined(_MSC_VER) && _MSC_VER>=1300
   2956 #define compute_bit2idx(X, I)\
   2957 {\
   2958   unsigned int J;\
   2959   _BitScanForward((DWORD *) &J, X);\
   2960   I = (bindex_t)J;\
   2961 }
   2962 
   2963 #elif USE_BUILTIN_FFS
   2964 #define compute_bit2idx(X, I) I = ffs(X)-1
   2965 
   2966 #else
   2967 #define compute_bit2idx(X, I)\
   2968 {\
   2969   unsigned int Y = X - 1;\
   2970   unsigned int K = Y >> (16-4) & 16;\
   2971   unsigned int N = K;        Y >>= K;\
   2972   N += K = Y >> (8-3) &  8;  Y >>= K;\
   2973   N += K = Y >> (4-2) &  4;  Y >>= K;\
   2974   N += K = Y >> (2-1) &  2;  Y >>= K;\
   2975   N += K = Y >> (1-0) &  1;  Y >>= K;\
   2976   I = (bindex_t)(N + Y);\
   2977 }
   2978 #endif /* GNUC */
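
/*
  Illustrative sketch (hypothetical helper, not defined elsewhere in
  this file): roughly how the small-request path combines the smallmap
  with the bit tricks above to jump directly to the nearest non-empty
  smallbin at or above a desired index, rather than scanning bins one
  by one.
*/
#if 0
static bindex_t next_nonempty_smallbin(mstate m, bindex_t idx) {
  /* bits for all currently non-empty bins with index >= idx */
  binmap_t candidates = m->smallmap & same_or_left_bits(idx2bit(idx));
  binmap_t leastbit = least_bit(candidates); /* lowest qualifying bit */
  bindex_t i;
  compute_bit2idx(leastbit, i);              /* convert bit to bin index */
  return i;  /* caller must first check that candidates != 0 */
}
#endif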
   2979 
   2980 
   2981 /* ----------------------- Runtime Check Support ------------------------- */
   2982 
   2983 /*
   2984   For security, the main invariant is that malloc/free/etc never
   2985   writes to a static address other than malloc_state, unless static
   2986   malloc_state itself has been corrupted, which cannot occur via
   2987   malloc (because of these checks). In essence this means that we
   2988   believe all pointers, sizes, maps etc held in malloc_state, but
   2989   check all of those linked or offsetted from other embedded data
   2990   structures.  These checks are interspersed with main code in a way
   2991   that tends to minimize their run-time cost.
   2992 
   2993   When FOOTERS is defined, in addition to range checking, we also
    2994   verify footer fields of inuse chunks, which can be used to guarantee
   2995   that the mstate controlling malloc/free is intact.  This is a
   2996   streamlined version of the approach described by William Robertson
   2997   et al in "Run-time Detection of Heap-based Overflows" LISA'03
   2998   http://www.usenix.org/events/lisa03/tech/robertson.html The footer
   2999   of an inuse chunk holds the xor of its mstate and a random seed,
   3000   that is checked upon calls to free() and realloc().  This is
    3001   (probabilistically) unguessable from outside the program, but can be
   3002   computed by any code successfully malloc'ing any chunk, so does not
   3003   itself provide protection against code that has already broken
   3004   security through some other means.  Unlike Robertson et al, we
   3005   always dynamically check addresses of all offset chunks (previous,
   3006   next, etc). This turns out to be cheaper than relying on hashes.
   3007 */
   3008 
   3009 #if !INSECURE
   3010 /* Check if address a is at least as high as any from MORECORE or MMAP */
   3011 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr)
   3012 /* Check if address of next chunk n is higher than base chunk p */
   3013 #define ok_next(p, n)    ((char*)(p) < (char*)(n))
   3014 /* Check if p has inuse status */
   3015 #define ok_inuse(p)     is_inuse(p)
   3016 /* Check if p has its pinuse bit on */
   3017 #define ok_pinuse(p)     pinuse(p)
   3018 
   3019 #else /* !INSECURE */
   3020 #define ok_address(M, a) (1)
   3021 #define ok_next(b, n)    (1)
   3022 #define ok_inuse(p)      (1)
   3023 #define ok_pinuse(p)     (1)
   3024 #endif /* !INSECURE */
   3025 
   3026 #if (FOOTERS && !INSECURE)
   3027 /* Check if (alleged) mstate m has expected magic field */
   3028 #define ok_magic(M)      ((M)->magic == mparams.magic)
   3029 #else  /* (FOOTERS && !INSECURE) */
   3030 #define ok_magic(M)      (1)
   3031 #endif /* (FOOTERS && !INSECURE) */
   3032 
   3033 /* In gcc, use __builtin_expect to minimize impact of checks */
   3034 #if !INSECURE
   3035 #if defined(__GNUC__) && __GNUC__ >= 3
   3036 #define RTCHECK(e)  __builtin_expect(e, 1)
   3037 #else /* GNUC */
   3038 #define RTCHECK(e)  (e)
   3039 #endif /* GNUC */
   3040 #else /* !INSECURE */
   3041 #define RTCHECK(e)  (1)
   3042 #endif /* !INSECURE */
   3043 
   3044 /* macros to set up inuse chunks with or without footers */
   3045 
   3046 #if !FOOTERS
   3047 
   3048 #define mark_inuse_foot(M,p,s)
   3049 
   3050 /* Macros for setting head/foot of non-mmapped chunks */
   3051 
   3052 /* Set cinuse bit and pinuse bit of next chunk */
   3053 #define set_inuse(M,p,s)\
   3054   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
   3055   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
   3056 
   3057 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */
   3058 #define set_inuse_and_pinuse(M,p,s)\
   3059   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
   3060   ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT)
   3061 
   3062 /* Set size, cinuse and pinuse bit of this chunk */
   3063 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
   3064   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT))
   3065 
   3066 #else /* FOOTERS */
   3067 
   3068 /* Set foot of inuse chunk to be xor of mstate and seed */
   3069 #define mark_inuse_foot(M,p,s)\
   3070   (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic))
   3071 
   3072 #define get_mstate_for(p)\
   3073   ((mstate)(((mchunkptr)((char*)(p) +\
   3074     (chunksize(p))))->prev_foot ^ mparams.magic))
   3075 
   3076 #define set_inuse(M,p,s)\
   3077   ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\
   3078   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \
   3079   mark_inuse_foot(M,p,s))
   3080 
   3081 #define set_inuse_and_pinuse(M,p,s)\
   3082   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
   3083   (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\
   3084  mark_inuse_foot(M,p,s))
   3085 
   3086 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\
   3087   ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\
   3088   mark_inuse_foot(M, p, s))
   3089 
   3090 #endif /* !FOOTERS */
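
/*
  Illustrative sketch (hypothetical helper; only meaningful when
  FOOTERS is nonzero): this mirrors the check performed by free() and
  realloc(), which recover the owning mstate from the chunk footer and
  verify its magic before trusting it, as described under Runtime
  Check Support above.
*/
#if 0
static mstate owning_mstate(void* mem) {
  mchunkptr p = mem2chunk(mem);       /* user pointer back to chunk */
  mstate fm = get_mstate_for(p);      /* footer xor'ed with mparams.magic */
  return ok_magic(fm) ? fm : 0;       /* 0 signals a corrupted/foreign chunk */
}
#endif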
   3091 
   3092 /* ---------------------------- setting mparams -------------------------- */
   3093 
   3094 #if LOCK_AT_FORK
   3095 static void pre_fork(void)         { ACQUIRE_LOCK(&(gm)->mutex); }
   3096 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); }
   3097 static void post_fork_child(void)  { INITIAL_LOCK(&(gm)->mutex); }
   3098 #endif /* LOCK_AT_FORK */
   3099 
   3100 /* Initialize mparams */
   3101 static int init_mparams(void) {
   3102   /* BEGIN android-added: move pthread_atfork outside of lock */
   3103   int first_run = 0;
   3104   /* END android-added */
   3105 #ifdef NEED_GLOBAL_LOCK_INIT
   3106   if (malloc_global_mutex_status <= 0)
   3107     init_malloc_global_mutex();
   3108 #endif
   3109 
   3110   ACQUIRE_MALLOC_GLOBAL_LOCK();
   3111   if (mparams.magic == 0) {
   3112     size_t magic;
   3113     size_t psize;
   3114     size_t gsize;
   3115     /* BEGIN android-added: move pthread_atfork outside of lock */
   3116     first_run = 1;
   3117     /* END android-added */
   3118 
   3119 #ifndef WIN32
   3120     psize = malloc_getpagesize;
   3121     gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize);
   3122 #else /* WIN32 */
   3123     {
   3124       SYSTEM_INFO system_info;
   3125       GetSystemInfo(&system_info);
   3126       psize = system_info.dwPageSize;
   3127       gsize = ((DEFAULT_GRANULARITY != 0)?
   3128                DEFAULT_GRANULARITY : system_info.dwAllocationGranularity);
   3129     }
   3130 #endif /* WIN32 */
   3131 
   3132     /* Sanity-check configuration:
   3133        size_t must be unsigned and as wide as pointer type.
   3134        ints must be at least 4 bytes.
   3135        alignment must be at least 8.
   3136        Alignment, min chunk size, and page size must all be powers of 2.
   3137     */
   3138     if ((sizeof(size_t) != sizeof(char*)) ||
   3139         (MAX_SIZE_T < MIN_CHUNK_SIZE)  ||
   3140         (sizeof(int) < 4)  ||
   3141         (MALLOC_ALIGNMENT < (size_t)8U) ||
   3142         ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) ||
   3143         ((MCHUNK_SIZE      & (MCHUNK_SIZE-SIZE_T_ONE))      != 0) ||
   3144         ((gsize            & (gsize-SIZE_T_ONE))            != 0) ||
   3145         ((psize            & (psize-SIZE_T_ONE))            != 0))
   3146       ABORT;
   3147     mparams.granularity = gsize;
   3148     mparams.page_size = psize;
   3149     mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD;
   3150     mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD;
   3151 #if MORECORE_CONTIGUOUS
   3152     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT;
   3153 #else  /* MORECORE_CONTIGUOUS */
   3154     mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT;
   3155 #endif /* MORECORE_CONTIGUOUS */
   3156 
   3157 #if !ONLY_MSPACES
   3158     /* Set up lock for main malloc area */
   3159     gm->mflags = mparams.default_mflags;
   3160     (void)INITIAL_LOCK(&gm->mutex);
   3161 #endif
   3162     /* BEGIN android-removed: move pthread_atfork outside of lock */
   3163 #if 0 && LOCK_AT_FORK
   3164     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
   3165 #endif
   3166     /* END android-removed */
   3167 
   3168     {
   3169 #if USE_DEV_RANDOM
   3170       int fd;
   3171       unsigned char buf[sizeof(size_t)];
   3172       /* Try to use /dev/urandom, else fall back on using time */
   3173       if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 &&
   3174           read(fd, buf, sizeof(buf)) == sizeof(buf)) {
   3175         magic = *((size_t *) buf);
   3176         close(fd);
   3177       }
   3178       else
   3179 #endif /* USE_DEV_RANDOM */
   3180 #ifdef WIN32
   3181       magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U);
   3182 #elif defined(LACKS_TIME_H)
   3183       magic = (size_t)&magic ^ (size_t)0x55555555U;
   3184 #else
   3185       magic = (size_t)(time(0) ^ (size_t)0x55555555U);
   3186 #endif
   3187       magic |= (size_t)8U;    /* ensure nonzero */
   3188       magic &= ~(size_t)7U;   /* improve chances of fault for bad values */
    3189       /* Until memory models are commonly available, use volatile-write */
   3190       (*(volatile size_t *)(&(mparams.magic))) = magic;
   3191     }
   3192   }
   3193 
   3194   RELEASE_MALLOC_GLOBAL_LOCK();
   3195   /* BEGIN android-added: move pthread_atfork outside of lock */
   3196   if (first_run != 0) {
   3197 #if LOCK_AT_FORK
   3198     pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child);
   3199 #endif
   3200   }
   3201   /* END android-added */
   3202   return 1;
   3203 }
   3204 
   3205 /* support for mallopt */
   3206 static int change_mparam(int param_number, int value) {
   3207   size_t val;
   3208   ensure_initialization();
   3209   val = (value == -1)? MAX_SIZE_T : (size_t)value;
   3210   switch(param_number) {
   3211   case M_TRIM_THRESHOLD:
   3212     mparams.trim_threshold = val;
   3213     return 1;
   3214   case M_GRANULARITY:
   3215     if (val >= mparams.page_size && ((val & (val-1)) == 0)) {
   3216       mparams.granularity = val;
   3217       return 1;
   3218     }
   3219     else
   3220       return 0;
   3221   case M_MMAP_THRESHOLD:
   3222     mparams.mmap_threshold = val;
   3223     return 1;
   3224   default:
   3225     return 0;
   3226   }
   3227 }
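
/*
  Usage example (illustrative): the public mallopt entry (mallopt, or
  dlmallopt when USE_DL_PREFIX is defined) relays to change_mparam, so
  parameters can be tuned at run time, for example:

      mallopt(M_MMAP_THRESHOLD, 256 * 1024);   service requests >= 256KB via mmap
      mallopt(M_TRIM_THRESHOLD, -1);           -1 maps to MAX_SIZE_T, disabling trim
*/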
   3228 
   3229 #if DEBUG
   3230 /* ------------------------- Debugging Support --------------------------- */
   3231 
   3232 /* Check properties of any chunk, whether free, inuse, mmapped etc  */
   3233 static void do_check_any_chunk(mstate m, mchunkptr p) {
   3234   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
   3235   assert(ok_address(m, p));
   3236 }
   3237 
   3238 /* Check properties of top chunk */
   3239 static void do_check_top_chunk(mstate m, mchunkptr p) {
   3240   msegmentptr sp = segment_holding(m, (char*)p);
   3241   size_t  sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */
   3242   assert(sp != 0);
   3243   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
   3244   assert(ok_address(m, p));
   3245   assert(sz == m->topsize);
   3246   assert(sz > 0);
   3247   assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE);
   3248   assert(pinuse(p));
   3249   assert(!pinuse(chunk_plus_offset(p, sz)));
   3250 }
   3251 
   3252 /* Check properties of (inuse) mmapped chunks */
   3253 static void do_check_mmapped_chunk(mstate m, mchunkptr p) {
   3254   size_t  sz = chunksize(p);
   3255   size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD);
   3256   assert(is_mmapped(p));
   3257   assert(use_mmap(m));
   3258   assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD));
   3259   assert(ok_address(m, p));
   3260   assert(!is_small(sz));
   3261   assert((len & (mparams.page_size-SIZE_T_ONE)) == 0);
   3262   assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD);
   3263   assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0);
   3264 }
   3265 
   3266 /* Check properties of inuse chunks */
   3267 static void do_check_inuse_chunk(mstate m, mchunkptr p) {
   3268   do_check_any_chunk(m, p);
   3269   assert(is_inuse(p));
   3270   assert(next_pinuse(p));
   3271   /* If not pinuse and not mmapped, previous chunk has OK offset */
   3272   assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p);
   3273   if (is_mmapped(p))
   3274     do_check_mmapped_chunk(m, p);
   3275 }
   3276 
   3277 /* Check properties of free chunks */
   3278 static void do_check_free_chunk(mstate m, mchunkptr p) {
   3279   size_t sz = chunksize(p);
   3280   mchunkptr next = chunk_plus_offset(p, sz);
   3281   do_check_any_chunk(m, p);
   3282   assert(!is_inuse(p));
   3283   assert(!next_pinuse(p));
   3284   assert (!is_mmapped(p));
   3285   if (p != m->dv && p != m->top) {
   3286     if (sz >= MIN_CHUNK_SIZE) {
   3287       assert((sz & CHUNK_ALIGN_MASK) == 0);
   3288       assert(is_aligned(chunk2mem(p)));
   3289       assert(next->prev_foot == sz);
   3290       assert(pinuse(p));
   3291       assert (next == m->top || is_inuse(next));
   3292       assert(p->fd->bk == p);
   3293       assert(p->bk->fd == p);
   3294     }
   3295     else  /* markers are always of size SIZE_T_SIZE */
   3296       assert(sz == SIZE_T_SIZE);
   3297   }
   3298 }
   3299 
   3300 /* Check properties of malloced chunks at the point they are malloced */
   3301 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
   3302   if (mem != 0) {
   3303     mchunkptr p = mem2chunk(mem);
   3304     size_t sz = p->head & ~INUSE_BITS;
   3305     do_check_inuse_chunk(m, p);
   3306     assert((sz & CHUNK_ALIGN_MASK) == 0);
   3307     assert(sz >= MIN_CHUNK_SIZE);
   3308     assert(sz >= s);
   3309     /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
   3310     assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE));
   3311   }
   3312 }
   3313 
   3314 /* Check a tree and its subtrees.  */
   3315 static void do_check_tree(mstate m, tchunkptr t) {
   3316   tchunkptr head = 0;
   3317   tchunkptr u = t;
   3318   bindex_t tindex = t->index;
   3319   size_t tsize = chunksize(t);
   3320   bindex_t idx;
   3321   compute_tree_index(tsize, idx);
   3322   assert(tindex == idx);
   3323   assert(tsize >= MIN_LARGE_SIZE);
   3324   assert(tsize >= minsize_for_tree_index(idx));
   3325   assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1))));
   3326 
   3327   do { /* traverse through chain of same-sized nodes */
   3328     do_check_any_chunk(m, ((mchunkptr)u));
   3329     assert(u->index == tindex);
   3330     assert(chunksize(u) == tsize);
   3331     assert(!is_inuse(u));
   3332     assert(!next_pinuse(u));
   3333     assert(u->fd->bk == u);
   3334     assert(u->bk->fd == u);
   3335     if (u->parent == 0) {
   3336       assert(u->child[0] == 0);
   3337       assert(u->child[1] == 0);
   3338     }
   3339     else {
   3340       assert(head == 0); /* only one node on chain has parent */
   3341       head = u;
   3342       assert(u->parent != u);
   3343       assert (u->parent->child[0] == u ||
   3344               u->parent->child[1] == u ||
   3345               *((tbinptr*)(u->parent)) == u);
   3346       if (u->child[0] != 0) {
   3347         assert(u->child[0]->parent == u);
   3348         assert(u->child[0] != u);
   3349         do_check_tree(m, u->child[0]);
   3350       }
   3351       if (u->child[1] != 0) {
   3352         assert(u->child[1]->parent == u);
   3353         assert(u->child[1] != u);
   3354         do_check_tree(m, u->child[1]);
   3355       }
   3356       if (u->child[0] != 0 && u->child[1] != 0) {
   3357         assert(chunksize(u->child[0]) < chunksize(u->child[1]));
   3358       }
   3359     }
   3360     u = u->fd;
   3361   } while (u != t);
   3362   assert(head != 0);
   3363 }
   3364 
   3365 /*  Check all the chunks in a treebin.  */
   3366 static void do_check_treebin(mstate m, bindex_t i) {
   3367   tbinptr* tb = treebin_at(m, i);
   3368   tchunkptr t = *tb;
   3369   int empty = (m->treemap & (1U << i)) == 0;
   3370   if (t == 0)
   3371     assert(empty);
   3372   if (!empty)
   3373     do_check_tree(m, t);
   3374 }
   3375 
   3376 /*  Check all the chunks in a smallbin.  */
   3377 static void do_check_smallbin(mstate m, bindex_t i) {
   3378   sbinptr b = smallbin_at(m, i);
   3379   mchunkptr p = b->bk;
   3380   unsigned int empty = (m->smallmap & (1U << i)) == 0;
   3381   if (p == b)
   3382     assert(empty);
   3383   if (!empty) {
   3384     for (; p != b; p = p->bk) {
   3385       size_t size = chunksize(p);
   3386       mchunkptr q;
   3387       /* each chunk claims to be free */
   3388       do_check_free_chunk(m, p);
   3389       /* chunk belongs in bin */
   3390       assert(small_index(size) == i);
   3391       assert(p->bk == b || chunksize(p->bk) == chunksize(p));
   3392       /* chunk is followed by an inuse chunk */
   3393       q = next_chunk(p);
   3394       if (q->head != FENCEPOST_HEAD)
   3395         do_check_inuse_chunk(m, q);
   3396     }
   3397   }
   3398 }
   3399 
   3400 /* Find x in a bin. Used in other check functions. */
   3401 static int bin_find(mstate m, mchunkptr x) {
   3402   size_t size = chunksize(x);
   3403   if (is_small(size)) {
   3404     bindex_t sidx = small_index(size);
   3405     sbinptr b = smallbin_at(m, sidx);
   3406     if (smallmap_is_marked(m, sidx)) {
   3407       mchunkptr p = b;
   3408       do {
   3409         if (p == x)
   3410           return 1;
   3411       } while ((p = p->fd) != b);
   3412     }
   3413   }
   3414   else {
   3415     bindex_t tidx;
   3416     compute_tree_index(size, tidx);
   3417     if (treemap_is_marked(m, tidx)) {
   3418       tchunkptr t = *treebin_at(m, tidx);
   3419       size_t sizebits = size << leftshift_for_tree_index(tidx);
   3420       while (t != 0 && chunksize(t) != size) {
   3421         t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
   3422         sizebits <<= 1;
   3423       }
   3424       if (t != 0) {
   3425         tchunkptr u = t;
   3426         do {
   3427           if (u == (tchunkptr)x)
   3428             return 1;
   3429         } while ((u = u->fd) != t);
   3430       }
   3431     }
   3432   }
   3433   return 0;
   3434 }
   3435 
   3436 /* Traverse each chunk and check it; return total */
   3437 static size_t traverse_and_check(mstate m) {
   3438   size_t sum = 0;
   3439   if (is_initialized(m)) {
   3440     msegmentptr s = &m->seg;
   3441     sum += m->topsize + TOP_FOOT_SIZE;
   3442     while (s != 0) {
   3443       mchunkptr q = align_as_chunk(s->base);
   3444       mchunkptr lastq = 0;
   3445       assert(pinuse(q));
   3446       while (segment_holds(s, q) &&
   3447              q != m->top && q->head != FENCEPOST_HEAD) {
   3448         sum += chunksize(q);
   3449         if (is_inuse(q)) {
   3450           assert(!bin_find(m, q));
   3451           do_check_inuse_chunk(m, q);
   3452         }
   3453         else {
   3454           assert(q == m->dv || bin_find(m, q));
   3455           assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */
   3456           do_check_free_chunk(m, q);
   3457         }
   3458         lastq = q;
   3459         q = next_chunk(q);
   3460       }
   3461       s = s->next;
   3462     }
   3463   }
   3464   return sum;
   3465 }
   3466 
   3467 
   3468 /* Check all properties of malloc_state. */
   3469 static void do_check_malloc_state(mstate m) {
   3470   bindex_t i;
   3471   size_t total;
   3472   /* check bins */
   3473   for (i = 0; i < NSMALLBINS; ++i)
   3474     do_check_smallbin(m, i);
   3475   for (i = 0; i < NTREEBINS; ++i)
   3476     do_check_treebin(m, i);
   3477 
   3478   if (m->dvsize != 0) { /* check dv chunk */
   3479     do_check_any_chunk(m, m->dv);
   3480     assert(m->dvsize == chunksize(m->dv));
   3481     assert(m->dvsize >= MIN_CHUNK_SIZE);
   3482     assert(bin_find(m, m->dv) == 0);
   3483   }
   3484 
   3485   if (m->top != 0) {   /* check top chunk */
   3486     do_check_top_chunk(m, m->top);
   3487     /*assert(m->topsize == chunksize(m->top)); redundant */
   3488     assert(m->topsize > 0);
   3489     assert(bin_find(m, m->top) == 0);
   3490   }
   3491 
   3492   total = traverse_and_check(m);
   3493   assert(total <= m->footprint);
   3494   assert(m->footprint <= m->max_footprint);
   3495 }
   3496 #endif /* DEBUG */
   3497 
   3498 /* ----------------------------- statistics ------------------------------ */
   3499 
   3500 #if !NO_MALLINFO
   3501 static struct mallinfo internal_mallinfo(mstate m) {
   3502   struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
   3503   ensure_initialization();
   3504   if (!PREACTION(m)) {
   3505     check_malloc_state(m);
   3506     if (is_initialized(m)) {
   3507       size_t nfree = SIZE_T_ONE; /* top always free */
   3508       size_t mfree = m->topsize + TOP_FOOT_SIZE;
   3509       size_t sum = mfree;
   3510       msegmentptr s = &m->seg;
   3511       while (s != 0) {
   3512         mchunkptr q = align_as_chunk(s->base);
   3513         while (segment_holds(s, q) &&
   3514                q != m->top && q->head != FENCEPOST_HEAD) {
   3515           size_t sz = chunksize(q);
   3516           sum += sz;
   3517           if (!is_inuse(q)) {
   3518             mfree += sz;
   3519             ++nfree;
   3520           }
   3521           q = next_chunk(q);
   3522         }
   3523         s = s->next;
   3524       }
   3525 
   3526       nm.arena    = sum;
   3527       nm.ordblks  = nfree;
   3528       nm.hblkhd   = m->footprint - sum;
   3529       /* BEGIN android-changed: usmblks set to footprint from max_footprint */
   3530       nm.usmblks  = m->footprint;
   3531       /* END android-changed */
   3532       nm.uordblks = m->footprint - mfree;
   3533       nm.fordblks = mfree;
   3534       nm.keepcost = m->topsize;
   3535     }
   3536 
   3537     POSTACTION(m);
   3538   }
   3539   return nm;
   3540 }
   3541 #endif /* !NO_MALLINFO */
   3542 
   3543 #if !NO_MALLOC_STATS
   3544 static void internal_malloc_stats(mstate m) {
   3545   ensure_initialization();
   3546   if (!PREACTION(m)) {
   3547     size_t maxfp = 0;
   3548     size_t fp = 0;
   3549     size_t used = 0;
   3550     check_malloc_state(m);
   3551     if (is_initialized(m)) {
   3552       msegmentptr s = &m->seg;
   3553       maxfp = m->max_footprint;
   3554       fp = m->footprint;
   3555       used = fp - (m->topsize + TOP_FOOT_SIZE);
   3556 
   3557       while (s != 0) {
   3558         mchunkptr q = align_as_chunk(s->base);
   3559         while (segment_holds(s, q) &&
   3560                q != m->top && q->head != FENCEPOST_HEAD) {
   3561           if (!is_inuse(q))
   3562             used -= chunksize(q);
   3563           q = next_chunk(q);
   3564         }
   3565         s = s->next;
   3566       }
   3567     }
   3568     POSTACTION(m); /* drop lock */
   3569     fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp));
   3570     fprintf(stderr, "system bytes     = %10lu\n", (unsigned long)(fp));
   3571     fprintf(stderr, "in use bytes     = %10lu\n", (unsigned long)(used));
   3572   }
   3573 }
   3574 #endif /* NO_MALLOC_STATS */
   3575 
   3576 /* ----------------------- Operations on smallbins ----------------------- */
   3577 
   3578 /*
   3579   Various forms of linking and unlinking are defined as macros.  Even
   3580   the ones for trees, which are very long but have very short typical
   3581   paths.  This is ugly but reduces reliance on inlining support of
   3582   compilers.
   3583 */
   3584 
   3585 /* Link a free chunk into a smallbin  */
   3586 #define insert_small_chunk(M, P, S) {\
   3587   bindex_t I  = small_index(S);\
   3588   mchunkptr B = smallbin_at(M, I);\
   3589   mchunkptr F = B;\
   3590   assert(S >= MIN_CHUNK_SIZE);\
   3591   if (!smallmap_is_marked(M, I))\
   3592     mark_smallmap(M, I);\
   3593   else if (RTCHECK(ok_address(M, B->fd)))\
   3594     F = B->fd;\
   3595   else {\
   3596     CORRUPTION_ERROR_ACTION(M);\
   3597   }\
   3598   B->fd = P;\
   3599   F->bk = P;\
   3600   P->fd = F;\
   3601   P->bk = B;\
   3602 }
   3603 
   3604 /* Unlink a chunk from a smallbin  */
   3605 #define unlink_small_chunk(M, P, S) {\
   3606   mchunkptr F = P->fd;\
   3607   mchunkptr B = P->bk;\
   3608   bindex_t I = small_index(S);\
   3609   assert(P != B);\
   3610   assert(P != F);\
   3611   assert(chunksize(P) == small_index2size(I));\
   3612   if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \
   3613     if (B == F) {\
   3614       clear_smallmap(M, I);\
   3615     }\
   3616     else if (RTCHECK(B == smallbin_at(M,I) ||\
   3617                      (ok_address(M, B) && B->fd == P))) {\
   3618       F->bk = B;\
   3619       B->fd = F;\
   3620     }\
   3621     else {\
   3622       CORRUPTION_ERROR_ACTION(M);\
   3623     }\
   3624   }\
   3625   else {\
   3626     CORRUPTION_ERROR_ACTION(M);\
   3627   }\
   3628 }
   3629 
   3630 /* Unlink the first chunk from a smallbin */
   3631 #define unlink_first_small_chunk(M, B, P, I) {\
   3632   mchunkptr F = P->fd;\
   3633   assert(P != B);\
   3634   assert(P != F);\
   3635   assert(chunksize(P) == small_index2size(I));\
   3636   if (B == F) {\
   3637     clear_smallmap(M, I);\
   3638   }\
   3639   else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\
   3640     F->bk = B;\
   3641     B->fd = F;\
   3642   }\
   3643   else {\
   3644     CORRUPTION_ERROR_ACTION(M);\
   3645   }\
   3646 }
   3647 
   3648 /* Replace dv node, binning the old one */
   3649 /* Used only when dvsize known to be small */
   3650 #define replace_dv(M, P, S) {\
   3651   size_t DVS = M->dvsize;\
   3652   assert(is_small(DVS));\
   3653   if (DVS != 0) {\
   3654     mchunkptr DV = M->dv;\
   3655     insert_small_chunk(M, DV, DVS);\
   3656   }\
   3657   M->dvsize = S;\
   3658   M->dv = P;\
   3659 }
   3660 
   3661 /* ------------------------- Operations on trees ------------------------- */
   3662 
   3663 /* Insert chunk into tree */
   3664 #define insert_large_chunk(M, X, S) {\
   3665   tbinptr* H;\
   3666   bindex_t I;\
   3667   compute_tree_index(S, I);\
   3668   H = treebin_at(M, I);\
   3669   X->index = I;\
   3670   X->child[0] = X->child[1] = 0;\
   3671   if (!treemap_is_marked(M, I)) {\
   3672     mark_treemap(M, I);\
   3673     *H = X;\
   3674     X->parent = (tchunkptr)H;\
   3675     X->fd = X->bk = X;\
   3676   }\
   3677   else {\
   3678     tchunkptr T = *H;\
   3679     size_t K = S << leftshift_for_tree_index(I);\
   3680     for (;;) {\
   3681       if (chunksize(T) != S) {\
   3682         tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
   3683         K <<= 1;\
   3684         if (*C != 0)\
   3685           T = *C;\
   3686         else if (RTCHECK(ok_address(M, C))) {\
   3687           *C = X;\
   3688           X->parent = T;\
   3689           X->fd = X->bk = X;\
   3690           break;\
   3691         }\
   3692         else {\
   3693           CORRUPTION_ERROR_ACTION(M);\
   3694           break;\
   3695         }\
   3696       }\
   3697       else {\
   3698         tchunkptr F = T->fd;\
   3699         if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
   3700           T->fd = F->bk = X;\
   3701           X->fd = F;\
   3702           X->bk = T;\
   3703           X->parent = 0;\
   3704           break;\
   3705         }\
   3706         else {\
   3707           CORRUPTION_ERROR_ACTION(M);\
   3708           break;\
   3709         }\
   3710       }\
   3711     }\
   3712   }\
   3713 }
   3714 
   3715 /*
   3716   Unlink steps:
   3717 
   3718   1. If x is a chained node, unlink it from its same-sized fd/bk links
   3719      and choose its bk node as its replacement.
   3720   2. If x was the last node of its size, but not a leaf node, it must
   3721      be replaced with a leaf node (not merely one with an open left or
    3722      right), to make sure that lefts and rights of descendants
    3723      correspond properly to bit masks.  We use the rightmost descendant
   3724      of x.  We could use any other leaf, but this is easy to locate and
   3725      tends to counteract removal of leftmosts elsewhere, and so keeps
   3726      paths shorter than minimally guaranteed.  This doesn't loop much
   3727      because on average a node in a tree is near the bottom.
   3728   3. If x is the base of a chain (i.e., has parent links) relink
   3729      x's parent and children to x's replacement (or null if none).
   3730 */
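
         /*
           (Note: same-sized large chunks form a circular fd/bk ring; only
            the ring's head has a nonzero parent and actually sits in the
            tree, which is why step 3 applies only when XP != 0 below.)
         */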
   3731 
   3732 #define unlink_large_chunk(M, X) {\
   3733   tchunkptr XP = X->parent;\
   3734   tchunkptr R;\
   3735   if (X->bk != X) {\
   3736     tchunkptr F = X->fd;\
   3737     R = X->bk;\
   3738     if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\
   3739       F->bk = R;\
   3740       R->fd = F;\
   3741     }\
   3742     else {\
   3743       CORRUPTION_ERROR_ACTION(M);\
   3744     }\
   3745   }\
   3746   else {\
   3747     tchunkptr* RP;\
   3748     if (((R = *(RP = &(X->child[1]))) != 0) ||\
   3749         ((R = *(RP = &(X->child[0]))) != 0)) {\
   3750       tchunkptr* CP;\
   3751       while ((*(CP = &(R->child[1])) != 0) ||\
   3752              (*(CP = &(R->child[0])) != 0)) {\
   3753         R = *(RP = CP);\
   3754       }\
   3755       if (RTCHECK(ok_address(M, RP)))\
   3756         *RP = 0;\
   3757       else {\
   3758         CORRUPTION_ERROR_ACTION(M);\
   3759       }\
   3760     }\
   3761   }\
   3762   if (XP != 0) {\
   3763     tbinptr* H = treebin_at(M, X->index);\
   3764     if (X == *H) {\
   3765       if ((*H = R) == 0) \
   3766         clear_treemap(M, X->index);\
   3767     }\
   3768     else if (RTCHECK(ok_address(M, XP))) {\
   3769       if (XP->child[0] == X) \
   3770         XP->child[0] = R;\
   3771       else \
   3772         XP->child[1] = R;\
   3773     }\
   3774     else\
   3775       CORRUPTION_ERROR_ACTION(M);\
   3776     if (R != 0) {\
   3777       if (RTCHECK(ok_address(M, R))) {\
   3778         tchunkptr C0, C1;\
   3779         R->parent = XP;\
   3780         if ((C0 = X->child[0]) != 0) {\
   3781           if (RTCHECK(ok_address(M, C0))) {\
   3782             R->child[0] = C0;\
   3783             C0->parent = R;\
   3784           }\
   3785           else\
   3786             CORRUPTION_ERROR_ACTION(M);\
   3787         }\
   3788         if ((C1 = X->child[1]) != 0) {\
   3789           if (RTCHECK(ok_address(M, C1))) {\
   3790             R->child[1] = C1;\
   3791             C1->parent = R;\
   3792           }\
   3793           else\
   3794             CORRUPTION_ERROR_ACTION(M);\
   3795         }\
   3796       }\
   3797       else\
   3798         CORRUPTION_ERROR_ACTION(M);\
   3799     }\
   3800   }\
   3801 }
   3802 
   3803 /* Relays to large vs small bin operations */
   3804 
   3805 #define insert_chunk(M, P, S)\
   3806   if (is_small(S)) insert_small_chunk(M, P, S)\
   3807   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
   3808 
   3809 #define unlink_chunk(M, P, S)\
   3810   if (is_small(S)) unlink_small_chunk(M, P, S)\
   3811   else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); }
   3812 
   3813 
   3814 /* Relays to internal calls to malloc/free from realloc, memalign etc */
   3815 
   3816 #if ONLY_MSPACES
   3817 #define internal_malloc(m, b) mspace_malloc(m, b)
   3818 #define internal_free(m, mem) mspace_free(m,mem);
   3819 #else /* ONLY_MSPACES */
   3820 #if MSPACES
   3821 #define internal_malloc(m, b)\
   3822   ((m == gm)? dlmalloc(b) : mspace_malloc(m, b))
   3823 #define internal_free(m, mem)\
   3824    if (m == gm) dlfree(mem); else mspace_free(m,mem);
   3825 #else /* MSPACES */
   3826 #define internal_malloc(m, b) dlmalloc(b)
   3827 #define internal_free(m, mem) dlfree(mem)
   3828 #endif /* MSPACES */
   3829 #endif /* ONLY_MSPACES */
   3830 
   3831 /* -----------------------  Direct-mmapping chunks ----------------------- */
   3832 
   3833 /*
   3834   Directly mmapped chunks are set up with an offset to the start of
   3835   the mmapped region stored in the prev_foot field of the chunk. This
   3836   allows reconstruction of the required argument to MUNMAP when freed,
   3837   and also allows adjustment of the returned chunk to meet alignment
   3838   requirements (especially in memalign).
   3839 */
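
         /*
           A minimal sketch of what this bookkeeping enables (illustrative
           only, not a routine in this file): given a directly mmapped
           chunk p,

             size_t offset = p->prev_foot;            distance back to the mmap base
             char*  base   = (char*)p - offset;       address originally returned by mmap
             size_t length = chunksize(p) + offset + MMAP_FOOT_PAD;

           and CALL_MUNMAP(base, length) releases the region; the
           mmapped-chunk paths in dispose_chunk and dlfree below perform
           exactly this computation.
         */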
   3840 
   3841 /* Malloc using mmap */
   3842 static void* mmap_alloc(mstate m, size_t nb) {
   3843   size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
   3844   if (m->footprint_limit != 0) {
   3845     size_t fp = m->footprint + mmsize;
   3846     if (fp <= m->footprint || fp > m->footprint_limit)
   3847       return 0;
   3848   }
   3849   if (mmsize > nb) {     /* Check for wrap around 0 */
   3850     char* mm = (char*)(CALL_DIRECT_MMAP(mmsize));
   3851     if (mm != CMFAIL) {
   3852       size_t offset = align_offset(chunk2mem(mm));
   3853       size_t psize = mmsize - offset - MMAP_FOOT_PAD;
   3854       mchunkptr p = (mchunkptr)(mm + offset);
   3855       p->prev_foot = offset;
   3856       p->head = psize;
   3857       mark_inuse_foot(m, p, psize);
   3858       chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD;
   3859       chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0;
   3860 
   3861       if (m->least_addr == 0 || mm < m->least_addr)
   3862         m->least_addr = mm;
   3863       if ((m->footprint += mmsize) > m->max_footprint)
   3864         m->max_footprint = m->footprint;
   3865       assert(is_aligned(chunk2mem(p)));
   3866       check_mmapped_chunk(m, p);
   3867       return chunk2mem(p);
   3868     }
   3869   }
   3870   return 0;
   3871 }
   3872 
   3873 /* Realloc using mmap */
   3874 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
   3875   size_t oldsize = chunksize(oldp);
   3876   (void)flags; /* placate people compiling -Wunused */
   3877   if (is_small(nb)) /* Can't shrink mmap regions below small size */
   3878     return 0;
   3879   /* Keep old chunk if big enough but not too big */
   3880   if (oldsize >= nb + SIZE_T_SIZE &&
   3881       (oldsize - nb) <= (mparams.granularity << 1))
   3882     return oldp;
   3883   else {
   3884     size_t offset = oldp->prev_foot;
   3885     size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD;
   3886     size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
   3887     char* cp = (char*)CALL_MREMAP((char*)oldp - offset,
   3888                                   oldmmsize, newmmsize, flags);
   3889     if (cp != CMFAIL) {
   3890       mchunkptr newp = (mchunkptr)(cp + offset);
   3891       size_t psize = newmmsize - offset - MMAP_FOOT_PAD;
   3892       newp->head = psize;
   3893       mark_inuse_foot(m, newp, psize);
   3894       chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD;
   3895       chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0;
   3896 
   3897       if (cp < m->least_addr)
   3898         m->least_addr = cp;
   3899       if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint)
   3900         m->max_footprint = m->footprint;
   3901       check_mmapped_chunk(m, newp);
   3902       return newp;
   3903     }
   3904   }
   3905   return 0;
   3906 }
   3907 
   3908 
   3909 /* -------------------------- mspace management -------------------------- */
   3910 
   3911 /* Initialize top chunk and its size */
   3912 static void init_top(mstate m, mchunkptr p, size_t psize) {
   3913   /* Ensure alignment */
   3914   size_t offset = align_offset(chunk2mem(p));
   3915   p = (mchunkptr)((char*)p + offset);
   3916   psize -= offset;
   3917 
   3918   m->top = p;
   3919   m->topsize = psize;
   3920   p->head = psize | PINUSE_BIT;
   3921   /* set size of fake trailing chunk holding overhead space only once */
   3922   chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE;
   3923   m->trim_check = mparams.trim_threshold; /* reset on each update */
   3924 }
   3925 
   3926 /* Initialize bins for a new mstate that is otherwise zeroed out */
   3927 static void init_bins(mstate m) {
   3928   /* Establish circular links for smallbins */
   3929   bindex_t i;
   3930   for (i = 0; i < NSMALLBINS; ++i) {
   3931     sbinptr bin = smallbin_at(m,i);
   3932     bin->fd = bin->bk = bin;
   3933   }
   3934 }
   3935 
   3936 #if PROCEED_ON_ERROR
   3937 
   3938 /* default corruption action */
   3939 static void reset_on_error(mstate m) {
   3940   int i;
   3941   ++malloc_corruption_error_count;
   3942   /* Reinitialize fields to forget about all memory */
   3943   m->smallmap = m->treemap = 0;
   3944   m->dvsize = m->topsize = 0;
   3945   m->seg.base = 0;
   3946   m->seg.size = 0;
   3947   m->seg.next = 0;
   3948   m->top = m->dv = 0;
   3949   for (i = 0; i < NTREEBINS; ++i)
   3950     *treebin_at(m, i) = 0;
   3951   init_bins(m);
   3952 }
   3953 #endif /* PROCEED_ON_ERROR */
   3954 
   3955 /* Allocate chunk and prepend remainder with chunk in successor base. */
   3956 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
   3957                            size_t nb) {
   3958   mchunkptr p = align_as_chunk(newbase);
   3959   mchunkptr oldfirst = align_as_chunk(oldbase);
   3960   size_t psize = (char*)oldfirst - (char*)p;
   3961   mchunkptr q = chunk_plus_offset(p, nb);
   3962   size_t qsize = psize - nb;
   3963   set_size_and_pinuse_of_inuse_chunk(m, p, nb);
   3964 
   3965   assert((char*)oldfirst > (char*)q);
   3966   assert(pinuse(oldfirst));
   3967   assert(qsize >= MIN_CHUNK_SIZE);
   3968 
   3969   /* consolidate remainder with first chunk of old base */
   3970   if (oldfirst == m->top) {
   3971     size_t tsize = m->topsize += qsize;
   3972     m->top = q;
   3973     q->head = tsize | PINUSE_BIT;
   3974     check_top_chunk(m, q);
   3975   }
   3976   else if (oldfirst == m->dv) {
   3977     size_t dsize = m->dvsize += qsize;
   3978     m->dv = q;
   3979     set_size_and_pinuse_of_free_chunk(q, dsize);
   3980   }
   3981   else {
   3982     if (!is_inuse(oldfirst)) {
   3983       size_t nsize = chunksize(oldfirst);
   3984       unlink_chunk(m, oldfirst, nsize);
   3985       oldfirst = chunk_plus_offset(oldfirst, nsize);
   3986       qsize += nsize;
   3987     }
   3988     set_free_with_pinuse(q, qsize, oldfirst);
   3989     insert_chunk(m, q, qsize);
   3990     check_free_chunk(m, q);
   3991   }
   3992 
   3993   check_malloced_chunk(m, chunk2mem(p), nb);
   3994   return chunk2mem(p);
   3995 }
   3996 
   3997 /* Add a segment to hold a new noncontiguous region */
   3998 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
   3999   /* Determine locations and sizes of segment, fenceposts, old top */
   4000   char* old_top = (char*)m->top;
   4001   msegmentptr oldsp = segment_holding(m, old_top);
   4002   char* old_end = oldsp->base + oldsp->size;
   4003   size_t ssize = pad_request(sizeof(struct malloc_segment));
   4004   char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
   4005   size_t offset = align_offset(chunk2mem(rawsp));
   4006   char* asp = rawsp + offset;
   4007   char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp;
   4008   mchunkptr sp = (mchunkptr)csp;
   4009   msegmentptr ss = (msegmentptr)(chunk2mem(sp));
   4010   mchunkptr tnext = chunk_plus_offset(sp, ssize);
   4011   mchunkptr p = tnext;
   4012   int nfences = 0;
   4013 
   4014   /* reset top to new space */
   4015   init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
   4016 
   4017   /* Set up segment record */
   4018   assert(is_aligned(ss));
   4019   set_size_and_pinuse_of_inuse_chunk(m, sp, ssize);
   4020   *ss = m->seg; /* Push current record */
   4021   m->seg.base = tbase;
   4022   m->seg.size = tsize;
   4023   m->seg.sflags = mmapped;
   4024   m->seg.next = ss;
   4025 
   4026   /* Insert trailing fenceposts */
   4027   for (;;) {
   4028     mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE);
   4029     p->head = FENCEPOST_HEAD;
   4030     ++nfences;
   4031     if ((char*)(&(nextp->head)) < old_end)
   4032       p = nextp;
   4033     else
   4034       break;
   4035   }
   4036   assert(nfences >= 2);
   4037 
   4038   /* Insert the rest of old top into a bin as an ordinary free chunk */
   4039   if (csp != old_top) {
   4040     mchunkptr q = (mchunkptr)old_top;
   4041     size_t psize = csp - old_top;
   4042     mchunkptr tn = chunk_plus_offset(q, psize);
   4043     set_free_with_pinuse(q, psize, tn);
   4044     insert_chunk(m, q, psize);
   4045   }
   4046 
   4047   check_top_chunk(m, m->top);
   4048 }
   4049 
   4050 /* -------------------------- System allocation -------------------------- */
   4051 
   4052 /* Get memory from system using MORECORE or MMAP */
   4053 static void* sys_alloc(mstate m, size_t nb) {
   4054   char* tbase = CMFAIL;
   4055   size_t tsize = 0;
   4056   flag_t mmap_flag = 0;
   4057   size_t asize; /* allocation size */
   4058 
   4059   ensure_initialization();
   4060 
   4061   /* Directly map large chunks, but only if already initialized */
   4062   if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) {
   4063     void* mem = mmap_alloc(m, nb);
   4064     if (mem != 0)
   4065       return mem;
   4066   }
   4067 
   4068   asize = granularity_align(nb + SYS_ALLOC_PADDING);
   4069   if (asize <= nb) {
   4070     /* BEGIN android-added: set errno */
   4071     MALLOC_FAILURE_ACTION;
   4072     /* END android-added */
   4073     return 0; /* wraparound */
   4074   }
   4075   if (m->footprint_limit != 0) {
   4076     size_t fp = m->footprint + asize;
   4077     if (fp <= m->footprint || fp > m->footprint_limit) {
   4078       /* BEGIN android-added: set errno */
   4079       MALLOC_FAILURE_ACTION;
   4080       /* END android-added */
   4081       return 0;
   4082     }
   4083   }
   4084 
   4085   /*
   4086     Try getting memory in any of three ways (in most-preferred to
   4087     least-preferred order):
   4088     1. A call to MORECORE that can normally contiguously extend memory.
    4089        (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE,
    4090        or main space is mmapped, or a previous contiguous call failed)
   4091     2. A call to MMAP new space (disabled if not HAVE_MMAP).
   4092        Note that under the default settings, if MORECORE is unable to
   4093        fulfill a request, and HAVE_MMAP is true, then mmap is
   4094        used as a noncontiguous system allocator. This is a useful backup
   4095        strategy for systems with holes in address spaces -- in this case
   4096        sbrk cannot contiguously expand the heap, but mmap may be able to
   4097        find space.
   4098     3. A call to MORECORE that cannot usually contiguously extend memory.
   4099        (disabled if not HAVE_MORECORE)
   4100 
   4101    In all cases, we need to request enough bytes from system to ensure
   4102    we can malloc nb bytes upon success, so pad with enough space for
   4103    top_foot, plus alignment-pad to make sure we don't lose bytes if
   4104    not on boundary, and round this up to a granularity unit.
   4105   */
   4106 
   4107   if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
   4108     char* br = CMFAIL;
   4109     size_t ssize = asize; /* sbrk call size */
   4110     msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
   4111     ACQUIRE_MALLOC_GLOBAL_LOCK();
   4112 
   4113     if (ss == 0) {  /* First time through or recovery */
   4114       char* base = (char*)CALL_MORECORE(0);
   4115       if (base != CMFAIL) {
   4116         size_t fp;
   4117         /* Adjust to end on a page boundary */
   4118         if (!is_page_aligned(base))
   4119           ssize += (page_align((size_t)base) - (size_t)base);
   4120         fp = m->footprint + ssize; /* recheck limits */
   4121         if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
   4122             (m->footprint_limit == 0 ||
   4123              (fp > m->footprint && fp <= m->footprint_limit)) &&
   4124             (br = (char*)(CALL_MORECORE(ssize))) == base) {
   4125           tbase = base;
   4126           tsize = ssize;
   4127         }
   4128       }
   4129     }
   4130     else {
   4131       /* Subtract out existing available top space from MORECORE request. */
   4132       ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING);
    4133       /* Use mem here only if it did contiguously extend old space */
   4134       if (ssize < HALF_MAX_SIZE_T &&
   4135           (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) {
   4136         tbase = br;
   4137         tsize = ssize;
   4138       }
   4139     }
   4140 
   4141     if (tbase == CMFAIL) {    /* Cope with partial failure */
   4142       if (br != CMFAIL) {    /* Try to use/extend the space we did get */
   4143         if (ssize < HALF_MAX_SIZE_T &&
   4144             ssize < nb + SYS_ALLOC_PADDING) {
   4145           size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize);
   4146           if (esize < HALF_MAX_SIZE_T) {
   4147             char* end = (char*)CALL_MORECORE(esize);
   4148             if (end != CMFAIL)
   4149               ssize += esize;
   4150             else {            /* Can't use; try to release */
   4151               (void) CALL_MORECORE(-ssize);
   4152               br = CMFAIL;
   4153             }
   4154           }
   4155         }
   4156       }
   4157       if (br != CMFAIL) {    /* Use the space we did get */
   4158         tbase = br;
   4159         tsize = ssize;
   4160       }
   4161       else
   4162         disable_contiguous(m); /* Don't try contiguous path in the future */
   4163     }
   4164 
   4165     RELEASE_MALLOC_GLOBAL_LOCK();
   4166   }
   4167 
   4168   if (HAVE_MMAP && tbase == CMFAIL) {  /* Try MMAP */
   4169     char* mp = (char*)(CALL_MMAP(asize));
   4170     if (mp != CMFAIL) {
   4171       tbase = mp;
   4172       tsize = asize;
   4173       mmap_flag = USE_MMAP_BIT;
   4174     }
   4175   }
   4176 
   4177   if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */
   4178     if (asize < HALF_MAX_SIZE_T) {
   4179       char* br = CMFAIL;
   4180       char* end = CMFAIL;
   4181       ACQUIRE_MALLOC_GLOBAL_LOCK();
   4182       br = (char*)(CALL_MORECORE(asize));
   4183       end = (char*)(CALL_MORECORE(0));
   4184       RELEASE_MALLOC_GLOBAL_LOCK();
   4185       if (br != CMFAIL && end != CMFAIL && br < end) {
   4186         size_t ssize = end - br;
   4187         if (ssize > nb + TOP_FOOT_SIZE) {
   4188           tbase = br;
   4189           tsize = ssize;
   4190         }
   4191       }
   4192     }
   4193   }
   4194 
   4195   if (tbase != CMFAIL) {
   4196 
   4197     if ((m->footprint += tsize) > m->max_footprint)
   4198       m->max_footprint = m->footprint;
   4199 
   4200     if (!is_initialized(m)) { /* first-time initialization */
   4201       if (m->least_addr == 0 || tbase < m->least_addr)
   4202         m->least_addr = tbase;
   4203       m->seg.base = tbase;
   4204       m->seg.size = tsize;
   4205       m->seg.sflags = mmap_flag;
   4206       m->magic = mparams.magic;
   4207       m->release_checks = MAX_RELEASE_CHECK_RATE;
   4208       init_bins(m);
   4209 #if !ONLY_MSPACES
   4210       if (is_global(m))
   4211         init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE);
   4212       else
   4213 #endif
   4214       {
   4215         /* Offset top by embedded malloc_state */
   4216         mchunkptr mn = next_chunk(mem2chunk(m));
   4217         init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE);
   4218       }
   4219     }
   4220 
   4221     else {
   4222       /* Try to merge with an existing segment */
   4223       msegmentptr sp = &m->seg;
   4224       /* Only consider most recent segment if traversal suppressed */
   4225       while (sp != 0 && tbase != sp->base + sp->size)
   4226         sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
   4227       if (sp != 0 &&
   4228           !is_extern_segment(sp) &&
   4229           (sp->sflags & USE_MMAP_BIT) == mmap_flag &&
   4230           segment_holds(sp, m->top)) { /* append */
   4231         sp->size += tsize;
   4232         init_top(m, m->top, m->topsize + tsize);
   4233       }
   4234       else {
   4235         if (tbase < m->least_addr)
   4236           m->least_addr = tbase;
   4237         sp = &m->seg;
   4238         while (sp != 0 && sp->base != tbase + tsize)
   4239           sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next;
   4240         if (sp != 0 &&
   4241             !is_extern_segment(sp) &&
   4242             (sp->sflags & USE_MMAP_BIT) == mmap_flag) {
   4243           char* oldbase = sp->base;
   4244           sp->base = tbase;
   4245           sp->size += tsize;
   4246           return prepend_alloc(m, tbase, oldbase, nb);
   4247         }
   4248         else
   4249           add_segment(m, tbase, tsize, mmap_flag);
   4250       }
   4251     }
   4252 
   4253     if (nb < m->topsize) { /* Allocate from new or extended top space */
   4254       size_t rsize = m->topsize -= nb;
   4255       mchunkptr p = m->top;
   4256       mchunkptr r = m->top = chunk_plus_offset(p, nb);
   4257       r->head = rsize | PINUSE_BIT;
   4258       set_size_and_pinuse_of_inuse_chunk(m, p, nb);
   4259       check_top_chunk(m, m->top);
   4260       check_malloced_chunk(m, chunk2mem(p), nb);
   4261       return chunk2mem(p);
   4262     }
   4263   }
   4264 
   4265   MALLOC_FAILURE_ACTION;
   4266   return 0;
   4267 }
   4268 
   4269 /* -----------------------  system deallocation -------------------------- */
   4270 
   4271 /* Unmap and unlink any mmapped segments that don't contain used chunks */
   4272 static size_t release_unused_segments(mstate m) {
   4273   size_t released = 0;
   4274   int nsegs = 0;
   4275   msegmentptr pred = &m->seg;
   4276   msegmentptr sp = pred->next;
   4277   while (sp != 0) {
   4278     char* base = sp->base;
   4279     size_t size = sp->size;
   4280     msegmentptr next = sp->next;
   4281     ++nsegs;
   4282     if (is_mmapped_segment(sp) && !is_extern_segment(sp)) {
   4283       mchunkptr p = align_as_chunk(base);
   4284       size_t psize = chunksize(p);
   4285       /* Can unmap if first chunk holds entire segment and not pinned */
   4286       if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) {
   4287         tchunkptr tp = (tchunkptr)p;
   4288         assert(segment_holds(sp, (char*)sp));
   4289         if (p == m->dv) {
   4290           m->dv = 0;
   4291           m->dvsize = 0;
   4292         }
   4293         else {
   4294           unlink_large_chunk(m, tp);
   4295         }
   4296         if (CALL_MUNMAP(base, size) == 0) {
   4297           released += size;
   4298           m->footprint -= size;
   4299           /* unlink obsoleted record */
   4300           sp = pred;
   4301           sp->next = next;
   4302         }
   4303         else { /* back out if cannot unmap */
   4304           insert_large_chunk(m, tp, psize);
   4305         }
   4306       }
   4307     }
   4308     if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */
   4309       break;
   4310     pred = sp;
   4311     sp = next;
   4312   }
   4313   /* Reset check counter */
   4314   m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)?
   4315                        (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE);
   4316   return released;
   4317 }
   4318 
   4319 static int sys_trim(mstate m, size_t pad) {
   4320   size_t released = 0;
   4321   ensure_initialization();
   4322   if (pad < MAX_REQUEST && is_initialized(m)) {
   4323     pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */
   4324 
   4325     if (m->topsize > pad) {
   4326       /* Shrink top space in granularity-size units, keeping at least one */
   4327       size_t unit = mparams.granularity;
   4328       size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit -
   4329                       SIZE_T_ONE) * unit;
   4330       msegmentptr sp = segment_holding(m, (char*)m->top);
   4331 
   4332       if (!is_extern_segment(sp)) {
   4333         if (is_mmapped_segment(sp)) {
   4334           if (HAVE_MMAP &&
   4335               sp->size >= extra &&
   4336               !has_segment_link(m, sp)) { /* can't shrink if pinned */
   4337             size_t newsize = sp->size - extra;
   4338             (void)newsize; /* placate people compiling -Wunused-variable */
   4339             /* Prefer mremap, fall back to munmap */
   4340             if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) ||
   4341                 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) {
   4342               released = extra;
   4343             }
   4344           }
   4345         }
   4346         else if (HAVE_MORECORE) {
   4347           if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */
   4348             extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit;
   4349           ACQUIRE_MALLOC_GLOBAL_LOCK();
   4350           {
   4351             /* Make sure end of memory is where we last set it. */
   4352             char* old_br = (char*)(CALL_MORECORE(0));
   4353             if (old_br == sp->base + sp->size) {
   4354               char* rel_br = (char*)(CALL_MORECORE(-extra));
   4355               char* new_br = (char*)(CALL_MORECORE(0));
   4356               if (rel_br != CMFAIL && new_br < old_br)
   4357                 released = old_br - new_br;
   4358             }
   4359           }
   4360           RELEASE_MALLOC_GLOBAL_LOCK();
   4361         }
   4362       }
   4363 
   4364       if (released != 0) {
   4365         sp->size -= released;
   4366         m->footprint -= released;
   4367         init_top(m, m->top, m->topsize - released);
   4368         check_top_chunk(m, m->top);
   4369       }
   4370     }
   4371 
   4372     /* Unmap any unused mmapped segments */
   4373     if (HAVE_MMAP)
   4374       released += release_unused_segments(m);
   4375 
   4376     /* On failure, disable autotrim to avoid repeated failed future calls */
   4377     if (released == 0 && m->topsize > m->trim_check)
   4378       m->trim_check = MAX_SIZE_T;
   4379   }
   4380 
   4381   return (released != 0)? 1 : 0;
   4382 }
   4383 
   4384 /* Consolidate and bin a chunk. Differs from exported versions
   4385    of free mainly in that the chunk need not be marked as inuse.
   4386 */
   4387 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
   4388   mchunkptr next = chunk_plus_offset(p, psize);
   4389   if (!pinuse(p)) {
   4390     mchunkptr prev;
   4391     size_t prevsize = p->prev_foot;
   4392     if (is_mmapped(p)) {
   4393       psize += prevsize + MMAP_FOOT_PAD;
   4394       if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
   4395         m->footprint -= psize;
   4396       return;
   4397     }
   4398     prev = chunk_minus_offset(p, prevsize);
   4399     psize += prevsize;
   4400     p = prev;
   4401     if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */
   4402       if (p != m->dv) {
   4403         unlink_chunk(m, p, prevsize);
   4404       }
   4405       else if ((next->head & INUSE_BITS) == INUSE_BITS) {
   4406         m->dvsize = psize;
   4407         set_free_with_pinuse(p, psize, next);
   4408         return;
   4409       }
   4410     }
   4411     else {
   4412       CORRUPTION_ERROR_ACTION(m);
   4413       return;
   4414     }
   4415   }
   4416   if (RTCHECK(ok_address(m, next))) {
   4417     if (!cinuse(next)) {  /* consolidate forward */
   4418       if (next == m->top) {
   4419         size_t tsize = m->topsize += psize;
   4420         m->top = p;
   4421         p->head = tsize | PINUSE_BIT;
   4422         if (p == m->dv) {
   4423           m->dv = 0;
   4424           m->dvsize = 0;
   4425         }
   4426         return;
   4427       }
   4428       else if (next == m->dv) {
   4429         size_t dsize = m->dvsize += psize;
   4430         m->dv = p;
   4431         set_size_and_pinuse_of_free_chunk(p, dsize);
   4432         return;
   4433       }
   4434       else {
   4435         size_t nsize = chunksize(next);
   4436         psize += nsize;
   4437         unlink_chunk(m, next, nsize);
   4438         set_size_and_pinuse_of_free_chunk(p, psize);
   4439         if (p == m->dv) {
   4440           m->dvsize = psize;
   4441           return;
   4442         }
   4443       }
   4444     }
   4445     else {
   4446       set_free_with_pinuse(p, psize, next);
   4447     }
   4448     insert_chunk(m, p, psize);
   4449   }
   4450   else {
   4451     CORRUPTION_ERROR_ACTION(m);
   4452   }
   4453 }
   4454 
   4455 /* ---------------------------- malloc --------------------------- */
   4456 
   4457 /* allocate a large request from the best fitting chunk in a treebin */
   4458 static void* tmalloc_large(mstate m, size_t nb) {
   4459   tchunkptr v = 0;
   4460   size_t rsize = -nb; /* Unsigned negation */
   4461   tchunkptr t;
   4462   bindex_t idx;
   4463   compute_tree_index(nb, idx);
   4464   if ((t = *treebin_at(m, idx)) != 0) {
   4465     /* Traverse tree for this bin looking for node with size == nb */
   4466     size_t sizebits = nb << leftshift_for_tree_index(idx);
   4467     tchunkptr rst = 0;  /* The deepest untaken right subtree */
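                 /* Each pass examines the current high bit of sizebits: it
                    selects the left (0) or right (1) child, mirroring the
                    bit order used by compute_tree_index. */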
   4468     for (;;) {
   4469       tchunkptr rt;
   4470       size_t trem = chunksize(t) - nb;
   4471       if (trem < rsize) {
   4472         v = t;
   4473         if ((rsize = trem) == 0)
   4474           break;
   4475       }
   4476       rt = t->child[1];
   4477       t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1];
   4478       if (rt != 0 && rt != t)
   4479         rst = rt;
   4480       if (t == 0) {
   4481         t = rst; /* set t to least subtree holding sizes > nb */
   4482         break;
   4483       }
   4484       sizebits <<= 1;
   4485     }
   4486   }
   4487   if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */
   4488     binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap;
   4489     if (leftbits != 0) {
   4490       bindex_t i;
   4491       binmap_t leastbit = least_bit(leftbits);
   4492       compute_bit2idx(leastbit, i);
   4493       t = *treebin_at(m, i);
   4494     }
   4495   }
   4496 
   4497   while (t != 0) { /* find smallest of tree or subtree */
   4498     size_t trem = chunksize(t) - nb;
   4499     if (trem < rsize) {
   4500       rsize = trem;
   4501       v = t;
   4502     }
   4503     t = leftmost_child(t);
   4504   }
   4505 
   4506   /*  If dv is a better fit, return 0 so malloc will use it */
   4507   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
   4508     if (RTCHECK(ok_address(m, v))) { /* split */
   4509       mchunkptr r = chunk_plus_offset(v, nb);
   4510       assert(chunksize(v) == rsize + nb);
   4511       if (RTCHECK(ok_next(v, r))) {
   4512         unlink_large_chunk(m, v);
   4513         if (rsize < MIN_CHUNK_SIZE)
   4514           set_inuse_and_pinuse(m, v, (rsize + nb));
   4515         else {
   4516           set_size_and_pinuse_of_inuse_chunk(m, v, nb);
   4517           set_size_and_pinuse_of_free_chunk(r, rsize);
   4518           insert_chunk(m, r, rsize);
   4519         }
   4520         return chunk2mem(v);
   4521       }
   4522     }
   4523     CORRUPTION_ERROR_ACTION(m);
   4524   }
   4525   return 0;
   4526 }
   4527 
   4528 /* allocate a small request from the best fitting chunk in a treebin */
   4529 static void* tmalloc_small(mstate m, size_t nb) {
   4530   tchunkptr t, v;
   4531   size_t rsize;
   4532   bindex_t i;
   4533   binmap_t leastbit = least_bit(m->treemap);
   4534   compute_bit2idx(leastbit, i);
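           /* Callers only invoke tmalloc_small when m->treemap is nonzero,
              so the selected treebin is nonempty and t below is a valid
              chunk. */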
   4535   v = t = *treebin_at(m, i);
   4536   rsize = chunksize(t) - nb;
   4537 
   4538   while ((t = leftmost_child(t)) != 0) {
   4539     size_t trem = chunksize(t) - nb;
   4540     if (trem < rsize) {
   4541       rsize = trem;
   4542       v = t;
   4543     }
   4544   }
   4545 
   4546   if (RTCHECK(ok_address(m, v))) {
   4547     mchunkptr r = chunk_plus_offset(v, nb);
   4548     assert(chunksize(v) == rsize + nb);
   4549     if (RTCHECK(ok_next(v, r))) {
   4550       unlink_large_chunk(m, v);
   4551       if (rsize < MIN_CHUNK_SIZE)
   4552         set_inuse_and_pinuse(m, v, (rsize + nb));
   4553       else {
   4554         set_size_and_pinuse_of_inuse_chunk(m, v, nb);
   4555         set_size_and_pinuse_of_free_chunk(r, rsize);
   4556         replace_dv(m, r, rsize);
   4557       }
   4558       return chunk2mem(v);
   4559     }
   4560   }
   4561 
   4562   CORRUPTION_ERROR_ACTION(m);
   4563   return 0;
   4564 }
   4565 
   4566 #if !ONLY_MSPACES
   4567 
   4568 void* dlmalloc(size_t bytes) {
   4569   /*
   4570      Basic algorithm:
   4571      If a small request (< 256 bytes minus per-chunk overhead):
   4572        1. If one exists, use a remainderless chunk in associated smallbin.
   4573           (Remainderless means that there are too few excess bytes to
   4574           represent as a chunk.)
   4575        2. If it is big enough, use the dv chunk, which is normally the
   4576           chunk adjacent to the one used for the most recent small request.
   4577        3. If one exists, split the smallest available chunk in a bin,
   4578           saving remainder in dv.
   4579        4. If it is big enough, use the top chunk.
   4580        5. If available, get memory from system and use it
   4581      Otherwise, for a large request:
   4582        1. Find the smallest available binned chunk that fits, and use it
   4583           if it is better fitting than dv chunk, splitting if necessary.
   4584        2. If better fitting than any binned chunk, use the dv chunk.
   4585        3. If it is big enough, use the top chunk.
   4586        4. If request size >= mmap threshold, try to directly mmap this chunk.
   4587        5. If available, get memory from system and use it
   4588 
   4589      The ugly goto's here ensure that postaction occurs along all paths.
   4590   */
   4591 
   4592 #if USE_LOCKS
   4593   ensure_initialization(); /* initialize in sys_alloc if not using locks */
   4594 #endif
   4595 
   4596   if (!PREACTION(gm)) {
   4597     void* mem;
   4598     size_t nb;
   4599     if (bytes <= MAX_SMALL_REQUEST) {
   4600       bindex_t idx;
   4601       binmap_t smallbits;
   4602       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
   4603       idx = small_index(nb);
   4604       smallbits = gm->smallmap >> idx;
   4605 
   4606       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
   4607         mchunkptr b, p;
   4608         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
   4609         b = smallbin_at(gm, idx);
   4610         p = b->fd;
   4611         assert(chunksize(p) == small_index2size(idx));
   4612         unlink_first_small_chunk(gm, b, p, idx);
   4613         set_inuse_and_pinuse(gm, p, small_index2size(idx));
   4614         mem = chunk2mem(p);
   4615         check_malloced_chunk(gm, mem, nb);
   4616         goto postaction;
   4617       }
   4618 
   4619       else if (nb > gm->dvsize) {
   4620         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
   4621           mchunkptr b, p, r;
   4622           size_t rsize;
   4623           bindex_t i;
   4624           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
   4625           binmap_t leastbit = least_bit(leftbits);
   4626           compute_bit2idx(leastbit, i);
   4627           b = smallbin_at(gm, i);
   4628           p = b->fd;
   4629           assert(chunksize(p) == small_index2size(i));
   4630           unlink_first_small_chunk(gm, b, p, i);
   4631           rsize = small_index2size(i) - nb;
   4632           /* Fit here cannot be remainderless if 4byte sizes */
   4633           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
   4634             set_inuse_and_pinuse(gm, p, small_index2size(i));
   4635           else {
   4636             set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
   4637             r = chunk_plus_offset(p, nb);
   4638             set_size_and_pinuse_of_free_chunk(r, rsize);
   4639             replace_dv(gm, r, rsize);
   4640           }
   4641           mem = chunk2mem(p);
   4642           check_malloced_chunk(gm, mem, nb);
   4643           goto postaction;
   4644         }
   4645 
   4646         else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
   4647           check_malloced_chunk(gm, mem, nb);
   4648           goto postaction;
   4649         }
   4650       }
   4651     }
   4652     else if (bytes >= MAX_REQUEST)
   4653       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
   4654     else {
   4655       nb = pad_request(bytes);
   4656       if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
   4657         check_malloced_chunk(gm, mem, nb);
   4658         goto postaction;
   4659       }
   4660     }
   4661 
   4662     if (nb <= gm->dvsize) {
   4663       size_t rsize = gm->dvsize - nb;
   4664       mchunkptr p = gm->dv;
   4665       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
   4666         mchunkptr r = gm->dv = chunk_plus_offset(p, nb);
   4667         gm->dvsize = rsize;
   4668         set_size_and_pinuse_of_free_chunk(r, rsize);
   4669         set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
   4670       }
   4671       else { /* exhaust dv */
   4672         size_t dvs = gm->dvsize;
   4673         gm->dvsize = 0;
   4674         gm->dv = 0;
   4675         set_inuse_and_pinuse(gm, p, dvs);
   4676       }
   4677       mem = chunk2mem(p);
   4678       check_malloced_chunk(gm, mem, nb);
   4679       goto postaction;
   4680     }
   4681 
   4682     else if (nb < gm->topsize) { /* Split top */
   4683       size_t rsize = gm->topsize -= nb;
   4684       mchunkptr p = gm->top;
   4685       mchunkptr r = gm->top = chunk_plus_offset(p, nb);
   4686       r->head = rsize | PINUSE_BIT;
   4687       set_size_and_pinuse_of_inuse_chunk(gm, p, nb);
   4688       mem = chunk2mem(p);
   4689       check_top_chunk(gm, gm->top);
   4690       check_malloced_chunk(gm, mem, nb);
   4691       goto postaction;
   4692     }
   4693 
   4694     mem = sys_alloc(gm, nb);
   4695 
   4696   postaction:
   4697     POSTACTION(gm);
   4698     return mem;
   4699   }
   4700 
   4701   return 0;
   4702 }
   4703 
   4704 /* ---------------------------- free --------------------------- */
   4705 
   4706 void dlfree(void* mem) {
   4707   /*
    4708      Consolidate freed chunks with preceding or succeeding bordering
   4709      free chunks, if they exist, and then place in a bin.  Intermixed
   4710      with special cases for top, dv, mmapped chunks, and usage errors.
   4711   */
   4712 
   4713   if (mem != 0) {
   4714     mchunkptr p  = mem2chunk(mem);
   4715 #if FOOTERS
   4716     mstate fm = get_mstate_for(p);
   4717     if (!ok_magic(fm)) {
   4718       USAGE_ERROR_ACTION(fm, p);
   4719       return;
   4720     }
   4721 #else /* FOOTERS */
   4722 #define fm gm
   4723 #endif /* FOOTERS */
   4724     if (!PREACTION(fm)) {
   4725       check_inuse_chunk(fm, p);
   4726       if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
   4727         size_t psize = chunksize(p);
   4728         mchunkptr next = chunk_plus_offset(p, psize);
   4729         if (!pinuse(p)) {
   4730           size_t prevsize = p->prev_foot;
   4731           if (is_mmapped(p)) {
   4732             psize += prevsize + MMAP_FOOT_PAD;
   4733             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
   4734               fm->footprint -= psize;
   4735             goto postaction;
   4736           }
   4737           else {
   4738             mchunkptr prev = chunk_minus_offset(p, prevsize);
   4739             psize += prevsize;
   4740             p = prev;
   4741             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
   4742               if (p != fm->dv) {
   4743                 unlink_chunk(fm, p, prevsize);
   4744               }
   4745               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
   4746                 fm->dvsize = psize;
   4747                 set_free_with_pinuse(p, psize, next);
   4748                 goto postaction;
   4749               }
   4750             }
   4751             else
   4752               goto erroraction;
   4753           }
   4754         }
   4755 
   4756         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
   4757           if (!cinuse(next)) {  /* consolidate forward */
   4758             if (next == fm->top) {
   4759               size_t tsize = fm->topsize += psize;
   4760               fm->top = p;
   4761               p->head = tsize | PINUSE_BIT;
   4762               if (p == fm->dv) {
   4763                 fm->dv = 0;
   4764                 fm->dvsize = 0;
   4765               }
   4766               if (should_trim(fm, tsize))
   4767                 sys_trim(fm, 0);
   4768               goto postaction;
   4769             }
   4770             else if (next == fm->dv) {
   4771               size_t dsize = fm->dvsize += psize;
   4772               fm->dv = p;
   4773               set_size_and_pinuse_of_free_chunk(p, dsize);
   4774               goto postaction;
   4775             }
   4776             else {
   4777               size_t nsize = chunksize(next);
   4778               psize += nsize;
   4779               unlink_chunk(fm, next, nsize);
   4780               set_size_and_pinuse_of_free_chunk(p, psize);
   4781               if (p == fm->dv) {
   4782                 fm->dvsize = psize;
   4783                 goto postaction;
   4784               }
   4785             }
   4786           }
   4787           else
   4788             set_free_with_pinuse(p, psize, next);
   4789 
   4790           if (is_small(psize)) {
   4791             insert_small_chunk(fm, p, psize);
   4792             check_free_chunk(fm, p);
   4793           }
   4794           else {
   4795             tchunkptr tp = (tchunkptr)p;
   4796             insert_large_chunk(fm, tp, psize);
   4797             check_free_chunk(fm, p);
   4798             if (--fm->release_checks == 0)
   4799               release_unused_segments(fm);
   4800           }
   4801           goto postaction;
   4802         }
   4803       }
   4804     erroraction:
   4805       USAGE_ERROR_ACTION(fm, p);
   4806     postaction:
   4807       POSTACTION(fm);
   4808     }
   4809   }
   4810 #if !FOOTERS
   4811 #undef fm
   4812 #endif /* FOOTERS */
   4813 }
   4814 
   4815 void* dlcalloc(size_t n_elements, size_t elem_size) {
   4816   void* mem;
   4817   size_t req = 0;
   4818   if (n_elements != 0) {
   4819     req = n_elements * elem_size;
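             /* Overflow check: if both operands fit in 16 bits the product
                cannot overflow size_t and the division is skipped; otherwise
                dividing back detects overflow. */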
   4820     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
   4821         (req / n_elements != elem_size))
   4822       req = MAX_SIZE_T; /* force downstream failure on overflow */
   4823   }
   4824   mem = dlmalloc(req);
   4825   if (mem != 0) {
   4826     mchunkptr p = mem2chunk(mem);
   4827     if (calloc_must_clear(p)) {
   4828       /* Make sure to clear all of the buffer, not just the requested size. */
   4829       memset(mem, 0, chunksize(p) - overhead_for(p));
   4830     }
   4831   }
   4832   return mem;
   4833 }
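
         /*
           Illustrative use of the entry points above (a sketch, not part of
           the allocator; depending on compile-time naming options they may
           also be exported under the standard malloc/calloc/free names):

             void* p = dlmalloc(100);                     100 usable bytes
             int*  v = (int*)dlcalloc(10, sizeof(int));   zeroed array of 10 ints
             dlfree(v);
             dlfree(p);
         */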
   4834 
   4835 #endif /* !ONLY_MSPACES */
   4836 
   4837 /* ------------ Internal support for realloc, memalign, etc -------------- */
   4838 
   4839 /* Try to realloc; only in-place unless can_move true */
   4840 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
   4841                                    int can_move) {
   4842   mchunkptr newp = 0;
   4843   size_t oldsize = chunksize(p);
   4844   mchunkptr next = chunk_plus_offset(p, oldsize);
   4845   if (RTCHECK(ok_address(m, p) && ok_inuse(p) &&
   4846               ok_next(p, next) && ok_pinuse(next))) {
   4847     if (is_mmapped(p)) {
   4848       newp = mmap_resize(m, p, nb, can_move);
   4849     }
   4850     else if (oldsize >= nb) {             /* already big enough */
   4851       size_t rsize = oldsize - nb;
   4852       if (rsize >= MIN_CHUNK_SIZE) {      /* split off remainder */
   4853         mchunkptr r = chunk_plus_offset(p, nb);
   4854         set_inuse(m, p, nb);
   4855         set_inuse(m, r, rsize);
   4856         dispose_chunk(m, r, rsize);
   4857       }
   4858       newp = p;
   4859     }
   4860     else if (next == m->top) {  /* extend into top */
   4861       if (oldsize + m->topsize > nb) {
   4862         size_t newsize = oldsize + m->topsize;
   4863         size_t newtopsize = newsize - nb;
   4864         mchunkptr newtop = chunk_plus_offset(p, nb);
   4865         set_inuse(m, p, nb);
   4866         newtop->head = newtopsize |PINUSE_BIT;
   4867         m->top = newtop;
   4868         m->topsize = newtopsize;
   4869         newp = p;
   4870       }
   4871     }
   4872     else if (next == m->dv) { /* extend into dv */
   4873       size_t dvs = m->dvsize;
   4874       if (oldsize + dvs >= nb) {
   4875         size_t dsize = oldsize + dvs - nb;
   4876         if (dsize >= MIN_CHUNK_SIZE) {
   4877           mchunkptr r = chunk_plus_offset(p, nb);
   4878           mchunkptr n = chunk_plus_offset(r, dsize);
   4879           set_inuse(m, p, nb);
   4880           set_size_and_pinuse_of_free_chunk(r, dsize);
   4881           clear_pinuse(n);
   4882           m->dvsize = dsize;
   4883           m->dv = r;
   4884         }
   4885         else { /* exhaust dv */
   4886           size_t newsize = oldsize + dvs;
   4887           set_inuse(m, p, newsize);
   4888           m->dvsize = 0;
   4889           m->dv = 0;
   4890         }
   4891         newp = p;
   4892       }
   4893     }
   4894     else if (!cinuse(next)) { /* extend into next free chunk */
   4895       size_t nextsize = chunksize(next);
   4896       if (oldsize + nextsize >= nb) {
   4897         size_t rsize = oldsize + nextsize - nb;
   4898         unlink_chunk(m, next, nextsize);
   4899         if (rsize < MIN_CHUNK_SIZE) {
   4900           size_t newsize = oldsize + nextsize;
   4901           set_inuse(m, p, newsize);
   4902         }
   4903         else {
   4904           mchunkptr r = chunk_plus_offset(p, nb);
   4905           set_inuse(m, p, nb);
   4906           set_inuse(m, r, rsize);
   4907           dispose_chunk(m, r, rsize);
   4908         }
   4909         newp = p;
   4910       }
   4911     }
   4912   }
   4913   else {
   4914     USAGE_ERROR_ACTION(m, chunk2mem(p));
   4915   }
   4916   return newp;
   4917 }
   4918 
   4919 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
   4920   void* mem = 0;
   4921   if (alignment <  MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */
   4922     alignment = MIN_CHUNK_SIZE;
   4923   if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */
   4924     size_t a = MALLOC_ALIGNMENT << 1;
   4925     while (a < alignment) a <<= 1;
   4926     alignment = a;
   4927   }
   4928   if (bytes >= MAX_REQUEST - alignment) {
   4929     if (m != 0)  { /* Test isn't needed but avoids compiler warning */
   4930       MALLOC_FAILURE_ACTION;
   4931     }
   4932   }
   4933   else {
   4934     size_t nb = request2size(bytes);
   4935     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
   4936     mem = internal_malloc(m, req);
   4937     if (mem != 0) {
   4938       mchunkptr p = mem2chunk(mem);
   4939       if (PREACTION(m))
   4940         return 0;
   4941       if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */
   4942         /*
   4943           Find an aligned spot inside chunk.  Since we need to give
   4944           back leading space in a chunk of at least MIN_CHUNK_SIZE, if
   4945           the first calculation places us at a spot with less than
   4946           MIN_CHUNK_SIZE leader, we can move to the next aligned spot.
   4947           We've allocated enough total room so that this is always
   4948           possible.
   4949         */
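                 /* br: header of the chunk whose payload (chunk2mem) falls
                    on the first alignment boundary at or above mem. */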
   4950         char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment -
   4951                                                        SIZE_T_ONE)) &
   4952                                              -alignment));
   4953         char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)?
   4954           br : br+alignment;
   4955         mchunkptr newp = (mchunkptr)pos;
   4956         size_t leadsize = pos - (char*)(p);
   4957         size_t newsize = chunksize(p) - leadsize;
   4958 
   4959         if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
   4960           newp->prev_foot = p->prev_foot + leadsize;
   4961           newp->head = newsize;
   4962         }
   4963         else { /* Otherwise, give back leader, use the rest */
   4964           set_inuse(m, newp, newsize);
   4965           set_inuse(m, p, leadsize);
   4966           dispose_chunk(m, p, leadsize);
   4967         }
   4968         p = newp;
   4969       }
   4970 
   4971       /* Give back spare room at the end */
   4972       if (!is_mmapped(p)) {
   4973         size_t size = chunksize(p);
   4974         if (size > nb + MIN_CHUNK_SIZE) {
   4975           size_t remainder_size = size - nb;
   4976           mchunkptr remainder = chunk_plus_offset(p, nb);
   4977           set_inuse(m, p, nb);
   4978           set_inuse(m, remainder, remainder_size);
   4979           dispose_chunk(m, remainder, remainder_size);
   4980         }
   4981       }
   4982 
   4983       mem = chunk2mem(p);
   4984       assert (chunksize(p) >= nb);
   4985       assert(((size_t)mem & (alignment - 1)) == 0);
   4986       check_inuse_chunk(m, p);
   4987       POSTACTION(m);
   4988     }
   4989   }
   4990   return mem;
   4991 }
   4992 
   4993 /*
   4994   Common support for independent_X routines, handling
   4995     all of the combinations that can result.
   4996   The opts arg has:
   4997     bit 0 set if all elements are same size (using sizes[0])
   4998     bit 1 set if elements should be zeroed
   4999 */
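         /* (For example, the independent_calloc and independent_comalloc
             entry points later in this file call ialloc with opts values of
             3 and 0, respectively.) */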
   5000 static void** ialloc(mstate m,
   5001                      size_t n_elements,
   5002                      size_t* sizes,
   5003                      int opts,
   5004                      void* chunks[]) {
   5005 
   5006   size_t    element_size;   /* chunksize of each element, if all same */
   5007   size_t    contents_size;  /* total size of elements */
   5008   size_t    array_size;     /* request size of pointer array */
   5009   void*     mem;            /* malloced aggregate space */
   5010   mchunkptr p;              /* corresponding chunk */
   5011   size_t    remainder_size; /* remaining bytes while splitting */
   5012   void**    marray;         /* either "chunks" or malloced ptr array */
   5013   mchunkptr array_chunk;    /* chunk for malloced ptr array */
   5014   flag_t    was_enabled;    /* to disable mmap */
   5015   size_t    size;
   5016   size_t    i;
   5017 
   5018   ensure_initialization();
   5019   /* compute array length, if needed */
   5020   if (chunks != 0) {
   5021     if (n_elements == 0)
   5022       return chunks; /* nothing to do */
   5023     marray = chunks;
   5024     array_size = 0;
   5025   }
   5026   else {
   5027     /* if empty req, must still return chunk representing empty array */
   5028     if (n_elements == 0)
   5029       return (void**)internal_malloc(m, 0);
   5030     marray = 0;
   5031     array_size = request2size(n_elements * (sizeof(void*)));
   5032   }
   5033 
   5034   /* compute total element size */
   5035   if (opts & 0x1) { /* all-same-size */
   5036     element_size = request2size(*sizes);
   5037     contents_size = n_elements * element_size;
   5038   }
   5039   else { /* add up all the sizes */
   5040     element_size = 0;
   5041     contents_size = 0;
   5042     for (i = 0; i != n_elements; ++i)
   5043       contents_size += request2size(sizes[i]);
   5044   }
   5045 
   5046   size = contents_size + array_size;
   5047 
   5048   /*
   5049      Allocate the aggregate chunk.  First disable direct-mmapping so
   5050      malloc won't use it, since we would not be able to later
   5051      free/realloc space internal to a segregated mmap region.
   5052   */
   5053   was_enabled = use_mmap(m);
   5054   disable_mmap(m);
   5055   mem = internal_malloc(m, size - CHUNK_OVERHEAD);
   5056   if (was_enabled)
   5057     enable_mmap(m);
   5058   if (mem == 0)
   5059     return 0;
   5060 
   5061   if (PREACTION(m)) return 0;
   5062   p = mem2chunk(mem);
   5063   remainder_size = chunksize(p);
   5064 
   5065   assert(!is_mmapped(p));
   5066 
   5067   if (opts & 0x2) {       /* optionally clear the elements */
   5068     memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size);
   5069   }
   5070 
   5071   /* If not provided, allocate the pointer array as final part of chunk */
   5072   if (marray == 0) {
   5073     size_t  array_chunk_size;
   5074     array_chunk = chunk_plus_offset(p, contents_size);
   5075     array_chunk_size = remainder_size - contents_size;
   5076     marray = (void**) (chunk2mem(array_chunk));
   5077     set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size);
   5078     remainder_size = contents_size;
   5079   }
   5080 
   5081   /* split out elements */
   5082   for (i = 0; ; ++i) {
   5083     marray[i] = chunk2mem(p);
   5084     if (i != n_elements-1) {
   5085       if (element_size != 0)
   5086         size = element_size;
   5087       else
   5088         size = request2size(sizes[i]);
   5089       remainder_size -= size;
   5090       set_size_and_pinuse_of_inuse_chunk(m, p, size);
   5091       p = chunk_plus_offset(p, size);
   5092     }
   5093     else { /* the final element absorbs any overallocation slop */
   5094       set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size);
   5095       break;
   5096     }
   5097   }
   5098 
   5099 #if DEBUG
   5100   if (marray != chunks) {
   5101     /* final element must have exactly exhausted chunk */
   5102     if (element_size != 0) {
   5103       assert(remainder_size == element_size);
   5104     }
   5105     else {
   5106       assert(remainder_size == request2size(sizes[i]));
   5107     }
   5108     check_inuse_chunk(m, mem2chunk(marray));
   5109   }
   5110   for (i = 0; i != n_elements; ++i)
   5111     check_inuse_chunk(m, mem2chunk(marray[i]));
   5112 
   5113 #endif /* DEBUG */
   5114 
   5115   POSTACTION(m);
   5116   return marray;
   5117 }
   5118 
   5119 /* Try to free all pointers in the given array.
    5120    Note: this could be made faster by delaying consolidation,
    5121    at the price of disabling some user integrity checks. We
   5122    still optimize some consolidations by combining adjacent
   5123    chunks before freeing, which will occur often if allocated
   5124    with ialloc or the array is sorted.
   5125 */
   5126 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) {
   5127   size_t unfreed = 0;
   5128   if (!PREACTION(m)) {
   5129     void** a;
   5130     void** fence = &(array[nelem]);
   5131     for (a = array; a != fence; ++a) {
   5132       void* mem = *a;
   5133       if (mem != 0) {
   5134         mchunkptr p = mem2chunk(mem);
   5135         size_t psize = chunksize(p);
   5136 #if FOOTERS
   5137         if (get_mstate_for(p) != m) {
   5138           ++unfreed;
   5139           continue;
   5140         }
   5141 #endif
   5142         check_inuse_chunk(m, p);
   5143         *a = 0;
   5144         if (RTCHECK(ok_address(m, p) && ok_inuse(p))) {
   5145           void ** b = a + 1; /* try to merge with next chunk */
   5146           mchunkptr next = next_chunk(p);
   5147           if (b != fence && *b == chunk2mem(next)) {
   5148             size_t newsize = chunksize(next) + psize;
   5149             set_inuse(m, p, newsize);
   5150             *b = chunk2mem(p);
   5151           }
   5152           else
   5153             dispose_chunk(m, p, psize);
   5154         }
   5155         else {
   5156           CORRUPTION_ERROR_ACTION(m);
   5157           break;
   5158         }
   5159       }
   5160     }
   5161     if (should_trim(m, m->topsize))
   5162       sys_trim(m, 0);
   5163     POSTACTION(m);
   5164   }
   5165   return unfreed;
   5166 }
   5167 
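         /*
           Illustrative sketch (not part of this malloc): callers of the
           exported dlbulk_free (defined below, available when !ONLY_MSPACES)
           can sort the pointer array by address first so that adjacent chunks
           are combined before freeing, as described above.  Assumes <stdlib.h>
           and <stdint.h> for qsort and uintptr_t.

             static int ptr_cmp(const void* a, const void* b) {
               uintptr_t pa = (uintptr_t)*(void* const*)a;
               uintptr_t pb = (uintptr_t)*(void* const*)b;
               return (pa > pb) - (pa < pb);
             }

             void free_all(void* ptrs[], size_t n) {
               size_t unfreed;
               qsort(ptrs, n, sizeof(void*), ptr_cmp);
               unfreed = dlbulk_free(ptrs, n);  // returns the count left unfreed
               (void)unfreed;
             }
         */
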
   5168 /* Traversal */
   5169 #if MALLOC_INSPECT_ALL
   5170 static void internal_inspect_all(mstate m,
   5171                                  void(*handler)(void *start,
   5172                                                 void *end,
   5173                                                 size_t used_bytes,
   5174                                                 void* callback_arg),
   5175                                  void* arg) {
   5176   if (is_initialized(m)) {
   5177     mchunkptr top = m->top;
   5178     msegmentptr s;
   5179     for (s = &m->seg; s != 0; s = s->next) {
   5180       mchunkptr q = align_as_chunk(s->base);
   5181       while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) {
   5182         mchunkptr next = next_chunk(q);
   5183         size_t sz = chunksize(q);
   5184         size_t used;
   5185         void* start;
   5186         if (is_inuse(q)) {
   5187           used = sz - CHUNK_OVERHEAD; /* must not be mmapped */
   5188           start = chunk2mem(q);
   5189         }
   5190         else {
   5191           used = 0;
   5192           if (is_small(sz)) {     /* offset by possible bookkeeping */
   5193             start = (void*)((char*)q + sizeof(struct malloc_chunk));
   5194           }
   5195           else {
   5196             start = (void*)((char*)q + sizeof(struct malloc_tree_chunk));
   5197           }
   5198         }
   5199         if (start < (void*)next)  /* skip if all space is bookkeeping */
   5200           handler(start, next, used, arg);
   5201         if (q == top)
   5202           break;
   5203         q = next;
   5204       }
   5205     }
   5206   }
   5207 }
   5208 #endif /* MALLOC_INSPECT_ALL */
   5209 
   5210 /* ------------------ Exported realloc, memalign, etc -------------------- */
   5211 
   5212 #if !ONLY_MSPACES
   5213 
   5214 void* dlrealloc(void* oldmem, size_t bytes) {
   5215   void* mem = 0;
   5216   if (oldmem == 0) {
   5217     mem = dlmalloc(bytes);
   5218   }
   5219   else if (bytes >= MAX_REQUEST) {
   5220     MALLOC_FAILURE_ACTION;
   5221   }
   5222 #ifdef REALLOC_ZERO_BYTES_FREES
   5223   else if (bytes == 0) {
   5224     dlfree(oldmem);
   5225   }
   5226 #endif /* REALLOC_ZERO_BYTES_FREES */
   5227   else {
   5228     size_t nb = request2size(bytes);
   5229     mchunkptr oldp = mem2chunk(oldmem);
   5230 #if ! FOOTERS
   5231     mstate m = gm;
   5232 #else /* FOOTERS */
   5233     mstate m = get_mstate_for(oldp);
   5234     if (!ok_magic(m)) {
   5235       USAGE_ERROR_ACTION(m, oldmem);
   5236       return 0;
   5237     }
   5238 #endif /* FOOTERS */
   5239     if (!PREACTION(m)) {
   5240       mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
   5241       POSTACTION(m);
   5242       if (newp != 0) {
   5243         check_inuse_chunk(m, newp);
   5244         mem = chunk2mem(newp);
   5245       }
   5246       else {
   5247         mem = internal_malloc(m, bytes);
   5248         if (mem != 0) {
   5249           size_t oc = chunksize(oldp) - overhead_for(oldp);
   5250           memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
   5251           internal_free(m, oldmem);
   5252         }
   5253       }
   5254     }
   5255   }
   5256   return mem;
   5257 }
   5258 
   5259 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
   5260   void* mem = 0;
   5261   if (oldmem != 0) {
   5262     if (bytes >= MAX_REQUEST) {
   5263       MALLOC_FAILURE_ACTION;
   5264     }
   5265     else {
   5266       size_t nb = request2size(bytes);
   5267       mchunkptr oldp = mem2chunk(oldmem);
   5268 #if ! FOOTERS
   5269       mstate m = gm;
   5270 #else /* FOOTERS */
   5271       mstate m = get_mstate_for(oldp);
   5272       if (!ok_magic(m)) {
   5273         USAGE_ERROR_ACTION(m, oldmem);
   5274         return 0;
   5275       }
   5276 #endif /* FOOTERS */
   5277       if (!PREACTION(m)) {
   5278         mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
   5279         POSTACTION(m);
   5280         if (newp == oldp) {
   5281           check_inuse_chunk(m, newp);
   5282           mem = oldmem;
   5283         }
   5284       }
   5285     }
   5286   }
   5287   return mem;
   5288 }
   5289 
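         /*
           Illustrative sketch (not part of this malloc): grow a buffer with
           dlrealloc_in_place first, so pointers into it remain valid when the
           extension succeeds, and fall back to an ordinary dlrealloc (which
           may move the block) otherwise.

             void* grow(void* p, size_t new_size, int* moved) {
               void* q;
               if (dlrealloc_in_place(p, new_size) != 0) {
                 *moved = 0;
                 return p;                 // resized without moving
               }
               q = dlrealloc(p, new_size);
               *moved = (q != 0 && q != p);
               return q;                   // 0 on failure; p is still valid then
             }
         */
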
   5290 void* dlmemalign(size_t alignment, size_t bytes) {
   5291   if (alignment <= MALLOC_ALIGNMENT) {
   5292     return dlmalloc(bytes);
   5293   }
   5294   return internal_memalign(gm, alignment, bytes);
   5295 }
   5296 
   5297 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
   5298   void* mem = 0;
   5299   if (alignment == MALLOC_ALIGNMENT)
   5300     mem = dlmalloc(bytes);
   5301   else {
   5302     size_t d = alignment / sizeof(void*);
   5303     size_t r = alignment % sizeof(void*);
   5304     if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0)
   5305       return EINVAL;
   5306     else if (bytes <= MAX_REQUEST - alignment) {
   5307       if (alignment <  MIN_CHUNK_SIZE)
   5308         alignment = MIN_CHUNK_SIZE;
   5309       mem = internal_memalign(gm, alignment, bytes);
   5310     }
   5311   }
   5312   if (mem == 0)
   5313     return ENOMEM;
   5314   else {
   5315     *pp = mem;
   5316     return 0;
   5317   }
   5318 }
   5319 
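         /*
           Alignment-check example (illustrative): dlposix_memalign requires
           alignment to be a power-of-two multiple of sizeof(void*).  With
           8-byte pointers, 8, 16, 32, ... are accepted; 24 fails the
           power-of-two test (d & (d-1) != 0) and 4 fails the multiple test
           (alignment % sizeof(void*) != 0), both returning EINVAL.
         */
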
   5320 void* dlvalloc(size_t bytes) {
   5321   size_t pagesz;
   5322   ensure_initialization();
   5323   pagesz = mparams.page_size;
   5324   return dlmemalign(pagesz, bytes);
   5325 }
   5326 
   5327 /* BEGIN android-changed: added overflow check */
   5328 void* dlpvalloc(size_t bytes) {
   5329   size_t pagesz;
   5330   size_t size;
   5331   ensure_initialization();
   5332   pagesz = mparams.page_size;
   5333   size = (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE);
   5334   if (size < bytes) {
   5335     return NULL;
   5336   }
   5337   return dlmemalign(pagesz, size);
   5338 }
   5339 /* END android-change */
   5340 
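         /*
           Rounding example (illustrative): with page_size == 4096,
             bytes == 1    -> size == 4096
             bytes == 4096 -> size == 4096
             bytes == 4097 -> size == 8192
           If bytes lies within a page of MAX_SIZE_T, the addition wraps and
           size ends up smaller than bytes, so the check above returns NULL
           rather than allocating a too-small block.
         */
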
   5341 void** dlindependent_calloc(size_t n_elements, size_t elem_size,
   5342                             void* chunks[]) {
   5343   size_t sz = elem_size; /* serves as 1-element array */
   5344   return ialloc(gm, n_elements, &sz, 3, chunks);
   5345 }
   5346 
   5347 void** dlindependent_comalloc(size_t n_elements, size_t sizes[],
   5348                               void* chunks[]) {
   5349   return ialloc(gm, n_elements, sizes, 0, chunks);
   5350 }
   5351 
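         /*
           Illustrative sketch (not part of this malloc): using
           dlindependent_comalloc to obtain three same-length arrays of
           different element types in a single request.  Each element may
           later be freed individually, or all at once with dlbulk_free.

             void make_arrays(size_t n) {
               size_t sizes[3];
               void*  chunks[3];
               sizes[0] = n * sizeof(int);
               sizes[1] = n * sizeof(double);
               sizes[2] = n * sizeof(char);
               if (dlindependent_comalloc(3, sizes, chunks) != 0) {
                 int*    ints    = (int*)    chunks[0];
                 double* doubles = (double*) chunks[1];
                 char*   chars   = (char*)   chunks[2];
                 (void)ints; (void)doubles; (void)chars;   // ... use them ...
               }
             }
         */
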
   5352 size_t dlbulk_free(void* array[], size_t nelem) {
   5353   return internal_bulk_free(gm, array, nelem);
   5354 }
   5355 
   5356 #if MALLOC_INSPECT_ALL
   5357 void dlmalloc_inspect_all(void(*handler)(void *start,
   5358                                          void *end,
   5359                                          size_t used_bytes,
   5360                                          void* callback_arg),
   5361                           void* arg) {
   5362   ensure_initialization();
   5363   if (!PREACTION(gm)) {
   5364     internal_inspect_all(gm, handler, arg);
   5365     POSTACTION(gm);
   5366   }
   5367 }
   5368 #endif /* MALLOC_INSPECT_ALL */
   5369 
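         /*
           Illustrative sketch (not part of this malloc): a handler for
           dlmalloc_inspect_all that tallies in-use blocks and bytes.
           used_bytes is zero for free regions, so those are skipped.
           Requires MALLOC_INSPECT_ALL.

             struct tally { size_t blocks; size_t bytes; };

             static void tally_handler(void* start, void* end,
                                       size_t used_bytes, void* arg) {
               struct tally* t = (struct tally*)arg;
               (void)start; (void)end;
               if (used_bytes != 0) {
                 t->blocks += 1;
                 t->bytes  += used_bytes;
               }
             }

             // ... elsewhere ...
             struct tally t = {0, 0};
             dlmalloc_inspect_all(tally_handler, &t);
         */
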
   5370 int dlmalloc_trim(size_t pad) {
   5371   int result = 0;
   5372   ensure_initialization();
   5373   if (!PREACTION(gm)) {
   5374     result = sys_trim(gm, pad);
   5375     POSTACTION(gm);
   5376   }
   5377   return result;
   5378 }
   5379 
   5380 size_t dlmalloc_footprint(void) {
   5381   return gm->footprint;
   5382 }
   5383 
   5384 size_t dlmalloc_max_footprint(void) {
   5385   return gm->max_footprint;
   5386 }
   5387 
   5388 size_t dlmalloc_footprint_limit(void) {
   5389   size_t maf = gm->footprint_limit;
   5390   return maf == 0 ? MAX_SIZE_T : maf;
   5391 }
   5392 
   5393 size_t dlmalloc_set_footprint_limit(size_t bytes) {
   5394   size_t result;  /* invert sense of 0 */
   5395   if (bytes == 0)
   5396     result = granularity_align(1); /* Use minimal size */
    5397   else if (bytes == MAX_SIZE_T)
   5398     result = 0;                    /* disable */
   5399   else
   5400     result = granularity_align(bytes);
   5401   return gm->footprint_limit = result;
   5402 }
   5403 
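         /*
           Usage note (illustrative): dlmalloc_set_footprint_limit((size_t)64 << 20)
           caps the total memory this malloc will obtain from the system at
           64MB (rounded to the allocation granularity).  Passing MAX_SIZE_T
           disables the limit; passing 0 requests the smallest nonzero limit.
         */
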
   5404 #if !NO_MALLINFO
   5405 struct mallinfo dlmallinfo(void) {
   5406   return internal_mallinfo(gm);
   5407 }
   5408 #endif /* NO_MALLINFO */
   5409 
   5410 #if !NO_MALLOC_STATS
   5411 void dlmalloc_stats() {
   5412   internal_malloc_stats(gm);
   5413 }
   5414 #endif /* NO_MALLOC_STATS */
   5415 
   5416 int dlmallopt(int param_number, int value) {
   5417   return change_mparam(param_number, value);
   5418 }
   5419 
   5420 /* BEGIN android-changed: added const */
   5421 size_t dlmalloc_usable_size(const void* mem) {
   5422 /* END android-change */
   5423   if (mem != 0) {
   5424     mchunkptr p = mem2chunk(mem);
   5425     if (is_inuse(p))
   5426       return chunksize(p) - overhead_for(p);
   5427   }
   5428   return 0;
   5429 }
   5430 
   5431 #endif /* !ONLY_MSPACES */
   5432 
   5433 /* ----------------------------- user mspaces ---------------------------- */
   5434 
   5435 #if MSPACES
   5436 
   5437 static mstate init_user_mstate(char* tbase, size_t tsize) {
   5438   size_t msize = pad_request(sizeof(struct malloc_state));
   5439   mchunkptr mn;
   5440   mchunkptr msp = align_as_chunk(tbase);
   5441   mstate m = (mstate)(chunk2mem(msp));
   5442   memset(m, 0, msize);
   5443   (void)INITIAL_LOCK(&m->mutex);
   5444   msp->head = (msize|INUSE_BITS);
   5445   m->seg.base = m->least_addr = tbase;
   5446   m->seg.size = m->footprint = m->max_footprint = tsize;
   5447   m->magic = mparams.magic;
   5448   m->release_checks = MAX_RELEASE_CHECK_RATE;
   5449   m->mflags = mparams.default_mflags;
   5450   m->extp = 0;
   5451   m->exts = 0;
   5452   disable_contiguous(m);
   5453   init_bins(m);
   5454   mn = next_chunk(mem2chunk(m));
   5455   init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE);
   5456   check_top_chunk(m, m->top);
   5457   return m;
   5458 }
   5459 
   5460 mspace create_mspace(size_t capacity, int locked) {
   5461   mstate m = 0;
   5462   size_t msize;
   5463   ensure_initialization();
   5464   msize = pad_request(sizeof(struct malloc_state));
   5465   if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
   5466     size_t rs = ((capacity == 0)? mparams.granularity :
   5467                  (capacity + TOP_FOOT_SIZE + msize));
   5468     size_t tsize = granularity_align(rs);
   5469     char* tbase = (char*)(CALL_MMAP(tsize));
   5470     if (tbase != CMFAIL) {
   5471       m = init_user_mstate(tbase, tsize);
   5472       m->seg.sflags = USE_MMAP_BIT;
   5473       set_lock(m, locked);
   5474     }
   5475   }
   5476   return (mspace)m;
   5477 }
   5478 
   5479 mspace create_mspace_with_base(void* base, size_t capacity, int locked) {
   5480   mstate m = 0;
   5481   size_t msize;
   5482   ensure_initialization();
   5483   msize = pad_request(sizeof(struct malloc_state));
   5484   if (capacity > msize + TOP_FOOT_SIZE &&
   5485       capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) {
   5486     m = init_user_mstate((char*)base, capacity);
   5487     m->seg.sflags = EXTERN_BIT;
   5488     set_lock(m, locked);
   5489   }
   5490   return (mspace)m;
   5491 }
   5492 
   5493 int mspace_track_large_chunks(mspace msp, int enable) {
   5494   int ret = 0;
   5495   mstate ms = (mstate)msp;
   5496   if (!PREACTION(ms)) {
   5497     if (!use_mmap(ms)) {
   5498       ret = 1;
   5499     }
   5500     if (!enable) {
   5501       enable_mmap(ms);
   5502     } else {
   5503       disable_mmap(ms);
   5504     }
   5505     POSTACTION(ms);
   5506   }
   5507   return ret;
   5508 }
   5509 
   5510 size_t destroy_mspace(mspace msp) {
   5511   size_t freed = 0;
   5512   mstate ms = (mstate)msp;
   5513   if (ok_magic(ms)) {
   5514     msegmentptr sp = &ms->seg;
   5515     (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */
   5516     while (sp != 0) {
   5517       char* base = sp->base;
   5518       size_t size = sp->size;
   5519       flag_t flag = sp->sflags;
   5520       (void)base; /* placate people compiling -Wunused-variable */
   5521       sp = sp->next;
   5522       if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) &&
   5523           CALL_MUNMAP(base, size) == 0)
   5524         freed += size;
   5525     }
   5526   }
   5527   else {
   5528     USAGE_ERROR_ACTION(ms,ms);
   5529   }
   5530   return freed;
   5531 }
   5532 
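         /*
           Illustrative sketch (not part of this malloc): a private arena via
           the mspace API -- create a space, allocate from it, and release
           everything at once by destroying the space.  Requires MSPACES.

             mspace arena = create_mspace(0, 1);   // default capacity, locked
             if (arena != 0) {
               void* a = mspace_malloc(arena, 128);
               void* b = mspace_malloc(arena, 1024);
               (void)a; (void)b;                   // ... use the memory ...
               destroy_mspace(arena);              // releases a, b, and the arena
             }
         */
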
   5533 /*
   5534   mspace versions of routines are near-clones of the global
   5535   versions. This is not so nice but better than the alternatives.
   5536 */
   5537 
   5538 void* mspace_malloc(mspace msp, size_t bytes) {
   5539   mstate ms = (mstate)msp;
   5540   if (!ok_magic(ms)) {
   5541     USAGE_ERROR_ACTION(ms,ms);
   5542     return 0;
   5543   }
   5544   if (!PREACTION(ms)) {
   5545     void* mem;
   5546     size_t nb;
   5547     if (bytes <= MAX_SMALL_REQUEST) {
   5548       bindex_t idx;
   5549       binmap_t smallbits;
   5550       nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes);
   5551       idx = small_index(nb);
   5552       smallbits = ms->smallmap >> idx;
   5553 
   5554       if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */
   5555         mchunkptr b, p;
   5556         idx += ~smallbits & 1;       /* Uses next bin if idx empty */
   5557         b = smallbin_at(ms, idx);
   5558         p = b->fd;
   5559         assert(chunksize(p) == small_index2size(idx));
   5560         unlink_first_small_chunk(ms, b, p, idx);
   5561         set_inuse_and_pinuse(ms, p, small_index2size(idx));
   5562         mem = chunk2mem(p);
   5563         check_malloced_chunk(ms, mem, nb);
   5564         goto postaction;
   5565       }
   5566 
   5567       else if (nb > ms->dvsize) {
   5568         if (smallbits != 0) { /* Use chunk in next nonempty smallbin */
   5569           mchunkptr b, p, r;
   5570           size_t rsize;
   5571           bindex_t i;
   5572           binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx));
   5573           binmap_t leastbit = least_bit(leftbits);
   5574           compute_bit2idx(leastbit, i);
   5575           b = smallbin_at(ms, i);
   5576           p = b->fd;
   5577           assert(chunksize(p) == small_index2size(i));
   5578           unlink_first_small_chunk(ms, b, p, i);
   5579           rsize = small_index2size(i) - nb;
   5580           /* Fit here cannot be remainderless if 4byte sizes */
   5581           if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE)
   5582             set_inuse_and_pinuse(ms, p, small_index2size(i));
   5583           else {
   5584             set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
   5585             r = chunk_plus_offset(p, nb);
   5586             set_size_and_pinuse_of_free_chunk(r, rsize);
   5587             replace_dv(ms, r, rsize);
   5588           }
   5589           mem = chunk2mem(p);
   5590           check_malloced_chunk(ms, mem, nb);
   5591           goto postaction;
   5592         }
   5593 
   5594         else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) {
   5595           check_malloced_chunk(ms, mem, nb);
   5596           goto postaction;
   5597         }
   5598       }
   5599     }
   5600     else if (bytes >= MAX_REQUEST)
   5601       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
   5602     else {
   5603       nb = pad_request(bytes);
   5604       if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) {
   5605         check_malloced_chunk(ms, mem, nb);
   5606         goto postaction;
   5607       }
   5608     }
   5609 
   5610     if (nb <= ms->dvsize) {
   5611       size_t rsize = ms->dvsize - nb;
   5612       mchunkptr p = ms->dv;
   5613       if (rsize >= MIN_CHUNK_SIZE) { /* split dv */
   5614         mchunkptr r = ms->dv = chunk_plus_offset(p, nb);
   5615         ms->dvsize = rsize;
   5616         set_size_and_pinuse_of_free_chunk(r, rsize);
   5617         set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
   5618       }
   5619       else { /* exhaust dv */
   5620         size_t dvs = ms->dvsize;
   5621         ms->dvsize = 0;
   5622         ms->dv = 0;
   5623         set_inuse_and_pinuse(ms, p, dvs);
   5624       }
   5625       mem = chunk2mem(p);
   5626       check_malloced_chunk(ms, mem, nb);
   5627       goto postaction;
   5628     }
   5629 
   5630     else if (nb < ms->topsize) { /* Split top */
   5631       size_t rsize = ms->topsize -= nb;
   5632       mchunkptr p = ms->top;
   5633       mchunkptr r = ms->top = chunk_plus_offset(p, nb);
   5634       r->head = rsize | PINUSE_BIT;
   5635       set_size_and_pinuse_of_inuse_chunk(ms, p, nb);
   5636       mem = chunk2mem(p);
   5637       check_top_chunk(ms, ms->top);
   5638       check_malloced_chunk(ms, mem, nb);
   5639       goto postaction;
   5640     }
   5641 
   5642     mem = sys_alloc(ms, nb);
   5643 
   5644   postaction:
   5645     POSTACTION(ms);
   5646     return mem;
   5647   }
   5648 
   5649   return 0;
   5650 }
   5651 
   5652 void mspace_free(mspace msp, void* mem) {
   5653   if (mem != 0) {
   5654     mchunkptr p  = mem2chunk(mem);
   5655 #if FOOTERS
   5656     mstate fm = get_mstate_for(p);
   5657     (void)msp; /* placate people compiling -Wunused */
   5658 #else /* FOOTERS */
   5659     mstate fm = (mstate)msp;
   5660 #endif /* FOOTERS */
   5661     if (!ok_magic(fm)) {
   5662       USAGE_ERROR_ACTION(fm, p);
   5663       return;
   5664     }
   5665     if (!PREACTION(fm)) {
   5666       check_inuse_chunk(fm, p);
   5667       if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
   5668         size_t psize = chunksize(p);
   5669         mchunkptr next = chunk_plus_offset(p, psize);
   5670         if (!pinuse(p)) {
   5671           size_t prevsize = p->prev_foot;
   5672           if (is_mmapped(p)) {
   5673             psize += prevsize + MMAP_FOOT_PAD;
   5674             if (CALL_MUNMAP((char*)p - prevsize, psize) == 0)
   5675               fm->footprint -= psize;
   5676             goto postaction;
   5677           }
   5678           else {
   5679             mchunkptr prev = chunk_minus_offset(p, prevsize);
   5680             psize += prevsize;
   5681             p = prev;
   5682             if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */
   5683               if (p != fm->dv) {
   5684                 unlink_chunk(fm, p, prevsize);
   5685               }
   5686               else if ((next->head & INUSE_BITS) == INUSE_BITS) {
   5687                 fm->dvsize = psize;
   5688                 set_free_with_pinuse(p, psize, next);
   5689                 goto postaction;
   5690               }
   5691             }
   5692             else
   5693               goto erroraction;
   5694           }
   5695         }
   5696 
   5697         if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) {
   5698           if (!cinuse(next)) {  /* consolidate forward */
   5699             if (next == fm->top) {
   5700               size_t tsize = fm->topsize += psize;
   5701               fm->top = p;
   5702               p->head = tsize | PINUSE_BIT;
   5703               if (p == fm->dv) {
   5704                 fm->dv = 0;
   5705                 fm->dvsize = 0;
   5706               }
   5707               if (should_trim(fm, tsize))
   5708                 sys_trim(fm, 0);
   5709               goto postaction;
   5710             }
   5711             else if (next == fm->dv) {
   5712               size_t dsize = fm->dvsize += psize;
   5713               fm->dv = p;
   5714               set_size_and_pinuse_of_free_chunk(p, dsize);
   5715               goto postaction;
   5716             }
   5717             else {
   5718               size_t nsize = chunksize(next);
   5719               psize += nsize;
   5720               unlink_chunk(fm, next, nsize);
   5721               set_size_and_pinuse_of_free_chunk(p, psize);
   5722               if (p == fm->dv) {
   5723                 fm->dvsize = psize;
   5724                 goto postaction;
   5725               }
   5726             }
   5727           }
   5728           else
   5729             set_free_with_pinuse(p, psize, next);
   5730 
   5731           if (is_small(psize)) {
   5732             insert_small_chunk(fm, p, psize);
   5733             check_free_chunk(fm, p);
   5734           }
   5735           else {
   5736             tchunkptr tp = (tchunkptr)p;
   5737             insert_large_chunk(fm, tp, psize);
   5738             check_free_chunk(fm, p);
   5739             if (--fm->release_checks == 0)
   5740               release_unused_segments(fm);
   5741           }
   5742           goto postaction;
   5743         }
   5744       }
   5745     erroraction:
   5746       USAGE_ERROR_ACTION(fm, p);
   5747     postaction:
   5748       POSTACTION(fm);
   5749     }
   5750   }
   5751 }
   5752 
   5753 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) {
   5754   void* mem;
   5755   size_t req = 0;
   5756   mstate ms = (mstate)msp;
   5757   if (!ok_magic(ms)) {
   5758     USAGE_ERROR_ACTION(ms,ms);
   5759     return 0;
   5760   }
   5761   if (n_elements != 0) {
   5762     req = n_elements * elem_size;
   5763     if (((n_elements | elem_size) & ~(size_t)0xffff) &&
   5764         (req / n_elements != elem_size))
   5765       req = MAX_SIZE_T; /* force downstream failure on overflow */
   5766   }
   5767   mem = internal_malloc(ms, req);
   5768   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
   5769     memset(mem, 0, req);
   5770   return mem;
   5771 }
   5772 
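         /*
           Overflow-check note (illustrative): when both n_elements and
           elem_size fit in 16 bits their product fits in 32 bits, so with a
           32-bit (or wider) size_t no wraparound is possible and the division
           test is skipped.  Otherwise the division catches wraparound; e.g.
           with 32-bit size_t, n_elements == 0x10000 and elem_size == 0x10001
           wrap to req == 0x10000, and req / n_elements == 1 != elem_size, so
           req is forced to MAX_SIZE_T and the allocation fails downstream.
         */
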
   5773 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) {
   5774   void* mem = 0;
   5775   if (oldmem == 0) {
   5776     mem = mspace_malloc(msp, bytes);
   5777   }
   5778   else if (bytes >= MAX_REQUEST) {
   5779     MALLOC_FAILURE_ACTION;
   5780   }
   5781 #ifdef REALLOC_ZERO_BYTES_FREES
   5782   else if (bytes == 0) {
   5783     mspace_free(msp, oldmem);
   5784   }
   5785 #endif /* REALLOC_ZERO_BYTES_FREES */
   5786   else {
   5787     size_t nb = request2size(bytes);
   5788     mchunkptr oldp = mem2chunk(oldmem);
   5789 #if ! FOOTERS
   5790     mstate m = (mstate)msp;
   5791 #else /* FOOTERS */
   5792     mstate m = get_mstate_for(oldp);
   5793     if (!ok_magic(m)) {
   5794       USAGE_ERROR_ACTION(m, oldmem);
   5795       return 0;
   5796     }
   5797 #endif /* FOOTERS */
   5798     if (!PREACTION(m)) {
   5799       mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1);
   5800       POSTACTION(m);
   5801       if (newp != 0) {
   5802         check_inuse_chunk(m, newp);
   5803         mem = chunk2mem(newp);
   5804       }
   5805       else {
   5806         mem = mspace_malloc(m, bytes);
   5807         if (mem != 0) {
   5808           size_t oc = chunksize(oldp) - overhead_for(oldp);
   5809           memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
   5810           mspace_free(m, oldmem);
   5811         }
   5812       }
   5813     }
   5814   }
   5815   return mem;
   5816 }
   5817 
   5818 void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) {
   5819   void* mem = 0;
   5820   if (oldmem != 0) {
   5821     if (bytes >= MAX_REQUEST) {
   5822       MALLOC_FAILURE_ACTION;
   5823     }
   5824     else {
   5825       size_t nb = request2size(bytes);
   5826       mchunkptr oldp = mem2chunk(oldmem);
   5827 #if ! FOOTERS
   5828       mstate m = (mstate)msp;
   5829 #else /* FOOTERS */
   5830       mstate m = get_mstate_for(oldp);
   5831       (void)msp; /* placate people compiling -Wunused */
   5832       if (!ok_magic(m)) {
   5833         USAGE_ERROR_ACTION(m, oldmem);
   5834         return 0;
   5835       }
   5836 #endif /* FOOTERS */
   5837       if (!PREACTION(m)) {
   5838         mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0);
   5839         POSTACTION(m);
   5840         if (newp == oldp) {
   5841           check_inuse_chunk(m, newp);
   5842           mem = oldmem;
   5843         }
   5844       }
   5845     }
   5846   }
   5847   return mem;
   5848 }
   5849 
   5850 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) {
   5851   mstate ms = (mstate)msp;
   5852   if (!ok_magic(ms)) {
   5853     USAGE_ERROR_ACTION(ms,ms);
   5854     return 0;
   5855   }
   5856   if (alignment <= MALLOC_ALIGNMENT)
   5857     return mspace_malloc(msp, bytes);
   5858   return internal_memalign(ms, alignment, bytes);
   5859 }
   5860 
   5861 void** mspace_independent_calloc(mspace msp, size_t n_elements,
   5862                                  size_t elem_size, void* chunks[]) {
   5863   size_t sz = elem_size; /* serves as 1-element array */
   5864   mstate ms = (mstate)msp;
   5865   if (!ok_magic(ms)) {
   5866     USAGE_ERROR_ACTION(ms,ms);
   5867     return 0;
   5868   }
   5869   return ialloc(ms, n_elements, &sz, 3, chunks);
   5870 }
   5871 
   5872 void** mspace_independent_comalloc(mspace msp, size_t n_elements,
   5873                                    size_t sizes[], void* chunks[]) {
   5874   mstate ms = (mstate)msp;
   5875   if (!ok_magic(ms)) {
   5876     USAGE_ERROR_ACTION(ms,ms);
   5877     return 0;
   5878   }
   5879   return ialloc(ms, n_elements, sizes, 0, chunks);
   5880 }
   5881 
   5882 size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) {
   5883   return internal_bulk_free((mstate)msp, array, nelem);
   5884 }
   5885 
   5886 #if MALLOC_INSPECT_ALL
   5887 void mspace_inspect_all(mspace msp,
   5888                         void(*handler)(void *start,
   5889                                        void *end,
   5890                                        size_t used_bytes,
   5891                                        void* callback_arg),
   5892                         void* arg) {
   5893   mstate ms = (mstate)msp;
   5894   if (ok_magic(ms)) {
   5895     if (!PREACTION(ms)) {
   5896       internal_inspect_all(ms, handler, arg);
   5897       POSTACTION(ms);
   5898     }
   5899   }
   5900   else {
   5901     USAGE_ERROR_ACTION(ms,ms);
   5902   }
   5903 }
   5904 #endif /* MALLOC_INSPECT_ALL */
   5905 
   5906 int mspace_trim(mspace msp, size_t pad) {
   5907   int result = 0;
   5908   mstate ms = (mstate)msp;
   5909   if (ok_magic(ms)) {
   5910     if (!PREACTION(ms)) {
   5911       result = sys_trim(ms, pad);
   5912       POSTACTION(ms);
   5913     }
   5914   }
   5915   else {
   5916     USAGE_ERROR_ACTION(ms,ms);
   5917   }
   5918   return result;
   5919 }
   5920 
   5921 #if !NO_MALLOC_STATS
   5922 void mspace_malloc_stats(mspace msp) {
   5923   mstate ms = (mstate)msp;
   5924   if (ok_magic(ms)) {
   5925     internal_malloc_stats(ms);
   5926   }
   5927   else {
   5928     USAGE_ERROR_ACTION(ms,ms);
   5929   }
   5930 }
   5931 #endif /* NO_MALLOC_STATS */
   5932 
   5933 size_t mspace_footprint(mspace msp) {
   5934   size_t result = 0;
   5935   mstate ms = (mstate)msp;
   5936   if (ok_magic(ms)) {
   5937     result = ms->footprint;
   5938   }
   5939   else {
   5940     USAGE_ERROR_ACTION(ms,ms);
   5941   }
   5942   return result;
   5943 }
   5944 
   5945 size_t mspace_max_footprint(mspace msp) {
   5946   size_t result = 0;
   5947   mstate ms = (mstate)msp;
   5948   if (ok_magic(ms)) {
   5949     result = ms->max_footprint;
   5950   }
   5951   else {
   5952     USAGE_ERROR_ACTION(ms,ms);
   5953   }
   5954   return result;
   5955 }
   5956 
   5957 size_t mspace_footprint_limit(mspace msp) {
   5958   size_t result = 0;
   5959   mstate ms = (mstate)msp;
   5960   if (ok_magic(ms)) {
   5961     size_t maf = ms->footprint_limit;
   5962     result = (maf == 0) ? MAX_SIZE_T : maf;
   5963   }
   5964   else {
   5965     USAGE_ERROR_ACTION(ms,ms);
   5966   }
   5967   return result;
   5968 }
   5969 
   5970 size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
   5971   size_t result = 0;
   5972   mstate ms = (mstate)msp;
   5973   if (ok_magic(ms)) {
   5974     if (bytes == 0)
   5975       result = granularity_align(1); /* Use minimal size */
    5976     else if (bytes == MAX_SIZE_T)
   5977       result = 0;                    /* disable */
   5978     else
   5979       result = granularity_align(bytes);
   5980     ms->footprint_limit = result;
   5981   }
   5982   else {
   5983     USAGE_ERROR_ACTION(ms,ms);
   5984   }
   5985   return result;
   5986 }
   5987 
   5988 #if !NO_MALLINFO
   5989 struct mallinfo mspace_mallinfo(mspace msp) {
   5990   mstate ms = (mstate)msp;
   5991   if (!ok_magic(ms)) {
   5992     USAGE_ERROR_ACTION(ms,ms);
   5993   }
   5994   return internal_mallinfo(ms);
   5995 }
   5996 #endif /* NO_MALLINFO */
   5997 
   5998 size_t mspace_usable_size(const void* mem) {
   5999   if (mem != 0) {
   6000     mchunkptr p = mem2chunk(mem);
   6001     if (is_inuse(p))
   6002       return chunksize(p) - overhead_for(p);
   6003   }
   6004   return 0;
   6005 }
   6006 
   6007 int mspace_mallopt(int param_number, int value) {
   6008   return change_mparam(param_number, value);
   6009 }
   6010 
   6011 #endif /* MSPACES */
   6012 
   6013 
   6014 /* -------------------- Alternative MORECORE functions ------------------- */
   6015 
   6016 /*
   6017   Guidelines for creating a custom version of MORECORE:
   6018 
   6019   * For best performance, MORECORE should allocate in multiples of pagesize.
   6020   * MORECORE may allocate more memory than requested. (Or even less,
   6021       but this will usually result in a malloc failure.)
   6022   * MORECORE must not allocate memory when given argument zero, but
   6023       instead return one past the end address of memory from previous
   6024       nonzero call.
   6025   * For best performance, consecutive calls to MORECORE with positive
   6026       arguments should return increasing addresses, indicating that
   6027       space has been contiguously extended.
   6028   * Even though consecutive calls to MORECORE need not return contiguous
   6029       addresses, it must be OK for malloc'ed chunks to span multiple
   6030       regions in those cases where they do happen to be contiguous.
   6031   * MORECORE need not handle negative arguments -- it may instead
   6032       just return MFAIL when given negative arguments.
   6033       Negative arguments are always multiples of pagesize. MORECORE
   6034       must not misinterpret negative args as large positive unsigned
   6035       args. You can suppress all such calls from even occurring by defining
    6036       MORECORE_CANNOT_TRIM.
   6037 
   6038   As an example alternative MORECORE, here is a custom allocator
    6039   kindly contributed for classic (pre-OS X) Mac OS.  It uses virtually but not
   6040   necessarily physically contiguous non-paged memory (locked in,
   6041   present and won't get swapped out).  You can use it by uncommenting
   6042   this section, adding some #includes, and setting up the appropriate
   6043   defines above:
   6044 
   6045       #define MORECORE osMoreCore
   6046 
   6047   There is also a shutdown routine that should somehow be called for
   6048   cleanup upon program exit.
   6049 
   6050   #define MAX_POOL_ENTRIES 100
   6051   #define MINIMUM_MORECORE_SIZE  (64 * 1024U)
   6052   static int next_os_pool;
   6053   void *our_os_pools[MAX_POOL_ENTRIES];
   6054 
   6055   void *osMoreCore(int size)
   6056   {
   6057     void *ptr = 0;
   6058     static void *sbrk_top = 0;
   6059 
   6060     if (size > 0)
   6061     {
   6062       if (size < MINIMUM_MORECORE_SIZE)
   6063          size = MINIMUM_MORECORE_SIZE;
   6064       if (CurrentExecutionLevel() == kTaskLevel)
   6065          ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
   6066       if (ptr == 0)
   6067       {
   6068         return (void *) MFAIL;
   6069       }
   6070       // save ptrs so they can be freed during cleanup
   6071       our_os_pools[next_os_pool] = ptr;
   6072       next_os_pool++;
   6073       ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
   6074       sbrk_top = (char *) ptr + size;
   6075       return ptr;
   6076     }
   6077     else if (size < 0)
   6078     {
   6079       // we don't currently support shrink behavior
   6080       return (void *) MFAIL;
   6081     }
   6082     else
   6083     {
   6084       return sbrk_top;
   6085     }
   6086   }
   6087 
   6088   // cleanup any allocated memory pools
   6089   // called as last thing before shutting down driver
   6090 
   6091   void osCleanupMem(void)
   6092   {
   6093     void **ptr;
   6094 
   6095     for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
   6096       if (*ptr)
   6097       {
   6098          PoolDeallocate(*ptr);
   6099          *ptr = 0;
   6100       }
   6101   }
   6102 
   6103 */
   6104 
   6105 
   6106 /* -----------------------------------------------------------------------
   6107 History:
   6108     v2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
   6109       * fix bad comparison in dlposix_memalign
   6110       * don't reuse adjusted asize in sys_alloc
   6111       * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion
   6112       * reduce compiler warnings -- thanks to all who reported/suggested these
   6113 
   6114     v2.8.5 Sun May 22 10:26:02 2011  Doug Lea  (dl at gee)
   6115       * Always perform unlink checks unless INSECURE
   6116       * Add posix_memalign.
   6117       * Improve realloc to expand in more cases; expose realloc_in_place.
   6118         Thanks to Peter Buhr for the suggestion.
   6119       * Add footprint_limit, inspect_all, bulk_free. Thanks
   6120         to Barry Hayes and others for the suggestions.
   6121       * Internal refactorings to avoid calls while holding locks
   6122       * Use non-reentrant locks by default. Thanks to Roland McGrath
   6123         for the suggestion.
   6124       * Small fixes to mspace_destroy, reset_on_error.
   6125       * Various configuration extensions/changes. Thanks
   6126          to all who contributed these.
   6127 
   6128     V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu)
   6129       * Update Creative Commons URL
   6130 
   6131     V2.8.4 Wed May 27 09:56:23 2009  Doug Lea  (dl at gee)
   6132       * Use zeros instead of prev foot for is_mmapped
   6133       * Add mspace_track_large_chunks; thanks to Jean Brouwers
   6134       * Fix set_inuse in internal_realloc; thanks to Jean Brouwers
   6135       * Fix insufficient sys_alloc padding when using 16byte alignment
   6136       * Fix bad error check in mspace_footprint
   6137       * Adaptations for ptmalloc; thanks to Wolfram Gloger.
   6138       * Reentrant spin locks; thanks to Earl Chew and others
   6139       * Win32 improvements; thanks to Niall Douglas and Earl Chew
   6140       * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options
   6141       * Extension hook in malloc_state
   6142       * Various small adjustments to reduce warnings on some compilers
   6143       * Various configuration extensions/changes for more platforms. Thanks
   6144          to all who contributed these.
   6145 
   6146     V2.8.3 Thu Sep 22 11:16:32 2005  Doug Lea  (dl at gee)
   6147       * Add max_footprint functions
   6148       * Ensure all appropriate literals are size_t
   6149       * Fix conditional compilation problem for some #define settings
   6150       * Avoid concatenating segments with the one provided
   6151         in create_mspace_with_base
   6152       * Rename some variables to avoid compiler shadowing warnings
   6153       * Use explicit lock initialization.
   6154       * Better handling of sbrk interference.
   6155       * Simplify and fix segment insertion, trimming and mspace_destroy
   6156       * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x
   6157       * Thanks especially to Dennis Flanagan for help on these.
   6158 
   6159     V2.8.2 Sun Jun 12 16:01:10 2005  Doug Lea  (dl at gee)
   6160       * Fix memalign brace error.
   6161 
   6162     V2.8.1 Wed Jun  8 16:11:46 2005  Doug Lea  (dl at gee)
   6163       * Fix improper #endif nesting in C++
   6164       * Add explicit casts needed for C++
   6165 
   6166     V2.8.0 Mon May 30 14:09:02 2005  Doug Lea  (dl at gee)
   6167       * Use trees for large bins
   6168       * Support mspaces
   6169       * Use segments to unify sbrk-based and mmap-based system allocation,
   6170         removing need for emulation on most platforms without sbrk.
   6171       * Default safety checks
   6172       * Optional footer checks. Thanks to William Robertson for the idea.
   6173       * Internal code refactoring
   6174       * Incorporate suggestions and platform-specific changes.
   6175         Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas,
   6176         Aaron Bachmann,  Emery Berger, and others.
   6177       * Speed up non-fastbin processing enough to remove fastbins.
   6178       * Remove useless cfree() to avoid conflicts with other apps.
   6179       * Remove internal memcpy, memset. Compilers handle builtins better.
   6180       * Remove some options that no one ever used and rename others.
   6181 
   6182     V2.7.2 Sat Aug 17 09:07:30 2002  Doug Lea  (dl at gee)
   6183       * Fix malloc_state bitmap array misdeclaration
   6184 
   6185     V2.7.1 Thu Jul 25 10:58:03 2002  Doug Lea  (dl at gee)
   6186       * Allow tuning of FIRST_SORTED_BIN_SIZE
   6187       * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte.
   6188       * Better detection and support for non-contiguousness of MORECORE.
   6189         Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger
   6190       * Bypass most of malloc if no frees. Thanks To Emery Berger.
    6191       * Fix freeing of old top non-contiguous chunk in sysmalloc.
   6192       * Raised default trim and map thresholds to 256K.
   6193       * Fix mmap-related #defines. Thanks to Lubos Lunak.
   6194       * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield.
   6195       * Branch-free bin calculation
   6196       * Default trim and mmap thresholds now 256K.
   6197 
   6198     V2.7.0 Sun Mar 11 14:14:06 2001  Doug Lea  (dl at gee)
   6199       * Introduce independent_comalloc and independent_calloc.
   6200         Thanks to Michael Pachos for motivation and help.
   6201       * Make optional .h file available
   6202       * Allow > 2GB requests on 32bit systems.
   6203       * new WIN32 sbrk, mmap, munmap, lock code from <Walter (at) GeNeSys-e.de>.
   6204         Thanks also to Andreas Mueller <a.mueller at paradatec.de>,
   6205         and Anonymous.
   6206       * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
   6207         helping test this.)
   6208       * memalign: check alignment arg
   6209       * realloc: don't try to shift chunks backwards, since this
   6210         leads to  more fragmentation in some programs and doesn't
   6211         seem to help in any others.
   6212       * Collect all cases in malloc requiring system memory into sysmalloc
   6213       * Use mmap as backup to sbrk
   6214       * Place all internal state in malloc_state
   6215       * Introduce fastbins (although similar to 2.5.1)
   6216       * Many minor tunings and cosmetic improvements
   6217       * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
   6218       * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
   6219         Thanks to Tony E. Bennett <tbennett (at) nvidia.com> and others.
   6220       * Include errno.h to support default failure action.
   6221 
   6222     V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
   6223       * return null for negative arguments
    6224       * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
   6225          * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
   6226           (e.g. WIN32 platforms)
   6227          * Cleanup header file inclusion for WIN32 platforms
   6228          * Cleanup code to avoid Microsoft Visual C++ compiler complaints
   6229          * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
   6230            memory allocation routines
   6231          * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
   6232          * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
   6233            usage of 'assert' in non-WIN32 code
   6234          * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
   6235            avoid infinite loop
   6236       * Always call 'fREe()' rather than 'free()'
   6237 
   6238     V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
   6239       * Fixed ordering problem with boundary-stamping
   6240 
   6241     V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
   6242       * Added pvalloc, as recommended by H.J. Liu
   6243       * Added 64bit pointer support mainly from Wolfram Gloger
   6244       * Added anonymously donated WIN32 sbrk emulation
   6245       * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
   6246       * malloc_extend_top: fix mask error that caused wastage after
   6247         foreign sbrks
   6248       * Add linux mremap support code from HJ Liu
   6249 
   6250     V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
   6251       * Integrated most documentation with the code.
   6252       * Add support for mmap, with help from
   6253         Wolfram Gloger (Gloger (at) lrz.uni-muenchen.de).
   6254       * Use last_remainder in more cases.
   6255       * Pack bins using idea from  colin (at) nyx10.cs.du.edu
    6256       * Use ordered bins instead of best-fit threshold
   6257       * Eliminate block-local decls to simplify tracing and debugging.
   6258       * Support another case of realloc via move into top
    6259       * Fix error occurring when initial sbrk_base not word-aligned.
   6260       * Rely on page size for units instead of SBRK_UNIT to
   6261         avoid surprises about sbrk alignment conventions.
   6262       * Add mallinfo, mallopt. Thanks to Raymond Nijssen
   6263         (raymond (at) es.ele.tue.nl) for the suggestion.
   6264       * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
   6265       * More precautions for cases where other routines call sbrk,
   6266         courtesy of Wolfram Gloger (Gloger (at) lrz.uni-muenchen.de).
   6267       * Added macros etc., allowing use in linux libc from
   6268         H.J. Lu (hjl (at) gnu.ai.mit.edu)
   6269       * Inverted this history list
   6270 
   6271     V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
   6272       * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
   6273       * Removed all preallocation code since under current scheme
   6274         the work required to undo bad preallocations exceeds
   6275         the work saved in good cases for most test programs.
   6276       * No longer use return list or unconsolidated bins since
   6277         no scheme using them consistently outperforms those that don't
   6278         given above changes.
   6279       * Use best fit for very large chunks to prevent some worst-cases.
   6280       * Added some support for debugging
   6281 
   6282     V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
   6283       * Removed footers when chunks are in use. Thanks to
   6284         Paul Wilson (wilson (at) cs.texas.edu) for the suggestion.
   6285 
   6286     V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
   6287       * Added malloc_trim, with help from Wolfram Gloger
   6288         (wmglo (at) Dent.MED.Uni-Muenchen.DE).
   6289 
   6290     V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)
   6291 
   6292     V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
   6293       * realloc: try to expand in both directions
   6294       * malloc: swap order of clean-bin strategy;
   6295       * realloc: only conditionally expand backwards
   6296       * Try not to scavenge used bins
   6297       * Use bin counts as a guide to preallocation
   6298       * Occasionally bin return list chunks in first scan
   6299       * Add a few optimizations from colin (at) nyx10.cs.du.edu
   6300 
   6301     V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
   6302       * faster bin computation & slightly different binning
   6303       * merged all consolidations to one part of malloc proper
   6304          (eliminating old malloc_find_space & malloc_clean_bin)
   6305       * Scan 2 returns chunks (not just 1)
   6306       * Propagate failure in realloc if malloc returns 0
   6307       * Add stuff to allow compilation on non-ANSI compilers
   6308           from kpv (at) research.att.com
   6309 
   6310     V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
   6311       * removed potential for odd address access in prev_chunk
   6312       * removed dependency on getpagesize.h
   6313       * misc cosmetics and a bit more internal documentation
   6314       * anticosmetics: mangled names in macros to evade debugger strangeness
   6315       * tested on sparc, hp-700, dec-mips, rs6000
   6316           with gcc & native cc (hp, dec only) allowing
   6317           Detlefs & Zorn comparison study (in SIGPLAN Notices.)
   6318 
   6319     Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
   6320       * Based loosely on libg++-1.2X malloc. (It retains some of the overall
   6321          structure of old version,  but most details differ.)
   6322 
   6323 */
   6324