/*
  This is a version (aka dlmalloc) of malloc/free/realloc written by
  Doug Lea and released to the public domain, as explained at
  http://creativecommons.org/publicdomain/zero/1.0/ Send questions,
  comments, complaints, performance data, etc to dl (at) cs.oswego.edu

* Version 2.8.6 Wed Aug 29 06:57:58 2012  Doug Lea
  Note: There may be an updated version of this malloc obtainable at
          ftp://gee.cs.oswego.edu/pub/misc/malloc.c
        Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O3), and link it into another program. All of
  the compile-time options default to reasonable values for use on
  most platforms.  You might later want to step through various
  compile-time and dynamic tuning options.

  For convenience, an include file for code using this malloc is at:
     ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h
  You don't really need this .h file unless you call functions not
  defined in your system include files.  The .h file contains only the
  excerpts from this file needed for using this malloc on ANSI C/C++
  systems, so long as you haven't changed compile-time options about
  naming and tuning parameters.  If you do, then you can create your
  own malloc.h that does include all settings by cutting at the point
  indicated below. Note that you may already, by default, be using a C
  library containing a malloc that is based on some version of this
  malloc (for example in linux). You might still want to use the one
  in this file to customize settings or to avoid overheads associated
  with library versions.

* Vital statistics:

  Supported pointer/size_t representation:       4 or 8 bytes
       size_t MUST be an unsigned type of the same width as
       pointers. (If you are using an ancient system that declares
       size_t as a signed type, or need it to be a different width
       than pointers, you can use a previous release of this malloc
       (e.g. 2.7.2) supporting these.)

  Alignment:                                     8 bytes (minimum)
       This suffices for nearly all current machines and C compilers.
       However, you can define MALLOC_ALIGNMENT to be wider than this
       if necessary (up to 128 bytes), at the expense of using more space.

  Minimum overhead per allocated chunk:   4 or  8 bytes (if 4byte sizes)
                                          8 or 16 bytes (if 8byte sizes)
       Each malloced chunk has a hidden word of overhead holding size
       and status information, and an additional cross-check word
       if FOOTERS is defined.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including overhead)
                          8-byte ptrs:  32 bytes    (including overhead)

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.
       The maximum overhead wastage (i.e., the number of extra bytes
       allocated beyond what was requested in malloc) is less than or
       equal to the minimum size, except for requests >= mmap_threshold
       that are serviced via mmap(), where the worst case wastage is
       about 32 bytes plus the remainder from a system page (the
       minimal mmap unit); typically 4096 or 8192 bytes.
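       For example, a small illustration of the zero-byte rule above:

         void* p = malloc(0);   // returns a unique minimum-sized chunk,
         if (p != 0)            // not a null pointer
           free(p);             // and it must eventually be freed as usual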
  Security: static-safe; optionally more or less
       The "security" of malloc refers to the ability of malicious
       code to accentuate the effects of errors (for example, freeing
       space that is not currently malloc'ed or overwriting past the
       ends of chunks) in code that calls malloc.  This malloc
       guarantees not to modify any memory locations below the base of
       heap, i.e., static variables, even in the presence of usage
       errors.  The routines additionally detect most improper frees
       and reallocs.  All this holds as long as the static bookkeeping
       for malloc itself is not corrupted by some other means.  This
       is only one aspect of security -- these checks do not, and
       cannot, detect all possible programming errors.

       If FOOTERS is defined nonzero, then each allocated chunk
       carries an additional check word to verify that it was malloced
       from its space.  These check words are the same within each
       execution of a program using malloc, but differ across
       executions, so externally crafted fake chunks cannot be
       freed. This improves security by rejecting frees/reallocs that
       could corrupt heap memory, in addition to the always-on checks
       preventing writes to statics, at the expense of some time and
       space overhead. (Note that FOOTERS may also be worth using
       with MSPACES.)

       By default detected errors cause the program to abort (calling
       "abort()"). You can override this to instead proceed past
       errors by defining PROCEED_ON_ERROR.  In this case, a bad free
       has no effect, and a malloc that encounters a bad address
       caused by user overwrites will ignore the bad address by
       dropping pointers and indices to all known memory. This may
       be appropriate for programs that should continue if at all
       possible in the face of programming errors, although they may
       run out of memory because dropped memory is never reclaimed.

       If you don't like either of these options, you can define
       CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
       else. And if you are sure that your program using malloc has
       no errors or vulnerabilities, you can define INSECURE to 1,
       which might (or might not) provide a small performance improvement.

       It is also possible to limit the maximum total allocatable
       space, using malloc_set_footprint_limit. This is not
       designed as a security feature in itself (calls to set limits
       are not screened or privileged), but may be useful as one
       aspect of a secure implementation.
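       As a minimal sketch of such an override (the reporting function
       is hypothetical; the hook argument names follow their default
       definitions later in this file, where m is the malloc state and
       p the offending address):

         extern void my_report(const char* what);   // hypothetical
         #define CORRUPTION_ERROR_ACTION(m)  (my_report("heap corruption"), abort())
         #define USAGE_ERROR_ACTION(m, p)    (my_report("bad free/realloc"), abort())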
  Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
       When USE_LOCKS is defined, each public call to malloc, free,
       etc is surrounded with a lock. By default, this uses a plain
       pthread mutex, win32 critical section, or a spin-lock if
       available for the platform and not disabled by setting
       USE_SPIN_LOCKS=0.  However, if USE_RECURSIVE_LOCKS is defined,
       recursive versions are used instead (which are not required for
       base functionality but may be needed in layered extensions).
       Using a global lock is not especially fast, and can be a major
       bottleneck.  It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions.  If you are using malloc in a concurrent program,
       consider instead using nedmalloc
       (http://www.nedprod.com/programs/portable/nedmalloc/) or
       ptmalloc (See http://www.malloc.de), which are derived from
       versions of this malloc.

  System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
       This malloc can use unix sbrk or any emulation (invoked using
       the CALL_MORECORE macro) and/or mmap/munmap or any emulation
       (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
       memory.  On most unix systems, it tends to work best if both
       MORECORE and MMAP are enabled.  On Win32, it uses emulations
       based on VirtualAlloc. It also uses common C library functions
       like memset.

  Compliance: I believe it is compliant with the Single Unix Specification
       (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
       others as well.

* Overview of algorithms

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and
  tunable.  Consistent balance across these factors results in a good
  general-purpose allocator for malloc-intensive programs.

  In most ways, this malloc is a best-fit allocator. Generally, it
  chooses the best-fitting existing chunk for a request, with ties
  broken in approximately least-recently-used order. (This strategy
  normally maintains low fragmentation.) However, for requests less
  than 256 bytes, it deviates from best-fit when there is not an
  exactly fitting available chunk by preferring to use space adjacent
  to that used for the previous small request, as well as by breaking
  ties in approximately most-recently-used order. (These enhance
  locality of a series of small allocations.)  And for very large
  requests (>= 256Kb by default), it relies on system memory mapping
  facilities, if supported.  (This helps avoid carrying around and
  possibly fragmenting memory used only for large chunks.)

  All operations (except malloc_stats and mallinfo) have execution
  times that are bounded by a constant factor of the number of bits in
  a size_t, not counting any clearing in calloc or copying in realloc,
  or actions surrounding MORECORE and MMAP that have times
  proportional to the number of non-contiguous regions returned by
  system allocation routines, which is often just 1. In real-time
  applications, you can optionally suppress segment traversals using
  NO_SEGMENT_TRAVERSAL, which assures bounded execution even when
  system allocators return non-contiguous spaces, at the typical
  expense of carrying around more memory and increased fragmentation.

  The implementation is not very modular and seriously overuses
  macros. Perhaps someday all C compilers will do as good a job
  inlining modular code as can now be done by brute-force expansion,
  but for now, enough of them seem not to.

  Some compilers issue a lot of warnings about code that is
  dead/unreachable only on some platforms, and also about intentional
  uses of negation on unsigned types. All known cases of each can be
  ignored.

  For a longer but out-of-date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

* MSPACES
  If MSPACES is defined, then in addition to malloc, free, etc.,
  this file also defines mspace_malloc, mspace_free, etc. These
  are versions of malloc routines that take an "mspace" argument
  obtained using create_mspace, to control all internal bookkeeping.
  If ONLY_MSPACES is defined, only these versions are compiled.
  So if you would like to use this allocator for only some allocations,
  and your system malloc for others, you can compile with
  ONLY_MSPACES and then do something like...
    static mspace mymspace = create_mspace(0,0); // for example
    #define mymalloc(bytes)  mspace_malloc(mymspace, bytes)

  (Note: If you only need one instance of an mspace, you can instead
  use "USE_DL_PREFIX" to relabel the global malloc.)

  You can similarly create thread-local allocators by storing
  mspaces as thread-locals. For example:
    static __thread mspace tlms = 0;
    void*  tlmalloc(size_t bytes) {
      if (tlms == 0) tlms = create_mspace(0, 0);
      return mspace_malloc(tlms, bytes);
    }
    void  tlfree(void* mem) { mspace_free(tlms, mem); }

  Unless FOOTERS is defined, each mspace is completely independent.
  You cannot allocate from one and free to another (although
  conformance is only weakly checked, so usage errors are not always
  caught). If FOOTERS is defined, then each chunk carries around a tag
  indicating its originating mspace, and frees are directed to their
  originating spaces. Normally, this requires use of locks.

 -------------------------  Compile-time options  ---------------------------

Be careful in setting #define values for numerical constants of type
size_t. On some systems, literal values are not automatically extended
to size_t precision unless they are explicitly cast. You can also
use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below.
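For example (an illustration of the caution above, assuming a 64-bit
size_t and a 32-bit int):

  // #define DEFAULT_TRIM_THRESHOLD (1 << 31)          // int overflow: undefined
  #define DEFAULT_TRIM_THRESHOLD ((size_t)1 << 31)     // safe: 2GB as a size_t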
WIN32                    default: defined if _WIN32 defined
  Defining WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix. Beware that there seem to be some
  cases where this malloc might not be a pure drop-in replacement for
  Win32 malloc: Random-looking failures from Win32 GDI APIs (e.g.,
  SetDIBits()) may be due to bugs in some video driver implementations
  when pixel buffers are malloc()ed, and the region spans more than
  one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb)
  default granularity, pixel buffers may straddle virtual allocation
  regions more often than when using the Microsoft allocator.  You can
  avoid this by using VirtualAlloc() and VirtualFree() for all pixel
  buffers rather than using malloc().  If this is not possible,
  recompile this malloc with a larger DEFAULT_GRANULARITY. Note:
  in cases where MSC and gcc (cygwin) are known to differ on WIN32,
  conditions use _MSC_VER to distinguish them.

DLMALLOC_EXPORT       default: extern
  Defines how public APIs are declared. If you want to export via a
  Windows DLL, you might define this as
    #define DLMALLOC_EXPORT extern  __declspec(dllexport)
  If you want a POSIX ELF shared object, you might use
    #define DLMALLOC_EXPORT extern __attribute__((visibility("default")))

MALLOC_ALIGNMENT         default: (size_t)(2 * sizeof(void *))
  Controls the minimum alignment for malloc'ed chunks.  It must be a
  power of two and at least 8, even on machines for which smaller
  alignments would suffice. It may be defined as larger than this
  though. Note however that code and data structures are optimized for
  the case of 8-byte alignment.

MSPACES                  default: 0 (false)
  If true, compile in support for independent allocation spaces.
  This is only supported if HAVE_MMAP is true.

ONLY_MSPACES             default: 0 (false)
  If true, only compile in mspace versions, not regular versions.

USE_LOCKS                default: 0 (false)
  Causes each call to each public routine to be surrounded with
  pthread or WIN32 mutex lock/unlock. (If set true, this can be
  overridden on a per-mspace basis for mspace versions.) If set to a
  non-zero value other than 1, locks are used, but their
  implementation is left out, so lock functions must be supplied manually,
  as described below.

USE_SPIN_LOCKS           default: 1 iff USE_LOCKS and spin locks available
  If true, uses custom spin locks for locking. This is currently
  supported only for gcc >= 4.1, older gccs on x86 platforms, and recent
  MS compilers.  Otherwise, posix locks or win32 critical sections are
  used.

USE_RECURSIVE_LOCKS      default: not defined
  If defined nonzero, uses recursive (aka reentrant) locks, otherwise
  uses plain mutexes. This is not required for malloc proper, but may
  be needed for layered allocators such as nedmalloc.

LOCK_AT_FORK             default: not defined
  If defined nonzero, performs pthread_atfork upon initialization
  to initialize child lock while holding parent lock. The implementation
  assumes that pthread locks (not custom locks) are being used. In other
  cases, you may need to customize the implementation.

FOOTERS                  default: 0
  If true, provide extra checking and dispatching by placing
  information in the footers of allocated chunks. This adds
  space and time overhead.

INSECURE                 default: 0
  If true, omit checks for usage errors and heap space overwrites.

USE_DL_PREFIX            default: NOT defined
  Causes compiler to prefix all public routines with the string 'dl'.
  This can be useful when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.

MALLOC_INSPECT_ALL       default: NOT defined
  If defined, compiles malloc_inspect_all and mspace_inspect_all, that
  perform traversal of all heap space.  Unless access to these
  functions is otherwise restricted, you probably do not want to
  include them in secure implementations.

ABORT                    default: defined as abort()
  Defines how to abort on failed checks.  On most systems, a failed
  check cannot die with an "assert" or even print an informative
  message, because the underlying print routines in turn call malloc,
  which will fail again.  Generally, the best policy is to simply call
  abort(). It's not very useful to do more than this because many
  errors due to overwriting will show up as address faults (null, odd
  addresses etc) rather than malloc-triggered checks, so will also
  abort.  Also, most compilers know that abort() does not return, so
  can better optimize code conditionally calling it.

PROCEED_ON_ERROR         default: defined as 0 (false)
  Controls whether detected bad addresses cause them to be bypassed
  rather than aborting. If set, detected bad arguments to free and
  realloc are ignored. And all bookkeeping information is zeroed out
  upon a detected overwrite of freed heap space, thus losing the
  ability to ever return it from malloc again, but enabling the
  application to proceed. If PROCEED_ON_ERROR is defined, the
  static variable malloc_corruption_error_count is compiled in
  and can be examined to see if errors have occurred. This option
  generates slower code than the default abort policy.

DEBUG                    default: NOT defined
  The DEBUG setting is mainly intended for people trying to modify
  this code or diagnose problems when porting to new platforms.
  However, it may also be able to better isolate user errors than just
  using runtime checks.
  The assertions in the check routines spell
  out in more detail the assumptions and invariants underlying the
  algorithms.  The checking is fairly extensive, and will slow down
  execution noticeably. Calling malloc_stats or mallinfo with DEBUG
  set will attempt to check every non-mmapped allocated and free chunk
  in the course of computing the summaries.

ABORT_ON_ASSERT_FAILURE   default: defined as 1 (true)
  Debugging assertion failures can be nearly impossible if your
  version of the assert macro causes malloc to be called, which will
  lead to a cascade of further failures, blowing the runtime stack.
  ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
  which will usually make debugging easier.

MALLOC_FAILURE_ACTION     default: sets errno to ENOMEM, or no-op on win32
  The action to take before "return 0" when malloc fails because no
  memory is available.

HAVE_MORECORE             default: 1 (true) unless win32 or ONLY_MSPACES
  True if this system supports sbrk or an emulation of it.

MORECORE                  default: sbrk
  The name of the sbrk-style system routine to call to obtain more
  memory.  See below for guidance on writing custom MORECORE
  functions; a minimal sketch also appears at the end of this options
  list.  The type of the argument to sbrk/MORECORE varies across
  systems.  It cannot be size_t, because it supports negative
  arguments, so it is normally the signed type of the same width as
  size_t (sometimes declared as "intptr_t").  It doesn't much matter
  though. Internally, we only call it with arguments less than half
  the max value of a size_t, which should work across all reasonable
  possibilities, although sometimes generating compiler warnings.

MORECORE_CONTIGUOUS       default: 1 (true) if HAVE_MORECORE
  If true, take advantage of the fact that consecutive calls to MORECORE
  with positive arguments always return contiguous increasing
  addresses.  This is true of unix sbrk. It does not hurt too much to
  set it true anyway, since malloc copes with non-contiguities.
  Setting it false when definitely non-contiguous saves the time
  and possibly wasted space it would take to discover this, though.

MORECORE_CANNOT_TRIM      default: NOT defined
  True if MORECORE cannot release space back to the system when given
  negative arguments. This is generally necessary only if you are
  using a hand-crafted MORECORE function that cannot handle negative
  arguments.

NO_SEGMENT_TRAVERSAL       default: 0
  If non-zero, suppresses traversals of memory segments
  returned by either MORECORE or CALL_MMAP. This disables
  the merging of contiguous segments and the selective release
  of unused segments to the OS, but bounds execution times.

HAVE_MMAP                 default: 1 (true)
  True if this system supports mmap or an emulation of it.  If so, and
  HAVE_MORECORE is not true, MMAP is used for all system
  allocation. If set and HAVE_MORECORE is true as well, MMAP is
  primarily used to directly allocate very large blocks. It is also
  used as a backup strategy in cases where MORECORE fails to provide
  space from system. Note: A single call to MUNMAP is assumed to be
  able to unmap memory that may have been allocated using multiple calls
  to MMAP, so long as they are adjacent.

HAVE_MREMAP               default: 1 on linux, else 0
  If true realloc() uses mremap() to re-allocate large blocks and
  extend or shrink allocation spaces.
MMAP_CLEARS               default: 1 except on WINCE.
  True if mmap clears memory so calloc doesn't need to. This is true
  for standard unix mmap using /dev/zero and on WIN32 except for WINCE.

USE_BUILTIN_FFS            default: 0 (i.e., not used)
  Causes malloc to use the builtin ffs() function to compute indices.
  Some compilers may recognize and intrinsify ffs to be faster than the
  supplied C version. Also, the case of x86 using gcc is special-cased
  to an asm instruction, so is already as fast as it can be, and so
  this setting has no effect. Similarly for Win32 under recent MS compilers.
  (On most x86s, the asm version is only slightly faster than the C version.)

malloc_getpagesize         default: derive from system includes, or 4096.
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units.  This may be (and
  usually is) a function rather than a constant. This is ignored
  if WIN32, where page size is determined using getSystemInfo during
  initialization.

USE_DEV_RANDOM             default: 0 (i.e., not used)
  Causes malloc to use /dev/random to initialize secure magic seed for
  stamping footers. Otherwise, the current time is used.

NO_MALLINFO                default: 0
  If defined, don't compile "mallinfo". This can be a simple way
  of dealing with mismatches between system declarations and
  those in this file.

MALLINFO_FIELD_TYPE        default: size_t
  The type of the fields in the mallinfo struct. This was originally
  defined as "int" in SVID etc, but is more usefully defined as
  size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set.

NO_MALLOC_STATS            default: 0
  If defined, don't compile "malloc_stats". This avoids calls to
  fprintf and bringing in stdio dependencies you might not want.

REALLOC_ZERO_BYTES_FREES    default: not defined
  This should be set if a call to realloc with zero bytes should
  be the same as a call to free. Some people think it should. Otherwise,
  since this malloc returns a unique pointer for malloc(0), so does
  realloc(p, 0).

LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H,
LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H,
LACKS_STDLIB_H, LACKS_SCHED_H, LACKS_TIME_H
                           default: NOT defined unless on WIN32
  Define these if your system does not have these header files.
  You might need to manually insert some of the declarations they provide.

DEFAULT_GRANULARITY        default: page size if MORECORE_CONTIGUOUS,
                                system_info.dwAllocationGranularity in WIN32,
                                otherwise 64K.
      Also settable using mallopt(M_GRANULARITY, x)
  The unit for allocating and deallocating memory from the system.  On
  most systems with contiguous MORECORE, there is no reason to
  make this more than a page. However, systems with MMAP tend to
  either require or encourage larger granularities.  You can increase
  this value to prevent system allocation functions from being called
  so often, especially if they are slow.  The value must be at least one
  page and must be a power of two.  Setting to 0 causes initialization
  to either page size or win32 region size.  (Note: In previous
  versions of malloc, the equivalent of this option was called
  "TOP_PAD")

DEFAULT_TRIM_THRESHOLD    default: 2MB
      Also settable using mallopt(M_TRIM_THRESHOLD, x)
  The maximum amount of unused top-most memory to keep before
  releasing via malloc_trim in free().
  Automatic trimming is mainly
  useful in long-lived programs using contiguous MORECORE.  Because
  trimming via sbrk can be slow on some systems, and can sometimes be
  wasteful (in cases where programs immediately afterward allocate
  more large chunks), the value should be high enough so that your
  overall system performance would improve by releasing this much
  memory.  As a rough guide, you might set to a value close to the
  average size of a process (program) running on your system.
  Releasing this much memory would allow such a process to run in
  memory.  Generally, it is worth tuning trim thresholds when a
  program undergoes phases where several large chunks are allocated
  and released in ways that can reuse each other's storage, perhaps
  mixed with phases where there are no such chunks at all. The trim
  value must be greater than page size to have any useful effect.  To
  disable trimming completely, you can set to MAX_SIZE_T. Note that the
  trick some people use of mallocing a huge space and then freeing it at
  program startup, in an attempt to reserve system memory, doesn't
  have the intended effect under automatic trimming, since that memory
  will immediately be returned to the system.

DEFAULT_MMAP_THRESHOLD       default: 256K
      Also settable using mallopt(M_MMAP_THRESHOLD, x)
  The request size threshold for using MMAP to directly service a
  request. Requests of at least this size that cannot be allocated
  using already-existing space will be serviced via mmap.  (If enough
  normal freed space already exists it is used instead.)  Using mmap
  segregates relatively large chunks of memory so that they can be
  individually obtained and released from the host system. A request
  serviced through mmap is never reused by any other request (at least
  not directly; the system may just so happen to remap successive
  requests to the same locations).  Segregating space in this way has
  the benefits that: Mmapped space can always be individually released
  back to the system, which helps keep the system level memory demands
  of a long-lived program low.  Also, mapped memory doesn't become
  `locked' between other chunks, as can happen with normally allocated
  chunks, which means that even trimming via malloc_trim would not
  release them.  However, it has the disadvantage that the space
  cannot be reclaimed, consolidated, and then used to service later
  requests, as happens with normal chunks.  The advantages of mmap
  nearly always outweigh disadvantages for "large" chunks, but the
  value of "large" may vary across systems.  The default is an
  empirically derived value that works well in most systems. You can
  disable mmap by setting to MAX_SIZE_T.

MAX_RELEASE_CHECK_RATE   default: 4095 unless not HAVE_MMAP
  The number of consolidated frees between checks to release
  unused segments when freeing. When using non-contiguous segments,
  especially with multiple mspaces, checking only for topmost space
  doesn't always suffice to trigger trimming. To compensate for this,
  free() will, with a period of MAX_RELEASE_CHECK_RATE (or the
  current number of segments, if greater), try to release unused
  segments to the OS when freeing chunks that result in
  consolidation.  The best value for this parameter is a compromise
  between slowing down frees with relatively costly checks that
  rarely trigger versus holding on to unused memory.  To effectively
  disable, set to MAX_SIZE_T. This may lead to a very slight speed
  improvement at the expense of carrying around more memory.
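As promised under the MORECORE option above, here is a minimal sketch
of a custom sbrk-style MORECORE that serves requests from a fixed
static arena (the function name and capacity are hypothetical; a real
build would pass -DMORECORE=my_morecore and -DMORECORE_CANNOT_TRIM,
set -DHAVE_MMAP=0 on systems without mmap, and include <stdint.h> for
intptr_t):

  #define ARENA_BYTES (1024u * 1024u)        // hypothetical capacity
  static char arena[ARENA_BYTES];
  static size_t arena_used = 0;

  void* my_morecore(intptr_t increment) {
    if (increment >= 0 && (size_t)increment <= ARENA_BYTES - arena_used) {
      char* old_break = arena + arena_used;  // mimic sbrk: return old break
      arena_used += (size_t)increment;
      return old_break;
    }
    return (void*)(-1);                      // sbrk's failure convention
  }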
*/

/* Version identifier to allow people to support multiple versions */
#ifndef DLMALLOC_VERSION
#define DLMALLOC_VERSION 20806
#endif /* DLMALLOC_VERSION */

#ifndef DLMALLOC_EXPORT
#define DLMALLOC_EXPORT extern
#endif

#ifndef WIN32
#ifdef _WIN32
#define WIN32 1
#endif  /* _WIN32 */
#ifdef _WIN32_WCE
#define LACKS_FCNTL_H
#define WIN32 1
#endif /* _WIN32_WCE */
#endif  /* WIN32 */
#ifdef WIN32
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <tchar.h>
#define HAVE_MMAP 1
#define HAVE_MORECORE 0
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H
#define LACKS_STRING_H
#define LACKS_STRINGS_H
#define LACKS_SYS_TYPES_H
#define LACKS_ERRNO_H
#define LACKS_SCHED_H
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION
#endif /* MALLOC_FAILURE_ACTION */
#ifndef MMAP_CLEARS
#ifdef _WIN32_WCE /* WINCE reportedly does not clear */
#define MMAP_CLEARS 0
#else
#define MMAP_CLEARS 1
#endif /* _WIN32_WCE */
#endif /* MMAP_CLEARS */
#endif  /* WIN32 */

#if defined(DARWIN) || defined(_DARWIN)
/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */
#ifndef HAVE_MORECORE
#define HAVE_MORECORE 0
#define HAVE_MMAP 1
/* OSX allocators provide 16 byte alignment */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)16U)
#endif
#endif  /* HAVE_MORECORE */
#endif  /* DARWIN */

#ifndef LACKS_SYS_TYPES_H
#include <sys/types.h>  /* For size_t */
#endif  /* LACKS_SYS_TYPES_H */

/* The maximum possible size_t value has all bits set */
#define MAX_SIZE_T           (~(size_t)0)

#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */
#define USE_LOCKS  ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0) || \
                    (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0))
#endif /* USE_LOCKS */

#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */
#if ((defined(__GNUC__) &&                                              \
      ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) ||      \
       defined(__i386__) || defined(__x86_64__))) ||                    \
     (defined(_MSC_VER) && _MSC_VER>=1310))
#ifndef USE_SPIN_LOCKS
#define USE_SPIN_LOCKS 1
#endif /* USE_SPIN_LOCKS */
#elif USE_SPIN_LOCKS
#error "USE_SPIN_LOCKS defined without implementation"
#endif /* ... locks available... */
#elif !defined(USE_SPIN_LOCKS)
#define USE_SPIN_LOCKS 0
#endif /* USE_LOCKS */

#ifndef ONLY_MSPACES
#define ONLY_MSPACES 0
#endif  /* ONLY_MSPACES */
#ifndef MSPACES
#if ONLY_MSPACES
#define MSPACES 1
#else   /* ONLY_MSPACES */
#define MSPACES 0
#endif  /* ONLY_MSPACES */
#endif  /* MSPACES */
#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *)))
#endif  /* MALLOC_ALIGNMENT */
#ifndef FOOTERS
#define FOOTERS 0
#endif  /* FOOTERS */
#ifndef ABORT
#define ABORT  abort()
#endif  /* ABORT */
#ifndef ABORT_ON_ASSERT_FAILURE
#define ABORT_ON_ASSERT_FAILURE 1
#endif  /* ABORT_ON_ASSERT_FAILURE */
#ifndef PROCEED_ON_ERROR
#define PROCEED_ON_ERROR 0
#endif  /* PROCEED_ON_ERROR */

#ifndef INSECURE
#define INSECURE 0
#endif  /* INSECURE */
#ifndef MALLOC_INSPECT_ALL
#define MALLOC_INSPECT_ALL 0
#endif  /* MALLOC_INSPECT_ALL */
#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif  /* HAVE_MMAP */
#ifndef MMAP_CLEARS
#define MMAP_CLEARS 1
#endif  /* MMAP_CLEARS */
#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#define _GNU_SOURCE /* Turns on mremap() definition */
#else   /* linux */
#define HAVE_MREMAP 0
#endif  /* linux */
#endif  /* HAVE_MREMAP */
#ifndef MALLOC_FAILURE_ACTION
#define MALLOC_FAILURE_ACTION  errno = ENOMEM;
#endif  /* MALLOC_FAILURE_ACTION */
#ifndef HAVE_MORECORE
#if ONLY_MSPACES
#define HAVE_MORECORE 0
#else   /* ONLY_MSPACES */
#define HAVE_MORECORE 1
#endif  /* ONLY_MSPACES */
#endif  /* HAVE_MORECORE */
#if !HAVE_MORECORE
#define MORECORE_CONTIGUOUS 0
#else   /* !HAVE_MORECORE */
#define MORECORE_DEFAULT sbrk
#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* HAVE_MORECORE */
#ifndef DEFAULT_GRANULARITY
#if (MORECORE_CONTIGUOUS || defined(WIN32))
#define DEFAULT_GRANULARITY (0)  /* 0 means to compute in init_mparams */
#else   /* MORECORE_CONTIGUOUS */
#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U)
#endif  /* MORECORE_CONTIGUOUS */
#endif  /* DEFAULT_GRANULARITY */
#ifndef DEFAULT_TRIM_THRESHOLD
#ifndef MORECORE_CANNOT_TRIM
#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U)
#else   /* MORECORE_CANNOT_TRIM */
#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T
#endif  /* MORECORE_CANNOT_TRIM */
#endif  /* DEFAULT_TRIM_THRESHOLD */
#ifndef DEFAULT_MMAP_THRESHOLD
#if HAVE_MMAP
#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U)
#else   /* HAVE_MMAP */
#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T
#endif  /* HAVE_MMAP */
#endif  /* DEFAULT_MMAP_THRESHOLD */
#ifndef MAX_RELEASE_CHECK_RATE
#if HAVE_MMAP
#define MAX_RELEASE_CHECK_RATE 4095
#else
#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T
#endif /* HAVE_MMAP */
#endif /* MAX_RELEASE_CHECK_RATE */
#ifndef USE_BUILTIN_FFS
#define USE_BUILTIN_FFS 0
#endif  /* USE_BUILTIN_FFS */
#ifndef USE_DEV_RANDOM
#define USE_DEV_RANDOM 0
#endif  /* USE_DEV_RANDOM */
#ifndef NO_MALLINFO
#define NO_MALLINFO 0
#endif  /* NO_MALLINFO */
#ifndef MALLINFO_FIELD_TYPE
#define MALLINFO_FIELD_TYPE size_t
#endif  /* MALLINFO_FIELD_TYPE */
#ifndef NO_MALLOC_STATS
#define NO_MALLOC_STATS 0
#endif  /* NO_MALLOC_STATS */
#ifndef NO_SEGMENT_TRAVERSAL
#define NO_SEGMENT_TRAVERSAL 0
#endif  /* NO_SEGMENT_TRAVERSAL */

/*
  mallopt tuning options.  SVID/XPG defines four standard parameter
  numbers for mallopt, normally defined in malloc.h.  None of these
  are used in this malloc, so setting them has no effect. But this
  malloc does support the following options.
*/

#define M_TRIM_THRESHOLD     (-1)
#define M_GRANULARITY        (-2)
#define M_MMAP_THRESHOLD     (-3)

/* ------------------------ Mallinfo declarations ------------------------ */

#if !NO_MALLINFO
/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any system that has a
  /usr/include/malloc.h defining struct mallinfo. The main
  declaration needed is the mallinfo struct that is returned (by-copy)
  by mallinfo().  The mallinfo struct contains a bunch of fields that
  are not even meaningful in this version of malloc.  These fields are
  instead filled by mallinfo() with other numbers that might be of
  interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo.  If so, it is included; else a compliant version is
  declared below.  These must be precisely the same for mallinfo() to
  work.  The original SVID version of this struct, defined on most
  systems with mallinfo, declares all fields as ints. But some other
  systems define the fields as unsigned long.  If your system defines
  the fields using a type of different width than listed here, you
  MUST #include your system version and #define
  HAVE_USR_INCLUDE_MALLOC_H.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else /* HAVE_USR_INCLUDE_MALLOC_H */
#ifndef STRUCT_MALLINFO_DECLARED
/* HP-UX (and others?) redefines mallinfo unless _STRUCT_MALLINFO is defined */
#define _STRUCT_MALLINFO
#define STRUCT_MALLINFO_DECLARED 1
struct mallinfo {
  MALLINFO_FIELD_TYPE arena;    /* non-mmapped space allocated from system */
  MALLINFO_FIELD_TYPE ordblks;  /* number of free chunks */
  MALLINFO_FIELD_TYPE smblks;   /* always 0 */
  MALLINFO_FIELD_TYPE hblks;    /* always 0 */
  MALLINFO_FIELD_TYPE hblkhd;   /* space in mmapped regions */
  MALLINFO_FIELD_TYPE usmblks;  /* maximum total allocated space */
  MALLINFO_FIELD_TYPE fsmblks;  /* always 0 */
  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
  MALLINFO_FIELD_TYPE fordblks; /* total free space */
  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
};
#endif /* STRUCT_MALLINFO_DECLARED */
#endif /* HAVE_USR_INCLUDE_MALLOC_H */
#endif /* NO_MALLINFO */

/*
  Try to persuade compilers to inline. The most critical functions for
  inlining are defined as macros, so these aren't used for them.
*/

#ifndef FORCEINLINE
#if defined(__GNUC__)
#define FORCEINLINE __inline __attribute__ ((always_inline))
#elif defined(_MSC_VER)
#define FORCEINLINE __forceinline
#endif
#endif
#ifndef NOINLINE
#if defined(__GNUC__)
#define NOINLINE __attribute__ ((noinline))
#elif defined(_MSC_VER)
#define NOINLINE __declspec(noinline)
#else
#define NOINLINE
#endif
#endif

#ifdef __cplusplus
extern "C" {
#ifndef FORCEINLINE
#define FORCEINLINE inline
#endif
#endif /* __cplusplus */
#ifndef FORCEINLINE
#define FORCEINLINE
#endif

#if !ONLY_MSPACES

/* ------------------- Declarations of public routines ------------------- */

#ifndef USE_DL_PREFIX
#define dlcalloc               calloc
#define dlfree                 free
#define dlmalloc               malloc
#define dlmemalign             memalign
#define dlposix_memalign       posix_memalign
#define dlrealloc              realloc
#define dlrealloc_in_place     realloc_in_place
#define dlvalloc               valloc
#define dlpvalloc              pvalloc
#define dlmallinfo             mallinfo
#define dlmallopt              mallopt
#define dlmalloc_trim          malloc_trim
#define dlmalloc_stats         malloc_stats
#define dlmalloc_usable_size   malloc_usable_size
#define dlmalloc_footprint     malloc_footprint
#define dlmalloc_max_footprint malloc_max_footprint
#define dlmalloc_footprint_limit malloc_footprint_limit
#define dlmalloc_set_footprint_limit malloc_set_footprint_limit
#define dlmalloc_inspect_all   malloc_inspect_all
#define dlindependent_calloc   independent_calloc
#define dlindependent_comalloc independent_comalloc
#define dlbulk_free            bulk_free
#endif /* USE_DL_PREFIX */
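/*
  For illustration: without USE_DL_PREFIX, the mappings above make the
  dl-prefixed routines declared below define the standard names. When
  compiled with -DUSE_DL_PREFIX, the mappings are omitted and both
  allocators can coexist (a sketch, assuming both allocators are
  linked in):

    void* a = dlmalloc(100);   // served by this allocator
    void* b = malloc(100);     // served by the system allocator
    dlfree(a);
    free(b);
*/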
/*
  malloc(size_t n)
  Returns a pointer to a newly allocated chunk of at least n bytes, or
  null if no space is available, in which case errno is set to ENOMEM
  on ANSI C systems.

  If n is zero, malloc returns a minimum-sized chunk. (The minimum
  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
  systems.)  Note that size_t is an unsigned type, so calls with
  arguments that would be negative if signed are interpreted as
  requests for huge amounts of space, which will often fail. The
  maximum supported value of n differs across systems, but is in all
  cases less than the maximum representable value of a size_t.
*/
DLMALLOC_EXPORT void* dlmalloc(size_t);

/*
  free(void* p)
  Releases the chunk of memory pointed to by p, that had been previously
  allocated using malloc or a related routine such as realloc.
  It has no effect if p is null. If p was not malloced or already
  freed, free(p) will by default cause the current program to abort.
*/
DLMALLOC_EXPORT void  dlfree(void*);

/*
  calloc(size_t n_elements, size_t element_size);
  Returns a pointer to n_elements * element_size bytes, with all locations
  set to zero.
*/
DLMALLOC_EXPORT void* dlcalloc(size_t, size_t);

/*
  realloc(void* p, size_t n)
  Returns a pointer to a chunk of size n that contains the same data
  as does chunk p up to the minimum of (n, p's size) bytes, or null
  if no space is available.

  The returned pointer may or may not be the same as p. The algorithm
  prefers extending p in most cases when possible, otherwise it
  employs the equivalent of a malloc-copy-free sequence.

  If p is null, realloc is equivalent to malloc.

  If space is not available, realloc returns null, errno is set (if on
  ANSI) and p is NOT freed.

  If n is for fewer bytes than already held by p, the newly unused
  space is lopped off and freed if possible.  realloc with a size
  argument of zero (re)allocates a minimum-sized chunk.

  The old unix realloc convention of allowing the last-freed chunk
  to be used as an argument to realloc is not supported.
*/
DLMALLOC_EXPORT void* dlrealloc(void*, size_t);

/*
  realloc_in_place(void* p, size_t n)
  Resizes the space allocated for p to size n, only if this can be
  done without moving p (i.e., only if there is adjacent space
  available if n is greater than p's current allocated size, or n is
  less than or equal to p's size). This may be used instead of plain
  realloc if an alternative allocation strategy is needed upon failure
  to expand space; for example, reallocation of a buffer that must be
  memory-aligned or cleared. You can use realloc_in_place to trigger
  these alternatives only when needed.

  Returns p if successful; otherwise null.
*/
DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t);
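/*
  A sketch of the fallback pattern described above: try to grow in
  place, and only on failure fall back to an allocate-copy-free
  sequence that preserves a required alignment. (grow_aligned and its
  parameters are hypothetical; assumes <string.h> for memcpy.)

    void* grow_aligned(void* p, size_t old_size, size_t new_size,
                       size_t alignment) {
      if (realloc_in_place(p, new_size) != 0)
        return p;                          // grown without moving
      void* q = memalign(alignment, new_size);
      if (q != 0) {
        memcpy(q, p, old_size);            // preserve old contents
        free(p);
      }
      return q;
    }
*/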
/*
  memalign(size_t alignment, size_t n);
  Returns a pointer to a newly allocated chunk of n bytes, aligned
  in accord with the alignment argument.

  The alignment argument should be a power of two. If the argument is
  not a power of two, the nearest greater power is used.
  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/
DLMALLOC_EXPORT void* dlmemalign(size_t, size_t);

/*
  int posix_memalign(void** pp, size_t alignment, size_t n);
  Allocates a chunk of n bytes, aligned in accord with the alignment
  argument. Differs from memalign only in that it (1) assigns the
  allocated memory to *pp rather than returning it, (2) fails and
  returns EINVAL if the alignment is not a power of two, and (3) fails
  and returns ENOMEM if memory cannot be allocated.
*/
DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t);

/*
  valloc(size_t n);
  Equivalent to memalign(pagesize, n), where pagesize is the page
  size of the system. If the pagesize is unknown, 4096 is used.
*/
DLMALLOC_EXPORT void* dlvalloc(size_t);

/*
  mallopt(int parameter_number, int parameter_value)
  Sets tunable parameters. The format is to provide a
  (parameter-number, parameter-value) pair.  mallopt then sets the
  corresponding parameter to the argument value if it can (i.e., so
  long as the value is meaningful), and returns 1 if successful else
  0.  To work around the fact that mallopt is specified to use int,
  not size_t parameters, the value -1 is specially treated as the
  maximum unsigned size_t value.

  SVID/XPG/ANSI defines four standard param numbers for mallopt,
  normally defined in malloc.h.  None of these are used in this malloc,
  so setting them has no effect. But this malloc also supports other
  options in mallopt. See below for details.  Briefly, supported
  parameters are as follows (listed defaults are for "typical"
  configurations).

  Symbol            param #  default    allowed param values
  M_TRIM_THRESHOLD     -1   2*1024*1024   any   (-1 disables)
  M_GRANULARITY        -2     page size   any power of 2 >= page size
  M_MMAP_THRESHOLD     -3      256*1024   any   (or 0 if no MMAP support)
*/
DLMALLOC_EXPORT int dlmallopt(int, int);
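/*
  For example (values are illustrative, not recommendations):

    mallopt(M_TRIM_THRESHOLD, 1024*1024);  // trim when 1MB+ sits unused on top
    mallopt(M_MMAP_THRESHOLD,  512*1024);  // mmap requests of 512KB and up
    mallopt(M_TRIM_THRESHOLD, -1);         // -1 means MAX_SIZE_T: disable trimming
*/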
/*
  malloc_footprint();
  Returns the number of bytes obtained from the system.  The total
  number of bytes allocated by malloc, realloc etc., is less than this
  value. Unlike mallinfo, this function returns only a precomputed
  result, so can be called frequently to monitor memory consumption.
  Even if locks are otherwise defined, this function does not use them,
  so results might not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint(void);

/*
  malloc_max_footprint();
  Returns the maximum number of bytes obtained from the system. This
  value will be greater than current footprint if deallocated space
  has been reclaimed by the system. The peak number of bytes allocated
  by malloc, realloc etc., is less than this value. Unlike mallinfo,
  this function returns only a precomputed result, so can be called
  frequently to monitor memory consumption.  Even if locks are
  otherwise defined, this function does not use them, so results might
  not be up to date.
*/
DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);

/*
  malloc_footprint_limit();
  Returns the number of bytes that the heap is allowed to obtain from
  the system, returning the last value returned by
  malloc_set_footprint_limit, or the maximum size_t value if
  never set. The returned value reflects a permission. There is no
  guarantee that this number of bytes can actually be obtained from
  the system.
*/
DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);

/*
  malloc_set_footprint_limit();
  Sets the maximum number of bytes to obtain from the system, causing
  failure returns from malloc and related functions upon attempts to
  exceed this value. The argument value may be subject to page
  rounding to an enforceable limit; this actual value is returned.
  Using an argument of the maximum possible size_t effectively
  disables checks. If the argument is less than or equal to the
  current malloc_footprint, then all future allocations that require
  additional system memory will fail. However, invocation cannot
  retroactively deallocate existing used memory.
*/
DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes);

#if MALLOC_INSPECT_ALL
/*
  malloc_inspect_all(void(*handler)(void *start,
                                    void *end,
                                    size_t used_bytes,
                                    void* callback_arg),
                     void* arg);
  Traverses the heap and calls the given handler for each managed
  region, skipping all bytes that are (or may be) used for bookkeeping
  purposes.  Traversal does not include chunks that have been
  directly memory mapped. Each reported region begins at the start
  address, and continues up to but not including the end address.  The
  first used_bytes of the region contain allocated data. If
  used_bytes is zero, the region is unallocated. The handler is
  invoked with the given callback argument. If locks are defined, they
  are held during the entire traversal. It is a bad idea to invoke
  other malloc functions from within the handler.

  For example, to count the number of in-use regions with more than
  1000 used bytes, you could write:
    static int count = 0;
    void count_chunks(void* start, void* end, size_t used, void* arg) {
      if (used > 1000) ++count;
    }
  then:
    malloc_inspect_all(count_chunks, NULL);

  malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined.
*/
DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*),
                                          void* arg);

#endif /* MALLOC_INSPECT_ALL */

#if !NO_MALLINFO
/*
  mallinfo()
  Returns (by copy) a struct containing various summary statistics:

  arena:     current total non-mmapped bytes allocated from system
  ordblks:   the number of free chunks
  smblks:    always zero.
  hblks:     current number of mmapped regions
  hblkhd:    total bytes held in mmapped regions
  usmblks:   the maximum total allocated space. This will be greater
               than current total if trimming has occurred.
  fsmblks:   always zero
  uordblks:  current total allocated space (normal or mmapped)
  fordblks:  total free space
  keepcost:  the maximum number of bytes that could ideally be released
               back to system via malloc_trim. ("ideally" means that
               it ignores page restrictions etc.)

  Because these fields are ints, but internal bookkeeping may
  be kept as longs, the reported values may wrap around zero and
  thus be inaccurate.
*/
DLMALLOC_EXPORT struct mallinfo dlmallinfo(void);
#endif /* NO_MALLINFO */
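/*
  A small usage sketch for mallinfo: measuring how much additional
  space one phase of a program consumed (run_phase is hypothetical;
  with the default MALLINFO_FIELD_TYPE of size_t, the subtraction
  below is well defined):

    struct mallinfo before = mallinfo();
    run_phase();
    struct mallinfo after = mallinfo();
    size_t growth = after.uordblks - before.uordblks;
*/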
/*
  independent_calloc(size_t n_elements, size_t element_size, void* chunks[]);

  independent_calloc is similar to calloc, but instead of returning a
  single cleared space, it returns an array of pointers to n_elements
  independent elements that can hold contents of size elem_size, each
  of which starts out cleared, and can be independently freed,
  realloc'ed etc. The elements are guaranteed to be adjacently
  allocated (this is not guaranteed to occur with multiple callocs or
  mallocs), which may also improve cache locality in some
  applications.

  The "chunks" argument is optional (i.e., may be null, which is
  probably the most typical usage). If it is null, the returned array
  is itself dynamically allocated and should also be freed when it is
  no longer needed. Otherwise, the chunks array must be of at least
  n_elements in length. It is filled in with the pointers to the
  chunks.

  In either case, independent_calloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and "chunks"
  is null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_calloc simplifies and speeds up implementations of many
  kinds of pools.  It may also be useful when constructing large data
  structures that initially have a fixed number of fixed-sized nodes,
  but the number is not known at compile time, and some of the nodes
  may later need to be freed. For example:

    struct Node { int item; struct Node* next; };

    struct Node* build_list() {
      struct Node** pool;
      int n = read_number_of_nodes_needed();
      int i;
      if (n <= 0) return 0;
      pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0);
      if (pool == 0) die();
      // organize into a linked list...
      struct Node* first = pool[0];
      for (i = 0; i < n-1; ++i)
        pool[i]->next = pool[i+1];
      free(pool);     // Can now free the array (or not, if it is needed later)
      return first;
    }
*/
DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**);

/*
  independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]);

  independent_comalloc allocates, all at once, a set of n_elements
  chunks with sizes indicated in the "sizes" array.  It returns
  an array of pointers to these elements, each of which can be
  independently freed, realloc'ed etc. The elements are guaranteed to
  be adjacently allocated (this is not guaranteed to occur with
  multiple callocs or mallocs), which may also improve cache locality
  in some applications.

  The "chunks" argument is optional (i.e., may be null). If it is null
  the returned array is itself dynamically allocated and should also
  be freed when it is no longer needed. Otherwise, the chunks array
  must be of at least n_elements in length. It is filled in with the
  pointers to the chunks.

  In either case, independent_comalloc returns this pointer array, or
  null if the allocation failed.  If n_elements is zero and chunks is
  null, it returns a chunk representing an array with zero elements
  (which should be freed if not wanted).

  Each element must be freed when it is no longer needed. This can be
  done all at once using bulk_free.

  independent_comalloc differs from independent_calloc in that each
  element may have a different size, and also that it does not
  automatically clear elements.

  independent_comalloc can be used to speed up allocation in cases
  where several structs or objects must always be allocated at the
  same time.  For example:

    struct Head { ... };
    struct Foot { ... };

    void send_message(char* msg) {
      size_t msglen = strlen(msg);
      size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) };
      void* chunks[3];
      if (independent_comalloc(3, sizes, chunks) == 0)
        die();
      struct Head* head = (struct Head*)(chunks[0]);
      char*        body = (char*)(chunks[1]);
      struct Foot* foot = (struct Foot*)(chunks[2]);
      // ...
    }

  In general though, independent_comalloc is worth using only for
  larger values of n_elements. For small values, you probably won't
  detect enough difference from series of malloc calls to bother.

  Overuse of independent_comalloc can increase overall memory usage,
  since it cannot reuse existing noncontiguous small chunks that
  might be available for some of the elements.
*/
DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**);

/*
  bulk_free(void* array[], size_t n_elements)
  Frees and clears (sets to null) each non-null pointer in the given
  array.  This is likely to be faster than freeing them one-by-one.
  If footers are used, pointers that have been allocated in different
  mspaces are not freed or cleared, and the count of all such pointers
  is returned.
  For large arrays of pointers with poor locality, it
  may be worthwhile to sort this array before calling bulk_free.
*/
DLMALLOC_EXPORT size_t  dlbulk_free(void**, size_t n_elements);

/*
  pvalloc(size_t n);
  Equivalent to valloc(minimum-page-that-holds(n)), that is,
  round up n to nearest pagesize.
*/
DLMALLOC_EXPORT void*  dlpvalloc(size_t);

/*
  malloc_trim(size_t pad);

  If possible, gives memory back to the system (via negative arguments
  to sbrk) if there is unused memory at the `high' end of the malloc
  pool or in unused MMAP segments. You can call this after freeing
  large blocks of memory to potentially reduce the system-level memory
  requirements of a program. However, it cannot guarantee to reduce
  memory. Under some allocation patterns, some large free blocks of
  memory will be locked between two used chunks, so they cannot be
  given back to the system.

  The `pad' argument to malloc_trim represents the amount of free
  trailing space to leave untrimmed. If this argument is zero, only
  the minimum amount of memory to maintain internal data structures
  will be left. Non-zero arguments can be supplied to maintain enough
  trailing space to service future expected allocations without having
  to re-obtain memory from the system.

  malloc_trim returns 1 if it actually released any memory, else 0.
*/
DLMALLOC_EXPORT int  dlmalloc_trim(size_t);

/*
  malloc_stats();
  Prints on stderr the amount of space obtained from the system (both
  via sbrk and mmap), the maximum amount (which may be more than
  current if malloc_trim and/or munmap got called), and the current
  number of bytes allocated via malloc (or realloc, etc) but not yet
  freed. Note that this is the number of bytes allocated, not the
  number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than
  zero even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/
DLMALLOC_EXPORT void  dlmalloc_stats(void);

/*
  malloc_usable_size(void* p);

  Returns the number of bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not) due to alignment and minimum size constraints.
  You can use this many bytes without worrying about
  overwriting other allocated objects. This is not a particularly great
  programming practice. malloc_usable_size can be more useful in
  debugging and assertions, for example:

    p = malloc(n);
    assert(malloc_usable_size(p) >= 256);
*/
/* BEGIN android-changed: added const */
size_t dlmalloc_usable_size(const void*);
/* END android-changed */

#endif /* ONLY_MSPACES */

#if MSPACES

/*
  mspace is an opaque type representing an independent
  region of space that supports mspace_malloc, etc.
*/
typedef void* mspace;

/*
  create_mspace creates and returns a new independent space with the
  given initial capacity, or, if 0, the default granularity size. It
  returns null if there is no system memory available to create the
  space. If argument locked is non-zero, the space uses a separate
  lock to control access. The capacity of the space will grow
  dynamically as needed to service mspace_malloc requests. You can
  control the sizes of incremental increases of this space by
  compiling with a different DEFAULT_GRANULARITY or dynamically
  setting with mallopt(M_GRANULARITY, value).
*/
DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);

/*
  destroy_mspace destroys the given space, and attempts to return all
  of its memory back to the system, returning the total number of
  bytes freed. After destruction, the results of access to all memory
  used by the space become undefined.
*/
DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);
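/*
  A sketch of the typical create/use/destroy cycle (sizes are
  illustrative; see mspace_track_large_chunks below for the caveat
  about large mmapped chunks and destroy_mspace):

    mspace arena = create_mspace(0, 0);   // default capacity, no locking
    if (arena != 0) {
      void* p = mspace_malloc(arena, 128);
      // ... use p; individual frees are optional if the whole
      // space is destroyed at once ...
      mspace_free(arena, p);
      size_t released = destroy_mspace(arena);  // returns bytes freed
    }
*/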
*/
typedef void* mspace;

/*
  create_mspace creates and returns a new independent space with the
  given initial capacity, or, if 0, the default granularity size.  It
  returns null if there is no system memory available to create the
  space.  If argument locked is non-zero, the space uses a separate
  lock to control access. The capacity of the space will grow
  dynamically as needed to service mspace_malloc requests.  You can
  control the sizes of incremental increases of this space by
  compiling with a different DEFAULT_GRANULARITY or dynamically
  setting with mallopt(M_GRANULARITY, value).
*/
DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked);

/*
  destroy_mspace destroys the given space, and attempts to return all
  of its memory back to the system, returning the total number of
  bytes freed. After destruction, the results of access to all memory
  used by the space become undefined.
*/
DLMALLOC_EXPORT size_t destroy_mspace(mspace msp);

/*
  create_mspace_with_base uses the memory supplied as the initial base
  of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this
  space is used for bookkeeping, so the capacity must be at least this
  large. (Otherwise 0 is returned.) When this initial space is
  exhausted, additional memory will be obtained from the system.
  Destroying this space will deallocate all additionally allocated
  space (if possible) but not the initial base.
*/
DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked);

/*
  mspace_track_large_chunks controls whether requests for large chunks
  are allocated in their own untracked mmapped regions, separate from
  others in this mspace. By default large chunks are not tracked,
  which reduces fragmentation. However, such chunks are not
  necessarily released to the system upon destroy_mspace. Enabling
  tracking by setting to true may increase fragmentation, but avoids
  leakage when relying on destroy_mspace to release all memory
  allocated using this space. The function returns the previous
  setting.
*/
DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable);


/*
  mspace_malloc behaves as malloc, but operates within
  the given space.
*/
DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes);

/*
  mspace_free behaves as free, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_free is not actually needed.
  free may be called instead of mspace_free because freed chunks from
  any space are handled by their originating spaces.
*/
DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem);

/*
  mspace_realloc behaves as realloc, but operates within
  the given space.

  If compiled with FOOTERS==1, mspace_realloc is not actually
  needed. realloc may be called instead of mspace_realloc because
  realloced chunks from any space are handled by their originating
  spaces.
*/
DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize);

/*
  mspace_calloc behaves as calloc, but operates within
  the given space.
*/
DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);

/*
  mspace_memalign behaves as memalign, but operates within
  the given space.
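
  For example (a sketch; as with memalign, the alignment argument
  should be a power of two):

    void* p = mspace_memalign(msp, 64, 1024); // 64-byte-aligned block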
1356 */ 1357 DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes); 1358 1359 /* 1360 mspace_independent_calloc behaves as independent_calloc, but 1361 operates within the given space. 1362 */ 1363 DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements, 1364 size_t elem_size, void* chunks[]); 1365 1366 /* 1367 mspace_independent_comalloc behaves as independent_comalloc, but 1368 operates within the given space. 1369 */ 1370 DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements, 1371 size_t sizes[], void* chunks[]); 1372 1373 /* 1374 mspace_footprint() returns the number of bytes obtained from the 1375 system for this space. 1376 */ 1377 DLMALLOC_EXPORT size_t mspace_footprint(mspace msp); 1378 1379 /* 1380 mspace_max_footprint() returns the peak number of bytes obtained from the 1381 system for this space. 1382 */ 1383 DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp); 1384 1385 1386 #if !NO_MALLINFO 1387 /* 1388 mspace_mallinfo behaves as mallinfo, but reports properties of 1389 the given space. 1390 */ 1391 DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp); 1392 #endif /* NO_MALLINFO */ 1393 1394 /* 1395 malloc_usable_size(void* p) behaves the same as malloc_usable_size; 1396 */ 1397 DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem); 1398 1399 /* 1400 mspace_malloc_stats behaves as malloc_stats, but reports 1401 properties of the given space. 1402 */ 1403 DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp); 1404 1405 /* 1406 mspace_trim behaves as malloc_trim, but 1407 operates within the given space. 1408 */ 1409 DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad); 1410 1411 /* 1412 An alias for mallopt. 1413 */ 1414 DLMALLOC_EXPORT int mspace_mallopt(int, int); 1415 1416 #endif /* MSPACES */ 1417 1418 #ifdef __cplusplus 1419 } /* end of extern "C" */ 1420 #endif /* __cplusplus */ 1421 1422 /* 1423 ======================================================================== 1424 To make a fully customizable malloc.h header file, cut everything 1425 above this line, put into file malloc.h, edit to suit, and #include it 1426 on the next line, as well as in programs that use this malloc. 
  ========================================================================
*/

/* #include "malloc.h" */

/*------------------------------ internal #includes ---------------------- */

#ifdef _MSC_VER
#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
#endif /* _MSC_VER */
#if !NO_MALLOC_STATS
#include <stdio.h>       /* for printing in malloc_stats */
#endif /* NO_MALLOC_STATS */
#ifndef LACKS_ERRNO_H
#include <errno.h>       /* for MALLOC_FAILURE_ACTION */
#endif /* LACKS_ERRNO_H */
#ifdef DEBUG
#if ABORT_ON_ASSERT_FAILURE
#undef assert
#define assert(x) if(!(x)) ABORT
#else /* ABORT_ON_ASSERT_FAILURE */
#include <assert.h>
#endif /* ABORT_ON_ASSERT_FAILURE */
#else  /* DEBUG */
#ifndef assert
#define assert(x)
#endif
#define DEBUG 0
#endif /* DEBUG */
#if !defined(WIN32) && !defined(LACKS_TIME_H)
#include <time.h>        /* for magic initialization */
#endif /* WIN32 */
#ifndef LACKS_STDLIB_H
#include <stdlib.h>      /* for abort() */
#endif /* LACKS_STDLIB_H */
#ifndef LACKS_STRING_H
#include <string.h>      /* for memset etc */
#endif /* LACKS_STRING_H */
#if USE_BUILTIN_FFS
#ifndef LACKS_STRINGS_H
#include <strings.h>     /* for ffs */
#endif /* LACKS_STRINGS_H */
#endif /* USE_BUILTIN_FFS */
#if HAVE_MMAP
#ifndef LACKS_SYS_MMAN_H
/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
#if (defined(linux) && !defined(__USE_GNU))
#define __USE_GNU 1
#include <sys/mman.h>    /* for mmap */
#undef __USE_GNU
#else
#include <sys/mman.h>    /* for mmap */
#endif /* linux */
#endif /* LACKS_SYS_MMAN_H */
#ifndef LACKS_FCNTL_H
#include <fcntl.h>
#endif /* LACKS_FCNTL_H */
#endif /* HAVE_MMAP */
#ifndef LACKS_UNISTD_H
#include <unistd.h>      /* for sbrk, sysconf */
#else /* LACKS_UNISTD_H */
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
extern void* sbrk(ptrdiff_t);
#endif /* FreeBSD etc */
#endif /* LACKS_UNISTD_H */

/* Declarations for locking */
#if USE_LOCKS
#ifndef WIN32
#if defined (__SVR4) && defined (__sun)  /* solaris */
#include <thread.h>
#elif !defined(LACKS_SCHED_H)
#include <sched.h>
#endif /* solaris or LACKS_SCHED_H */
#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
#include <pthread.h>
#endif /* USE_RECURSIVE_LOCKS ... */
#elif defined(_MSC_VER)
#ifndef _M_AMD64
/* These are already defined on AMD64 builds */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
#ifdef __cplusplus
}
#endif /* __cplusplus */
#endif /* _M_AMD64 */
#pragma intrinsic (_InterlockedCompareExchange)
#pragma intrinsic (_InterlockedExchange)
#define interlockedcompareexchange _InterlockedCompareExchange
#define interlockedexchange _InterlockedExchange
#elif defined(WIN32) && defined(__GNUC__)
#define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
#define interlockedexchange __sync_lock_test_and_set
#endif /* Win32 */
#else /* USE_LOCKS */
#endif /* USE_LOCKS */

#ifndef LOCK_AT_FORK
#define LOCK_AT_FORK 0
#endif

/* Declarations for bit scanning on win32 */
#if defined(_MSC_VER) && _MSC_VER>=1300
#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
#ifdef __cplusplus
extern "C" {
#endif /* __cplusplus */
unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
#ifdef __cplusplus
}
#endif /* __cplusplus */

#define BitScanForward _BitScanForward
#define BitScanReverse _BitScanReverse
#pragma intrinsic(_BitScanForward)
#pragma intrinsic(_BitScanReverse)
#endif /* BitScanForward */
#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */

#ifndef WIN32
#ifndef malloc_getpagesize
#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif
#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize ((size_t)4096U)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif
#endif

/* ------------------- size_t and alignment properties -------------------- */

/* The byte and bit size of a size_t */
#define SIZE_T_SIZE         (sizeof(size_t))
#define SIZE_T_BITSIZE      (sizeof(size_t) << 3)

/* Some constants coerced to size_t */
/* Annoying but necessary to avoid errors on some platforms */
#define SIZE_T_ZERO         ((size_t)0)
#define SIZE_T_ONE          ((size_t)1)
#define SIZE_T_TWO          ((size_t)2)
#define SIZE_T_FOUR         ((size_t)4)
#define TWO_SIZE_T_SIZES    (SIZE_T_SIZE<<1)
#define FOUR_SIZE_T_SIZES   (SIZE_T_SIZE<<2)
#define SIX_SIZE_T_SIZES    (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
#define HALF_MAX_SIZE_T     (MAX_SIZE_T / 2U)

/* The bit mask value corresponding to MALLOC_ALIGNMENT */
#define CHUNK_ALIGN_MASK    (MALLOC_ALIGNMENT - SIZE_T_ONE)

/* True if address a has acceptable alignment */
#define is_aligned(A)       (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0)

/* the number of bytes to offset an address to align it */
#define align_offset(A)\
 ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\
  ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK))

/* -------------------------- MMAP preliminaries ------------------------- */

/*
   If HAVE_MORECORE or HAVE_MMAP is false, we just define calls and
   checks to fail so the compiler optimizer can delete code rather than
   using so many "#if"s.
*/


/* MORECORE and MMAP must return MFAIL on failure */
#define MFAIL                ((void*)(MAX_SIZE_T))
#define CMFAIL               ((char*)(MFAIL)) /* defined for convenience */

#if HAVE_MMAP

#ifndef WIN32
#define MUNMAP_DEFAULT(a, s)  munmap((a), (s))
#define MMAP_PROT            (PROT_READ|PROT_WRITE)
#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS        MAP_ANON
#endif /* MAP_ANON */
#ifdef MAP_ANONYMOUS
#define MMAP_FLAGS           (MAP_PRIVATE|MAP_ANONYMOUS)
#define MMAP_DEFAULT(s)       mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0)
#else /* MAP_ANONYMOUS */
/*
   Nearly all versions of mmap support MAP_ANONYMOUS, so the following
   is unlikely to be needed, but is supplied just in case.
*/
#define MMAP_FLAGS           (MAP_PRIVATE)
static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */
#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \
           (dev_zero_fd = open("/dev/zero", O_RDWR), \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \
            mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0))
#endif /* MAP_ANONYMOUS */

#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s)

#else /* WIN32 */

/* Win32 MMAP via VirtualAlloc */
static FORCEINLINE void* win32mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}

/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
static FORCEINLINE void* win32direct_mmap(size_t size) {
  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
                           PAGE_READWRITE);
  return (ptr != 0)? ptr: MFAIL;
}
/* This function supports releasing coalesced segments */
static FORCEINLINE int win32munmap(void* ptr, size_t size) {
  MEMORY_BASIC_INFORMATION minfo;
  char* cptr = (char*)ptr;
  while (size) {
    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
      return -1;
    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
      return -1;
    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
      return -1;
    cptr += minfo.RegionSize;
    size -= minfo.RegionSize;
  }
  return 0;
}

#define MMAP_DEFAULT(s)             win32mmap(s)
#define MUNMAP_DEFAULT(a, s)        win32munmap((a), (s))
#define DIRECT_MMAP_DEFAULT(s)      win32direct_mmap(s)
#endif /* WIN32 */
#endif /* HAVE_MMAP */

#if HAVE_MREMAP
#ifndef WIN32
#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
#endif /* WIN32 */
#endif /* HAVE_MREMAP */

/**
 * Define CALL_MORECORE
 */
#if HAVE_MORECORE
    #ifdef MORECORE
        #define CALL_MORECORE(S)    MORECORE(S)
    #else  /* MORECORE */
        #define CALL_MORECORE(S)    MORECORE_DEFAULT(S)
    #endif /* MORECORE */
#else  /* HAVE_MORECORE */
    #define CALL_MORECORE(S)        MFAIL
#endif /* HAVE_MORECORE */

/**
 * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
 */
#if HAVE_MMAP
    #define USE_MMAP_BIT            (SIZE_T_ONE)

    #ifdef MMAP
        #define CALL_MMAP(s)        MMAP(s)
    #else /* MMAP */
        #define CALL_MMAP(s)        MMAP_DEFAULT(s)
    #endif /* MMAP */
    #ifdef MUNMAP
        #define CALL_MUNMAP(a, s)   MUNMAP((a), (s))
    #else /* MUNMAP */
        #define CALL_MUNMAP(a, s)   MUNMAP_DEFAULT((a), (s))
    #endif /* MUNMAP */
    #ifdef DIRECT_MMAP
        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
    #else /* DIRECT_MMAP */
        #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
    #endif /* DIRECT_MMAP */
#else  /* HAVE_MMAP */
    #define USE_MMAP_BIT            (SIZE_T_ZERO)

    #define MMAP(s)                 MFAIL
    #define MUNMAP(a, s)            (-1)
    #define DIRECT_MMAP(s)          MFAIL
    #define CALL_DIRECT_MMAP(s)     DIRECT_MMAP(s)
    #define CALL_MMAP(s)            MMAP(s)
    #define CALL_MUNMAP(a, s)       MUNMAP((a), (s))
#endif /* HAVE_MMAP */

/**
 * Define CALL_MREMAP
 */
#if HAVE_MMAP && HAVE_MREMAP
    #ifdef MREMAP
        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
    #else /* MREMAP */
        #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
    #endif /* MREMAP */
#else  /* HAVE_MMAP && HAVE_MREMAP */
    #define CALL_MREMAP(addr, osz, nsz, mv)     MFAIL
#endif /* HAVE_MMAP && HAVE_MREMAP */

/* mstate bit set if contiguous morecore disabled or failed */
#define USE_NONCONTIGUOUS_BIT (4U)

/* segment bit set in create_mspace_with_base */
#define EXTERN_BIT            (8U)


/* --------------------------- Lock preliminaries ------------------------ */

/*
  When locks are defined, there is one global lock, plus
  one per-mspace lock.

  The global lock ensures that mparams.magic and other unique
  mparams values are initialized only once. It also protects
  sequences of calls to MORECORE.  In many cases sys_alloc requires
  two calls that should not be interleaved with calls by other
  threads.
  This does not protect against direct calls to MORECORE
  by other threads not using this lock, so there is still code to
  cope as best we can with interference.

  Per-mspace locks surround calls to malloc, free, etc.
  By default, locks are simple non-reentrant mutexes.

  Because lock-protected regions generally have bounded times, it is
  OK to use the supplied simple spinlocks. Spinlocks are likely to
  improve performance for lightly contended applications, but worsen
  performance under heavy contention.

  If USE_LOCKS is > 1, the definitions of lock routines here are
  bypassed, in which case you will need to define the type MLOCK_T,
  and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK
  and TRY_LOCK.  You must also declare a
    static MLOCK_T malloc_global_mutex = { initialization values };.

*/

#if !USE_LOCKS
#define USE_LOCK_BIT               (0U)
#define INITIAL_LOCK(l)            (0)
#define DESTROY_LOCK(l)            (0)
#define ACQUIRE_MALLOC_GLOBAL_LOCK()
#define RELEASE_MALLOC_GLOBAL_LOCK()

#else
#if USE_LOCKS > 1
/* -----------------------  User-defined locks ------------------------ */
/* Define your own lock implementation here */
/* #define INITIAL_LOCK(lk)  ... */
/* #define DESTROY_LOCK(lk)  ... */
/* #define ACQUIRE_LOCK(lk)  ... */
/* #define RELEASE_LOCK(lk)  ... */
/* #define TRY_LOCK(lk) ... */
/* static MLOCK_T malloc_global_mutex = ... */

#elif USE_SPIN_LOCKS

/* First, define CAS_LOCK and CLEAR_LOCK on ints */
/* Note CAS_LOCK defined to return 0 on success */

#if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
#define CAS_LOCK(sl)     __sync_lock_test_and_set(sl, 1)
#define CLEAR_LOCK(sl)   __sync_lock_release(sl)

#elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)))
/* Custom spin locks for older gcc on x86 */
static FORCEINLINE int x86_cas_lock(int *sl) {
  int ret;
  int val = 1;
  int cmp = 0;
  __asm__ __volatile__  ("lock; cmpxchgl %1, %2"
                         : "=a" (ret)
                         : "r" (val), "m" (*(sl)), "0"(cmp)
                         : "memory", "cc");
  return ret;
}

static FORCEINLINE void x86_clear_lock(int* sl) {
  assert(*sl != 0);
  int prev = 0;
  int ret;
  __asm__ __volatile__ ("lock; xchgl %0, %1"
                        : "=r" (ret)
                        : "m" (*(sl)), "0"(prev)
                        : "memory");
}

#define CAS_LOCK(sl)     x86_cas_lock(sl)
#define CLEAR_LOCK(sl)   x86_clear_lock(sl)

#else /* Win32 MSC */
#define CAS_LOCK(sl)     interlockedexchange(sl, (LONG)1)
#define CLEAR_LOCK(sl)   interlockedexchange (sl, (LONG)0)

#endif /* ... gcc spin locks ... */

/* How to yield for a spin lock */
#define SPINS_PER_YIELD       63
#if defined(_MSC_VER)
#define SLEEP_EX_DURATION     50 /* delay for yield/sleep */
#define SPIN_LOCK_YIELD  SleepEx(SLEEP_EX_DURATION, FALSE)
#elif defined (__SVR4) && defined (__sun) /* solaris */
#define SPIN_LOCK_YIELD   thr_yield();
#elif !defined(LACKS_SCHED_H)
#define SPIN_LOCK_YIELD   sched_yield();
#else
#define SPIN_LOCK_YIELD
#endif /* ... yield ... */

#if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0
/* Plain spin locks use single word (embedded in malloc_states) */
static int spin_acquire_lock(int *sl) {
  int spins = 0;
  while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) {
    if ((++spins & SPINS_PER_YIELD) == 0) {
      SPIN_LOCK_YIELD;
    }
  }
  return 0;
}

#define MLOCK_T               int
#define TRY_LOCK(sl)          !CAS_LOCK(sl)
#define RELEASE_LOCK(sl)      CLEAR_LOCK(sl)
#define ACQUIRE_LOCK(sl)      (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0)
#define INITIAL_LOCK(sl)      (*sl = 0)
#define DESTROY_LOCK(sl)      (0)
static MLOCK_T malloc_global_mutex = 0;

#else /* USE_RECURSIVE_LOCKS */
/* types for lock owners */
#ifdef WIN32
#define THREAD_ID_T           DWORD
#define CURRENT_THREAD        GetCurrentThreadId()
#define EQ_OWNER(X,Y)         ((X) == (Y))
#else
/*
  Note: the following assume that pthread_t is a type that can be
  initialized to (a cast of) zero. If this is not the case, you will
  need to somehow redefine these or not use spin locks.
*/
#define THREAD_ID_T           pthread_t
#define CURRENT_THREAD        pthread_self()
#define EQ_OWNER(X,Y)         pthread_equal(X, Y)
#endif

struct malloc_recursive_lock {
  int sl;
  unsigned int c;
  THREAD_ID_T threadid;
};

#define MLOCK_T  struct malloc_recursive_lock
static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0};

static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) {
  assert(lk->sl != 0);
  if (--lk->c == 0) {
    CLEAR_LOCK(&lk->sl);
  }
}

static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) {
  THREAD_ID_T mythreadid = CURRENT_THREAD;
  int spins = 0;
  for (;;) {
    if (*((volatile int *)(&lk->sl)) == 0) {
      if (!CAS_LOCK(&lk->sl)) {
        lk->threadid = mythreadid;
        lk->c = 1;
        return 0;
      }
    }
    else if (EQ_OWNER(lk->threadid, mythreadid)) {
      ++lk->c;
      return 0;
    }
    if ((++spins & SPINS_PER_YIELD) == 0) {
      SPIN_LOCK_YIELD;
    }
  }
}

static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) {
  THREAD_ID_T mythreadid = CURRENT_THREAD;
  if (*((volatile int *)(&lk->sl)) == 0) {
    if (!CAS_LOCK(&lk->sl)) {
      lk->threadid = mythreadid;
      lk->c = 1;
      return 1;
    }
  }
  else if (EQ_OWNER(lk->threadid, mythreadid)) {
    ++lk->c;
    return 1;
  }
  return 0;
}

#define RELEASE_LOCK(lk)      recursive_release_lock(lk)
#define TRY_LOCK(lk)          recursive_try_lock(lk)
#define ACQUIRE_LOCK(lk)      recursive_acquire_lock(lk)
#define INITIAL_LOCK(lk)      ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0)
#define DESTROY_LOCK(lk)      (0)
#endif /* USE_RECURSIVE_LOCKS */

#elif defined(WIN32) /* Win32 critical sections */
#define MLOCK_T               CRITICAL_SECTION
#define ACQUIRE_LOCK(lk)      (EnterCriticalSection(lk), 0)
#define RELEASE_LOCK(lk)      LeaveCriticalSection(lk)
#define TRY_LOCK(lk)          TryEnterCriticalSection(lk)
#define INITIAL_LOCK(lk)      (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000))
#define DESTROY_LOCK(lk)      (DeleteCriticalSection(lk), 0)
#define NEED_GLOBAL_LOCK_INIT

static MLOCK_T malloc_global_mutex;
static volatile LONG malloc_global_mutex_status;

/* Use spin loop to initialize global lock */
static void init_malloc_global_mutex() {
  for (;;) {
    long stat = malloc_global_mutex_status;
    if (stat > 0)
      return;
    /* transition to < 0 while initializing, then to > 0 */
    if (stat == 0 &&
        interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) {
      InitializeCriticalSection(&malloc_global_mutex);
      interlockedexchange(&malloc_global_mutex_status, (LONG)1);
      return;
    }
    SleepEx(0, FALSE);
  }
}

#else /* pthreads-based locks */
#define MLOCK_T               pthread_mutex_t
#define ACQUIRE_LOCK(lk)      pthread_mutex_lock(lk)
#define RELEASE_LOCK(lk)      pthread_mutex_unlock(lk)
#define TRY_LOCK(lk)          (!pthread_mutex_trylock(lk))
#define INITIAL_LOCK(lk)      pthread_init_lock(lk)
#define DESTROY_LOCK(lk)      pthread_mutex_destroy(lk)

#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE)
/* Cope with old-style linux recursive lock initialization by adding */
/* skipped internal declaration from pthread.h */
extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr,
                                              int __kind));
#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP
#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y)
#endif /* USE_RECURSIVE_LOCKS ... */

static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER;

static int pthread_init_lock (MLOCK_T *lk) {
  pthread_mutexattr_t attr;
  if (pthread_mutexattr_init(&attr)) return 1;
#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0
  if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1;
#endif
  if (pthread_mutex_init(lk, &attr)) return 1;
  if (pthread_mutexattr_destroy(&attr)) return 1;
  return 0;
}

#endif /* ... lock types ... */

/* Common code for all lock types */
#define USE_LOCK_BIT               (2U)

#ifndef ACQUIRE_MALLOC_GLOBAL_LOCK
#define ACQUIRE_MALLOC_GLOBAL_LOCK()  ACQUIRE_LOCK(&malloc_global_mutex);
#endif

#ifndef RELEASE_MALLOC_GLOBAL_LOCK
#define RELEASE_MALLOC_GLOBAL_LOCK()  RELEASE_LOCK(&malloc_global_mutex);
#endif

#endif /* USE_LOCKS */

/* -----------------------  Chunk representations ------------------------ */

/*
  (The following includes lightly edited explanations by Colin Plumb.)

  The malloc_chunk declaration below is misleading (but accurate and
  necessary).  It declares a "view" into memory allowing access to
  necessary fields at known offsets from a given base.

  Chunks of memory are maintained using a `boundary tag' method as
  originally described by Knuth.  (See the paper by Paul Wilson
  ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such
  techniques.)  Sizes of free chunks are stored both in the front of
  each chunk and at the end.  This makes consolidating fragmented
  chunks into bigger chunks fast.  The head fields also hold bits
  representing whether chunks are free or in use.

  Here are some pictures to make it clearer.  They are "exploded" to
  show that the state of a chunk can be thought of as extending from
  the high 31 bits of the head field of its header through the
  prev_foot and PINUSE_BIT bit of the following chunk header.

  A chunk that's in use looks like:

   chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
           | Size of previous chunk (if P = 0)                             |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         1| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               |
         +-                                                             -+
         |                                                               |
         +-                                                             -+
         |                                                               :
         +-      size - sizeof(size_t) available payload bytes          -+
         :                                                               |
 chunk-> +-                                                             -+
         |                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1|
       | Size of next chunk (may or may not be in use)               | +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  And if it's free, it looks like this:

   chunk-> +-                                                             -+
           | User payload (must be in use, or we would have merged!)       |
           +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P|
         | Size of this chunk                                         0| +-+
   mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Next pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Prev pointer                                                  |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +-      size - sizeof(struct chunk) unused bytes               -+
         :                                                               |
 chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         | Size of this chunk                                            |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
       +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0|
       | Size of next chunk (must be in use, or we would have merged)| +-+
 mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
         |                                                               :
         +- User payload                                                -+
         :                                                               |
         +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                                                                       |0|
                                                                       +-+

  Note that since we always merge adjacent free chunks, the chunks
  adjacent to a free chunk must be in use.

  Given a pointer to a chunk (which can be derived trivially from the
  payload pointer) we can, in O(1) time, find out whether the adjacent
  chunks are free, and if so, unlink them from the lists that they
  are on and merge them with the current chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus at least double-word aligned.

  The P (PINUSE_BIT) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk.  If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set, preventing
  access to non-existent (or non-owned) memory. If pinuse is set for
  any given chunk, then you CANNOT determine the size of the
  previous chunk, and might even get a memory addressing fault when
  trying to do so.

  The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of
  the chunk size redundantly records whether the current chunk is
  inuse (unless the chunk is mmapped).
  This redundancy enables usage
  checks within free and realloc, and reduces indirection when freeing
  and consolidating chunks.

  Each freshly allocated chunk must have both cinuse and pinuse set.
  That is, each allocated chunk borders either a previously allocated
  and still in-use chunk, or the base of its memory arena. This is
  ensured by making all allocations from the `lowest' part of any
  found chunk.  Further, no free chunk physically borders another one,
  so each free chunk is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Note that the `foot' of the current chunk is actually represented
  as the prev_foot of the NEXT chunk. This makes it easier to
  deal with alignments etc but can be very confusing when trying
  to extend or adapt this code.

  The exceptions to all this are

     1. The special chunk `top' is the top-most available chunk (i.e.,
        the one bordering the end of available memory). It is treated
        specially.  Top is never included in any bin, is used only if
        no other chunk is available, and is released back to the
        system if it is very large (see M_TRIM_THRESHOLD).  In effect,
        the top chunk is treated as larger (and thus less well
        fitting) than any other available chunk.  The top chunk
        doesn't update its trailing size field since there is no next
        contiguous chunk that would have to index off it. However,
        space is still allocated for it (TOP_FOOT_SIZE) to enable
        separation or merging when space is extended.

     2. Chunks allocated via mmap have both cinuse and pinuse bits
        cleared in their head fields.  Because they are allocated
        one-by-one, each must carry its own prev_foot field, which is
        also used to hold the offset this chunk has within its mmapped
        region, which is needed to preserve alignment. Each mmapped
        chunk is trailed by the first two fields of a fake next-chunk
        for sake of usage checks.

*/

struct malloc_chunk {
  size_t               prev_foot;  /* Size of previous chunk (if free).  */
  size_t               head;       /* Size and inuse bits. */
  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk  mchunk;
typedef struct malloc_chunk* mchunkptr;
typedef struct malloc_chunk* sbinptr;  /* The type of bins of chunks */
typedef unsigned int bindex_t;         /* Described below */
typedef unsigned int binmap_t;         /* Described below */
typedef unsigned int flag_t;           /* The type of various bit flag sets */

/* ------------------- Chunks sizes and alignments ----------------------- */

#define MCHUNK_SIZE         (sizeof(mchunk))

#if FOOTERS
#define CHUNK_OVERHEAD      (TWO_SIZE_T_SIZES)
#else /* FOOTERS */
#define CHUNK_OVERHEAD      (SIZE_T_SIZE)
#endif /* FOOTERS */

/* MMapped chunks need a second word of overhead ... */
#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES)
/* ... and additional padding for fake next-chunk at foot */
#define MMAP_FOOT_PAD       (FOUR_SIZE_T_SIZES)

/* The smallest size we can malloc is an aligned minimal chunk */
#define MIN_CHUNK_SIZE\
  ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* conversion from malloc headers to user pointers, and back */
#define chunk2mem(p)        ((void*)((char*)(p)       + TWO_SIZE_T_SIZES))
#define mem2chunk(mem)      ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES))
/* chunk associated with aligned address A */
#define align_as_chunk(A)   (mchunkptr)((A) + align_offset(chunk2mem(A)))

/* Bounds on request (not chunk) sizes. */
#define MAX_REQUEST         ((-MIN_CHUNK_SIZE) << 2)
#define MIN_REQUEST         (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE)

/* pad request bytes into a usable size */
#define pad_request(req) \
   (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK)

/* pad request, checking for minimum (but not maximum) */
#define request2size(req) \
  (((req) < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(req))


/* ------------------ Operations on head and foot fields ----------------- */

/*
  The head field of a chunk is or'ed with PINUSE_BIT when the previous
  adjacent chunk is in use, and or'ed with CINUSE_BIT if this chunk is
  in use, unless mmapped, in which case both bits are cleared.

  FLAG4_BIT is not used by this malloc, but might be useful in extensions.
*/

#define PINUSE_BIT          (SIZE_T_ONE)
#define CINUSE_BIT          (SIZE_T_TWO)
#define FLAG4_BIT           (SIZE_T_FOUR)
#define INUSE_BITS          (PINUSE_BIT|CINUSE_BIT)
#define FLAG_BITS           (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT)

/* Head value for fenceposts */
#define FENCEPOST_HEAD      (INUSE_BITS|SIZE_T_SIZE)

/* extraction of fields from head words */
#define cinuse(p)           ((p)->head & CINUSE_BIT)
#define pinuse(p)           ((p)->head & PINUSE_BIT)
#define flag4inuse(p)       ((p)->head & FLAG4_BIT)
#define is_inuse(p)         (((p)->head & INUSE_BITS) != PINUSE_BIT)
#define is_mmapped(p)       (((p)->head & INUSE_BITS) == 0)

#define chunksize(p)        ((p)->head & ~(FLAG_BITS))

#define clear_pinuse(p)     ((p)->head &= ~PINUSE_BIT)
#define set_flag4(p)        ((p)->head |= FLAG4_BIT)
#define clear_flag4(p)      ((p)->head &= ~FLAG4_BIT)

/* Treat space at ptr +/- offset as a chunk */
#define chunk_plus_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s)))

/* Ptr to next or previous physical malloc_chunk. */
#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS)))
#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) ))

/* extract next chunk's pinuse bit */
#define next_pinuse(p)  ((next_chunk(p)->head) & PINUSE_BIT)

/* Get/set size at footer */
#define get_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot)
#define set_foot(p, s)  (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s))

/* Set size, pinuse bit, and foot */
#define set_size_and_pinuse_of_free_chunk(p, s)\
  ((p)->head = (s|PINUSE_BIT), set_foot(p, s))

/* Set size, pinuse bit, foot, and clear next pinuse */
#define set_free_with_pinuse(p, s, n)\
  (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s))

/* Get the internal overhead associated with chunk p */
#define overhead_for(p)\
 (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD)
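
/*
  For example (an illustrative sketch, not used by this malloc): given
  a payload pointer mem returned by malloc, the macros above expose the
  chunk metadata:

    mchunkptr p = mem2chunk(mem);
    size_t sz = chunksize(p);         // size with flag bits masked off
    int this_in_use = cinuse(p) != 0; // CINUSE_BIT of this chunk
    int prev_in_use = pinuse(p) != 0; // PINUSE_BIT: previous chunk in use
*/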
/* Return true if malloced space is not necessarily cleared */
#if MMAP_CLEARS
#define calloc_must_clear(p) (!is_mmapped(p))
#else /* MMAP_CLEARS */
#define calloc_must_clear(p) (1)
#endif /* MMAP_CLEARS */

/* ---------------------- Overlaid data structures ----------------------- */

/*
  When chunks are not in use, they are treated as nodes of either
  lists or trees.

  "Small" chunks are stored in circular doubly-linked lists, and look
  like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Larger chunks are kept in a form of bitwise digital trees (aka
  tries) keyed on chunksizes.  Because malloc_tree_chunks are only for
  free chunks greater than 256 bytes, their size doesn't impose any
  constraints on user chunk sizes.  Each node looks like:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk of same size        |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk of same size       |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to left child (child[0])                  |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to right child (child[1])                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Pointer to parent                                 |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             bin index of this chunk                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space                                      .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Each tree holding treenodes is a tree of unique chunk sizes.  Chunks
  of the same size are arranged in a circularly-linked list, with only
  the oldest chunk (the next to be used, in our FIFO ordering)
  actually in the tree.  (Tree members are distinguished by a non-null
  parent pointer.)  If a chunk with the same size as an existing node
  is inserted, it is linked off the existing node using pointers that
  work in the same way as fd/bk pointers of small chunks.

  Each tree contains a power of 2 sized range of chunk sizes (the
  smallest is 0x100 <= x < 0x180), which is divided in half at each
  tree level, with the chunks in the smaller half of the range (0x100
  <= x < 0x140 for the top node) in the left subtree and the larger
  half (0x140 <= x < 0x180) in the right subtree.  This is, of course,
  done by inspecting individual bits.

  Using these rules, each node's left subtree contains all smaller
  sizes than its right subtree.  However, the node at the root of each
  subtree has no particular ordering relationship to either.  (The
  dividing line between the subtree sizes is based on trie relation.)
  If we remove the last chunk of a given size from the interior of the
  tree, we need to replace it with a leaf node.  The tree ordering
  rules permit a node to be replaced by any leaf below it.

  The smallest chunk in a tree (a common operation in a best-fit
  allocator) can be found by walking a path to the leftmost leaf in
  the tree.  Unlike a usual binary tree, where we follow left child
  pointers until we reach a null, here we follow the right child
  pointer any time the left one is null, until we reach a leaf with
  both child pointers null. The smallest chunk in the tree will be
  somewhere along that path.

  The worst case number of steps to add, find, or remove a node is
  bounded by the number of bits differentiating chunks within
  bins. Under current bin calculations, this ranges from 6 up to 21
  (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case
  is of course much better.
*/

struct malloc_tree_chunk {
  /* The first four fields must be compatible with malloc_chunk */
  size_t                    prev_foot;
  size_t                    head;
  struct malloc_tree_chunk* fd;
  struct malloc_tree_chunk* bk;

  struct malloc_tree_chunk* child[2];
  struct malloc_tree_chunk* parent;
  bindex_t                  index;
};

typedef struct malloc_tree_chunk  tchunk;
typedef struct malloc_tree_chunk* tchunkptr;
typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */

/* A little helper macro for trees */
#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1])

/* ----------------------------- Segments -------------------------------- */

/*
  Each malloc space may include non-contiguous segments, held in a
  list headed by an embedded malloc_segment record representing the
  top-most space. Segments also include flags holding properties of
  the space. Large chunks that are directly allocated by mmap are not
  included in this list. They are instead independently created and
  destroyed without otherwise keeping track of them.

  Segment management mainly comes into play for spaces allocated by
  MMAP.  Any call to MMAP might or might not return memory that is
  adjacent to an existing segment.  MORECORE normally contiguously
  extends the current space, so this space is almost always adjacent,
  which is simpler and faster to deal with. (This is why MORECORE is
  used preferentially to MMAP when both are available -- see
  sys_alloc.)  When allocating using MMAP, we don't use any of the
  hinting mechanisms (inconsistently) supported in various
  implementations of unix mmap, or distinguish reserving from
  committing memory. Instead, we just ask for space, and exploit
  contiguity when we get it.
  It is probably possible to do
  better than this on some systems, but no general scheme seems
  to be significantly better.

  Management entails a simpler variant of the consolidation scheme
  used for chunks to reduce fragmentation -- new adjacent memory is
  normally prepended or appended to an existing segment. However,
  there are limitations compared to chunk consolidation that mostly
  reflect the fact that segment processing is relatively infrequent
  (occurring only when getting memory from the system) and that we
  don't expect to have huge numbers of segments:

  * Segments are not indexed, so traversal requires linear scans. (It
    would be possible to index these, but is not worth the extra
    overhead and complexity for most programs on most platforms.)
  * New segments are only appended to old ones when holding top-most
    memory; if they cannot be prepended to others, they are held in
    different segments.

  Except for the top-most segment of an mstate, each segment record
  is kept at the tail of its segment. Segments are added by pushing
  segment records onto the list headed by &mstate.seg for the
  containing mstate.

  Segment flags control allocation/merge/deallocation policies:
  * If EXTERN_BIT set, then we did not allocate this segment,
    and so should not try to deallocate or merge with others.
    (This currently holds only for the initial segment passed
    into create_mspace_with_base.)
  * If USE_MMAP_BIT set, the segment may be merged with
    other surrounding mmapped segments and trimmed/de-allocated
    using munmap.
  * If neither bit is set, then the segment was obtained using
    MORECORE so can be merged with surrounding MORECORE'd segments
    and deallocated/trimmed using MORECORE with negative arguments.
*/

struct malloc_segment {
  char*        base;             /* base address */
  size_t       size;             /* allocated size */
  struct malloc_segment* next;   /* ptr to next segment */
  flag_t       sflags;           /* mmap and extern flag */
};

#define is_mmapped_segment(S)  ((S)->sflags & USE_MMAP_BIT)
#define is_extern_segment(S)   ((S)->sflags & EXTERN_BIT)

typedef struct malloc_segment  msegment;
typedef struct malloc_segment* msegmentptr;

/* ---------------------------- malloc_state ----------------------------- */

/*
   A malloc_state holds all of the bookkeeping for a space.
   The main fields are:

  Top
    The topmost chunk of the currently active segment. Its size is
    cached in topsize.  The actual size of topmost space is
    topsize+TOP_FOOT_SIZE, which includes space reserved for adding
    fenceposts and segment records if necessary when getting more
    space from the system.  The size at which to autotrim top is
    cached from mparams in trim_check, except that it is disabled if
    an autotrim fails.

  Designated victim (dv)
    This is the preferred chunk for servicing small requests that
    don't have exact fits.  It is normally the chunk split off most
    recently to service another small request.  Its size is cached in
    dvsize. The link fields of this chunk are not maintained since it
    is not kept in a bin.

  SmallBins
    An array of bin headers for free chunks.  These bins hold chunks
    with sizes less than MIN_LARGE_SIZE bytes. Each bin contains
    chunks of all the same size, spaced 8 bytes apart.
  To simplify
    use in double-linked lists, each bin header acts as a malloc_chunk
    pointing to the real first node, if it exists (else pointing to
    itself).  This avoids special-casing for headers.  But to avoid
    waste, we allocate only the fd/bk pointers of bins, and then use
    repositioning tricks to treat these as the fields of a chunk.

  TreeBins
    Treebins are pointers to the roots of trees holding a range of
    sizes. There are 2 equally spaced treebins for each power of two
    from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything
    larger.

  Bin maps
    There is one bit map for small bins ("smallmap") and one for
    treebins ("treemap").  Each bin sets its bit when non-empty, and
    clears the bit when empty.  Bit operations are then used to avoid
    bin-by-bin searching -- nearly all "search" is done without ever
    looking at bins that won't be selected.  The bit maps
    conservatively use 32 bits per map word, even on 64-bit systems.
    For a good description of some of the bit-based techniques used
    here, see Henry S. Warren Jr's book "Hacker's Delight" (and
    supplement at http://hackersdelight.org/). Many of these are
    intended to reduce the branchiness of paths through malloc etc, as
    well as to reduce the number of memory locations read or written.

  Segments
    A list of segments headed by an embedded malloc_segment record
    representing the initial space.

  Address check support
    The least_addr field is the least address ever obtained from
    MORECORE or MMAP. Attempted frees and reallocs of any address less
    than this are trapped (unless INSECURE is defined).

  Magic tag
    A cross-check field that should always hold the same value as
    mparams.magic.

  Max allowed footprint
    The maximum allowed bytes to allocate from the system (zero means
    no limit).

  Flags
    Bits recording whether to use MMAP, locks, or contiguous MORECORE.

  Statistics
    Each space keeps track of current and maximum system memory
    obtained via MORECORE or MMAP.

  Trim support
    Fields holding the amount of unused topmost memory that should trigger
    trimming, and a counter to force periodic scanning to release unused
    non-topmost segments.

  Locking
    If USE_LOCKS is defined, the "mutex" lock is acquired and released
    around every public call using this mspace.

  Extension support
    A void* pointer and a size_t field that can be used to help implement
    extensions to this malloc.
*/

/* Bin types, widths and sizes */
#define NSMALLBINS        (32U)
#define NTREEBINS         (32U)
#define SMALLBIN_SHIFT    (3U)
#define SMALLBIN_WIDTH    (SIZE_T_ONE << SMALLBIN_SHIFT)
#define TREEBIN_SHIFT     (8U)
#define MIN_LARGE_SIZE    (SIZE_T_ONE << TREEBIN_SHIFT)
#define MAX_SMALL_SIZE    (MIN_LARGE_SIZE - SIZE_T_ONE)
#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD)

struct malloc_state {
  binmap_t   smallmap;
  binmap_t   treemap;
  size_t     dvsize;
  size_t     topsize;
  char*      least_addr;
  mchunkptr  dv;
  mchunkptr  top;
  size_t     trim_check;
  size_t     release_checks;
  size_t     magic;
  mchunkptr  smallbins[(NSMALLBINS+1)*2];
  tbinptr    treebins[NTREEBINS];
  size_t     footprint;
  size_t     max_footprint;
  size_t     footprint_limit; /* zero means no limit */
  flag_t     mflags;
#if USE_LOCKS
  MLOCK_T    mutex;     /* locate lock among fields that rarely change */
#endif /* USE_LOCKS */
  msegment   seg;
  void*      extp;      /* Unused but available for extensions */
  size_t     exts;
};

typedef struct malloc_state*    mstate;

/* ------------- Global malloc_state and malloc_params ------------------- */

/*
  malloc_params holds global properties, including those that can be
  dynamically set using mallopt. There is a single instance, mparams,
  initialized in init_mparams. Note that the non-zeroness of "magic"
  also serves as an initialization flag.
*/

struct malloc_params {
  size_t magic;
  size_t page_size;
  size_t granularity;
  size_t mmap_threshold;
  size_t trim_threshold;
  flag_t default_mflags;
};

static struct malloc_params mparams;

/* Ensure mparams initialized */
#define ensure_initialization() (void)(mparams.magic != 0 || init_mparams())

#if !ONLY_MSPACES

/* The global malloc_state used for all non-"mspace" calls */
static struct malloc_state _gm_;
#define gm                 (&_gm_)
#define is_global(M)       ((M) == &_gm_)

#endif /* !ONLY_MSPACES */

#define is_initialized(M)  ((M)->top != 0)

/* -------------------------- system alloc setup ------------------------- */

/* Operations on mflags */

#define use_lock(M)           ((M)->mflags &   USE_LOCK_BIT)
#define enable_lock(M)        ((M)->mflags |=  USE_LOCK_BIT)
#if USE_LOCKS
#define disable_lock(M)       ((M)->mflags &= ~USE_LOCK_BIT)
#else
#define disable_lock(M)
#endif

#define use_mmap(M)           ((M)->mflags &   USE_MMAP_BIT)
#define enable_mmap(M)        ((M)->mflags |=  USE_MMAP_BIT)
#if HAVE_MMAP
#define disable_mmap(M)       ((M)->mflags &= ~USE_MMAP_BIT)
#else
#define disable_mmap(M)
#endif

#define use_noncontiguous(M)  ((M)->mflags &   USE_NONCONTIGUOUS_BIT)
#define disable_contiguous(M) ((M)->mflags |=  USE_NONCONTIGUOUS_BIT)

#define set_lock(M,L)\
 ((M)->mflags = (L)?\
  ((M)->mflags | USE_LOCK_BIT) :\
  ((M)->mflags & ~USE_LOCK_BIT))

/* page-align a size */
#define page_align(S)\
 (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE))

/* granularity-align a size */
#define granularity_align(S)\
  (((S) + (mparams.granularity - SIZE_T_ONE))\
   & ~(mparams.granularity - SIZE_T_ONE))
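
/*
  For example (illustrative): with a page size of 4096, page_align(5000)
  yields 8192, since the macro rounds up to the next multiple of
  mparams.page_size.
*/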

/* For mmap, use granularity alignment on windows, else page-align */
#ifdef WIN32
#define mmap_align(S) granularity_align(S)
#else
#define mmap_align(S) page_align(S)
#endif

/* For sys_alloc, enough padding to ensure the request can be malloced on success */
#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT)

#define is_page_aligned(S)\
   (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0)
#define is_granularity_aligned(S)\
   (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0)

/* True if segment S holds address A */
#define segment_holds(S, A)\
  ((char*)(A) >= S->base && (char*)(A) < S->base + S->size)

/* Return segment holding given address */
static msegmentptr segment_holding(mstate m, char* addr) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if (addr >= sp->base && addr < sp->base + sp->size)
      return sp;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

/* Return true if segment contains a segment link */
static int has_segment_link(mstate m, msegmentptr ss) {
  msegmentptr sp = &m->seg;
  for (;;) {
    if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size)
      return 1;
    if ((sp = sp->next) == 0)
      return 0;
  }
}

#ifndef MORECORE_CANNOT_TRIM
#define should_trim(M,s)  ((s) > (M)->trim_check)
#else  /* MORECORE_CANNOT_TRIM */
#define should_trim(M,s)  (0)
#endif /* MORECORE_CANNOT_TRIM */

/*
  TOP_FOOT_SIZE is padding at the end of a segment, including space
  that may be needed to place segment records and fenceposts when new
  noncontiguous segments are added.
*/
#define TOP_FOOT_SIZE\
  (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE)


/* -------------------------------  Hooks -------------------------------- */

/*
  PREACTION should be defined to return 0 on success, and nonzero on
  failure. If you are not using locking, you can redefine these to do
  anything you like.
*/

#if USE_LOCKS
#define PREACTION(M)  ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0)
#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); }
#else /* USE_LOCKS */

#ifndef PREACTION
#define PREACTION(M) (0)
#endif  /* PREACTION */

#ifndef POSTACTION
#define POSTACTION(M)
#endif  /* POSTACTION */

#endif /* USE_LOCKS */

/*
  CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses.
  USAGE_ERROR_ACTION is triggered on detected bad frees and
  reallocs. The argument p is an address that might have triggered the
  fault. It is ignored by the two predefined actions, but might be
  useful in custom actions that try to help diagnose errors.
*/

#if PROCEED_ON_ERROR

/* A count of the number of corruption errors causing resets */
int malloc_corruption_error_count;

/* default corruption action */
static void reset_on_error(mstate m);

#define CORRUPTION_ERROR_ACTION(m)  reset_on_error(m)
#define USAGE_ERROR_ACTION(m, p)

#else /* PROCEED_ON_ERROR */

#ifndef CORRUPTION_ERROR_ACTION
#define CORRUPTION_ERROR_ACTION(m) ABORT
#endif /* CORRUPTION_ERROR_ACTION */

#ifndef USAGE_ERROR_ACTION
#define USAGE_ERROR_ACTION(m,p) ABORT
#endif /* USAGE_ERROR_ACTION */

#endif /* PROCEED_ON_ERROR */
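
/*
  For example, a program that prefers logging to aborting on usage
  errors could supply its own action before this point (an illustrative
  sketch; fprintf requires <stdio.h>):

    #define USAGE_ERROR_ACTION(m, p) \
      fprintf(stderr, "dlmalloc: bad free/realloc of %p\n", (p))
*/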
DEBUG 2792 2793 #define check_free_chunk(M,P) 2794 #define check_inuse_chunk(M,P) 2795 #define check_malloced_chunk(M,P,N) 2796 #define check_mmapped_chunk(M,P) 2797 #define check_malloc_state(M) 2798 #define check_top_chunk(M,P) 2799 2800 #else /* DEBUG */ 2801 #define check_free_chunk(M,P) do_check_free_chunk(M,P) 2802 #define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P) 2803 #define check_top_chunk(M,P) do_check_top_chunk(M,P) 2804 #define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N) 2805 #define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P) 2806 #define check_malloc_state(M) do_check_malloc_state(M) 2807 2808 static void do_check_any_chunk(mstate m, mchunkptr p); 2809 static void do_check_top_chunk(mstate m, mchunkptr p); 2810 static void do_check_mmapped_chunk(mstate m, mchunkptr p); 2811 static void do_check_inuse_chunk(mstate m, mchunkptr p); 2812 static void do_check_free_chunk(mstate m, mchunkptr p); 2813 static void do_check_malloced_chunk(mstate m, void* mem, size_t s); 2814 static void do_check_tree(mstate m, tchunkptr t); 2815 static void do_check_treebin(mstate m, bindex_t i); 2816 static void do_check_smallbin(mstate m, bindex_t i); 2817 static void do_check_malloc_state(mstate m); 2818 static int bin_find(mstate m, mchunkptr x); 2819 static size_t traverse_and_check(mstate m); 2820 #endif /* DEBUG */ 2821 2822 /* ---------------------------- Indexing Bins ---------------------------- */ 2823 2824 #define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS) 2825 #define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT) 2826 #define small_index2size(i) ((i) << SMALLBIN_SHIFT) 2827 #define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE)) 2828 2829 /* addressing by index. See above about smallbin repositioning */ 2830 /* BEGIN android-changed: strict aliasing change: char* cast to void* */ 2831 #define smallbin_at(M, i) ((sbinptr)((void*)&((M)->smallbins[(i)<<1]))) 2832 /* END android-changed */ 2833 #define treebin_at(M,i) (&((M)->treebins[i])) 2834 2835 /* assign tree index for size S to variable I. 
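   For example (worked through with illustrative numbers): S = 768 gives X = 768 >> TREEBIN_SHIFT = 3, hence K = 1 and I = (K << 1) + ((768 >> (K + TREEBIN_SHIFT - 1)) & 1) = 3; minsize_for_tree_index below confirms that 768 is the smallest size held in bin 3.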
Use x86 asm if possible */ 2836 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) 2837 #define compute_tree_index(S, I)\ 2838 {\ 2839 unsigned int X = S >> TREEBIN_SHIFT;\ 2840 if (X == 0)\ 2841 I = 0;\ 2842 else if (X > 0xFFFF)\ 2843 I = NTREEBINS-1;\ 2844 else {\ 2845 unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \ 2846 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ 2847 }\ 2848 } 2849 2850 #elif defined (__INTEL_COMPILER) 2851 #define compute_tree_index(S, I)\ 2852 {\ 2853 size_t X = S >> TREEBIN_SHIFT;\ 2854 if (X == 0)\ 2855 I = 0;\ 2856 else if (X > 0xFFFF)\ 2857 I = NTREEBINS-1;\ 2858 else {\ 2859 unsigned int K = _bit_scan_reverse (X); \ 2860 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ 2861 }\ 2862 } 2863 2864 #elif defined(_MSC_VER) && _MSC_VER>=1300 2865 #define compute_tree_index(S, I)\ 2866 {\ 2867 size_t X = S >> TREEBIN_SHIFT;\ 2868 if (X == 0)\ 2869 I = 0;\ 2870 else if (X > 0xFFFF)\ 2871 I = NTREEBINS-1;\ 2872 else {\ 2873 unsigned int K;\ 2874 _BitScanReverse((DWORD *) &K, (DWORD) X);\ 2875 I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ 2876 }\ 2877 } 2878 2879 #else /* GNUC */ 2880 #define compute_tree_index(S, I)\ 2881 {\ 2882 size_t X = S >> TREEBIN_SHIFT;\ 2883 if (X == 0)\ 2884 I = 0;\ 2885 else if (X > 0xFFFF)\ 2886 I = NTREEBINS-1;\ 2887 else {\ 2888 unsigned int Y = (unsigned int)X;\ 2889 unsigned int N = ((Y - 0x100) >> 16) & 8;\ 2890 unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\ 2891 N += K;\ 2892 N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\ 2893 K = 14 - N + ((Y <<= K) >> 15);\ 2894 I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\ 2895 }\ 2896 } 2897 #endif /* GNUC */ 2898 2899 /* Bit representing maximum resolved size in a treebin at i */ 2900 #define bit_for_tree_index(i) \ 2901 (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2) 2902 2903 /* Shift placing maximum resolved bit in a treebin at i as sign bit */ 2904 #define leftshift_for_tree_index(i) \ 2905 ((i == NTREEBINS-1)? 0 : \ 2906 ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2))) 2907 2908 /* The size of the smallest chunk held in bin with index i */ 2909 #define minsize_for_tree_index(i) \ 2910 ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \ 2911 (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1))) 2912 2913 2914 /* ------------------------ Operations on bin maps ----------------------- */ 2915 2916 /* bit corresponding to given index */ 2917 #define idx2bit(i) ((binmap_t)(1) << (i)) 2918 2919 /* Mark/Clear bits with given index */ 2920 #define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i)) 2921 #define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i)) 2922 #define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i)) 2923 2924 #define mark_treemap(M,i) ((M)->treemap |= idx2bit(i)) 2925 #define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i)) 2926 #define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i)) 2927 2928 /* isolate the least set bit of a bitmap */ 2929 #define least_bit(x) ((x) & -(x)) 2930 2931 /* mask with all bits to left of least bit of x on */ 2932 #define left_bits(x) ((x<<1) | -(x<<1)) 2933 2934 /* mask with all bits to left of or equal to least bit of x on */ 2935 #define same_or_left_bits(x) ((x) | -(x)) 2936 2937 /* index corresponding to given bit. 
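   X must have exactly one bit set, as produced by least_bit; e.g. the illustrative value X = 0x40 yields I = 6.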
Use x86 asm if possible */ 2938 2939 #if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) 2940 #define compute_bit2idx(X, I)\ 2941 {\ 2942 unsigned int J;\ 2943 J = __builtin_ctz(X); \ 2944 I = (bindex_t)J;\ 2945 } 2946 2947 #elif defined (__INTEL_COMPILER) 2948 #define compute_bit2idx(X, I)\ 2949 {\ 2950 unsigned int J;\ 2951 J = _bit_scan_forward (X); \ 2952 I = (bindex_t)J;\ 2953 } 2954 2955 #elif defined(_MSC_VER) && _MSC_VER>=1300 2956 #define compute_bit2idx(X, I)\ 2957 {\ 2958 unsigned int J;\ 2959 _BitScanForward((DWORD *) &J, X);\ 2960 I = (bindex_t)J;\ 2961 } 2962 2963 #elif USE_BUILTIN_FFS 2964 #define compute_bit2idx(X, I) I = ffs(X)-1 2965 2966 #else 2967 #define compute_bit2idx(X, I)\ 2968 {\ 2969 unsigned int Y = X - 1;\ 2970 unsigned int K = Y >> (16-4) & 16;\ 2971 unsigned int N = K; Y >>= K;\ 2972 N += K = Y >> (8-3) & 8; Y >>= K;\ 2973 N += K = Y >> (4-2) & 4; Y >>= K;\ 2974 N += K = Y >> (2-1) & 2; Y >>= K;\ 2975 N += K = Y >> (1-0) & 1; Y >>= K;\ 2976 I = (bindex_t)(N + Y);\ 2977 } 2978 #endif /* GNUC */ 2979 2980 2981 /* ----------------------- Runtime Check Support ------------------------- */ 2982 2983 /* 2984 For security, the main invariant is that malloc/free/etc never 2985 writes to a static address other than malloc_state, unless static 2986 malloc_state itself has been corrupted, which cannot occur via 2987 malloc (because of these checks). In essence this means that we 2988 believe all pointers, sizes, maps etc held in malloc_state, but 2989 check all of those linked or offsetted from other embedded data 2990 structures. These checks are interspersed with main code in a way 2991 that tends to minimize their run-time cost. 2992 2993 When FOOTERS is defined, in addition to range checking, we also 2994 verify footer fields of inuse chunks, which can be used to guarantee 2995 that the mstate controlling malloc/free is intact. This is a 2996 streamlined version of the approach described by William Robertson 2997 et al in "Run-time Detection of Heap-based Overflows" LISA'03 2998 http://www.usenix.org/events/lisa03/tech/robertson.html The footer 2999 of an inuse chunk holds the xor of its mstate and a random seed, 3000 which is checked upon calls to free() and realloc(). This is 3001 (probabilistically) unguessable from outside the program, but can be 3002 computed by any code successfully malloc'ing any chunk, so does not 3003 itself provide protection against code that has already broken 3004 security through some other means. Unlike Robertson et al, we 3005 always dynamically check addresses of all offset chunks (previous, 3006 next, etc). This turns out to be cheaper than relying on hashes.
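   As a minimal sketch of the FOOTERS handshake (illustration only; the
   real definitions are the mark_inuse_foot and get_mstate_for macros
   below):

     footer = (size_t)m ^ mparams.magic;     written at p + chunksize(p)
     fm = (mstate)(footer ^ mparams.magic);  recovered inside free/realloc
     if (!ok_magic(fm)) USAGE_ERROR_ACTION(fm, p);

   A chunk whose footer was not written under the current seed yields a
   bogus fm that fails ok_magic, so it is rejected before any of its
   links are trusted.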
3007 */ 3008 3009 #if !INSECURE 3010 /* Check if address a is at least as high as any from MORECORE or MMAP */ 3011 #define ok_address(M, a) ((char*)(a) >= (M)->least_addr) 3012 /* Check if address of next chunk n is higher than base chunk p */ 3013 #define ok_next(p, n) ((char*)(p) < (char*)(n)) 3014 /* Check if p has inuse status */ 3015 #define ok_inuse(p) is_inuse(p) 3016 /* Check if p has its pinuse bit on */ 3017 #define ok_pinuse(p) pinuse(p) 3018 3019 #else /* !INSECURE */ 3020 #define ok_address(M, a) (1) 3021 #define ok_next(b, n) (1) 3022 #define ok_inuse(p) (1) 3023 #define ok_pinuse(p) (1) 3024 #endif /* !INSECURE */ 3025 3026 #if (FOOTERS && !INSECURE) 3027 /* Check if (alleged) mstate m has expected magic field */ 3028 #define ok_magic(M) ((M)->magic == mparams.magic) 3029 #else /* (FOOTERS && !INSECURE) */ 3030 #define ok_magic(M) (1) 3031 #endif /* (FOOTERS && !INSECURE) */ 3032 3033 /* In gcc, use __builtin_expect to minimize impact of checks */ 3034 #if !INSECURE 3035 #if defined(__GNUC__) && __GNUC__ >= 3 3036 #define RTCHECK(e) __builtin_expect(e, 1) 3037 #else /* GNUC */ 3038 #define RTCHECK(e) (e) 3039 #endif /* GNUC */ 3040 #else /* !INSECURE */ 3041 #define RTCHECK(e) (1) 3042 #endif /* !INSECURE */ 3043 3044 /* macros to set up inuse chunks with or without footers */ 3045 3046 #if !FOOTERS 3047 3048 #define mark_inuse_foot(M,p,s) 3049 3050 /* Macros for setting head/foot of non-mmapped chunks */ 3051 3052 /* Set cinuse bit and pinuse bit of next chunk */ 3053 #define set_inuse(M,p,s)\ 3054 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ 3055 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) 3056 3057 /* Set cinuse and pinuse of this chunk and pinuse of next chunk */ 3058 #define set_inuse_and_pinuse(M,p,s)\ 3059 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ 3060 ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) 3061 3062 /* Set size, cinuse and pinuse bit of this chunk */ 3063 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ 3064 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT)) 3065 3066 #else /* FOOTERS */ 3067 3068 /* Set foot of inuse chunk to be xor of mstate and seed */ 3069 #define mark_inuse_foot(M,p,s)\ 3070 (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic)) 3071 3072 #define get_mstate_for(p)\ 3073 ((mstate)(((mchunkptr)((char*)(p) +\ 3074 (chunksize(p))))->prev_foot ^ mparams.magic)) 3075 3076 #define set_inuse(M,p,s)\ 3077 ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ 3078 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \ 3079 mark_inuse_foot(M,p,s)) 3080 3081 #define set_inuse_and_pinuse(M,p,s)\ 3082 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ 3083 (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\ 3084 mark_inuse_foot(M,p,s)) 3085 3086 #define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ 3087 ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ 3088 mark_inuse_foot(M, p, s)) 3089 3090 #endif /* !FOOTERS */ 3091 3092 /* ---------------------------- setting mparams -------------------------- */ 3093 3094 #if LOCK_AT_FORK 3095 static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); } 3096 static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); } 3097 static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); } 3098 #endif /* LOCK_AT_FORK */ 3099 3100 /* Initialize mparams */ 3101 static int init_mparams(void) { 3102 /* BEGIN android-added: move pthread_atfork outside of lock */ 3103 int first_run = 0; 3104 /* END android-added */ 3105 #ifdef NEED_GLOBAL_LOCK_INIT 3106 if 
(malloc_global_mutex_status <= 0) 3107 init_malloc_global_mutex(); 3108 #endif 3109 3110 ACQUIRE_MALLOC_GLOBAL_LOCK(); 3111 if (mparams.magic == 0) { 3112 size_t magic; 3113 size_t psize; 3114 size_t gsize; 3115 /* BEGIN android-added: move pthread_atfork outside of lock */ 3116 first_run = 1; 3117 /* END android-added */ 3118 3119 #ifndef WIN32 3120 psize = malloc_getpagesize; 3121 gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize); 3122 #else /* WIN32 */ 3123 { 3124 SYSTEM_INFO system_info; 3125 GetSystemInfo(&system_info); 3126 psize = system_info.dwPageSize; 3127 gsize = ((DEFAULT_GRANULARITY != 0)? 3128 DEFAULT_GRANULARITY : system_info.dwAllocationGranularity); 3129 } 3130 #endif /* WIN32 */ 3131 3132 /* Sanity-check configuration: 3133 size_t must be unsigned and as wide as pointer type. 3134 ints must be at least 4 bytes. 3135 alignment must be at least 8. 3136 Alignment, min chunk size, and page size must all be powers of 2. 3137 */ 3138 if ((sizeof(size_t) != sizeof(char*)) || 3139 (MAX_SIZE_T < MIN_CHUNK_SIZE) || 3140 (sizeof(int) < 4) || 3141 (MALLOC_ALIGNMENT < (size_t)8U) || 3142 ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) || 3143 ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) || 3144 ((gsize & (gsize-SIZE_T_ONE)) != 0) || 3145 ((psize & (psize-SIZE_T_ONE)) != 0)) 3146 ABORT; 3147 mparams.granularity = gsize; 3148 mparams.page_size = psize; 3149 mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD; 3150 mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD; 3151 #if MORECORE_CONTIGUOUS 3152 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT; 3153 #else /* MORECORE_CONTIGUOUS */ 3154 mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT; 3155 #endif /* MORECORE_CONTIGUOUS */ 3156 3157 #if !ONLY_MSPACES 3158 /* Set up lock for main malloc area */ 3159 gm->mflags = mparams.default_mflags; 3160 (void)INITIAL_LOCK(&gm->mutex); 3161 #endif 3162 /* BEGIN android-removed: move pthread_atfork outside of lock */ 3163 #if 0 && LOCK_AT_FORK 3164 pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child); 3165 #endif 3166 /* END android-removed */ 3167 3168 { 3169 #if USE_DEV_RANDOM 3170 int fd; 3171 unsigned char buf[sizeof(size_t)]; 3172 /* Try to use /dev/urandom, else fall back on using time */ 3173 if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 && 3174 read(fd, buf, sizeof(buf)) == sizeof(buf)) { 3175 magic = *((size_t *) buf); 3176 close(fd); 3177 } 3178 else 3179 #endif /* USE_DEV_RANDOM */ 3180 #ifdef WIN32 3181 magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U); 3182 #elif defined(LACKS_TIME_H) 3183 magic = (size_t)&magic ^ (size_t)0x55555555U; 3184 #else 3185 magic = (size_t)(time(0) ^ (size_t)0x55555555U); 3186 #endif 3187 magic |= (size_t)8U; /* ensure nonzero */ 3188 magic &= ~(size_t)7U; /* improve chances of fault for bad values */ 3189 /* Until memory modes commonly available, use volatile-write */ 3190 (*(volatile size_t *)(&(mparams.magic))) = magic; 3191 } 3192 } 3193 3194 RELEASE_MALLOC_GLOBAL_LOCK(); 3195 /* BEGIN android-added: move pthread_atfork outside of lock */ 3196 if (first_run != 0) { 3197 #if LOCK_AT_FORK 3198 pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child); 3199 #endif 3200 } 3201 /* END android-added */ 3202 return 1; 3203 } 3204 3205 /* support for mallopt */ 3206 static int change_mparam(int param_number, int value) { 3207 size_t val; 3208 ensure_initialization(); 3209 val = (value == -1)? 
MAX_SIZE_T : (size_t)value; 3210 switch(param_number) { 3211 case M_TRIM_THRESHOLD: 3212 mparams.trim_threshold = val; 3213 return 1; 3214 case M_GRANULARITY: 3215 if (val >= mparams.page_size && ((val & (val-1)) == 0)) { 3216 mparams.granularity = val; 3217 return 1; 3218 } 3219 else 3220 return 0; 3221 case M_MMAP_THRESHOLD: 3222 mparams.mmap_threshold = val; 3223 return 1; 3224 default: 3225 return 0; 3226 } 3227 } 3228 3229 #if DEBUG 3230 /* ------------------------- Debugging Support --------------------------- */ 3231 3232 /* Check properties of any chunk, whether free, inuse, mmapped etc */ 3233 static void do_check_any_chunk(mstate m, mchunkptr p) { 3234 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); 3235 assert(ok_address(m, p)); 3236 } 3237 3238 /* Check properties of top chunk */ 3239 static void do_check_top_chunk(mstate m, mchunkptr p) { 3240 msegmentptr sp = segment_holding(m, (char*)p); 3241 size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! */ 3242 assert(sp != 0); 3243 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); 3244 assert(ok_address(m, p)); 3245 assert(sz == m->topsize); 3246 assert(sz > 0); 3247 assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE); 3248 assert(pinuse(p)); 3249 assert(!pinuse(chunk_plus_offset(p, sz))); 3250 } 3251 3252 /* Check properties of (inuse) mmapped chunks */ 3253 static void do_check_mmapped_chunk(mstate m, mchunkptr p) { 3254 size_t sz = chunksize(p); 3255 size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD); 3256 assert(is_mmapped(p)); 3257 assert(use_mmap(m)); 3258 assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); 3259 assert(ok_address(m, p)); 3260 assert(!is_small(sz)); 3261 assert((len & (mparams.page_size-SIZE_T_ONE)) == 0); 3262 assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD); 3263 assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0); 3264 } 3265 3266 /* Check properties of inuse chunks */ 3267 static void do_check_inuse_chunk(mstate m, mchunkptr p) { 3268 do_check_any_chunk(m, p); 3269 assert(is_inuse(p)); 3270 assert(next_pinuse(p)); 3271 /* If not pinuse and not mmapped, previous chunk has OK offset */ 3272 assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p); 3273 if (is_mmapped(p)) 3274 do_check_mmapped_chunk(m, p); 3275 } 3276 3277 /* Check properties of free chunks */ 3278 static void do_check_free_chunk(mstate m, mchunkptr p) { 3279 size_t sz = chunksize(p); 3280 mchunkptr next = chunk_plus_offset(p, sz); 3281 do_check_any_chunk(m, p); 3282 assert(!is_inuse(p)); 3283 assert(!next_pinuse(p)); 3284 assert (!is_mmapped(p)); 3285 if (p != m->dv && p != m->top) { 3286 if (sz >= MIN_CHUNK_SIZE) { 3287 assert((sz & CHUNK_ALIGN_MASK) == 0); 3288 assert(is_aligned(chunk2mem(p))); 3289 assert(next->prev_foot == sz); 3290 assert(pinuse(p)); 3291 assert (next == m->top || is_inuse(next)); 3292 assert(p->fd->bk == p); 3293 assert(p->bk->fd == p); 3294 } 3295 else /* markers are always of size SIZE_T_SIZE */ 3296 assert(sz == SIZE_T_SIZE); 3297 } 3298 } 3299 3300 /* Check properties of malloced chunks at the point they are malloced */ 3301 static void do_check_malloced_chunk(mstate m, void* mem, size_t s) { 3302 if (mem != 0) { 3303 mchunkptr p = mem2chunk(mem); 3304 size_t sz = p->head & ~INUSE_BITS; 3305 do_check_inuse_chunk(m, p); 3306 assert((sz & CHUNK_ALIGN_MASK) == 0); 3307 assert(sz >= MIN_CHUNK_SIZE); 3308 assert(sz >= s); 3309 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */ 3310 
assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE)); 3311 } 3312 } 3313 3314 /* Check a tree and its subtrees. */ 3315 static void do_check_tree(mstate m, tchunkptr t) { 3316 tchunkptr head = 0; 3317 tchunkptr u = t; 3318 bindex_t tindex = t->index; 3319 size_t tsize = chunksize(t); 3320 bindex_t idx; 3321 compute_tree_index(tsize, idx); 3322 assert(tindex == idx); 3323 assert(tsize >= MIN_LARGE_SIZE); 3324 assert(tsize >= minsize_for_tree_index(idx)); 3325 assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1)))); 3326 3327 do { /* traverse through chain of same-sized nodes */ 3328 do_check_any_chunk(m, ((mchunkptr)u)); 3329 assert(u->index == tindex); 3330 assert(chunksize(u) == tsize); 3331 assert(!is_inuse(u)); 3332 assert(!next_pinuse(u)); 3333 assert(u->fd->bk == u); 3334 assert(u->bk->fd == u); 3335 if (u->parent == 0) { 3336 assert(u->child[0] == 0); 3337 assert(u->child[1] == 0); 3338 } 3339 else { 3340 assert(head == 0); /* only one node on chain has parent */ 3341 head = u; 3342 assert(u->parent != u); 3343 assert (u->parent->child[0] == u || 3344 u->parent->child[1] == u || 3345 *((tbinptr*)(u->parent)) == u); 3346 if (u->child[0] != 0) { 3347 assert(u->child[0]->parent == u); 3348 assert(u->child[0] != u); 3349 do_check_tree(m, u->child[0]); 3350 } 3351 if (u->child[1] != 0) { 3352 assert(u->child[1]->parent == u); 3353 assert(u->child[1] != u); 3354 do_check_tree(m, u->child[1]); 3355 } 3356 if (u->child[0] != 0 && u->child[1] != 0) { 3357 assert(chunksize(u->child[0]) < chunksize(u->child[1])); 3358 } 3359 } 3360 u = u->fd; 3361 } while (u != t); 3362 assert(head != 0); 3363 } 3364 3365 /* Check all the chunks in a treebin. */ 3366 static void do_check_treebin(mstate m, bindex_t i) { 3367 tbinptr* tb = treebin_at(m, i); 3368 tchunkptr t = *tb; 3369 int empty = (m->treemap & (1U << i)) == 0; 3370 if (t == 0) 3371 assert(empty); 3372 if (!empty) 3373 do_check_tree(m, t); 3374 } 3375 3376 /* Check all the chunks in a smallbin. */ 3377 static void do_check_smallbin(mstate m, bindex_t i) { 3378 sbinptr b = smallbin_at(m, i); 3379 mchunkptr p = b->bk; 3380 unsigned int empty = (m->smallmap & (1U << i)) == 0; 3381 if (p == b) 3382 assert(empty); 3383 if (!empty) { 3384 for (; p != b; p = p->bk) { 3385 size_t size = chunksize(p); 3386 mchunkptr q; 3387 /* each chunk claims to be free */ 3388 do_check_free_chunk(m, p); 3389 /* chunk belongs in bin */ 3390 assert(small_index(size) == i); 3391 assert(p->bk == b || chunksize(p->bk) == chunksize(p)); 3392 /* chunk is followed by an inuse chunk */ 3393 q = next_chunk(p); 3394 if (q->head != FENCEPOST_HEAD) 3395 do_check_inuse_chunk(m, q); 3396 } 3397 } 3398 } 3399 3400 /* Find x in a bin. Used in other check functions. 
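   For example, traverse_and_check below asserts that every inuse chunk
   is absent from the bins (bin_find == 0) and that every free chunk
   other than dv is findable in the bin for its size.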
*/ 3401 static int bin_find(mstate m, mchunkptr x) { 3402 size_t size = chunksize(x); 3403 if (is_small(size)) { 3404 bindex_t sidx = small_index(size); 3405 sbinptr b = smallbin_at(m, sidx); 3406 if (smallmap_is_marked(m, sidx)) { 3407 mchunkptr p = b; 3408 do { 3409 if (p == x) 3410 return 1; 3411 } while ((p = p->fd) != b); 3412 } 3413 } 3414 else { 3415 bindex_t tidx; 3416 compute_tree_index(size, tidx); 3417 if (treemap_is_marked(m, tidx)) { 3418 tchunkptr t = *treebin_at(m, tidx); 3419 size_t sizebits = size << leftshift_for_tree_index(tidx); 3420 while (t != 0 && chunksize(t) != size) { 3421 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; 3422 sizebits <<= 1; 3423 } 3424 if (t != 0) { 3425 tchunkptr u = t; 3426 do { 3427 if (u == (tchunkptr)x) 3428 return 1; 3429 } while ((u = u->fd) != t); 3430 } 3431 } 3432 } 3433 return 0; 3434 } 3435 3436 /* Traverse each chunk and check it; return total */ 3437 static size_t traverse_and_check(mstate m) { 3438 size_t sum = 0; 3439 if (is_initialized(m)) { 3440 msegmentptr s = &m->seg; 3441 sum += m->topsize + TOP_FOOT_SIZE; 3442 while (s != 0) { 3443 mchunkptr q = align_as_chunk(s->base); 3444 mchunkptr lastq = 0; 3445 assert(pinuse(q)); 3446 while (segment_holds(s, q) && 3447 q != m->top && q->head != FENCEPOST_HEAD) { 3448 sum += chunksize(q); 3449 if (is_inuse(q)) { 3450 assert(!bin_find(m, q)); 3451 do_check_inuse_chunk(m, q); 3452 } 3453 else { 3454 assert(q == m->dv || bin_find(m, q)); 3455 assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */ 3456 do_check_free_chunk(m, q); 3457 } 3458 lastq = q; 3459 q = next_chunk(q); 3460 } 3461 s = s->next; 3462 } 3463 } 3464 return sum; 3465 } 3466 3467 3468 /* Check all properties of malloc_state. */ 3469 static void do_check_malloc_state(mstate m) { 3470 bindex_t i; 3471 size_t total; 3472 /* check bins */ 3473 for (i = 0; i < NSMALLBINS; ++i) 3474 do_check_smallbin(m, i); 3475 for (i = 0; i < NTREEBINS; ++i) 3476 do_check_treebin(m, i); 3477 3478 if (m->dvsize != 0) { /* check dv chunk */ 3479 do_check_any_chunk(m, m->dv); 3480 assert(m->dvsize == chunksize(m->dv)); 3481 assert(m->dvsize >= MIN_CHUNK_SIZE); 3482 assert(bin_find(m, m->dv) == 0); 3483 } 3484 3485 if (m->top != 0) { /* check top chunk */ 3486 do_check_top_chunk(m, m->top); 3487 /*assert(m->topsize == chunksize(m->top)); redundant */ 3488 assert(m->topsize > 0); 3489 assert(bin_find(m, m->top) == 0); 3490 } 3491 3492 total = traverse_and_check(m); 3493 assert(total <= m->footprint); 3494 assert(m->footprint <= m->max_footprint); 3495 } 3496 #endif /* DEBUG */ 3497 3498 /* ----------------------------- statistics ------------------------------ */ 3499 3500 #if !NO_MALLINFO 3501 static struct mallinfo internal_mallinfo(mstate m) { 3502 struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; 3503 ensure_initialization(); 3504 if (!PREACTION(m)) { 3505 check_malloc_state(m); 3506 if (is_initialized(m)) { 3507 size_t nfree = SIZE_T_ONE; /* top always free */ 3508 size_t mfree = m->topsize + TOP_FOOT_SIZE; 3509 size_t sum = mfree; 3510 msegmentptr s = &m->seg; 3511 while (s != 0) { 3512 mchunkptr q = align_as_chunk(s->base); 3513 while (segment_holds(s, q) && 3514 q != m->top && q->head != FENCEPOST_HEAD) { 3515 size_t sz = chunksize(q); 3516 sum += sz; 3517 if (!is_inuse(q)) { 3518 mfree += sz; 3519 ++nfree; 3520 } 3521 q = next_chunk(q); 3522 } 3523 s = s->next; 3524 } 3525 3526 nm.arena = sum; 3527 nm.ordblks = nfree; 3528 nm.hblkhd = m->footprint - sum; 3529 nm.usmblks = m->max_footprint; 3530 nm.uordblks = 
m->footprint - mfree; 3531 nm.fordblks = mfree; 3532 nm.keepcost = m->topsize; 3533 } 3534 3535 POSTACTION(m); 3536 } 3537 return nm; 3538 } 3539 #endif /* !NO_MALLINFO */ 3540 3541 #if !NO_MALLOC_STATS 3542 static void internal_malloc_stats(mstate m) { 3543 ensure_initialization(); 3544 if (!PREACTION(m)) { 3545 size_t maxfp = 0; 3546 size_t fp = 0; 3547 size_t used = 0; 3548 check_malloc_state(m); 3549 if (is_initialized(m)) { 3550 msegmentptr s = &m->seg; 3551 maxfp = m->max_footprint; 3552 fp = m->footprint; 3553 used = fp - (m->topsize + TOP_FOOT_SIZE); 3554 3555 while (s != 0) { 3556 mchunkptr q = align_as_chunk(s->base); 3557 while (segment_holds(s, q) && 3558 q != m->top && q->head != FENCEPOST_HEAD) { 3559 if (!is_inuse(q)) 3560 used -= chunksize(q); 3561 q = next_chunk(q); 3562 } 3563 s = s->next; 3564 } 3565 } 3566 POSTACTION(m); /* drop lock */ 3567 fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp)); 3568 fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp)); 3569 fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used)); 3570 } 3571 } 3572 #endif /* NO_MALLOC_STATS */ 3573 3574 /* ----------------------- Operations on smallbins ----------------------- */ 3575 3576 /* 3577 Various forms of linking and unlinking are defined as macros. Even 3578 the ones for trees, which are very long but have very short typical 3579 paths. This is ugly but reduces reliance on inlining support of 3580 compilers. 3581 */ 3582 3583 /* Link a free chunk into a smallbin */ 3584 #define insert_small_chunk(M, P, S) {\ 3585 bindex_t I = small_index(S);\ 3586 mchunkptr B = smallbin_at(M, I);\ 3587 mchunkptr F = B;\ 3588 assert(S >= MIN_CHUNK_SIZE);\ 3589 if (!smallmap_is_marked(M, I))\ 3590 mark_smallmap(M, I);\ 3591 else if (RTCHECK(ok_address(M, B->fd)))\ 3592 F = B->fd;\ 3593 else {\ 3594 CORRUPTION_ERROR_ACTION(M);\ 3595 }\ 3596 B->fd = P;\ 3597 F->bk = P;\ 3598 P->fd = F;\ 3599 P->bk = B;\ 3600 } 3601 3602 /* Unlink a chunk from a smallbin */ 3603 #define unlink_small_chunk(M, P, S) {\ 3604 mchunkptr F = P->fd;\ 3605 mchunkptr B = P->bk;\ 3606 bindex_t I = small_index(S);\ 3607 assert(P != B);\ 3608 assert(P != F);\ 3609 assert(chunksize(P) == small_index2size(I));\ 3610 if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \ 3611 if (B == F) {\ 3612 clear_smallmap(M, I);\ 3613 }\ 3614 else if (RTCHECK(B == smallbin_at(M,I) ||\ 3615 (ok_address(M, B) && B->fd == P))) {\ 3616 F->bk = B;\ 3617 B->fd = F;\ 3618 }\ 3619 else {\ 3620 CORRUPTION_ERROR_ACTION(M);\ 3621 }\ 3622 }\ 3623 else {\ 3624 CORRUPTION_ERROR_ACTION(M);\ 3625 }\ 3626 } 3627 3628 /* Unlink the first chunk from a smallbin */ 3629 #define unlink_first_small_chunk(M, B, P, I) {\ 3630 mchunkptr F = P->fd;\ 3631 assert(P != B);\ 3632 assert(P != F);\ 3633 assert(chunksize(P) == small_index2size(I));\ 3634 if (B == F) {\ 3635 clear_smallmap(M, I);\ 3636 }\ 3637 else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\ 3638 F->bk = B;\ 3639 B->fd = F;\ 3640 }\ 3641 else {\ 3642 CORRUPTION_ERROR_ACTION(M);\ 3643 }\ 3644 } 3645 3646 /* Replace dv node, binning the old one */ 3647 /* Used only when dvsize known to be small */ 3648 #define replace_dv(M, P, S) {\ 3649 size_t DVS = M->dvsize;\ 3650 assert(is_small(DVS));\ 3651 if (DVS != 0) {\ 3652 mchunkptr DV = M->dv;\ 3653 insert_small_chunk(M, DV, DVS);\ 3654 }\ 3655 M->dvsize = S;\ 3656 M->dv = P;\ 3657 } 3658 3659 /* ------------------------- Operations on trees ------------------------- */ 3660 3661 /* Insert chunk into tree */ 3662 #define 
insert_large_chunk(M, X, S) {\ 3663 tbinptr* H;\ 3664 bindex_t I;\ 3665 compute_tree_index(S, I);\ 3666 H = treebin_at(M, I);\ 3667 X->index = I;\ 3668 X->child[0] = X->child[1] = 0;\ 3669 if (!treemap_is_marked(M, I)) {\ 3670 mark_treemap(M, I);\ 3671 *H = X;\ 3672 X->parent = (tchunkptr)H;\ 3673 X->fd = X->bk = X;\ 3674 }\ 3675 else {\ 3676 tchunkptr T = *H;\ 3677 size_t K = S << leftshift_for_tree_index(I);\ 3678 for (;;) {\ 3679 if (chunksize(T) != S) {\ 3680 tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\ 3681 K <<= 1;\ 3682 if (*C != 0)\ 3683 T = *C;\ 3684 else if (RTCHECK(ok_address(M, C))) {\ 3685 *C = X;\ 3686 X->parent = T;\ 3687 X->fd = X->bk = X;\ 3688 break;\ 3689 }\ 3690 else {\ 3691 CORRUPTION_ERROR_ACTION(M);\ 3692 break;\ 3693 }\ 3694 }\ 3695 else {\ 3696 tchunkptr F = T->fd;\ 3697 if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\ 3698 T->fd = F->bk = X;\ 3699 X->fd = F;\ 3700 X->bk = T;\ 3701 X->parent = 0;\ 3702 break;\ 3703 }\ 3704 else {\ 3705 CORRUPTION_ERROR_ACTION(M);\ 3706 break;\ 3707 }\ 3708 }\ 3709 }\ 3710 }\ 3711 } 3712 3713 /* 3714 Unlink steps: 3715 3716 1. If x is a chained node, unlink it from its same-sized fd/bk links 3717 and choose its bk node as its replacement. 3718 2. If x was the last node of its size, but not a leaf node, it must 3719 be replaced with a leaf node (not merely one with an open left or 3720 right), to make sure that lefts and rights of descendants 3721 correspond properly to bit masks. We use the rightmost descendant 3722 of x. We could use any other leaf, but this is easy to locate and 3723 tends to counteract removal of leftmosts elsewhere, and so keeps 3724 paths shorter than minimally guaranteed. This doesn't loop much 3725 because on average a node in a tree is near the bottom. 3726 3. If x is the base of a chain (i.e., has parent links), relink 3727 x's parent and children to x's replacement (or null if none).
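  A sketch of the case-2 descent, in the field names used by
  unlink_large_chunk below (illustrative only):

    R = (x->child[1] != 0) ? x->child[1] : x->child[0];
    while (R->child[1] != 0 || R->child[0] != 0)
      R = (R->child[1] != 0) ? R->child[1] : R->child[0];

  i.e., walk to the rightmost descendant leaf of x, clear its link from
  its own parent, and splice it into x's position.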
3728 */ 3729 3730 #define unlink_large_chunk(M, X) {\ 3731 tchunkptr XP = X->parent;\ 3732 tchunkptr R;\ 3733 if (X->bk != X) {\ 3734 tchunkptr F = X->fd;\ 3735 R = X->bk;\ 3736 if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\ 3737 F->bk = R;\ 3738 R->fd = F;\ 3739 }\ 3740 else {\ 3741 CORRUPTION_ERROR_ACTION(M);\ 3742 }\ 3743 }\ 3744 else {\ 3745 tchunkptr* RP;\ 3746 if (((R = *(RP = &(X->child[1]))) != 0) ||\ 3747 ((R = *(RP = &(X->child[0]))) != 0)) {\ 3748 tchunkptr* CP;\ 3749 while ((*(CP = &(R->child[1])) != 0) ||\ 3750 (*(CP = &(R->child[0])) != 0)) {\ 3751 R = *(RP = CP);\ 3752 }\ 3753 if (RTCHECK(ok_address(M, RP)))\ 3754 *RP = 0;\ 3755 else {\ 3756 CORRUPTION_ERROR_ACTION(M);\ 3757 }\ 3758 }\ 3759 }\ 3760 if (XP != 0) {\ 3761 tbinptr* H = treebin_at(M, X->index);\ 3762 if (X == *H) {\ 3763 if ((*H = R) == 0) \ 3764 clear_treemap(M, X->index);\ 3765 }\ 3766 else if (RTCHECK(ok_address(M, XP))) {\ 3767 if (XP->child[0] == X) \ 3768 XP->child[0] = R;\ 3769 else \ 3770 XP->child[1] = R;\ 3771 }\ 3772 else\ 3773 CORRUPTION_ERROR_ACTION(M);\ 3774 if (R != 0) {\ 3775 if (RTCHECK(ok_address(M, R))) {\ 3776 tchunkptr C0, C1;\ 3777 R->parent = XP;\ 3778 if ((C0 = X->child[0]) != 0) {\ 3779 if (RTCHECK(ok_address(M, C0))) {\ 3780 R->child[0] = C0;\ 3781 C0->parent = R;\ 3782 }\ 3783 else\ 3784 CORRUPTION_ERROR_ACTION(M);\ 3785 }\ 3786 if ((C1 = X->child[1]) != 0) {\ 3787 if (RTCHECK(ok_address(M, C1))) {\ 3788 R->child[1] = C1;\ 3789 C1->parent = R;\ 3790 }\ 3791 else\ 3792 CORRUPTION_ERROR_ACTION(M);\ 3793 }\ 3794 }\ 3795 else\ 3796 CORRUPTION_ERROR_ACTION(M);\ 3797 }\ 3798 }\ 3799 } 3800 3801 /* Relays to large vs small bin operations */ 3802 3803 #define insert_chunk(M, P, S)\ 3804 if (is_small(S)) insert_small_chunk(M, P, S)\ 3805 else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); } 3806 3807 #define unlink_chunk(M, P, S)\ 3808 if (is_small(S)) unlink_small_chunk(M, P, S)\ 3809 else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); } 3810 3811 3812 /* Relays to internal calls to malloc/free from realloc, memalign etc */ 3813 3814 #if ONLY_MSPACES 3815 #define internal_malloc(m, b) mspace_malloc(m, b) 3816 #define internal_free(m, mem) mspace_free(m,mem); 3817 #else /* ONLY_MSPACES */ 3818 #if MSPACES 3819 #define internal_malloc(m, b)\ 3820 ((m == gm)? dlmalloc(b) : mspace_malloc(m, b)) 3821 #define internal_free(m, mem)\ 3822 if (m == gm) dlfree(mem); else mspace_free(m,mem); 3823 #else /* MSPACES */ 3824 #define internal_malloc(m, b) dlmalloc(b) 3825 #define internal_free(m, mem) dlfree(mem) 3826 #endif /* MSPACES */ 3827 #endif /* ONLY_MSPACES */ 3828 3829 /* ----------------------- Direct-mmapping chunks ----------------------- */ 3830 3831 /* 3832 Directly mmapped chunks are set up with an offset to the start of 3833 the mmapped region stored in the prev_foot field of the chunk. This 3834 allows reconstruction of the required argument to MUNMAP when freed, 3835 and also allows adjustment of the returned chunk to meet alignment 3836 requirements (especially in memalign). 
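  Sketch of the resulting layout (not to scale; offset is the distance
  from the start of the mapping to the chunk header):

    mm                            p == mm + offset
    |<---- offset ---->|<------------ psize ------------>|<-- pad -->|
    [ alignment slack  ][ head | user memory ...         ][fenceposts]

  Since p->prev_foot == offset, free() can reconstruct the original
  mapping as CALL_MUNMAP((char*)p - offset, psize + offset +
  MMAP_FOOT_PAD).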
3837 */ 3838 3839 /* Malloc using mmap */ 3840 static void* mmap_alloc(mstate m, size_t nb) { 3841 size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); 3842 if (m->footprint_limit != 0) { 3843 size_t fp = m->footprint + mmsize; 3844 if (fp <= m->footprint || fp > m->footprint_limit) 3845 return 0; 3846 } 3847 if (mmsize > nb) { /* Check for wrap around 0 */ 3848 char* mm = (char*)(CALL_DIRECT_MMAP(mmsize)); 3849 if (mm != CMFAIL) { 3850 size_t offset = align_offset(chunk2mem(mm)); 3851 size_t psize = mmsize - offset - MMAP_FOOT_PAD; 3852 mchunkptr p = (mchunkptr)(mm + offset); 3853 p->prev_foot = offset; 3854 p->head = psize; 3855 mark_inuse_foot(m, p, psize); 3856 chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD; 3857 chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0; 3858 3859 if (m->least_addr == 0 || mm < m->least_addr) 3860 m->least_addr = mm; 3861 if ((m->footprint += mmsize) > m->max_footprint) 3862 m->max_footprint = m->footprint; 3863 assert(is_aligned(chunk2mem(p))); 3864 check_mmapped_chunk(m, p); 3865 return chunk2mem(p); 3866 } 3867 } 3868 return 0; 3869 } 3870 3871 /* Realloc using mmap */ 3872 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) { 3873 size_t oldsize = chunksize(oldp); 3874 (void)flags; /* placate people compiling -Wunused */ 3875 if (is_small(nb)) /* Can't shrink mmap regions below small size */ 3876 return 0; 3877 /* Keep old chunk if big enough but not too big */ 3878 if (oldsize >= nb + SIZE_T_SIZE && 3879 (oldsize - nb) <= (mparams.granularity << 1)) 3880 return oldp; 3881 else { 3882 size_t offset = oldp->prev_foot; 3883 size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD; 3884 size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); 3885 char* cp = (char*)CALL_MREMAP((char*)oldp - offset, 3886 oldmmsize, newmmsize, flags); 3887 if (cp != CMFAIL) { 3888 mchunkptr newp = (mchunkptr)(cp + offset); 3889 size_t psize = newmmsize - offset - MMAP_FOOT_PAD; 3890 newp->head = psize; 3891 mark_inuse_foot(m, newp, psize); 3892 chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD; 3893 chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0; 3894 3895 if (cp < m->least_addr) 3896 m->least_addr = cp; 3897 if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint) 3898 m->max_footprint = m->footprint; 3899 check_mmapped_chunk(m, newp); 3900 return newp; 3901 } 3902 } 3903 return 0; 3904 } 3905 3906 3907 /* -------------------------- mspace management -------------------------- */ 3908 3909 /* Initialize top chunk and its size */ 3910 static void init_top(mstate m, mchunkptr p, size_t psize) { 3911 /* Ensure alignment */ 3912 size_t offset = align_offset(chunk2mem(p)); 3913 p = (mchunkptr)((char*)p + offset); 3914 psize -= offset; 3915 3916 m->top = p; 3917 m->topsize = psize; 3918 p->head = psize | PINUSE_BIT; 3919 /* set size of fake trailing chunk holding overhead space only once */ 3920 chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE; 3921 m->trim_check = mparams.trim_threshold; /* reset on each update */ 3922 } 3923 3924 /* Initialize bins for a new mstate that is otherwise zeroed out */ 3925 static void init_bins(mstate m) { 3926 /* Establish circular links for smallbins */ 3927 bindex_t i; 3928 for (i = 0; i < NSMALLBINS; ++i) { 3929 sbinptr bin = smallbin_at(m,i); 3930 bin->fd = bin->bk = bin; 3931 } 3932 } 3933 3934 #if PROCEED_ON_ERROR 3935 3936 /* default corruption action */ 3937 static void reset_on_error(mstate m) { 3938 int i; 3939 ++malloc_corruption_error_count; 3940 /* Reinitialize 
fields to forget about all memory */ 3941 m->smallmap = m->treemap = 0; 3942 m->dvsize = m->topsize = 0; 3943 m->seg.base = 0; 3944 m->seg.size = 0; 3945 m->seg.next = 0; 3946 m->top = m->dv = 0; 3947 for (i = 0; i < NTREEBINS; ++i) 3948 *treebin_at(m, i) = 0; 3949 init_bins(m); 3950 } 3951 #endif /* PROCEED_ON_ERROR */ 3952 3953 /* Allocate chunk and prepend remainder with chunk in successor base. */ 3954 static void* prepend_alloc(mstate m, char* newbase, char* oldbase, 3955 size_t nb) { 3956 mchunkptr p = align_as_chunk(newbase); 3957 mchunkptr oldfirst = align_as_chunk(oldbase); 3958 size_t psize = (char*)oldfirst - (char*)p; 3959 mchunkptr q = chunk_plus_offset(p, nb); 3960 size_t qsize = psize - nb; 3961 set_size_and_pinuse_of_inuse_chunk(m, p, nb); 3962 3963 assert((char*)oldfirst > (char*)q); 3964 assert(pinuse(oldfirst)); 3965 assert(qsize >= MIN_CHUNK_SIZE); 3966 3967 /* consolidate remainder with first chunk of old base */ 3968 if (oldfirst == m->top) { 3969 size_t tsize = m->topsize += qsize; 3970 m->top = q; 3971 q->head = tsize | PINUSE_BIT; 3972 check_top_chunk(m, q); 3973 } 3974 else if (oldfirst == m->dv) { 3975 size_t dsize = m->dvsize += qsize; 3976 m->dv = q; 3977 set_size_and_pinuse_of_free_chunk(q, dsize); 3978 } 3979 else { 3980 if (!is_inuse(oldfirst)) { 3981 size_t nsize = chunksize(oldfirst); 3982 unlink_chunk(m, oldfirst, nsize); 3983 oldfirst = chunk_plus_offset(oldfirst, nsize); 3984 qsize += nsize; 3985 } 3986 set_free_with_pinuse(q, qsize, oldfirst); 3987 insert_chunk(m, q, qsize); 3988 check_free_chunk(m, q); 3989 } 3990 3991 check_malloced_chunk(m, chunk2mem(p), nb); 3992 return chunk2mem(p); 3993 } 3994 3995 /* Add a segment to hold a new noncontiguous region */ 3996 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) { 3997 /* Determine locations and sizes of segment, fenceposts, old top */ 3998 char* old_top = (char*)m->top; 3999 msegmentptr oldsp = segment_holding(m, old_top); 4000 char* old_end = oldsp->base + oldsp->size; 4001 size_t ssize = pad_request(sizeof(struct malloc_segment)); 4002 char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK); 4003 size_t offset = align_offset(chunk2mem(rawsp)); 4004 char* asp = rawsp + offset; 4005 char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? 
old_top : asp; 4006 mchunkptr sp = (mchunkptr)csp; 4007 msegmentptr ss = (msegmentptr)(chunk2mem(sp)); 4008 mchunkptr tnext = chunk_plus_offset(sp, ssize); 4009 mchunkptr p = tnext; 4010 int nfences = 0; 4011 4012 /* reset top to new space */ 4013 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); 4014 4015 /* Set up segment record */ 4016 assert(is_aligned(ss)); 4017 set_size_and_pinuse_of_inuse_chunk(m, sp, ssize); 4018 *ss = m->seg; /* Push current record */ 4019 m->seg.base = tbase; 4020 m->seg.size = tsize; 4021 m->seg.sflags = mmapped; 4022 m->seg.next = ss; 4023 4024 /* Insert trailing fenceposts */ 4025 for (;;) { 4026 mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE); 4027 p->head = FENCEPOST_HEAD; 4028 ++nfences; 4029 if ((char*)(&(nextp->head)) < old_end) 4030 p = nextp; 4031 else 4032 break; 4033 } 4034 assert(nfences >= 2); 4035 4036 /* Insert the rest of old top into a bin as an ordinary free chunk */ 4037 if (csp != old_top) { 4038 mchunkptr q = (mchunkptr)old_top; 4039 size_t psize = csp - old_top; 4040 mchunkptr tn = chunk_plus_offset(q, psize); 4041 set_free_with_pinuse(q, psize, tn); 4042 insert_chunk(m, q, psize); 4043 } 4044 4045 check_top_chunk(m, m->top); 4046 } 4047 4048 /* -------------------------- System allocation -------------------------- */ 4049 4050 /* Get memory from system using MORECORE or MMAP */ 4051 static void* sys_alloc(mstate m, size_t nb) { 4052 char* tbase = CMFAIL; 4053 size_t tsize = 0; 4054 flag_t mmap_flag = 0; 4055 size_t asize; /* allocation size */ 4056 4057 ensure_initialization(); 4058 4059 /* Directly map large chunks, but only if already initialized */ 4060 if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) { 4061 void* mem = mmap_alloc(m, nb); 4062 if (mem != 0) 4063 return mem; 4064 } 4065 4066 asize = granularity_align(nb + SYS_ALLOC_PADDING); 4067 if (asize <= nb) { 4068 /* BEGIN android-added: set errno */ 4069 MALLOC_FAILURE_ACTION; 4070 /* END android-added */ 4071 return 0; /* wraparound */ 4072 } 4073 if (m->footprint_limit != 0) { 4074 size_t fp = m->footprint + asize; 4075 if (fp <= m->footprint || fp > m->footprint_limit) { 4076 /* BEGIN android-added: set errno */ 4077 MALLOC_FAILURE_ACTION; 4078 /* END android-added */ 4079 return 0; 4080 } 4081 } 4082 4083 /* 4084 Try getting memory in any of three ways (in most-preferred to 4085 least-preferred order): 4086 1. A call to MORECORE that can normally contiguously extend memory. 4087 (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or 4088 main space is mmapped or a previous contiguous call failed) 4089 2. A call to MMAP new space (disabled if not HAVE_MMAP). 4090 Note that under the default settings, if MORECORE is unable to 4091 fulfill a request, and HAVE_MMAP is true, then mmap is 4092 used as a noncontiguous system allocator. This is a useful backup 4093 strategy for systems with holes in address spaces -- in this case 4094 sbrk cannot contiguously expand the heap, but mmap may be able to 4095 find space. 4096 3. A call to MORECORE that cannot usually contiguously extend memory. 4097 (disabled if not HAVE_MORECORE) 4098 4099 In all cases, we need to request enough bytes from the system to ensure 4100 we can malloc nb bytes upon success, so pad with enough space for 4101 top_foot, plus alignment-pad to make sure we don't lose bytes if 4102 not on boundary, and round this up to a granularity unit.
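    Worked example with illustrative numbers: given a 64K granularity
    and nb of about 200K, asize = granularity_align(nb +
    SYS_ALLOC_PADDING) comes to 256K; because asize >= nb +
    TOP_FOOT_SIZE + MALLOC_ALIGNMENT, at least nb usable bytes remain
    after the top foot and any alignment offset are carved off.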
4103 */ 4104 4105 if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) { 4106 char* br = CMFAIL; 4107 size_t ssize = asize; /* sbrk call size */ 4108 msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top); 4109 ACQUIRE_MALLOC_GLOBAL_LOCK(); 4110 4111 if (ss == 0) { /* First time through or recovery */ 4112 char* base = (char*)CALL_MORECORE(0); 4113 if (base != CMFAIL) { 4114 size_t fp; 4115 /* Adjust to end on a page boundary */ 4116 if (!is_page_aligned(base)) 4117 ssize += (page_align((size_t)base) - (size_t)base); 4118 fp = m->footprint + ssize; /* recheck limits */ 4119 if (ssize > nb && ssize < HALF_MAX_SIZE_T && 4120 (m->footprint_limit == 0 || 4121 (fp > m->footprint && fp <= m->footprint_limit)) && 4122 (br = (char*)(CALL_MORECORE(ssize))) == base) { 4123 tbase = base; 4124 tsize = ssize; 4125 } 4126 } 4127 } 4128 else { 4129 /* Subtract out existing available top space from MORECORE request. */ 4130 ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING); 4131 /* Use mem here only if it did contiguously extend old space */ 4132 if (ssize < HALF_MAX_SIZE_T && 4133 (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) { 4134 tbase = br; 4135 tsize = ssize; 4136 } 4137 } 4138 4139 if (tbase == CMFAIL) { /* Cope with partial failure */ 4140 if (br != CMFAIL) { /* Try to use/extend the space we did get */ 4141 if (ssize < HALF_MAX_SIZE_T && 4142 ssize < nb + SYS_ALLOC_PADDING) { 4143 size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize); 4144 if (esize < HALF_MAX_SIZE_T) { 4145 char* end = (char*)CALL_MORECORE(esize); 4146 if (end != CMFAIL) 4147 ssize += esize; 4148 else { /* Can't use; try to release */ 4149 (void) CALL_MORECORE(-ssize); 4150 br = CMFAIL; 4151 } 4152 } 4153 } 4154 } 4155 if (br != CMFAIL) { /* Use the space we did get */ 4156 tbase = br; 4157 tsize = ssize; 4158 } 4159 else 4160 disable_contiguous(m); /* Don't try contiguous path in the future */ 4161 } 4162 4163 RELEASE_MALLOC_GLOBAL_LOCK(); 4164 } 4165 4166 if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */ 4167 char* mp = (char*)(CALL_MMAP(asize)); 4168 if (mp != CMFAIL) { 4169 tbase = mp; 4170 tsize = asize; 4171 mmap_flag = USE_MMAP_BIT; 4172 } 4173 } 4174 4175 if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */ 4176 if (asize < HALF_MAX_SIZE_T) { 4177 char* br = CMFAIL; 4178 char* end = CMFAIL; 4179 ACQUIRE_MALLOC_GLOBAL_LOCK(); 4180 br = (char*)(CALL_MORECORE(asize)); 4181 end = (char*)(CALL_MORECORE(0)); 4182 RELEASE_MALLOC_GLOBAL_LOCK(); 4183 if (br != CMFAIL && end != CMFAIL && br < end) { 4184 size_t ssize = end - br; 4185 if (ssize > nb + TOP_FOOT_SIZE) { 4186 tbase = br; 4187 tsize = ssize; 4188 } 4189 } 4190 } 4191 } 4192 4193 if (tbase != CMFAIL) { 4194 4195 if ((m->footprint += tsize) > m->max_footprint) 4196 m->max_footprint = m->footprint; 4197 4198 if (!is_initialized(m)) { /* first-time initialization */ 4199 if (m->least_addr == 0 || tbase < m->least_addr) 4200 m->least_addr = tbase; 4201 m->seg.base = tbase; 4202 m->seg.size = tsize; 4203 m->seg.sflags = mmap_flag; 4204 m->magic = mparams.magic; 4205 m->release_checks = MAX_RELEASE_CHECK_RATE; 4206 init_bins(m); 4207 #if !ONLY_MSPACES 4208 if (is_global(m)) 4209 init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); 4210 else 4211 #endif 4212 { 4213 /* Offset top by embedded malloc_state */ 4214 mchunkptr mn = next_chunk(mem2chunk(m)); 4215 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE); 4216 } 4217 } 4218 4219 else { 4220 /* Try to merge with an existing segment */
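      /*
        Two merge shapes are tried (sketch): if the new region begins
        exactly where the segment holding top ends, that segment is
        extended and top grows in place (the append case below); if the
        new region ends exactly at the base of another segment, that
        segment's base is slid down and the request is carved from the
        front via prepend_alloc. Failing both, add_segment records a
        brand-new segment.
      */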
4221 msegmentptr sp = &m->seg; 4222 /* Only consider most recent segment if traversal suppressed */ 4223 while (sp != 0 && tbase != sp->base + sp->size) 4224 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next; 4225 if (sp != 0 && 4226 !is_extern_segment(sp) && 4227 (sp->sflags & USE_MMAP_BIT) == mmap_flag && 4228 segment_holds(sp, m->top)) { /* append */ 4229 sp->size += tsize; 4230 init_top(m, m->top, m->topsize + tsize); 4231 } 4232 else { 4233 if (tbase < m->least_addr) 4234 m->least_addr = tbase; 4235 sp = &m->seg; 4236 while (sp != 0 && sp->base != tbase + tsize) 4237 sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next; 4238 if (sp != 0 && 4239 !is_extern_segment(sp) && 4240 (sp->sflags & USE_MMAP_BIT) == mmap_flag) { 4241 char* oldbase = sp->base; 4242 sp->base = tbase; 4243 sp->size += tsize; 4244 return prepend_alloc(m, tbase, oldbase, nb); 4245 } 4246 else 4247 add_segment(m, tbase, tsize, mmap_flag); 4248 } 4249 } 4250 4251 if (nb < m->topsize) { /* Allocate from new or extended top space */ 4252 size_t rsize = m->topsize -= nb; 4253 mchunkptr p = m->top; 4254 mchunkptr r = m->top = chunk_plus_offset(p, nb); 4255 r->head = rsize | PINUSE_BIT; 4256 set_size_and_pinuse_of_inuse_chunk(m, p, nb); 4257 check_top_chunk(m, m->top); 4258 check_malloced_chunk(m, chunk2mem(p), nb); 4259 return chunk2mem(p); 4260 } 4261 } 4262 4263 MALLOC_FAILURE_ACTION; 4264 return 0; 4265 } 4266 4267 /* ----------------------- system deallocation -------------------------- */ 4268 4269 /* Unmap and unlink any mmapped segments that don't contain used chunks */ 4270 static size_t release_unused_segments(mstate m) { 4271 size_t released = 0; 4272 int nsegs = 0; 4273 msegmentptr pred = &m->seg; 4274 msegmentptr sp = pred->next; 4275 while (sp != 0) { 4276 char* base = sp->base; 4277 size_t size = sp->size; 4278 msegmentptr next = sp->next; 4279 ++nsegs; 4280 if (is_mmapped_segment(sp) && !is_extern_segment(sp)) { 4281 mchunkptr p = align_as_chunk(base); 4282 size_t psize = chunksize(p); 4283 /* Can unmap if first chunk holds entire segment and not pinned */ 4284 if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) { 4285 tchunkptr tp = (tchunkptr)p; 4286 assert(segment_holds(sp, (char*)sp)); 4287 if (p == m->dv) { 4288 m->dv = 0; 4289 m->dvsize = 0; 4290 } 4291 else { 4292 unlink_large_chunk(m, tp); 4293 } 4294 if (CALL_MUNMAP(base, size) == 0) { 4295 released += size; 4296 m->footprint -= size; 4297 /* unlink obsoleted record */ 4298 sp = pred; 4299 sp->next = next; 4300 } 4301 else { /* back out if cannot unmap */ 4302 insert_large_chunk(m, tp, psize); 4303 } 4304 } 4305 } 4306 if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */ 4307 break; 4308 pred = sp; 4309 sp = next; 4310 } 4311 /* Reset check counter */ 4312 m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)? 
4313 (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE); 4314 return released; 4315 } 4316 4317 static int sys_trim(mstate m, size_t pad) { 4318 size_t released = 0; 4319 ensure_initialization(); 4320 if (pad < MAX_REQUEST && is_initialized(m)) { 4321 pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */ 4322 4323 if (m->topsize > pad) { 4324 /* Shrink top space in granularity-size units, keeping at least one */ 4325 size_t unit = mparams.granularity; 4326 size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit - 4327 SIZE_T_ONE) * unit; 4328 msegmentptr sp = segment_holding(m, (char*)m->top); 4329 4330 if (!is_extern_segment(sp)) { 4331 if (is_mmapped_segment(sp)) { 4332 if (HAVE_MMAP && 4333 sp->size >= extra && 4334 !has_segment_link(m, sp)) { /* can't shrink if pinned */ 4335 size_t newsize = sp->size - extra; 4336 (void)newsize; /* placate people compiling -Wunused-variable */ 4337 /* Prefer mremap, fall back to munmap */ 4338 if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) || 4339 (CALL_MUNMAP(sp->base + newsize, extra) == 0)) { 4340 released = extra; 4341 } 4342 } 4343 } 4344 else if (HAVE_MORECORE) { 4345 if (extra >= HALF_MAX_SIZE_T) /* Avoid wrapping negative */ 4346 extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit; 4347 ACQUIRE_MALLOC_GLOBAL_LOCK(); 4348 { 4349 /* Make sure end of memory is where we last set it. */ 4350 char* old_br = (char*)(CALL_MORECORE(0)); 4351 if (old_br == sp->base + sp->size) { 4352 char* rel_br = (char*)(CALL_MORECORE(-extra)); 4353 char* new_br = (char*)(CALL_MORECORE(0)); 4354 if (rel_br != CMFAIL && new_br < old_br) 4355 released = old_br - new_br; 4356 } 4357 } 4358 RELEASE_MALLOC_GLOBAL_LOCK(); 4359 } 4360 } 4361 4362 if (released != 0) { 4363 sp->size -= released; 4364 m->footprint -= released; 4365 init_top(m, m->top, m->topsize - released); 4366 check_top_chunk(m, m->top); 4367 } 4368 } 4369 4370 /* Unmap any unused mmapped segments */ 4371 if (HAVE_MMAP) 4372 released += release_unused_segments(m); 4373 4374 /* On failure, disable autotrim to avoid repeated failed future calls */ 4375 if (released == 0 && m->topsize > m->trim_check) 4376 m->trim_check = MAX_SIZE_T; 4377 } 4378 4379 return (released != 0)? 1 : 0; 4380 } 4381 4382 /* Consolidate and bin a chunk. Differs from exported versions 4383 of free mainly in that the chunk need not be marked as inuse. 
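   Sketch of the calling pattern (a real instance appears in
   try_realloc_chunk below, when shrinking an inuse chunk p to nb
   bytes in place):

     rsize = chunksize(p) - nb;          leftover tail of the chunk
     r = chunk_plus_offset(p, nb);
     set_inuse(m, p, nb);
     set_inuse(m, r, rsize);             tail marked inuse only so the
     dispose_chunk(m, r, rsize);         pinuse bookkeeping stays coherent

   after which the tail has been coalesced with any free neighbors and
   binned (or merged into top or dv).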
4384 */ 4385 static void dispose_chunk(mstate m, mchunkptr p, size_t psize) { 4386 mchunkptr next = chunk_plus_offset(p, psize); 4387 if (!pinuse(p)) { 4388 mchunkptr prev; 4389 size_t prevsize = p->prev_foot; 4390 if (is_mmapped(p)) { 4391 psize += prevsize + MMAP_FOOT_PAD; 4392 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0) 4393 m->footprint -= psize; 4394 return; 4395 } 4396 prev = chunk_minus_offset(p, prevsize); 4397 psize += prevsize; 4398 p = prev; 4399 if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */ 4400 if (p != m->dv) { 4401 unlink_chunk(m, p, prevsize); 4402 } 4403 else if ((next->head & INUSE_BITS) == INUSE_BITS) { 4404 m->dvsize = psize; 4405 set_free_with_pinuse(p, psize, next); 4406 return; 4407 } 4408 } 4409 else { 4410 CORRUPTION_ERROR_ACTION(m); 4411 return; 4412 } 4413 } 4414 if (RTCHECK(ok_address(m, next))) { 4415 if (!cinuse(next)) { /* consolidate forward */ 4416 if (next == m->top) { 4417 size_t tsize = m->topsize += psize; 4418 m->top = p; 4419 p->head = tsize | PINUSE_BIT; 4420 if (p == m->dv) { 4421 m->dv = 0; 4422 m->dvsize = 0; 4423 } 4424 return; 4425 } 4426 else if (next == m->dv) { 4427 size_t dsize = m->dvsize += psize; 4428 m->dv = p; 4429 set_size_and_pinuse_of_free_chunk(p, dsize); 4430 return; 4431 } 4432 else { 4433 size_t nsize = chunksize(next); 4434 psize += nsize; 4435 unlink_chunk(m, next, nsize); 4436 set_size_and_pinuse_of_free_chunk(p, psize); 4437 if (p == m->dv) { 4438 m->dvsize = psize; 4439 return; 4440 } 4441 } 4442 } 4443 else { 4444 set_free_with_pinuse(p, psize, next); 4445 } 4446 insert_chunk(m, p, psize); 4447 } 4448 else { 4449 CORRUPTION_ERROR_ACTION(m); 4450 } 4451 } 4452 4453 /* ---------------------------- malloc --------------------------- */ 4454 4455 /* allocate a large request from the best fitting chunk in a treebin */ 4456 static void* tmalloc_large(mstate m, size_t nb) { 4457 tchunkptr v = 0; 4458 size_t rsize = -nb; /* Unsigned negation */ 4459 tchunkptr t; 4460 bindex_t idx; 4461 compute_tree_index(nb, idx); 4462 if ((t = *treebin_at(m, idx)) != 0) { 4463 /* Traverse tree for this bin looking for node with size == nb */ 4464 size_t sizebits = nb << leftshift_for_tree_index(idx); 4465 tchunkptr rst = 0; /* The deepest untaken right subtree */ 4466 for (;;) { 4467 tchunkptr rt; 4468 size_t trem = chunksize(t) - nb; 4469 if (trem < rsize) { 4470 v = t; 4471 if ((rsize = trem) == 0) 4472 break; 4473 } 4474 rt = t->child[1]; 4475 t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; 4476 if (rt != 0 && rt != t) 4477 rst = rt; 4478 if (t == 0) { 4479 t = rst; /* set t to least subtree holding sizes > nb */ 4480 break; 4481 } 4482 sizebits <<= 1; 4483 } 4484 } 4485 if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */ 4486 binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap; 4487 if (leftbits != 0) { 4488 bindex_t i; 4489 binmap_t leastbit = least_bit(leftbits); 4490 compute_bit2idx(leastbit, i); 4491 t = *treebin_at(m, i); 4492 } 4493 } 4494 4495 while (t != 0) { /* find smallest of tree or subtree */ 4496 size_t trem = chunksize(t) - nb; 4497 if (trem < rsize) { 4498 rsize = trem; 4499 v = t; 4500 } 4501 t = leftmost_child(t); 4502 } 4503 4504 /* If dv is a better fit, return 0 so malloc will use it */ 4505 if (v != 0 && rsize < (size_t)(m->dvsize - nb)) { 4506 if (RTCHECK(ok_address(m, v))) { /* split */ 4507 mchunkptr r = chunk_plus_offset(v, nb); 4508 assert(chunksize(v) == rsize + nb); 4509 if (RTCHECK(ok_next(v, r))) { 4510 unlink_large_chunk(m, v); 4511 if (rsize < 
MIN_CHUNK_SIZE) 4512 set_inuse_and_pinuse(m, v, (rsize + nb)); 4513 else { 4514 set_size_and_pinuse_of_inuse_chunk(m, v, nb); 4515 set_size_and_pinuse_of_free_chunk(r, rsize); 4516 insert_chunk(m, r, rsize); 4517 } 4518 return chunk2mem(v); 4519 } 4520 } 4521 CORRUPTION_ERROR_ACTION(m); 4522 } 4523 return 0; 4524 } 4525 4526 /* allocate a small request from the best fitting chunk in a treebin */ 4527 static void* tmalloc_small(mstate m, size_t nb) { 4528 tchunkptr t, v; 4529 size_t rsize; 4530 bindex_t i; 4531 binmap_t leastbit = least_bit(m->treemap); 4532 compute_bit2idx(leastbit, i); 4533 v = t = *treebin_at(m, i); 4534 rsize = chunksize(t) - nb; 4535 4536 while ((t = leftmost_child(t)) != 0) { 4537 size_t trem = chunksize(t) - nb; 4538 if (trem < rsize) { 4539 rsize = trem; 4540 v = t; 4541 } 4542 } 4543 4544 if (RTCHECK(ok_address(m, v))) { 4545 mchunkptr r = chunk_plus_offset(v, nb); 4546 assert(chunksize(v) == rsize + nb); 4547 if (RTCHECK(ok_next(v, r))) { 4548 unlink_large_chunk(m, v); 4549 if (rsize < MIN_CHUNK_SIZE) 4550 set_inuse_and_pinuse(m, v, (rsize + nb)); 4551 else { 4552 set_size_and_pinuse_of_inuse_chunk(m, v, nb); 4553 set_size_and_pinuse_of_free_chunk(r, rsize); 4554 replace_dv(m, r, rsize); 4555 } 4556 return chunk2mem(v); 4557 } 4558 } 4559 4560 CORRUPTION_ERROR_ACTION(m); 4561 return 0; 4562 } 4563 4564 #if !ONLY_MSPACES 4565 4566 void* dlmalloc(size_t bytes) { 4567 /* 4568 Basic algorithm: 4569 If a small request (< 256 bytes minus per-chunk overhead): 4570 1. If one exists, use a remainderless chunk in associated smallbin. 4571 (Remainderless means that there are too few excess bytes to 4572 represent as a chunk.) 4573 2. If it is big enough, use the dv chunk, which is normally the 4574 chunk adjacent to the one used for the most recent small request. 4575 3. If one exists, split the smallest available chunk in a bin, 4576 saving remainder in dv. 4577 4. If it is big enough, use the top chunk. 4578 5. If available, get memory from system and use it 4579 Otherwise, for a large request: 4580 1. Find the smallest available binned chunk that fits, and use it 4581 if it is better fitting than dv chunk, splitting if necessary. 4582 2. If better fitting than any binned chunk, use the dv chunk. 4583 3. If it is big enough, use the top chunk. 4584 4. If request size >= mmap threshold, try to directly mmap this chunk. 4585 5. If available, get memory from system and use it 4586 4587 The ugly goto's here ensure that postaction occurs along all paths. 4588 */ 4589 4590 #if USE_LOCKS 4591 ensure_initialization(); /* initialize in sys_alloc if not using locks */ 4592 #endif 4593 4594 if (!PREACTION(gm)) { 4595 void* mem; 4596 size_t nb; 4597 if (bytes <= MAX_SMALL_REQUEST) { 4598 bindex_t idx; 4599 binmap_t smallbits; 4600 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes); 4601 idx = small_index(nb); 4602 smallbits = gm->smallmap >> idx; 4603 4604 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. 
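       With smallbits = gm->smallmap >> idx, bit 0 marks the request's
       exact bin and bit 1 the next size up; chunks in either bin exceed
       nb by less than MIN_CHUNK_SIZE (at most SMALLBIN_WIDTH bytes),
       too little to split off. E.g. if smallbits ends in binary 10, the
       exact bin is empty but the next is usable, and
       idx += ~smallbits & 1 bumps idx to it.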
*/ 4605 mchunkptr b, p; 4606 idx += ~smallbits & 1; /* Uses next bin if idx empty */ 4607 b = smallbin_at(gm, idx); 4608 p = b->fd; 4609 assert(chunksize(p) == small_index2size(idx)); 4610 unlink_first_small_chunk(gm, b, p, idx); 4611 set_inuse_and_pinuse(gm, p, small_index2size(idx)); 4612 mem = chunk2mem(p); 4613 check_malloced_chunk(gm, mem, nb); 4614 goto postaction; 4615 } 4616 4617 else if (nb > gm->dvsize) { 4618 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */ 4619 mchunkptr b, p, r; 4620 size_t rsize; 4621 bindex_t i; 4622 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx)); 4623 binmap_t leastbit = least_bit(leftbits); 4624 compute_bit2idx(leastbit, i); 4625 b = smallbin_at(gm, i); 4626 p = b->fd; 4627 assert(chunksize(p) == small_index2size(i)); 4628 unlink_first_small_chunk(gm, b, p, i); 4629 rsize = small_index2size(i) - nb; 4630 /* Fit here cannot be remainderless if 4byte sizes */ 4631 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) 4632 set_inuse_and_pinuse(gm, p, small_index2size(i)); 4633 else { 4634 set_size_and_pinuse_of_inuse_chunk(gm, p, nb); 4635 r = chunk_plus_offset(p, nb); 4636 set_size_and_pinuse_of_free_chunk(r, rsize); 4637 replace_dv(gm, r, rsize); 4638 } 4639 mem = chunk2mem(p); 4640 check_malloced_chunk(gm, mem, nb); 4641 goto postaction; 4642 } 4643 4644 else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) { 4645 check_malloced_chunk(gm, mem, nb); 4646 goto postaction; 4647 } 4648 } 4649 } 4650 else if (bytes >= MAX_REQUEST) 4651 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */ 4652 else { 4653 nb = pad_request(bytes); 4654 if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) { 4655 check_malloced_chunk(gm, mem, nb); 4656 goto postaction; 4657 } 4658 } 4659 4660 if (nb <= gm->dvsize) { 4661 size_t rsize = gm->dvsize - nb; 4662 mchunkptr p = gm->dv; 4663 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */ 4664 mchunkptr r = gm->dv = chunk_plus_offset(p, nb); 4665 gm->dvsize = rsize; 4666 set_size_and_pinuse_of_free_chunk(r, rsize); 4667 set_size_and_pinuse_of_inuse_chunk(gm, p, nb); 4668 } 4669 else { /* exhaust dv */ 4670 size_t dvs = gm->dvsize; 4671 gm->dvsize = 0; 4672 gm->dv = 0; 4673 set_inuse_and_pinuse(gm, p, dvs); 4674 } 4675 mem = chunk2mem(p); 4676 check_malloced_chunk(gm, mem, nb); 4677 goto postaction; 4678 } 4679 4680 else if (nb < gm->topsize) { /* Split top */ 4681 size_t rsize = gm->topsize -= nb; 4682 mchunkptr p = gm->top; 4683 mchunkptr r = gm->top = chunk_plus_offset(p, nb); 4684 r->head = rsize | PINUSE_BIT; 4685 set_size_and_pinuse_of_inuse_chunk(gm, p, nb); 4686 mem = chunk2mem(p); 4687 check_top_chunk(gm, gm->top); 4688 check_malloced_chunk(gm, mem, nb); 4689 goto postaction; 4690 } 4691 4692 mem = sys_alloc(gm, nb); 4693 4694 postaction: 4695 POSTACTION(gm); 4696 return mem; 4697 } 4698 4699 return 0; 4700 } 4701 4702 /* ---------------------------- free --------------------------- */ 4703 4704 void dlfree(void* mem) { 4705 /* 4706 Consolidate freed chunks with preceding or succeeding bordering 4707 free chunks, if they exist, and then place in a bin. Intermixed 4708 with special cases for top, dv, mmapped chunks, and usage errors.
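For example, freeing a chunk whose predecessor is already free and whose successor is the top chunk merges all three into top; if the enlarged top then exceeds the trim threshold, sys_trim returns memory to the system.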
4709 */ 4710 4711 if (mem != 0) { 4712 mchunkptr p = mem2chunk(mem); 4713 #if FOOTERS 4714 mstate fm = get_mstate_for(p); 4715 if (!ok_magic(fm)) { 4716 USAGE_ERROR_ACTION(fm, p); 4717 return; 4718 } 4719 #else /* FOOTERS */ 4720 #define fm gm 4721 #endif /* FOOTERS */ 4722 if (!PREACTION(fm)) { 4723 check_inuse_chunk(fm, p); 4724 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) { 4725 size_t psize = chunksize(p); 4726 mchunkptr next = chunk_plus_offset(p, psize); 4727 if (!pinuse(p)) { 4728 size_t prevsize = p->prev_foot; 4729 if (is_mmapped(p)) { 4730 psize += prevsize + MMAP_FOOT_PAD; 4731 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0) 4732 fm->footprint -= psize; 4733 goto postaction; 4734 } 4735 else { 4736 mchunkptr prev = chunk_minus_offset(p, prevsize); 4737 psize += prevsize; 4738 p = prev; 4739 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */ 4740 if (p != fm->dv) { 4741 unlink_chunk(fm, p, prevsize); 4742 } 4743 else if ((next->head & INUSE_BITS) == INUSE_BITS) { 4744 fm->dvsize = psize; 4745 set_free_with_pinuse(p, psize, next); 4746 goto postaction; 4747 } 4748 } 4749 else 4750 goto erroraction; 4751 } 4752 } 4753 4754 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) { 4755 if (!cinuse(next)) { /* consolidate forward */ 4756 if (next == fm->top) { 4757 size_t tsize = fm->topsize += psize; 4758 fm->top = p; 4759 p->head = tsize | PINUSE_BIT; 4760 if (p == fm->dv) { 4761 fm->dv = 0; 4762 fm->dvsize = 0; 4763 } 4764 if (should_trim(fm, tsize)) 4765 sys_trim(fm, 0); 4766 goto postaction; 4767 } 4768 else if (next == fm->dv) { 4769 size_t dsize = fm->dvsize += psize; 4770 fm->dv = p; 4771 set_size_and_pinuse_of_free_chunk(p, dsize); 4772 goto postaction; 4773 } 4774 else { 4775 size_t nsize = chunksize(next); 4776 psize += nsize; 4777 unlink_chunk(fm, next, nsize); 4778 set_size_and_pinuse_of_free_chunk(p, psize); 4779 if (p == fm->dv) { 4780 fm->dvsize = psize; 4781 goto postaction; 4782 } 4783 } 4784 } 4785 else 4786 set_free_with_pinuse(p, psize, next); 4787 4788 if (is_small(psize)) { 4789 insert_small_chunk(fm, p, psize); 4790 check_free_chunk(fm, p); 4791 } 4792 else { 4793 tchunkptr tp = (tchunkptr)p; 4794 insert_large_chunk(fm, tp, psize); 4795 check_free_chunk(fm, p); 4796 if (--fm->release_checks == 0) 4797 release_unused_segments(fm); 4798 } 4799 goto postaction; 4800 } 4801 } 4802 erroraction: 4803 USAGE_ERROR_ACTION(fm, p); 4804 postaction: 4805 POSTACTION(fm); 4806 } 4807 } 4808 #if !FOOTERS 4809 #undef fm 4810 #endif /* FOOTERS */ 4811 } 4812 4813 void* dlcalloc(size_t n_elements, size_t elem_size) { 4814 void* mem; 4815 size_t req = 0; 4816 if (n_elements != 0) { 4817 req = n_elements * elem_size; 4818 if (((n_elements | elem_size) & ~(size_t)0xffff) && 4819 (req / n_elements != elem_size)) 4820 req = MAX_SIZE_T; /* force downstream failure on overflow */ 4821 } 4822 mem = dlmalloc(req); 4823 if (mem != 0 && calloc_must_clear(mem2chunk(mem))) 4824 memset(mem, 0, req); 4825 return mem; 4826 } 4827 4828 #endif /* !ONLY_MSPACES */ 4829 4830 /* ------------ Internal support for realloc, memalign, etc -------------- */ 4831 4832 /* Try to realloc; only in-place unless can_move true */ 4833 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb, 4834 int can_move) { 4835 mchunkptr newp = 0; 4836 size_t oldsize = chunksize(p); 4837 mchunkptr next = chunk_plus_offset(p, oldsize); 4838 if (RTCHECK(ok_address(m, p) && ok_inuse(p) && 4839 ok_next(p, next) && ok_pinuse(next))) { 4840 if (is_mmapped(p)) { 4841 newp = mmap_resize(m, p, nb, can_move); 
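/* mmap_resize (defined earlier in this file) handles mmapped chunks entirely through the mmap system: it may keep the existing mapping, grow it in place, or, when can_move is nonzero, let CALL_MREMAP relocate it; it returns 0 if none of these is possible. */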
4842 } 4843 else if (oldsize >= nb) { /* already big enough */ 4844 size_t rsize = oldsize - nb; 4845 if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */ 4846 mchunkptr r = chunk_plus_offset(p, nb); 4847 set_inuse(m, p, nb); 4848 set_inuse(m, r, rsize); 4849 dispose_chunk(m, r, rsize); 4850 } 4851 newp = p; 4852 } 4853 else if (next == m->top) { /* extend into top */ 4854 if (oldsize + m->topsize > nb) { 4855 size_t newsize = oldsize + m->topsize; 4856 size_t newtopsize = newsize - nb; 4857 mchunkptr newtop = chunk_plus_offset(p, nb); 4858 set_inuse(m, p, nb); 4859 newtop->head = newtopsize |PINUSE_BIT; 4860 m->top = newtop; 4861 m->topsize = newtopsize; 4862 newp = p; 4863 } 4864 } 4865 else if (next == m->dv) { /* extend into dv */ 4866 size_t dvs = m->dvsize; 4867 if (oldsize + dvs >= nb) { 4868 size_t dsize = oldsize + dvs - nb; 4869 if (dsize >= MIN_CHUNK_SIZE) { 4870 mchunkptr r = chunk_plus_offset(p, nb); 4871 mchunkptr n = chunk_plus_offset(r, dsize); 4872 set_inuse(m, p, nb); 4873 set_size_and_pinuse_of_free_chunk(r, dsize); 4874 clear_pinuse(n); 4875 m->dvsize = dsize; 4876 m->dv = r; 4877 } 4878 else { /* exhaust dv */ 4879 size_t newsize = oldsize + dvs; 4880 set_inuse(m, p, newsize); 4881 m->dvsize = 0; 4882 m->dv = 0; 4883 } 4884 newp = p; 4885 } 4886 } 4887 else if (!cinuse(next)) { /* extend into next free chunk */ 4888 size_t nextsize = chunksize(next); 4889 if (oldsize + nextsize >= nb) { 4890 size_t rsize = oldsize + nextsize - nb; 4891 unlink_chunk(m, next, nextsize); 4892 if (rsize < MIN_CHUNK_SIZE) { 4893 size_t newsize = oldsize + nextsize; 4894 set_inuse(m, p, newsize); 4895 } 4896 else { 4897 mchunkptr r = chunk_plus_offset(p, nb); 4898 set_inuse(m, p, nb); 4899 set_inuse(m, r, rsize); 4900 dispose_chunk(m, r, rsize); 4901 } 4902 newp = p; 4903 } 4904 } 4905 } 4906 else { 4907 USAGE_ERROR_ACTION(m, chunk2mem(p)); 4908 } 4909 return newp; 4910 } 4911 4912 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) { 4913 void* mem = 0; 4914 if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */ 4915 alignment = MIN_CHUNK_SIZE; 4916 if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */ 4917 size_t a = MALLOC_ALIGNMENT << 1; 4918 while (a < alignment) a <<= 1; 4919 alignment = a; 4920 } 4921 if (bytes >= MAX_REQUEST - alignment) { 4922 if (m != 0) { /* Test isn't needed but avoids compiler warning */ 4923 MALLOC_FAILURE_ACTION; 4924 } 4925 } 4926 else { 4927 size_t nb = request2size(bytes); 4928 size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD; 4929 mem = internal_malloc(m, req); 4930 if (mem != 0) { 4931 mchunkptr p = mem2chunk(mem); 4932 if (PREACTION(m)) 4933 return 0; 4934 if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */ 4935 /* 4936 Find an aligned spot inside chunk. Since we need to give 4937 back leading space in a chunk of at least MIN_CHUNK_SIZE, if 4938 the first calculation places us at a spot with less than 4939 MIN_CHUNK_SIZE leader, we can move to the next aligned spot. 4940 We've allocated enough total room so that this is always 4941 possible. 4942 */ 4943 char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment - 4944 SIZE_T_ONE)) & 4945 -alignment)); 4946 char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)? 
4947 br : br+alignment; 4948 mchunkptr newp = (mchunkptr)pos; 4949 size_t leadsize = pos - (char*)(p); 4950 size_t newsize = chunksize(p) - leadsize; 4951 4952 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */ 4953 newp->prev_foot = p->prev_foot + leadsize; 4954 newp->head = newsize; 4955 } 4956 else { /* Otherwise, give back leader, use the rest */ 4957 set_inuse(m, newp, newsize); 4958 set_inuse(m, p, leadsize); 4959 dispose_chunk(m, p, leadsize); 4960 } 4961 p = newp; 4962 } 4963 4964 /* Give back spare room at the end */ 4965 if (!is_mmapped(p)) { 4966 size_t size = chunksize(p); 4967 if (size > nb + MIN_CHUNK_SIZE) { 4968 size_t remainder_size = size - nb; 4969 mchunkptr remainder = chunk_plus_offset(p, nb); 4970 set_inuse(m, p, nb); 4971 set_inuse(m, remainder, remainder_size); 4972 dispose_chunk(m, remainder, remainder_size); 4973 } 4974 } 4975 4976 mem = chunk2mem(p); 4977 assert (chunksize(p) >= nb); 4978 assert(((size_t)mem & (alignment - 1)) == 0); 4979 check_inuse_chunk(m, p); 4980 POSTACTION(m); 4981 } 4982 } 4983 return mem; 4984 } 4985 4986 /* 4987 Common support for independent_X routines, handling 4988 all of the combinations that can result. 4989 The opts arg has: 4990 bit 0 set if all elements are same size (using sizes[0]) 4991 bit 1 set if elements should be zeroed 4992 */ 4993 static void** ialloc(mstate m, 4994 size_t n_elements, 4995 size_t* sizes, 4996 int opts, 4997 void* chunks[]) { 4998 4999 size_t element_size; /* chunksize of each element, if all same */ 5000 size_t contents_size; /* total size of elements */ 5001 size_t array_size; /* request size of pointer array */ 5002 void* mem; /* malloced aggregate space */ 5003 mchunkptr p; /* corresponding chunk */ 5004 size_t remainder_size; /* remaining bytes while splitting */ 5005 void** marray; /* either "chunks" or malloced ptr array */ 5006 mchunkptr array_chunk; /* chunk for malloced ptr array */ 5007 flag_t was_enabled; /* to disable mmap */ 5008 size_t size; 5009 size_t i; 5010 5011 ensure_initialization(); 5012 /* compute array length, if needed */ 5013 if (chunks != 0) { 5014 if (n_elements == 0) 5015 return chunks; /* nothing to do */ 5016 marray = chunks; 5017 array_size = 0; 5018 } 5019 else { 5020 /* if empty req, must still return chunk representing empty array */ 5021 if (n_elements == 0) 5022 return (void**)internal_malloc(m, 0); 5023 marray = 0; 5024 array_size = request2size(n_elements * (sizeof(void*))); 5025 } 5026 5027 /* compute total element size */ 5028 if (opts & 0x1) { /* all-same-size */ 5029 element_size = request2size(*sizes); 5030 contents_size = n_elements * element_size; 5031 } 5032 else { /* add up all the sizes */ 5033 element_size = 0; 5034 contents_size = 0; 5035 for (i = 0; i != n_elements; ++i) 5036 contents_size += request2size(sizes[i]); 5037 } 5038 5039 size = contents_size + array_size; 5040 5041 /* 5042 Allocate the aggregate chunk. First disable direct-mmapping so 5043 malloc won't use it, since we would not be able to later 5044 free/realloc space internal to a segregated mmap region. 
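Passing size - CHUNK_OVERHEAD is deliberate: every contribution to size was already padded by request2size, and internal_malloc adds one chunk overhead back, so the aggregate chunk returned holds at least contents_size + array_size bytes.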
*/ 5046 was_enabled = use_mmap(m); 5047 disable_mmap(m); 5048 mem = internal_malloc(m, size - CHUNK_OVERHEAD); 5049 if (was_enabled) 5050 enable_mmap(m); 5051 if (mem == 0) 5052 return 0; 5053 5054 if (PREACTION(m)) return 0; 5055 p = mem2chunk(mem); 5056 remainder_size = chunksize(p); 5057 5058 assert(!is_mmapped(p)); 5059 5060 if (opts & 0x2) { /* optionally clear the elements */ 5061 memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size); 5062 } 5063 5064 /* If not provided, allocate the pointer array as final part of chunk */ 5065 if (marray == 0) { 5066 size_t array_chunk_size; 5067 array_chunk = chunk_plus_offset(p, contents_size); 5068 array_chunk_size = remainder_size - contents_size; 5069 marray = (void**) (chunk2mem(array_chunk)); 5070 set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size); 5071 remainder_size = contents_size; 5072 } 5073 5074 /* split out elements */ 5075 for (i = 0; ; ++i) { 5076 marray[i] = chunk2mem(p); 5077 if (i != n_elements-1) { 5078 if (element_size != 0) 5079 size = element_size; 5080 else 5081 size = request2size(sizes[i]); 5082 remainder_size -= size; 5083 set_size_and_pinuse_of_inuse_chunk(m, p, size); 5084 p = chunk_plus_offset(p, size); 5085 } 5086 else { /* the final element absorbs any overallocation slop */ 5087 set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size); 5088 break; 5089 } 5090 } 5091 5092 #if DEBUG 5093 if (marray != chunks) { 5094 /* final element must have exactly exhausted chunk */ 5095 if (element_size != 0) { 5096 assert(remainder_size == element_size); 5097 } 5098 else { 5099 assert(remainder_size == request2size(sizes[i])); 5100 } 5101 check_inuse_chunk(m, mem2chunk(marray)); 5102 } 5103 for (i = 0; i != n_elements; ++i) 5104 check_inuse_chunk(m, mem2chunk(marray[i])); 5105 5106 #endif /* DEBUG */ 5107 5108 POSTACTION(m); 5109 return marray; 5110 } 5111 5112 /* Try to free all pointers in the given array. 5113 Note: this could be made faster by delaying consolidation, 5114 at the price of disabling some user integrity checks. We 5115 still optimize some consolidations by combining adjacent 5116 chunks before freeing, which will occur often if allocated 5117 with ialloc or the array is sorted.
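A minimal usage sketch of the dlbulk_free wrapper defined below
   (illustrative only; dlbulk_free is available when !ONLY_MSPACES):

     void* p[2];
     p[0] = dlmalloc(100);
     p[1] = dlmalloc(200);
     size_t unfreed = dlbulk_free(p, 2);
     // both chunks freed and both array slots zeroed; unfreed == 0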
5118 */ 5119 static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) { 5120 size_t unfreed = 0; 5121 if (!PREACTION(m)) { 5122 void** a; 5123 void** fence = &(array[nelem]); 5124 for (a = array; a != fence; ++a) { 5125 void* mem = *a; 5126 if (mem != 0) { 5127 mchunkptr p = mem2chunk(mem); 5128 size_t psize = chunksize(p); 5129 #if FOOTERS 5130 if (get_mstate_for(p) != m) { 5131 ++unfreed; 5132 continue; 5133 } 5134 #endif 5135 check_inuse_chunk(m, p); 5136 *a = 0; 5137 if (RTCHECK(ok_address(m, p) && ok_inuse(p))) { 5138 void ** b = a + 1; /* try to merge with next chunk */ 5139 mchunkptr next = next_chunk(p); 5140 if (b != fence && *b == chunk2mem(next)) { 5141 size_t newsize = chunksize(next) + psize; 5142 set_inuse(m, p, newsize); 5143 *b = chunk2mem(p); 5144 } 5145 else 5146 dispose_chunk(m, p, psize); 5147 } 5148 else { 5149 CORRUPTION_ERROR_ACTION(m); 5150 break; 5151 } 5152 } 5153 } 5154 if (should_trim(m, m->topsize)) 5155 sys_trim(m, 0); 5156 POSTACTION(m); 5157 } 5158 return unfreed; 5159 } 5160 5161 /* Traversal */ 5162 #if MALLOC_INSPECT_ALL 5163 static void internal_inspect_all(mstate m, 5164 void(*handler)(void *start, 5165 void *end, 5166 size_t used_bytes, 5167 void* callback_arg), 5168 void* arg) { 5169 if (is_initialized(m)) { 5170 mchunkptr top = m->top; 5171 msegmentptr s; 5172 for (s = &m->seg; s != 0; s = s->next) { 5173 mchunkptr q = align_as_chunk(s->base); 5174 while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) { 5175 mchunkptr next = next_chunk(q); 5176 size_t sz = chunksize(q); 5177 size_t used; 5178 void* start; 5179 if (is_inuse(q)) { 5180 used = sz - CHUNK_OVERHEAD; /* must not be mmapped */ 5181 start = chunk2mem(q); 5182 } 5183 else { 5184 used = 0; 5185 if (is_small(sz)) { /* offset by possible bookkeeping */ 5186 start = (void*)((char*)q + sizeof(struct malloc_chunk)); 5187 } 5188 else { 5189 start = (void*)((char*)q + sizeof(struct malloc_tree_chunk)); 5190 } 5191 } 5192 if (start < (void*)next) /* skip if all space is bookkeeping */ 5193 handler(start, next, used, arg); 5194 if (q == top) 5195 break; 5196 q = next; 5197 } 5198 } 5199 } 5200 } 5201 #endif /* MALLOC_INSPECT_ALL */ 5202 5203 /* ------------------ Exported realloc, memalign, etc -------------------- */ 5204 5205 #if !ONLY_MSPACES 5206 5207 void* dlrealloc(void* oldmem, size_t bytes) { 5208 void* mem = 0; 5209 if (oldmem == 0) { 5210 mem = dlmalloc(bytes); 5211 } 5212 else if (bytes >= MAX_REQUEST) { 5213 MALLOC_FAILURE_ACTION; 5214 } 5215 #ifdef REALLOC_ZERO_BYTES_FREES 5216 else if (bytes == 0) { 5217 dlfree(oldmem); 5218 } 5219 #endif /* REALLOC_ZERO_BYTES_FREES */ 5220 else { 5221 size_t nb = request2size(bytes); 5222 mchunkptr oldp = mem2chunk(oldmem); 5223 #if ! FOOTERS 5224 mstate m = gm; 5225 #else /* FOOTERS */ 5226 mstate m = get_mstate_for(oldp); 5227 if (!ok_magic(m)) { 5228 USAGE_ERROR_ACTION(m, oldmem); 5229 return 0; 5230 } 5231 #endif /* FOOTERS */ 5232 if (!PREACTION(m)) { 5233 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1); 5234 POSTACTION(m); 5235 if (newp != 0) { 5236 check_inuse_chunk(m, newp); 5237 mem = chunk2mem(newp); 5238 } 5239 else { 5240 mem = internal_malloc(m, bytes); 5241 if (mem != 0) { 5242 size_t oc = chunksize(oldp) - overhead_for(oldp); 5243 memcpy(mem, oldmem, (oc < bytes)? 
oc : bytes); 5244 internal_free(m, oldmem); 5245 } 5246 } 5247 } 5248 } 5249 return mem; 5250 } 5251 5252 void* dlrealloc_in_place(void* oldmem, size_t bytes) { 5253 void* mem = 0; 5254 if (oldmem != 0) { 5255 if (bytes >= MAX_REQUEST) { 5256 MALLOC_FAILURE_ACTION; 5257 } 5258 else { 5259 size_t nb = request2size(bytes); 5260 mchunkptr oldp = mem2chunk(oldmem); 5261 #if ! FOOTERS 5262 mstate m = gm; 5263 #else /* FOOTERS */ 5264 mstate m = get_mstate_for(oldp); 5265 if (!ok_magic(m)) { 5266 USAGE_ERROR_ACTION(m, oldmem); 5267 return 0; 5268 } 5269 #endif /* FOOTERS */ 5270 if (!PREACTION(m)) { 5271 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0); 5272 POSTACTION(m); 5273 if (newp == oldp) { 5274 check_inuse_chunk(m, newp); 5275 mem = oldmem; 5276 } 5277 } 5278 } 5279 } 5280 return mem; 5281 } 5282 5283 void* dlmemalign(size_t alignment, size_t bytes) { 5284 if (alignment <= MALLOC_ALIGNMENT) { 5285 return dlmalloc(bytes); 5286 } 5287 return internal_memalign(gm, alignment, bytes); 5288 } 5289 5290 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) { 5291 void* mem = 0; 5292 if (alignment == MALLOC_ALIGNMENT) 5293 mem = dlmalloc(bytes); 5294 else { 5295 size_t d = alignment / sizeof(void*); 5296 size_t r = alignment % sizeof(void*); 5297 if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0) 5298 return EINVAL; 5299 else if (bytes <= MAX_REQUEST - alignment) { 5300 if (alignment < MIN_CHUNK_SIZE) 5301 alignment = MIN_CHUNK_SIZE; 5302 mem = internal_memalign(gm, alignment, bytes); 5303 } 5304 } 5305 if (mem == 0) 5306 return ENOMEM; 5307 else { 5308 *pp = mem; 5309 return 0; 5310 } 5311 } 5312 5313 void* dlvalloc(size_t bytes) { 5314 size_t pagesz; 5315 ensure_initialization(); 5316 pagesz = mparams.page_size; 5317 return dlmemalign(pagesz, bytes); 5318 } 5319 5320 void* dlpvalloc(size_t bytes) { 5321 size_t pagesz; 5322 ensure_initialization(); 5323 pagesz = mparams.page_size; 5324 return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE)); 5325 } 5326 5327 void** dlindependent_calloc(size_t n_elements, size_t elem_size, 5328 void* chunks[]) { 5329 size_t sz = elem_size; /* serves as 1-element array */ 5330 return ialloc(gm, n_elements, &sz, 3, chunks); 5331 } 5332 5333 void** dlindependent_comalloc(size_t n_elements, size_t sizes[], 5334 void* chunks[]) { 5335 return ialloc(gm, n_elements, sizes, 0, chunks); 5336 } 5337 5338 size_t dlbulk_free(void* array[], size_t nelem) { 5339 return internal_bulk_free(gm, array, nelem); 5340 } 5341 5342 #if MALLOC_INSPECT_ALL 5343 void dlmalloc_inspect_all(void(*handler)(void *start, 5344 void *end, 5345 size_t used_bytes, 5346 void* callback_arg), 5347 void* arg) { 5348 ensure_initialization(); 5349 if (!PREACTION(gm)) { 5350 internal_inspect_all(gm, handler, arg); 5351 POSTACTION(gm); 5352 } 5353 } 5354 #endif /* MALLOC_INSPECT_ALL */ 5355 5356 int dlmalloc_trim(size_t pad) { 5357 int result = 0; 5358 ensure_initialization(); 5359 if (!PREACTION(gm)) { 5360 result = sys_trim(gm, pad); 5361 POSTACTION(gm); 5362 } 5363 return result; 5364 } 5365 5366 size_t dlmalloc_footprint(void) { 5367 return gm->footprint; 5368 } 5369 5370 size_t dlmalloc_max_footprint(void) { 5371 return gm->max_footprint; 5372 } 5373 5374 size_t dlmalloc_footprint_limit(void) { 5375 size_t maf = gm->footprint_limit; 5376 return maf == 0 ? 
MAX_SIZE_T : maf; 5377 } 5378 5379 size_t dlmalloc_set_footprint_limit(size_t bytes) { 5380 size_t result; /* invert sense of 0 */ 5381 if (bytes == 0) 5382 result = granularity_align(1); /* Use minimal size */ 5383 else if (bytes == MAX_SIZE_T) 5384 result = 0; /* disable */ 5385 else 5386 result = granularity_align(bytes); 5387 return gm->footprint_limit = result; 5388 } 5389 5390 #if !NO_MALLINFO 5391 struct mallinfo dlmallinfo(void) { 5392 return internal_mallinfo(gm); 5393 } 5394 #endif /* NO_MALLINFO */ 5395 5396 #if !NO_MALLOC_STATS 5397 void dlmalloc_stats() { 5398 internal_malloc_stats(gm); 5399 } 5400 #endif /* NO_MALLOC_STATS */ 5401 5402 int dlmallopt(int param_number, int value) { 5403 return change_mparam(param_number, value); 5404 } 5405 5406 /* BEGIN android-changed: added const */ 5407 size_t dlmalloc_usable_size(const void* mem) { 5408 /* END android-changed */ 5409 if (mem != 0) { 5410 mchunkptr p = mem2chunk(mem); 5411 if (is_inuse(p)) 5412 return chunksize(p) - overhead_for(p); 5413 } 5414 return 0; 5415 } 5416 5417 #endif /* !ONLY_MSPACES */ 5418 5419 /* ----------------------------- user mspaces ---------------------------- */ 5420 5421 #if MSPACES 5422 5423 static mstate init_user_mstate(char* tbase, size_t tsize) { 5424 size_t msize = pad_request(sizeof(struct malloc_state)); 5425 mchunkptr mn; 5426 mchunkptr msp = align_as_chunk(tbase); 5427 mstate m = (mstate)(chunk2mem(msp)); 5428 memset(m, 0, msize); 5429 (void)INITIAL_LOCK(&m->mutex); 5430 msp->head = (msize|INUSE_BITS); 5431 m->seg.base = m->least_addr = tbase; 5432 m->seg.size = m->footprint = m->max_footprint = tsize; 5433 m->magic = mparams.magic; 5434 m->release_checks = MAX_RELEASE_CHECK_RATE; 5435 m->mflags = mparams.default_mflags; 5436 m->extp = 0; 5437 m->exts = 0; 5438 disable_contiguous(m); 5439 init_bins(m); 5440 mn = next_chunk(mem2chunk(m)); 5441 init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE); 5442 check_top_chunk(m, m->top); 5443 return m; 5444 } 5445 5446 mspace create_mspace(size_t capacity, int locked) { 5447 mstate m = 0; 5448 size_t msize; 5449 ensure_initialization(); 5450 msize = pad_request(sizeof(struct malloc_state)); 5451 if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) { 5452 size_t rs = ((capacity == 0)?
mparams.granularity : 5453 (capacity + TOP_FOOT_SIZE + msize)); 5454 size_t tsize = granularity_align(rs); 5455 char* tbase = (char*)(CALL_MMAP(tsize)); 5456 if (tbase != CMFAIL) { 5457 m = init_user_mstate(tbase, tsize); 5458 m->seg.sflags = USE_MMAP_BIT; 5459 set_lock(m, locked); 5460 } 5461 } 5462 return (mspace)m; 5463 } 5464 5465 mspace create_mspace_with_base(void* base, size_t capacity, int locked) { 5466 mstate m = 0; 5467 size_t msize; 5468 ensure_initialization(); 5469 msize = pad_request(sizeof(struct malloc_state)); 5470 if (capacity > msize + TOP_FOOT_SIZE && 5471 capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) { 5472 m = init_user_mstate((char*)base, capacity); 5473 m->seg.sflags = EXTERN_BIT; 5474 set_lock(m, locked); 5475 } 5476 return (mspace)m; 5477 } 5478 5479 int mspace_track_large_chunks(mspace msp, int enable) { 5480 int ret = 0; 5481 mstate ms = (mstate)msp; 5482 if (!PREACTION(ms)) { 5483 if (!use_mmap(ms)) { 5484 ret = 1; 5485 } 5486 if (!enable) { 5487 enable_mmap(ms); 5488 } else { 5489 disable_mmap(ms); 5490 } 5491 POSTACTION(ms); 5492 } 5493 return ret; 5494 } 5495 5496 size_t destroy_mspace(mspace msp) { 5497 size_t freed = 0; 5498 mstate ms = (mstate)msp; 5499 if (ok_magic(ms)) { 5500 msegmentptr sp = &ms->seg; 5501 (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */ 5502 while (sp != 0) { 5503 char* base = sp->base; 5504 size_t size = sp->size; 5505 flag_t flag = sp->sflags; 5506 (void)base; /* placate people compiling -Wunused-variable */ 5507 sp = sp->next; 5508 if ((flag & USE_MMAP_BIT) && !(flag & EXTERN_BIT) && 5509 CALL_MUNMAP(base, size) == 0) 5510 freed += size; 5511 } 5512 } 5513 else { 5514 USAGE_ERROR_ACTION(ms,ms); 5515 } 5516 return freed; 5517 } 5518 5519 /* 5520 mspace versions of routines are near-clones of the global 5521 versions. This is not so nice but better than the alternatives. 5522 */ 5523 5524 void* mspace_malloc(mspace msp, size_t bytes) { 5525 mstate ms = (mstate)msp; 5526 if (!ok_magic(ms)) { 5527 USAGE_ERROR_ACTION(ms,ms); 5528 return 0; 5529 } 5530 if (!PREACTION(ms)) { 5531 void* mem; 5532 size_t nb; 5533 if (bytes <= MAX_SMALL_REQUEST) { 5534 bindex_t idx; 5535 binmap_t smallbits; 5536 nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes); 5537 idx = small_index(nb); 5538 smallbits = ms->smallmap >> idx; 5539 5540 if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. 
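Same two-bit test as in dlmalloc above: the next bin up also gives a remainderless fit, since adjacent smallbins differ by less than MIN_CHUNK_SIZE.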
*/ 5541 mchunkptr b, p; 5542 idx += ~smallbits & 1; /* Uses next bin if idx empty */ 5543 b = smallbin_at(ms, idx); 5544 p = b->fd; 5545 assert(chunksize(p) == small_index2size(idx)); 5546 unlink_first_small_chunk(ms, b, p, idx); 5547 set_inuse_and_pinuse(ms, p, small_index2size(idx)); 5548 mem = chunk2mem(p); 5549 check_malloced_chunk(ms, mem, nb); 5550 goto postaction; 5551 } 5552 5553 else if (nb > ms->dvsize) { 5554 if (smallbits != 0) { /* Use chunk in next nonempty smallbin */ 5555 mchunkptr b, p, r; 5556 size_t rsize; 5557 bindex_t i; 5558 binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx)); 5559 binmap_t leastbit = least_bit(leftbits); 5560 compute_bit2idx(leastbit, i); 5561 b = smallbin_at(ms, i); 5562 p = b->fd; 5563 assert(chunksize(p) == small_index2size(i)); 5564 unlink_first_small_chunk(ms, b, p, i); 5565 rsize = small_index2size(i) - nb; 5566 /* Fit here cannot be remainderless if 4byte sizes */ 5567 if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) 5568 set_inuse_and_pinuse(ms, p, small_index2size(i)); 5569 else { 5570 set_size_and_pinuse_of_inuse_chunk(ms, p, nb); 5571 r = chunk_plus_offset(p, nb); 5572 set_size_and_pinuse_of_free_chunk(r, rsize); 5573 replace_dv(ms, r, rsize); 5574 } 5575 mem = chunk2mem(p); 5576 check_malloced_chunk(ms, mem, nb); 5577 goto postaction; 5578 } 5579 5580 else if (ms->treemap != 0 && (mem = tmalloc_small(ms, nb)) != 0) { 5581 check_malloced_chunk(ms, mem, nb); 5582 goto postaction; 5583 } 5584 } 5585 } 5586 else if (bytes >= MAX_REQUEST) 5587 nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */ 5588 else { 5589 nb = pad_request(bytes); 5590 if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) { 5591 check_malloced_chunk(ms, mem, nb); 5592 goto postaction; 5593 } 5594 } 5595 5596 if (nb <= ms->dvsize) { 5597 size_t rsize = ms->dvsize - nb; 5598 mchunkptr p = ms->dv; 5599 if (rsize >= MIN_CHUNK_SIZE) { /* split dv */ 5600 mchunkptr r = ms->dv = chunk_plus_offset(p, nb); 5601 ms->dvsize = rsize; 5602 set_size_and_pinuse_of_free_chunk(r, rsize); 5603 set_size_and_pinuse_of_inuse_chunk(ms, p, nb); 5604 } 5605 else { /* exhaust dv */ 5606 size_t dvs = ms->dvsize; 5607 ms->dvsize = 0; 5608 ms->dv = 0; 5609 set_inuse_and_pinuse(ms, p, dvs); 5610 } 5611 mem = chunk2mem(p); 5612 check_malloced_chunk(ms, mem, nb); 5613 goto postaction; 5614 } 5615 5616 else if (nb < ms->topsize) { /* Split top */ 5617 size_t rsize = ms->topsize -= nb; 5618 mchunkptr p = ms->top; 5619 mchunkptr r = ms->top = chunk_plus_offset(p, nb); 5620 r->head = rsize | PINUSE_BIT; 5621 set_size_and_pinuse_of_inuse_chunk(ms, p, nb); 5622 mem = chunk2mem(p); 5623 check_top_chunk(ms, ms->top); 5624 check_malloced_chunk(ms, mem, nb); 5625 goto postaction; 5626 } 5627 5628 mem = sys_alloc(ms, nb); 5629 5630 postaction: 5631 POSTACTION(ms); 5632 return mem; 5633 } 5634 5635 return 0; 5636 } 5637 5638 void mspace_free(mspace msp, void* mem) { 5639 if (mem != 0) { 5640 mchunkptr p = mem2chunk(mem); 5641 #if FOOTERS 5642 mstate fm = get_mstate_for(p); 5643 (void)msp; /* placate people compiling -Wunused */ 5644 #else /* FOOTERS */ 5645 mstate fm = (mstate)msp; 5646 #endif /* FOOTERS */ 5647 if (!ok_magic(fm)) { 5648 USAGE_ERROR_ACTION(fm, p); 5649 return; 5650 } 5651 if (!PREACTION(fm)) { 5652 check_inuse_chunk(fm, p); 5653 if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) { 5654 size_t psize = chunksize(p); 5655 mchunkptr next = chunk_plus_offset(p, psize); 5656 if (!pinuse(p)) { 5657 size_t prevsize = p->prev_foot; 5658 if (is_mmapped(p)) { 5659 psize += 
prevsize + MMAP_FOOT_PAD; 5660 if (CALL_MUNMAP((char*)p - prevsize, psize) == 0) 5661 fm->footprint -= psize; 5662 goto postaction; 5663 } 5664 else { 5665 mchunkptr prev = chunk_minus_offset(p, prevsize); 5666 psize += prevsize; 5667 p = prev; 5668 if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */ 5669 if (p != fm->dv) { 5670 unlink_chunk(fm, p, prevsize); 5671 } 5672 else if ((next->head & INUSE_BITS) == INUSE_BITS) { 5673 fm->dvsize = psize; 5674 set_free_with_pinuse(p, psize, next); 5675 goto postaction; 5676 } 5677 } 5678 else 5679 goto erroraction; 5680 } 5681 } 5682 5683 if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) { 5684 if (!cinuse(next)) { /* consolidate forward */ 5685 if (next == fm->top) { 5686 size_t tsize = fm->topsize += psize; 5687 fm->top = p; 5688 p->head = tsize | PINUSE_BIT; 5689 if (p == fm->dv) { 5690 fm->dv = 0; 5691 fm->dvsize = 0; 5692 } 5693 if (should_trim(fm, tsize)) 5694 sys_trim(fm, 0); 5695 goto postaction; 5696 } 5697 else if (next == fm->dv) { 5698 size_t dsize = fm->dvsize += psize; 5699 fm->dv = p; 5700 set_size_and_pinuse_of_free_chunk(p, dsize); 5701 goto postaction; 5702 } 5703 else { 5704 size_t nsize = chunksize(next); 5705 psize += nsize; 5706 unlink_chunk(fm, next, nsize); 5707 set_size_and_pinuse_of_free_chunk(p, psize); 5708 if (p == fm->dv) { 5709 fm->dvsize = psize; 5710 goto postaction; 5711 } 5712 } 5713 } 5714 else 5715 set_free_with_pinuse(p, psize, next); 5716 5717 if (is_small(psize)) { 5718 insert_small_chunk(fm, p, psize); 5719 check_free_chunk(fm, p); 5720 } 5721 else { 5722 tchunkptr tp = (tchunkptr)p; 5723 insert_large_chunk(fm, tp, psize); 5724 check_free_chunk(fm, p); 5725 if (--fm->release_checks == 0) 5726 release_unused_segments(fm); 5727 } 5728 goto postaction; 5729 } 5730 } 5731 erroraction: 5732 USAGE_ERROR_ACTION(fm, p); 5733 postaction: 5734 POSTACTION(fm); 5735 } 5736 } 5737 } 5738 5739 void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) { 5740 void* mem; 5741 size_t req = 0; 5742 mstate ms = (mstate)msp; 5743 if (!ok_magic(ms)) { 5744 USAGE_ERROR_ACTION(ms,ms); 5745 return 0; 5746 } 5747 if (n_elements != 0) { 5748 req = n_elements * elem_size; 5749 if (((n_elements | elem_size) & ~(size_t)0xffff) && 5750 (req / n_elements != elem_size)) 5751 req = MAX_SIZE_T; /* force downstream failure on overflow */ 5752 } 5753 mem = internal_malloc(ms, req); 5754 if (mem != 0 && calloc_must_clear(mem2chunk(mem))) 5755 memset(mem, 0, req); 5756 return mem; 5757 } 5758 5759 void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) { 5760 void* mem = 0; 5761 if (oldmem == 0) { 5762 mem = mspace_malloc(msp, bytes); 5763 } 5764 else if (bytes >= MAX_REQUEST) { 5765 MALLOC_FAILURE_ACTION; 5766 } 5767 #ifdef REALLOC_ZERO_BYTES_FREES 5768 else if (bytes == 0) { 5769 mspace_free(msp, oldmem); 5770 } 5771 #endif /* REALLOC_ZERO_BYTES_FREES */ 5772 else { 5773 size_t nb = request2size(bytes); 5774 mchunkptr oldp = mem2chunk(oldmem); 5775 #if ! FOOTERS 5776 mstate m = (mstate)msp; 5777 #else /* FOOTERS */ 5778 mstate m = get_mstate_for(oldp); 5779 if (!ok_magic(m)) { 5780 USAGE_ERROR_ACTION(m, oldmem); 5781 return 0; 5782 } 5783 #endif /* FOOTERS */ 5784 if (!PREACTION(m)) { 5785 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1); 5786 POSTACTION(m); 5787 if (newp != 0) { 5788 check_inuse_chunk(m, newp); 5789 mem = chunk2mem(newp); 5790 } 5791 else { 5792 mem = mspace_malloc(m, bytes); 5793 if (mem != 0) { 5794 size_t oc = chunksize(oldp) - overhead_for(oldp); 5795 memcpy(mem, oldmem, (oc < bytes)? 
oc : bytes); 5796 mspace_free(m, oldmem); 5797 } 5798 } 5799 } 5800 } 5801 return mem; 5802 } 5803 5804 void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) { 5805 void* mem = 0; 5806 if (oldmem != 0) { 5807 if (bytes >= MAX_REQUEST) { 5808 MALLOC_FAILURE_ACTION; 5809 } 5810 else { 5811 size_t nb = request2size(bytes); 5812 mchunkptr oldp = mem2chunk(oldmem); 5813 #if ! FOOTERS 5814 mstate m = (mstate)msp; 5815 #else /* FOOTERS */ 5816 mstate m = get_mstate_for(oldp); 5817 (void)msp; /* placate people compiling -Wunused */ 5818 if (!ok_magic(m)) { 5819 USAGE_ERROR_ACTION(m, oldmem); 5820 return 0; 5821 } 5822 #endif /* FOOTERS */ 5823 if (!PREACTION(m)) { 5824 mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0); 5825 POSTACTION(m); 5826 if (newp == oldp) { 5827 check_inuse_chunk(m, newp); 5828 mem = oldmem; 5829 } 5830 } 5831 } 5832 } 5833 return mem; 5834 } 5835 5836 void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) { 5837 mstate ms = (mstate)msp; 5838 if (!ok_magic(ms)) { 5839 USAGE_ERROR_ACTION(ms,ms); 5840 return 0; 5841 } 5842 if (alignment <= MALLOC_ALIGNMENT) 5843 return mspace_malloc(msp, bytes); 5844 return internal_memalign(ms, alignment, bytes); 5845 } 5846 5847 void** mspace_independent_calloc(mspace msp, size_t n_elements, 5848 size_t elem_size, void* chunks[]) { 5849 size_t sz = elem_size; /* serves as 1-element array */ 5850 mstate ms = (mstate)msp; 5851 if (!ok_magic(ms)) { 5852 USAGE_ERROR_ACTION(ms,ms); 5853 return 0; 5854 } 5855 return ialloc(ms, n_elements, &sz, 3, chunks); 5856 } 5857 5858 void** mspace_independent_comalloc(mspace msp, size_t n_elements, 5859 size_t sizes[], void* chunks[]) { 5860 mstate ms = (mstate)msp; 5861 if (!ok_magic(ms)) { 5862 USAGE_ERROR_ACTION(ms,ms); 5863 return 0; 5864 } 5865 return ialloc(ms, n_elements, sizes, 0, chunks); 5866 } 5867 5868 size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) { 5869 return internal_bulk_free((mstate)msp, array, nelem); 5870 } 5871 5872 #if MALLOC_INSPECT_ALL 5873 void mspace_inspect_all(mspace msp, 5874 void(*handler)(void *start, 5875 void *end, 5876 size_t used_bytes, 5877 void* callback_arg), 5878 void* arg) { 5879 mstate ms = (mstate)msp; 5880 if (ok_magic(ms)) { 5881 if (!PREACTION(ms)) { 5882 internal_inspect_all(ms, handler, arg); 5883 POSTACTION(ms); 5884 } 5885 } 5886 else { 5887 USAGE_ERROR_ACTION(ms,ms); 5888 } 5889 } 5890 #endif /* MALLOC_INSPECT_ALL */ 5891 5892 int mspace_trim(mspace msp, size_t pad) { 5893 int result = 0; 5894 mstate ms = (mstate)msp; 5895 if (ok_magic(ms)) { 5896 if (!PREACTION(ms)) { 5897 result = sys_trim(ms, pad); 5898 POSTACTION(ms); 5899 } 5900 } 5901 else { 5902 USAGE_ERROR_ACTION(ms,ms); 5903 } 5904 return result; 5905 } 5906 5907 #if !NO_MALLOC_STATS 5908 void mspace_malloc_stats(mspace msp) { 5909 mstate ms = (mstate)msp; 5910 if (ok_magic(ms)) { 5911 internal_malloc_stats(ms); 5912 } 5913 else { 5914 USAGE_ERROR_ACTION(ms,ms); 5915 } 5916 } 5917 #endif /* NO_MALLOC_STATS */ 5918 5919 size_t mspace_footprint(mspace msp) { 5920 size_t result = 0; 5921 mstate ms = (mstate)msp; 5922 if (ok_magic(ms)) { 5923 result = ms->footprint; 5924 } 5925 else { 5926 USAGE_ERROR_ACTION(ms,ms); 5927 } 5928 return result; 5929 } 5930 5931 size_t mspace_max_footprint(mspace msp) { 5932 size_t result = 0; 5933 mstate ms = (mstate)msp; 5934 if (ok_magic(ms)) { 5935 result = ms->max_footprint; 5936 } 5937 else { 5938 USAGE_ERROR_ACTION(ms,ms); 5939 } 5940 return result; 5941 } 5942 5943 size_t mspace_footprint_limit(mspace msp) { 5944 size_t 
result = 0; 5945 mstate ms = (mstate)msp; 5946 if (ok_magic(ms)) { 5947 size_t maf = ms->footprint_limit; 5948 result = (maf == 0) ? MAX_SIZE_T : maf; 5949 } 5950 else { 5951 USAGE_ERROR_ACTION(ms,ms); 5952 } 5953 return result; 5954 } 5955 5956 size_t mspace_set_footprint_limit(mspace msp, size_t bytes) { 5957 size_t result = 0; 5958 mstate ms = (mstate)msp; 5959 if (ok_magic(ms)) { 5960 if (bytes == 0) 5961 result = granularity_align(1); /* Use minimal size */ 5962 else if (bytes == MAX_SIZE_T) 5963 result = 0; /* disable */ 5964 else 5965 result = granularity_align(bytes); 5966 ms->footprint_limit = result; 5967 } 5968 else { 5969 USAGE_ERROR_ACTION(ms,ms); 5970 } 5971 return result; 5972 } 5973 5974 #if !NO_MALLINFO 5975 struct mallinfo mspace_mallinfo(mspace msp) { 5976 mstate ms = (mstate)msp; 5977 if (!ok_magic(ms)) { 5978 USAGE_ERROR_ACTION(ms,ms); 5979 } 5980 return internal_mallinfo(ms); 5981 } 5982 #endif /* NO_MALLINFO */ 5983 5984 size_t mspace_usable_size(const void* mem) { 5985 if (mem != 0) { 5986 mchunkptr p = mem2chunk(mem); 5987 if (is_inuse(p)) 5988 return chunksize(p) - overhead_for(p); 5989 } 5990 return 0; 5991 } 5992 5993 int mspace_mallopt(int param_number, int value) { 5994 return change_mparam(param_number, value); 5995 } 5996 5997 #endif /* MSPACES */ 5998 5999 6000 /* -------------------- Alternative MORECORE functions ------------------- */ 6001 6002 /* 6003 Guidelines for creating a custom version of MORECORE: 6004 6005 * For best performance, MORECORE should allocate in multiples of pagesize. 6006 * MORECORE may allocate more memory than requested. (Or even less, 6007 but this will usually result in a malloc failure.) 6008 * MORECORE must not allocate memory when given argument zero, but 6009 instead return one past the end address of memory from previous 6010 nonzero call. 6011 * For best performance, consecutive calls to MORECORE with positive 6012 arguments should return increasing addresses, indicating that 6013 space has been contiguously extended. 6014 * Even though consecutive calls to MORECORE need not return contiguous 6015 addresses, it must be OK for malloc'ed chunks to span multiple 6016 regions in those cases where they do happen to be contiguous. 6017 * MORECORE need not handle negative arguments -- it may instead 6018 just return MFAIL when given negative arguments. 6019 Negative arguments are always multiples of pagesize. MORECORE 6020 must not misinterpret negative args as large positive unsigned 6021 args. You can suppress all such calls from even occurring by defining 6022 MORECORE_CANNOT_TRIM. 6023 6024 As an example alternative MORECORE, here is a custom allocator 6025 kindly contributed for pre-OSX macOS. It uses virtually but not 6026 necessarily physically contiguous non-paged memory (locked in, 6027 present and won't get swapped out). You can use it by uncommenting 6028 this section, adding some #includes, and setting up the appropriate 6029 defines above: 6030 6031 #define MORECORE osMoreCore 6032 6033 There is also a shutdown routine that should somehow be called for 6034 cleanup upon program exit.
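Before that code, a second, deliberately minimal sketch (an editorial
  illustration, not part of the contributed example). It serves requests
  from a fixed static arena and never trims; the names staticMoreCore and
  static_arena are invented here, while MFAIL is the failure value this
  file expects from MORECORE:

  static char static_arena[1024 * 1024];  // hypothetical backing store
  static size_t arena_used = 0;

  void* staticMoreCore(int size)
  {
    if (size > 0)
    {
      if ((size_t)size > sizeof(static_arena) - arena_used)
        return (void *) MFAIL;            // arena exhausted
      char* ptr = static_arena + arena_used;
      arena_used += (size_t)size;
      return ptr;                         // start of newly extended space
    }
    else if (size < 0)
    {
      return (void *) MFAIL;              // shrink requests unsupported
    }
    else
    {
      return static_arena + arena_used;   // one past end of previous call
    }
  }

  The contributed pre-OSX code: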
6035 6036 #define MAX_POOL_ENTRIES 100 6037 #define MINIMUM_MORECORE_SIZE (64 * 1024U) 6038 static int next_os_pool; 6039 void *our_os_pools[MAX_POOL_ENTRIES]; 6040 6041 void *osMoreCore(int size) 6042 { 6043 void *ptr = 0; 6044 static void *sbrk_top = 0; 6045 6046 if (size > 0) 6047 { 6048 if (size < MINIMUM_MORECORE_SIZE) 6049 size = MINIMUM_MORECORE_SIZE; 6050 if (CurrentExecutionLevel() == kTaskLevel) 6051 ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0); 6052 if (ptr == 0) 6053 { 6054 return (void *) MFAIL; 6055 } 6056 // save ptrs so they can be freed during cleanup 6057 our_os_pools[next_os_pool] = ptr; 6058 next_os_pool++; 6059 ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK); 6060 sbrk_top = (char *) ptr + size; 6061 return ptr; 6062 } 6063 else if (size < 0) 6064 { 6065 // we don't currently support shrink behavior 6066 return (void *) MFAIL; 6067 } 6068 else 6069 { 6070 return sbrk_top; 6071 } 6072 } 6073 6074 // cleanup any allocated memory pools 6075 // called as last thing before shutting down driver 6076 6077 void osCleanupMem(void) 6078 { 6079 void **ptr; 6080 6081 for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++) 6082 if (*ptr) 6083 { 6084 PoolDeallocate(*ptr); 6085 *ptr = 0; 6086 } 6087 } 6088 6089 */ 6090 6091 6092 /* ----------------------------------------------------------------------- 6093 History: 6094 v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea 6095 * fix bad comparison in dlposix_memalign 6096 * don't reuse adjusted asize in sys_alloc 6097 * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion 6098 * reduce compiler warnings -- thanks to all who reported/suggested these 6099 6100 v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee) 6101 * Always perform unlink checks unless INSECURE 6102 * Add posix_memalign. 6103 * Improve realloc to expand in more cases; expose realloc_in_place. 6104 Thanks to Peter Buhr for the suggestion. 6105 * Add footprint_limit, inspect_all, bulk_free. Thanks 6106 to Barry Hayes and others for the suggestions. 6107 * Internal refactorings to avoid calls while holding locks 6108 * Use non-reentrant locks by default. Thanks to Roland McGrath 6109 for the suggestion. 6110 * Small fixes to mspace_destroy, reset_on_error. 6111 * Various configuration extensions/changes. Thanks 6112 to all who contributed these. 6113 6114 V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu) 6115 * Update Creative Commons URL 6116 6117 V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee) 6118 * Use zeros instead of prev foot for is_mmapped 6119 * Add mspace_track_large_chunks; thanks to Jean Brouwers 6120 * Fix set_inuse in internal_realloc; thanks to Jean Brouwers 6121 * Fix insufficient sys_alloc padding when using 16byte alignment 6122 * Fix bad error check in mspace_footprint 6123 * Adaptations for ptmalloc; thanks to Wolfram Gloger. 6124 * Reentrant spin locks; thanks to Earl Chew and others 6125 * Win32 improvements; thanks to Niall Douglas and Earl Chew 6126 * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options 6127 * Extension hook in malloc_state 6128 * Various small adjustments to reduce warnings on some compilers 6129 * Various configuration extensions/changes for more platforms. Thanks 6130 to all who contributed these. 
6131 6132 V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee) 6133 * Add max_footprint functions 6134 * Ensure all appropriate literals are size_t 6135 * Fix conditional compilation problem for some #define settings 6136 * Avoid concatenating segments with the one provided 6137 in create_mspace_with_base 6138 * Rename some variables to avoid compiler shadowing warnings 6139 * Use explicit lock initialization. 6140 * Better handling of sbrk interference. 6141 * Simplify and fix segment insertion, trimming and mspace_destroy 6142 * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x 6143 * Thanks especially to Dennis Flanagan for help on these. 6144 6145 V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee) 6146 * Fix memalign brace error. 6147 6148 V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee) 6149 * Fix improper #endif nesting in C++ 6150 * Add explicit casts needed for C++ 6151 6152 V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee) 6153 * Use trees for large bins 6154 * Support mspaces 6155 * Use segments to unify sbrk-based and mmap-based system allocation, 6156 removing need for emulation on most platforms without sbrk. 6157 * Default safety checks 6158 * Optional footer checks. Thanks to William Robertson for the idea. 6159 * Internal code refactoring 6160 * Incorporate suggestions and platform-specific changes. 6161 Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas, 6162 Aaron Bachmann, Emery Berger, and others. 6163 * Speed up non-fastbin processing enough to remove fastbins. 6164 * Remove useless cfree() to avoid conflicts with other apps. 6165 * Remove internal memcpy, memset. Compilers handle builtins better. 6166 * Remove some options that no one ever used and rename others. 6167 6168 V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee) 6169 * Fix malloc_state bitmap array misdeclaration 6170 6171 V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee) 6172 * Allow tuning of FIRST_SORTED_BIN_SIZE 6173 * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte. 6174 * Better detection and support for non-contiguousness of MORECORE. 6175 Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger 6176 * Bypass most of malloc if no frees. Thanks to Emery Berger. 6177 * Fix freeing of old top non-contiguous chunk in sysmalloc. 6178 * Raised default trim and map thresholds to 256K. 6179 * Fix mmap-related #defines. Thanks to Lubos Lunak. 6180 * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield. 6181 * Branch-free bin calculation 6182 * Default trim and mmap thresholds now 256K. 6183 6184 V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee) 6185 * Introduce independent_comalloc and independent_calloc. 6186 Thanks to Michael Pachos for motivation and help. 6187 * Make optional .h file available 6188 * Allow > 2GB requests on 32bit systems. 6189 * new WIN32 sbrk, mmap, munmap, lock code from <Walter (at) GeNeSys-e.de>. 6190 Thanks also to Andreas Mueller <a.mueller at paradatec.de>, 6191 and Anonymous. 6192 * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for 6193 helping test this.) 6194 * memalign: check alignment arg 6195 * realloc: don't try to shift chunks backwards, since this 6196 leads to more fragmentation in some programs and doesn't 6197 seem to help in any others.
6198 * Collect all cases in malloc requiring system memory into sysmalloc 6199 * Use mmap as backup to sbrk 6200 * Place all internal state in malloc_state 6201 * Introduce fastbins (although similar to 2.5.1) 6202 * Many minor tunings and cosmetic improvements 6203 * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK 6204 * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS 6205 Thanks to Tony E. Bennett <tbennett (at) nvidia.com> and others. 6206 * Include errno.h to support default failure action. 6207 6208 V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee) 6209 * return null for negative arguments 6210 * Added Several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com> 6211 * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h' 6212 (e.g. WIN32 platforms) 6213 * Cleanup header file inclusion for WIN32 platforms 6214 * Cleanup code to avoid Microsoft Visual C++ compiler complaints 6215 * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing 6216 memory allocation routines 6217 * Set 'malloc_getpagesize' for WIN32 platforms (needs more work) 6218 * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to 6219 usage of 'assert' in non-WIN32 code 6220 * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to 6221 avoid infinite loop 6222 * Always call 'fREe()' rather than 'free()' 6223 6224 V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee) 6225 * Fixed ordering problem with boundary-stamping 6226 6227 V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee) 6228 * Added pvalloc, as recommended by H.J. Liu 6229 * Added 64bit pointer support mainly from Wolfram Gloger 6230 * Added anonymously donated WIN32 sbrk emulation 6231 * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen 6232 * malloc_extend_top: fix mask error that caused wastage after 6233 foreign sbrks 6234 * Add linux mremap support code from HJ Liu 6235 6236 V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee) 6237 * Integrated most documentation with the code. 6238 * Add support for mmap, with help from 6239 Wolfram Gloger (Gloger (at) lrz.uni-muenchen.de). 6240 * Use last_remainder in more cases. 6241 * Pack bins using idea from colin (at) nyx10.cs.du.edu 6242 * Use ordered bins instead of best-fit threshold 6243 * Eliminate block-local decls to simplify tracing and debugging. 6244 * Support another case of realloc via move into top 6245 * Fix error occurring when initial sbrk_base not word-aligned. 6246 * Rely on page size for units instead of SBRK_UNIT to 6247 avoid surprises about sbrk alignment conventions. 6248 * Add `pad' argument to malloc_trim and top_pad mallopt parameter. 6249 * More precautions for cases where other routines call sbrk, 6250 courtesy of Wolfram Gloger (Gloger (at) lrz.uni-muenchen.de). 6251 * Added macros etc., allowing use in linux libc from 6252 H.J. Lu (hjl (at) gnu.ai.mit.edu) 6253 * Inverted this history list 6254 * Add mallinfo, mallopt. Thanks to Raymond Nijssen 6255 (raymond (at) es.ele.tue.nl) for the suggestion. 6256 6257 V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee) 6258 * Re-tuned and fixed to behave more nicely with V2.6.0 changes. 6259 * Removed all preallocation code since under current scheme 6260 the work required to undo bad preallocations exceeds 6261 the work saved in good cases for most test programs. 6262 * No longer use return list or unconsolidated bins since 6263 no scheme using them consistently outperforms those that don't 6264 given above changes.
6265 * Use best fit for very large chunks to prevent some worst-cases. 6266 * Added some support for debugging 6267 6268 V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee) 6269 * Removed footers when chunks are in use. Thanks to 6270 Paul Wilson (wilson (at) cs.texas.edu) for the suggestion. 6271 6272 V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee) 6273 * Added malloc_trim, with help from Wolfram Gloger 6274 (wmglo (at) Dent.MED.Uni-Muenchen.DE). 6275 6276 V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g) 6277 6278 V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g) 6279 * realloc: try to expand in both directions 6280 * malloc: swap order of clean-bin strategy; 6281 * realloc: only conditionally expand backwards 6282 * Try not to scavenge used bins 6283 * Use bin counts as a guide to preallocation 6284 * Occasionally bin return list chunks in first scan 6285 * Add a few optimizations from colin (at) nyx10.cs.du.edu 6286 6287 V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g) 6288 * faster bin computation & slightly different binning 6289 * merged all consolidations to one part of malloc proper 6290 (eliminating old malloc_find_space & malloc_clean_bin) 6291 * Scan 2 returns chunks (not just 1) 6292 * Propagate failure in realloc if malloc returns 0 6293 * Add stuff to allow compilation on non-ANSI compilers 6294 from kpv (at) research.att.com 6295 6296 V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu) 6297 * removed potential for odd address access in prev_chunk 6298 * removed dependency on getpagesize.h 6299 * misc cosmetics and a bit more internal documentation 6300 * anticosmetics: mangled names in macros to evade debugger strangeness 6301 * tested on sparc, hp-700, dec-mips, rs6000 6302 with gcc & native cc (hp, dec only) allowing 6303 Detlefs & Zorn comparison study (in SIGPLAN Notices.) 6304 6305 Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu) 6306 * Based loosely on libg++-1.2X malloc. (It retains some of the overall 6307 structure of old version, but most details differ.) 6308 6309 */ 6310