Lines matching full:mmapped (full-text search over dlmalloc's malloc.c; the leading number on each line is its position in that file)
310 set will attempt to check every non-mmapped allocated and free chunk
462 the benefits that: Mmapped space can always be individually released
662 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
666 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
832 arena: current total non-mmapped bytes allocated from system
835 hblks: current number of mmapped regions
836 hblkhd: total bytes held in mmapped regions
840 uordblks: current total allocated space (normal or mmapped)
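The four fields above can be read back at run time. A minimal sketch, assuming dlmalloc is compiled without NO_MALLINFO and without USE_DL_PREFIX, with its bundled header on the include path; the 1 MB request is assumed to exceed the default mmap threshold of 256 KB, so it shows up under hblks/hblkhd rather than arena:

    #include <stdio.h>
    #include <stdlib.h>
    #include "malloc.h"  /* dlmalloc's bundled header, declaring struct mallinfo */

    int main(void) {
      void *small = malloc(100);      /* served from normal (arena) space */
      void *big   = malloc(1 << 20);  /* large enough to be mmapped directly */

      struct mallinfo mi = mallinfo();
      printf("arena    (non-mmapped bytes from system): %lu\n", (unsigned long)mi.arena);
      printf("hblks    (number of mmapped regions):     %lu\n", (unsigned long)mi.hblks);
      printf("hblkhd   (bytes held in mmapped regions): %lu\n", (unsigned long)mi.hblkhd);
      printf("uordblks (allocated, normal or mmapped):  %lu\n", (unsigned long)mi.uordblks);

      free(big);
      free(small);
      return 0;
    }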
1636 also used to hold the offset this chunk has within its mmapped
1637 region, which is needed to preserve alignment. Each mmapped
1667 /* MMapped chunks need a second word of overhead ... */
1702 mmapped region to the base of the chunk.
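Lines 1636-1702 all describe the same trick: a directly mmapped chunk records, in its prev_foot field, how far its header sits from the start of the mmap region, so the region base can be recomputed later. A simplified, hypothetical sketch; the chunk struct, ALIGN_MASK, TWO_SIZE_T, and mmap_chunk below are illustrative names, not dlmalloc's, and dlmalloc additionally reserves its MMAP_FOOT_PAD trailer at the end of the region:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define ALIGN_MASK 7u                    /* assuming 8-byte chunk alignment */
    #define TWO_SIZE_T (2 * sizeof(size_t))  /* the two chunk header words */

    typedef struct {
      size_t prev_foot;  /* for mmapped chunks: offset within the mmap region */
      size_t head;       /* chunk size; inuse bits left clear marks "mmapped" */
    } chunk;

    static chunk *mmap_chunk(size_t total) {  /* total: bytes to mmap */
      char *base = mmap(0, total, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (base == MAP_FAILED)
        return NULL;
      /* place the header so the payload (after two words) is aligned */
      uintptr_t mem = (uintptr_t)(base + TWO_SIZE_T);
      size_t offset = (ALIGN_MASK + 1 - (mem & ALIGN_MASK)) & ALIGN_MASK;
      chunk *p = (chunk *)(base + offset);
      p->prev_foot = offset;                  /* distance back to region base */
      p->head = total - offset - TWO_SIZE_T;  /* space usable within the chunk */
      return p;
    }

Since mmap regions are page-aligned, offset is often zero here; in dlmalloc it becomes nonzero when a MALLOC_ALIGNMENT larger than two size_t words is configured.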
1917 other surrounding mmapped segments and trimmed/de-allocated
2563 /* Check properties of any chunk, whether free, inuse, mmapped etc */
2583 /* Check properties of (inuse) mmapped chunks */
2602 /* If not pinuse and not mmapped, previous chunk has OK offset */
2640 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
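As a worked form of the line-2640 invariant, paraphrased into a hypothetical self-contained helper (check_malloced_size and its parameters are illustrative, not dlmalloc's debug code):

    #include <assert.h>
    #include <stddef.h>

    /* a chunk that was not mmapped may exceed the padded request only by
       less than one minimum chunk; a bigger surplus would have been split
       off as a separate free chunk instead */
    static void check_malloced_size(size_t chunk_size, size_t padded_request,
                                    int is_mmapped, size_t min_chunk_size) {
      assert(is_mmapped || chunk_size < padded_request + min_chunk_size);
    }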
3155 Directly mmapped chunks are set up with an offset to the start of
3156 the mmapped region stored in the prev_foot field of the chunk. This
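The reverse direction, used when such a chunk is freed, continuing the hypothetical chunk layout sketched earlier (dlmalloc itself also folds its MMAP_FOOT_PAD trailer into the unmapped length):

    static void free_mmap_chunk(chunk *p) {
      size_t offset = p->prev_foot;        /* how far the header sits from base */
      char  *base   = (char *)p - offset;  /* recover the mmap region start */
      size_t length = p->head + offset + TWO_SIZE_T;  /* the original total */
      munmap(base, length);                /* release the whole region at once */
    }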
3316 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3341 m->seg.sflags = mmapped;
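The mmapped flag passed into add_segment at line 3316 survives in the per-segment record; dlmalloc 2.8.x keeps segment state in roughly this shape, where flag_t is its own typedef for unsigned int (field comments paraphrased):

    struct malloc_segment {
      char*  base;                  /* base address of the segment */
      size_t size;                  /* allocated size */
      struct malloc_segment* next;  /* next segment in the list */
      flag_t sflags;                /* mmap (and extern) flags */
    };

Recording the origin per segment is what later lets the allocator give mmapped segments back to the system individually (lines 3562 and 3654, and the benefit noted at line 462), whereas sbrk-obtained space can only be trimmed from the top.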
3390 or main space is mmapped or a previous contiguous call failed)
3562 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3654 /* Unmap any unused mmapped segments */
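A simplified, self-contained sketch of that sweep, assuming each record carries the sflags-style mmapped mark and some emptiness test; seg, holds_used_chunks, and release_unused_segments here are illustrative, and dlmalloc's real code must also unlink the segment's single free chunk from its tree bin before unmapping:

    #include <stddef.h>
    #include <sys/mman.h>

    typedef struct seg {
      char *base;             /* segment base address */
      size_t size;            /* segment size */
      struct seg *next;       /* next segment in the list */
      int mmapped;            /* set when the segment came from mmap */
      int holds_used_chunks;  /* stand-in for dlmalloc's real emptiness check */
    } seg;

    /* unmap and unlink every mmapped segment with no used chunks,
       returning the number of bytes given back to the system */
    static size_t release_unused_segments(seg **head) {
      size_t released = 0;
      seg **link = head;
      while (*link != NULL) {
        seg *sp = *link;
        seg *nxt = sp->next;  /* read before the record's memory can vanish */
        size_t len = sp->size;
        if (sp->mmapped && !sp->holds_used_chunks &&
            munmap(sp->base, len) == 0) {
          *link = nxt;        /* unlink the now-dead segment */
          released += len;
        } else {
          link = &sp->next;
        }
      }
      return released;
    }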
3898 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
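What "just adjust offset" amounts to, continuing the same hypothetical chunk layout: the header is slid forward to the aligned position and prev_foot grows by the same amount, so the recorded distance to the region base stays correct and the eventual munmap still covers the whole region:

    static chunk *align_mmap_chunk(chunk *p, size_t alignment) { /* power of two */
      uintptr_t mem  = (uintptr_t)(p + 1);                      /* current payload */
      uintptr_t amem = (mem + alignment - 1) & ~(uintptr_t)(alignment - 1);
      size_t lead = (size_t)(amem - mem);                       /* bytes skipped */
      if (lead == 0)
        return p;                                               /* already aligned */
      chunk *newp = (chunk *)((char *)p + lead);
      newp->prev_foot = p->prev_foot + lead;  /* offset to region base grows */
      newp->head      = p->head - lead;       /* chunk shrinks by the lead   */
      return newp;
    }

Note the invariant: head + prev_foot is unchanged by the adjustment, so free_mmap_chunk above still computes the original base and length.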
4207 with special cases for top, dv, mmapped chunks, and usage errors.