
Lines Matching full:mmapped

274   set will attempt to check every non-mmapped allocated and free chunk
426 the benefits that: Mmapped space can always be individually released
618 MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */
622 MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */
788 arena: current total non-mmapped bytes allocated from system
791 hblks: current number of mmapped regions
792 hblkhd: total bytes held in mmapped regions
796 uordblks: current total allocated space (normal or mmapped)
1590 also used to hold the offset this chunk has within its mmapped
1591 mmapped
1621 /* MMapped chunks need a second word of overhead ... */
1656 mmapped region to the base of the chunk.
1871 other surrounding mmapped segments and trimmed/de-allocated
2553 /* Check properties of any chunk, whether free, inuse, mmapped etc */
2573 /* Check properties of (inuse) mmapped chunks */
2592 /* If not pinuse and not mmapped, previous chunk has OK offset */
2630 /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */
3143 Directly mmapped chunks are set up with an offset to the start of
3144 the mmapped region stored in the prev_foot field of the chunk. This
3304 static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
3329 set_segment_flags(&m->seg, mmapped);
3378 or main space is mmapped or a previous contiguous call failed)
3552 /* Unmap and unlink any mmapped segments that don't contain used chunks */
3644 /* Unmap any unused mmapped segments */
3888 if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */
4197 with special cases for top, dv, mmapped chunks, and usage errors.