
Lines Matching defs:To

36 // Page::kMaxHeapObjectSize, so that they do not have to move during
40 // A store-buffer based write barrier is used to keep track of intergenerational
45 // object maps so if the page belongs to old pointer space or large object
46 // space it is essential to guarantee that the page does not contain any
47 // garbage pointers to new space: every pointer aligned word which satisfies
48 // the Heap::InNewSpace() predicate must be a pointer to a live heap object in
51 // apply to map space, which is iterated in a special fashion. However, we still
52 // require pointer fields of dead maps to be cleaned.
54 // To enable lazy cleaning of old space pages we can mark chunks of the page
58 // together to form a free list after a GC. Garbage sections created outside
63 // Each page may have up to one special garbage section. The start of this
67 // is to enable linear allocation without having to constantly update the byte
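The comments above describe linear allocation: bump a top pointer through the page's object area until it hits a limit, then fall back to slower paths. A minimal sketch of that fast path, with illustrative names (`AllocationArea`, `BumpAllocate`) that are not the actual V8 API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical linear allocation area covering [top, limit).
struct AllocationArea {
  uintptr_t top;    // next free address
  uintptr_t limit;  // end of the linear allocation area
};

// Returns the address of the newly allocated block, or 0 when the area
// is exhausted.  Real V8 would then consult the free list or request a
// new page; here we simply report failure.
uintptr_t BumpAllocate(AllocationArea* area, size_t size_in_bytes) {
  if (area->limit - area->top < size_in_bytes) return 0;
  uintptr_t result = area->top;
  area->top += size_in_bytes;
  return result;
}
```

Because allocation is a single compare and add, the byte counters mentioned above do not need updating on every allocation; they can be derived from `top` when needed.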
366 // Every n write barrier invocations we go to runtime even though
393 // Even if the mutator writes to them they will be kept black and a white
394 // to grey transition is performed in the value.
439 // are set to the value in "flags", the rest retain the current value
478 // Manage live byte count (count of bytes known to be live,
565 // The start offset of the object area in a page. Aligned to both maps and
566 // code alignment to be suitable for both. Also aligned to 32 words because
688 // If the chunk needs to remember its memory reservation, it is stored here.
695 // Used by the store buffer to keep track of which pages to mark scan-on-
703 // Used by the incremental marker to keep track of the scanning progress in
743 // The only way to get a page pointer is by calling factory methods:
757 // top address can be the upper bound of the page, we need to subtract
776 // Returns the offset of a given address within this page.
782 // Returns the address for a given offset into this page.
795 // also applies to new space allocation, since objects are never migrated
796 // from new space to large object space. Takes double alignment into account.
928 // Frees the range of virtual memory, and frees the data structures used to
976 // Freed blocks of memory are added to the free list. When the allocation
977 // list is exhausted, the free list is sorted and merged to make the new
1053 // Each space has to manage its own pages.
1105 // Returns a MemoryChunk in which the memory region from commit_area_size to
1190 // to our heap. The range is [lowest, highest[, inclusive on the low end
1212 // collector to rebuild page headers in the from space, which is
1227 // Interface for heap object iterator to be implemented by all object space
1231 // method which is used to avoid using virtual functions
1246 // to its top or from the bottom of the given page to its top.
1250 // iterator in order to be sure to visit these new objects.
1260 // Advance to the next object, skipping free spaces and other fillers and
1381 // the number of bytes that are not allocated and not available to
1383 // to internal fragmentation, top of page areas in map space), and the bytes
1390 // decreases to the non-capacity stats.
1488 // function also writes a map to the first word of the block so that it
1489 // looks like a heap object to the garbage collector and heap iteration
1511 // The free list category holds a pointer to the top element and a pointer to
1562 // top_ points to the top FreeListNode* in the free list category.
1573 // as to encourage objects allocated around the same time to be near each
1574 // other. The normal way to allocate is intended to be by bumping a 'top'
1575 // pointer until it hits a 'limit' pointer. When the limit is hit we need to
1576 // find a new space to allocate from. This is done with the free list, which
1577 // is divided up into rough categories to cut down on waste. Having finer
1585 // limit when the object we need to allocate is 1-31 words in size. These
1588 // limit when the object we need to allocate is 32-255 words in size. These
1591 // and limit when the object we need to allocate is 256-2047 words in size.
1594 // Empty pages are added to this list. These spaces are called huge.
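The size classes listed above (small: 1-31 words, medium: 32-255, large: 256-2047, huge: the rest) route a freed block to one of four free-list categories. A sketch of that routing, with illustrative enum names rather than the actual V8 identifiers:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical category names; bounds in words match the comments above.
enum FreeListCategory { kSmall, kMedium, kLarge, kHuge };

FreeListCategory SelectCategory(size_t size_in_words) {
  if (size_in_words <= 31) return kSmall;     // 1-31 words
  if (size_in_words <= 255) return kMedium;   // 32-255 words
  if (size_in_words <= 2047) return kLarge;   // 256-2047 words
  return kHuge;                               // larger blocks and empty pages
}
```

Having a handful of coarse categories keeps waste low without the bookkeeping cost of an exact best-fit search.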
1612 // number of bytes that have been lost due to internal fragmentation by
1613 // freeing the block. Bookkeeping information will be written to the block,
1620 // number of bytes lost to fragmentation is returned in the output parameter
1688 bool To(T** obj) {
1725 // addresses is not big enough to contain a single page-aligned page, a
1734 // to the initial chunk, uncommits addresses in the initial chunk.
1748 // to write it into the free list nodes that were already created.
1781 // Sets the capacity, the available space and the wasted space to zero.
1782 // The stats are rebuilt during sweeping by adding each page to the
1785 // to the available and wasted totals.
1799 // immediately added to the free list so they show up here.
1802 // Allocated bytes in this space. Garbage bytes that were not found due to
1813 // due to being too small to use for allocation. They do not include the
1814 // free bytes that were not found at all due to lazy sweeping.
1835 // Give a block of memory to the space's free list. It might be added to
1838 // no attempt to add the area to the free list is made.
1858 // Empty space allocation info, returning unused area to free list.
1883 // Overridden by subclasses to verify space-specific object
1904 // Evacuation candidates are swept by evacuator. Needs to return a valid
1934 // This function tries to steal size_in_bytes memory from the sweeper threads
1936 // for the sweeper threads to finish sweeping.
2039 // class is used for collecting statistics to print to the log file.
2063 // GC related flags copied from from-space to to-space when
2141 // Only uses the prev/next links, and sets flags to not be in new-space.
2177 // Grow the semispace to the new capacity. The new capacity
2182 // Shrinks the semispace to the new capacity. The new capacity
2215 // Resets the space to using the first page.
2256 // The "from" address must be on a page prior to the "to" address,
2258 static void AssertValidRange(Address from, Address to);
2261 inline static void AssertValidRange(Address from, Address to) {}
2275 static void Swap(SemiSpace* from, SemiSpace* to);
2284 // Flips the semispace between being from-space and to-space.
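The flip mentioned here is the role swap at the end of a scavenge: survivors have been copied into to-space, so the two semispaces trade roles. A minimal sketch under the assumption of a stripped-down `SemiSpace` (real V8's carries pages, capacity, and flags):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical semispace: just an id and a role flag for illustration.
struct SemiSpace {
  int id;
  bool is_from_space;
};

// After a scavenge, the old to-space (holding the survivors) becomes the
// new from-space, and the emptied old from-space becomes the new to-space.
void Swap(SemiSpace* from, SemiSpace* to) {
  std::swap(*from, *to);
  from->is_from_space = true;
  to->is_from_space = false;
}
```

Swapping pointers and flags instead of copying objects is what makes the end of a scavenge cheap.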
2302 // Used to govern object promotion during mark-compact collection.
2305 // Masks and comparison values to test for containment in this semispace.
2325 // semispace from a given start address (defaulting to the bottom of the
2326 // semispace) to the top of the semispace. New objects allocated after the
2334 // Iterate over all of allocated to-space.
2336 // Iterate over all of allocated to-space, with a custom size function.
2338 // Iterate over part of allocated to-space, from start to the end
2341 // Iterate from one address to another in the same semi-space.
2342 SemiSpaceIterator(Address from, Address to);
2382 // Make an iterator that runs over all pages in to-space.
2390 // to the page that contains limit in the same semispace.
2410 // forwards most functions to the appropriate semispace.
2462 // The same, but returning an int. We have to have the one that returns
2568 // Reset the allocation pointer to the beginning of the active semispace.
2579 // or to zap it). Notice: space-addresses are not necessarily on the
2603 // Try to switch the active semispace to a new, empty, page.
2619 // Iterates the active semispace to collect statistics.
2628 // to space during a scavenge GC.
2650 // Update allocation info to match the current to-space page.
2673 // to be lower than actual limit and then will gradually increase it
2674 // in steps to guarantee that we do incremental marking steps even
2731 // TODO(1600): this limit is artificial just to keep code compilable
2817 // A large object always starts at Page::kObjectStartOffset in a page.
2907 // Map MemoryChunk::kAlignment-aligned chunks to large pages covering them
2934 // pointers to new space.