
Lines Matching defs:it

307     // Reserve the non moving mem map before the other two since it needs to be at a specific
424 // Remove the main backup space since it slows down the GC to have unused extra spaces.
462 // It's still too early to take a lock because there are no threads yet, but we can create locks
463 // now. We don't create them earlier, to make it clear that you can't use locks during heap
576 // If we are the zygote and don't yet have a zygote space, it means that the zygote fork will
623 // The allocation stack may have non movable objects in it. We need to flush it since the GC
688 // Don't use find space since it only finds spaces which actually contain objects instead of
737 // we attempt to disable it.
793 // collection where it's not easily known which objects are alive
847 for (auto* it = allocation_stack_->Begin(), *end = allocation_stack_->End(); it < end; ++it) {
848 mirror::Object* const obj = it->AsMirrorPtr();
932 auto it = std::find(continuous_spaces_.begin(), continuous_spaces_.end(), continuous_space);
933 DCHECK(it != continuous_spaces_.end());
934 continuous_spaces_.erase(it);
940 auto it = std::find(discontinuous_spaces_.begin(), discontinuous_spaces_.end(),
942 DCHECK(it != discontinuous_spaces_.end());
943 discontinuous_spaces_.erase(it);
946 auto it = std::find(alloc_spaces_.begin(), alloc_spaces_.end(), space->AsAllocSpace());
947 DCHECK(it != alloc_spaces_.end());
948 alloc_spaces_.erase(it);
1167 // Launch homogeneous space compaction if it is desired.
1287 // Jemalloc does its own internal trimming.
1387 // then clear the stack containing it.
1508 // The allocation failed. If the GC is running, block until it completes, and then retry the
1668 // Need SuspendAll here to prevent lock violation if RosAlloc does it during InspectAll.
1821 // Homogeneous space compaction is a copying transition, so we can't run it if the moving GC disable count
1897 // If someone else beat us to it and changed the collector before we could, exit.
1900 // then it would get blocked on WaitForGcToCompleteLocked.
1928 // pointer space last transition it will be protected.
1943 // Remove the main space so that we don't try to trim it, this doesn't work for debug
1947 delete main_space_; // Delete the space since it has been removed.
2138 auto it = bins_.lower_bound(alloc_size);
2139 if (it == bins_.end()) {
2140 // No available space in the bins, place it in the target space instead (grows the zygote
2151 size_t size = it->first;
2152 uintptr_t pos = it->second;
2153 bins_.erase(it); // Erase the old bin which we replace with the new smaller bin.
2205 // The end of the non-moving space may be protected, unprotect it so that we can copy the zygote
2266 // Save the old space so that we can remove it after we complete creating the zygote space.
2329 for (auto* it = stack->Begin(); it != limit; ++it) {
2330 const mirror::Object* obj = it->AsMirrorPtr();
2460 // This gets recalculated in GrowForUtilization. It is important that it is disabled /
2461 // calculated in the same thread so that there aren't any races that can cause it to become
2486 // Print the GC if it is an explicit GC (e.g. Runtime.gc()) or a slow GC
2490 // GC for alloc pauses the allocating thread, so consider it as a pause.
2744 // be live or else how did we find it in the live bitmap?
2746 // The class doesn't count as a reference but we should verify it anyways.
2863 // If the object is not dirty and it is referencing something in the live stack other than
2864 // class, then it must be on a dirty card.
2870 // Card should be either kCardDirty if it got re-dirtied after we aged it, or
2871 // kCardDirty - 1 if it didn't get touched since we aged it.
2943 // We need to sort the live stack since we binary search it.
2951 for (auto* it = live_stack_->Begin(); it != live_stack_->End(); ++it) {
2952 if (!kUseThreadLocalAllocationStack || it->AsMirrorPtr() != nullptr) {
2953 visitor(it->AsMirrorPtr());
2998 auto it = mod_union_tables_.find(space);
2999 if (it == mod_union_tables_.end()) {
3002 return it->second;
3006 auto it = remembered_sets_.find(space);
3007 if (it == remembered_sets_.end()) {
3010 return it->second;
3041 // scan either card. If we end up with the non aged card, we scan it in the pause.
3069 // Sort the live stack so that we can quickly binary search it later.
3203 // is not the heap task daemon thread, it's considered as a
3274 // This doesn't actually resize any memory. It just lets the heap grow more when necessary.
3389 // Restore object in case it gets moved.
3514 // a space it will hold its lock and can become a cause of jank.
3603 // The second watermark is higher than the gc watermark. If you hit this it means you are
3610 // Native bytes allocated may be updated by finalization, refresh it.
3620 // We have just run finalizers, update the native watermark since it is very likely that
3674 auto it = remembered_sets_.find(space);
3675 CHECK(it != remembered_sets_.end());
3676 delete it->second;
3677 remembered_sets_.erase(it);
3707 // The first stack frame is get_backtrace itself. Skip it.
3712 // ip may be off for ARM but it shouldn't matter since we only use it for hashing.