
Lines Matching defs:page

2  *  virtual page mapping and translated block handling
91 /* any access to the tbs or the page table must use this lock */
137 /* list of TBs intersecting this ram page */
140 of lookups we do to a given page to use a bitmap */
149 /* offset in host memory of the page + io_index in the low bits */
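The three comments above describe the per-page bookkeeping kept for each ram page. A minimal sketch of such descriptors is shown below; the struct and field names are assumptions for illustration, and the real structures carry additional members.

    /* Hypothetical per-page descriptors suggested by the comments above. */
    struct TranslationBlock;                 /* opaque here, defined elsewhere */
    typedef unsigned long ram_addr_t;        /* placeholder width for the sketch */

    typedef struct PageDesc {
        /* list of TBs intersecting this ram page */
        struct TranslationBlock *first_tb;
        /* number of lookups done on this page; past a threshold a bitmap of
           code locations is built so writes only invalidate real code */
        unsigned int code_write_count;
        unsigned char *code_bitmap;
    } PageDesc;

    typedef struct PhysPageDesc {
        /* offset in host memory of the page + io_index in the low bits */
        ram_addr_t phys_offset;
    } PhysPageDesc;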
706 printf("ERROR page flags: PC=%08lx size=%04x f1=%x f2=%x\n",
796 /* remove the TB from the page list */
886 /* NOTE: tb_end may be after the end of the page, but
932 /* check next page if needed */
942 /* invalidate all TBs which intersect with the target physical page
944 the same physical page. 'is_cpu_write_access' should be true if called
983 /* NOTE: tb_end may be after the end of the page, but
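A heavily simplified sketch of that invalidation walk follows; page_find(), the page_next link, and tb_phys_start() are assumed names (the real list also tags its pointers with which of a TB's two pages each link belongs to), so read it as an outline rather than the actual code.

    /* Outline only: invalidate every TB on this page overlapping [start, end). */
    PageDesc *p = page_find(start >> TARGET_PAGE_BITS);
    if (p != NULL) {
        TranslationBlock *tb = p->first_tb;
        while (tb != NULL) {
            TranslationBlock *next = tb->page_next;      /* assumed link name  */
            unsigned long tb_start = tb_phys_start(tb);  /* assumed helper     */
            if (tb_start < end && tb_start + tb->size > start) {
                tb_phys_invalidate(tb, -1);
            }
            tb = next;
        }
    }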
1140 /* add the tb in the target page and protect it if necessary */
1162 /* force the host page as non writable (writes will have a
1163 page fault + mprotect overhead) */
1179 printf("protecting code page: 0x" TARGET_FMT_lx "\n",
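A hedged sketch of the write protection described above: the host page backing guest code is made read-only so that a guest write faults, and write access is restored once the overlapping TBs have been invalidated. The function names and arguments are illustrative, not identifiers from this file.

    #include <sys/mman.h>

    static void protect_code_page(void *host_page, size_t host_page_size)
    {
        /* writes now cost a page fault + mprotect() round trip */
        mprotect(host_page, host_page_size, PROT_READ);
    }

    static void unprotect_code_page(void *host_page, size_t host_page_size)
    {
        /* called after the overlapping translated blocks were invalidated */
        mprotect(host_page, host_page_size, PROT_READ | PROT_WRITE);
    }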
1186 allocated in a physical page */
1225 /* add a new TB and link it to the physical page tables. phys_page2 is
1226 (-1) to indicate that only one page contains the TB. */
1242 /* add in the page list */
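Before that linking step, the caller works out whether the translated code spills into a second page. A sketch of that computation is below; tb_link_phys() and get_phys_addr_code() are assumed to be the entry points of this QEMU generation.

    /* Sketch: pass -1 as phys_page2 when the TB fits entirely in one page. */
    target_ulong virt_page2 = (pc + tb->size - 1) & TARGET_PAGE_MASK;
    target_ulong phys_page2 = -1;
    if ((pc & TARGET_PAGE_MASK) != virt_page2) {
        /* check next page if needed */
        phys_page2 = get_phys_addr_code(env, virt_page2);
    }
    tb_link_phys(tb, phys_pc, phys_page2);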
1776 overlap the flushed page. */
1852 /* update the TLBs so that writes to code in the virtual page 'addr'
1861 /* update the TLB so that writes in physical page 'phys_addr' are no longer
1973 /* update the TLB corresponding to virtual page vaddr
1987 is permitted. Return 0 if OK or 2 if the page could not be mapped
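A target's MMU fault handler is the usual caller of that entry-add path. A hedged sketch of such a call site follows; the argument list is an assumption, since the prototype has changed across QEMU versions.

    /* Hypothetical call site: install a read/write/execute mapping for the
       faulting virtual page under MMU index mmu_idx. */
    ret = tlb_set_page(env, vaddr & TARGET_PAGE_MASK, paddr & TARGET_PAGE_MASK,
                       PAGE_READ | PAGE_WRITE | PAGE_EXEC, mmu_idx, is_softmmu);
    if (ret == 2) {
        /* the page could not be mapped */
    }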
2089 * If we have memchecker running, we need to make sure that page, cached
2094 * We need to check with memory checker if we should invalidate this page
2097 * - Page that's been cached belongs to the user space.
2098 * - Request to cache this page didn't come from softmmu. We're covered
2099 * there, because after page was cached here we will invalidate it in
2101 * - Cached page belongs to RAM, not I/O area.
2102 * - Page is cached for read, or write access.
2213 /* Modify the flags of a page and invalidate the code if necessary.
2268 /* unprotect the page if it was put read-only because it
2281 page. Return TRUE if the fault was successfully handled. */
2307 /* if the page was really writable, then we change its
2362 For RAM, 'size' must be a multiple of the target page size.
2364 io memory page. The address used when calling the IO function is
2366 start_addr and region_offset are rounded down to a page boundary
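A hedged usage sketch of that registration call follows, assuming the qemu_ram_alloc() helper and the IO_MEM_RAM flag of this QEMU generation; treat the exact prototypes as assumptions.

    /* Sketch: map 1 MB of freshly allocated RAM at guest-physical 0x08000000.
       Both size and start address are multiples of the target page size. */
    ram_addr_t ram_offset = qemu_ram_alloc(0x100000);
    cpu_register_physical_memory(0x08000000,            /* start_addr */
                                 0x100000,              /* size       */
                                 ram_offset | IO_MEM_RAM);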
3276 target_ulong page;
3280 page = addr & TARGET_PAGE_MASK;
3281 l = (page + TARGET_PAGE_SIZE) - addr;
3284 flags = page_get_flags(page);
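These lines are the core of the usual per-page chunking: clamp each transfer to the end of the current target page, then check that page's protection. A sketch of the surrounding loop, with the actual byte copy left as an assumption, looks like this:

    /* Sketch: walk a guest-virtual buffer one page at a time. */
    while (len > 0) {
        page = addr & TARGET_PAGE_MASK;
        l = (page + TARGET_PAGE_SIZE) - addr;   /* bytes left in this page */
        if (l > len)
            l = len;
        flags = page_get_flags(page);
        if (!(flags & PAGE_VALID))
            return -1;                          /* unmapped guest page */
        /* ... transfer l bytes to or from the guest mapping here ... */
        len -= l;
        buf += l;
        addr += l;
    }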
3331 target_phys_addr_t page;
3336 page = addr & TARGET_PAGE_MASK;
3337 l = (page + TARGET_PAGE_SIZE) - addr;
3340 p = phys_page_find(page >> TARGET_PAGE_BITS);
3422 target_phys_addr_t page;
3427 page = addr & TARGET_PAGE_MASK;
3428 l = (page + TARGET_PAGE_SIZE) - addr;
3431 p = phys_page_find(page >> TARGET_PAGE_BITS);
3518 target_phys_addr_t page;
3524 page = addr & TARGET_PAGE_MASK;
3525 l = (page + TARGET_PAGE_SIZE) - addr;
3528 p = phys_page_find(page >> TARGET_PAGE_BITS);
3680 /* warning: addr must be aligned. The ram page is not marked as dirty
3810 target_ulong page;
3813 page = addr & TARGET_PAGE_MASK;
3814 phys_addr = cpu_get_phys_page_debug(env, page);
3815 /* if no physical page mapped, return an error */
3818 l = (page + TARGET_PAGE_SIZE) - addr;
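The debug accessor combines the same per-page chunking with a virtual-to-physical translation for each page. A sketch under the assumption that cpu_physical_memory_rw() performs the actual transfer:

    /* Sketch: debug read/write of guest virtual memory, one page at a time. */
    while (len > 0) {
        page = addr & TARGET_PAGE_MASK;
        phys_addr = cpu_get_phys_page_debug(env, page);
        if (phys_addr == -1)
            return -1;                     /* no physical page mapped */
        l = (page + TARGET_PAGE_SIZE) - addr;
        if (l > len)
            l = len;
        cpu_physical_memory_rw(phys_addr + (addr & ~TARGET_PAGE_MASK),
                               buf, l, is_write);
        len -= l;
        buf += l;
        addr += l;
    }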
3933 cpu_fprintf(f, "cross page TB count %d (%d%%)\n",