
Lines Matching refs:zone

35  * zone->lock and zone->lru_lock are two of the hottest locks in the kernel.
37 * cachelines. There are very few zone structures in the machine, so space
64 NUMA_INTERLEAVE_HIT, /* interleaver preferred this zone */
103 * allocator in the gfp_mask, in the zone modifier bits. These bits
106 * the gfp_mask should be considered as zone modifiers. Each valid
107 * combination of the zone modifier bits has a corresponding list
108 * of zones (in node_zonelists). Thus for two zone modifiers there
111 * combinations of zone modifiers in "zone modifier space".
113 * As an optimisation any zone modifier bits which are only valid when
114 * no other zone modifier bits are set (loners) should be placed in
117 * of three zone modifier bits, we could require up to eight zonelists.
118 * If the left most zone modifier is a "loner" then the highest valid
139 struct zone {
146 * GB of ram we must reserve some of the lower zone memory (otherwise we risk
155 * zone reclaim becomes active if more unmapped pages exist.
186 /* A count of how many reclaimers are scanning this zone */
189 /* Zone statistics */
193 * prev_priority holds the scanning priority for this zone. It is
203 * this zone was successfully refilled to free_pages == pages_high.
253 * to be read outside of zone->lock, and it is done in the main
256 * The lock is declared along with zone->lock because it is
257 * frequently read in proximity to zone->lock. It's good to
289 struct zone *zones[MAX_NUMNODES * MAX_NR_ZONES + 1]; // NULL delimited
295 * (mostly NUMA machines?) to denote a higher-level memory zone than the
296 * zone denotes.
302 * per-zone basis.
306 struct zone node_zones[MAX_NR_ZONES];
319 * Nests above zone->lock and zone->size_seqlock.
349 void wakeup_kswapd(struct zone *zone, int order);
350 int zone_watermark_ok(struct zone *z, int order, unsigned long mark,
353 extern int init_currently_empty_zone(struct zone *zone, unsigned long start_pfn,
367 * zone_idx() returns 0 for the ZONE_DMA zone, 1 for the ZONE_NORMAL zone, etc.
369 #define zone_idx(zone) ((zone) - (zone)->zone_pgdat->node_zones)
371 static inline int populated_zone(struct zone *zone)
373 return (!!zone->present_pages);
387 * is_highmem - helper function to quickly check if a struct zone is a
388 * highmem zone or not. This is an attempt to keep references
390 * @zone - pointer to struct zone variable
392 static inline int is_highmem(struct zone *zone)
394 return zone == zone->zone_pgdat->node_zones + ZONE_HIGHMEM;
397 static inline int is_normal(struct zone *zone)
399 return zone == zone->zone_pgdat->node_zones + ZONE_NORMAL;
402 static inline int is_dma32(struct zone *zone)
404 return zone == zone->zone_pgdat->node_zones + ZONE_DMA32;
407 static inline int is_dma(struct zone *zone)
409 return zone == zone->zone_pgdat->node_zones + ZONE_DMA;
412 /* These two functions are used to setup the per zone pages min values */
446 extern struct zone *next_zone(struct zone *zone);
458 * @zone - pointer to struct zone variable
460 * The user only needs to declare the zone variable, for_each_zone
463 #define for_each_zone(zone) \
464 for (zone = (first_online_pgdat())->node_zones; \
465 zone; \
466 zone = next_zone(zone))
474 * with 32 bit page->flags field, we reserve 9 bits for node/zone info.