path: root/mm/internal.h
author    Minchan Kim <minchan@kernel.org>  2012-10-08 23:31:55 (GMT)
committer Linus Torvalds <torvalds@linux-foundation.org>  2012-10-09 07:22:43 (GMT)
commit    02c6de8d757cb32c0829a45d81c3dfcbcafd998b (patch)
tree      0d8f0d182a44ba4ec4af0c909d01eb663e03e254 /mm/internal.h
parent    70400303ce0c4ced3139499c676d5c79636b0c72 (diff)
download  linux-fsl-qoriq-02c6de8d757cb32c0829a45d81c3dfcbcafd998b.tar.xz
mm: cma: discard clean pages during contiguous allocation instead of migration
Drop clean cache pages instead of migrating them during alloc_contig_range(), to minimise allocation latency by reducing the amount of migration that is necessary. This is useful for CMA because the latency of migration matters more than preserving the background process's working set.

In addition, as pages are reclaimed, fewer free pages are required as migration targets, so this avoids entering memory reclaim just to obtain free pages, which is a contributory factor to increased latency.

I measured the elapsed time of __alloc_contig_migrate_range(), which migrates 10M in a 40M movable zone, on a QEMU machine.

Before - 146ms, After - 7ms

[akpm@linux-foundation.org: fix nommu build]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Mel Gorman <mgorman@suse.de>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Cc: Rik van Riel <riel@redhat.com>
Tested-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
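This page's diffstat covers only the declaration added to mm/internal.h; the reclaim itself happens at the call site in __alloc_contig_migrate_range() and in a new helper in mm/vmscan.c, neither of which is shown here. A minimal sketch of the caller side, assuming the compact_control-based convention that alloc_contig_range() uses (the exact hunk is not on this page):

	/* In __alloc_contig_migrate_range(): before calling
	 * migrate_pages(), drop clean file-backed pages from the
	 * isolated list outright. Each page reclaimed here needs
	 * neither a migration pass nor a free target page.
	 */
	nr_reclaimed = reclaim_clean_pages_from_list(cc.zone,
						     &cc.migratepages);
	cc.nr_migratepages -= nr_reclaimed;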
Diffstat (limited to 'mm/internal.h')
-rw-r--r--  mm/internal.h | 3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/internal.h b/mm/internal.h
index bbd7b34..8312d4f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -356,5 +356,6 @@ extern unsigned long vm_mmap_pgoff(struct file *, unsigned long,
 	unsigned long, unsigned long);
 extern void set_pageblock_order(void);
-
+unsigned long reclaim_clean_pages_from_list(struct zone *zone,
+			struct list_head *page_list);
 #endif /* __MM_INTERNAL_H */
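For context, the definition behind this declaration lands in mm/vmscan.c, outside the scope of this diffstat. A paraphrased sketch of its shape, assuming the shrink_page_list() interface of this kernel vintage (not a verbatim copy of the patch):

	/* Walk an isolated page list, peel off clean file-backed pages,
	 * and hand them straight to the reclaim core; dirty and
	 * anonymous pages stay on the list for the migration fallback.
	 * Returns the number of pages freed.
	 */
	unsigned long reclaim_clean_pages_from_list(struct zone *zone,
						    struct list_head *page_list)
	{
		struct scan_control sc = {
			.gfp_mask = GFP_KERNEL,
			.priority = DEF_PRIORITY,
			.may_unmap = 1,
		};
		unsigned long nr_reclaimed, dummy1, dummy2;
		struct page *page, *next;
		LIST_HEAD(clean_pages);

		list_for_each_entry_safe(page, next, page_list, lru) {
			if (page_is_file_cache(page) && !PageDirty(page)) {
				ClearPageActive(page);
				list_move(&page->lru, &clean_pages);
			}
		}

		nr_reclaimed = shrink_page_list(&clean_pages, zone, &sc,
						TTU_UNMAP|TTU_IGNORE_ACCESS,
						&dummy1, &dummy2, true);
		list_splice(&clean_pages, page_list);
		__mod_zone_page_state(zone, NR_ISOLATED_FILE, -nr_reclaimed);
		return nr_reclaimed;
	}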