mm: cma: remove watermark hacks

Commits 2139cbe627 ("cma: fix counting of isolated pages") and
d95ea5d18e ("cma: fix watermark checking") introduced a reliable
method of free page accounting when memory is being allocated from CMA
regions, so the workaround introduced earlier by commit 49f223a9cd
("mm: trigger page reclaim in alloc_contig_range() to stabilise
watermarks") can be finally removed.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Mel Gorman <mel@csn.ul.ie>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Author: Marek Szyprowski, 2012-12-11 16:02:59 -08:00; committed by Linus Torvalds
commit bc357f431c (parent 2e30abd173)
2 changed files with 0 additions and 67 deletions

--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -63,10 +63,8 @@ enum {
 
 #ifdef CONFIG_CMA
 # define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
-# define cma_wmark_pages(zone) zone->min_cma_pages
 #else
 # define is_migrate_cma(migratetype) false
-# define cma_wmark_pages(zone) 0
 #endif
 
 #define for_each_migratetype_order(order, type) \
@@ -382,13 +380,6 @@ struct zone {
 #ifdef CONFIG_MEMORY_HOTPLUG
 	/* see spanned/present_pages for more description */
 	seqlock_t span_seqlock;
 #endif
-#ifdef CONFIG_CMA
-	/*
-	 * CMA needs to increase watermark levels during the allocation
-	 * process to make sure that the system is not starved.
-	 */
-	unsigned long min_cma_pages;
-#endif
 	struct free_area free_area[MAX_ORDER];