From: Joonsoo Kim
Date: Thu, 13 Nov 2014 23:19:07 +0000 (-0800)
Subject: mm/compaction: skip the range until proper target pageblock is met
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=58420016303769f74c58248a59ca0f435041b352;p=GitHub%2Fmoto-9609%2Fandroid_kernel_motorola_exynos9610.git

mm/compaction: skip the range until proper target pageblock is met

Commit 7d49d8868336 ("mm, compaction: reduce zone checking frequency in
the migration scanner") has a side effect that changes how the iteration
range is calculated.  Before that change, block_end_pfn was calculated
from start_pfn; now it blindly adds pageblock_nr_pages to the previous
value.

As a result, isolation_start_pfn can end up larger than block_end_pfn
when a page of more than pageblock order is isolated, and isolation then
fails because the range parameter is invalid.

To prevent this, skip the range until a proper target pageblock is met.
Without this patch, CMA allocations of more than pageblock order always
fail; with this patch, they succeed.

Signed-off-by: Joonsoo Kim
Cc: Vlastimil Babka
Cc: Minchan Kim
Cc: Michal Nazarewicz
Cc: Naoya Horiguchi
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/compaction.c b/mm/compaction.c
index ec74cf0123ef..4f0151cfd238 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -479,6 +479,16 @@ isolate_freepages_range(struct compact_control *cc,
 
 		block_end_pfn = min(block_end_pfn, end_pfn);
 
+		/*
+		 * pfn could pass the block_end_pfn if isolated freepage
+		 * is more than pageblock order. In this case, we adjust
+		 * scanning range to right one.
+		 */
+		if (pfn >= block_end_pfn) {
+			block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+			block_end_pfn = min(block_end_pfn, end_pfn);
+		}
+
 		if (!pageblock_pfn_to_page(pfn, block_end_pfn, cc->zone))
 			break;
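
For illustration only, not part of the patch: a minimal userspace sketch of
the range adjustment added above.  ALIGN() is re-defined locally with the
usual power-of-two round-up semantics, PAGEBLOCK_NR_PAGES is assumed to be
512 (order-9 pageblocks with 4K pages), and the pfn values are made up.

#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL                    /* assumption: order-9 pageblock, 4K pages */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))  /* round x up to a multiple of a (a is a power of two) */

int main(void)
{
	unsigned long end_pfn = 4096;        /* end of the isolation range */
	unsigned long block_end_pfn = 1024;  /* stale value carried over from the previous iteration */
	unsigned long pfn = 1536;            /* scanner jumped here after isolating a > pageblock-order page */

	/*
	 * The same adjustment the patch adds: move block_end_pfn up to the
	 * end of the pageblock that contains pfn, clamped to end_pfn.
	 */
	if (pfn >= block_end_pfn) {
		block_end_pfn = ALIGN(pfn + 1, PAGEBLOCK_NR_PAGES);
		if (block_end_pfn > end_pfn)
			block_end_pfn = end_pfn;
	}

	/* Prints "pfn=1536 block_end_pfn=2048": the range is valid again (pfn < block_end_pfn). */
	printf("pfn=%lu block_end_pfn=%lu\n", pfn, block_end_pfn);
	return 0;
}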