mm, page_alloc: fallback to smallest page when not stealing whole pageblock
authorVlastimil Babka <vbabka@suse.cz>
Mon, 10 Jul 2017 22:47:14 +0000 (15:47 -0700)
committerLinus Torvalds <torvalds@linux-foundation.org>
Mon, 10 Jul 2017 23:32:30 +0000 (16:32 -0700)
Since commit 3bc48f96cf11 ("mm, page_alloc: split smallest stolen page
in fallback") we pick the smallest (but sufficient) page of all that
have been stolen from a pageblock of different migratetype.  However,
there are cases when we decide not to steal the whole pageblock.

Practically, in the current implementation it means that we are trying
to fall back for a MIGRATE_MOVABLE allocation of order X, go through
the freelists from MAX_ORDER-1 down to X, and find a free page of
order Y.  If Y is less than pageblock_order / 2, we decide not to
steal all pages from the pageblock.  When Y > X, we are potentially
splitting a larger page than we need, as there might be other pages of
order Z, where X <= Z < Y.  Since Y is already too small to steal the
whole pageblock, picking the smallest available Z results in the same
decision not to steal, while avoiding the split of a higher-order page
in a MIGRATE_UNMOVABLE or MIGRATE_RECLAIMABLE pageblock.
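
The "steal the whole pageblock?" decision roughly corresponds to what
can_steal_fallback() checks.  A minimal stand-alone sketch of that
rule, with pageblock_order hard-coded and the helper name invented so
it compiles outside the kernel tree, could look like this:

  /*
   * Simplified model of the whole-pageblock stealing decision
   * (roughly what can_steal_fallback() does in mm/page_alloc.c).
   * PAGEBLOCK_ORDER is hard-coded here for illustration only.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define PAGEBLOCK_ORDER	9	/* typical with 4K pages / 2MB pageblocks */

  enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE };

  static bool can_steal_whole_pageblock(unsigned int order, int start_mt)
  {
  	/* A page covering the whole pageblock is always taken as a whole. */
  	if (order >= PAGEBLOCK_ORDER)
  		return true;

  	/*
  	 * Steal the whole pageblock if the found page spans at least half
  	 * of it, or if the request is for the less movable types that we
  	 * group more aggressively.
  	 */
  	if (order >= PAGEBLOCK_ORDER / 2 ||
  	    start_mt == MIGRATE_RECLAIMABLE ||
  	    start_mt == MIGRATE_UNMOVABLE)
  		return true;

  	return false;
  }

  int main(void)
  {
  	/* Order-3 page found for a movable request: too small, don't steal all. */
  	printf("%d\n", can_steal_whole_pageblock(3, MIGRATE_MOVABLE));	/* 0 */
  	/* Order-5 page found: at least half a pageblock, steal the whole block. */
  	printf("%d\n", can_steal_whole_pageblock(5, MIGRATE_MOVABLE));	/* 1 */
  	return 0;
  }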

This patch therefore changes the fallback algorithm so that in the
situation described above, the fallback search strategy is switched to
go from order X upwards and find the smallest suitable fallback page.
In theory there shouldn't be a downside to this change with respect to
fragmentation.
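
Condensed, the fallback search becomes a two-phase scan; the actual
change to __rmqueue_fallback() is in the diff below, but a stand-alone
toy model (free lists reduced to a per-order availability array, all
names invented for illustration) shows the intended behaviour:

  /*
   * Toy model of the two-phase fallback search: phase 1 scans from the
   * largest order down, and if the found page is too small to steal the
   * whole pageblock for a movable request, phase 2 rescans upwards from
   * the requested order to pick the smallest suitable page instead.
   */
  #include <stdbool.h>
  #include <stdio.h>

  #define MAX_ORDER	11
  #define PAGEBLOCK_ORDER	9

  /* Returns the order to split from, or -1 if no fallback is available. */
  static int pick_fallback_order(const bool have_free[MAX_ORDER],
  			       unsigned int request_order, bool movable)
  {
  	int current_order;

  	/* Phase 1: find the largest available page, as before this patch. */
  	for (current_order = MAX_ORDER - 1;
  	     current_order >= (int)request_order; current_order--) {
  		if (!have_free[current_order])
  			continue;

  		/*
  		 * Too small to steal the whole pageblock and the request is
  		 * movable: prefer the smallest page that is still sufficient.
  		 */
  		if (movable && current_order < PAGEBLOCK_ORDER / 2 &&
  		    current_order > (int)request_order)
  			goto find_smallest;

  		return current_order;
  	}
  	return -1;

  find_smallest:
  	/* Phase 2: scan upwards from the requested order. */
  	for (current_order = request_order; current_order < MAX_ORDER;
  	     current_order++) {
  		if (have_free[current_order])
  			return current_order;
  	}
  	return -1;	/* cannot happen: phase 1 already found a page */
  }

  int main(void)
  {
  	/* Fallback pages of orders 2 and 3 are free; the request is order 1. */
  	bool have_free[MAX_ORDER] = { [2] = true, [3] = true };

  	/* A movable request now picks order 2 (smallest), not order 3 (largest). */
  	printf("%d\n", pick_fallback_order(have_free, 1, true));	/* 2 */
  	printf("%d\n", pick_fallback_order(have_free, 1, false));	/* 3 */
  	return 0;
  }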

This has been tested with mmtests' stress-highalloc performing
GFP_KERNEL order-4 allocations; here are the relevant extfrag
tracepoint statistics:

                                                        4.12.0-rc2      4.12.0-rc2
                                                         1-kernel4       2-kernel4
  Page alloc extfrag event                                  25640976    69680977
  Extfrag fragmenting                                       25621086    69661364
  Extfrag fragmenting for unmovable                            74409       73204
  Extfrag fragmenting unmovable placed with movable            69003       67684
  Extfrag fragmenting unmovable placed with reclaim.            5406        5520
  Extfrag fragmenting for reclaimable                           6398        8467
  Extfrag fragmenting reclaimable placed with movable            869         884
  Extfrag fragmenting reclaimable placed with unmov.            5529        7583
  Extfrag fragmenting for movable                           25540279    69579693

Since we force movable allocations to steal the smallest available
page (which we then practically always split), we steal less per
fallback, so the number of fallbacks increases and steals potentially
happen from different pageblocks.  This is however not an issue for
movable pages that can be compacted.

Importantly, the "unmovable placed with movable" statistic is lower,
which is the result of less fragmentation in the unmovable pageblocks.
The effect on reclaimable allocations is a bit unclear.

Link: http://lkml.kernel.org/r/20170529093947.22618-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/page_alloc.c

index bd65b60939b611e18d3772d26079421d0b4eb495..869035717048020b6bb9e88477e4e82a95760c87 100644 (file)
@@ -2216,7 +2216,11 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
        int fallback_mt;
        bool can_steal;
 
-       /* Find the largest possible block of pages in the other list */
+       /*
+        * Find the largest available free page in the other list. This roughly
+        * approximates finding the pageblock with the most free pages, which
+        * would be too costly to do exactly.
+        */
        for (current_order = MAX_ORDER-1;
                                current_order >= order && current_order <= MAX_ORDER-1;
                                --current_order) {
@@ -2226,19 +2230,50 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
                if (fallback_mt == -1)
                        continue;
 
-               page = list_first_entry(&area->free_list[fallback_mt],
-                                               struct page, lru);
+               /*
+                * We cannot steal all free pages from the pageblock and the
+                * requested migratetype is movable. In that case it's better to
+                * steal and split the smallest available page instead of the
+                * largest available page, because even if the next movable
+                * allocation falls back into a different pageblock than this
+                * one, it won't cause permanent fragmentation.
+                */
+               if (!can_steal && start_migratetype == MIGRATE_MOVABLE
+                                       && current_order > order)
+                       goto find_smallest;
 
-               steal_suitable_fallback(zone, page, start_migratetype,
-                                                               can_steal);
+               goto do_steal;
+       }
 
-               trace_mm_page_alloc_extfrag(page, order, current_order,
-                       start_migratetype, fallback_mt);
+       return false;
 
-               return true;
+find_smallest:
+       for (current_order = order; current_order < MAX_ORDER;
+                                                       current_order++) {
+               area = &(zone->free_area[current_order]);
+               fallback_mt = find_suitable_fallback(area, current_order,
+                               start_migratetype, false, &can_steal);
+               if (fallback_mt != -1)
+                       break;
        }
 
-       return false;
+       /*
+        * This should not happen - we already found a suitable fallback
+        * when looking for the largest page.
+        */
+       VM_BUG_ON(current_order == MAX_ORDER);
+
+do_steal:
+       page = list_first_entry(&area->free_list[fallback_mt],
+                                                       struct page, lru);
+
+       steal_suitable_fallback(zone, page, start_migratetype, can_steal);
+
+       trace_mm_page_alloc_extfrag(page, order, current_order,
+               start_migratetype, fallback_mt);
+
+       return true;
+
 }
 
 /*