mm, thp: really limit transparent hugepage allocation to local node
author	David Rientjes <rientjes@google.com>
	Tue, 14 Apr 2015 22:46:58 +0000 (15:46 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
	Tue, 14 Apr 2015 23:49:03 +0000 (16:49 -0700)
Commit 077fcf116c8c ("mm/thp: allocate transparent hugepages on local
node") restructured alloc_hugepage_vma() with the intent of only
allocating transparent hugepages locally when there was not an effective
interleave mempolicy.

alloc_pages_exact_node() does not limit the allocation to the single node,
however; it merely prefers it.  This is because __GFP_THISNODE is not set,
which would cause the node-local nodemask to be passed.  Without it, only a
nodemask that prefers the local node is passed, so the allocator is free to
fall back to remote nodes.

Fix this by passing __GFP_THISNODE and falling back to small pages when
the allocation fails.

Commit 9f1b868a13ac ("mm: thp: khugepaged: add policy for finding target
node") suffers from a similar problem for khugepaged, which is also fixed.

Fixes: 077fcf116c8c ("mm/thp: allocate transparent hugepages on local node")
Fixes: 9f1b868a13ac ("mm: thp: khugepaged: add policy for finding target node")
Signed-off-by: David Rientjes <rientjes@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Pravin Shelar <pshelar@nicira.com>
Cc: Jarno Rajahalme <jrajahalme@nicira.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/huge_memory.c
mm/mempolicy.c

index 6352c1dfa898385e545ad76c179b882da884046f..3afb5cbe13128b51428d812e9b2d251fa3e8e4e1 100644 (file)
@@ -2328,8 +2328,14 @@ static struct page
                       struct vm_area_struct *vma, unsigned long address,
                       int node)
 {
+       gfp_t flags;
+
        VM_BUG_ON_PAGE(*hpage, *hpage);
 
+       /* Only allocate from the target node */
+       flags = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
+               __GFP_THISNODE;
+
        /*
         * Before allocating the hugepage, release the mmap_sem read lock.
         * The allocation can take potentially a long time if it involves
@@ -2338,8 +2344,7 @@ static struct page
         */
        up_read(&mm->mmap_sem);
 
-       *hpage = alloc_pages_exact_node(node, alloc_hugepage_gfpmask(
-               khugepaged_defrag(), __GFP_OTHER_NODE), HPAGE_PMD_ORDER);
+       *hpage = alloc_pages_exact_node(node, flags, HPAGE_PMD_ORDER);
        if (unlikely(!*hpage)) {
                count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
                *hpage = ERR_PTR(-ENOMEM);
index 69d05acfa18c83799920a143193511917b730c92..ede26291d4aa92ad120bfd006786414fd6d45c56 100644 (file)
@@ -1986,7 +1986,8 @@ retry_cpuset:
                nmask = policy_nodemask(gfp, pol);
                if (!nmask || node_isset(node, *nmask)) {
                        mpol_cond_put(pol);
-                       page = alloc_pages_exact_node(node, gfp, order);
+                       page = alloc_pages_exact_node(node,
+                                               gfp | __GFP_THISNODE, order);
                        goto out;
                }
        }