From 6b79c57b92cdd90853002980609af516d14c4f9c Mon Sep 17 00:00:00 2001
From: Naoya Horiguchi
Date: Tue, 7 Apr 2015 14:26:47 -0700
Subject: [PATCH] mm: numa: disable change protection for vma(VM_HUGETLB)

Currently when a process accesses a hugetlb range protected with
PROTNONE, unexpected COWs are triggered, which finally puts the hugetlb
subsystem into a broken/uncontrollable state, where for example
h->resv_huge_pages is subtracted too much and wraps around to a very
large number, and the free hugepage pool is no longer maintainable.

This patch simply stops changing protection for vma(VM_HUGETLB) to fix
the problem.  And this also allows us to avoid useless overhead of minor
faults.

Signed-off-by: Naoya Horiguchi
Suggested-by: Mel Gorman
Cc: Hugh Dickins
Cc: "Kirill A. Shutemov"
Cc: David Rientjes
Cc: Rik van Riel
Cc: Peter Zijlstra
Cc: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 kernel/sched/fair.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index bcfe32088b37..241213be507c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2165,8 +2165,10 @@ void task_numa_work(struct callback_head *work)
 		vma = mm->mmap;
 	}
 	for (; vma; vma = vma->vm_next) {
-		if (!vma_migratable(vma) || !vma_policy_mof(vma))
+		if (!vma_migratable(vma) || !vma_policy_mof(vma) ||
+			is_vm_hugetlb_page(vma)) {
 			continue;
+		}
 
 		/*
 		 * Shared library pages mapped by multiple processes are not
-- 
2.20.1
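
For illustration only, below is a minimal standalone C sketch of the filtering idea the hunk above adds: recognize hugetlb-backed VMAs by the VM_HUGETLB flag and skip them before any NUMA-hinting protection change would be applied.  This is not the kernel implementation; the struct layout, the scan loop, and main() are simplified stand-ins, the vma_migratable()/vma_policy_mof() checks are omitted, and the VM_HUGETLB value and helper name only mirror their kernel counterparts under those assumptions.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for kernel types; the real definitions live in
 * include/linux/mm_types.h and include/linux/hugetlb_inline.h. */
#define VM_HUGETLB	0x00400000UL

struct vm_area_struct {
	unsigned long vm_start;
	unsigned long vm_end;
	unsigned long vm_flags;
	struct vm_area_struct *vm_next;
};

/* Mirrors the kernel helper of the same name: a VMA is a hugetlb
 * mapping iff VM_HUGETLB is set in its flags. */
static bool is_vm_hugetlb_page(const struct vm_area_struct *vma)
{
	return vma->vm_flags & VM_HUGETLB;
}

/* Sketch of the scan loop after the patch: hugetlb VMAs are skipped,
 * so no PROT_NONE protection change (and thus no spurious hugetlb COW)
 * is ever queued for them. */
static void scan_vmas(struct vm_area_struct *vma)
{
	for (; vma; vma = vma->vm_next) {
		if (is_vm_hugetlb_page(vma)) {
			continue;	/* leave hugetlb protections untouched */
		}
		printf("would mark [%#lx, %#lx) for NUMA hinting faults\n",
		       vma->vm_start, vma->vm_end);
	}
}

int main(void)
{
	struct vm_area_struct huge = { 0x200000, 0x400000, VM_HUGETLB, NULL };
	struct vm_area_struct anon = { 0x100000, 0x200000, 0, &huge };

	scan_vmas(&anon);	/* only the anonymous VMA is reported */
	return 0;
}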