thp+memcg-numa: fix BUG at include/linux/mm.h:370!
author Hugh Dickins <hughd@google.com>
Mon, 14 Mar 2011 08:08:47 +0000 (01:08 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Mon, 14 Mar 2011 15:29:50 +0000 (08:29 -0700)
THP's collapse_huge_page() has an understandable but ugly difference
in when its huge page is allocated: inside the function if NUMA, but
outside it (by khugepaged) if not.  It's hardly surprising that the
memcg charge-failure path forgot that, freeing the page even in the
non-NUMA case where it belongs to khugepaged, then hitting a VM_BUG_ON
in get_page() (or, even worse, using the freed page).

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
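
For context, here is a simplified sketch (argument lists abbreviated;
not the verbatim source) of the pre-patch flow in collapse_huge_page()
that the diff below corrects.  The single charge-failure branch sitting
after the #endif called put_page() unconditionally, which is only
correct in the NUMA case, where the function owns the page it has just
allocated:

#ifndef CONFIG_NUMA
	/* !NUMA: khugepaged preallocated *hpage and still references it */
	VM_BUG_ON(!*hpage);
	new_page = *hpage;
#else
	/* NUMA: the huge page is allocated here and owned by this function */
	VM_BUG_ON(*hpage);
	new_page = alloc_hugepage_vma(...);
	if (unlikely(!new_page)) {
		up_read(&mm->mmap_sem);
		*hpage = ERR_PTR(-ENOMEM);
		return;
	}
#endif
	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
		up_read(&mm->mmap_sem);
		/*
		 * BUG in the !NUMA case: this frees khugepaged's *hpage,
		 * which the caller goes on to pass to get_page(), tripping
		 * the VM_BUG_ON at include/linux/mm.h:370 (or worse, the
		 * freed page gets reused).
		 */
		put_page(new_page);
		return;
	}

The fix duplicates the charge into the !NUMA branch (where failure only
needs to drop mmap_sem) and keeps the put_page() variant for the NUMA
branch only.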
mm/huge_memory.c

index dbe99a5f2073927741442a13f99123de127973c7..113e35c4750209b7cf6f61f54a041f37219ebc4d 100644
@@ -1762,6 +1762,10 @@ static void collapse_huge_page(struct mm_struct *mm,
 #ifndef CONFIG_NUMA
        VM_BUG_ON(!*hpage);
        new_page = *hpage;
+       if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
+               up_read(&mm->mmap_sem);
+               return;
+       }
 #else
        VM_BUG_ON(*hpage);
        /*
@@ -1781,12 +1785,12 @@ static void collapse_huge_page(struct mm_struct *mm,
                *hpage = ERR_PTR(-ENOMEM);
                return;
        }
-#endif
        if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
                up_read(&mm->mmap_sem);
                put_page(new_page);
                return;
        }
+#endif
 
        /* after allocating the hugepage upgrade to mmap_sem write mode */
        up_read(&mm->mmap_sem);