mm/memory.c: make apply_to_page_range() more robust
author     Mika Penttilä <mika.penttila@nextfour.com>
           Tue, 15 Mar 2016 21:56:45 +0000 (14:56 -0700)
committer  Linus Torvalds <torvalds@linux-foundation.org>
           Tue, 15 Mar 2016 23:55:16 +0000 (16:55 -0700)
arm and arm64 used to trigger this BUG_ON(); that has since been fixed.

But a WARN_ON() here is sufficient to catch future buggy callers: warn about the bogus (empty) range and return -EINVAL to the caller instead of taking the whole machine down.
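
For illustration only, a minimal sketch (not part of the patch) of a hypothetical caller that passes an empty range; the names example_set_pte and example_map_nothing are invented, and the pte_fn_t callback uses the four-argument form in use at the time of this commit. Previously the BUG_ON() would have paniced the kernel; with this change the caller just sees a WARN_ON backtrace in the log and gets -EINVAL back:

    #include <linux/mm.h>
    #include <linux/printk.h>

    /* Hypothetical per-PTE callback; never reached for an empty range. */
    static int example_set_pte(pte_t *pte, pgtable_t token,
                               unsigned long addr, void *data)
    {
            return 0;
    }

    static int example_map_nothing(struct mm_struct *mm, unsigned long addr)
    {
            int err;

            /* size == 0 means addr >= addr + size, i.e. an empty range */
            err = apply_to_page_range(mm, addr, 0, example_set_pte, NULL);
            if (err)        /* now -EINVAL plus a WARN_ON splat, not a panic */
                    pr_warn("apply_to_page_range on empty range: %d\n", err);
            return err;
    }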

Signed-off-by: Mika Penttilä <mika.penttila@nextfour.com>
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memory.c

index 8132787ae4d509d475ed6a77705076ddfee30630..8adb5b75626462a26dcf4019e0b7725d2a844b64 100644
@@ -1876,7 +1876,9 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
        unsigned long end = addr + size;
        int err;
 
-       BUG_ON(addr >= end);
+       if (WARN_ON(addr >= end))
+               return -EINVAL;
+
        pgd = pgd_offset(mm, addr);
        do {
                next = pgd_addr_end(addr, end);