mm, kasan: fix to call kasan_free_pages() after poisoning page
author seokhoon.yoon <iamyooon@gmail.com>
Fri, 20 May 2016 23:58:47 +0000 (16:58 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Sat, 21 May 2016 00:58:30 +0000 (17:58 -0700)
When CONFIG_PAGE_POISONING and CONFIG_KASAN are enabled,
free_pages_prepare()'s code flow is as follows:

  1) kmemcheck_free_shadow()
  2) kasan_free_pages()
     - marks the page's shadow memory as freed
  3) kernel_poison_pages()
  3.1) checks via KASAN whether access to the page is valid
    ---> an error occurs; KASAN considers this an invalid access
  3.2) poisons the page
  4) kernel_map_pages()

So kasan_free_pages() should be called after poisoning the page.
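
To make the resulting ordering concrete, here is a minimal sketch
(free_pages_prepare_sketch is a hypothetical name for illustration;
the real free_pages_prepare() also performs tail-page checks, tracing,
and other work, as the diff below shows):

  /*
   * Simplified sketch of the fixed ordering; illustrative only,
   * not the full free_pages_prepare().
   */
  static bool free_pages_prepare_sketch(struct page *page, unsigned int order)
  {
          kmemcheck_free_shadow(page, order);

          /*
           * kernel_poison_pages() writes the poison pattern into the
           * page.  If KASAN had already marked the page's shadow as
           * freed, this write would be reported as an invalid
           * (use-after-free) access.
           */
          kernel_poison_pages(page, 1 << order, 0);
          kernel_map_pages(page, 1 << order, 0);

          /* Only now mark the shadow memory as freed. */
          kasan_free_pages(page, order);

          return true;
  }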

Link: http://lkml.kernel.org/r/1463220405-7455-1-git-send-email-iamyooon@gmail.com
Signed-off-by: seokhoon.yoon <iamyooon@gmail.com>
Cc: Andrey Ryabinin <a.ryabinin@samsung.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/page_alloc.c

index 2dd1ba4e70cc8fa59eaae5c27d34520488299837..383b14b4f61d5315f8bb134453bf4f0c7551e49c 100644
@@ -993,7 +993,6 @@ static __always_inline bool free_pages_prepare(struct page *page,
 
        trace_mm_page_free(page, order);
        kmemcheck_free_shadow(page, order);
-       kasan_free_pages(page, order);
 
        /*
         * Check tail pages before head page information is cleared to
@@ -1035,6 +1034,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
        arch_free_page(page, order);
        kernel_poison_pages(page, 1 << order, 0);
        kernel_map_pages(page, 1 << order, 0);
+       kasan_free_pages(page, order);
 
        return true;
 }