x86/mm: Flush lazy MMU when DEBUG_PAGEALLOC is set
Author:     Boris Ostrovsky <boris.ostrovsky@oracle.com>
AuthorDate: Thu, 11 Apr 2013 17:59:52 +0000 (13:59 -0400)
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 12 Apr 2013 05:19:19 +0000 (07:19 +0200)
When CONFIG_DEBUG_PAGEALLOC is set, page table updates made by
kernel_map_pages() are not made visible (via TLB flush)
immediately if lazy MMU mode is on. In environments that support
lazy MMU (e.g. Xen) this may lead to fatal page faults, for
example when zap_pte_range() needs to allocate pages in
__tlb_remove_page() -> tlb_next_batch().
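
For reference, a minimal sketch of the pattern involved, using only
the generic paravirt lazy-MMU hooks; the function name below is
hypothetical and the surrounding call chain is condensed to comments,
so this is an illustration rather than the actual code path:

    #include <asm/pgtable.h>    /* arch_*_lazy_mmu_mode() hooks */

    /* Hypothetical illustration of a lazy-MMU section (e.g. under Xen). */
    static void lazy_mmu_sketch(void)
    {
            arch_enter_lazy_mmu_mode();

            /*
             * PTE updates issued here may be queued by the hypervisor
             * backend instead of being applied immediately.  With
             * CONFIG_DEBUG_PAGEALLOC, an allocation inside this window
             * (zap_pte_range() -> __tlb_remove_page() -> tlb_next_batch())
             * reaches kernel_map_pages(), whose PTE updates are queued
             * too, so touching the newly mapped page can fault fatally.
             */

            /*
             * Forcing the queued updates out makes the mapping visible
             * right away; this is what the call added to
             * kernel_map_pages() below does.
             */
            arch_flush_lazy_mmu_mode();

            arch_leave_lazy_mmu_mode();
    }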

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: konrad.wilk@oracle.com
Link: http://lkml.kernel.org/r/1365703192-2089-1-git-send-email-boris.ostrovsky@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 7896f7190fda3de59736b44bff6e8900bea8ec81..fb4e73ec24d8d1b8b894b6c09f7c082a49dc1dcf 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1413,6 +1413,8 @@ void kernel_map_pages(struct page *page, int numpages, int enable)
         * but that can deadlock->flush only current cpu:
         */
        __flush_tlb_all();
+
+       arch_flush_lazy_mmu_mode();
 }
 
 #ifdef CONFIG_HIBERNATION