x86, mm: Make spurious_fault check explicitly check the PRESENT bit
author Shaohua Li <shaohua.li@intel.com>
Tue, 27 Jul 2010 08:06:28 +0000 (16:06 +0800)
committer H. Peter Anvin <hpa@linux.intel.com>
Thu, 26 Aug 2010 23:00:21 +0000 (16:00 -0700)
pte_present() returns true even when the present bit is not set, as long
as the _PAGE_PROTNONE bit is set, and _PAGE_PROTNONE aliases the global
bit. With CONFIG_DEBUG_PAGEALLOC, free pages have the global bit set but
the present bit clear. This patch lets us catch accesses to free pages
when CONFIG_DEBUG_PAGEALLOC is enabled.

[ hpa: added a comment in the code as a warning to janitors ]

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
LKML-Reference: <1280217988.32400.75.camel@sli10-desk.sh.intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
arch/x86/mm/fault.c

index 51f7ee71d6c716c7fb1868f5a7ecc8027ea30a86..caec22906d7c4a9e76a69913099cb9aaf1570dc8 100644
@@ -872,8 +872,14 @@ spurious_fault(unsigned long error_code, unsigned long address)
        if (pmd_large(*pmd))
                return spurious_fault_check(error_code, (pte_t *) pmd);
 
+       /*
+        * Note: don't use pte_present() here, since it returns true
+        * if the _PAGE_PROTNONE bit is set.  However, this aliases the
+        * _PAGE_GLOBAL bit, which for kernel pages gives false positives
+        * when CONFIG_DEBUG_PAGEALLOC is used.
+        */
        pte = pte_offset_kernel(pmd, address);
-       if (!pte_present(*pte))
+       if (!(pte_flags(*pte) & _PAGE_PRESENT))
                return 0;
 
        ret = spurious_fault_check(error_code, pte);
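
To illustrate the distinction the patch relies on, here is a minimal
user-space sketch (not kernel code). The flag values mirror
arch/x86/include/asm/pgtable_types.h, where _PAGE_PROTNONE shares bit 8
with _PAGE_GLOBAL; the helpers pte_present_like() and
pte_really_present() are hypothetical stand-ins for the kernel's
pte_present() and the explicit _PAGE_PRESENT check added by the patch.

#include <stdio.h>

#define _PAGE_PRESENT   (1UL << 0)
#define _PAGE_GLOBAL    (1UL << 8)
#define _PAGE_PROTNONE  (1UL << 8)      /* aliases _PAGE_GLOBAL on x86 */

/* Simplified model of pte_present(): present OR protnone */
static int pte_present_like(unsigned long flags)
{
        return (flags & (_PAGE_PRESENT | _PAGE_PROTNONE)) != 0;
}

/* The explicit check spurious_fault() uses after this patch */
static int pte_really_present(unsigned long flags)
{
        return (flags & _PAGE_PRESENT) != 0;
}

int main(void)
{
        /*
         * A kernel page freed with CONFIG_DEBUG_PAGEALLOC: the present
         * bit is cleared, but the global bit is still set.
         */
        unsigned long freed_pte_flags = _PAGE_GLOBAL;

        printf("pte_present()-style check:    %d  (false positive)\n",
               pte_present_like(freed_pte_flags));
        printf("explicit _PAGE_PRESENT check: %d  (not present)\n",
               pte_really_present(freed_pte_flags));
        return 0;
}

With the explicit _PAGE_PRESENT test, a fault on such a freed page is no
longer reported as spurious, so the bad access is caught instead of
being silently retried.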