x86/mm: Check for pfn instead of page in vmalloc_sync_one()
author Joerg Roedel <jroedel@suse.de>
Fri, 19 Jul 2019 18:46:50 +0000 (20:46 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sun, 25 Aug 2019 08:51:19 +0000 (10:51 +0200)
commit 51b75b5b563a2637f9d8dc5bd02a31b2ff9e5ea0 upstream.

Do not require a struct page for the mapped memory location because it
might not exist. This can happen when an ioremapped region is mapped with
2MB pages.
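
To illustrate the difference, here is a minimal userspace sketch, not kernel
code: the pmd_t layout, MAX_RAM_PFN, the memmap array and the
model_pmd_pfn()/model_pmd_page() helpers are all simplifying assumptions. It
models the point that pmd_page() has to translate the pfn stored in the PMD
into a struct page through the memmap, which only covers RAM, while pmd_pfn()
merely extracts the frame number from the entry, so it is also valid for an
ioremapped MMIO range mapped with a 2MB page.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified model only: flag bits live below PAGE_SHIFT, pfn above. */
typedef struct { uint64_t val; } pmd_t;

#define PAGE_SHIFT   12
#define MAX_RAM_PFN  0x100000ULL        /* pretend RAM ends at 4 GiB */

/* Hypothetical struct page array covering RAM pfns only. */
struct page { int dummy; };
static struct page memmap[MAX_RAM_PFN];

/* Extracting the pfn works for any mapping, RAM or MMIO. */
static uint64_t model_pmd_pfn(pmd_t pmd)
{
        return pmd.val >> PAGE_SHIFT;
}

/* Translating to a struct page is only meaningful for RAM-backed pfns. */
static struct page *model_pmd_page(pmd_t pmd)
{
        uint64_t pfn = model_pmd_pfn(pmd);

        assert(pfn < MAX_RAM_PFN);      /* no struct page exists for MMIO pfns */
        return &memmap[pfn];
}

int main(void)
{
        /* Normal RAM mapping: both views agree and are valid. */
        pmd_t ram_pmd = { .val = (0x1000ULL << PAGE_SHIFT) | 0x1e3 };
        printf("RAM pfn %llu, struct page %p\n",
               (unsigned long long)model_pmd_pfn(ram_pmd),
               (void *)model_pmd_page(ram_pmd));

        /* 2MB ioremap mapping: physical address above RAM, e.g. device MMIO. */
        pmd_t io_pmd  = { .val = (0x200000ULL << PAGE_SHIFT) | 0x1e3 };
        pmd_t io_pmd2 = { .val = (0x200000ULL << PAGE_SHIFT) | 0x1e3 };

        /* Comparing pfns, as the fixed BUG_ON() does, is well defined here... */
        printf("I/O pfns equal: %d\n",
               model_pmd_pfn(io_pmd) == model_pmd_pfn(io_pmd2));

        /*
         * ...while a struct-page comparison would first have to compute a
         * pointer that does not exist for this pfn (the assert would fire):
         *
         *      model_pmd_page(io_pmd);
         */
        return 0;
}

Whenever a struct page does exist, the pfn comparison carries the same
information as the struct page comparison, since the struct page pointer is
derived from the pfn, so the consistency check itself is unchanged.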

Fixes: 5d72b4fba40ef ("x86, mm: support huge I/O mapping capability I/F")
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lkml.kernel.org/r/20190719184652.11391-2-joro@8bytes.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/x86/mm/fault.c

index c140198d9fa5e6ae0c0b7b3c3d14ebd01b63e645..2870424bda1ffc89a4418822607cfa208f1d6003 100644
@@ -279,7 +279,7 @@ static inline pmd_t *vmalloc_sync_one(pgd_t *pgd, unsigned long address)
        if (!pmd_present(*pmd))
                set_pmd(pmd, *pmd_k);
        else
-               BUG_ON(pmd_page(*pmd) != pmd_page(*pmd_k));
+               BUG_ON(pmd_pfn(*pmd) != pmd_pfn(*pmd_k));
 
        return pmd_k;
 }