BACKPORT: arm64: Correctly bounds check virt_addr_valid
author		Laura Abbott <labbott@redhat.com>
		Wed, 21 Sep 2016 22:25:04 +0000 (15:25 -0700)
committer	Sami Tolvanen <samitolvanen@google.com>
		Wed, 5 Oct 2016 15:09:19 +0000 (08:09 -0700)
virt_addr_valid is supposed to return true if and only if virt_to_page
returns a valid page structure. The current macro does linear-map
arithmetic on whatever address it is given and passes the result to
pfn_valid. A vmalloc or module address can therefore produce a pfn that
happens to be valid, so virt_addr_valid returns true for an address
virt_to_page cannot handle. Fix this by performing the pfn_valid check
only on addresses that have the potential to be valid, i.e. those in
the linear map.
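
For illustration only, a minimal sketch of the failure mode this
closes, written as kernel-module code. The demo function and its
messages are assumptions made up for this example, not part of the
patch:

#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

/* Hypothetical demo: contrast a linear-map address with a vmalloc one. */
static void virt_addr_valid_demo(void)
{
	void *lin = kmalloc(16, GFP_KERNEL);	/* linear-map address */
	void *vml = vmalloc(PAGE_SIZE);		/* vmalloc address    */

	if (!lin || !vml)
		goto out;

	/* Linear-map addresses are what virt_to_page() is defined for. */
	pr_info("kmalloc: virt_addr_valid=%d\n", virt_addr_valid(lin));

	/*
	 * Old macro: __pa(vml) is computed as if vml were in the linear
	 * map, and the bogus pfn may still pass pfn_valid(), so this can
	 * print 1 even though virt_to_page(vml) would return the wrong
	 * struct page. Patched macro: vml < PAGE_OFFSET fails
	 * _virt_addr_is_linear(), so this reliably prints 0.
	 */
	pr_info("vmalloc: virt_addr_valid=%d\n", virt_addr_valid(vml));

out:
	kfree(lin);	/* kfree/vfree accept NULL */
	vfree(vml);
}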

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 31374226
Change-Id: I75cbeb3edb059f19af992b7f5d0baa283f95991b
(cherry picked from commit ca219452c6b8a6cd1369b6a78b1cf069d0386865)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
arch/arm64/include/asm/memory.h

index 12f8a00fb3f1767a645a04358dcaca08fd4f6b43..ba1b3409d7edd1fc349ef474631d1fd263226e78 100644
@@ -193,7 +193,11 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET                ((unsigned long)PHYS_PFN_OFFSET)
 
 #define virt_to_page(kaddr)    pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define        virt_addr_valid(kaddr)  pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#define _virt_addr_valid(kaddr)        pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+
+#define _virt_addr_is_linear(kaddr)    (((u64)(kaddr)) >= PAGE_OFFSET)
+#define virt_addr_valid(kaddr)         (_virt_addr_is_linear(kaddr) && \
+                                        _virt_addr_valid(kaddr))
 
 #endif
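
As a design note, a single unsigned comparison suffices because in the
arm64 layout of this era the linear map occupies the upper half of the
kernel VA space, starting at PAGE_OFFSET, with the vmalloc and module
regions below it. A sketch of what the new check sees, assuming
VA_BITS=48 (the sample addresses are made up for illustration):

	/* PAGE_OFFSET == 0xffff800000000000 for VA_BITS=48 */
	u64 lin = 0xffff800012345000;	/* >= PAGE_OFFSET: goes on to pfn_valid() */
	u64 vml = 0xffff000009000000;	/* <  PAGE_OFFSET: rejected outright      */

The (u64) cast in _virt_addr_is_linear keeps the comparison unsigned
regardless of the pointer type the caller passes in.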