The current definition of pte_page() masks the pte with PTE_PHYS_MASK
(0x1ffff000), stripping valid high bits from the physical address and
causing vmalloc_to_page() to misbehave. This may lead to anything from
mmap() silently accessing the wrong data to "invalid pte" errors dumped
by the kernel. Convert the pte to a pfn and use pfn_to_page() instead,
which preserves the full physical address.
Also remove the now-unused definition of PTE_PHYS_MASK.
Thanks to Matteo Vit for discovering this bug.
Signed-off-by: Haavard Skinnemoen <hskinnemoen@atmel.com>
#define USER_PTRS_PER_PGD (TASK_SIZE / PGDIR_SIZE)
#define FIRST_USER_ADDRESS 0
-#define PTE_PHYS_MASK 0x1ffff000
-
#ifndef __ASSEMBLY__
extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
extern void paging_init(void);
* trivial.
*/
#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
-#define pte_page(x) phys_to_page(pte_val(x) & PTE_PHYS_MASK)
+#define pte_page(x) (pfn_to_page(pte_pfn(x)))
/*
* Mark the prot value as uncacheable and unbufferable