From: Jann Horn
Date: Fri, 25 Nov 2022 21:37:14 +0000 (+0100)
Subject: mm/khugepaged: invoke MMU notifiers in shmem/file collapse paths
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=c23105673228c349739e958fa33955ed8faddcaf;p=GitHub%2FLineageOS%2Fandroid_kernel_motorola_exynos9610.git

mm/khugepaged: invoke MMU notifiers in shmem/file collapse paths

commit f268f6cf875f3220afc77bdd0bf1bb136eb54db9 upstream.

Any codepath that zaps page table entries must invoke MMU notifiers to
ensure that secondary MMUs (like KVM) don't keep accessing pages which
aren't mapped anymore.  Secondary MMUs don't hold their own references to
pages that are mirrored over, so failing to notify them can lead to page
use-after-free.

I'm marking this as addressing an issue introduced in commit f3f0e1d2150b
("khugepaged: add support of collapse for tmpfs/shmem pages"), but most of
the security impact of this only came in commit 27e1f8273113 ("khugepaged:
enable collapse pmd for pte-mapped THP"), which actually omitted flushes
for the removal of present PTEs, not just for the removal of empty page
tables.

Link: https://lkml.kernel.org/r/20221129154730.2274278-3-jannh@google.com
Link: https://lkml.kernel.org/r/20221128180252.1684965-3-jannh@google.com
Link: https://lkml.kernel.org/r/20221125213714.4115729-3-jannh@google.com
Fixes: f3f0e1d2150b ("khugepaged: add support of collapse for tmpfs/shmem pages")
Signed-off-by: Jann Horn
Acked-by: David Hildenbrand
Reviewed-by: Yang Shi
Cc: John Hubbard
Cc: Peter Xu
Cc:
Signed-off-by: Andrew Morton
[manual backport: this code was refactored from two copies into a common
 helper between 5.15 and 6.0; pmd collapse for PTE-mapped THP was only
 added in 5.4; MMU notifier API changed between 4.19 and 5.4]
Signed-off-by: Jann Horn
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6f8a1b423538..644f0a9c8a55 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1304,13 +1304,20 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
 		 */
 		if (down_write_trylock(&mm->mmap_sem)) {
 			if (!khugepaged_test_exit(mm)) {
-				spinlock_t *ptl = pmd_lock(mm, pmd);
+				spinlock_t *ptl;
+				unsigned long end = addr + HPAGE_PMD_SIZE;
+
+				mmu_notifier_invalidate_range_start(mm, addr,
+								    end);
+				ptl = pmd_lock(mm, pmd);
 				/* assume page table is clear */
 				_pmd = pmdp_collapse_flush(vma, addr, pmd);
 				spin_unlock(ptl);
 				atomic_long_dec(&mm->nr_ptes);
 				tlb_remove_table_sync_one();
 				pte_free(mm, pmd_pgtable(_pmd));
+				mmu_notifier_invalidate_range_end(mm, addr,
+								  end);
 			}
 			up_write(&mm->mmap_sem);
 		}
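
For readability, this is roughly how the patched block in retract_page_tables()
reads once the hunk above applies. It is a sketch reconstructed from the diff
(with two descriptive comments added), not a verbatim copy of the tree, and it
assumes the 4.19-era three-argument mmu_notifier_invalidate_range_start()/end()
signature mentioned in the backport note:

	if (down_write_trylock(&mm->mmap_sem)) {
		if (!khugepaged_test_exit(mm)) {
			spinlock_t *ptl;
			unsigned long end = addr + HPAGE_PMD_SIZE;

			/* tell secondary MMUs (e.g. KVM) this range is going away */
			mmu_notifier_invalidate_range_start(mm, addr, end);
			ptl = pmd_lock(mm, pmd);
			/* assume page table is clear */
			_pmd = pmdp_collapse_flush(vma, addr, pmd);
			spin_unlock(ptl);
			atomic_long_dec(&mm->nr_ptes);
			tlb_remove_table_sync_one();
			pte_free(mm, pmd_pgtable(_pmd));
			/* secondary MMUs may re-establish mappings after this point */
			mmu_notifier_invalidate_range_end(mm, addr, end);
		}
		up_write(&mm->mmap_sem);
	}

The start/end pair brackets both the pmdp_collapse_flush() and the pte_free(),
so a secondary MMU is told to drop its mirrored mappings before the page table
page can be freed and reused.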