From: Vlastimil Babka
Date: Wed, 1 Nov 2017 07:21:25 +0000 (+0100)
Subject: x86/mm: fix use-after-free of vma during userfaultfd fault
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=cb0631fd3cf9e989cd48293fe631cbc402aec9a9;p=GitHub%2FLineageOS%2Fandroid_kernel_motorola_exynos9610.git

x86/mm: fix use-after-free of vma during userfaultfd fault

Syzkaller with KASAN has reported a use-after-free of vma->vm_flags in
__do_page_fault() with the following reproducer:

  mmap(&(0x7f0000000000/0xfff000)=nil, 0xfff000, 0x3, 0x32, 0xffffffffffffffff, 0x0)
  mmap(&(0x7f0000011000/0x3000)=nil, 0x3000, 0x1, 0x32, 0xffffffffffffffff, 0x0)
  r0 = userfaultfd(0x0)
  ioctl$UFFDIO_API(r0, 0xc018aa3f, &(0x7f0000002000-0x18)={0xaa, 0x0, 0x0})
  ioctl$UFFDIO_REGISTER(r0, 0xc020aa00, &(0x7f0000019000)={{&(0x7f0000012000/0x2000)=nil, 0x2000}, 0x1, 0x0})
  r1 = gettid()
  syz_open_dev$evdev(&(0x7f0000013000-0x12)="2f6465762f696e7075742f6576656e742300", 0x0, 0x0)
  tkill(r1, 0x7)

The vma should be pinned by mmap_sem, but handle_userfault() might (in a
return-to-userspace scenario) release it and then acquire it again, so
when we return to __do_page_fault() (with a result other than
VM_FAULT_RETRY), the vma might be gone.

Specifically, per Andrea the scenario is
 "A return to userland to repeat the page fault later with a
  VM_FAULT_NOPAGE retval (potentially after handling any pending signal
  during the return to userland). The return to userland is identified
  whenever FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in
  vmf->flags"

However, since commit a3c4fb7c9c2e ("x86/mm: Fix fault error path using
unsafe vma pointer") there is a vma_pkey() read of vma->vm_flags after
that point, which can thus become a use-after-free.  Fix this by moving
the read before calling handle_mm_fault().

Reported-by: syzbot
Reported-by: Dmitry Vyukov
Suggested-by: Kirill A. Shutemov
Fixes: a3c4fb7c9c2e ("x86/mm: Fix fault error path using unsafe vma pointer")
Reviewed-by: Andrea Arcangeli
Signed-off-by: Vlastimil Babka
Signed-off-by: Linus Torvalds
---

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e2baeaa053a5..7101c281c7ce 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1440,7 +1440,17 @@ good_area:
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault. Since we never set FAULT_FLAG_RETRY_NOWAIT, if
 	 * we get VM_FAULT_RETRY back, the mmap_sem has been unlocked.
+	 *
+	 * Note that handle_userfault() may also release and reacquire mmap_sem
+	 * (and not return with VM_FAULT_RETRY), when returning to userland to
+	 * repeat the page fault later with a VM_FAULT_NOPAGE retval
+	 * (potentially after handling any pending signal during the return to
+	 * userland). The return to userland is identified whenever
+	 * FAULT_FLAG_USER|FAULT_FLAG_KILLABLE are both set in flags.
+	 * Thus we have to be careful about not touching vma after handling the
+	 * fault, so we read the pkey beforehand.
 	 */
+	pkey = vma_pkey(vma);
 	fault = handle_mm_fault(vma, address, flags);
 	major |= fault & VM_FAULT_MAJOR;
 
@@ -1467,7 +1477,6 @@ good_area:
 		return;
 	}
 
-	pkey = vma_pkey(vma);
 	up_read(&mm->mmap_sem);
 	if (unlikely(fault & VM_FAULT_ERROR)) {
 		mm_fault_error(regs, error_code, address, &pkey, fault);
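
For readers outside the kernel tree, the hazard the patch removes boils down
to an ordering rule: once a call may free the object it was handed, any field
of that object that is still needed afterwards has to be read before that
call. Below is a minimal user-space C sketch of that rule; the names
(struct region, handle_fault) are hypothetical stand-ins, not kernel code.

        /*
         * Sketch of the ordering fixed above: snapshot the needed field
         * before the call that may free the object, just as the patch moves
         * vma_pkey(vma) before handle_mm_fault().
         */
        #include <stdio.h>
        #include <stdlib.h>

        struct region {                 /* stand-in for vm_area_struct      */
                unsigned long flags;    /* stand-in for vm_flags / pkey bits */
        };

        /* Stand-in for handle_mm_fault(): the callee may tear the object down. */
        static int handle_fault(struct region **rp)
        {
                free(*rp);              /* the object may be gone after this */
                *rp = NULL;
                return 0;
        }

        int main(void)
        {
                struct region *r = malloc(sizeof(*r));

                if (!r)
                        return 1;
                r->flags = 0x42;

                /* Correct ordering: read what we need while r is still valid... */
                unsigned long flags = r->flags;

                handle_fault(&r);

                /* ...because reading r->flags here would be the use-after-free. */
                printf("flags read before the fault handler: %#lx\n", flags);
                return 0;
        }

The same ordering is why the hunk above reads the pkey before
handle_mm_fault() instead of from vma afterwards.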