From: Larry Woodman
Date: Tue, 15 Dec 2009 01:59:37 +0000 (-0800)
Subject: hugetlb: prevent deadlock in __unmap_hugepage_range() when alloc_huge_page() fails
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=b76c8cfbff94263fdf2f408e94b78b049c24a9dc;p=GitHub%2FLineageOS%2Fandroid_kernel_motorola_exynos9610.git

hugetlb: prevent deadlock in __unmap_hugepage_range() when alloc_huge_page() fails

hugetlb_fault() takes the mm->page_table_lock spinlock and then calls
hugetlb_cow().  If the alloc_huge_page() in hugetlb_cow() fails due to
an insufficient huge page pool, it calls unmap_ref_private() with the
mm->page_table_lock held.  unmap_ref_private() then calls
unmap_hugepage_range(), which tries to acquire the mm->page_table_lock
again, deadlocking on the non-recursive spinlock:

[] print_circular_bug_tail+0x80/0x9f
[] ? check_noncircular+0xb0/0xe8
[] __lock_acquire+0x956/0xc0e
[] lock_acquire+0xee/0x12e
[] ? unmap_hugepage_range+0x3e/0x84
[] ? unmap_hugepage_range+0x3e/0x84
[] _spin_lock+0x40/0x89
[] ? unmap_hugepage_range+0x3e/0x84
[] ? alloc_huge_page+0x218/0x318
[] unmap_hugepage_range+0x3e/0x84
[] hugetlb_cow+0x1e2/0x3f4
[] ? hugetlb_fault+0x453/0x4f6
[] hugetlb_fault+0x480/0x4f6
[] follow_hugetlb_page+0x116/0x2d9
[] ? _spin_unlock_irq+0x3a/0x5c
[] __get_user_pages+0x2a3/0x427
[] get_user_pages+0x3e/0x54
[] get_user_pages_fast+0x170/0x1b5
[] dio_get_page+0x64/0x14a
[] __blockdev_direct_IO+0x4b7/0xb31
[] blkdev_direct_IO+0x58/0x6e
[] ? blkdev_get_blocks+0x0/0xb8
[] generic_file_aio_read+0xdd/0x528
[] ? avc_has_perm+0x66/0x8c
[] do_sync_read+0xf5/0x146
[] ? autoremove_wake_function+0x0/0x5a
[] ? security_file_permission+0x24/0x3a
[] vfs_read+0xb5/0x126
[] ? fget_light+0x5e/0xf8
[] sys_read+0x54/0x8c
[] system_call_fastpath+0x16/0x1b

This can be fixed by dropping the mm->page_table_lock around the call
to unmap_ref_private() when alloc_huge_page() fails; it is dropped
right below in the normal path anyway.  However, earlier in that
function it is also possible to call into the page allocator with the
same spinlock held.

What this patch does is drop the spinlock before the page allocator is
potentially entered.  Both the check for page allocation failure and
the copy of the huge page can be done without the page_table_lock
held.  Even if the PTE changed while the spinlock was dropped, the
only consequence is that a huge page may be copied unnecessarily.
This resolves both the double taking of the lock and sleeping with the
spinlock held.
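To see why the trace above implies a hard hang: mm->page_table_lock is
an ordinary non-recursive spinlock, so a second acquisition on the same
CPU spins forever.  The following is a minimal userspace sketch of the
same pattern, not kernel code; it uses a POSIX spinlock as a stand-in
for the kernel lock, and the *_sketch() names are hypothetical mirrors
of the call chain in the trace:

#include <pthread.h>
#include <stdio.h>

/* Stand-in for mm->page_table_lock; POSIX spinlocks, like kernel
 * spinlocks, are not recursive. */
static pthread_spinlock_t page_table_lock;

/* Mirrors unmap_hugepage_range(): acquires the lock internally. */
static void unmap_hugepage_range_sketch(void)
{
	pthread_spin_lock(&page_table_lock);	/* second acquisition: spins forever */
	/* ... tear down the mappings ... */
	pthread_spin_unlock(&page_table_lock);
}

/* Mirrors the alloc_huge_page() failure path in hugetlb_cow(),
 * which runs with the lock already held. */
static void hugetlb_cow_failure_sketch(void)
{
	unmap_hugepage_range_sketch();		/* deadlock */
}

int main(void)
{
	pthread_spin_init(&page_table_lock, PTHREAD_PROCESS_PRIVATE);

	/* Mirrors hugetlb_fault(): take the lock, then enter the COW path */
	pthread_spin_lock(&page_table_lock);
	hugetlb_cow_failure_sketch();		/* never returns */
	pthread_spin_unlock(&page_table_lock);

	puts("unreachable");
	return 0;
}

Built with "gcc -pthread", this program hangs exactly where the kernel
did.  In the kernel the situation is worse: a CPU spinning with the
page_table_lock held also stalls every other task contending for it.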
[mel@csn.ul.ie: Cover also the case where process can sleep with spinlock]
Signed-off-by: Larry Woodman
Signed-off-by: Mel Gorman
Acked-by: Adam Litke
Cc: Andy Whitcroft
Cc: Lee Schermerhorn
Cc: Hugh Dickins
Cc: David Gibson
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 450493d25572..2ef66a2a148d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2293,6 +2293,9 @@ retry_avoidcopy:
 		outside_reserve = 1;
 
 	page_cache_get(old_page);
+
+	/* Drop page_table_lock as buddy allocator may be called */
+	spin_unlock(&mm->page_table_lock);
 	new_page = alloc_huge_page(vma, address, outside_reserve);
 
 	if (IS_ERR(new_page)) {
@@ -2310,19 +2313,25 @@ retry_avoidcopy:
 			if (unmap_ref_private(mm, vma, old_page, address)) {
 				BUG_ON(page_count(old_page) != 1);
 				BUG_ON(huge_pte_none(pte));
+				spin_lock(&mm->page_table_lock);
 				goto retry_avoidcopy;
 			}
 			WARN_ON_ONCE(1);
 		}
 
+		/* Caller expects lock to be held */
+		spin_lock(&mm->page_table_lock);
 		return -PTR_ERR(new_page);
 	}
 
-	spin_unlock(&mm->page_table_lock);
 	copy_huge_page(new_page, old_page, address, vma);
 	__SetPageUptodate(new_page);
 
-	spin_lock(&mm->page_table_lock);
+	/*
+	 * Retake the page_table_lock to check for racing updates
+	 * before the page tables are altered
+	 */
+	spin_lock(&mm->page_table_lock);
 	ptep = huge_pte_offset(mm, address & huge_page_mask(h));
 	if (likely(pte_same(huge_ptep_get(ptep), pte))) {
 		/* Break COW */
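The shape of the fix is the usual drop-and-revalidate pattern: release
the spinlock before any operation that may sleep, retake it afterwards,
and re-check the guarded state (in the patch, via pte_same()) before
altering the page tables.  A condensed userspace sketch of that
pattern, again with a POSIX spinlock and hypothetical names standing in
for the patched hugetlb_cow() flow rather than the actual kernel code:

#include <pthread.h>
#include <stdlib.h>

static pthread_spinlock_t page_table_lock;
static unsigned long pte;	/* stand-in for the huge-page PTE */

/* Stand-in for alloc_huge_page()+copy_huge_page(): may sleep, so it
 * must never run with the spinlock held. */
static void *alloc_and_copy(void)
{
	return malloc(4096);
}

/* Entered with page_table_lock held, returns with it held,
 * mirroring the contract hugetlb_cow() has with hugetlb_fault(). */
static int cow_sketch(void)
{
	unsigned long snap;
	void *new_page;

	snap = pte;			/* snapshot the PTE under the lock */

	/* Drop the lock before the allocator is potentially entered */
	pthread_spin_unlock(&page_table_lock);
	new_page = alloc_and_copy();
	if (!new_page) {
		/* Caller expects the lock to be held on return */
		pthread_spin_lock(&page_table_lock);
		return -1;
	}

	/*
	 * Retake the lock and check for racing updates before altering
	 * the page tables.  If the PTE changed while the lock was
	 * dropped, the only cost is an unnecessary copy.
	 */
	pthread_spin_lock(&page_table_lock);
	if (pte == snap)
		pte = (unsigned long)new_page;	/* break COW */
	else
		free(new_page);			/* wasted copy, but safe */
	return 0;				/* lock still held for caller */
}

int main(void)
{
	pthread_spin_init(&page_table_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_lock(&page_table_lock);	/* mirrors hugetlb_fault() */
	cow_sketch();
	pthread_spin_unlock(&page_table_lock);
	return 0;
}

The revalidation is what makes the dropped-lock window harmless: the
only state touched outside the lock is the private new page, so a
racing update costs at most one redundant copy, never corruption.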