From: Kirill A. Shutemov
Date: Wed, 12 Dec 2012 21:51:02 +0000 (-0800)
Subject: thp: setup huge zero page on non-write page fault
X-Git-Tag: MMI-PSA29.97-13-9~15389^2~44
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=80371957f09814d25c38733d2d08de47f59a13c2;p=GitHub%2FMotorolaMobilityLLC%2Fkernel-slsi.git

thp: setup huge zero page on non-write page fault

All code paths seem covered.  Now we can map the huge zero page on a read
page fault.

We set it up in do_huge_pmd_anonymous_page() if the area around the fault
address is suitable for THP and we've got a read page fault.

If we fail to set up the huge zero page (ENOMEM), we fall back to
handle_pte_fault() as we normally do in THP.

Signed-off-by: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Andi Kleen
Cc: "H. Peter Anvin"
Cc: Mel Gorman
Cc: David Rientjes
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ea0e23fd6967..e1b6f4e13b91 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -733,6 +733,16 @@ int do_huge_pmd_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			return VM_FAULT_OOM;
 		if (unlikely(khugepaged_enter(vma)))
 			return VM_FAULT_OOM;
+		if (!(flags & FAULT_FLAG_WRITE)) {
+			pgtable_t pgtable;
+			pgtable = pte_alloc_one(mm, haddr);
+			if (unlikely(!pgtable))
+				return VM_FAULT_OOM;
+			spin_lock(&mm->page_table_lock);
+			set_huge_zero_page(pgtable, mm, vma, haddr, pmd);
+			spin_unlock(&mm->page_table_lock);
+			return 0;
+		}
 		page = alloc_hugepage_vma(transparent_hugepage_defrag(vma),
 					  vma, haddr, numa_node_id(), 0);
 		if (unlikely(!page)) {
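
For context: the hunk above relies on set_huge_zero_page(), a helper added
earlier in this patch series and not shown in this diff.  A minimal sketch of
what such a helper does, assuming a huge_zero_pfn that points at a
pre-allocated, zero-filled huge page (set up elsewhere in the series) -- an
illustration of the mechanism, not necessarily the exact upstream body:

	/*
	 * Sketch only.  Install a write-protected huge PMD that maps the
	 * shared huge zero page; the caller holds mm->page_table_lock.
	 */
	static void set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
			struct vm_area_struct *vma, unsigned long haddr,
			pmd_t *pmd)
	{
		pmd_t entry;

		/* Build a PMD entry pointing at the shared zero huge page. */
		entry = pfn_pmd(huge_zero_pfn, vma->vm_page_prot);
		/*
		 * Keep it write-protected: a later write must fault so
		 * do_huge_pmd_wp_page() can replace the zero page with a
		 * real huge page (or fall back).
		 */
		entry = pmd_wrprotect(entry);
		entry = pmd_mkhuge(entry);
		set_pmd_at(mm, haddr, pmd, entry);
		/* Deposit the preallocated page table for a later split. */
		pgtable_trans_huge_deposit(mm, pgtable);
		mm->nr_ptes++;
	}

This also explains the pte_alloc_one() call in the hunk: even though no PTE
page table is needed while the zero huge page is mapped, one is preallocated
and deposited so that a later split of the huge PMD cannot fail on allocation.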