KVM: arm/arm64: Ensure only THP is candidate for adjustment
author Punit Agrawal <punit.agrawal@arm.com>
Mon, 1 Oct 2018 15:54:35 +0000 (16:54 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thu, 16 May 2019 17:42:26 +0000 (19:42 +0200)
[ Upstream commit fd2ef358282c849c193aa36dadbf4f07f7dcd29b ]

PageTransCompoundMap() returns true for both hugetlbfs and THP
hugepages. This behaviour incorrectly causes stage 2 faults for
unsupported hugepage sizes (e.g., a 64K hugepage with 4K pages) to be
treated as THP faults.

Tighten the check to filter out hugetlbfs pages. This also leads to
consistently mapping all unsupported hugepage sizes as PTE level
entries at stage 2.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Reviewed-by: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: stable@vger.kernel.org # v4.13+
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
virt/kvm/arm/mmu.c

index 225dc671ae31beba68a8ff255ca816f8c54d652a..1f4cac53b92349d62dba6853e605e24e35729337 100644
@@ -1068,8 +1068,14 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
 {
        kvm_pfn_t pfn = *pfnp;
        gfn_t gfn = *ipap >> PAGE_SHIFT;
+       struct page *page = pfn_to_page(pfn);
 
-       if (PageTransCompoundMap(pfn_to_page(pfn))) {
+       /*
+        * PageTransCompoundMap() returns true for THP and
+        * hugetlbfs. Make sure the adjustment is done only for THP
+        * pages.
+        */
+       if (!PageHuge(page) && PageTransCompoundMap(page)) {
                unsigned long mask;
                /*
                 * The address we faulted on is backed by a transparent huge