KVM: PPC: Book3S PR: Exit KVM on failed mapping
author Alexey Kardashevskiy <aik@ozlabs.ru>
Fri, 24 Mar 2017 06:48:10 +0000 (17:48 +1100)
committer Paul Mackerras <paulus@ozlabs.org>
Thu, 20 Apr 2017 01:38:04 +0000 (11:38 +1000)
At the moment kvmppc_mmu_map_page() returns -1 if
mmu_hash_ops.hpte_insert() fails for any reason, so the page fault
handler resumes the guest, which then faults on the same address again.

This changes kvmppc_mmu_map_page() to return -EIO if
mmu_hash_ops.hpte_insert() fails for a reason other than a full PTEG.
At the moment only pSeries_lpar_hpte_insert() returns -2, which it does
when plpar_pte_enter() fails with a code other than H_PTEG_FULL; the
other mmu_hash_ops.hpte_insert() implementations can only fail with
-1 ("full PTEG").
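
An abbreviated sketch of that convention (not part of this patch): the
pSeries backend distinguishes a full PTEG from any other failure of the
plpar_pte_enter() hypervisor call. Variable names are shortened here and
the surrounding code of pSeries_lpar_hpte_insert() is omitted:

        lpar_rc = plpar_pte_enter(flags, hpte_group, hpte_v, hpte_r, &slot);
        if (lpar_rc == H_PTEG_FULL)
                return -1;      /* caller retries with the secondary hash */
        if (lpar_rc != H_SUCCESS)
                return -2;      /* kvmppc_mmu_map_page() now maps this to -EIO */
        return 0;               /* success (the real code returns the slot) */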

With this change, if PR KVM fails to update the HPT, it can signal
userspace about the failure instead of returning to the guest and
taking the very same page fault over and over again.
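
For illustration only (not part of this patch), a minimal sketch of how
a userspace VMM's vCPU run loop could surface the new exit; the function
handle_exit() and the choice to terminate are made up for this example:

        #include <stdio.h>
        #include <stdlib.h>
        #include <linux/kvm.h>

        /* Minimal sketch: report the error instead of re-entering the vCPU. */
        static void handle_exit(struct kvm_run *run)
        {
                switch (run->exit_reason) {
                case KVM_EXIT_INTERNAL_ERROR:
                        /* PR KVM could not insert the HPTE; rerunning would not help. */
                        fprintf(stderr, "kvm: internal error (host mapping failed)\n");
                        exit(EXIT_FAILURE);
                default:
                        break;
                }
        }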

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
arch/powerpc/kvm/book3s_64_mmu_host.c
arch/powerpc/kvm/book3s_pr.c

index a587e8f4fd2648caf61535176ef5a1d03a2c79ae..4b4e927c4822c3cac691a8d84dadf843a1b5efba 100644
@@ -177,12 +177,15 @@ map_again:
        ret = mmu_hash_ops.hpte_insert(hpteg, vpn, hpaddr, rflags, vflags,
                                       hpsize, hpsize, MMU_SEGSIZE_256M);
 
-       if (ret < 0) {
+       if (ret == -1) {
                /* If we couldn't map a primary PTE, try a secondary */
                hash = ~hash;
                vflags ^= HPTE_V_SECONDARY;
                attempt++;
                goto map_again;
+       } else if (ret < 0) {
+               r = -EIO;
+               goto out_unlock;
        } else {
                trace_kvm_book3s_64_mmu_map(rflags, hpteg,
                                            vpn, hpaddr, orig_pte);
index 633502f52bbbf388e8d0790a7064ffd5fb04d783..ce437b98477e4cd72185deb7285dbc6be393aefa 100644
@@ -625,7 +625,11 @@ int kvmppc_handle_pagefault(struct kvm_run *run, struct kvm_vcpu *vcpu,
                        kvmppc_mmu_unmap_page(vcpu, &pte);
                }
                /* The guest's PTE is not mapped yet. Map on the host */
-               kvmppc_mmu_map_page(vcpu, &pte, iswrite);
+               if (kvmppc_mmu_map_page(vcpu, &pte, iswrite) == -EIO) {
+                       /* Exit KVM if mapping failed */
+                       run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+                       return RESUME_HOST;
+               }
                if (data)
                        vcpu->stat.sp_storage++;
                else if (vcpu->arch.mmu.is_dcbz32(vcpu) &&