KVM: x86: drop calling kvm_mmu_zap_all in emulator_fix_hypercall
author Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Fri, 31 May 2013 00:36:20 +0000 (08:36 +0800)
committer Gleb Natapov <gleb@redhat.com>
Wed, 5 Jun 2013 09:32:00 +0000 (12:32 +0300)
Quoting Gleb's mail:

| Back then kvm->lock protected memslot access so code like:
|
| mutex_lock(&vcpu->kvm->lock);
| kvm_mmu_zap_all(vcpu->kvm);
| mutex_unlock(&vcpu->kvm->lock);
|
| which is what 7aa81cc0 does, was enough to guarantee that no vcpu would
| run while the code was patched. This is no longer the case and
| mutex_lock(&vcpu->kvm->lock); has been gone from that code path for a long
| time, so now kvm_mmu_zap_all() there is useless and the code is incorrect.

So we drop it; the hypercall patching path will be fixed properly later.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
arch/x86/kvm/x86.c

index 8d28810a5f8886ba6e4750dd8068f12392338f42..6739b1d4ce7cf8cfcf2ab6a726c7c137c4cad92e 100644
@@ -5523,13 +5523,6 @@ static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
        char instruction[3];
        unsigned long rip = kvm_rip_read(vcpu);
 
-       /*
-        * Blow out the MMU to ensure that no other VCPU has an active mapping
-        * to ensure that the updated hypercall appears atomically across all
-        * VCPUs.
-        */
-       kvm_mmu_zap_all(vcpu->kvm);
-
        kvm_x86_ops->patch_hypercall(vcpu, instruction);
 
        return emulator_write_emulated(ctxt, rip, instruction, 3, NULL);
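
For reference, a hedged sketch of emulator_fix_hypercall() as it reads after this patch: the vendor backend (VMX or SVM) fills a 3-byte buffer with its hypercall opcode, and the bytes are written back through the emulator at the trapping RIP. The function body mirrors the post-patch code in the hunk above; the emul_to_vcpu() line is reconstructed from the hunk's context and the comments are editorial, not part of the commit.

    /* Sketch only; see the hunk above for the authoritative change. */
    static int emulator_fix_hypercall(struct x86_emulate_ctxt *ctxt)
    {
            struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);   /* assumed local setup */
            char instruction[3];
            unsigned long rip = kvm_rip_read(vcpu);

            /*
             * Ask the vendor backend for its 3-byte hypercall encoding,
             * e.g. VMCALL (0f 01 c1) on VMX or VMMCALL (0f 01 d9) on SVM.
             */
            kvm_x86_ops->patch_hypercall(vcpu, instruction);

            /*
             * Rewrite the guest instruction at the trapping RIP through the
             * emulator's ordinary guest-write path; the MMU-wide zap that
             * used to precede this is gone, as described above.
             */
            return emulator_write_emulated(ctxt, rip, instruction, 3, NULL);
    }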