x86/kvm: virt_xxx memory barriers instead of mandatory barriers
author	Wanpeng Li <wanpeng.li@hotmail.com>
	Tue, 11 Apr 2017 09:49:21 +0000 (02:49 -0700)
committer	Radim Krčmář <rkrcmar@redhat.com>
	Wed, 12 Apr 2017 18:17:38 +0000 (20:17 +0200)
The virt_xxx memory barriers are implemented trivially on top of the
low-level __smp_xxx macros, and on x86's strong TSO memory model
__smp_xxx reduces to a compiler barrier. Mandatory barriers, by
contrast, unconditionally emit memory barrier instructions. This patch
therefore replaces the rmb() calls in kvm_steal_clock() with
virt_rmb().

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
arch/x86/kernel/kvm.c

index 14f65a5f938e4f829de60e868e7ae8ec4954a6d3..da5c0978998488c612b1495a3659742b57b55071 100644 (file)
@@ -396,9 +396,9 @@ static u64 kvm_steal_clock(int cpu)
        src = &per_cpu(steal_time, cpu);
        do {
                version = src->version;
-               rmb();
+               virt_rmb();
                steal = src->steal;
-               rmb();
+               virt_rmb();
        } while ((version & 1) || (version != src->version));
 
        return steal;