KVM: MMU: use __xchg_spte() more smartly
author Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Fri, 16 Jul 2010 03:30:18 +0000 (11:30 +0800)
committer Avi Kivity <avi@redhat.com>
Mon, 2 Aug 2010 03:41:01 +0000 (06:41 +0300)
Sometimes an atomic spte update is not needed; this patch calls __xchg_spte()
only when it is.

Note: if the old mapping's accessed bit is already set, no atomic operation is
needed, since the accessed bit cannot be lost.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
arch/x86/kvm/mmu.c

index e4b862eb888517833bfafb471b1b9e855e5927c7..0dcc95e09876fc7094bc9b05bf3acf34ede51439 100644 (file)
@@ -682,9 +682,14 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 static void set_spte_track_bits(u64 *sptep, u64 new_spte)
 {
        pfn_t pfn;
-       u64 old_spte;
+       u64 old_spte = *sptep;
+
+       if (!shadow_accessed_mask || !is_shadow_present_pte(old_spte) ||
+             old_spte & shadow_accessed_mask) {
+               __set_spte(sptep, new_spte);
+       } else
+               old_spte = __xchg_spte(sptep, new_spte);
 
-       old_spte = __xchg_spte(sptep, new_spte);
        if (!is_rmap_spte(old_spte))
                return;
        pfn = spte_to_pfn(old_spte);