From: Xiao Guangrong
Date: Fri, 16 Jul 2010 03:30:18 +0000 (+0800)
Subject: KVM: MMU: use __xchg_spte more smartly
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=9a3aad70572c3f4d55e7f09ac4eb313d41d0a484;p=GitHub%2Fexynos8895%2Fandroid_kernel_samsung_universal8895.git

KVM: MMU: use __xchg_spte more smartly

Setting the spte does not always need to be atomic; this patch calls
__xchg_spte() only when the atomic exchange is actually required.

Note: if the old mapping's accessed bit is already set, no atomic
operation is needed, since the accessed bit cannot be lost.

Signed-off-by: Xiao Guangrong
Signed-off-by: Avi Kivity
---

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e4b862eb8885..0dcc95e09876 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -682,9 +682,14 @@ static void rmap_remove(struct kvm *kvm, u64 *spte)
 static void set_spte_track_bits(u64 *sptep, u64 new_spte)
 {
 	pfn_t pfn;
-	u64 old_spte;
+	u64 old_spte = *sptep;
+
+	if (!shadow_accessed_mask || !is_shadow_present_pte(old_spte) ||
+	      old_spte & shadow_accessed_mask) {
+		__set_spte(sptep, new_spte);
+	} else
+		old_spte = __xchg_spte(sptep, new_spte);
 
-	old_spte = __xchg_spte(sptep, new_spte);
 	if (!is_rmap_spte(old_spte))
 		return;
 	pfn = spte_to_pfn(old_spte);
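The rationale, spelled out: __xchg_spte() exists so that an accessed bit set
concurrently by hardware between reading and overwriting the spte is not
lost.  If no accessed bit is tracked at all (shadow_accessed_mask == 0), if
the old spte is not present, or if its accessed bit is already set, nothing
new can arrive, so a plain write suffices.  What follows is a minimal
user-space sketch of that branch logic, not the kernel code itself: the bit
positions, the stub definitions of is_shadow_present_pte(), __set_spte() and
__xchg_spte(), and the GCC __sync_lock_test_and_set() builtin standing in
for the kernel's xchg are all assumptions for illustration, and the
rmap/pfn bookkeeping that follows in the real set_spte_track_bits() is
omitted (old_spte is returned instead).

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

typedef uint64_t u64;

/* Assumed bit layout for illustration: bit 0 = present, bit 5 =
 * accessed, as in ordinary x86 page tables.  In KVM the accessed
 * mask is configured at init time and is zero when the hardware
 * offers no accessed bit. */
#define SPTE_PRESENT_MASK	(1ULL << 0)
#define SPTE_ACCESSED_MASK	(1ULL << 5)

static u64 shadow_accessed_mask = SPTE_ACCESSED_MASK;

static bool is_shadow_present_pte(u64 spte)
{
	return spte & SPTE_PRESENT_MASK;
}

/* Plain store: cheap, but any accessed bit the hardware sets between
 * our read of *sptep and this write would be overwritten unseen. */
static void __set_spte(u64 *sptep, u64 new_spte)
{
	*sptep = new_spte;
}

/* Atomic exchange: returns the old value, so an accessed bit set
 * concurrently by hardware is observed rather than lost.  The GCC
 * builtin stands in for the kernel's xchg-based implementation. */
static u64 __xchg_spte(u64 *sptep, u64 new_spte)
{
	return __sync_lock_test_and_set(sptep, new_spte);
}

/* The branch logic of the patch: the atomic exchange is only taken
 * when an accessed bit is tracked, the old spte is present, and its
 * accessed bit is still clear -- the one case where information
 * could still arrive asynchronously. */
static u64 set_spte_track_bits(u64 *sptep, u64 new_spte)
{
	u64 old_spte = *sptep;

	if (!shadow_accessed_mask || !is_shadow_present_pte(old_spte) ||
	    (old_spte & shadow_accessed_mask))
		__set_spte(sptep, new_spte);
	else
		old_spte = __xchg_spte(sptep, new_spte);

	return old_spte;
}

int main(void)
{
	u64 spte;

	spte = SPTE_PRESENT_MASK;		/* accessed bit clear */
	printf("atomic path, old spte = %#llx\n",
	       (unsigned long long)set_spte_track_bits(&spte, 0));

	spte = SPTE_PRESENT_MASK | SPTE_ACCESSED_MASK;
	printf("plain-store path, old spte = %#llx\n",
	       (unsigned long long)set_spte_track_bits(&spte, 0));
	return 0;
}

The shadow_accessed_mask == 0 short-circuit matters on hardware where no
accessed bit is available (for example EPT without accessed/dirty bit
support): there the exchange could never report anything the plain read
did not already see, so the cheaper non-atomic store is always safe.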