KVM: MMU: simplify folding of dirty bit into accessed_dirty
Author:     Gleb Natapov <gleb@redhat.com>
AuthorDate: Thu, 27 Dec 2012 12:44:58 +0000 (14:44 +0200)
Commit:     Marcelo Tosatti <mtosatti@redhat.com>
CommitDate: Mon, 7 Jan 2013 22:31:35 +0000 (20:31 -0200)
The MMU code tries to avoid if()s that the hardware cannot predict reliably
by using bitwise operations to streamline code execution, but in the case of
dirty-bit folding this buys us nothing, since write_fault is checked right
before the folding code. Let's just piggyback on that if() to make the code
clearer.
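
For reference, a minimal standalone sketch of the two variants (the helper
names and main() harness are hypothetical, for illustration only; the
constants use their real x86 values: PT_ACCESSED_SHIFT = 5,
PT_DIRTY_SHIFT = 6, PFERR_WRITE_MASK = 1 << 1, so
ilog2(PFERR_WRITE_MASK) == 1):

	#include <stdint.h>
	#include <stdio.h>

	#define PFERR_WRITE_MASK   (1u << 1)  /* write_fault is 0 or this */
	#define PT_ACCESSED_SHIFT  5
	#define PT_DIRTY_SHIFT     6
	#define PT_ACCESSED_MASK   (1u << PT_ACCESSED_SHIFT)
	#define PT_DIRTY_MASK      (1u << PT_DIRTY_SHIFT)

	/* Old branchless form: derive a shift of 0 (read fault) or
	 * 1 (write fault) from write_fault itself, no conditional. */
	static uint32_t fold_branchless(uint32_t accessed_dirty, uint64_t pte,
					uint32_t write_fault)
	{
		unsigned int shift = write_fault >> 1; /* ilog2(PFERR_WRITE_MASK) */
		shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;
		return accessed_dirty & (pte >> shift);
	}

	/* New form: the walker already and-ed the A bit into accessed_dirty
	 * during the walk, so the read-fault case ("pte >> 0") was a no-op;
	 * only the write-fault fold of D into the A position does real work. */
	static uint32_t fold_with_if(uint32_t accessed_dirty, uint64_t pte,
				     uint32_t write_fault)
	{
		if (write_fault)
			accessed_dirty &= pte >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT);
		return accessed_dirty;
	}

	int main(void)
	{
		uint64_t pte = PT_ACCESSED_MASK | PT_DIRTY_MASK; /* A and D set */
		uint32_t ad  = PT_ACCESSED_MASK; /* A already folded in by the walk */

		printf("branchless: %#x\n", fold_branchless(ad, pte, PFERR_WRITE_MASK));
		printf("with if():  %#x\n", fold_with_if(ad, pte, PFERR_WRITE_MASK));
		return 0;
	}

Both variants agree as long as accessed_dirty has already been and-ed with
the A bit of every pte on the walk, which is the invariant the walker
maintains; that is exactly why the branchless read-fault path carried no
information.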

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
arch/x86/kvm/paging_tmpl.h

index 891eb6d93b8b..a7b24cf59a3c 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -249,16 +249,12 @@ retry_walk:
 
        if (!write_fault)
                protect_clean_gpte(&pte_access, pte);
-
-       /*
-        * On a write fault, fold the dirty bit into accessed_dirty by shifting it one
-        * place right.
-        *
-        * On a read fault, do nothing.
-        */
-       shift = write_fault >> ilog2(PFERR_WRITE_MASK);
-       shift *= PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT;
-       accessed_dirty &= pte >> shift;
+       else
+               /*
+                * On a write fault, fold the dirty bit into accessed_dirty by
+                * shifting it one place right.
+                */
+               accessed_dirty &= pte >> (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT);
 
        if (unlikely(!accessed_dirty)) {
                ret = FNAME(update_accessed_dirty_bits)(vcpu, mmu, walker, write_fault);