KVM: MMU: Conditionally reschedule when kvm_mmu_slot_remove_write_access() takes...
authorTakuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Tue, 8 Jan 2013 10:47:33 +0000 (19:47 +0900)
committerGleb Natapov <gleb@redhat.com>
Mon, 14 Jan 2013 09:14:28 +0000 (11:14 +0200)
If userspace starts dirty logging for a large slot, say 64GB of
memory, kvm_mmu_slot_remove_write_access() can hold mmu_lock for a
long time, on the order of tens of milliseconds.  This patch bounds
the lock hold time by asking the scheduler whether we need to
reschedule for others.

One penalty for this is that we must flush TLBs before releasing
mmu_lock.  But since holding mmu_lock for a long time hurts not only
the guest, i.e. its vCPU threads, but also the host as a whole, the
cost is worth paying.

In practice, the cost will not be so high because we can protect a fair
amount of memory before being rescheduled: on my test environment,
cond_resched_lock() was called only once while protecting 12GB of
memory, even without THP.  We can also revisit Avi's "unlocked TLB
flush" work later to completely suppress the extra TLB flushes if
needed.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
arch/x86/kvm/mmu.c

index e5dcae31cebc85b90adc1fc40b6727f8b0694f40..9f628f7a40b2a388e44be7f840d4d8b7d1952e9f 100644 (file)
@@ -4186,6 +4186,11 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm, int slot)
                for (index = 0; index <= last_index; ++index, ++rmapp) {
                        if (*rmapp)
                                __rmap_write_protect(kvm, rmapp, false);
+
+                       if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+                               kvm_flush_remote_tlbs(kvm);
+                               cond_resched_lock(&kvm->mmu_lock);
+                       }
                }
        }