KVM: SVM: Load %gs earlier if CONFIG_X86_32_LAZY_GS=n
author Avi Kivity <avi@redhat.com>
Tue, 8 Mar 2011 14:09:51 +0000 (16:09 +0200)
committer Marcelo Tosatti <mtosatti@redhat.com>
Thu, 17 Mar 2011 16:08:33 +0000 (13:08 -0300)
With CONFIG_CC_STACKPROTECTOR (which forces CONFIG_X86_32_LAZY_GS=n), the
compiler reads the stack canary through %gs, so a valid host %gs is needed at
all times.  In that configuration, skip the lazy reload in svm_vcpu_put() and
reload %gs eagerly, immediately after the vmexit.

Reported-by: IVAN ANGELOV <ivangotoy@gmail.com>
Acked-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
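
For background: CONFIG_X86_32_LAZY_GS is only available when the stack
protector is off, because on 32-bit x86 the compiler reads the stack-protector
canary through the %gs segment, so %gs must already hold a valid host selector
when the first stack-protected C function runs after the vmexit.  A minimal
sketch of that canary access (illustration only, not code from this patch; the
helper name is made up):

/*
 * Illustration: on 32-bit x86, gcc's -fstack-protector loads the canary
 * from %gs:20.  If %gs still holds a stale selector after a vmexit, every
 * stack-protected function called before %gs is reloaded either faults or
 * compares against garbage.
 */
static inline unsigned long stack_protector_canary(void)
{
	unsigned long canary;

	asm volatile("movl %%gs:20, %0" : "=r" (canary));
	return canary;
}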
arch/x86/kvm/svm.c

index 8d61df4a02c79af936d98d93477f7a65ca15200a..6bb15d583e4786cf787d61e6ddf90fe9b7614bfd 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -1155,7 +1155,9 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
        wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
        load_gs_index(svm->host.gs);
 #else
+#ifdef CONFIG_X86_32_LAZY_GS
        loadsegment(gs, svm->host.gs);
+#endif
 #endif
        for (i = 0; i < NR_HOST_SAVE_USER_MSRS; i++)
                wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
@@ -3649,6 +3651,9 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
        wrmsrl(MSR_GS_BASE, svm->host.gs_base);
 #else
        loadsegment(fs, svm->host.fs);
+#ifndef CONFIG_X86_32_LAZY_GS
+       loadsegment(gs, svm->host.gs);
+#endif
 #endif
 
        reload_tss(vcpu);
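
With the patch applied, the 32-bit tail of svm_vcpu_run() reads roughly as
follows (reconstructed from the hunk above; the enclosing #ifdef
CONFIG_X86_64 is implied by the context lines, and the comment is added here
for illustration).  The host %gs is restored eagerly, right after the vmexit,
whenever the lazy-%gs optimization is compiled out; svm_vcpu_put() now only
touches %gs on CONFIG_X86_32_LAZY_GS=y kernels.

#ifdef CONFIG_X86_64
	wrmsrl(MSR_GS_BASE, svm->host.gs_base);
#else
	loadsegment(fs, svm->host.fs);
#ifndef CONFIG_X86_32_LAZY_GS
	/* stack-protector builds: %gs must be valid before any C code runs */
	loadsegment(gs, svm->host.gs);
#endif
#endif

	reload_tss(vcpu);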