KVM: arm/arm64: Do not use kern_hyp_va() with kvm_vgic_global_state
author	Marc Zyngier <marc.zyngier@arm.com>
	Fri, 20 Jul 2018 09:56:19 +0000 (10:56 +0100)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Sun, 22 Jul 2018 12:27:41 +0000 (14:27 +0200)
Commit 44a497abd621a71c645f06d3d545ae2f46448830 upstream.

kvm_vgic_global_state is part of the read-only section, and is
usually accessed using a PC-relative address generation (adrp + add).

It is thus useless to use kern_hyp_va() on it, and actively problematic
if kern_hyp_va() becomes non-idempotent. On the other hand, there is
no way to make the compiler guarantee that such an access is
always PC-relative.

So let's bite the bullet and provide our own accessor.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/arm/include/asm/kvm_mmu.h
arch/arm64/include/asm/kvm_mmu.h
virt/kvm/arm/hyp/vgic-v2-sr.c

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index 7f66b1b3aca143288c7cd30a0917c6d46fa76b03..6a0d496c4145021ef2eb057289554e82d75d980b 100644
  */
 #define kern_hyp_va(kva)       (kva)
 
+/* Contrary to arm64, there is no need to generate a PC-relative address */
+#define hyp_symbol_addr(s)                                             \
+       ({                                                              \
+               typeof(s) *addr = &(s);                                 \
+               addr;                                                   \
+       })
+
 /*
  * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation levels.
  */
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 824c83db9b471e4818bb9a0884255d1638b85d0e..cd7c9a3e0a03430c894cbf919193c459fbcff5a4 100644
@@ -130,6 +130,26 @@ static inline unsigned long __kern_hyp_va(unsigned long v)
 
 #define kern_hyp_va(v)         ((typeof(v))(__kern_hyp_va((unsigned long)(v))))
 
+/*
+ * Obtain the PC-relative address of a kernel symbol
+ * s: symbol
+ *
+ * The goal of this macro is to return a symbol's address based on a
+ * PC-relative computation, as opposed to loading the VA from a
+ * constant pool or something similar. This works well for HYP, as an
+ * absolute VA is guaranteed to be wrong. Only use this if trying to
+ * obtain the address of a symbol (i.e. not something you obtained by
+ * following a pointer).
+ */
+#define hyp_symbol_addr(s)                                             \
+       ({                                                              \
+               typeof(s) *addr;                                        \
+               asm("adrp       %0, %1\n"                               \
+                   "add        %0, %0, :lo12:%1\n"                     \
+                   : "=r" (addr) : "S" (&s));                          \
+               addr;                                                   \
+       })
+
 /*
  * We currently only support a 40bit IPA.
  */
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index 95021246ee266171bf10f0fc1cb6c13bc5c9e6c1..3d6dbdf850aa50b51a9bda6f2313fee0e32777d2 100644
@@ -203,7 +203,7 @@ int __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
                return -1;
 
        rd = kvm_vcpu_dabt_get_rd(vcpu);
-       addr  = kern_hyp_va((kern_hyp_va(&kvm_vgic_global_state))->vcpu_base_va);
+       addr  = kern_hyp_va(hyp_symbol_addr(kvm_vgic_global_state)->vcpu_base_va);
        addr += fault_ipa - vgic->vgic_cpu_base;
 
        if (kvm_vcpu_dabt_iswrite(vcpu)) {