arm64: Move BP hardening to check_and_switch_context
author Marc Zyngier <marc.zyngier@arm.com>
Fri, 19 Jan 2018 15:42:09 +0000 (15:42 +0000)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fri, 16 Feb 2018 19:22:53 +0000 (20:22 +0100)
Commit a8e4c0a919ae upstream.

We call arm64_apply_bp_hardening() from post_ttbr_update_workaround,
which has the unexpected consequence of being triggered on every
exception return to userspace when ARM64_SW_TTBR0_PAN is selected,
even if no context switch actually occurred.

This is suboptimal, and it would be more logical to invalidate
the branch predictor only when we actually switch to a different
mm.

In order to solve this, move the call to arm64_apply_bp_hardening()
into check_and_switch_context(), where we're guaranteed to pick
a different mm context.
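
To make the difference concrete, here is a minimal standalone C model
(not the kernel sources; the real call sites are the exception-return
path in entry.S and check_and_switch_context() in
arch/arm64/mm/context.c). The stub bodies and the _old/_new names are
illustrative only; just the call-path shapes mirror the kernel.

#include <stdio.h>

/* Stub standing in for the real branch predictor invalidation. */
static void arm64_apply_bp_hardening(void)
{
	puts("  branch predictor hardening applied");
}

/*
 * Old behaviour: with ARM64_SW_TTBR0_PAN selected, this workaround
 * runs on every exception return to userspace, so the hardening call
 * fires even when no mm switch happened.
 */
static void post_ttbr_update_workaround_old(void)
{
	/* Cavium erratum 27456 I-cache maintenance elided */
	arm64_apply_bp_hardening();
}

/*
 * New behaviour: the call lives in check_and_switch_context()
 * instead, which only runs when we actually pick a different mm
 * context.
 */
static void check_and_switch_context_new(void)
{
	arm64_apply_bp_hardening();
	/* ... ASID/TTBR management elided ... */
}

int main(void)
{
	puts("old: three exception returns to userspace, no mm switch:");
	for (int i = 0; i < 3; i++)
		post_ttbr_update_workaround_old();

	puts("new: same scenario, hardening only on a real mm switch:");
	check_and_switch_context_new();
	return 0;
}

Note how, in the diff below, the call lands right after the
switch_mm_fastpath: label, so it covers both the fast path and the
slow (new-ASID-allocation) path of check_and_switch_context().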

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index abc56dc31ae06e0baa27e305b4a1f2077d746501..9284788733d606be0bf44fe69656187501428e13 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -227,6 +227,9 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
        raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
+
+       arm64_apply_bp_hardening();
+
        /*
         * Defer TTBR0_EL1 setting for user threads to uaccess_enable() when
         * emulating PAN.
@@ -242,8 +245,6 @@ asmlinkage void post_ttbr_update_workaround(void)
                        "ic iallu; dsb nsh; isb",
                        ARM64_WORKAROUND_CAVIUM_27456,
                        CONFIG_CAVIUM_ERRATUM_27456));
-
-       arm64_apply_bp_hardening();
 }
 
 static int asids_init(void)