arm64: traps: correctly handle MRS/MSR with XZR
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Thu, 9 Feb 2017 15:19:19 +0000
Commit:     Will Deacon <will.deacon@arm.com>
CommitDate: Wed, 15 Feb 2017 12:20:29 +0000
Currently we hand-roll XZR-safe register handling in
user_cache_maint_handler(), though we forget to do the same in
ctr_read_handler(), and may erroneously write back to the user SP rather
than XZR.
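
(For reference, the failure mode follows from the layout of the saved
register file: regs[] has only 31 slots, x0-x30, and the saved sp sits
immediately after it, so an unchecked store through regs->regs[31] lands
on the user stack pointer. A trimmed sketch of the layout from
arch/arm64/include/asm/ptrace.h, not the verbatim definition:

    struct pt_regs {
            u64 regs[31];   /* x0-x30; there is no slot for XZR */
            u64 sp;         /* regs->regs[31] would alias this field */
            u64 pc;
            u64 pstate;
            /* ... */
    };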

Use the new helpers to handle these cases correctly and consistently.
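
The helpers in question, pt_regs_read_reg() and pt_regs_write_reg(),
live in <asm/ptrace.h> and treat register 31 as XZR. They read roughly
as follows (a sketch of their behaviour, not the verbatim definitions):

    static inline unsigned long pt_regs_read_reg(const struct pt_regs *regs,
                                                 int r)
    {
            /* reads of XZR (register 31) always return zero */
            return (r == 31) ? 0 : regs->regs[r];
    }

    static inline void pt_regs_write_reg(struct pt_regs *regs, int r,
                                         unsigned long val)
    {
            /* writes to XZR (register 31) are discarded */
            if (r != 31)
                    regs->regs[r] = val;
    }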

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Fixes: 116c81f427ff6c53 ("arm64: Work around systems with mismatched cache line sizes")
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 7c3fc0634aa2d675c2c6a28cf68a7645efcaac53..350179becdf74f857cacb1d2c0134481729ef573 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -466,7 +466,7 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
        int crm = (esr & ESR_ELx_SYS64_ISS_CRM_MASK) >> ESR_ELx_SYS64_ISS_CRM_SHIFT;
        int ret = 0;
 
-       address = (rt == 31) ? 0 : regs->regs[rt];
+       address = pt_regs_read_reg(regs, rt);
 
        switch (crm) {
        case ESR_ELx_SYS64_ISS_CRM_DC_CVAU:     /* DC CVAU, gets promoted */
@@ -495,8 +495,10 @@ static void user_cache_maint_handler(unsigned int esr, struct pt_regs *regs)
 static void ctr_read_handler(unsigned int esr, struct pt_regs *regs)
 {
        int rt = (esr & ESR_ELx_SYS64_ISS_RT_MASK) >> ESR_ELx_SYS64_ISS_RT_SHIFT;
+       unsigned long val = arm64_ftr_reg_user_value(&arm64_ftr_reg_ctrel0);
+
+       pt_regs_write_reg(regs, rt, val);
 
-       regs->regs[rt] = arm64_ftr_reg_user_value(&arm64_ftr_reg_ctrel0);
        regs->pc += 4;
 }
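
For illustration, a trap of this shape can be provoked from userspace on
systems where CTR_EL0 reads are trapped (i.e. those affected by the
mismatched cache line size workaround). A minimal, hypothetical test
targets XZR, which the old code would have turned into a write of the
saved user SP:

    /* hypothetical test: assumes the kernel traps CTR_EL0 reads */
    asm volatile("mrs xzr, ctr_el0");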