From fdbbe8e7914765aef82c696dcefc97fe462c3925 Mon Sep 17 00:00:00 2001
From: Christian Borntraeger
Date: Fri, 9 Oct 2015 12:34:23 +0200
Subject: [PATCH] s390/spinlock: remove unneeded serializations at unlock

The kernel locks have acquire/release semantics: no operation done
after the lock can be "moved" before the lock, and no operation before
the unlock can be moved after the unlock. But it is perfectly fine for
memory accesses which happen code-wise after the unlock to be performed
within the critical section.

On s390x, reads are in-order with other reads (PoP section
"Storage-Operand Fetch References") and writes are in-order with other
writes (PoP section "Storage-Operand Store References"). Writes are
also in-order with reads to the same memory location (PoP section
"Storage-Operand Store References"). To other CPUs (and the channel
subsystem), reads additionally appear to be performed prior to reads
or writes that happen after them in the conceptual sequence (PoP
section "Relation between Operand Accesses"). So, at least as observed
by other CPUs and the channel subsystem, reads inside the critical
section will not happen after the unlock (and writes are in-order
anyway). That's exactly what we need for "RELEASE operations"
(memory-barriers.txt): "It guarantees that all memory operations
before the RELEASE operation will appear to happen before the RELEASE
operation with respect to the other components of the system."

Suggested-by: Paolo Bonzini
Signed-off-by: Christian Borntraeger
Reviewed-by: Sascha Silbe
[cross-reading and a lot of improvements for the patch description]
Signed-off-by: Martin Schwidefsky
---
 arch/s390/include/asm/spinlock.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/arch/s390/include/asm/spinlock.h b/arch/s390/include/asm/spinlock.h
index 0e37cd041241..63ebf37d3143 100644
--- a/arch/s390/include/asm/spinlock.h
+++ b/arch/s390/include/asm/spinlock.h
@@ -87,7 +87,6 @@ static inline void arch_spin_unlock(arch_spinlock_t *lp)
 {
 	typecheck(unsigned int, lp->lock);
 	asm volatile(
-		__ASM_BARRIER
 		"st	%1,%0\n"
 		: "+Q" (lp->lock)
 		: "d" (0)
@@ -169,7 +168,6 @@ static inline int arch_write_trylock_once(arch_rwlock_t *rw)
 									\
 	typecheck(unsigned int *, ptr);					\
 	asm volatile(							\
-		"bcr	14,0\n"						\
 		op_string "	%0,%2,%1\n"				\
 		: "=d" (old_val), "+Q" (*ptr)				\
 		: "d" (op_val)						\
@@ -243,7 +241,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)
 	rw->owner = 0;
 	asm volatile(
-		__ASM_BARRIER
 		"st	%1,%0\n"
 		: "+Q" (rw->lock)
 		: "d" (0)
-- 
2.20.1
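
For readers who want the contract the patch relies on spelled out, here is
a minimal C11 sketch of an acquire/release spinlock. It is an illustrative
analogue only, not kernel code: toy_spinlock_t, toy_lock() and toy_unlock()
are made-up names, and the kernel code above uses inline assembly rather
than <stdatomic.h>. The point it shows is that the unlock is nothing more
than a plain release store, which is all the removed __ASM_BARRIER (a
serializing bcr instruction) was sitting in front of:

#include <stdatomic.h>

typedef struct {
	atomic_uint lock;	/* 0 = free, 1 = held */
} toy_spinlock_t;

static void toy_lock(toy_spinlock_t *lp)
{
	unsigned int old = 0;

	/*
	 * ACQUIRE: accesses after this CAS cannot be observed before
	 * it; this mirrors the lock side, which the patch leaves alone.
	 */
	while (!atomic_compare_exchange_weak_explicit(&lp->lock, &old, 1,
						      memory_order_acquire,
						      memory_order_relaxed))
		old = 0;	/* CAS failure updated 'old'; retry */
}

static void toy_unlock(toy_spinlock_t *lp)
{
	/*
	 * RELEASE: all accesses inside the critical section appear to
	 * other CPUs before this store.  Accesses placed after
	 * toy_unlock() may still be pulled *into* the critical section,
	 * which is permitted.  On s390 an ordinary "st" already gives a
	 * store this ordering, so no serializing instruction is needed
	 * in front of it.
	 */
	atomic_store_explicit(&lp->lock, 0, memory_order_release);
}

Under this model, an access that follows toy_unlock() may legitimately
appear to another CPU to have happened inside the critical section; what
the release store forbids is the reverse, an access from inside the
critical section appearing after the unlock. Since the s390 ordering rules
quoted above give an ordinary store exactly this property, the extra
serialization at unlock time buys nothing.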