locking: Introduce smp_mb__after_spinlock()
author		Peter Zijlstra <peterz@infradead.org>
		Mon, 5 Sep 2016 09:37:53 +0000 (11:37 +0200)
committer	Ingo Molnar <mingo@kernel.org>
		Thu, 10 Aug 2017 10:29:02 +0000 (12:29 +0200)
commit		d89e588ca4081615216cc25f2489b0281ac0bfe9
tree		9f3fd5958adb8b6a0a86065ca0c0603fc73c3c06
parent		ff7a5fb0f1d510997a845e0d227f30831ff38d9d
locking: Introduce smp_mb__after_spinlock()

Since its inception, our understanding of ACQUIRE, especially as applied
to spinlocks, has changed somewhat. Moreover, with a simple change we can
make it provide more.

The problem with the smp_mb__before_spinlock() comment is that the STORE
done by spin_lock() isn't itself ordered by the ACQUIRE, and therefore a
later LOAD can pass over it and cross with any prior STORE, rendering the
default smp_wmb() insufficient (as pointed out by Alan Stern).
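
To see the failure mode, consider the following illustrative litmus
test (not part of the patch; s, X, Y, r0 and r1 are hypothetical):

	{ X = 0; Y = 0; }

	CPU0				CPU1

	WRITE_ONCE(X, 1);		WRITE_ONCE(Y, 1);
	spin_lock(&s);			smp_mb();
	r0 = READ_ONCE(Y);		r1 = READ_ONCE(X);
	spin_unlock(&s);

Since the ACQUIRE only orders the lock-word LOAD against later accesses,
CPU0's LOAD of Y can be reordered before its STORE to X, so the outcome
r0 == 0 && r1 == 0 is allowed; only a full barrier on CPU0 forbids it.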

Now, this is only really a problem on PowerPC and ARM64, both of
which already define smp_mb__before_spinlock() as smp_mb().
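
For reference, a sketch of the definitions involved (approximate; the
generic fallback lives in include/linux/spinlock.h, the overrides in
the respective arch headers):

	/* Generic fallback: only orders prior STOREs, hence the "WMB". */
	#ifndef smp_mb__before_spinlock
	#define smp_mb__before_spinlock()	smp_wmb()
	#endif

	/* arm64 and powerpc already override it with a full barrier: */
	#define smp_mb__before_spinlock()	smp_mb()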

At the same time, we can get a much stronger construct if we place
that same full barrier _inside_ the spin_lock(): doing so upgrades the
RCpc spinlock to an RCsc one, which makes all schedule() calls fully
transitive against one another.
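
Concretely, the new helper defaults to a no-op and becomes smp_mb() on
the affected architectures; a simplified sketch of the resulting pattern
in kernel/sched/core.c (call-site details elided):

	/* include/linux/spinlock.h: no-op unless the arch overrides it */
	#ifndef smp_mb__after_spinlock
	#define smp_mb__after_spinlock()	do { } while (0)
	#endif

	/* __schedule(), simplified: */
	raw_spin_lock(&rq->lock);
	smp_mb__after_spinlock();	/* upgrade the ACQUIRE to a full barrier */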

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/arm64/include/asm/spinlock.h
arch/powerpc/include/asm/spinlock.h
include/linux/atomic.h
include/linux/spinlock.h
kernel/sched/core.c