locking/atomic, arch/qrwlock: Employ atomic_fetch_add_acquire()
author Peter Zijlstra <peterz@infradead.org>
Sun, 17 Apr 2016 23:27:03 +0000 (01:27 +0200)
committer Ingo Molnar <mingo@kernel.org>
Thu, 16 Jun 2016 08:48:34 +0000 (10:48 +0200)
The only reason for the current code is to make GCC emit only the
"LOCK XADD" instruction on x86 (and not a pointless extra ADD on the
result). Do so in a nicer way: atomic_fetch_add_acquire() returns the
value the counter held before the addition, so the explicit
subtraction of _QR_BIAS can go away.
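
For illustration, a minimal sketch (not part of the patch) contrasting
the two forms; both leave the pre-add value of lock->cnts in 'cnts',
but only the fetch variant states that intent directly:

	/* old form: ask for the post-add value, then undo the addition
	 * by hand to recover the old value */
	cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts) - _QR_BIAS;

	/* new form: the fetch variant already returns the old value,
	 * which is exactly what a single x86 LOCK XADD produces */
	cnts = atomic_fetch_add_acquire(_QR_BIAS, &lock->cnts);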

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <waiman.long@hpe.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arch@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/locking/qrwlock.c

index fec08233866875ca1962d836b4e5d6f88d475092..19248ddf37cea70b7a0e5cf0f2481fa5061bf8d5 100644
@@ -93,7 +93,7 @@ void queued_read_lock_slowpath(struct qrwlock *lock, u32 cnts)
         * that accesses can't leak upwards out of our subsequent critical
         * section in the case that the lock is currently held for write.
         */
-       cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts) - _QR_BIAS;
+       cnts = atomic_fetch_add_acquire(_QR_BIAS, &lock->cnts);
        rspin_until_writer_unlock(lock, cnts);
 
        /*