locking/rwsem: Reduce spinlock contention in wakeup after up_read()/up_write()
author Waiman Long <Waiman.Long@hp.com>
Thu, 30 Apr 2015 21:12:16 +0000 (17:12 -0400)
committer Ingo Molnar <mingo@kernel.org>
Fri, 8 May 2015 10:27:59 +0000 (12:27 +0200)
In up_write()/up_read(), rwsem_wake() is called whenever waiting
writers/readers are detected. rwsem_wake() takes the wait_lock and
calls __rwsem_do_wake() to do the real wakeup.  For a heavily
contended rwsem, the spin_lock() on wait_lock adds further contention
on the already hot rwsem cacheline, delaying completion of the
up_read()/up_write() operations.

This patch makes taking the wait_lock and calling __rwsem_do_wake()
optional when at least one spinning writer is present. The spinning
writer will be able to take the rwsem itself and call rwsem_wake()
later, when it eventually does up_write(). So with a spinning writer
present, rwsem_wake() now tries to acquire the wait_lock with a
trylock; if that fails, it simply returns.
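
As a rough sketch of the resulting wakeup path (a simplified userspace
analogue, not the kernel code; the names fake_sem, has_spinner() and
do_wakeup() are invented for illustration):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>

	struct fake_sem {
		atomic_int spinner_tail;        /* stands in for sem->osq.tail  */
		pthread_spinlock_t wait_lock;   /* stands in for sem->wait_lock */
	};

	static bool has_spinner(struct fake_sem *sem)
	{
		/* mirrors osq_is_locked(): non-zero tail => active spinner */
		return atomic_load(&sem->spinner_tail) != 0;
	}

	static void do_wakeup(struct fake_sem *sem)
	{
		(void)sem;      /* walk the wait list and wake waiters (elided) */
	}

	static void wake_path(struct fake_sem *sem)
	{
		if (has_spinner(sem)) {
			/*
			 * Fence stands in for smp_rmb(): consult the spinner
			 * state before probing the wait_lock.
			 */
			atomic_thread_fence(memory_order_acquire);
			if (pthread_spin_trylock(&sem->wait_lock) != 0)
				return; /* busy: the spinner does the wakeup later */
		} else {
			pthread_spin_lock(&sem->wait_lock);
		}
		do_wakeup(sem);
		pthread_spin_unlock(&sem->wait_lock);
	}

The trylock is safe only because a present spinner guarantees at least
one later trylock attempt on the rwsem, so skipping the wakeup here
cannot be a lost wakeup; the memory-ordering argument is spelled out in
the comment added to rwsem_wake() below.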

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
Acked-by: Jason Low <jason.low2@hp.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Douglas Hatch <doug.hatch@hp.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Scott J Norton <scott.norton@hp.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1430428337-16802-2-git-send-email-Waiman.Long@hp.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
include/linux/osq_lock.h
kernel/locking/rwsem-xadd.c

index 3a6490e81b2856821ca190f438c039136a7fa207..703ea5c30a33f84bd60fcc19415b48c53084f5c7 100644 (file)
@@ -32,4 +32,9 @@ static inline void osq_lock_init(struct optimistic_spin_queue *lock)
 extern bool osq_lock(struct optimistic_spin_queue *lock);
 extern void osq_unlock(struct optimistic_spin_queue *lock);
 
+static inline bool osq_is_locked(struct optimistic_spin_queue *lock)
+{
+       return atomic_read(&lock->tail) != OSQ_UNLOCKED_VAL;
+}
+
 #endif
index 3417d0172a5d2e7cd69460ed4ef96c02f6c578d0..0f189714e457016ba7801c788a44673907c7746b 100644 (file)
@@ -409,11 +409,24 @@ done:
        return taken;
 }
 
+/*
+ * Return true if the rwsem has active spinner
+ */
+static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
+{
+       return osq_is_locked(&sem->osq);
+}
+
 #else
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 {
        return false;
 }
+
+static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
+{
+       return false;
+}
 #endif
 
 /*
@@ -496,7 +509,38 @@ struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
 {
        unsigned long flags;
 
+       /*
+        * If a spinner is present, it is not necessary to do the wakeup.
+        * Try to do wakeup only if the trylock succeeds to minimize
+        * spinlock contention which may introduce too much delay in the
+        * unlock operation.
+        *
+        *    spinning writer           up_write/up_read caller
+        *    ---------------           -----------------------
+        * [S]   osq_unlock()           [L]   osq
+        *       MB                           RMB
+        * [RmW] rwsem_try_write_lock() [RmW] spin_trylock(wait_lock)
+        *
+        * Here, it is important to make sure that there won't be a missed
+        * wakeup while the rwsem is free and the only spinning writer goes
+        * to sleep without taking the rwsem. Even when the spinning writer
+        * is just going to break out of the waiting loop, it will still do
+        * a trylock in rwsem_down_write_failed() before sleeping. IOW, if
+        * rwsem_has_spinner() is true, it will guarantee at least one
+        * trylock attempt on the rwsem later on.
+        */
+       if (rwsem_has_spinner(sem)) {
+               /*
+                * The smp_rmb() here is to make sure that the spinner
+                * state is consulted before reading the wait_lock.
+                */
+               smp_rmb();
+               if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
+                       return sem;
+               goto locked;
+       }
        raw_spin_lock_irqsave(&sem->wait_lock, flags);
+locked:
 
        /* do nothing if list empty */
        if (!list_empty(&sem->wait_list))