signals: avoid unnecessary taking of sighand->siglock
author Waiman Long <Waiman.Long@hpe.com>
Wed, 14 Dec 2016 23:04:10 +0000 (15:04 -0800)
committer Linus Torvalds <torvalds@linux-foundation.org>
Thu, 15 Dec 2016 00:04:07 +0000 (16:04 -0800)
When running a certain database workload on a high-end system with many
CPUs, it was found that spinlock contention in the sigprocmask syscalls
accounted for a significant portion of the overall CPU cycles, as shown
in the perf profile below.

  9.30%  9.30%  905387  dataserver  /proc/kcore 0x7fff8163f4d2  [k] _raw_spin_lock_irq
            |
            ---_raw_spin_lock_irq
               |
               |--99.34%-- __set_current_blocked
               |          sigprocmask
               |          sys_rt_sigprocmask
               |          system_call_fastpath
               |          |
               |          |--50.63%-- __swapcontext
               |          |          |
               |          |          |--99.91%-- upsleepgeneric
               |          |
               |          |--49.36%-- __setcontext
               |          |          ktskRun

Looking further into the swapcontext() function in glibc, it was found
that the function always calls sigprocmask() without checking whether
the signal mask has actually changed.

A check was added to the __set_current_blocked() function to avoid taking
the sighand->siglock spinlock if there is no change in the signal mask.
This will prevent unneeded spinlock contention when many threads are
trying to call sigprocmask().

With this patch applied, the spinlock contention in sigprocmask() was
gone.

Link: http://lkml.kernel.org/r/1474979209-11867-1-git-send-email-Waiman.Long@hpe.com
Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Stas Sergeev <stsp@list.ru>
Cc: Scott J Norton <scott.norton@hpe.com>
Cc: Douglas Hatch <doug.hatch@hpe.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
include/linux/signal.h
kernel/signal.c

index b63f63eaa39c5ee2b904eedd5cad91755215c843..5308304993bea584105a9747d85c9abcbadcc5da 100644
@@ -97,6 +97,23 @@ static inline int sigisemptyset(sigset_t *set)
        }
 }
 
+static inline int sigequalsets(const sigset_t *set1, const sigset_t *set2)
+{
+       switch (_NSIG_WORDS) {
+       case 4:
+               return  (set1->sig[3] == set2->sig[3]) &&
+                       (set1->sig[2] == set2->sig[2]) &&
+                       (set1->sig[1] == set2->sig[1]) &&
+                       (set1->sig[0] == set2->sig[0]);
+       case 2:
+               return  (set1->sig[1] == set2->sig[1]) &&
+                       (set1->sig[0] == set2->sig[0]);
+       case 1:
+               return  set1->sig[0] == set2->sig[0];
+       }
+       return 0;
+}
+
 #define sigmask(sig)   (1UL << ((sig) - 1))
 
 #ifndef __HAVE_ARCH_SIG_SETOPS
index 29a410780aa912f4daacbba5882c8b190501d8e7..ae60996fedffcf9e79ab208d650af7d155cff694 100644
@@ -2491,6 +2491,13 @@ void __set_current_blocked(const sigset_t *newset)
 {
        struct task_struct *tsk = current;
 
+       /*
+        * If the signal mask hasn't changed, there is nothing we need
+        * to do.  current->blocked should not be modified by other tasks.
+        */
+       if (sigequalsets(&tsk->blocked, newset))
+               return;
+
        spin_lock_irq(&tsk->sighand->siglock);
        __set_task_blocked(tsk, newset);
        spin_unlock_irq(&tsk->sighand->siglock);