sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem
author     Tejun Heo <tj@kernel.org>
           Wed, 16 Sep 2015 16:53:17 +0000 (12:53 -0400)
committer  Tejun Heo <tj@kernel.org>
           Wed, 16 Sep 2015 16:53:17 +0000 (12:53 -0400)
commit     1ed1328792ff46e4bb86a3d7f7be2971f4549f6c
tree       53719cfc0bf81bc7e6fb522944553d9b4fa36cbf
parent     0c986253b939cc14c69d4adbe2b4121bdf4aa220
sched, cgroup: replace signal_struct->group_rwsem with a global percpu_rwsem

Note: This commit was originally committed as d59cfc09c32a but got
      reverted by 0c986253b939 due to the performance regression from
      the percpu_rwsem write down/up operations added to cgroup task
      migration path.  percpu_rwsem changes which alleviate the
      performance issue are pending for the v4.4-rc1 merge window.
      Re-apply.

The cgroup side of threadgroup locking uses signal_struct->group_rwsem
to synchronize against threadgroup changes.  This per-process rwsem
adds small overhead to the thread creation, exit and exec paths, forces
cgroup code paths to do a lock-verify-unlock-retry dance in a couple of
places, and makes it impossible to perform operations across multiple
processes atomically.
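
For reference, the pre-patch arrangement looked roughly like the sketch
below; the helper bodies are an approximation of the old code, not a
verbatim quote.  Every reader takes the rwsem of its own process, while
the migration writer has to lock the presumed leader and then re-verify
it.

/* Sketch of the old, per-process scheme (approximate, not verbatim). */
#include <linux/sched.h>
#include <linux/rwsem.h>

/* Reader side: wrapped around thread creation, exit and exec. */
static inline void threadgroup_change_begin(struct task_struct *tsk)
{
        down_read(&tsk->signal->group_rwsem);
}

static inline void threadgroup_change_end(struct task_struct *tsk)
{
        up_read(&tsk->signal->group_rwsem);
}

/*
 * Writer side (cgroup migration): down_write() the presumed leader's
 * signal->group_rwsem, then verify that it is still the threadgroup
 * leader; if not, unlock and retry -- the lock-verify-unlock-retry
 * dance mentioned above.
 */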

This patch replaces signal_struct->group_rwsem with a global
percpu_rwsem, cgroup_threadgroup_rwsem, which is cheaper on the reader
side and contained in cgroups proper.  The conversion is one-to-one.
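
Roughly, the new reader side reduces to the following sketch
(simplified from the patch; see the cgroup-defs.h and cgroup.c changes
listed below for the actual code):

/* New scheme (sketch): one global percpu_rwsem for all threadgroups. */
#include <linux/percpu-rwsem.h>

/* kernel/cgroup.c -- initialized once with percpu_init_rwsem() */
struct percpu_rw_semaphore cgroup_threadgroup_rwsem;

/* include/linux/cgroup-defs.h -- reader side, used from thread paths */
static inline void cgroup_threadgroup_change_begin(struct task_struct *tsk)
{
        percpu_down_read(&cgroup_threadgroup_rwsem);
}

static inline void cgroup_threadgroup_change_end(struct task_struct *tsk)
{
        percpu_up_read(&cgroup_threadgroup_rwsem);
}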

This does make the writer side heavier and lowers the granularity;
however, cgroup process migration is a fairly cold path, we do want to
optimize thread operations over it, and cgroup migration operations
don't take enough time for the lower granularity to matter.
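
Concretely, the writer side now brackets the whole migration with a
write lock on the global rwsem, along these lines (a sketch with error
handling and task lookup omitted; cgroup_attach_task() stands in for
the attach step):

/* kernel/cgroup.c -- migration path, writer side (sketch) */
percpu_down_write(&cgroup_threadgroup_rwsem);
ret = cgroup_attach_task(cgrp, tsk, threadgroup);
percpu_up_write(&cgroup_threadgroup_rwsem);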

Signed-off-by: Tejun Heo <tj@kernel.org>
Link: http://lkml.kernel.org/g/55F8097A.7000206@de.ibm.com
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
include/linux/cgroup-defs.h
include/linux/init_task.h
include/linux/sched.h
kernel/cgroup.c
kernel/fork.c