RFC: FROMLIST: cgroup: avoid synchronize_sched() in __cgroup_procs_write()
author     Peter Zijlstra <peterz@infradead.org>
           Thu, 11 Aug 2016 16:54:13 +0000 (18:54 +0200)
committer  Dmitry Shmidt <dimitrysh@google.com>
           Fri, 26 Aug 2016 16:37:43 +0000 (09:37 -0700)
commit     0c3240a1ef2e840aaa17f593326e3642bc857aa7
tree       50e2d3e85f71543d0a8c4724e944ac4d2c96cc94
parent     3228c5eb7af2b4cb981706b88ed3c3e81ab8e80a
RFC: FROMLIST: cgroup: avoid synchronize_sched() in __cgroup_procs_write()

The current percpu-rwsem read side is entirely free of serializing insns
at the cost of having a synchronize_sched() in the write path.
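For context, a rough sketch of the write side (simplified from the 4.4 code;
lockdep annotations and the wait for in-flight readers are elided) shows where
that grace period comes from:

    void percpu_down_write(struct percpu_rw_semaphore *sem)
    {
            /*
             * Force readers off the barrier-free fast path.  In the default
             * mode this is rcu_sync_enter(), which blocks in
             * synchronize_sched() waiting for a full RCU-sched grace period.
             */
            rcu_sync_enter(&sem->rss);

            down_write(&sem->rw_sem);
            /* ... wait for fast-path readers still in flight to drain ... */
    }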

The latency of the synchronize_sched() is too high for cgroups. Commit
1ed1328792ff describes the write path as a fairly cold path, but this is not
the case for Android, which moves tasks to the foreground cgroup and back
around binder IPC calls from foreground processes to background processes, so
the path is significantly hotter than human-initiated operations alone would
make it.
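For illustration only (the /dev/cpuctl paths and cgroup names below are
assumptions about the Android userspace, not part of this patch), each such
move is a write of a TID to a cgroup attach file, which lands in
__cgroup_procs_write() and, before this change, behind a synchronize_sched():

    #include <stdio.h>

    /* Hypothetical helper, roughly what Android's scheduling-policy code does. */
    static void move_task(const char *cgroup_dir, int tid)
    {
            char path[256];
            FILE *f;

            snprintf(path, sizeof(path), "%s/tasks", cgroup_dir);
            f = fopen(path, "w");
            if (!f)
                    return;
            fprintf(f, "%d\n", tid);        /* handled by __cgroup_procs_write() */
            fclose(f);
    }

    /*
     * e.g. move_task("/dev/cpuctl", tid) when a binder call arrives from a
     * foreground process and move_task("/dev/cpuctl/bg_non_interactive", tid)
     * when it returns -- roughly two rwsem writers per transaction.
     */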

Switch cgroup_threadgroup_rwsem to the slow mode for now to avoid the problem;
hopefully it should not be that slow after commit 80127a39681b
("locking/percpu-rwsem: Optimize readers and reduce global impact").

We could just add rcu_sync_enter() to cgroup_init(), but we do not want
another synchronize_sched() at boot time, so this patch adds a new helper
which doesn't block but currently can only be called before the first use.
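A sketch of that helper and its use (upstream this change names it
rcu_sync_enter_start(); the body below approximates the idea and should not be
read as the exact diff): it marks the rcu_sync as if a grace period had
already passed, which is only safe while nothing else can be using the
structure, hence the call from cgroup_init() at boot:

    /* kernel/rcu/sync.c: non-blocking "enter", valid only before first use. */
    void rcu_sync_enter_start(struct rcu_sync *rsp)
    {
            rsp->gp_count++;
            rsp->gp_state = GP_PASSED;
    }

    /*
     * kernel/cgroup.c, cgroup_init():
     *
     *     rcu_sync_enter_start(&cgroup_threadgroup_rwsem.rss);
     *
     * From then on every reader of cgroup_threadgroup_rwsem takes the slow
     * path and percpu_down_write() in __cgroup_procs_write() no longer ends
     * up in synchronize_sched().
     */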

Cc: Tejun Heo <tj@kernel.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Reported-by: John Stultz <john.stultz@linaro.org>
Reported-by: Dmitry Shmidt <dimitrysh@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
[jstultz: backported to 4.4]
Change-Id: I34aa9c394d3052779b56976693e96d861bd255f2
Mailing-list-URL: https://lkml.org/lkml/2016/8/11/557
Signed-off-by: John Stultz <john.stultz@linaro.org>
include/linux/rcu_sync.h
kernel/cgroup.c
kernel/rcu/sync.c