From: Vikram Mulukutla
Date: Fri, 22 Sep 2017 00:24:24 +0000 (-0700)
Subject: Revert "ANDROID: sched/tune: Initialize raw_spin_lock in boosted_groups"
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=8e2a0d8b27c4951792732fbf9c859345ddcf6af3;p=GitHub%2Fmoto-9609%2Fandroid_kernel_motorola_exynos9610.git

Revert "ANDROID: sched/tune: Initialize raw_spin_lock in boosted_groups"

This reverts commit c5616f2f874faa20b59b116177b99bf3948586df.

If we re-initialize the per-cpu boostgroup spinlock every time we add a
new boosted cgroup, we can easily wipe out (re-init) a spinlock struct
while another CPU is inside a critical section protected by it. Adding
a cgroup should only set up the per-cpu boostgroup data; the spinlock
initialization need only happen once, and we already do that in a
postcore_initcall.

For example:

-------- CPU 0 --------        | -------- CPU 1 --------
cgroupX boost group added      |
schedtune_enqueue_task         |
  acquires(bg->lock)           | cgroupY boost group added
                               |   for_each_cpu()
                               |     raw_spin_lock_init(bg->lock)
  releases(bg->lock)           |
  BUG (already unlocked)       |
                               |

This results in the following BUG from the debug spinlock code:

BUG: spinlock already unlocked on CPU#5, rcuop/6/68

Bug: 32668852
Change-Id: I3016702780b461a0cd95e26c538cd18df27d6316
Signed-off-by: Vikram Mulukutla
---

diff --git a/kernel/sched/tune.c b/kernel/sched/tune.c
index 2e6ef5faaad0..57fd9173d6b5 100644
--- a/kernel/sched/tune.c
+++ b/kernel/sched/tune.c
@@ -451,7 +451,6 @@ schedtune_boostgroup_init(struct schedtune *st)
 		bg = &per_cpu(cpu_boost_groups, cpu);
 		bg->group[st->idx].boost = 0;
 		bg->group[st->idx].tasks = 0;
-		raw_spin_lock_init(&bg->lock);
 	}
 
 	return 0;
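
For reference, below is a minimal, self-contained sketch of the split this
revert restores: the per-cpu lock is initialized exactly once at boot, and
the per-cgroup path touches only the group slot. This is not the upstream
tune.c code; the boost_groups layout, the group[] array size, and the
schedtune_init_locks() / schedtune_boostgroup_init_sketch() names are
simplified assumptions based on the commit text.

#include <linux/init.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

/* Simplified stand-in for the real per-cpu boost group state. */
struct boost_groups {
	raw_spinlock_t lock;		/* protects group[] below */
	struct {
		int boost;
		int tasks;
	} group[8];			/* array size is an assumption */
};

static DEFINE_PER_CPU(struct boost_groups, cpu_boost_groups);

/* Runs exactly once at boot: the only place the lock is initialized. */
static int __init schedtune_init_locks(void)
{
	int cpu;

	for_each_possible_cpu(cpu)
		raw_spin_lock_init(&per_cpu(cpu_boost_groups, cpu).lock);

	return 0;
}
postcore_initcall(schedtune_init_locks);

/* Runs each time a boosted cgroup is added: it touches only the group
 * slot, never the lock, so a concurrent schedtune_enqueue_task() that
 * holds bg->lock on another CPU cannot have its lock state wiped. */
static int schedtune_boostgroup_init_sketch(int idx)
{
	struct boost_groups *bg;
	int cpu;

	for_each_possible_cpu(cpu) {
		bg = &per_cpu(cpu_boost_groups, cpu);
		bg->group[idx].boost = 0;
		bg->group[idx].tasks = 0;
	}

	return 0;
}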