This can be reproduced by running rt-migrate-test:
WARNING: CPU: 2 PID: 2195 at kernel/locking/lockdep.c:3670 lock_unpin_lock()
unpinning an unpinned lock
...
Call Trace:
dump_stack()
__warn()
warn_slowpath_fmt()
lock_unpin_lock()
__balance_callback()
__schedule()
schedule()
futex_wait_queue_me()
futex_wait()
do_futex()
SyS_futex()
do_syscall_64()
entry_SYSCALL64_slow_path()
Revert the rq_lock_irqsave() usage here; the whole point of
balance_callback() was to allow dropping rq->lock.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 8a8c69c32778 ("sched/core: Add rq->lock wrappers")
Link: http://lkml.kernel.org/r/1489718719-3951-1-git-send-email-wanpeng.li@hotmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
static void __balance_callback(struct rq *rq)
{
struct callback_head *head, *next;
void (*func)(struct rq *rq);
- struct rq_flags rf;
+ unsigned long flags;
- rq_lock_irqsave(rq, &rf);
+ raw_spin_lock_irqsave(&rq->lock, flags);
head = rq->balance_callback;
rq->balance_callback = NULL;
	while (head) {
		func = (void (*)(struct rq *))head->func;
		next = head->next;
		head->next = NULL;
		head = next;

		func(rq);
	}
- rq_unlock_irqrestore(rq, &rf);
+ raw_spin_unlock_irqrestore(&rq->lock, flags);
}
static inline void balance_callback(struct rq *rq)