[PATCH] lockdep: core, fix rq-lock handling on __ARCH_WANT_UNLOCKED_CTXSW
author Ingo Molnar <mingo@elte.hu>
Fri, 14 Jul 2006 07:24:27 +0000 (00:24 -0700)
committer Linus Torvalds <torvalds@g5.osdl.org>
Sat, 15 Jul 2006 04:53:55 +0000 (21:53 -0700)
On platforms that have __ARCH_WANT_UNLOCKED_CTXSW set and want to implement
lock validator support there's a bug in rq->lock handling: in this case we
don't 'carry over' the runqueue lock into the next task - but we still did a
spin_release() of it.  Fix this by making the spin_release() in
context_switch() dependent on !__ARCH_WANT_UNLOCKED_CTXSW.

(Reported by Ralf Baechle on MIPS, which has __ARCH_WANT_UNLOCKED_CTXSW.
This fixes a lockdep-internal BUG message on such platforms.)
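(To illustrate the difference, here is a simplified sketch of the two
context-switch locking conventions, loosely modeled on the
prepare_lock_switch() helper in kernel/sched.c of that era - a hedged
sketch, not the verbatim source:)

#ifndef __ARCH_WANT_UNLOCKED_CTXSW
/* rq->lock is held across switch_to() and released by the *next* task: */
static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
{
	/* keep holding rq->lock; the next task will unlock it */
}
#else
/* rq->lock is dropped *before* switch_to(); nothing is carried over: */
static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
{
	spin_unlock(&rq->lock);
}
#endif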

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
kernel/sched.c

index d714611f1691fb1da042bedcf748e5cf38854d3d..e9a0b61f12ab3c2679e283f5c0ed26a18d9bbabb 100644 (file)
@@ -1788,7 +1788,15 @@ context_switch(struct rq *rq, struct task_struct *prev,
                WARN_ON(rq->prev_mm);
                rq->prev_mm = oldmm;
        }
+       /*
+        * The runqueue lock will be released by the next
+        * task (which is an invalid locking op, but in the case
+        * of the scheduler it's an obvious special-case), so we
+        * do an early lockdep release here:
+        */
+#ifndef __ARCH_WANT_UNLOCKED_CTXSW
        spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
+#endif
 
        /* Here we just switch the register state and the stack. */
        switch_to(prev, next, prev);
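
(For context, the counterpart on the other side of the switch: when the
lock *is* carried over, the next task has to re-establish the lockdep
state that prev gave up via the early spin_release() above, before it
can legitimately unlock rq->lock.  A hedged sketch of what
finish_lock_switch() does in kernel/sched.c of that era - simplified,
not verbatim:)

static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
{
#ifdef CONFIG_DEBUG_SPINLOCK
	/* valid special-case: a task other than the lock's taker unlocks it */
	rq->lock.owner = current;
#endif
	/*
	 * Re-acquire the lockdep state for the lock that was 'carried
	 * over' from prev into the current task:
	 */
	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
	spin_unlock_irq(&rq->lock);
}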