sched/core: Avoid _cond_resched() for PREEMPT=y
author Peter Zijlstra <peterz@infradead.org>
Mon, 19 Sep 2016 10:57:53 +0000 (12:57 +0200)
committer Ingo Molnar <mingo@kernel.org>
Thu, 22 Sep 2016 12:53:46 +0000 (14:53 +0200)
On fully preemptible kernels _cond_resched() is pointless, so avoid
emitting any code for it.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
include/linux/sched.h
kernel/sched/core.c

index f00ee8e90a294bd8b9dadd01c0517f1d5cbe4e2d..b99fcd1b341e6a00d6c616529cc557c933bcaaea 100644
@@ -3209,7 +3209,11 @@ static inline int signal_pending_state(long state, struct task_struct *p)
  * cond_resched_lock() will drop the spinlock before scheduling,
  * cond_resched_softirq() will enable bhs before scheduling.
  */
+#ifndef CONFIG_PREEMPT
 extern int _cond_resched(void);
+#else
+static inline int _cond_resched(void) { return 0; }
+#endif
 
 #define cond_resched() ({                      \
        ___might_sleep(__FILE__, __LINE__, 0);  \
index b2ec53c1a9746a54ac58d38509c927928b98fd05..d7babcc7cb76e8311399de29b9db708ed5c2dbb9 100644
@@ -4883,6 +4883,7 @@ SYSCALL_DEFINE0(sched_yield)
        return 0;
 }
 
+#ifndef CONFIG_PREEMPT
 int __sched _cond_resched(void)
 {
        if (should_resched(0)) {
@@ -4892,6 +4893,7 @@ int __sched _cond_resched(void)
        return 0;
 }
 EXPORT_SYMBOL(_cond_resched);
+#endif
 
 /*
  * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
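
A minimal standalone sketch of the pattern this patch relies on: when a
config option makes a helper a no-op, defining it as a static inline stub
returning a constant lets the compiler drop the call at every call site.
CONFIG_PREEMPT and _cond_resched() are taken from the diff above; the
CONFIG_PREEMPT define and the copy_loop() caller below are purely
illustrative, not kernel code.

#define CONFIG_PREEMPT 1        /* assume a fully preemptible build for this sketch */

#ifdef CONFIG_PREEMPT
static inline int _cond_resched(void) { return 0; }    /* folds away, no code emitted */
#else
extern int _cond_resched(void);                         /* real out-of-line reschedule check */
#endif

static void copy_loop(void)
{
        int i;

        for (i = 0; i < 1000; i++) {
                /*
                 * On CONFIG_PREEMPT=y the stub is inlined to a constant 0,
                 * so this loop body contains no call at all; the fully
                 * preemptible kernel already reschedules from preempt_enable()
                 * and interrupt return, which is why the explicit check is
                 * pointless there.
                 */
                _cond_resched();
        }
}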