sched/numa: Override part of migrate_degrades_locality() when idle balancing
author Rik van Riel <riel@redhat.com>
Fri, 23 Jun 2017 16:55:27 +0000 (12:55 -0400)
committer Ingo Molnar <mingo@kernel.org>
Sat, 24 Jun 2017 06:57:46 +0000 (08:57 +0200)
Several tests in the NAS benchmark seem to run a lot slower with
NUMA balancing enabled than with NUMA balancing disabled. The
slower run time corresponds to increased idle time.

Overriding the final test in migrate_degrades_locality() (but still
doing the other NUMA tests first) seems to improve the performance
of those benchmarks.
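
For context (not part of this patch): the caller, can_migrate_task(),
treats a -1 return from migrate_degrades_locality() as "no NUMA
opinion" and falls back to the generic cache-hotness test, so an idle
CPU can still pull the task. A paraphrased sketch of that caller logic
in kernel/sched/fair.c, shown only to illustrate what returning -1 means:

	/*
	 * migrate_degrades_locality() returns:
	 *    1 -> migration degrades NUMA locality
	 *    0 -> migration improves NUMA locality
	 *   -1 -> no opinion; fall back to cache hotness
	 */
	tsk_cache_hot = migrate_degrades_locality(p, env);
	if (tsk_cache_hot == -1)
		tsk_cache_hot = task_hot(p, env);

	if (tsk_cache_hot <= 0 ||
	    env->sd->nr_balance_failed > env->sd->cache_nice_tries)
		return 1;	/* allow the migration */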

Reported-by: Jirka Hladky <jhladky@redhat.com>
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/20170623165530.22514-2-riel@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index 694c258b8771c578aaec87153429d2810f5557d6..6e0c0524131ee9cad7459965d0b573d88810754e 100644 (file)
@@ -6688,6 +6688,10 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
        if (dst_nid == p->numa_preferred_nid)
                return 0;
 
+       /* Leaving a core idle is often worse than degrading locality. */
+       if (env->idle != CPU_NOT_IDLE)
+               return -1;
+
        if (numa_group) {
                src_faults = group_faults(p, src_nid);
                dst_faults = group_faults(p, dst_nid);