ANDROID: sched: Prevent unnecessary active balance of single task in sched group
author		Morten Rasmussen <morten.rasmussen@arm.com>
		Thu, 2 Jul 2015 16:16:34 +0000 (17:16 +0100)
committer	Chris Redpath <chris.redpath@arm.com>
		Thu, 14 Dec 2017 21:41:02 +0000 (21:41 +0000)
Scenarios where the busiest group has just one task and the local group
is idle, on topologies with sched groups containing different numbers of
cpus, manage to dodge all load-balance bailout conditions, resulting in
the nr_balance_failed counter being incremented. This eventually causes a
pointless active migration of the task. This patch prevents this by not
incrementing the counter when the busiest group has only one task.
ASYM_PACKING migrations and migrations due to reduced capacity should
still take place as these are explicitly captured by
need_active_balance().
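
To make the effect of the three hunks easier to see in one place, here is
a standalone toy model in plain C (not kernel code): a simplified lb_env
carries the busiest group's task count, and the failure counter is only
bumped when that group has more than one runnable task. The helper name
record_balance_failure and the example values are purely illustrative; in
the kernel the guard sits inside load_balance() under the existing
idle != CPU_NEWLY_IDLE check, as the diff below shows.

	#include <stdio.h>

	/* Toy stand-ins for the kernel types touched by the patch. */
	enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE, CPU_NEWLY_IDLE };

	struct lb_env {
		enum cpu_idle_type	idle;
		unsigned int		src_grp_nr_running;	/* new field from the patch */
	};

	struct sched_domain {
		unsigned int		nr_balance_failed;
	};

	/*
	 * Mirrors the guarded increment in load_balance(): the failure
	 * counter is only bumped when the busiest group has more than one
	 * runnable task, so a lone task can no longer accumulate failures
	 * that would eventually trigger an active migration.
	 */
	static void record_balance_failure(struct sched_domain *sd, struct lb_env *env)
	{
		if (env->idle != CPU_NEWLY_IDLE)
			if (env->src_grp_nr_running > 1)
				sd->nr_balance_failed++;
	}

	int main(void)
	{
		struct sched_domain sd = { .nr_balance_failed = 0 };
		struct lb_env lone  = { .idle = CPU_IDLE, .src_grp_nr_running = 1 };
		struct lb_env multi = { .idle = CPU_IDLE, .src_grp_nr_running = 2 };

		record_balance_failure(&sd, &lone);	/* single task: counter stays at 0 */
		record_balance_failure(&sd, &multi);	/* >1 task: counter becomes 1 */

		printf("nr_balance_failed = %u\n", sd.nr_balance_failed);
		return 0;
	}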

A better solution would be to not attempt the load-balance in the first
place, but that requires significant changes to the order of bailout
conditions and statistics gathering.

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
Change-Id: I6a2f51017c49614d5e4224f0e16f240ad8af6d0f
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
kernel/sched/fair.c

index 5c09ddf8c8321ca1aa18a6ff7f78d3d2937ec92e..803eca831bb8c8fbd12de6503ce3c0adbd5f9eb1 100644
@@ -6580,6 +6580,7 @@ struct lb_env {
        int                     new_dst_cpu;
        enum cpu_idle_type      idle;
        long                    imbalance;
+       unsigned int            src_grp_nr_running;
        /* The set of CPUs under consideration for load-balancing */
        struct cpumask          *cpus;
 
@@ -7629,6 +7630,8 @@ next_group:
        if (env->sd->flags & SD_NUMA)
                env->fbq_type = fbq_classify_group(&sds->busiest_stat);
 
+       env->src_grp_nr_running = sds->busiest_stat.sum_nr_running;
+
        if (!env->sd->parent) {
                /* update overload indicator if we are at root domain */
                if (env->dst_rq->rd->overload != overload)
@@ -8251,7 +8254,8 @@ more_balance:
                 * excessive cache_hot migrations and active balances.
                 */
                if (idle != CPU_NEWLY_IDLE)
-                       sd->nr_balance_failed++;
+                       if (env.src_grp_nr_running > 1)
+                               sd->nr_balance_failed++;
 
                if (need_active_balance(&env)) {
                        unsigned long flags;