From: Srikar Dronamraju
Date: Wed, 6 Apr 2016 13:17:40 +0000 (+0530)
Subject: sched/fair: Fix asym packing to select correct CPU
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=1f621e028baf391f6684003e32e009bc934b750f;p=GitHub%2Fmoto-9609%2Fandroid_kernel_motorola_exynos9610.git

sched/fair: Fix asym packing to select correct CPU

When asymmetric packing is set in the sched_domain and the target CPU is
busy, update_sd_pick_busiest() may not select the busiest runqueue.
When the target CPU is busy, find_busiest_group() will ignore the asym
packing checks and may continue to load balance using the currently
selected, not-the-busiest runqueue as the source runqueue.
Selecting the busiest runqueue as the source when the target CPU is busy
should result in a much better load balance.

Also, when the target CPU is not busy and asymmetric packing is set in
the sd, select the higher-numbered CPU as the source CPU for load
balancing. While doing this change, move the check for whether the
target CPU is busy into check_asym_packing().

The extent of the performance benefit from this change decreases with
increasing load. However, there is a benefit in undercommit as well as
overcommit conditions.

1. Record per second ebizzy (32 threads) on a 64 CPU power 7 box. (5 iterations)

		4.6.0-rc2
	Testcase:          Min          Max          Avg       StdDev
	  ebizzy:   5223767.00  10368236.00   7946971.00   1753094.76

		4.6.0-rc2+asym-changes
	Testcase:          Min          Max          Avg       StdDev      %Change
	  ebizzy:   8617191.00  13872356.00  11383980.00   1783400.89      +24.78%

2. Record per second ebizzy (64 threads) on a 64 CPU power 7 box. (5 iterations)

		4.6.0-rc2
	Testcase:          Min          Max          Avg       StdDev
	  ebizzy:   6497666.00  18399783.00  10818093.20   4051452.08

		4.6.0-rc2+asym-changes
	Testcase:          Min          Max          Avg       StdDev      %Change
	  ebizzy:   7567365.00  19456937.00  11674063.60   4295407.48       +4.40%

3. Record per second ebizzy (128 threads) on a 64 CPU power 7 box. (5 iterations)
		4.6.0-rc2
	Testcase:          Min          Max          Avg       StdDev
	  ebizzy:  37073983.00  40341911.00  38776241.80   1259766.82

		4.6.0-rc2+asym-changes
	Testcase:          Min          Max          Avg       StdDev      %Change
	  ebizzy:  38030399.00  41333378.00  39827404.40   1255001.86       +2.54%

Signed-off-by: Srikar Dronamraju
Signed-off-by: Peter Zijlstra (Intel)
Cc: Gautham R Shenoy
Cc: Michael Neuling
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Vaidyanathan Srinivasan
Link: http://lkml.kernel.org/r/1459948660-16073-1-git-send-email-srikar@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar
---

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b8cc1c35cd7c..6e371f43fc80 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6679,6 +6679,9 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	if (!(env->sd->flags & SD_ASYM_PACKING))
 		return true;
 
+	/* No ASYM_PACKING if target cpu is already busy */
+	if (env->idle == CPU_NOT_IDLE)
+		return true;
 	/*
 	 * ASYM_PACKING needs to move all the work to the lowest
 	 * numbered CPUs in the group, therefore mark all groups
@@ -6688,7 +6691,8 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		if (!sds->busiest)
 			return true;
 
-		if (group_first_cpu(sds->busiest) > group_first_cpu(sg))
+		/* Prefer to move from highest possible cpu's work */
+		if (group_first_cpu(sds->busiest) < group_first_cpu(sg))
 			return true;
 	}
 
@@ -6834,6 +6838,9 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
 	if (!(env->sd->flags & SD_ASYM_PACKING))
 		return 0;
 
+	if (env->idle == CPU_NOT_IDLE)
+		return 0;
+
 	if (!sds->busiest)
 		return 0;
 
@@ -7026,8 +7033,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	busiest = &sds.busiest_stat;
 
 	/* ASYM feature bypasses nice load balance check */
-	if ((env->idle == CPU_IDLE || env->idle == CPU_NEWLY_IDLE) &&
-	    check_asym_packing(env, &sds))
+	if (check_asym_packing(env, &sds))
 		return sds.busiest;
 
 	/* There is no busy sibling group to pull tasks from */
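
For reference, below is a minimal, compilable userspace sketch of the
decision flow that check_asym_packing() has after this patch. It is an
illustration, not part of the patch: struct env_model, struct stats_model
and check_asym_packing_model() are hypothetical stand-ins for the kernel's
struct lb_env, struct sd_lb_stats and check_asym_packing(), and the
imbalance calculation performed by the real function is omitted.

/*
 * Illustrative sketch only (not part of the patch): a simplified,
 * userspace model of the post-patch check_asym_packing() decision.
 * The structures and the SD_ASYM_PACKING value here are stand-ins,
 * not the kernel definitions.
 */
#include <stdio.h>

#define SD_ASYM_PACKING	0x1	/* stand-in for the kernel flag bit */

enum cpu_idle_type { CPU_IDLE, CPU_NOT_IDLE, CPU_NEWLY_IDLE };

struct env_model {
	unsigned int		sd_flags;	/* models env->sd->flags */
	enum cpu_idle_type	idle;		/* models env->idle */
	int			dst_cpu;	/* target CPU of this balance */
};

struct stats_model {
	int	busiest_first_cpu;	/* first CPU of busiest group, -1 if none */
};

/* Mirrors the post-patch decision flow of check_asym_packing(). */
static int check_asym_packing_model(const struct env_model *env,
				    const struct stats_model *sds)
{
	if (!(env->sd_flags & SD_ASYM_PACKING))
		return 0;

	/* New with this patch: no ASYM_PACKING if the target CPU is already busy. */
	if (env->idle == CPU_NOT_IDLE)
		return 0;

	if (sds->busiest_first_cpu < 0)
		return 0;

	/* Pack only onto a target numbered lower than the busiest group. */
	if (env->dst_cpu > sds->busiest_first_cpu)
		return 0;

	return 1;
}

int main(void)
{
	struct stats_model stats = { .busiest_first_cpu = 4 };
	struct env_model busy = { SD_ASYM_PACKING, CPU_NOT_IDLE, 0 };
	struct env_model idle = { SD_ASYM_PACKING, CPU_IDLE, 0 };

	/* Busy target CPU: asym packing is skipped (prints 0). */
	printf("busy target -> %d\n", check_asym_packing_model(&busy, &stats));
	/* Idle, low-numbered target CPU: asym packing applies (prints 1). */
	printf("idle target -> %d\n", check_asym_packing_model(&idle, &stats));
	return 0;
}

With SD_ASYM_PACKING set, a busy target CPU now short-circuits the check
itself, which is what lets find_busiest_group() fall through to the normal
busiest-group selection instead of taking the asym packing path.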