sched: Remove double calculation in fix_small_imbalance()
author Vincent Guittot <vincent.guittot@linaro.org>
Tue, 11 Mar 2014 16:26:06 +0000 (17:26 +0100)
committer Ingo Molnar <mingo@kernel.org>
Wed, 12 Mar 2014 09:49:00 +0000 (10:49 +0100)
The tmp value has already been calculated in:

  scaled_busy_load_per_task =
	(busiest->load_per_task * SCHED_POWER_SCALE) /
	busiest->group_power;

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1394555166-22894-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index f1eedae1e83e6f825e799964d7abd3ef613828e2..b301918ed51074636982705484c003a434e7d696 100644
@@ -6061,12 +6061,10 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
        pwr_now /= SCHED_POWER_SCALE;
 
        /* Amount of load we'd subtract */
-       tmp = (busiest->load_per_task * SCHED_POWER_SCALE) /
-               busiest->group_power;
-       if (busiest->avg_load > tmp) {
+       if (busiest->avg_load > scaled_busy_load_per_task) {
                pwr_move += busiest->group_power *
                            min(busiest->load_per_task,
-                               busiest->avg_load - tmp);
+                               busiest->avg_load - scaled_busy_load_per_task);
        }
 
        /* Amount of load we'd add */
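
For reference, a minimal user-space sketch of the pattern the patch relies on:
scaled_busy_load_per_task is computed once earlier in fix_small_imbalance() and
is simply reused in the "Amount of load we'd subtract" check instead of being
recomputed into tmp. The struct, the field values and the SCHED_POWER_SCALE
definition below are simplified assumptions for illustration only, not the
kernel's own definitions.

	#include <stdio.h>

	#define SCHED_POWER_SCALE 1024UL

	/* Illustrative stand-in for the relevant sg_lb_stats fields. */
	struct sg_stats_sketch {
		unsigned long load_per_task;
		unsigned long group_power;
		unsigned long avg_load;
	};

	static unsigned long min_ul(unsigned long a, unsigned long b)
	{
		return a < b ? a : b;
	}

	int main(void)
	{
		struct sg_stats_sketch busiest = {
			.load_per_task	= 512,
			.group_power	= 2048,
			.avg_load	= 900,
		};
		unsigned long pwr_move = 0;

		/* Computed once, as in the code above the modified hunk. */
		unsigned long scaled_busy_load_per_task =
			(busiest.load_per_task * SCHED_POWER_SCALE) /
			busiest.group_power;

		/* After the patch: reuse the value instead of recomputing tmp. */
		if (busiest.avg_load > scaled_busy_load_per_task)
			pwr_move += busiest.group_power *
				    min_ul(busiest.load_per_task,
					   busiest.avg_load - scaled_busy_load_per_task);

		printf("scaled_busy_load_per_task=%lu pwr_move=%lu\n",
		       scaled_busy_load_per_task, pwr_move);
		return 0;
	}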