BACKPORT: sched/fair: Update util_est before updating schedutil
Author:     Patrick Bellasi <patrick.bellasi@arm.com>
AuthorDate: Thu, 24 May 2018 14:10:23 +0000 (15:10 +0100)
Committer:  Patrick Bellasi <patrick.bellasi@arm.com>
CommitDate: Wed, 18 Jul 2018 10:28:58 +0000 (11:28 +0100)
When a task is enqueued, the estimated utilization of a CPU is updated
to better support the selection of the required frequency.

However, schedutil is (implicitly) updated by update_load_avg() which
always happens before util_est_{en,de}queue(), thus potentially
introducing a latency between estimated utilization updates and
frequency selections.

Let's update util_est at the beginning of enqueue_task_fair(),
which ensures that all schedutil updates see the most up-to-date
estimated utilization value for a CPU.

Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Acked-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Joel Fernandes <joelaf@google.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Steve Muckle <smuckle@google.com>
Fixes: 7f65ea42eb00 ("sched/fair: Add util_est on top of PELT")
Link: http://lkml.kernel.org/r/20180524141023.13765-3-patrick.bellasi@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[
 backport from upstream:
 commit 2539fc82aa9b ("sched/fair: Update util_est before updating schedutil")
]
Signed-off-by: Patrick Bellasi <patrick.bellasi@arm.com>
Change-Id: If84d40045bd283b0bbba2513a0aee0aa8c5058db

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 32cbe41c01e37491f9b011e601a954d839341708..68ab57d77c81d97f1807efe8d137674cefee0dc9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5164,6 +5164,14 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
        struct sched_entity *se = &p->se;
        int task_new = !(flags & ENQUEUE_WAKEUP);
 
+       /*
+        * The code below (indirectly) updates schedutil which looks at
+        * the cfs_rq utilization to select a frequency.
+        * Let's add the task's estimated utilization to the cfs_rq's
+        * estimated utilization, before we update schedutil.
+        */
+       util_est_enqueue(&rq->cfs, p);
+
        /*
         * If in_iowait is set, the code below may not trigger any cpufreq
         * utilization updates, so do it here explicitly with the IOWAIT flag
@@ -5230,7 +5238,6 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
                walt_inc_cumulative_runnable_avg(rq, p);
        }
 
-       util_est_enqueue(&rq->cfs, p);
        hrtick_update(rq);
 }