sched/fair: Update rq clock before updating nohz CPU load
Author:     Matt Fleming <matt@codeblueprint.co.uk>
AuthorDate: Tue, 3 May 2016 19:46:54 +0000 (20:46 +0100)
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Thu, 5 May 2016 07:41:09 +0000 (09:41 +0200)
If we're accessing rq_clock() (e.g. in sched_avg_update()) we should
update the rq clock before calling cpu_load_update(), otherwise any
time calculations will be based on a stale clock value.

All other paths currently call update_rq_clock().

Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1462304814-11715-1-git-send-email-matt@codeblueprint.co.uk
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index 8c381a6d56e3da6b6188f61b0bcb43ab3a6cabe9..7a00c7c2dad0f5c5982a09ceeb502e437dd13746 100644 (file)
@@ -4724,6 +4724,7 @@ void cpu_load_update_nohz_stop(void)
 
        load = weighted_cpuload(cpu_of(this_rq));
        raw_spin_lock(&this_rq->lock);
+       update_rq_clock(this_rq);
        cpu_load_update_nohz(this_rq, curr_jiffies, load);
        raw_spin_unlock(&this_rq->lock);
 }