sched/fair: Avoid integer overflow
author    Michal Nazarewicz <mina86@mina86.com>
          Sun, 10 Nov 2013 19:42:01 +0000 (20:42 +0100)
committer Ingo Molnar <mingo@kernel.org>
          Wed, 13 Nov 2013 12:33:55 +0000 (13:33 +0100)
sa->runnable_avg_sum is of type u32, so the shift by NICE_0_SHIFT is
performed in 32-bit arithmetic; the result is only widened to u64 when it
is passed to div_u64(), by which point any bits shifted past bit 31 have
already been lost.  Casting sa->runnable_avg_sum to u64 before the shift
makes the shift happen in 64-bit arithmetic and fixes the overflow.

Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1384112521-25177-1-git-send-email-mpn@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
kernel/sched/fair.c

index 201be782b5b3cae8be30bd01783fc217dbee4944..e8b652ebe027c481e87122f629a300058cf82679 100644 (file)
@@ -2178,7 +2178,7 @@ static inline void __update_tg_runnable_avg(struct sched_avg *sa,
        long contrib;
 
        /* The fraction of a cpu used by this cfs_rq */
-       contrib = div_u64(sa->runnable_avg_sum << NICE_0_SHIFT,
+       contrib = div_u64((u64)sa->runnable_avg_sum << NICE_0_SHIFT,
                          sa->runnable_avg_period + 1);
        contrib -= cfs_rq->tg_runnable_contrib;