sched: small schedstat fix
author     Ingo Molnar <mingo@elte.hu>
           Tue, 28 Aug 2007 10:53:24 +0000 (12:53 +0200)
committer  Ingo Molnar <mingo@elte.hu>
           Tue, 28 Aug 2007 10:53:24 +0000 (12:53 +0200)
small schedstat fix: the cfs_rq->wait_runtime 'sum of all runtimes'
statistics counter missed newly forked tasks and thus had a constant
negative skew. Fix this by accounting the new task's initial
wait_runtime debit into the per-runqueue aggregate as well.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mike Galbraith <efault@gmx.de>
kernel/sched_fair.c

index 0c718857176fba7b02038e045a87aef5711b4c59..75f025da6f7c618f1de7625c3bf02ca1c5ce7fd1 100644 (file)
@@ -1121,8 +1121,10 @@ static void task_new_fair(struct rq *rq, struct task_struct *p)
         * The statistical average of wait_runtime is about
         * -granularity/2, so initialize the task with that:
         */
-       if (sysctl_sched_features & SCHED_FEAT_START_DEBIT)
+       if (sysctl_sched_features & SCHED_FEAT_START_DEBIT) {
                p->se.wait_runtime = -(sched_granularity(cfs_rq) / 2);
+               schedstat_add(cfs_rq, wait_runtime, se->wait_runtime);
+       }
 
        __enqueue_entity(cfs_rq, se);
 }