author    Vladimir Davydov <vdavydov@parallels.com>  2013-09-14 15:39:46 (GMT)
committer Ingo Molnar <mingo@kernel.org>             2013-09-20 09:59:39 (GMT)
commit    7e3115ef5149fc502e3a2e80719dba54a8e7409d (patch)
tree      48d20522106c153c20cb813531ad05dc8027b589
parent    3029ede39373c368f402a76896600d85a4f7121b (diff)
download  linux-7e3115ef5149fc502e3a2e80719dba54a8e7409d.tar.xz
sched/balancing: Fix cfs_rq->task_h_load calculation
Patch a003a2 ("sched: Consider runnable load average in move_tasks()") sets
every top-level cfs_rq's h_load to rq->avg.load_avg_contrib, which is always 0.
This mistake leads to all tasks having weight 0 when load balancing in a
cpu-cgroup enabled setup. It should instead be the sum of the weights of all
runnable tasks on that cfs_rq. Fix it.

Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Reviewed-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1379173186-11944-1-git-send-email-vdavydov@parallels.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
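For background (not part of the commit itself): cfs_rq->h_load feeds task_h_load(),
which the load balancer uses to weigh each candidate task. A paraphrased sketch of
that consumer, roughly as it looked in kernel/sched/fair.c of this series, shows why
a permanently zero h_load zeroes every task's weight:

/*
 * Paraphrased sketch of the consumer (not part of this patch): a task's
 * hierarchical load is its own load contribution scaled by its cfs_rq's
 * h_load over that cfs_rq's runnable load.  With h_load stuck at 0 (the
 * bug), this returns 0 for every task, so the balancer finds nothing
 * worth moving.
 */
static unsigned long task_h_load(struct task_struct *p)
{
	struct cfs_rq *cfs_rq = task_cfs_rq(p);

	update_cfs_rq_h_load(cfs_rq);
	return div64_ul(p->se.avg.load_avg_contrib * cfs_rq->h_load,
			cfs_rq->runnable_load_avg + 1);
}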
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2aedacc..7c70201 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4242,7 +4242,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 	}
 
 	if (!se) {
-		cfs_rq->h_load = rq->avg.load_avg_contrib;
+		cfs_rq->h_load = cfs_rq->runnable_load_avg;
 		cfs_rq->last_h_load_update = now;
 	}
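To see why cfs_rq->runnable_load_avg is the right seed, here is a minimal user-space
model (an illustrative sketch, not kernel code; the names and numbers are made up) of
how the top-level h_load is seeded and then propagated down the group hierarchy, and
how a per-task weight falls out of it:

/*
 * Minimal model of the h_load calculation: the top-level group's h_load
 * is its runnable load (the fix); each child group's h_load is the
 * parent's h_load scaled by the child entity's share of the parent's
 * runnable load; a task's balance weight is then scaled the same way.
 */
#include <stdio.h>

struct grp {
	unsigned long runnable_load;	/* sum of load contribs of its children */
	unsigned long entity_load;	/* this group's contrib to its parent */
	unsigned long h_load;		/* hierarchical load */
};

int main(void)
{
	/* root: a plain task of load 1024 plus a child group contributing 512 */
	struct grp root  = { .runnable_load = 1024 + 512 };
	/* child group: a single task of load 512 */
	struct grp child = { .runnable_load = 512, .entity_load = 512 };

	/* the fix: seed the top-level h_load with its runnable load (was always 0) */
	root.h_load = root.runnable_load;

	/* propagate downward, mirroring the loop in update_cfs_rq_h_load() */
	child.h_load = root.h_load * child.entity_load / (root.runnable_load + 1);

	/* per-task balance weight, mirroring task_h_load() */
	unsigned long task_load = 512;
	unsigned long weight = task_load * child.h_load / (child.runnable_load + 1);

	printf("root.h_load=%lu child.h_load=%lu task weight=%lu\n",
	       root.h_load, child.h_load, weight);
	/* before the patch, root.h_load was 0, so every weight came out 0 */
	return 0;
}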