author     Prashanth Nageshappa <prashanth@linux.vnet.ibm.com>  2012-06-19 12:22:07 (GMT)
committer  Ingo Molnar <mingo@kernel.org>  2012-07-24 11:55:37 (GMT)
commit     bbf18b19495942cc730e8ff11fc3ffadf20cbfe1 (patch)
tree       230cd53d8669da7c3881ded77651422b89615c16 /kernel/sched
parent     85c1e7dae165acd004429f81fe52bfbf55b57a98 (diff)
download   linux-bbf18b19495942cc730e8ff11fc3ffadf20cbfe1.tar.xz
sched: Reset loop counters if all tasks are pinned and we need to redo load balance
While load balancing, if all tasks on the source runqueue are pinned, we retry after excluding the corresponding source CPU. However, the loop counters env.loop and env.loop_break are not reset before retrying, which can lead to failure in moving the tasks. In this patch we reset env.loop and env.loop_break to their initial values before we retry.

Signed-off-by: Prashanth Nageshappa <prashanth@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/4FE06EEF.2090709@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
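For illustration only, below is a minimal user-space sketch of why resetting the counters matters on the redo path. It is not the kernel's implementation: struct lb_env here is a trimmed-down stand-in for the kernel's lb_env, scan_tasks() is a hypothetical simplification of the batched task walk in move_tasks(), and BATCH_BREAK is merely a placeholder for sched_nr_migrate_break.

/*
 * Simplified sketch, NOT kernel code: struct lb_env and scan_tasks()
 * are hypothetical stand-ins; BATCH_BREAK is a placeholder for
 * sched_nr_migrate_break.
 */
#include <stdio.h>

#define BATCH_BREAK 32	/* placeholder batch size */

struct lb_env {
	int loop;	/* tasks examined so far across passes */
	int loop_max;	/* total tasks on the source runqueue */
	int loop_break;	/* stop a pass after this many examinations */
};

/* Examine tasks until loop_break or loop_max is reached; return the count. */
static int scan_tasks(struct lb_env *env)
{
	int scanned = 0;

	while (env->loop < env->loop_max && env->loop < env->loop_break) {
		env->loop++;
		scanned++;
	}
	return scanned;
}

int main(void)
{
	struct lb_env env = { .loop = 0, .loop_max = 40, .loop_break = BATCH_BREAK };

	/* First attempt: suppose every task examined turns out to be pinned. */
	printf("first pass examined %d tasks\n", scan_tasks(&env));

	/*
	 * Retry against the next candidate source CPU.  Without the two
	 * resets below, env.loop would still be at BATCH_BREAK and the
	 * retry would examine nothing -- the failure mode this patch fixes.
	 */
	env.loop = 0;
	env.loop_break = BATCH_BREAK;

	printf("retry examined %d tasks\n", scan_tasks(&env));
	return 0;
}

With the resets in place the retry examines a full batch again; dropping them makes the second scan_tasks() call return 0, which mirrors how a stale env.loop can make the retried load balance give up without moving anything.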
Diffstat (limited to 'kernel/sched')
-rw-r--r--  kernel/sched/fair.c  5
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9361669..f9f9aa0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4288,8 +4288,11 @@ more_balance:
 		/* All tasks on this runqueue were pinned by CPU affinity */
 		if (unlikely(env.flags & LBF_ALL_PINNED)) {
 			cpumask_clear_cpu(cpu_of(busiest), cpus);
-			if (!cpumask_empty(cpus))
+			if (!cpumask_empty(cpus)) {
+				env.loop = 0;
+				env.loop_break = sched_nr_migrate_break;
 				goto redo;
+			}
 			goto out_balanced;
 		}
 	}