author     Peter Zijlstra <a.p.zijlstra@chello.nl>    2008-04-24 22:25:08 (GMT)
committer  Ingo Molnar <mingo@elte.hu>                2008-04-24 22:25:08 (GMT)
commit     3f5087a2bae5d1ce10a3d698dec8f879a96f5419 (patch)
tree       ad28e2dd5d36e7ea435032dd8a5fbd94340342ca /net/bridge
parent     126e01bf92dfc5f0ba91e88be02c473e1506d7d9 (diff)
download   linux-fsl-qoriq-3f5087a2bae5d1ce10a3d698dec8f879a96f5419.tar.xz
sched: fix share (re)distribution
Fix the __aggregate_redistribute_shares() related lockup reported by David S. Miller.

The problem this code tries to solve is 'accurately' calculating the 'fair' share of the group weight for each cpu. The current code falls back to a global group rebalance in case the sched_domain's span it looks at has no shares, but does have tasks.

The reason it gets stuck here is that it's inherently racy - if someone steals the last task after we compute agg->rq_weight, but before we rebalance, we'll never get out of the loop.

We could of course go fix that, but while looking at this issue I found that this 'fallback' wasn't nearly as rare as I'd hoped it to be. In fact it's quite common - and given that it walks the whole machine, that's very bad.

The new approach is simple (why didn't I think of it before?): we set the aggregate shares to the full task group weight, and each larger sched domain that encounters an aggregate shares value larger than that weight clips it (it already re-distributes anyway). This nicely converges to the desired global picture where the sum of all shares equals the task group weight.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
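The diff itself is hidden by the net/bridge path filter, so the following is only a minimal, standalone sketch of the clipping idea described above: every enclosing domain sums the shares reported inside its span and clips the result to the task-group weight, so the machine-wide total converges to exactly that weight. All names here (aggregate_and_clip, TG_WEIGHT, the four-CPU / two-level layout) are illustrative assumptions, not identifiers taken from kernel/sched.c.

/*
 * Standalone sketch, not kernel code: shows how clipping the aggregated
 * shares at every enclosing "domain" level converges to a machine-wide
 * sum equal to the task-group weight.
 */
#include <stdio.h>

#define TG_WEIGHT 1024UL   /* full task-group weight (illustrative value) */

/* Sum the shares seen inside one domain span and clip them to the group weight. */
static unsigned long aggregate_and_clip(const unsigned long *shares,
                                        int first, int nr)
{
	unsigned long sum = 0;
	int i;

	for (i = first; i < first + nr; i++)
		sum += shares[i];

	/* A larger domain never reports more than the group weight. */
	if (sum > TG_WEIGHT)
		sum = TG_WEIGHT;

	return sum;
}

int main(void)
{
	/*
	 * Start pessimistically: every cpu-level aggregate holds the full
	 * group weight.  Without clipping, the machine-wide sum would be
	 * nr_cpus * TG_WEIGHT.
	 */
	unsigned long cpu_shares[4] = { TG_WEIGHT, TG_WEIGHT, TG_WEIGHT, TG_WEIGHT };

	/* Pairs of cpus form the next domain level; each pair gets clipped. */
	unsigned long pair0 = aggregate_and_clip(cpu_shares, 0, 2);
	unsigned long pair1 = aggregate_and_clip(cpu_shares, 2, 2);

	/* The top-level domain clips again, giving exactly TG_WEIGHT. */
	unsigned long pairs[2] = { pair0, pair1 };
	unsigned long top = aggregate_and_clip(pairs, 0, 2);

	printf("pair0=%lu pair1=%lu top=%lu (tg weight=%lu)\n",
	       pair0, pair1, top, TG_WEIGHT);
	return 0;
}

Built with any C compiler, the sketch prints top=1024 for a group weight of 1024: however inflated the per-cpu starting values are, the clip at each level keeps the top-level aggregate equal to the group weight, which is the convergence property the commit message describes.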
Diffstat (limited to 'net/bridge')
0 files changed, 0 insertions, 0 deletions