author:    Mark Fasheh <mfasheh@suse.com>    2010-03-12 02:43:46 (GMT)
committer: Joel Becker <joel.becker@oracle.com>    2010-03-18 20:22:42 (GMT)
commit:    b22b63ebafb97b66d1054e69941ee049d790c6cf (patch)
tree:      98c6049f6ab23ad71ed4ba6dad43558296ae69b6 /fs
parent:    fcefd25ac89239cb57fa198f125a79ff85468c75 (diff)
download:  linux-b22b63ebafb97b66d1054e69941ee049d790c6cf.tar.xz
ocfs2: Always try for maximum bits with new local alloc windows
What we were doing before was to ask for the current window size as the
maximum allocation. This had the effect of limiting the amount of allocation
we could get for the local alloc during times when the window size was shrunk
due to fragmentation. In some cases, that could actually *increase*
fragmentation by artificially limiting the number of bits we can accept. So
while we still want to ask for a minimum number of bits equal to window size,
there is no reason why we should limit the number of bits the local alloc
should accept. Hence always allow the maximum number of local alloc bits.

Signed-off-by: Mark Fasheh <mfasheh@suse.com>
Signed-off-by: Joel Becker <joel.becker@oracle.com>
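To make the min/max split concrete, here is a minimal, self-contained C model
of the behaviour the message describes. It is a sketch, not the ocfs2
allocator: only local_alloc_bits, local_alloc_default_bits, and ac_bits_wanted
are identifiers from the patch; model_super, model_alloc_context,
model_claim_clusters, and free_run are hypothetical stand-ins, assuming (as
the hunks below suggest) that ocfs2_claim_clusters treats
osb->local_alloc_bits as the minimum acceptable allocation and ac_bits_wanted
as the maximum.

/*
 * Simplified model of the local alloc window reservation; NOT the real
 * ocfs2 code. Only local_alloc_bits, local_alloc_default_bits, and
 * ac_bits_wanted correspond to identifiers in the patch.
 */
#include <stdio.h>

struct model_super {
	unsigned int local_alloc_bits;		/* current (possibly shrunken) window */
	unsigned int local_alloc_default_bits;	/* configured maximum window */
};

struct model_alloc_context {
	unsigned int ac_bits_wanted;		/* upper bound on the claim */
};

/*
 * Hypothetical bitmap claim: returns however many contiguous bits are
 * available, clamped to [min_bits, max_bits]. Fails if fewer than
 * min_bits are free.
 */
static int model_claim_clusters(unsigned int free_run,
				unsigned int min_bits,
				unsigned int max_bits,
				unsigned int *got)
{
	if (free_run < min_bits)
		return -1;			/* would be -ENOSPC */
	*got = free_run < max_bits ? free_run : max_bits;
	return 0;
}

int main(void)
{
	struct model_super osb = {
		.local_alloc_bits = 64,		/* window shrunk by fragmentation */
		.local_alloc_default_bits = 1024,
	};
	struct model_alloc_context ac;
	unsigned int free_run = 512;		/* contiguous bits actually free */
	unsigned int got;

	/* Old behaviour: maximum capped at the shrunken window size. */
	ac.ac_bits_wanted = osb.local_alloc_bits;
	if (!model_claim_clusters(free_run, osb.local_alloc_bits,
				  ac.ac_bits_wanted, &got))
		printf("old: claimed %u bits (capped)\n", got);

	/* Patched behaviour: same minimum, but accept up to the default. */
	ac.ac_bits_wanted = osb.local_alloc_default_bits;
	if (!model_claim_clusters(free_run, osb.local_alloc_bits,
				  ac.ac_bits_wanted, &got))
		printf("new: claimed %u bits\n", got);

	return 0;
}

With a window shrunk to 64 bits and 512 contiguous free bits available, the
old cap claims only 64 bits while the patched version claims all 512, which
is the artificial limit the message says could worsen fragmentation.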
Diffstat (limited to 'fs')
-rw-r--r--  fs/ocfs2/localalloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
index ca992d9..171c691 100644
--- a/fs/ocfs2/localalloc.c
+++ b/fs/ocfs2/localalloc.c
@@ -984,8 +984,7 @@ static int ocfs2_local_alloc_reserve_for_window(struct ocfs2_super *osb,
 	}

 retry_enospc:
-	(*ac)->ac_bits_wanted = osb->local_alloc_bits;
-
+	(*ac)->ac_bits_wanted = osb->local_alloc_default_bits;
 	status = ocfs2_reserve_cluster_bitmap_bits(osb, *ac);
 	if (status == -ENOSPC) {
 		if (ocfs2_recalc_la_window(osb, OCFS2_LA_EVENT_ENOSPC) ==
@@ -1061,6 +1060,7 @@ retry_enospc:
 		    OCFS2_LA_DISABLED)
 			goto bail;

+	ac->ac_bits_wanted = osb->local_alloc_default_bits;
 	status = ocfs2_claim_clusters(osb, handle, ac,
 				      osb->local_alloc_bits,
 				      &cluster_off,