author     Josef Bacik <jbacik@fusionio.com>        2012-09-27 21:07:30 (GMT)
committer  Chris Mason <chris.mason@fusionio.com>   2012-10-09 13:15:41 (GMT)
commit     e6138876ad8327250d77291b3262fee356267211 (patch)
tree       ffc3fe0a05e0fd7e55b92e3ef8bad42d3c73d68c /fs/btrfs/free-space-cache.c
parent     ce1953325662fa597197ce728e4195582fc21c8d (diff)
download   linux-fsl-qoriq-e6138876ad8327250d77291b3262fee356267211.tar.xz
Btrfs: cache extent state when writing out dirty metadata pages
Every time we write out dirty pages we search for an offset in the tree, convert the bits in the state, and then when we wait we search for the offset again and clear the bits. So for every dirty range in the io tree we are doing 4 rb searches, which is suboptimal. With this patch we are only doing 2 searches for every cycle (modulo weird things happening). Thanks,

Signed-off-by: Josef Bacik <jbacik@fusionio.com>
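To make the optimization concrete, here is a rough sketch of the write-then-wait pattern the message describes, assuming the post-patch signature of find_first_extent_bit() (the extra trailing argument is the one visible in the diff below). It is illustrative only: write_then_wait_dirty_ranges(), set_range_wait_bit() and clear_range_wait_bit() are hypothetical stand-ins rather than the real btrfs helpers, and reference handling on the cached state is simplified.

/*
 * Illustrative sketch, not the kernel code.  The idea: whichever extent-io
 * call last located an extent_state for a range leaves a pointer to it in
 * cached_state, so the next call over the same region can pick it up
 * instead of walking the io tree's rb-tree from the root again.
 *
 * set_range_wait_bit()/clear_range_wait_bit() are hypothetical stand-ins
 * for the real bit-conversion helpers; refcounting of cached_state is
 * elided for brevity.
 */
static int write_then_wait_dirty_ranges(struct extent_io_tree *tree,
					struct address_space *mapping,
					int mark)
{
	struct extent_state *cached_state = NULL;
	u64 start = 0, end;
	int err = 0;

	/* Pass 1: find each dirty range, mark it "needs wait", write it out. */
	while (!find_first_extent_bit(tree, start, &start, &end,
				      mark, &cached_state)) {
		set_range_wait_bit(tree, start, end, &cached_state);
		err = filemap_fdatawrite_range(mapping, start, end);
		if (err)
			break;
		start = end + 1;
	}

	/* Pass 2: find the same ranges again, clear the bit, wait on them. */
	start = 0;
	while (!err && !find_first_extent_bit(tree, start, &start, &end,
					      EXTENT_NEED_WAIT, &cached_state)) {
		clear_range_wait_bit(tree, start, end, &cached_state);
		err = filemap_fdatawait_range(mapping, start, end);
		start = end + 1;
	}

	/*
	 * Before the patch each of the calls above did its own rb-tree
	 * search per dirty range; with the cached state handed between
	 * them, roughly two searches per range remain.
	 */
	return err;
}

The hunk in free-space-cache.c below is the other side of the interface change: this caller does not carry state between iterations, so it simply passes NULL for the new argument.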
Diffstat (limited to 'fs/btrfs/free-space-cache.c')
-rw-r--r--   fs/btrfs/free-space-cache.c   2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index b107e68..1027b85 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -966,7 +966,7 @@ int __btrfs_write_out_cache(struct btrfs_root *root, struct inode *inode,
 			       block_group->key.offset)) {
 		ret = find_first_extent_bit(unpin, start,
 					    &extent_start, &extent_end,
-					    EXTENT_DIRTY);
+					    EXTENT_DIRTY, NULL);
 		if (ret) {
 			ret = 0;
 			break;