path: root/lib/crc-t10dif.c
author	npiggin@suse.de <npiggin@suse.de>	2009-04-26 10:25:54 (GMT)
committer	Al Viro <viro@zeniv.linux.org.uk>	2009-06-12 01:36:02 (GMT)
commit	d3ef3d7351ccfbef3e5d926efc5ee332136f40d4 (patch)
tree	bd875a2b267ae03b350e259675ccb1a04453b9b9 /lib/crc-t10dif.c
parent	3174c21b74b56c6a53fddd41a30fd6f757a32bd0 (diff)
download	linux-d3ef3d7351ccfbef3e5d926efc5ee332136f40d4.tar.xz
fs: mnt_want_write speedup
This patch speeds up the lmbench lat_mmap test by about 8%. lat_mmap is set up
basically to mmap a 64MB file on tmpfs, fault in its pages, then unmap it. A
microbenchmark, yes, but it exercises some important paths in the mm.

Before:
 avg = 501.9
 std = 14.7773

After:
 avg = 462.286
 std = 5.46106

(50 runs of each; the stddev gives reasonable confidence, but there is still
quite a bit of variation.)

It does this by removing the complex per-cpu locking and counter-cache and
replacing it with a percpu counter in struct vfsmount. This makes the code
much simpler and avoids spinlocks (although the msync is still pretty costly,
unfortunately). It also results in about 900 bytes smaller code. It does
increase the size of a vfsmount, however.

It should also give a speedup on large systems if CPUs are frequently
operating on different mounts (because the existing scheme has to operate on
an atomic in the struct vfsmount when switching between mounts). But I'm most
interested in the single-threaded path performance for the moment.

[AV: minor cleanup]

Cc: Dave Hansen <haveblue@us.ibm.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
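For orientation, the sketch below illustrates the general per-CPU counter idea
the message describes: the common write paths only touch the local CPU's slot
(no shared cacheline, no spinlock), while the rare slow path (e.g. remounting
read-only) sums every slot to see whether any writers remain. All names here
(struct vfsmount_sketch, sketch_inc_writers(), and so on) are hypothetical and
for illustration only; this is not the code from the commit, whose diff is not
shown on this page.

	#include <linux/percpu.h>
	#include <linux/smp.h>
	#include <linux/cpumask.h>
	#include <linux/errno.h>

	/* Illustrative stand-in; in the real kernel the counter would live
	 * in struct vfsmount. */
	struct vfsmount_sketch {
		int *writers;	/* one slot per CPU, from alloc_percpu() */
	};

	static int sketch_init(struct vfsmount_sketch *mnt)
	{
		mnt->writers = alloc_percpu(int);
		return mnt->writers ? 0 : -ENOMEM;
	}

	/* Write path: bump only this CPU's slot. */
	static void sketch_inc_writers(struct vfsmount_sketch *mnt)
	{
		int cpu = get_cpu();	/* disable preemption, get CPU id */
		(*per_cpu_ptr(mnt->writers, cpu))++;
		put_cpu();
	}

	static void sketch_dec_writers(struct vfsmount_sketch *mnt)
	{
		int cpu = get_cpu();
		(*per_cpu_ptr(mnt->writers, cpu))--;
		put_cpu();
	}

	/* Slow path: sum all slots to learn whether writers remain. */
	static int sketch_count_writers(struct vfsmount_sketch *mnt)
	{
		int cpu, count = 0;

		for_each_possible_cpu(cpu)
			count += *per_cpu_ptr(mnt->writers, cpu);
		return count;
	}

The trade-off matches the message: per-mount memory grows (one int per CPU),
and the read-only check gets more expensive, in exchange for a cheaper,
lock-free fast path for writers.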
Diffstat (limited to 'lib/crc-t10dif.c')
0 files changed, 0 insertions, 0 deletions