2011-07-20  superblock: move pin_sb_for_writeback() to fs/super.c  (Dave Chinner)

The per-sb shrinker has the same requirement as the writeback threads: ensure that the superblock is usable and pinned for the time it takes to run the work. Both need to take a passive reference to the sb, take a read lock on the s_umount lock and then only continue if an unmount is not in progress. pin_sb_for_writeback() does exactly this, so move it to fs/super.c, rename it to grab_super_passive() and export it via fs/internal.h for all the VFS code to be able to use.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
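A hedged sketch of what the moved-and-renamed helper plausibly looks like in fs/super.c (the s_count/put_super() details are assumptions based on how fs/super.c normally pins superblocks, not quoted from the patch):

bool grab_super_passive(struct super_block *sb)
{
        spin_lock(&sb_lock);
        if (list_empty(&sb->s_instances)) {     /* fs already shut down */
                spin_unlock(&sb_lock);
                return false;
        }

        sb->s_count++;                          /* the passive reference */
        spin_unlock(&sb_lock);

        if (down_read_trylock(&sb->s_umount)) {
                if (sb->s_root)                 /* no unmount in progress */
                        return true;
                up_read(&sb->s_umount);
        }

        put_super(sb);                          /* drops s_count again */
        return false;
}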
2011-07-20  inode: move to per-sb LRU locks  (Dave Chinner)

With the inode LRUs moving to per-sb structures, there is no longer a need for a global inode_lru_lock. The locking can be made more fine-grained by moving to a per-sb LRU lock, isolating the LRU operations of different filesystems completely from each other.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
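Taken together with the next entry below (which introduces the per-sb LRU itself), the per-sb state added to struct super_block looks roughly like this (a sketch; the field names follow the commits, the comments are mine):

struct super_block {
        /* ... */
        struct list_head        s_inode_lru;            /* unused inode LRU, formerly global */
        spinlock_t              s_inode_lru_lock;       /* protects s_inode_lru */
        int                     s_nr_inodes_unused;     /* # of inodes on the LRU */
        /* ... */
};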
2011-07-20  inode: Make unused inode LRU per superblock  (Dave Chinner)

The inode unused list is currently a global LRU. This does not match the other global filesystem cache - the dentry cache - which uses per-superblock LRU lists. Hence we have related filesystem object types using different LRU reclamation schemes.

To enable a per-superblock filesystem cache shrinker, both of these caches need to have per-sb unused object LRU lists. Hence this patch converts the global inode LRU to per-sb LRUs. The patch only does rudimentary per-sb proportioning in the shrinker infrastructure, as this gets removed when the per-sb shrinker callouts are introduced later on.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  inode: convert inode_stat.nr_unused to per-cpu counters  (Dave Chinner)

Before we split up the inode_lru_lock, the unused inode counter needs to be made independent of the global inode_lru_lock. Convert it to per-cpu counters to do this.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
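A minimal sketch of the per-cpu counter pattern being adopted here (the helper names are illustrative, not quoted from the patch):

#include <linux/percpu.h>

static DEFINE_PER_CPU(unsigned int, nr_unused);

static inline void inode_unused_inc(void)
{
        this_cpu_inc(nr_unused);        /* no shared lock or cacheline */
}

static int get_nr_inodes_unused(void)
{
        int i, sum = 0;

        /* summing is racy, but plenty accurate for statistics */
        for_each_possible_cpu(i)
                sum += per_cpu(nr_unused, i);
        return sum < 0 ? 0 : sum;
}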
2011-07-20  vmscan: add customisable shrinker batch size  (Dave Chinner)

For shrinkers that have their own cond_resched* calls, having shrink_slab break the work down into small batches is not particularly efficient. Add a custom batch size field to the struct shrinker so that shrinkers can use a larger batch size if they desire. A value of zero (uninitialised) means "use the default", so behaviour is unchanged by this patch.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
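A sketch of the mechanism: a batch field on struct shrinker, consulted by shrink_slab() (the field semantics are per the commit text; the surrounding loop is paraphrased):

struct shrinker {
        int (*shrink)(struct shrinker *, struct shrink_control *sc);
        int seeks;      /* seeks to recreate an object */
        long batch;     /* reclaim batch size, 0 = use SHRINK_BATCH */
        /* ... internal bookkeeping ... */
};

/* in shrink_slab(), paraphrased: */
long batch_size = shrinker->batch ? shrinker->batch : SHRINK_BATCH;

while (total_scan >= batch_size) {
        sc->nr_to_scan = batch_size;
        shrinker->shrink(shrinker, sc); /* scan one batch */
        total_scan -= batch_size;
        cond_resched();
}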
2011-07-20  vmscan: reduce wind up shrinker->nr when shrinker can't do work  (Dave Chinner)

When a shrinker returns -1 to shrink_slab() to indicate it cannot do any work given the current memory reclaim requirements, it adds the entire total_scan count to shrinker->nr. The idea behind this is that when the shrinker is next called and can do work, it will do the work of the previously aborted shrinker call as well.

However, if a filesystem is doing lots of allocation with GFP_NOFS set, then we get many, many more aborts from the shrinkers than we do successful calls. The result is that shrinker->nr winds up to its maximum permissible value (twice the current cache size) and then when the next shrinker call that can do work is issued, it has enough scan count built up to free the entire cache twice over.

This manifests itself in the cache going from full to empty in a matter of seconds, even when only a small part of the cache needs to be emptied to free sufficient memory. Under metadata intensive workloads on ext4 and XFS, I'm seeing the VFS caches increase memory consumption up to 75% of memory (no page cache pressure) over a period of 30-60s, and then the shrinker empties them down to zero in the space of 2-3s. This cycle repeats over and over again, with the shrinker completely trashing the inode and dentry caches every minute or so for as long as the workload continues.

This behaviour was made obvious by the shrink_slab tracepoints added earlier in the series, and made worse by the patch that corrected the concurrent accounting of shrinker->nr.

To avoid this problem, stop repeated small increments of the total scan value from winding shrinker->nr up to a value that can cause the entire cache to be freed. We still need to allow it to wind up, so use the delta as the "large scan" threshold check - if the delta is more than a quarter of the entire cache size, then it is a large scan and allowed to cause lots of windup because we are clearly needing to free lots of memory. If it isn't a large scan then limit the total scan to half the size of the cache so that windup never increases to consume the whole cache. Reducing the total scan limit further does not allow enough wind-up to maintain the current levels of performance, whilst a higher threshold does not prevent the windup from freeing the entire cache under sustained workloads.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
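The heuristic reads roughly like this in code (delta is this call's newly computed scan count, max_pass is the cache size reported by the shrinker; the variable names follow the description above):

/*
 * Only allow lots of windup when this call itself wants a large
 * scan (more than a quarter of the cache). Otherwise cap the
 * accumulated count so deferred work can never exceed half the
 * cache size.
 */
if (delta < max_pass / 4)
        total_scan = min(total_scan, max_pass / 2);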
2011-07-20  vmscan: shrinker->nr updates race and go wrong  (Dave Chinner)

shrink_slab() allows shrinkers to be called in parallel, so the struct shrinker can be updated concurrently. It does not provide any exclusion for such updates, so we can get the shrinker->nr value increasing or decreasing incorrectly.

As a result, when a shrinker repeatedly returns a value of -1 (e.g. a VFS shrinker called with GFP_NOFS), the shrinker->nr goes haywire, sometimes updating with the scan count that wasn't used, sometimes losing it altogether. Worse is when a shrinker does work and that update is lost due to racy updates, which means the shrinker will do the work again!

Fix this by making the total_scan calculations independent of shrinker->nr, and making the shrinker->nr updates atomic w.r.t. other updates via cmpxchg loops.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
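A sketch of the lock-free scheme described above: snapshot-and-clear the deferred count, work on a local total_scan, then fold the unused remainder back in with a cmpxchg loop so concurrent callers cannot lose each other's updates:

long nr, new_nr;

/* atomically take ownership of the deferred scan count */
do {
        nr = shrinker->nr;
} while (cmpxchg(&shrinker->nr, nr, 0) != nr);

total_scan = nr + delta;        /* all further math is local */

/* ... scan loop runs here ... */

/* put back whatever we did not scan */
do {
        nr = shrinker->nr;
        new_nr = total_scan + nr;
} while (cmpxchg(&shrinker->nr, nr, new_nr) != nr);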
2011-07-20  vmscan: add shrink_slab tracepoints  (Dave Chinner)

It is impossible to understand what the shrinkers are actually doing without instrumenting the code, so add some tracepoints to allow insight to be gained.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
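Presumably the tracepoints bracket each shrinker invocation, along these lines (the tracepoint names and argument lists here are my assumption, not quoted from the patch):

trace_shrink_slab_start(shrinker, sc, nr, nr_pages_scanned,
                        lru_pages, max_pass, delta, total_scan);

shrink_ret = shrinker->shrink(shrinker, sc);

trace_shrink_slab_end(shrinker, shrink_ret, nr, new_nr);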
2011-07-20  make d_splice_alias(ERR_PTR(err), dentry) = ERR_PTR(err)  (Al Viro)

... and simplify the living hell out of callers

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
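What this buys in a typical ->lookup() instance (foo_lookup(), foo_iget() and ino are hypothetical stand-ins):

/* before: every caller had to unpack the error itself */
static struct dentry *foo_lookup(struct inode *dir, struct dentry *dentry,
                                 struct nameidata *nd)
{
        struct inode *inode = foo_iget(dir->i_sb, ino);

        if (IS_ERR(inode))
                return ERR_CAST(inode);
        return d_splice_alias(inode, dentry);
}

/* after: ERR_PTR() inodes fall straight through d_splice_alias() */
static struct dentry *foo_lookup(struct inode *dir, struct dentry *dentry,
                                 struct nameidata *nd)
{
        return d_splice_alias(foo_iget(dir->i_sb, ino), dentry);
}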
2011-07-20  deuglify squashfs_lookup()  (Al Viro)

d_splice_alias(NULL, dentry) is equivalent to d_add(dentry, NULL) followed by returning NULL, so there is no need for that if (inode) ... in there (or for ERR_PTR(0), for that matter).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  nfsd4_list_rec_dir(): don't bother with reopening rec_file  (Al Viro)

just rewind it to the beginning before vfs_readdir() and be done with that...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  kill useless checks for sb->s_op == NULL  (Al Viro)

never is...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  btrfs: kill magical embedded struct superblock  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  get rid of pointless checks for dentry->sb == NULL  (Al Viro)

it never is...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  Make ->d_sb assign-once and always non-NULL  (Al Viro)

New helper (non-exported, fs/internal.h-only): __d_alloc(sb, name). Allocates a dentry, sets its ->d_sb to the given superblock and sets ->d_op accordingly. Old d_alloc(NULL, name) callers are converted to that (all of them know what superblock they want). d_alloc() itself is left only for the parent != NULL case; it uses __d_alloc() and inserts the result into the list of the parent's children.

Note that now ->d_sb is assign-once and never NULL *and* ->d_parent is never NULL either.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
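A sketch of the resulting split (the allocation and child-list insertion details are paraphrased, not quoted):

struct dentry *__d_alloc(struct super_block *sb, const struct qstr *name)
{
        struct dentry *dentry = kmem_cache_alloc(dentry_cache, GFP_KERNEL);

        if (!dentry)
                return NULL;
        /* ... copy name, init refcount and lists ... */
        dentry->d_parent = dentry;      /* self, until reparented below */
        dentry->d_sb = sb;              /* assign-once, never NULL */
        d_set_d_op(dentry, dentry->d_sb->s_d_op);
        return dentry;
}

struct dentry *d_alloc(struct dentry *parent, const struct qstr *name)
{
        struct dentry *dentry = __d_alloc(parent->d_sb, name);

        if (!dentry)
                return NULL;
        spin_lock(&parent->d_lock);
        dentry->d_parent = dget_dlock(parent);  /* pin the parent */
        list_add(&dentry->d_u.d_child, &parent->d_subdirs);
        spin_unlock(&parent->d_lock);
        return dentry;
}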
2011-07-20  unexport kern_path_parent()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  switch vfs_path_lookup() to struct path  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  kill lookup_create()  (Al Viro)

folded into the only caller (kern_path_create())

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  devtmpfs: get rid of bogus mkdir in create_path()  (Al Viro)

We do _NOT_ want to mkdir the path itself - we are preparing to mknod it, after all. Normally that mkdir will fail with -ENOENT and do nothing, but if somebody has created the parent in the meantime, we'll get buggered...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
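The corrected helper, sketched: mkdir every parent component of the path, but never the final component itself (dev_mkdir() is devtmpfs's internal mkdir; error handling simplified):

static int create_path(const char *nodepath)
{
        char *path, *s;
        int err = 0;

        path = kstrdup(nodepath, GFP_KERNEL);
        if (!path)
                return -ENOMEM;

        /* for "a/b/node": mkdir "a", then "a/b" - never "a/b/node" */
        s = path;
        while ((s = strchr(s + 1, '/')) != NULL) {
                *s = '\0';
                err = dev_mkdir(path, 0755);
                if (err && err != -EEXIST)
                        break;
                *s = '/';
                err = 0;
        }
        kfree(path);
        return err;
}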
2011-07-20  switch devtmpfs to kern_path_create()  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  switch devtmpfs object creation/removal to separate kernel thread  (Al Viro)

... and give it a namespace where devtmpfs would be mounted on root, thus avoiding abuses of vfs_path_lookup() (it was never intended to be used with LOOKUP_PARENT). Games with credentials are also gone.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  make sure that nsproxy_cache is initialized early enough  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  switch do_spufs_create() to user_path_create(), fix double-unlock  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  new helpers: kern_path_create/user_path_create  (Al Viro)

combination of kern_path_parent() and lookup_create(). Does *not* expose struct nameidata to the caller. Syscalls converted to that...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
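Expected usage from a mknod-style caller, sketched (pathname, mode and dev stand in for the syscall arguments; note that the helper returns with the parent directory's i_mutex held, so the caller must drop it):

struct path path;
struct dentry *dentry;
int err;

dentry = kern_path_create(AT_FDCWD, pathname, &path, 0 /* not a dir */);
if (IS_ERR(dentry))
        return PTR_ERR(dentry);

err = vfs_mknod(path.dentry->d_inode, dentry, mode, dev);

dput(dentry);
mutex_unlock(&path.dentry->d_inode->i_mutex);
path_put(&path);
return err;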
2011-07-20  kill LOOKUP_CONTINUE  (Al Viro)

LOOKUP_PARENT is equivalent to it now

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  nfs: LOOKUP_{OPEN,CREATE,EXCL} is set only on the last step  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  cifs_lookup(): LOOKUP_OPEN is set only on the last component  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  ceph: LOOKUP_OPEN is set only when it's the last component  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  jfs_ci_revalidate() is safe from RCU mode  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  LOOKUP_CREATE and LOOKUP_RENAME_TARGET can be set only on the last step  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  no need to check for LOOKUP_OPEN in ->create() instances  (Al Viro)

... it will be set in nd->flags for all cases with non-NULL nd (i.e. when called from do_last()).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  don't pass nameidata to vfs_create() from ecryptfs_create()  (Al Viro)

Instead of playing with removal of LOOKUP_OPEN and mangling (and restoring) nd->path, just pass NULL to vfs_create(). The whole point of what's being done there is to suppress any attempts by the underlying fs to open the file, which is what nd == NULL indicates.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
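So the call into the lower filesystem collapses to roughly this (lower_dir_dentry/lower_dentry are illustrative ecryptfs-style names):

/* nd == NULL: tell the lower fs not to attempt any open */
rc = vfs_create(lower_dir_dentry->d_inode, lower_dentry, mode, NULL);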
2011-07-20  don't transliterate lower bits of ->intent.open.flags to FMODE_...  (Al Viro)

->create() instances are much happier that way...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  Don't pass nameidata when calling vfs_create() from mknod()  (Al Viro)

All instances can cope with that now (and the ceph one actually starts working properly).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  fix mknod() on nfs4 (hopefully)  (Al Viro)

a) check the right flags in ->create() (LOOKUP_OPEN, not LOOKUP_CREATE)
b) default (!LOOKUP_OPEN) open_flags is O_CREAT|O_EXCL|FMODE_READ, not 0
c) lookup_instantiate_filp() should be done only with LOOKUP_OPEN; otherwise we need to issue CLOSE, lest we leak stateid on server.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  nameidata_to_nfs_open_context() doesn't need nameidata, actually...  (Al Viro)

just open flags; switched to passing just those and renamed to create_nfs_open_context()

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  nfs_open_context doesn't need struct path either  (Al Viro)

just dentry, please...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  nfs4_opendata doesn't need struct path either  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  nfs4_closedata doesn't need to mess with struct path  (Al Viro)

instead of path_get()/path_put(), we can just use nfs_sb_{,de}active() to pin the superblock down.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  cifs: fix the type of cifs_demultiplex_thread()  (Al Viro)

... and get rid of a bogus typecast, while we are at it; it's not just that we want a function returning int and not void, but a cast to a pointer to function taking void * and returning void would be (void (*)(void *)) and not (void *)(void *), TYVM...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
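The shape of the fix, sketched: kthread functions are int (*)(void *), so give the thread function the right type instead of casting at the call site:

static int cifs_demultiplex_thread(void *p)
{
        struct TCP_Server_Info *server = p;

        /* ... demultiplex loop ... */

        module_put_and_exit(0);
        return 0;       /* kthreads return int - hence this patch */
}

/* caller - no bogus cast needed any more: */
task = kthread_run(cifs_demultiplex_thread, server, "cifsd");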
2011-07-20  ecryptfs_inode_permission() doesn't need to bail out on RCU  (Al Viro)

... now that inode_permission() can take MAY_NOT_BLOCK and handle it properly.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  kill IPERM_FLAG_RCU  (Al Viro)

not used anymore

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  ->permission() sanitizing: document API changes  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  merge do_revalidate() into its only caller  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  no reason to keep exec_permission() separate now  (Al Viro)

cache footprint alone makes it a bad idea...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  massage generic_permission() to treat directories on a separate path  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  ->permission() sanitizing: don't pass flags to exec_permission()  (Al Viro)

pass mask instead; kill security_inode_exec_permission() since we can use security_inode_permission() instead.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  selinux: don't transliterate MAY_NOT_BLOCK to IPERM_FLAG_RCU  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  ->permission() sanitizing: don't pass flags to ->inode_permission()  (Al Viro)

pass that via mask instead.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2011-07-20  ->permission() sanitizing: don't pass flags to ->permission()  (Al Viro)

not used by the instances anymore.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
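After the whole series, a ->permission() instance has this shape (a sketch of the new calling convention; foo_permission() and foo_check_permission() are hypothetical):

static int foo_permission(struct inode *inode, int mask)
{
        /* rcu-walk: we may not sleep; punt to ref-walk if unsure */
        if (mask & MAY_NOT_BLOCK)
                return -ECHILD;

        /* ref-walk: blocking checks are fine here */
        return foo_check_permission(inode, mask);
}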