path: root/fs

2015-07-01  fuse: device fd clone  (Miklos Szeredi)

Allow an open fuse device to be "cloned". Userspace can create a clone by:

    newfd = open("/dev/fuse", O_RDWR)
    ioctl(newfd, FUSE_DEV_IOC_CLONE, &oldfd);

At this point newfd will refer to the same fuse connection as oldfd.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

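In userspace the cloning sequence looks roughly like the sketch below (error handling minimal; FUSE_DEV_IOC_CLONE comes from <linux/fuse.h>, everything else is illustrative):

    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fuse.h>             /* FUSE_DEV_IOC_CLONE */

    /* Clone an existing fuse session fd so another thread or process can
     * service requests on the same connection.  Returns the new fd or -1. */
    static int fuse_clone_fd(int session_fd)
    {
        uint32_t oldfd = session_fd;    /* the ioctl takes a pointer to uint32_t */
        int newfd = open("/dev/fuse", O_RDWR);

        if (newfd < 0)
            return -1;
        if (ioctl(newfd, FUSE_DEV_IOC_CLONE, &oldfd) < 0) {
            close(newfd);
            return -1;
        }
        return newfd;                   /* same fuse connection as session_fd */
    }
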
2015-07-01  fuse: abort: no fc->lock needed for request ending  (Miklos Szeredi)

In fuse_abort_conn(), when all requests are on private lists we no longer need fc->lock protection.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: no fc->lock for pqueue parts  (Miklos Szeredi)

Remove fc->lock protection from processing queue members, now protected by fpq->lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: no fc->lock in request_end()  (Miklos Szeredi)

No longer need to call request_end() with the connection lock held. We still protect the background counters and queue with fc->lock, so acquire it if necessary.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: cleanup request_end()  (Miklos Szeredi)

Now that we atomically test whether the request has already been ended, we no longer need other protection.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: request_end(): do once  (Miklos Szeredi)

When the connection is aborted it is possible that request_end() will be called twice. Use atomic test and set to do the actual ending only once. test_and_set_bit() also provides the necessary barrier semantics so no explicit smp_wmb() is necessary.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

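A sketch of the guard this describes (the FR_FINISHED flag name matches the flags introduced in this series; the surrounding details are illustrative):

    static void request_end(struct fuse_conn *fc, struct fuse_req *req)
    {
        /* Whichever caller wins the race does the real work; the loser
         * sees the bit already set and backs off.  The barrier implied
         * by test_and_set_bit() orders earlier stores to the request
         * ahead of the ending. */
        if (test_and_set_bit(FR_FINISHED, &req->flags))
            return;

        /* ... wake waiters, drop counters, put the request ... */
    }
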
2015-07-01  fuse: add req flag for private list  (Miklos Szeredi)

When an unlocked request is aborted, it is moved from fpq->io to a private list. Then, after unlocking fpq->lock, the private list is processed and the requests are finished off. To protect the private list, we need to mark the request with a flag, so if in the meantime the request is unlocked the list is not corrupted.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

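The abort side then looks roughly like this (a sketch; FR_PRIVATE and the fpq fields follow the naming of this series):

    struct fuse_req *req, *next;
    LIST_HEAD(to_end);

    spin_lock(&fpq->lock);
    list_for_each_entry_safe(req, next, &fpq->io, list) {
        /* Mark the request so a concurrent unlock path knows it now
         * lives on a private list and must leave req->list alone. */
        set_bit(FR_PRIVATE, &req->flags);
        list_move(&req->list, &to_end);
    }
    spin_unlock(&fpq->lock);

    /* finish off everything on to_end without fpq->lock held */
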
2015-07-01  fuse: pqueue locking  (Miklos Szeredi)

Add a fpq->lock for protecting members of struct fuse_pqueue and the FR_LOCKED request flag.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: abort: group pqueue accesses  (Miklos Szeredi)

Rearrange fuse_abort_conn() so that processing queue accesses are grouped together.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: cleanup fuse_dev_do_read()  (Miklos Szeredi)

- locked list_add() + list_del_init() cancel out
- common handling of the case when the request is ended here in the read phase

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: move list_del_init() from request_end() into callers  (Miklos Szeredi)

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>

2015-07-01  fuse: duplicate ->connected in pqueue  (Miklos Szeredi)

This will allow checking ->connected just with the processing queue lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: separate out processing queue  (Miklos Szeredi)

This is just two fields: fc->io and fc->processing. This patch just rearranges the fields, no functional change.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: simplify request_wait()  (Miklos Szeredi)

wait_event_interruptible_exclusive_locked() will do everything request_wait() does, so replace it.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

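The replacement is essentially a one-liner; the macro must be entered with the waitqueue's spinlock held and sleeps with it dropped. A sketch of the call site (request_pending() and the fiq fields follow the naming of this series; 'err' is declared by the caller):

    spin_lock(&fiq->waitq.lock);
    /* exclusive: only one reader of /dev/fuse is woken per event */
    err = wait_event_interruptible_exclusive_locked(fiq->waitq,
                    !fiq->connected || request_pending(fiq));
    spin_unlock(&fiq->waitq.lock);
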
2015-07-01  fuse: no fc->lock for iqueue parts  (Miklos Szeredi)

Remove fc->lock protection from input queue members, now protected by fiq->waitq.lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: allow interrupt queuing without fc->lock  (Miklos Szeredi)

An interrupt is only queued after the request has been sent to userspace. This is done either in request_wait_answer() or in fuse_dev_do_read(), depending on which state the request is in at the time of the interrupt. If the request has not yet been sent, queuing the interrupt is postponed until the request is read. Otherwise (the request has already been read and is waiting for an answer) the interrupt is queued immediately.

We want to call queue_interrupt() without fc->lock protection, in which case there can be a race between the two functions:

- neither of them queues the interrupt (each thinking the other one has already done it)
- both of them queue the interrupt

The first is prevented by adding memory barriers, the second by checking (under fiq->waitq.lock) whether the interrupt has already been queued.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>

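A sketch of the queued-once check described above (field names are assumptions based on this series):

    static void queue_interrupt(struct fuse_iqueue *fiq, struct fuse_req *req)
    {
        spin_lock(&fiq->waitq.lock);
        /* An empty intr_entry means nobody queued this interrupt yet;
         * re-checking under fiq->waitq.lock prevents double queuing. */
        if (list_empty(&req->intr_entry)) {
            list_add_tail(&req->intr_entry, &fiq->interrupts);
            wake_up_locked(&fiq->waitq);
        }
        spin_unlock(&fiq->waitq.lock);
        kill_fasync(&fiq->fasync, SIGIO, POLL_IN);
    }
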
2015-07-01  fuse: iqueue locking  (Miklos Szeredi)

Use fiq->waitq.lock for protecting members of struct fuse_iqueue and the FR_PENDING request flag, previously protected by fc->lock. Following patches will remove fc->lock protection from these members.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: dev read: split list_move  (Miklos Szeredi)

Different lists will need different locks.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: abort: group iqueue accesses  (Miklos Szeredi)

Rearrange fuse_abort_conn() so that input queue accesses are grouped together.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: duplicate ->connected in iqueue  (Miklos Szeredi)

This will allow checking ->connected just with the input queue lock.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: separate out input queue  (Miklos Szeredi)

The input queue contains normal requests (fc->pending), forgets (fc->forget_*) and interrupts (fc->interrupts). There's also fc->waitq and fc->fasync for waking up the readers of the fuse device when a request is available. The fc->reqctr is also moved to the input queue (assigned to the request when the request is added to the input queue). This patch just rearranges the fields, no functional change.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: req state use flags  (Miklos Szeredi)

Use flags for representing the state in fuse_req. This is needed since req->list will be protected by different locks in different states, hence we'll want the state itself to be split into distinct bits, each protected with the relevant lock in that state.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>

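The resulting flag set looks roughly like this (names as used by fuse in this era; treat the exact list as illustrative):

    /* Bit numbers in req->flags, manipulated with set_bit()/test_bit() etc. */
    enum fuse_req_flag {
        FR_ISREPLY,     /* a reply is expected */
        FR_FORCE,       /* force sending even if interrupted */
        FR_BACKGROUND,  /* queued as a background request */
        FR_WAITING,     /* counted in fc->num_waiting */
        FR_ABORTED,     /* the connection was aborted under us */
        FR_INTERRUPTED, /* an interrupt was queued for this request */
        FR_LOCKED,      /* data is being copied to/from the request */
        FR_PENDING,     /* on the pending list */
        FR_SENT,        /* read by userspace, on the processing list */
        FR_FINISHED,    /* request_end() has run */
        FR_PRIVATE,     /* on a private list, do not touch req->list */
    };
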
2015-07-01  fuse: simplify req states  (Miklos Szeredi)

FUSE_REQ_INIT is actually the same state as FUSE_REQ_PENDING, and FUSE_REQ_READING and FUSE_REQ_WRITING can be merged into a common FUSE_REQ_IO state.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: don't hold lock over request_wait_answer()  (Miklos Szeredi)

Only hold fc->lock over sections of request_wait_answer() that actually need it. If wait_event_interruptible() returns zero, it means that the request finished. Need to add memory barriers, though, to make sure that all relevant data in the request is synchronized.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>

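The pairing the last sentence refers to, sketched with the state names of the time (illustrative, not the exact patch):

    /* completion side (request_end()): publish the reply, then the state */
    req->out.h.error = error;
    smp_wmb();                          /* order reply data before the state */
    req->state = FUSE_REQ_FINISHED;
    wake_up(&req->waitq);

    /* waiter side (request_wait_answer()), after wait_event_interruptible()
     * returns zero: */
    smp_rmb();                          /* pairs with the smp_wmb() above */
    /* req->out can now be read without holding fc->lock */
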
2015-07-01  fuse: simplify unique ctr  (Miklos Szeredi)

Since it's a 64-bit counter, it will never wrap around in practice (even at a million requests per second it would take over half a million years). Remove the code dealing with that possibility.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: rework abort  (Miklos Szeredi)

Splice fc->pending and fc->processing lists into a common kill list while holding fc->lock. By the time we release fc->lock, pending and processing lists are empty and the io list contains only locked requests.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: fold helpers into abort  (Miklos Szeredi)

Fold end_io_requests() and end_queued_requests() into fuse_abort_conn().

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: use per req lock for lock/unlock_request()  (Miklos Szeredi)

Reuse req->waitq.lock for protecting the FR_ABORTED and FR_LOCKED flags.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: req use bitops  (Miklos Szeredi)

Finer-grained locking will mean there's no single lock to protect modification of bitfields in fuse_req. So move to using bitops. Can use the non-atomic variants for those which happen while the request definitely has only one reference.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

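For example (a sketch; flag names as introduced in this series):

    /* the request is visible to other CPUs: use the atomic bitops */
    set_bit(FR_SENT, &req->flags);
    clear_bit(FR_PENDING, &req->flags);

    /* we hold the only reference (e.g. right after allocation), so the
     * cheaper non-atomic variant is safe */
    __set_bit(FR_ISREPLY, &req->flags);
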
2015-07-01  fuse: simplify request abort  (Miklos Szeredi)

- don't end the request while req->locked is true
- make unlock_request() return an error if the connection was aborted

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: call fuse_abort_conn() in dev release  (Miklos Szeredi)

fuse_abort_conn() does all the work done by fuse_dev_release() and more. "More" consists of:

    end_io_requests(fc);
    wake_up_all(&fc->waitq);
    kill_fasync(&fc->fasync, SIGIO, POLL_IN);

All of which should be no-ops here (WARN_ONs added).

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: fold fuse_request_send_nowait() into single caller  (Miklos Szeredi)

And the same with fuse_request_send_nowait_locked().

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: check conn_error earlier  (Miklos Szeredi)

fc->conn_error is set once in the FUSE_INIT reply and never cleared. Check it in request allocation; there's no sense in doing all the preparation if sending will surely fail.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: account as waiting before queuing for background  (Miklos Szeredi)

Move accounting of fc->num_waiting to the point where the request actually starts waiting. This is earlier than the current queue_request() for background requests, since they might be waiting on the fc->bg_queue before being queued on fc->pending.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: reset waiting  (Miklos Szeredi)

Reset req->waiting in fuse_put_request(). This is needed for correct accounting in fc->num_waiting for reserved requests.

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>

2015-07-01  fuse: fix background request if not connected  (Miklos Szeredi)

request_end() expects fc->num_background and fc->active_background to have been incremented, which is not the case in the fuse_request_send_nowait() failure path. So instead just call the ->end() callback (which is actually set by all callers).

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Reviewed-by: Ashish Samant <ashish.samant@oracle.com>

2015-07-01  fuse: initialize fc->release before calling it  (Miklos Szeredi)

fc->release is called from fuse_conn_put(), which was used in the error cleanup before fc->release was initialized.

[Jeremiah Mahler <jmmahler@gmail.com>: assign fc->release after calling fuse_conn_init(fc) instead of before.]

Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Fixes: a325f9b92273 ("fuse: update fuse_conn_init() and separate out fuse_conn_kill()")
Cc: <stable@vger.kernel.org> #v2.6.31+

2015-06-02  vfs: read file_handle only once in handle_to_path  (Sasha Levin)

We used to read file_handle twice. Once to get the amount of extra bytes, and once to fetch the entire structure. This may be problematic since we do size verifications only after the first read, so if the number of extra bytes changes in userspace between the first and second calls, we'll have an incoherent view of file_handle.

Instead, read the constant size once, and copy that over to the final structure without having to re-read it again.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

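The shape of the fix, sketched (MAX_HANDLE_SZ is the real limit from the file-handle code; the surrounding simplifications are mine):

    struct file_handle f_handle;
    struct file_handle *handle;

    /* single fetch of the fixed-size header */
    if (copy_from_user(&f_handle, ufh, sizeof(struct file_handle)))
        return -EFAULT;

    if (f_handle.handle_bytes > MAX_HANDLE_SZ)
        return -EINVAL;

    handle = kmalloc(sizeof(struct file_handle) + f_handle.handle_bytes,
                     GFP_KERNEL);
    if (!handle)
        return -ENOMEM;

    /* reuse the header we already validated instead of re-reading it */
    *handle = f_handle;

    /* only the variable-length payload is fetched from userspace now */
    if (copy_from_user(&handle->f_handle, &ufh->f_handle,
                       f_handle.handle_bytes)) {
        kfree(handle);
        return -EFAULT;
    }
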
2015-05-31  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds)

Pull vfs fix from Al Viro:
 "Off-by-one in d_walk()/__dentry_kill() race fix. It's very hard to hit; possible in the same conditions as the original bug, except that you need the skipped branch to contain all the remaining evictables, so that the d_walk()-calling loop in d_invalidate() decides there's nothing more to do and doesn't go for another pass - otherwise that next pass will sweep the sucker.

  So it's not too urgent, but seeing that the fix is obvious and the original commit has spread into all -stable branches..."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  d_walk() might skip too much

2015-05-29  Merge tag 'xfs-for-linus-4.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs  (Linus Torvalds)

Pull xfs fixes from Dave Chinner:
 "This is a little larger than I'd like late in the release cycle, but all the fixes are for regressions introduced in the 4.1-rc1 merge, or are needed back in -stable kernels fairly quickly as they are filesystem corruption or userspace visible correctness issues.

  Changes in this update:

   - regression fix for new rename whiteout code
   - regression fixes for new superblock generic per-cpu counter code
   - fix for incorrect error return sign introduced in 3.17
   - metadata corruption fixes that need to go back to -stable kernels"

* tag 'xfs-for-linus-4.1-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/dgc/linux-xfs:
  xfs: fix broken i_nlink accounting for whiteout tmpfile inode
  xfs: xfs_iozero can return positive errno
  xfs: xfs_attr_inactive leaves inconsistent attr fork state behind
  xfs: extent size hints can round up extents past MAXEXTLEN
  xfs: inode and free block counters need to use __percpu_counter_compare
  percpu_counter: batch size aware __percpu_counter_compare()
  xfs: use percpu_counter_read_positive for mp->m_icount

2015-05-29  d_walk() might skip too much  (Al Viro)

When we find that a child has died while we'd been trying to ascend, we should go into the first live sibling itself, rather than its sibling. The off-by-one in question had been introduced in "deal with deadlock in d_walk()" and the fix needs to be backported to all branches this one has been backported to.

Cc: stable@vger.kernel.org # 3.2 and later
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

2015-05-29  omfs: fix potential integer overflow in allocator  (Bob Copeland)

Both 'i' and 'bits_per_entry' are signed integers but the result is a u64 block number. Cast i to u64 to avoid truncation on 32-bit targets. Found by Coverity (CID 200679).

Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

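The shape of the fix (a hypothetical helper mirroring the allocator arithmetic; the names follow the log):

    static u64 block_from_bit(int i, int bits_per_entry, int bit)
    {
        /* Without the cast the multiplication is done in 32-bit int
         * arithmetic and can truncate on 32-bit targets before the
         * result is widened to u64. */
        return (u64)i * bits_per_entry + bit;
    }
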
2015-05-29  omfs: fix sign confusion for bitmap loop counter  (Bob Copeland)

The count variable is used to iterate down to (below) zero from the size of the bitmap and handle the one-filling of the remainder of the last partial bitmap block. The loop conditional expects count to be signed in order to detect when the final block is processed, after which count goes negative.

Unfortunately, a recent change made this unsigned along with some other related fields. The result of this is that during mount, omfs_get_imap will overrun the bitmap array and corrupt memory unless the number of blocks happens to be a multiple of 8 * blocksize.

Fix by changing count back to signed: it is guaranteed to fit in an s32 without overflow due to an enforced limit on the number of blocks in the filesystem.

Signed-off-by: Bob Copeland <me@bobcopeland.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

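A reduced sketch of the loop shape the fix restores (names and structure are illustrative, not the exact omfs code):

    static void walk_bitmap(int nbits, unsigned int array_size, int bits_per_block)
    {
        s32 count = nbits;      /* must stay signed */
        unsigned int i;

        for (i = 0; i < array_size; i++) {
            /* ... process one bitmap block ... */
            count -= bits_per_block;
            if (count < 0) {
                /* final partial block: one-fill the unused tail.
                 * An unsigned count would wrap here instead, and the
                 * loop would overrun the bitmap array. */
                break;
            }
        }
    }
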
2015-05-29  omfs: set error return when d_make_root() fails  (Bob Copeland)

A static checker found the following issue in the error path for omfs_fill_super:

    fs/omfs/inode.c:552 omfs_fill_super()
    warn: missing error code here? 'd_make_root()' failed. 'ret' = '0'

Fix by returning -ENOMEM in this case.

Signed-off-by: Bob Copeland <me@bobcopeland.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

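The fix is the standard pattern (a sketch; the cleanup label is a stand-in):

    sb->s_root = d_make_root(root);
    if (!sb->s_root) {
        ret = -ENOMEM;          /* previously 'ret' was left at 0 here */
        goto out;               /* 'out' stands in for the real cleanup label */
    }
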
2015-05-29  fs, omfs: add NULL terminator at the end of the token list  (Sasha Levin)

match_token() expects a NULL terminator at the end of the token list so that it knows where to stop. Not having one causes it to overrun into invalid memory. In practice, passing a mount option that omfs didn't recognize would sometimes panic the system.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

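The terminator looks like this (the option names shown are illustrative):

    enum {
        Opt_uid, Opt_gid, Opt_umask, Opt_err
    };

    static const match_table_t tokens = {
        {Opt_uid,   "uid=%u"},
        {Opt_gid,   "gid=%u"},
        {Opt_umask, "umask=%o"},
        {Opt_err,   NULL}       /* terminator: match_token() stops here */
    };
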
2015-05-29  fs/binfmt_elf.c:load_elf_binary(): return -EINVAL on zero-length mappings  (Andrew Morton)

load_elf_binary() returns `retval', not `error'.

Fixes: a87938b2e246b81b4fb ("fs/binfmt_elf.c: fix bug in loading of PIE binaries")
Reported-by: James Hogan <james.hogan@imgtec.com>
Cc: Michael Davidson <md@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2015-05-28  xfs: fix broken i_nlink accounting for whiteout tmpfile inode  (Brian Foster)

XFS uses the internal tmpfile() infrastructure for the whiteout inode used for RENAME_WHITEOUT operations. For tmpfile inodes, XFS allocates the inode, drops di_nlink, adds the inode to the agi unlinked list, calls d_tmpfile() which correspondingly drops i_nlink of the vfs inode, and then finishes the common inode setup (e.g., clear I_NEW and unlock).

The d_tmpfile() call was originally made in xfs_create_tmpfile(), but was pulled up out of that function as part of the following commit to resolve a deadlock issue:

    330033d6 xfs: fix tmpfile/selinux deadlock and initialize security

As a result, callers of xfs_create_tmpfile() are responsible for either calling d_tmpfile() or fixing up i_nlink appropriately. The whiteout tmpfile allocation helper does neither. As a result, the vfs ->i_nlink becomes inconsistent with the on-disk ->di_nlink once xfs_rename() links it back into the source dentry and calls xfs_bumplink().

Update the assert in xfs_rename() to help detect this problem in the future and update xfs_rename_alloc_whiteout() to decrement the link count as part of the manual tmpfile inode setup.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>

2015-05-28  xfs: xfs_iozero can return positive errno  (Dave Chinner)

It was missed when we converted everything in XFS to use negative error numbers, so fix it now. Bug introduced in 3.17 by commit 2451337 ("xfs: global error sign conversion"), and should go back to stable kernels. Thanks to Brian Foster for noticing it.

cc: <stable@vger.kernel.org> # 3.17, 3.18, 3.19, 4.0
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>

2015-05-28  xfs: xfs_attr_inactive leaves inconsistent attr fork state behind  (Dave Chinner)

xfs_attr_inactive() is supposed to clean up the attribute fork when the inode is being freed. While it removes attribute fork extents, it completely ignores attributes in local format, which means that there can still be active attributes on the inode after xfs_attr_inactive() has run.

This leads to problems with concurrent inode writeback - the in-core inode attribute fork is removed without locking on the assumption that nothing will be attempting to access the attribute fork after a call to xfs_attr_inactive() because it isn't supposed to exist on disk any more.

To fix this, make xfs_attr_inactive() completely remove all traces of the attribute fork from the inode, regardless of its state. Further, also remove the in-core attribute fork structure safely so that there is nothing further that needs to be done by callers to clean up the attribute fork. This means we can remove the in-core and on-disk attribute forks atomically.

Also, on error simply remove the in-memory attribute fork. There's nothing that can be done with it once we have failed to remove the on-disk attribute fork, so we may as well just blow it away here anyway.

cc: <stable@vger.kernel.org> # 3.12 to 4.0
Reported-by: Waiman Long <waiman.long@hp.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>

2015-05-28  xfs: extent size hints can round up extents past MAXEXTLEN  (Dave Chinner)

This results in BMBT corruption, as seen by this test:

    # mkfs.xfs -f -d size=40051712b,agcount=4 /dev/vdc
    ....
    # mount /dev/vdc /mnt/scratch
    # xfs_io -ft -c "extsize 16m" -c "falloc 0 30g" -c "bmap -vp" /mnt/scratch/foo

which results in this failure on a debug kernel:

    XFS: Assertion failed: (blockcount & xfs_mask64hi(64-BMBT_BLOCKCOUNT_BITLEN)) == 0, file: fs/xfs/libxfs/xfs_bmap_btree.c, line: 211
    ....
    Call Trace:
     [<ffffffff814cf0ff>] xfs_bmbt_set_allf+0x8f/0x100
     [<ffffffff814cf18d>] xfs_bmbt_set_all+0x1d/0x20
     [<ffffffff814f2efe>] xfs_iext_insert+0x9e/0x120
     [<ffffffff814c7956>] ? xfs_bmap_add_extent_hole_real+0x1c6/0xc70
     [<ffffffff814c7956>] xfs_bmap_add_extent_hole_real+0x1c6/0xc70
     [<ffffffff814caaab>] xfs_bmapi_write+0x72b/0xed0
     [<ffffffff811c72ac>] ? kmem_cache_alloc+0x15c/0x170
     [<ffffffff814fe070>] xfs_alloc_file_space+0x160/0x400
     [<ffffffff81ddcc29>] ? down_write+0x29/0x60
     [<ffffffff815063eb>] xfs_file_fallocate+0x29b/0x310
     [<ffffffff811d2bc8>] ? __sb_start_write+0x58/0x120
     [<ffffffff811e3e18>] ? do_vfs_ioctl+0x318/0x570
     [<ffffffff811cd680>] vfs_fallocate+0x140/0x260
     [<ffffffff811ce6f8>] SyS_fallocate+0x48/0x80
     [<ffffffff81ddec09>] system_call_fastpath+0x12/0x17

The tracepoint that indicates the extent that triggered the assert failure is:

    xfs_iext_insert: idx 0 offset 0 block 16777224 count 2097152 flag 1

clearly indicating that the extent length is greater than MAXEXTLEN, which is 2097151. A prior trace point shows the allocation was an exact size match and that a length greater than MAXEXTLEN was asked for:

    xfs_alloc_size_done: agno 1 agbno 8 minlen 2097152 maxlen 2097152
                                               ^^^^^^^        ^^^^^^^

We don't see this problem with extent size hints through the IO path because we can't do single IOs large enough to trigger MAXEXTLEN allocation. fallocate(), OTOH, is not limited in its allocation sizes and so needs help here.

The issue is that the extent size hint alignment is rounding up the extent size past MAXEXTLEN, because xfs_bmapi_write() is not taking extent size hints into account when calculating the maximum extent length to allocate. xfs_bmapi_reserve_delalloc() is already doing this, but direct extent allocation is not.

Unfortunately, the calculation in xfs_bmapi_reserve_delalloc() is wrong, and it works only because delayed allocation extents are not limited in size to MAXEXTLEN in the in-core extent tree. Hence this calculation does not work for direct allocation, and the delalloc code needs fixing. This may, in fact, be the underlying bug that occasionally causes transaction overruns in delayed allocation extent conversion, so now that we know it's wrong we should fix it, too. Many thanks to Brian Foster for finding this problem during review of this patch.

Hence the fix, after much code reading, is to allow xfs_bmap_extsize_align() to align partial extents when full alignment would extend the alignment past MAXEXTLEN. We can safely do this because all callers have higher-layer allocation loops that already handle short allocations, and so will simply run another allocation to cover the remainder of the requested allocation range that we ignored during alignment. The advantage of this approach is that it also removes the need for callers to do anything other than limit their requests to MAXEXTLEN - they don't really need to be aware of extent size hints at all.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>