author     J. Bruce Fields <bfields@redhat.com>  2014-02-04 15:36:59 (GMT)
committer  J. Bruce Fields <bfields@redhat.com>  2014-05-30 21:32:08 (GMT)
commit     b0e35fda827e72cf4b065b52c4c472c28c004fca (patch)
tree       63378e1441b4b8a211901dcda99e503bdd08c06d /fs/btrfs/hash.h
parent     ccae70a9ee415a89b70e97a36886ab55191ebaea (diff)
download   linux-b0e35fda827e72cf4b065b52c4c472c28c004fca.tar.xz
nfsd4: turn off zero-copy-read in exotic cases
We currently allow only one read per compound, with operations before
and after whose responses will require no more than about a page to
encode.
While we don't expect clients to violate those limits any time soon,
this limitation isn't really condoned by the spec, so to future-proof
the server we should lift it.
At the same time we'd like to continue to support zero-copy reads.
Supporting multiple zero-copy reads per compound would require a new
data structure to replace struct xdr_buf, which can represent only one
set of included pages.
So for now we plan to modify encode_read() to support either zero-copy
or non-zero-copy reads, and use some heuristics at the start of the
compound processing to decide whether a zero-copy read will work.
This will allow us to support more exotic compounds without introducing
a performance regression in the normal case.
Later patches handle those "exotic compounds", this one just makes sure
zero-copy is turned off in those cases.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Diffstat (limited to 'fs/btrfs/hash.h')
0 files changed, 0 insertions, 0 deletions