author     Herbert Xu <herbert@gondor.apana.org.au>    2008-03-22 22:47:05 (GMT)
committer  David S. Miller <davem@davemloft.net>       2008-03-22 22:47:05 (GMT)
commit     69d1506731168d6845a76a303b2c45f7c05f3f2c
tree       3bedf2680b30c09b0375616a1c2b0d291a9f376f
parent     7512cbf6efc97644812f137527a54b8e92b6a90a
[TCP]: Let skbs grow over a page on fast peers
While testing the virtio-net driver on KVM with TSO I noticed
that TSO performance with a 1500 MTU is significantly worse
compared to the performance of non-TSO with a 16436 MTU. The
packet dump shows that most of the packets sent are smaller
than a page.
Looking at the code, the reason is actually quite obvious: we always
stop extending the packet if it is the first packet yet to be sent
and it is already larger than the MSS.  Since each extension is
bounded by the page size, this means that (given a 1500 MTU) we're
very unlikely to construct packets larger than a page, provided that
the receiver and the path are fast enough that packets can always be
sent immediately.
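
For reference, the pre-patch logic in the tcp_sendmsg() copy loop
looked roughly like the sketch below (paraphrased rather than quoted
verbatim; helpers such as forced_push(), tcp_push_one() and
tcp_send_head() are the existing ones, but the exact shape of the
code in the tree may differ slightly):

	/*
	 * Sketch of the pre-patch push check inside the copy loop:
	 * once the current skb has reached one MSS we stop appending
	 * to it, and if it is the only unsent skb we push it out
	 * immediately via tcp_push_one().
	 */
	if (skb->len < mss_now || (flags & MSG_OOB))
		continue;			/* keep appending to this skb */

	if (forced_push(tp)) {
		tcp_mark_push(tp, skb);
		__tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH);
	} else if (skb == tcp_send_head(sk))
		tcp_push_one(sk, mss_now);	/* fast peer: skb leaves at ~MSS */
	continue;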
The fix is also quite obvious.  The push calls inside the loop are
just an optimisation so that we don't end up doing all the sending at
the end of the loop, so there is no particular reason why they have
to happen at MSS boundaries.  For TSO, the most natural extension of
this optimisation is to do the pushing once the skb exceeds the TSO
size goal.
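
Concretely, that amounts to making the push check compare against the
TSO size goal instead of the MSS, along these lines (an illustrative
sketch, not the verbatim diff; size_goal is the per-connection TSO
size goal already computed alongside mss_now in tcp_sendmsg()):

	/*
	 * Post-patch sketch: keep growing the skb until it reaches the
	 * TSO size goal rather than a single MSS, so fast peers end up
	 * sending skbs larger than a page.
	 */
	if (skb->len < size_goal || (flags & MSG_OOB))
		continue;		/* not yet at the TSO goal, keep filling */

For non-TSO connections the size goal is simply the MSS, so their
behaviour should be unchanged.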
This is what the patch does and testing with KVM shows that the
TSO performance with a 1500 MTU easily surpasses that of a 16436
MTU and indeed the packet sizes sent are generally larger than
16436.
I don't see any obvious downsides for slower peers or connections,
but it would be prudent to test this extensively to ensure that
those cases don't regress.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>