vhost_net: use packet weight for rx handler, too
author     Paolo Abeni <pabeni@redhat.com>
           Fri, 16 Aug 2019 23:00:28 +0000 (00:00 +0100)
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
           Sun, 25 Aug 2019 08:51:41 +0000 (10:51 +0200)
commit     73f768b7684cf2e3ff99e74bc99c9ff253aa979f
tree       7850eb24e0af3d73213b16c21881f1bf54ad4248
parent     43f7e9b81a81e4aae273d4199b5c59d581d1d83f
vhost_net: use packet weight for rx handler, too

commit db688c24eada63b1efe6d0d7d835e5c3bdd71fd3 upstream.

Similar to commit a2ac99905f1e ("vhost-net: set packet weight of
tx polling to 2 * vq size"), we need a packet-based limit for
handle_rx, too - otherwise, under an rx flood with small packets,
tx can be delayed for a very long time, even without busypolling.
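
As a rough illustration (plain C, not the actual drivers/vhost/net.c
code; the names BYTE_BUDGET, PKT_BUDGET, rx_work_pending(),
receive_one() and requeue_rx() are hypothetical stand-ins), a handler
bounded only by total bytes barely ever yields under a flood of tiny
packets, while an extra packet counter forces it to requeue after a
bounded amount of work:

#include <stdbool.h>
#include <stddef.h>

#define BYTE_BUDGET 0x80000  /* stand-in for the existing bytes limit */
#define PKT_BUDGET  256      /* stand-in for the added packet limit   */

/* Hypothetical helpers standing in for the real virtqueue processing. */
extern bool rx_work_pending(void);
extern size_t receive_one(void);  /* returns the received packet length */
extern void requeue_rx(void);     /* reschedule rx, yield the worker    */

static void rx_handler_sketch(void)
{
	size_t total_len = 0;
	int pkts = 0;

	while (rx_work_pending()) {
		total_len += receive_one();
		/*
		 * With 64-byte packets, the byte budget alone allows
		 * thousands of packets per run; the packet counter caps
		 * the run at 256, so tx gets scheduled much sooner.
		 */
		if (total_len >= BYTE_BUDGET || ++pkts >= PKT_BUDGET) {
			requeue_rx();
			break;
		}
	}
}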

The packet limit applied to handle_rx must be the same as the one
applied by handle_tx, or we will get unfair scheduling between rx
and tx. Tying such a limit to the queue length makes it less
effective for large queue length values and can introduce large
process scheduler latencies, so a constant value is used - as with
the existing bytes limit.
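
To make the queue-length argument concrete, a tiny standalone program
(illustrative only) compares the worst-case packets per handler run
under the earlier 2 * vq-size weight with the constant chosen here:

#include <stdio.h>

int main(void)
{
	const int rings[] = { 256, 512, 1024 };
	const int constant_weight = 256;

	/* A ring-tied weight (2 * ring size) grows with the queue length,
	 * stretching each uninterrupted handler run; the constant does not. */
	for (int i = 0; i < 3; i++)
		printf("ring %4d: ring-tied weight %4d, constant weight %d\n",
		       rings[i], 2 * rings[i], constant_weight);
	return 0;
}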

The selected limit has been validated with PVP[1] performance
test with different queue sizes:

queue size    256   512   1024

baseline      366   354   362
weight 128    715   723   670
weight 256    740   745   733
weight 512    600   460   583
weight 1024   423   427   418

A packet weight of 256 gives peak performance in all the tested
scenarios.

No measurable regression in unidirectional performance tests has
been detected.

[1] https://developers.redhat.com/blog/2017/06/05/measuring-and-comparing-open-vswitch-performance/

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/vhost/net.c