net: revert 8728c544a9c ("net: dev_pick_tx() fix")
author Eric Dumazet <edumazet@google.com>
Thu, 29 Aug 2013 01:10:43 +0000 (18:10 -0700)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sat, 14 Sep 2013 13:54:56 +0000 (06:54 -0700)
[ Upstream commit 702821f4ea6f68db18aa1de7d8ed62c6ba586a64 ]

commit 8728c544a9cbdc ("net: dev_pick_tx() fix") and commit
b6fe83e9525a ("bonding: refine IFF_XMIT_DST_RELEASE capability")
are quite incompatible: queue selection is disabled because the skb
dst is dropped before the packet enters the bonding device.
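
As a rough illustration (a sketch only; the exact placement in
dev_hard_start_xmit() is an assumption based on kernels of that era,
not something this patch touches), the master's transmit path drops
the dst before the slave ever runs its queue selection:

	/* On the bonding master, roughly in dev_hard_start_xmit(): once
	 * bonding advertises IFF_XMIT_DST_RELEASE (b6fe83e9525a), the dst
	 * is gone here, so a later test of the form
	 * "skb_dst(skb) == sk->sk_dst_cache" on the slave can never match
	 * and sk_tx_queue_set() is never called for the flow.
	 */
	if (dev->priv_flags & IFF_XMIT_DST_RELEASE)
		skb_dst_drop(skb);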

This causes a major performance regression, mainly because TCP packets
for a given flow can be sent to multiple queues.

This is particularly visible when using the new FQ packet scheduler
with an MQ + FQ setup on the slaves.

We can safely revert the first commit now that 416186fbf8c5b
("net: Split core bits of netdev_pick_tx into __netdev_pick_tx")
properly caps the queue_index.
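
For reference, that cap sits just above the hunk below; a minimal
sketch of the surrounding context (reconstructed from that era's
net/core/flow_dissector.c, so treat the exact shape as an assumption):

	u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
	{
		struct sock *sk = skb->sk;
		int queue_index = sk_tx_queue_get(sk);

		/* A cached queue is only trusted while it is in range; a
		 * stale or out-of-range value (e.g. after
		 * real_num_tx_queues shrank) forces a recompute, which is
		 * why caching the queue without the old dst comparison is
		 * safe again.
		 */
		if (queue_index < 0 || skb->ooo_okay ||
		    queue_index >= dev->real_num_tx_queues) {
			/* ... recompute new_index, see the hunk below ... */
		}

		return queue_index;
	}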

Reported-by: Xi Wang <xii@google.com>
Diagnosed-by: Xi Wang <xii@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Denys Fedorysychenko <nuclearcat@nuclearcat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
net/core/flow_dissector.c

index 00ee068efc1c2374bc0195d9458eca00392359c6..c99cc371bbd78a63a25c04fe2735aa27259336d3 100644
@@ -345,14 +345,9 @@ u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
                if (new_index < 0)
                        new_index = skb_tx_hash(dev, skb);
 
-               if (queue_index != new_index && sk) {
-                       struct dst_entry *dst =
-                                   rcu_dereference_check(sk->sk_dst_cache, 1);
-
-                       if (dst && skb_dst(skb) == dst)
-                               sk_tx_queue_set(sk, queue_index);
-
-               }
+               if (queue_index != new_index && sk &&
+                   rcu_access_pointer(sk->sk_dst_cache))
+                       sk_tx_queue_set(sk, queue_index);
 
                queue_index = new_index;
        }
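
A note on the restored check: rcu_access_pointer() is used rather than
rcu_dereference_check() because sk->sk_dst_cache is only tested for
being non-NULL and never dereferenced here, so this lockless peek needs
no RCU read-side protection or lockdep annotation.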