tcp: reduce out_of_order memory use
author Eric Dumazet <eric.dumazet@gmail.com>
Sun, 18 Mar 2012 11:07:47 +0000 (11:07 +0000)
committer David S. Miller <davem@davemloft.net>
Mon, 19 Mar 2012 20:53:08 +0000 (16:53 -0400)
With receive window sizes increasing but the speed of light not improving
much, the out-of-order queue can contain a huge number of skbs waiting to
be moved to the receive queue once missing packets arrive and fill the holes.

Some devices happen to use fat skbs (truesize of 4096 + sizeof(struct
sk_buff)) to store regular (MTU <= 1500) frames. This makes it highly
probable that sk_rmem_alloc hits the sk_rcvbuf limit, which can be 4 Mbytes
in many cases.

When the limit is hit, the TCP stack calls tcp_collapse_ofo_queue(), a true
latency killer and CPU cache blower.

Attempting coalescing each time we add a frame to the ofo queue keeps
memory use tight and in many cases avoids the expensive tcp_collapse()
pass later.

Tested on various wireless setups (b43, ath9k, ...) known to use big skb
truesizes, this patch removed the "packets collapsed in receive queue due
to low socket buffer" events I had seen before.

It also reduced the average memory used by TCP sockets.

With help from Neal Cardwell.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: H.K. Jerry Chu <hkchu@google.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
include/linux/snmp.h
net/ipv4/proc.c
net/ipv4/tcp_input.c

index 8ee8af4e6da966c661123b0bcd2a26fbb1ceb1d8..2e68f5ba03896947942c9b9886476c4af2a5da03 100644
@@ -233,6 +233,7 @@ enum
        LINUX_MIB_TCPREQQFULLDOCOOKIES,         /* TCPReqQFullDoCookies */
        LINUX_MIB_TCPREQQFULLDROP,              /* TCPReqQFullDrop */
        LINUX_MIB_TCPRETRANSFAIL,               /* TCPRetransFail */
+       LINUX_MIB_TCPRCVCOALESCE,                       /* TCPRcvCoalesce */
        __LINUX_MIB_MAX
 };
 
index 02d61079f08b0c7639b5c1864fd126f4e5658a03..8af0d44e4e2219cee75126adc6f2348e1d1f2d70 100644
@@ -257,6 +257,7 @@ static const struct snmp_mib snmp4_net_list[] = {
        SNMP_MIB_ITEM("TCPReqQFullDoCookies", LINUX_MIB_TCPREQQFULLDOCOOKIES),
        SNMP_MIB_ITEM("TCPReqQFullDrop", LINUX_MIB_TCPREQQFULLDROP),
        SNMP_MIB_ITEM("TCPRetransFail", LINUX_MIB_TCPRETRANSFAIL),
+       SNMP_MIB_ITEM("TCPRcvCoalesce", LINUX_MIB_TCPRCVCOALESCE),
        SNMP_MIB_SENTINEL
 };
 
index fa7de12c4a5263f7ab95db95cff460474c9a96ff..e886e2f7fa8d03edc8644179a6f1ef7ca6a374f6 100644
@@ -4484,7 +4484,24 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
        end_seq = TCP_SKB_CB(skb)->end_seq;
 
        if (seq == TCP_SKB_CB(skb1)->end_seq) {
-               __skb_queue_after(&tp->out_of_order_queue, skb1, skb);
+               /* Packets in ofo can stay in queue a long time.
+                * Better try to coalesce them right now
+                * to avoid future tcp_collapse_ofo_queue(),
+                * probably the most expensive function in tcp stack.
+                */
+               if (skb->len <= skb_tailroom(skb1) && !tcp_hdr(skb)->fin) {
+                       NET_INC_STATS_BH(sock_net(sk),
+                                        LINUX_MIB_TCPRCVCOALESCE);
+                       BUG_ON(skb_copy_bits(skb, 0,
+                                            skb_put(skb1, skb->len),
+                                            skb->len));
+                       TCP_SKB_CB(skb1)->end_seq = end_seq;
+                       TCP_SKB_CB(skb1)->ack_seq = TCP_SKB_CB(skb)->ack_seq;
+                       __kfree_skb(skb);
+                       skb = NULL;
+               } else {
+                       __skb_queue_after(&tp->out_of_order_queue, skb1, skb);
+               }
 
                if (!tp->rx_opt.num_sacks ||
                    tp->selective_acks[0].end_seq != seq)