From: Eric Dumazet
Date: Thu, 9 Jan 2014 22:12:19 +0000 (-0800)
Subject: net: gro: change GRO overflow strategy
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=600adc18eba823f9fd8ed5fec8b04f11dddf3884;p=GitHub%2Fexynos8895%2Fandroid_kernel_samsung_universal8895.git

net: gro: change GRO overflow strategy

The GRO layer holds at most 8 flows in its GRO list, for performance
reasons. When a packet arrives for a flow not yet in the list and the
list is full, we immediately hand the packet to the upper stacks,
lowering aggregation performance.

With TSO auto sizing and the FQ packet scheduler, this situation happens
more often.

This patch changes the strategy to simply evict the oldest flow in the
list. This works better because of the nature of the packet trains for
which GRO is efficient. It also lowers GRO latency when many flows are
competing.

Tested: used a 40Gbps NIC with 4 RX queues and 200 concurrent TCP_STREAM
netperf instances. Before the patch, the aggregate rate is 11Gbps (while
a single flow can reach 30Gbps). After the patch, line rate is reached.

Signed-off-by: Eric Dumazet
Cc: Jerry Chu
Cc: Neal Cardwell
Acked-by: Neal Cardwell
Signed-off-by: David S. Miller
---

diff --git a/net/core/dev.c b/net/core/dev.c
index ce01847793c0..a8280154c42a 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3882,10 +3882,23 @@ static enum gro_result dev_gro_receive(struct napi_struct *napi, struct sk_buff
 	if (same_flow)
 		goto ok;
 
-	if (NAPI_GRO_CB(skb)->flush || napi->gro_count >= MAX_GRO_SKBS)
+	if (NAPI_GRO_CB(skb)->flush)
 		goto normal;
 
-	napi->gro_count++;
+	if (unlikely(napi->gro_count >= MAX_GRO_SKBS)) {
+		struct sk_buff *nskb = napi->gro_list;
+
+		/* locate the end of the list to select the 'oldest' flow */
+		while (nskb->next) {
+			pp = &nskb->next;
+			nskb = *pp;
+		}
+		*pp = NULL;
+		nskb->next = NULL;
+		napi_gro_complete(nskb);
+	} else {
+		napi->gro_count++;
+	}
 	NAPI_GRO_CB(skb)->count = 1;
 	NAPI_GRO_CB(skb)->age = jiffies;
 	skb_shinfo(skb)->gso_size = skb_gro_len(skb);