net: mvneta: fix error path for building skb
author    Marcin Wojtas <mw@semihalf.com>
Mon, 30 Nov 2015 12:27:44 +0000 (13:27 +0100)
committer David S. Miller <davem@davemloft.net>
Thu, 3 Dec 2015 04:35:05 +0000 (23:35 -0500)
In the current RX processing, the same error path is taken when either
descriptor ring refilling or skb building fails. This is not correct,
because after a successful refill the ring has already been updated with
the newly allocated buffer. Then, if build_skb() fails, the existing code
leaves the original buffer unmapped.

This patch fixes the above situation by swapping the order of the skb-build
error check and the DMA-unmap of the original buffer.
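
To make the ordering concrete, below is a minimal, self-contained user-space
sketch of the same pattern. The helpers map_buffer(), unmap_buffer(),
build_frame() and rx_one() are hypothetical stand-ins for dma_map_single(),
dma_unmap_single(), build_skb() and the RX loop; the sketch only illustrates
why the old buffer must be unmapped before the build check once the ring has
been refilled, it is not the driver code itself.

#include <stdlib.h>

/* Stand-in for dma_map_single(): hands out a buffer "mapped" for the ring. */
static void *map_buffer(void)
{
	return malloc(2048);
}

/* Stand-in for dma_unmap_single(): releases the mapping only, not the memory. */
static void unmap_buffer(void *buf)
{
	(void)buf;
}

/* Stand-in for build_skb(): returns NULL on failure. */
static void *build_frame(void *data, int fail)
{
	return fail ? NULL : data;
}

static void rx_one(int build_fails)
{
	void *old_buf = map_buffer();	/* buffer just taken out of the ring */

	/* Refill happened here: the ring already owns a newly mapped buffer,
	 * so old_buf is this path's responsibility from now on.
	 */
	void *frame = build_frame(old_buf, build_fails);

	/* Fixed ordering: drop the old buffer's mapping unconditionally... */
	unmap_buffer(old_buf);

	/* ...and only then take the error path. With the original ordering
	 * (check before unmap) a build failure would leak old_buf's mapping.
	 */
	if (!frame) {
		free(old_buf);		/* drop the frame */
		return;
	}

	free(old_buf);			/* normal consumption of the frame */
}

int main(void)
{
	rx_one(0);	/* successful build: normal path */
	rx_one(1);	/* failed build: error path, mapping still released */
	return 0;
}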

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Acked-by: Simon Guinot <simon.guinot@sequanux.org>
Cc: <stable@vger.kernel.org> # v4.2+
Fixes: a84e32894191 ("net: mvneta: fix refilling for Rx DMA buffers")
Signed-off-by: David S. Miller <davem@davemloft.net>
drivers/net/ethernet/marvell/mvneta.c

index 5dffb68312368dc2470e23863be16aa37d040bd7..5a98c5d61a2bb9c381a983a223e89569562a612c 100644 (file)
@@ -1580,12 +1580,16 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo,
                }
 
                skb = build_skb(data, pp->frag_size > PAGE_SIZE ? 0 : pp->frag_size);
-               if (!skb)
-                       goto err_drop_frame;
 
+               /* After refill old buffer has to be unmapped regardless
+                * the skb is successfully built or not.
+                */
                dma_unmap_single(dev->dev.parent, phys_addr,
                                 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
 
+               if (!skb)
+                       goto err_drop_frame;
+
                rcvd_pkts++;
                rcvd_bytes += rx_bytes;