RDMA/qedr: Fix kernel panic when running fio over NFSoRDMA
author    Kalderon, Michal <Michal.Kalderon@cavium.com>
          Mon, 5 Mar 2018 08:50:10 +0000 (10:50 +0200)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Wed, 30 May 2018 05:52:12 +0000 (07:52 +0200)
[ Upstream commit e3fd112cbf21d049faf64ba1471d72b93c22109a ]

Race in qedr_poll_cq: latest_cqe wasn't protected by the cq lock,
leading to a case where two contexts accessing poll_cq at the
same time could leave one of them holding a pointer to an old
latest_cqe and reading an invalid cqe element.

Signed-off-by: Amit Radzi <Amit.Radzi@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/infiniband/hw/qedr/verbs.c

index 769ac07c3c8eb72d3bd2bf95446236f9c07dc34b..dc8fca0707e793064e151279673723f9eb299baf 100644
@@ -3518,7 +3518,7 @@ int qedr_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
 {
        struct qedr_dev *dev = get_qedr_dev(ibcq->device);
        struct qedr_cq *cq = get_qedr_cq(ibcq);
-       union rdma_cqe *cqe = cq->latest_cqe;
+       union rdma_cqe *cqe;
        u32 old_cons, new_cons;
        unsigned long flags;
        int update = 0;
@@ -3535,6 +3535,7 @@ int qedr_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *wc)
                return qedr_gsi_poll_cq(ibcq, num_entries, wc);
 
        spin_lock_irqsave(&cq->cq_lock, flags);
+       cqe = cq->latest_cqe;
        old_cons = qed_chain_get_cons_idx_u32(&cq->pbl);
        while (num_entries && is_valid_cqe(cq, cqe)) {
                struct qedr_qp *qp;
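
For readers outside the driver, below is a minimal userspace sketch of the pattern
the diff applies. The struct and function names (cq, cqe, poll_racy, poll_fixed)
are hypothetical, and a pthread mutex stands in for cq->cq_lock and
spin_lock_irqsave(); this is not the qedr code itself. The point it illustrates is
the one from the commit message: the shared latest_cqe pointer must be sampled only
after the lock is held, otherwise a concurrent poller can advance it and leave the
first caller dereferencing a stale entry.

    #include <pthread.h>

    struct cqe {
            int valid;
    };

    struct cq {
            pthread_mutex_t lock;       /* stands in for cq->cq_lock */
            struct cqe *latest_cqe;     /* advanced by whichever context polls */
    };

    /* Racy pattern (pre-fix): latest_cqe is read before the lock is taken,
     * so another poller can move it and leave this caller on a stale entry. */
    struct cqe *poll_racy(struct cq *cq)
    {
            struct cqe *cqe = cq->latest_cqe;   /* may already be outdated */

            pthread_mutex_lock(&cq->lock);
            /* ... walk completions starting at a possibly stale cqe ... */
            pthread_mutex_unlock(&cq->lock);
            return cqe;
    }

    /* Fixed pattern, matching the diff: sample latest_cqe under the lock. */
    struct cqe *poll_fixed(struct cq *cq)
    {
            struct cqe *cqe;

            pthread_mutex_lock(&cq->lock);
            cqe = cq->latest_cqe;               /* consistent view for this poller */
            /* ... walk completions starting at the current cqe ... */
            pthread_mutex_unlock(&cq->lock);
            return cqe;
    }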