blk-mq: sync the update nr_hw_queues with blk_mq_queue_tag_busy_iter
author Jianchao Wang <jianchao.w.wang@oracle.com>
Tue, 21 Aug 2018 07:15:04 +0000 (15:15 +0800)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mon, 13 Apr 2020 08:34:22 +0000 (10:34 +0200)
commit f5bbbbe4d63577026f908a809f22f5fd5a90ea1f upstream.

For blk-mq, part_in_flight/rw will invoke blk_mq_in_flight/rw to
account for the in-flight requests. These helpers access queue_hw_ctx
and nr_hw_queues without any protection, so when an update of
nr_hw_queues runs concurrently with blk_mq_in_flight/rw, a panic comes
up.

Before nr_hw_queues is updated, the queue is frozen, so we can use
q_usage_counter to avoid the race. percpu_ref_is_zero is used here so
that we do not miss any in-flight request. The accesses to nr_hw_queues
and queue_hw_ctx in blk_mq_queue_tag_busy_iter are placed under an RCU
read-side critical section, and __blk_mq_update_nr_hw_queues uses
synchronize_rcu to ensure the zeroed q_usage_counter is globally
visible.
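
Condensed, the pairing of the two sides added by this patch looks
roughly like the sketch below. It is not the literal kernel code (the
tag iteration body and the hctx reallocation are elided), but all of
the identifiers are the ones used in the hunks that follow:

        /* Reader side: blk_mq_queue_tag_busy_iter() */
        rcu_read_lock();
        if (percpu_ref_is_zero(&q->q_usage_counter)) {
                /* queue is frozen for an nr_hw_queues update; bail out
                 * without touching queue_hw_ctx */
                rcu_read_unlock();
                return;
        }
        queue_for_each_hw_ctx(q, hctx, i) {
                /* ... iterate the busy tags of this hctx ... */
        }
        rcu_read_unlock();

        /* Updater side: __blk_mq_update_nr_hw_queues() */
        list_for_each_entry(q, &set->tag_list, tag_set_list)
                blk_mq_freeze_queue(q);   /* q_usage_counter drops to zero */
        /* ... update nr_hw_queues and reallocate queue_hw_ctx ... */
        list_for_each_entry(q, &set->tag_list, tag_set_list)
                blk_mq_unfreeze_queue(q);
        synchronize_rcu();                /* sync with blk_mq_queue_tag_busy_iter */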

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Giuliano Procida <gprocida@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
block/blk-mq-tag.c
block/blk-mq.c

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 3d2ab65d2dd15a012d3ff1efb71ad5b5aca6ddd5..4c623ba0af861a78135b3fa33d364c146b030682 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -334,6 +334,18 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
        struct blk_mq_hw_ctx *hctx;
        int i;
 
+       /*
+        * __blk_mq_update_nr_hw_queues will update the nr_hw_queues and
+        * queue_hw_ctx after freezing the queue, so we can use
+        * q_usage_counter to avoid racing with it. __blk_mq_update_nr_hw_queues
+        * will use synchronize_rcu to ensure all of the users have left the
+        * critical section below and can see the zeroed q_usage_counter.
+        */
+       rcu_read_lock();
+       if (percpu_ref_is_zero(&q->q_usage_counter)) {
+               rcu_read_unlock();
+               return;
+       }
 
        queue_for_each_hw_ctx(q, hctx, i) {
                struct blk_mq_tags *tags = hctx->tags;
@@ -349,7 +361,7 @@ void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
                        bt_for_each(hctx, &tags->breserved_tags, fn, priv, true);
                bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false);
        }
-
+       rcu_read_unlock();
 }
 
 static int bt_alloc(struct sbitmap_queue *bt, unsigned int depth,
diff --git a/block/blk-mq.c b/block/blk-mq.c
index eac4448047366bb60db76788833f6e5b977dd467..9d53f476c5179713a41b5e9e672d9152626519cd 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -2748,6 +2748,10 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 
        list_for_each_entry(q, &set->tag_list, tag_set_list)
                blk_mq_unfreeze_queue(q);
+       /*
+        * Sync with blk_mq_queue_tag_busy_iter.
+        */
+       synchronize_rcu();
 }
 
 void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues)