block: let io_schedule() flush the plug inline
author     Jens Axboe <jaxboe@fusionio.com>
           Sat, 16 Apr 2011 11:27:55 +0000 (13:27 +0200)
committer  Jens Axboe <jaxboe@fusionio.com>
           Sat, 16 Apr 2011 11:27:55 +0000 (13:27 +0200)
Linus correctly observes that the most important dispatch cases
are now done from kblockd, which isn't ideal for latency reasons.
The original reason for switching dispatches out-of-line was to
avoid too deep a stack, so by offloading _only_ the "accidental"
flush done directly from schedule() to kblockd, we should be able
to get the best of both worlds.

So add a blk_schedule_flush_plug() that offloads to kblockd,
and only use that from the schedule() path.

Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
include/linux/blkdev.h
kernel/sched.c
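
For context, a plug is normally flushed explicitly by the task that created
it; the flush from schedule() only happens as a side effect when a task blocks
with requests still plugged. A minimal, illustrative submitter using the
on-stack plugging API might look like the sketch below (submit_batch is a
hypothetical helper, the bio setup is elided, and the 2.6.39-era
submit_bio(rw, bio) signature is assumed):

#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/fs.h>

/*
 * Illustrative sketch only: requests batched between blk_start_plug() and
 * blk_finish_plug() sit on current->plug.  blk_finish_plug() flushes them
 * inline; if the task blocks first, schedule() notices the pending plug and,
 * after this patch, hands the flush to kblockd via blk_schedule_flush_plug().
 */
static void submit_batch(struct bio **bios, int nr)	/* hypothetical helper */
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);			/* start batching on current->plug */
	for (i = 0; i < nr; i++)
		submit_bio(WRITE, bios[i]);	/* queued on the plug, not yet dispatched */
	blk_finish_plug(&plug);			/* explicit flush: dispatch inline */
}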

diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1c76506fcf11cfe260276475762fadc6d4b09dfc..ec0357d8c4a58679c96bc3e937b5de97e9a4063c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -871,6 +871,14 @@ static inline void blk_flush_plug(struct task_struct *tsk)
 {
        struct blk_plug *plug = tsk->plug;
 
+       if (plug)
+               blk_flush_plug_list(plug, false);
+}
+
+static inline void blk_schedule_flush_plug(struct task_struct *tsk)
+{
+       struct blk_plug *plug = tsk->plug;
+
        if (plug)
                blk_flush_plug_list(plug, true);
 }
@@ -1317,6 +1325,11 @@ static inline void blk_flush_plug(struct task_struct *task)
 {
 }
 
+static inline void blk_schedule_flush_plug(struct task_struct *task)
+{
+}
+
+
 static inline bool blk_needs_flush_plug(struct task_struct *tsk)
 {
        return false;
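
The new !CONFIG_BLOCK stub mirrors the existing blk_flush_plug() stub, so
callers outside the block layer need no #ifdefs. A sketch of the calling
pattern (flush_pending_io is a hypothetical name; the real caller is the
schedule() hunk below):

/*
 * Illustrative only: with !CONFIG_BLOCK, blk_needs_flush_plug() is constant
 * false and blk_schedule_flush_plug() is an empty inline, so this whole
 * branch compiles away on block-less configurations.
 */
static void flush_pending_io(struct task_struct *tsk)	/* hypothetical */
{
	if (blk_needs_flush_plug(tsk))
		blk_schedule_flush_plug(tsk);	/* defers the flush to kblockd */
}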
diff --git a/kernel/sched.c b/kernel/sched.c
index a187c3fe027b58debb8c15d6ec98f76d6eb62b75..312f8b95c2d44fbbc7c7097de04b52d92abc0648 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4118,7 +4118,7 @@ need_resched:
                         */
                        if (blk_needs_flush_plug(prev)) {
                                raw_spin_unlock(&rq->lock);
-                               blk_flush_plug(prev);
+                               blk_schedule_flush_plug(prev);
                                raw_spin_lock(&rq->lock);
                        }
                }
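
The dispatch side itself is untouched by this patch; the two helpers differ
only in the from_schedule flag they pass to blk_flush_plug_list(). Roughly,
that flag decides whether the plugged queues are run on the caller's stack or
punted to the kblockd workqueue. A simplified sketch of that idea, not the
actual block/blk-core.c code (run_one_queue and q->unplug_work are
illustrative names, and call signatures are simplified):

/*
 * Illustrative sketch only: how a from_schedule flag can pick between an
 * inline queue run and a deferred one.  The real code in block/blk-core.c
 * differs in detail; unplug_work here is an assumed work item, not a real
 * request_queue field.
 */
static void run_one_queue(struct request_queue *q, bool from_schedule)
{
	if (from_schedule) {
		/* schedule() path: keep the stack shallow and return quickly
		 * by letting the kblockd workqueue issue the requests */
		kblockd_schedule_work(q, &q->unplug_work);
	} else {
		/* blk_finish_plug()/blk_flush_plug() path: dispatch now, on
		 * the caller's stack, for lower latency */
		__blk_run_queue(q);
	}
}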