Linus correctly observes that the most important dispatch cases
are now done from kblockd, which isn't ideal for latency reasons.
The original reason for switching dispatches out-of-line was to
avoid running with too deep a stack, so by offloading _only_ the
"accidental" flush in schedule() to kblockd, we should be able
to get the best of both worlds.
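
Concretely, the split rides on the bool that blk_flush_plug_list()
already takes: passing true punts the queue run to the kblockd
workqueue, passing false runs the queue in the caller's context.
A simplified sketch of the dispatch decision (illustrative only;
the real code in block/blk-core.c also handles tracing and the
per-queue bookkeeping):

	static void queue_unplugged(struct request_queue *q, bool from_schedule)
	{
		/*
		 * Sketch: when called via schedule() the stack may already
		 * be deep, so force the dispatch over to kblockd; on all
		 * other paths run the queue inline for lower latency.
		 */
		__blk_run_queue(q, from_schedule);
	}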
So add a blk_schedule_flush_plug() that offloads to kblockd,
and only use that from the schedule() path.
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
static inline void blk_flush_plug(struct task_struct *tsk)
{
struct blk_plug *plug = tsk->plug;
+ if (plug)
+ blk_flush_plug_list(plug, false);
+}
+
+static inline void blk_schedule_flush_plug(struct task_struct *tsk)
+{
+ struct blk_plug *plug = tsk->plug;
+
if (plug)
blk_flush_plug_list(plug, true);
}
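
Both helpers funnel into blk_flush_plug_list(); only the flag they
pass differs. A caller-side sketch of the intended split (illustrative,
not part of this patch):

	blk_flush_plug(current);		/* waiting on I/O: dispatch inline */
	blk_schedule_flush_plug(current);	/* plain schedule(): punt to kblockd */

The second hunk below adds a matching no-op stub for
blk_schedule_flush_plug() so !CONFIG_BLOCK builds keep compiling.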
static inline void blk_flush_plug(struct task_struct *task)
{
}
+static inline void blk_schedule_flush_plug(struct task_struct *task)
+{
+}
+
static inline bool blk_needs_flush_plug(struct task_struct *tsk)
{
return false;
--- a/kernel/sched.c
+++ b/kernel/sched.c
*/
if (blk_needs_flush_plug(prev)) {
raw_spin_unlock(&rq->lock);
- blk_flush_plug(prev);
+ blk_schedule_flush_plug(prev);
raw_spin_lock(&rq->lock);
}
}
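
Since schedule() can be entered from nearly any context, the stack
above it may already be deep, so this one call site is switched to
the kblockd variant while paths that explicitly wait on I/O keep the
inline flush. For reference, the guard used here is the existing
inline from include/linux/blkdev.h, roughly as it stands at this
point (reproduced for context, not part of the diff):

	static inline bool blk_needs_flush_plug(struct task_struct *tsk)
	{
		struct blk_plug *plug = tsk->plug;

		return plug && !list_empty(&plug->list);
	}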