xen/blkback: Stick REQ_SYNC on WRITEs to deal with CFQ I/O scheduler.
author    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
          Tue, 26 Apr 2011 20:24:18 +0000 (16:24 -0400)
committer Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
          Tue, 26 Apr 2011 20:24:18 +0000 (16:24 -0400)
If one runs a simple fio job with a 20%/80% random read/write mix,
the numbers are incredibly bad when using the CFQ I/O scheduler
(a rough fio invocation is sketched below the table):

IOmeter       |       |      |          |
64K, randrw   |  NOOP | CFQ  | deadline |
randrwmix=80  |       |      |          |
--------------+-------+------+----------+
blkback       |103/27 |32/10 | 102/27   |
--------------+-------+------+----------+
QEMU qdisk    |103/27 |102/27| 102/27   |
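
For reference (not part of the original report), a fio invocation that
roughly matches the job above - 64K blocks, random read/write, 80% reads -
might look like the sketch below; the target device, iodepth and runtime
are assumptions, not values taken from this commit:

    fio --name=randrw --filename=/dev/xvdb --direct=1 --ioengine=libaio \
        --rw=randrw --rwmixread=80 --bs=64k --iodepth=32 \
        --runtime=60 --time_based --group_reporting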

The problem as explained by Vivek Goyal was:

".. that difference is that sync vs async requests. In the case of
a kernel thread submitting IO, [..] all the WRITES might be being
considered as async and will go in a different queue. If you mix those
with some READS, they are always sync and will go in differnet queue.
In presence of sync queue, CFQ will idle and choke up WRITES in
an attempt to improve latencies of READs.

In the case of AIO [note: this is what QEMU qdisk is doing], [..]
it is direct IO, so both READS and WRITES will be considered SYNC,
will go in a single queue, and no choking of WRITES will take place."

The solution is quite simple: tack REQ_SYNC (which is what the
WRITE_ODIRECT macro expands to) onto the WRITEs and the numbers go
back up.
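
For reference (not part of this patch), include/linux/fs.h at the time
of this change defines the macro roughly as:

    #define WRITE_ODIRECT	(WRITE | REQ_SYNC)

so the bios submitted for BLKIF_OP_WRITE now carry REQ_SYNC and CFQ
queues them as sync requests alongside the READs, instead of idling on
the readers and starving the writes.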

Suggested-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
drivers/block/xen-blkback/blkback.c

index ed85ba94b2e0a85a5962d4621c2ef55b035e3562..8583b130499af63e1b09c79aa0a698c4fcfd1399 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -559,7 +559,7 @@ static void dispatch_rw_block_io(struct blkif_st *blkif,
                operation = READ;
                break;
        case BLKIF_OP_WRITE:
-               operation = WRITE;
+               operation = WRITE_ODIRECT;
                break;
        case BLKIF_OP_WRITE_BARRIER:
                operation = WRITE_BARRIER;