drbd: fix potential protocol error and resulting disconnect/reconnect
author     Lars Ellenberg <lars.ellenberg@linbit.com>
           Mon, 21 Jan 2013 14:43:41 +0000 (15:43 +0100)
committer  Philipp Reisner <philipp.reisner@linbit.com>
           Mon, 21 Jan 2013 21:58:36 +0000 (22:58 +0100)
commit     2681f7f6ce6c7416eb619d0fb19422bcc68bd9e1
tree       31b6a1a1830c4c81d917b545e659c1f46bcc1218
parent     d2ec180c23a5a1bfe34d8638b0342a47c00cf70f
drbd: fix potential protocol error and resulting disconnect/reconnect

When we notice a disk failure on the receiving side,
we stop sending it new incoming writes.

Depending on exact timing of various events, the same transfer log epoch
could end up containing both replicated (before we noticed the failure)
and local-only requests (after we noticed the failure).
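For reference, whether a write is replicated is decided per request, from the
peer disk and connection state at submission time.  A simplified sketch of the
helper in drbd_req.h (conditions abbreviated, not the verbatim code):

  /* Sketch: replicate a write only while the peer disk is usable,
   * or while an ongoing resync will bring it up to date. */
  static bool drbd_should_do_remote(union drbd_dev_state s)
  {
      return s.pdsk == D_UP_TO_DATE ||
          (s.pdsk >= D_INCONSISTENT &&
           s.conn >= C_WF_BITMAP_T &&
           s.conn < C_AHEAD);
  }

Once this flips to false, new writes stay local-only, while the still-open
epoch already contains replicated ones.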

The sanity checks in tl_release(), called when receiving a
P_BARRIER_ACK, check that the ack'ed transfer log epoch matches
the expected epoch, and the number of contained writes matches
the number of ack'ed writes.

In this case, they count both replicated and local-only writes, while the
peer acknowledges only those it has actually seen.  We get a mismatch,
resulting in a protocol error and a disconnect/reconnect cycle.

The message logged is
  "BAD! BarrierAck #%u received with n_writes=%u, expected n_writes=%u!\n"

A similar issue can also be triggered when starting a resync while
having a healthy replication link, by invalidating one side, forcing a
full sync, or attaching to a diskless node.

Fix this by closing the current epoch whenever the state changes in a way
that would change the replication intent of the next write.

Epochs now contain either only non-replicated or only replicated writes.
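
The shape of the fix, as a sketch (the helper and field names used here,
such as start_new_tl_epoch(), current_tle_writes, current_tle_nr and
wake_all_senders(), are illustrative, not the exact upstream diff):

  /* drbd_req.c (sketch): force-close the open epoch, so the next
   * write starts a new one. */
  void start_new_tl_epoch(struct drbd_tconn *tconn)
  {
      /* no point closing an epoch that is still empty */
      if (tconn->current_tle_writes == 0)
          return;

      tconn->current_tle_writes = 0;
      atomic_inc(&tconn->current_tle_nr);
      wake_all_senders(tconn);
  }

  /* drbd_state.c (sketch): while applying a state change, close the
   * epoch whenever the replication intent for new writes would flip. */
  if (drbd_should_do_remote(os) != drbd_should_do_remote(ns))
      start_new_tl_epoch(mdev->tconn);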

Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
drivers/block/drbd/drbd_req.c
drivers/block/drbd/drbd_req.h
drivers/block/drbd/drbd_state.c