This:
# mkfs.btrfs /dev/sdb{1,2} ; wipefs -a /dev/sdb1; mount /dev/sdb2 /mnt/test
would lead to a blkdev open/close mismatch when the mount fails, and
a permanently busy (opened O_EXCL) sdb2:
# wipefs -a /dev/sdb2
wipefs: error: /dev/sdb2: probing initialization failed: Device or resource busy
It's because btrfs_open_devices() may open some devices, fail on
the last one, and return that final failure in "ret". The mount then
fails, but the caller never cleans up the devices that were already
opened successfully.
Chris assures me that:
"btrfs_open_devices just means: go off and open every bdev you can from
this uuid. It should return success if we opened any of them at all."
So change the logic to ignore an individual open failure and simply
skip processing of that device; later on it's decided whether we have
enough devices to continue.
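The fixed shape, in the same userspace sketch as above (again purely
illustrative; the real code tracks successful opens through fs_devices
rather than a local counter):

#include <fcntl.h>

/*
 * New shape: an open failure only skips that one entry.  Success or
 * failure is decided afterwards, based on how many opens succeeded.
 */
static int open_all_fixed(const char *paths[], int fds[], int n)
{
	int opened = 0;
	int i;

	for (i = 0; i < n; i++) {
		fds[i] = open(paths[i], O_RDONLY);
		if (fds[i] < 0)
			continue;	/* ignore this one, keep going */
		opened++;
	}
	return opened ? 0 : -1;		/* fail only if nothing opened */
}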
Reported-by: Jan Safranek <jsafrane@redhat.com>
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
 		if (!device->name)
 			continue;
 
-		ret = btrfs_get_bdev_and_sb(device->name->str, flags, holder, 1,
-					    &bdev, &bh);
-		if (ret)
+		/* Just open everything we can; ignore failures here */
+		if (btrfs_get_bdev_and_sb(device->name->str, flags, holder, 1,
+					  &bdev, &bh))
 			continue;
 
 		disk_super = (struct btrfs_super_block *)bh->b_data;