From: Takahiro Yasui
Date: Tue, 16 Feb 2010 18:42:58 +0000 (+0000)
Subject: dm raid1: fix null pointer dereference in suspend
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=558569aa9d83e016295bac77d900342908d7fd85;p=GitHub%2FLineageOS%2Fandroid_kernel_motorola_exynos9610.git

dm raid1: fix null pointer dereference in suspend

When suspending a failed mirror, bios are completed by mirror_end_io() and
__rh_lookup() in dm_rh_dec() returns NULL where a non-NULL return value is
required by design.  Fix this by not changing the state of a recovery-failed
region from DM_RH_RECOVERING to DM_RH_NOSYNC in dm_rh_recovery_end().

Issue:

On a 2.6.33-rc1 kernel, I hit the bug when I suspended the failed mirror
with the dmsetup command.

BUG: unable to handle kernel NULL pointer dereference at 00000020
IP: [] dm_rh_dec+0x35/0xa1 [dm_region_hash]
...
EIP: 0060:[] EFLAGS: 00010046 CPU: 0
EIP is at dm_rh_dec+0x35/0xa1 [dm_region_hash]
EAX: 00000286 EBX: 00000000 ECX: 00000286 EDX: 00000000
ESI: eff79eac EDI: eff79e80 EBP: f6915cd4 ESP: f6915cc4
 DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process dmsetup (pid: 2849, ti=f6914000 task=eff03e80 task.ti=f6914000)
...
Call Trace:
 [] ? mirror_end_io+0x53/0x1b1 [dm_mirror]
 [] ? clone_endio+0x4d/0xa2 [dm_mod]
 [] ? mirror_end_io+0x0/0x1b1 [dm_mirror]
 [] ? clone_endio+0x0/0xa2 [dm_mod]
 [] ? bio_endio+0x28/0x2b
 [] ? hold_bio+0x2d/0x62 [dm_mirror]
 [] ? mirror_presuspend+0xeb/0xf7 [dm_mirror]
 [] ? vmap_page_range+0xb/0xd
 [] ? suspend_targets+0x2d/0x3b [dm_mod]
 [] ? dm_table_presuspend_targets+0xe/0x10 [dm_mod]
 [] ? dm_suspend+0x4d/0x150 [dm_mod]
 [] ? dev_suspend+0x55/0x18a [dm_mod]
 [] ? _copy_from_user+0x42/0x56
 [] ? dm_ctl_ioctl+0x22c/0x281 [dm_mod]
 [] ? dev_suspend+0x0/0x18a [dm_mod]
 [] ? dm_ctl_ioctl+0x0/0x281 [dm_mod]
 [] ? vfs_ioctl+0x22/0x85
 [] ? do_vfs_ioctl+0x4cb/0x516
 [] ? sys_ioctl+0x40/0x5a
 [] ? sysenter_do_call+0x12/0x28

Analysis:

When the recovery process of a region fails, dm_rh_recovery_end() changes
the state of the region from DM_RH_RECOVERING to DM_RH_NOSYNC.  When
recovery_complete() is executed between dm_rh_update_states() and
do_writes() in do_mirror(), bios are processed with the region state
DM_RH_NOSYNC.  However, the region data is freed without checking its
pending count when dm_rh_update_states() is called next time.

When bios are finished by mirror_end_io(), __rh_lookup() in dm_rh_dec()
returns NULL even though a valid return value is expected.

Solution:

Remove the state change of the recovery-failed region from
DM_RH_RECOVERING to DM_RH_NOSYNC in dm_rh_recovery_end().  We can remove
the state change because:

  - If the region data has been released by dm_rh_update_states(),
    new region data is created with the state DM_RH_NOSYNC, and bios
    are processed according to the DM_RH_NOSYNC state.

  - If the region data has not been released by dm_rh_update_states(),
    the state of the region is still DM_RH_RECOVERING and bios are put
    on the delayed_bios list.
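For reference, the crash happens on the bio-completion side: dm_rh_dec()
looks the region up and then dereferences the result unconditionally.
The following is a simplified sketch of that path, condensed from
drivers/md/dm-region-hash.c (not the verbatim source; locking details
and clean-list handling are trimmed):

	/*
	 * Sketch of the bio-completion path.  __rh_lookup() returns NULL
	 * once dm_rh_update_states() has already freed the region, and the
	 * first dereference of 'reg' below is the reported oops.
	 */
	void dm_rh_dec(struct dm_region_hash *rh, region_t region)
	{
		unsigned long flags;
		struct dm_region *reg;

		read_lock(&rh->hash_lock);
		reg = __rh_lookup(rh, region);	/* NULL if the region is gone */
		read_unlock(&rh->hash_lock);

		spin_lock_irqsave(&rh->region_lock, flags);
		if (atomic_dec_and_test(&reg->pending)) {
			/* region is now idle: mark it clean and wake workers */
		}
		spin_unlock_irqrestore(&rh->region_lock, flags);
	}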
The flag change from DM_RH_RECOVERING to DM_RH_NOSYNC in
dm_rh_recovery_end() was added in the following commit:

  dm raid1: handle resync failures
  author Jonathan Brassow
  Thu, 12 Jul 2007 16:29:04 +0000 (17:29 +0100)
  http://git.kernel.org/linus/f44db678edcc6f4c2779ac43f63f0b9dfa28b724

Signed-off-by: Takahiro Yasui
Reviewed-by: Mikulas Patocka
Signed-off-by: Alasdair G Kergon
---

diff --git a/drivers/md/dm-region-hash.c b/drivers/md/dm-region-hash.c
index 5f19ceb6fe91..168bd38f5006 100644
--- a/drivers/md/dm-region-hash.c
+++ b/drivers/md/dm-region-hash.c
@@ -660,10 +660,9 @@ void dm_rh_recovery_end(struct dm_region *reg, int success)
 	spin_lock_irq(&rh->region_lock);
 	if (success)
 		list_add(&reg->list, &reg->rh->recovered_regions);
-	else {
-		reg->state = DM_RH_NOSYNC;
+	else
 		list_add(&reg->list, &reg->rh->failed_recovered_regions);
-	}
+
 	spin_unlock_irq(&rh->region_lock);
 
 	rh->wakeup_workers(rh->context);
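For context (not part of the patch): the release that the analysis refers
to happens when dm_rh_update_states() drains the failed_recovered_regions
list.  A simplified sketch of that path, assuming the 2.6.33-era layout of
drivers/md/dm-region-hash.c, where the region goes back to the mempool
with no check of reg->pending:

	/*
	 * Sketch (not verbatim source): failed-recovered regions are
	 * completed and freed unconditionally, so a bio still holding a
	 * reference to 'reg' will later miss in __rh_lookup().
	 */
	list_for_each_entry_safe(reg, next, &failed_recovered, list) {
		complete_resync_work(reg, errors_handled ? 0 : 1);
		mempool_free(reg, rh->region_pool);
	}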