From: Tim Sell
Date: Fri, 31 Jul 2015 17:21:33 +0000 (-0400)
Subject: staging: unisys: visornic - prevent lock recursion after IO recovery
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=6f562b21612d938cf43202ca3fe29636893aa5df;p=GitHub%2FLineageOS%2Fandroid_kernel_motorola_exynos9610.git

staging: unisys: visornic - prevent lock recursion after IO recovery

In the patch which changed the serverdown logic to be synchronous, we
were mistakenly holding on to devdata->priv_lock in the call to
visornic_serverdown_complete(), which ultimately ended up recursively
attempting to grab the same lock via the path:

    --> dev_close
        --> visornic_close()
            --> visornic_disable_with_timeout()

Evidence:

  BUG: spinlock recursion on CPU#0, kworker/u2:0/1567
   lock: 0xffff88002d7e4c90, .magic: dead4ead, .owner: kworker/ .owner_cpu: 0
  CPU: 0 PID: 1567 Comm: kworker/u2:0 Tainted: G WC 4.2.0-rc3-ARCH+ #60
  Hardware name: Dell Inc. PowerEdge T110/ , BIOS 1.23 12/15/2009
  Workqueue: visorchipset_controlvm controlvm_periodic_work [visorbus]
   ffff8800216a9380 ffff88002d167878 ffffffff81476874 000000000000061f
   ffff88002d7e4c90 ffff88002d167898 ffffffff8109e2bc ffff88002d7e4c90
   ffffffff81763d7c ffff88002d1678b8 ffffffff8109e330 ffff88002d7e4c90
  Call Trace:
   [] dump_stack+0x4f/0x73
   [] spin_dump+0x7c/0xc0
   [] spin_bug+0x30/0x40
   [] do_raw_spin_lock+0x127/0x140
   [] _raw_spin_lock_irqsave+0x4b/0x60
   [] ? visornic_disable_with_timeout.clone.2+0x3c/ [visornic]
   [] ? _raw_spin_unlock_bh+0x39/0x40
   [] visornic_disable_with_timeout.clone.2+0x3c/ [visornic]
   [] visornic_close+0xe/0x20 [visornic]
   [] __dev_close_many+0x92/0xe0
   [] dev_close_many+0x7a/0x110
   [] ? down+0x16/0x50
   [] dev_close+0x3f/0x50
   [] visornic_serverdown+0x91/0x1a0 [visornic]
   [] ? device_changestate_responder.clone. [visorbus]
   [] visornic_pause+0x15/0x20 [visornic]
   [] initiate_chipset_device_pause_resume+0x9f/0xe0 [visorbus]
   [] chipset_device_pause+0x13/0x20 [visorbus]
   [] device_epilog+0x12b/0x1a0 [visorbus]
   [] handle_command+0x72b/0x970 [visorbus]
   [] ? visorchannel_signalremove+0x6e/0x80 [visorbus]
   [] controlvm_periodic_work+0x271/0x420 [visorbus]
   [] process_one_work+0x1d2/0x540
   [] ? process_one_work+0x139/0x540
   [] ? __schedule+0x807/0xc30
   [] worker_thread+0x57/0x4c0
   [] ? process_scheduled_works+0x40/0x40
   [] ? process_scheduled_works+0x40/0x40
   [] kthread+0xe9/0x110
   [] ? __init_kthread_worker+0x70/0x70
   [] ret_from_fork+0x3f/0x70
   [] ? __init_kthread_worker+0x70/0x70
  BUG: spinlock lockup suspected on CPU#0, kworker/u2:0/1567

Fixes: f2b70efaf48f ("staging: unisys: Make serverdown synchronous")
Signed-off-by: Tim Sell
Signed-off-by: Benjamin Romer
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/drivers/staging/unisys/visornic/visornic_main.c b/drivers/staging/unisys/visornic/visornic_main.c
index f4c0c9fd077b..801e66abf58e 100644
--- a/drivers/staging/unisys/visornic/visornic_main.c
+++ b/drivers/staging/unisys/visornic/visornic_main.c
@@ -416,14 +416,15 @@ visornic_serverdown(struct visornic_devdata *devdata,
 		}
 		devdata->server_change_state = true;
 		devdata->server_down_complete_func = complete_func;
+		spin_unlock_irqrestore(&devdata->priv_lock, flags);
 		visornic_serverdown_complete(devdata);
 	} else if (devdata->server_change_state) {
 		dev_dbg(&devdata->dev->device, "%s changing state\n",
 			__func__);
 		spin_unlock_irqrestore(&devdata->priv_lock, flags);
 		return -EINVAL;
-	}
-	spin_unlock_irqrestore(&devdata->priv_lock, flags);
+	} else
+		spin_unlock_irqrestore(&devdata->priv_lock, flags);
 	return 0;
 }
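
For illustration, a minimal sketch of the locking pattern the fix adopts,
not taken from the driver itself: the names state_lock, changing_state,
my_teardown() and my_request_state_change() are hypothetical stand-ins for
priv_lock, server_change_state and the visornic_serverdown() /
dev_close() paths. The point is only that the lock guarding the state must
be released before calling into code that will acquire it again.

#include <linux/spinlock.h>
#include <linux/errno.h>

static DEFINE_SPINLOCK(state_lock);	/* stand-in for devdata->priv_lock */
static bool changing_state;		/* stand-in for server_change_state */

/* Stands in for the dev_close() path, which takes the lock itself,
 * just as visornic_disable_with_timeout() takes priv_lock. */
static void my_teardown(void)
{
	unsigned long flags;

	spin_lock_irqsave(&state_lock, flags);
	/* ... quiesce the device while holding the lock ... */
	spin_unlock_irqrestore(&state_lock, flags);
}

static int my_request_state_change(void)
{
	unsigned long flags;

	spin_lock_irqsave(&state_lock, flags);
	if (changing_state) {
		spin_unlock_irqrestore(&state_lock, flags);
		return -EINVAL;
	}
	changing_state = true;
	/* Drop the lock *before* calling my_teardown(); holding it across
	 * the call would recurse on state_lock, which is the class of bug
	 * this patch fixes for priv_lock. */
	spin_unlock_irqrestore(&state_lock, flags);
	my_teardown();
	return 0;
}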