[RAMEN9610-11757] irq/work: Use llist_for_each_entry_safe
author Thomas Gleixner <tglx@linutronix.de>
Sun, 12 Nov 2017 12:02:51 +0000 (13:02 +0100)
committer Cosmin Tanislav <demonsingur@gmail.com>
Mon, 22 Apr 2024 17:23:15 +0000 (20:23 +0300)
The llist_for_each_entry() loop in irq_work_run_list() is unsafe because
once the work's PENDING bit is cleared it can be requeued on another CPU.

Use llist_for_each_entry_safe() instead.

Change-Id: I8680657f89c008f879b4e88e3499a7e44f2978a1
Fixes: 16c0890dc66d ("irq/work: Don't reinvent the wheel but use existing llist API")
Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Frederic Weisbecker <frederic@kernel.org>
Cc: Byungchul Park <byungchul.park@lge.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Petri Latvala <petri.latvala@intel.com>
Link: http://lkml.kernel.org/r/151027307351.14762.4611888896020658384@mail.alporthouse.com
kernel/irq_work.c

index eb6ccf11a857aecef50989d1153bc3abc69b587a..9c562ffdf8854b70f58c09f4811fefc5a8fff5fa 100644 (file)
@@ -131,9 +131,9 @@ bool irq_work_needs_cpu(void)
 
 static void irq_work_run_list(struct llist_head *list)
 {
-       unsigned long flags;
-       struct irq_work *work;
+       struct irq_work *work, *tmp;
        struct llist_node *llnode;
+       unsigned long flags;
 
        BUG_ON(!irqs_disabled());
 
@@ -141,7 +141,7 @@ static void irq_work_run_list(struct llist_head *list)
                return;
 
        llnode = llist_del_all(list);
-       llist_for_each_entry(work, llnode, llnode) {
+       llist_for_each_entry_safe(work, tmp, llnode, llnode) {
                /*
                 * Clear the PENDING bit, after this point the @work
                 * can be re-used.