drm/i915: queue work outside spinlock in hsw_pm_irq_handler
author Daniel Vetter <daniel.vetter@ffwll.ch>
Thu, 4 Jul 2013 21:35:27 +0000 (23:35 +0200)
committer Daniel Vetter <daniel.vetter@ffwll.ch>
Thu, 11 Jul 2013 12:36:34 +0000 (14:36 +0200)
And kill the comment about it. Queueing work is a barrier-type event;
no amount of locking will help in ordering things (as long as we queue
the work after having updated all relevant data structures). Also,
queue_work itself acts as a sufficient memory barrier.

Again, on the surface this is just a tiny micro-optimization to reduce
the hold-time of dev_priv->rps.lock. But the better reason is that it
reduces superfluous locking and so makes it clearer what we actually
need for correctness.

Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
drivers/gpu/drm/i915/i915_irq.c

index d4af11541287ecfc4fac4b49f42fe2e94f6be449..04861995fe1bc06b0ea3eaa380010cf684480b6d 100644
@@ -969,9 +969,9 @@ static void hsw_pm_irq_handler(struct drm_i915_private *dev_priv,
                I915_WRITE(GEN6_PMIMR, dev_priv->rps.pm_iir);
                /* never want to mask useful interrupts. (also posting read) */
                WARN_ON(I915_READ_NOTRACE(GEN6_PMIMR) & ~GEN6_PM_RPS_EVENTS);
-               /* TODO: if queue_work is slow, move it out of the spinlock */
-               queue_work(dev_priv->wq, &dev_priv->rps.work);
                spin_unlock(&dev_priv->rps.lock);
+
+               queue_work(dev_priv->wq, &dev_priv->rps.work);
        }
 
        if (pm_iir & PM_VEBOX_USER_INTERRUPT)
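
For illustration, a minimal sketch of the pattern the commit message
describes: update all shared state while holding the spinlock, drop the
lock, and only then queue the work. The struct and function names below
are simplified stand-ins, not the actual i915 definitions.

/* Simplified sketch; names are illustrative, not the real i915 code. */
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct rps_state {
	spinlock_t lock;
	u32 pm_iir;			/* pending PM interrupt bits */
	struct work_struct work;	/* bottom-half RPS handler */
};

static void pm_irq_handler(struct rps_state *rps, u32 pm_iir)
{
	spin_lock(&rps->lock);
	/* Update all shared state first ... */
	rps->pm_iir |= pm_iir;
	spin_unlock(&rps->lock);

	/* ... then queue the work with the lock dropped: queue_work()
	 * itself provides the memory barrier that guarantees the worker
	 * sees the pm_iir update made above. */
	queue_work(system_wq, &rps->work);
}

The queued worker would then take rps->lock again, read and clear
pm_iir, and do the actual work, mirroring how the RPS work handler
consumes dev_priv->rps.pm_iir in i915_irq.c.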