We need to ensure that we always serialize updates to the bottom-half
using the breadcrumbs.irq_lock so that we don't race with a concurrent
interrupt handler. This is most important just prior to leaving the
waiter (when the intel_wait will be overwritten), so make sure we are
not the current bottom-half when skipping the irq locks.
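
In outline, b->irq_wait (the current bottom-half) is only ever updated
under b->irq_lock, so the lockless fast path out of the waiter is only
sound while the exiting waiter cannot still be the bottom-half. As a
sketch of the two sides (illustrative only, not code from this patch;
names follow the breadcrumbs code, bookkeeping elided):

	/* interrupt side: bottom-half handover, always under irq_lock */
	spin_lock(&b->irq_lock);
	b->irq_wait = to_wait(next);
	spin_unlock(&b->irq_lock);

	/* waiter exit: skip both locks only if already decoupled, and
	 * assert we are not still the bottom-half, as the intel_wait
	 * is about to be overwritten
	 */
	if (RB_EMPTY_NODE(&wait->node)) {
		GEM_BUG_ON(READ_ONCE(b->irq_wait) == wait);
		return;
	}
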
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170315210726.12095-4-chris@chris-wilson.co.uk
static void __intel_engine_remove_wait(struct intel_engine_cs *engine,
				       struct intel_wait *wait)
{
	struct intel_breadcrumbs *b = &engine->breadcrumbs;

	lockdep_assert_held(&b->rb_lock);
+	GEM_BUG_ON(b->irq_wait == wait);
	/* This request is completed, so remove it from the tree, mark it as
	 * complete, and *then* wake up the associated task.
	 */

void intel_engine_remove_wait(struct intel_engine_cs *engine,
			      struct intel_wait *wait)
{
	struct intel_breadcrumbs *b = &engine->breadcrumbs;

	/* Quick check to see if this waiter was already decoupled from
	 * the tree by the bottom-half to avoid contention on the spinlock
	 * by the herd.
	 */
-	if (RB_EMPTY_NODE(&wait->node))
+	if (RB_EMPTY_NODE(&wait->node)) {
+		GEM_BUG_ON(READ_ONCE(b->irq_wait) == wait);
		return;
+	}

	spin_lock_irq(&b->rb_lock);
	__intel_engine_remove_wait(engine, wait);
	spin_unlock_irq(&b->rb_lock);
}
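
Note that GEM_BUG_ON() compiles away on non-debug builds, so the new
assertions only fire with CONFIG_DRM_I915_DEBUG_GEM enabled; roughly
(per i915_gem.h):

	#ifdef CONFIG_DRM_I915_DEBUG_GEM
	#define GEM_BUG_ON(expr) BUG_ON(expr)
	#else
	#define GEM_BUG_ON(expr) /* compiled out */
	#endif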