drm/i915/execlists: Reduce lock contention between schedule/submit_request
author		Chris Wilson <chris@chris-wilson.co.uk>
		Wed, 17 May 2017 12:10:05 +0000 (13:10 +0100)
committer	Chris Wilson <chris@chris-wilson.co.uk>
		Wed, 17 May 2017 12:38:13 +0000 (13:38 +0100)
If we do not need to perform priority bumping, and we haven't yet
submitted the request, we can update its priority in situ and skip
acquiring the engine locks -- thus avoiding any contention between us
and submit/execute.

v2: Remove the stack element from the list if we can do the early
assignment.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170517121007.27224-10-chris@chris-wilson.co.uk
drivers/gpu/drm/i915/intel_lrc.c

index 8529746dd7cc9a8c04cacf5100e3c2f109a1d1b5..014b30ace8a0af394960d4a8ed259dc712758b52 100644 (file)
@@ -779,6 +779,19 @@ static void execlists_schedule(struct drm_i915_gem_request *request, int prio)
                list_safe_reset_next(dep, p, dfs_link);
        }
 
+       /* If we didn't need to bump any existing priorities, and we haven't
+        * yet submitted this request (i.e. there is no potential race with
+        * execlists_submit_request()), we can set our own priority and skip
+        * acquiring the engine locks.
+        */
+       if (request->priotree.priority == INT_MIN) {
+               GEM_BUG_ON(!list_empty(&request->priotree.link));
+               request->priotree.priority = prio;
+               if (stack.dfs_link.next == stack.dfs_link.prev)
+                       return;
+               __list_del_entry(&stack.dfs_link);
+       }
+
        engine = request->engine;
        spin_lock_irq(&engine->timeline->lock);
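
The early return added above rests on two cheap tests: priotree.priority
is still INT_MIN (per the comment in the hunk, the request's priority was
never set, so nothing needed bumping and it has not raced with
execlists_submit_request()), and stack.dfs_link.next == stack.dfs_link.prev,
which on a circular doubly linked list holds exactly when the on-stack node
is the sole element of the DFS list. The following is a minimal standalone
sketch of that list property only; the struct and helpers are simplified
stand-ins for <linux/list.h>, not the i915 code.

	#include <assert.h>

	struct list_head { struct list_head *next, *prev; };

	static void list_init(struct list_head *h) { h->next = h->prev = h; }

	/* Append n before the head h, as list_add_tail() does. */
	static void list_add_tail(struct list_head *n, struct list_head *h)
	{
		n->prev = h->prev;
		n->next = h;
		h->prev->next = n;
		h->prev = n;
	}

	/* Unlink n without reinitialising it, as __list_del_entry() does. */
	static void list_del_entry(struct list_head *n)
	{
		n->prev->next = n->next;
		n->next->prev = n->prev;
	}

	int main(void)
	{
		struct list_head dfs, stack;

		list_init(&dfs);
		list_add_tail(&stack, &dfs);	/* stack is the only queued element */

		/*
		 * With a single element, its next and prev both point back at
		 * the list head, so this pointer comparison is enough to tell
		 * that no other priotree nodes are waiting to be bumped.
		 */
		assert(stack.next == stack.prev);

		list_del_entry(&stack);	/* v2: unlink the stack node before the early return */
		return 0;
	}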