drm/i915/perf: avoid poll, read, EAGAIN busy loops
author    Robert Bragg <robert@sixbynine.org>
          Thu, 11 May 2017 15:43:25 +0000 (16:43 +0100)
committer Chris Wilson <chris@chris-wilson.co.uk>
          Sat, 13 May 2017 09:59:07 +0000 (10:59 +0100)
If the function that checks whether OA buffer data is available (during
a poll or blocking read) can report false positives, we want to avoid a
situation where a subsequent read() returns EAGAIN (after a more
accurate check), only for poll() to immediately report the same
false-positive POLLIN event again, effectively creating a busy loop
until there really is data.

This makes sure that we clear the .pollin event status whenever we
return EAGAIN to userspace, which throttles subsequent POLLIN events,
and therefore repeated read attempts, to the 5ms interval of our
hrtimer callback.

Signed-off-by: Robert Bragg <robert@sixbynine.org>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170511154345.962-3-lionel.g.landwerlin@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
drivers/gpu/drm/i915/i915_perf.c

index 6227e487eecdc95939ba7b36a9af0270b4caa13a..e1158893c6cbfdf13a7a7fad52e3b4a0330ebd63 100644 (file)
@@ -1351,7 +1351,15 @@ static ssize_t i915_perf_read(struct file *file,
                mutex_unlock(&dev_priv->perf.lock);
        }
 
-       if (ret >= 0) {
+       /* We allow the poll checking to sometimes report false positive POLLIN
+        * events where we might actually report EAGAIN on read() if there's
+        * not really any data available. In this situation though we don't
+        * want to enter a busy loop between poll() reporting a POLLIN event
+        * and read() returning -EAGAIN. Clearing the oa.pollin state here
+        * effectively ensures we back off until the next hrtimer callback
+        * before reporting another POLLIN event.
+        */
+       if (ret >= 0 || ret == -EAGAIN) {
                /* Maybe make ->pollin per-stream state if we support multiple
                 * concurrent streams in the future.
                 */