[Backport of upstream commit
f6ff4f67cdf8455d0a4226eeeaf5af17c37d05eb,
with an additional NULL pointer guard that is required for kernels 3.17
and older; to be precise, for any kernel that does *not* contain commit
954605ca3 ("drm/radeon: use common fence implementation for fences,
v4").]
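
[For illustration of why older kernels need the guard (paraphrased
from memory of the respective trees, not the exact sources): before
954605ca3, radeon_fence_ref() took the kref unconditionally, so a NULL
entry in fences[] would oops:

	struct radeon_fence *radeon_fence_ref(struct radeon_fence *fence)
	{
		kref_get(&fence->kref);	/* dereferences fence; NULL would oops */
		return fence;
	}

With 954605ca3 the reference is instead taken through the common
fence_get() helper, which checks for NULL first, so upstream can call
radeon_fence_ref(fences[i]) without guarding each entry.]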
An arbitrary amount of time can pass between spin_unlock and
radeon_fence_wait_any, so we need to ensure that nobody frees the
fences from under us.
Based on the analogous fix for amdgpu.
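
As a rough illustration of the lifetime problem, here is a
self-contained userspace sketch with toy fence and lock types (all
names hypothetical, not the driver's code): pin each fence with a
reference while still holding the lock, wait outside the lock, then
drop the reference. The hunk below does the same thing with
radeon_fence_ref()/radeon_fence_unref() around the wq.lock drop.

	/* sketch.c - toy model of "take a ref before dropping the lock".
	 * Build with: cc -pthread sketch.c
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct toy_fence {
		atomic_int refcount;
		atomic_int signaled;
	};

	static struct toy_fence *toy_fence_get(struct toy_fence *f)
	{
		if (f)
			atomic_fetch_add(&f->refcount, 1);
		return f;
	}

	static void toy_fence_put(struct toy_fence *f)
	{
		/* free on the last reference, like kref/fence_put */
		if (f && atomic_fetch_sub(&f->refcount, 1) == 1)
			free(f);
	}

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static struct toy_fence *pending;	/* protected by 'lock' */

	static void *signaler(void *arg)
	{
		(void)arg;
		pthread_mutex_lock(&lock);
		if (pending) {
			atomic_store(&pending->signaled, 1);
			toy_fence_put(pending);	/* drop the list's reference */
			pending = NULL;
		}
		pthread_mutex_unlock(&lock);
		return NULL;
	}

	int main(void)
	{
		struct toy_fence *f = calloc(1, sizeof(*f));
		struct toy_fence *wait_on;
		pthread_t t;

		atomic_store(&f->refcount, 1);	/* reference owned by 'pending' */
		pending = f;

		pthread_mutex_lock(&lock);
		wait_on = toy_fence_get(pending);	/* pin before unlocking */
		pthread_mutex_unlock(&lock);

		pthread_create(&t, NULL, signaler, NULL);

		/* An arbitrary amount of time can pass here; 'wait_on' stays
		 * valid only because we hold our own reference. */
		while (wait_on && !atomic_load(&wait_on->signaled))
			;	/* busy-wait, for brevity only */

		toy_fence_put(wait_on);
		pthread_join(t, NULL);
		printf("waited without use-after-free\n");
		return 0;
	}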
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com> (v1 + fix)
Tested-by: Lutz Euler <lutz.euler@freenet.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
 			/* see if we can skip over some allocations */
 		} while (radeon_sa_bo_next_hole(sa_manager, fences, tries));
 
+		for (i = 0; i < RADEON_NUM_RINGS; ++i) {
+			if (fences[i])
+				radeon_fence_ref(fences[i]);
+		}
+
 		spin_unlock(&sa_manager->wq.lock);
 		r = radeon_fence_wait_any(rdev, fences, false);
+		for (i = 0; i < RADEON_NUM_RINGS; ++i)
+			radeon_fence_unref(&fences[i]);
 		spin_lock(&sa_manager->wq.lock);
 		/* if we have nothing to wait for block */
 		if (r == -ENOENT && block) {