drm/i915/shrinker: Only shmemfs objects are backed by swap
author Chris Wilson <chris@chris-wilson.co.uk>
Wed, 20 Apr 2016 11:09:52 +0000 (12:09 +0100)
committer Chris Wilson <chris@chris-wilson.co.uk>
Wed, 20 Apr 2016 12:49:44 +0000 (13:49 +0100)
Since we can only swap out shmemfs objects, those are the only ones that
can influence the ability of the shrinker to free pages. Currently, all
non-shmemfs objects have a raised pages_pin_count to protect them from
the shrinker, so this just makes the logic for can_release_pages()
clearer (and safer in future so that we don't overestimate our ability
to free up pages from future non-swappable objects).

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461150592-27818-3-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
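
For illustration only, the decision the patch centralises can be modelled
in a self-contained userspace sketch. The struct and helpers below
(gem_object, swap_available(), the pin-count field) are simplified
stand-ins mirroring the names used in the diff, not the kernel
implementation itself:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative stand-in for the real driver object; the field names
     * mirror the ones referenced in the patch (obj->base.filp,
     * pages_pin_count).
     */
    struct gem_object {
            void *filp;               /* non-NULL only for shmemfs-backed objects */
            unsigned pages_pin_count; /* pins held on the backing pages */
            size_t size;              /* object size in bytes */
    };

    static bool swap_available(void)
    {
            return true; /* assume swap is available for the example */
    }

    /* Mirrors the post-patch logic: an object can only release pages if
     * it is shmemfs-backed (hence swappable) and its pages are unpinned.
     */
    static bool can_release_pages(const struct gem_object *obj)
    {
            /* Only shmemfs objects are backed by swap */
            if (!obj->filp)
                    return false;

            if (obj->pages_pin_count)
                    return false;

            return swap_available();
    }

    int main(void)
    {
            struct gem_object shmem = { .filp = (void *)1, .pages_pin_count = 0, .size = 4096 };
            struct gem_object phys  = { .filp = NULL,      .pages_pin_count = 1, .size = 4096 };

            printf("shmem releasable: %d\n", can_release_pages(&shmem)); /* 1 */
            printf("phys  releasable: %d\n", can_release_pages(&phys));  /* 0 */
            return 0;
    }

With the !filp check folded into can_release_pages(), the OOM-notifier
loops in the second hunk no longer need their own filp test, which is
exactly what the patch removes.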
drivers/gpu/drm/i915/i915_gem_shrinker.c

index 95cffe9b305d36dadcea4f8748c3c5c8dcba85d4..425e721aac58e5cd0d7dd3d80f984f118fcfc960 100644 (file)
@@ -70,6 +70,10 @@ static bool swap_available(void)
 
 static bool can_release_pages(struct drm_i915_gem_object *obj)
 {
+       /* Only shmemfs objects are backed by swap */
+       if (!obj->base.filp)
+               return false;
+
        /* Only report true if by unbinding the object and putting its pages
         * we can actually make forward progress towards freeing physical
         * pages.
@@ -349,18 +353,12 @@ i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
         */
        unbound = bound = unevictable = 0;
        list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list) {
-               if (!obj->base.filp) /* not backed by a freeable object */
-                       continue;
-
                if (!can_release_pages(obj))
                        unevictable += obj->base.size >> PAGE_SHIFT;
                else
                        unbound += obj->base.size >> PAGE_SHIFT;
        }
        list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
-               if (!obj->base.filp)
-                       continue;
-
                if (!can_release_pages(obj))
                        unevictable += obj->base.size >> PAGE_SHIFT;
                else