mm/memcg: avoid page count check for zone device
author	Jérôme Glisse <jglisse@redhat.com>	Tue, 3 Oct 2017 23:14:57 +0000 (16:14 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>	Wed, 4 Oct 2017 00:54:24 +0000 (17:54 -0700)
Fix for 4.14: zone device pages always have an elevated refcount of one,
and thus the page count sanity check in uncharge_page() is inappropriate
for them.

[mhocko@suse.com: nano-optimize VM_BUG_ON in uncharge_page]
Link: http://lkml.kernel.org/r/20170914190011.5217-1-jglisse@redhat.com
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: Evgeny Baskakov <ebaskakov@nvidia.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/memcontrol.c

index 696c6529e9000cf2572c2571ba418936f8d295f9..d5f3a62887cf958f6b657c0f542f0cf2c3e86e8d 100644
@@ -5658,7 +5658,8 @@ static void uncharge_batch(const struct uncharge_gather *ug)
 static void uncharge_page(struct page *page, struct uncharge_gather *ug)
 {
        VM_BUG_ON_PAGE(PageLRU(page), page);
-       VM_BUG_ON_PAGE(!PageHWPoison(page) && page_count(page), page);
+       VM_BUG_ON_PAGE(page_count(page) && !is_zone_device_page(page) &&
+                       !PageHWPoison(page), page);
 
        if (!page->mem_cgroup)
                return;
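
As an aside on the reasoning above: a minimal sketch that restates the relaxed check as a standalone predicate, assuming only helpers already used in mm/memcontrol.c (page_count(), is_zone_device_page(), PageHWPoison()); the helper name uncharge_page_count_is_sane() is hypothetical and not part of this commit:

static inline bool uncharge_page_count_is_sane(struct page *page)
{
	/* A page with no remaining references is always fine to uncharge. */
	if (!page_count(page))
		return true;
	/* ZONE_DEVICE pages keep an elevated refcount of one for their whole lifetime. */
	if (is_zone_device_page(page))
		return true;
	/* Hwpoisoned pages may also still hold a reference when uncharged. */
	return PageHWPoison(page);
}

Written this way, the assertion in the hunk is equivalent to VM_BUG_ON_PAGE(!uncharge_page_count_is_sane(page), page): the debug check only fires for a page that still holds references and is neither a ZONE_DEVICE page nor hwpoisoned.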