shmem: shmem_charge: verify max_block is not exceeded before inode update
author	Mike Rapoport <rppt@linux.vnet.ibm.com>
	Wed, 6 Sep 2017 23:22:56 +0000 (16:22 -0700)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
	Wed, 5 Dec 2018 18:42:37 +0000 (19:42 +0100)
commit b1cc94ab2f2ba31fcb2c59df0b9cf03f6d720553 upstream.

Patch series "userfaultfd: enable zeropage support for shmem".

These patches enable support for UFFDIO_ZEROPAGE for shared memory.

The first two patches are not strictly related to userfaultfd; they are
just minor refactoring to reduce the amount of code duplication.

This patch (of 7):

Currently we update the inode and shmem_inode_info before verifying that
used_blocks will not exceed max_blocks.  If it would, we undo the
update.  Let's switch the order and move the verification of the blocks
count before the inode and shmem_inode_info update; a consolidated
sketch of the resulting function follows the diff below.

Link: http://lkml.kernel.org/r/1497939652-16528-2-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Pavel Emelyanov <xemul@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
mm/shmem.c

index 358a92be43eb2f2585c1c5f5ca081bb44d28ef31..b26f11221ea8f35dabb7d2fdaae6619a39a54c2a 100644
@@ -254,6 +254,14 @@ bool shmem_charge(struct inode *inode, long pages)
 
        if (shmem_acct_block(info->flags, pages))
                return false;
+
+       if (sbinfo->max_blocks) {
+               if (percpu_counter_compare(&sbinfo->used_blocks,
+                                          sbinfo->max_blocks - pages) > 0)
+                       goto unacct;
+               percpu_counter_add(&sbinfo->used_blocks, pages);
+       }
+
        spin_lock_irqsave(&info->lock, flags);
        info->alloced += pages;
        inode->i_blocks += pages * BLOCKS_PER_PAGE;
@@ -261,20 +269,11 @@ bool shmem_charge(struct inode *inode, long pages)
        spin_unlock_irqrestore(&info->lock, flags);
        inode->i_mapping->nrpages += pages;
 
-       if (!sbinfo->max_blocks)
-               return true;
-       if (percpu_counter_compare(&sbinfo->used_blocks,
-                               sbinfo->max_blocks - pages) > 0) {
-               inode->i_mapping->nrpages -= pages;
-               spin_lock_irqsave(&info->lock, flags);
-               info->alloced -= pages;
-               shmem_recalc_inode(inode);
-               spin_unlock_irqrestore(&info->lock, flags);
-               shmem_unacct_blocks(info->flags, pages);
-               return false;
-       }
-       percpu_counter_add(&sbinfo->used_blocks, pages);
        return true;
+
+unacct:
+       shmem_unacct_blocks(info->flags, pages);
+       return false;
 }
 
 void shmem_uncharge(struct inode *inode, long pages)
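
For readability, here is a sketch of shmem_charge() with the hunks above
applied.  The local declarations at the top of the function sit outside
the diff context shown here, so they are reconstructed from the
surrounding kernel source and should be treated as assumptions rather
than as part of the patch:

bool shmem_charge(struct inode *inode, long pages)
{
	struct shmem_inode_info *info = SHMEM_I(inode);	/* assumed */
	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);	/* assumed */
	unsigned long flags;					/* assumed */

	/* Charge against the memory accounting limits first. */
	if (shmem_acct_block(info->flags, pages))
		return false;

	/*
	 * New with this patch: check the filesystem block limit before
	 * touching the inode.  used_blocks + pages > max_blocks means the
	 * charge would overshoot, so bail out with only the
	 * shmem_acct_block() charge left to undo.
	 */
	if (sbinfo->max_blocks) {
		if (percpu_counter_compare(&sbinfo->used_blocks,
					   sbinfo->max_blocks - pages) > 0)
			goto unacct;
		percpu_counter_add(&sbinfo->used_blocks, pages);
	}

	/* Only now update the inode and shmem_inode_info. */
	spin_lock_irqsave(&info->lock, flags);
	info->alloced += pages;
	inode->i_blocks += pages * BLOCKS_PER_PAGE;
	shmem_recalc_inode(inode);
	spin_unlock_irqrestore(&info->lock, flags);
	inode->i_mapping->nrpages += pages;

	return true;

unacct:
	shmem_unacct_blocks(info->flags, pages);
	return false;
}

With this ordering a failed limit check only has to undo the
shmem_acct_block() charge, instead of rolling back info->alloced,
i_blocks and nrpages under info->lock as the old error path did.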