From: Hugh Dickins
Date: Tue, 29 May 2012 22:06:41 +0000 (-0700)
Subject: tmpfs: support fallocate preallocation
X-Git-Url: https://git.stricted.de/?a=commitdiff_plain;h=e2d12e22c59ce714008aa5266d769f8568d74eac;p=GitHub%2Fmoto-9609%2Fandroid_kernel_motorola_exynos9610.git

tmpfs: support fallocate preallocation

The systemd plumbers expressed a wish that tmpfs support preallocation.
Cong Wang wrote a patch, but several kernel guys expressed scepticism:
https://lkml.org/lkml/2011/11/18/137

Christoph Hellwig: What for exactly? Please explain why preallocating on
tmpfs would make any sense.

Kay Sievers: To be able to safely use mmap(), regarding SIGBUS, on files
on the /dev/shm filesystem. The glibc fallback loop for -ENOSYS [or
-EOPNOTSUPP] on fallocate is just ugly.

Hugh Dickins: If tmpfs is going to support
fallocate(FALLOC_FL_PUNCH_HOLE), it would seem perverse to permit the
deallocation but fail the allocation. Christoph Hellwig: Agreed.

Now that we do have shmem_fallocate() for hole-punching, plumb in basic
support for preallocation mode too. It's fairly straightforward (though
quite a few details needed attention), except for when it fails part way
through. What a pity that fallocate(2) was not specified to return the
length allocated, permitting short fallocations!

As it is, when it fails part way through, we ought to free what has just
been allocated by this system call; but must be very sure not to free any
allocated earlier, or any allocated by racing accesses (not all excluded
by i_mutex). But we cannot distinguish them: so in this patch simply leak
allocations on partial failure (they will be freed later if the file is
removed).

An attractive alternative approach would have been for fallocate() not to
allocate pages at all, but note reservations by entries in the radix-tree.
But that would give less assurance, and, critically, would be hard to fit
with mem cgroups (who owns the reservations?): allocating pages lets
fallocate() behave in just the same way as write().

Based-on-patch-by: Cong Wang
Signed-off-by: Hugh Dickins
Cc: Christoph Hellwig
Cc: Cong Wang
Cc: Kay Sievers
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---

diff --git a/mm/shmem.c b/mm/shmem.c
index f368d0acb52c..9b90d89e54ce 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1602,7 +1602,9 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 							 loff_t len)
 {
 	struct inode *inode = file->f_path.dentry->d_inode;
-	int error = -EOPNOTSUPP;
+	struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
+	pgoff_t start, index, end;
+	int error;
 
 	mutex_lock(&inode->i_mutex);
 
@@ -1617,8 +1619,65 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		shmem_truncate_range(inode, offset, offset + len - 1);
 		/* No need to unmap again: hole-punching leaves COWed pages */
 		error = 0;
+		goto out;
 	}
 
+	/* We need to check rlimit even when FALLOC_FL_KEEP_SIZE */
+	error = inode_newsize_ok(inode, offset + len);
+	if (error)
+		goto out;
+
+	start = offset >> PAGE_CACHE_SHIFT;
+	end = (offset + len + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+	/* Try to avoid a swapstorm if len is impossible to satisfy */
+	if (sbinfo->max_blocks && end - start > sbinfo->max_blocks) {
+		error = -ENOSPC;
+		goto out;
+	}
+
+	for (index = start; index < end; index++) {
+		struct page *page;
+
+		/*
+		 * Good, the fallocate(2) manpage permits EINTR: we may have
+		 * been interrupted because we are using up too much memory.
+		 */
+		if (signal_pending(current))
+			error = -EINTR;
+		else
+			error = shmem_getpage(inode, index, &page, SGP_WRITE,
+									NULL);
+		if (error) {
+			/*
+			 * We really ought to free what we allocated so far,
+			 * but it would be wrong to free pages allocated
+			 * earlier, or already now in use: i_mutex does not
+			 * exclude all cases. We do not know what to free.
+			 */
+			goto ctime;
+		}
+
+		if (!PageUptodate(page)) {
+			clear_highpage(page);
+			flush_dcache_page(page);
+			SetPageUptodate(page);
+		}
+		/*
+		 * set_page_dirty so that memory pressure will swap rather
+		 * than free the pages we are allocating (and SGP_CACHE pages
+		 * might still be clean: we now need to mark those dirty too).
+		 */
+		set_page_dirty(page);
+		unlock_page(page);
+		page_cache_release(page);
+		cond_resched();
+	}
+
+	if (!(mode & FALLOC_FL_KEEP_SIZE) && offset + len > inode->i_size)
+		i_size_write(inode, offset + len);
+ctime:
+	inode->i_ctime = CURRENT_TIME;
+out:
 	mutex_unlock(&inode->i_mutex);
 	return error;
 }
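
For illustration only (not part of the patch): a minimal userspace sketch of
the use case Kay describes, preallocating a tmpfs file with fallocate(2) and
then mmap()ing it, so that later faults on the mapping cannot run into the
mount's size limit and raise SIGBUS. The path /dev/shm/prealloc-demo and the
16MB length are arbitrary choices for the example.

#define _GNU_SOURCE		/* for fallocate() */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	const size_t len = 16 << 20;	/* arbitrary 16MB example */
	char *p;
	int fd;

	fd = open("/dev/shm/prealloc-demo", O_RDWR | O_CREAT, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/*
	 * Before this patch, mode-0 fallocate(2) on tmpfs failed with
	 * EOPNOTSUPP, which is what sent glibc's posix_fallocate() into
	 * the fallback loop mentioned above.  With the patch, the pages
	 * are allocated (and dirtied) up front, or -ENOSPC is returned
	 * if the request cannot fit within the mount's size limit.
	 */
	if (fallocate(fd, 0, 0, len) != 0) {
		perror("fallocate");
		return 1;
	}
	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 0xff, len);	/* no SIGBUS: backing pages already exist */
	munmap(p, len);
	close(fd);
	return 0;
}

Note also that a request obviously larger than the tmpfs mount (the
sbinfo->max_blocks check in the patch) now fails fast with ENOSPC instead of
provoking a swapstorm while partway allocating.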