tmpfs: fix race between truncate and writepage
author    Hugh Dickins <hughd@google.com>
          Sat, 28 May 2011 20:14:09 +0000 (13:14 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
          Sat, 28 May 2011 23:09:26 +0000 (16:09 -0700)
While running fsx on tmpfs with a memhog then swapoff, swapoff was hanging
(interruptibly), repeatedly failing to locate the owner of a 0xff entry in
the swap_map.

Although shmem_writepage() does bail out when it sees that the incoming
page index is beyond eof, there was still a window in which
shmem_truncate_range() could come in between writepage's dropping of the
lock and its updating of the swap_map, find the half-completed swap_map
entry, and, in trying to free it, leave it in a state that
swap_shmem_alloc() could not correct.

This is arguably a bug in __swap_duplicate()'s and swap_entry_free()'s
handling of the different cases, but it is easiest to fix by moving
swap_shmem_alloc() under cover of the lock.
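
For clarity, here is a simplified sketch of the tail of shmem_writepage()'s
swap path with the fixed ordering, using only names visible in the hunk
below; it illustrates the ordering and is not a compilable excerpt:

	delete_from_page_cache(page);
	shmem_swp_set(info, entry, swap.val);	/* record swap entry in shmem's own index */
	shmem_swp_unmap(entry);
	swap_shmem_alloc(swap);			/* account the swap_map entry to shmem...  */
	spin_unlock(&info->lock);		/* ...before dropping info->lock (the fix) */
	BUG_ON(page_mapped(page));
	swap_writepage(page, wbc);
	return 0;

With the old ordering, shmem_truncate_range() could take info->lock between
the unlock and swap_shmem_alloc(); doing the accounting before dropping the
lock closes that window.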

More interesting than the bug itself: it has been there since 2.6.33, so
why could I not see it with earlier kernels?  The mmotm of two weeks ago
seems to have some magic for generating races; this is just one of three
I found.

With yesterday's git I first saw this in mainline and bisected in search
of that magic, but the easy reproducibility evaporated.  Oh well, fix the
bug.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
diff --git a/mm/shmem.c b/mm/shmem.c
index 1acfb2687bfa144da8fa9b6b2f6c815663d44145..d221a1cfd7b196175ab9134aac147b22400c13e6 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1114,8 +1114,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
                delete_from_page_cache(page);
                shmem_swp_set(info, entry, swap.val);
                shmem_swp_unmap(entry);
-               spin_unlock(&info->lock);
                swap_shmem_alloc(swap);
+               spin_unlock(&info->lock);
                BUG_ON(page_mapped(page));
                swap_writepage(page, wbc);
                return 0;