mm, swap: avoid lock swap_avail_lock when held cluster lock
author	Huang Ying <ying.huang@intel.com>	Wed, 3 May 2017 21:54:39 +0000 (14:54 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>	Wed, 3 May 2017 22:52:10 +0000 (15:52 -0700)
The cluster lock is used to protect swap_cluster_info and the corresponding
elements in swap_info_struct->swap_map[].  But it turns out that in
scan_swap_map_slots(), swap_avail_lock may be acquired while the cluster
lock is held.  This does no good except making the locking more complex and
increasing the potential lock contention, because swap_info_struct->lock
already protects the data structures operated on in that code.  Fix this by
moving the corresponding operations in scan_swap_map_slots() out of the
cluster lock.

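As a minimal userspace sketch of the reordering idea (POSIX mutexes, not
kernel code; the names cluster_lock, avail_lock, slot_map and
mark_slot_used() below are illustrative stand-ins, not kernel identifiers):
work that only needs the fine-grained lock is finished and that lock is
dropped before the global lock is taken, so the global lock is never nested
inside the fine-grained one.

/* Illustrative sketch only; assumed names, not the kernel's. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t cluster_lock = PTHREAD_MUTEX_INITIALIZER; /* fine-grained */
static pthread_mutex_t avail_lock   = PTHREAD_MUTEX_INITIALIZER; /* global */

static unsigned char slot_map[256];	/* analogous to swap_map[] */
static int free_slots = 256;		/* analogous to the avail accounting */

static void mark_slot_used(int offset)
{
	/* Updates that only the fine-grained lock protects. */
	pthread_mutex_lock(&cluster_lock);
	slot_map[offset] = 1;
	pthread_mutex_unlock(&cluster_lock);

	/*
	 * Global bookkeeping is done only after the fine-grained lock has
	 * been released, so avail_lock is never acquired while holding
	 * cluster_lock.
	 */
	pthread_mutex_lock(&avail_lock);
	if (--free_slots == 0)
		printf("no free slots left\n");
	pthread_mutex_unlock(&avail_lock);
}

int main(void)
{
	mark_slot_used(42);
	printf("slot 42 used, %d slots free\n", free_slots);
	return 0;
}
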
Link: http://lkml.kernel.org/r/20170317064635.12792-3-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Acked-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/swapfile.c

index 42fd620dcf4cf3fb00a68c08f5389a1067213f7c..53b5881ee0d69ecd7cdc275e61c56b7c5f38f908 100644
@@ -672,6 +672,9 @@ checks:
                else
                        goto done;
        }
+       si->swap_map[offset] = usage;
+       inc_cluster_info_page(si, si->cluster_info, offset);
+       unlock_cluster(ci);
 
        if (offset == si->lowest_bit)
                si->lowest_bit++;
@@ -685,9 +688,6 @@ checks:
                plist_del(&si->avail_list, &swap_avail_head);
                spin_unlock(&swap_avail_lock);
        }
-       si->swap_map[offset] = usage;
-       inc_cluster_info_page(si, si->cluster_info, offset);
-       unlock_cluster(ci);
        si->cluster_next = offset + 1;
        slots[n_ret++] = swp_entry(si->type, offset);