mm: numa: guarantee that tlb_flush_pending updates are visible before page table updates
author	Mel Gorman <mgorman@suse.de>	Thu, 19 Dec 2013 01:08:45 +0000 (17:08 -0800)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	Thu, 9 Jan 2014 20:24:23 +0000 (12:24 -0800)
commit af2c1401e6f9177483be4fad876d0073669df9df upstream.

According to the documentation on memory barriers, stores issued
before a LOCK operation can complete after that lock is taken,
implying that it is possible for tlb_flush_pending to become visible
only after a page table update.  As per the revised documentation,
this patch adds an smp_mb__before_spinlock() to guarantee the correct
ordering.
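
To make the hazard concrete, here is a hedged userspace C11 analogue
(flush_pending, pte, and both thread bodies are inventions for this
sketch, not kernel code): taking a lock is an ACQUIRE operation,
which does not prevent an earlier store from completing after it, so
a lock-free observer can see the "page table update" before the flag.

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_bool flush_pending;  /* plays mm->tlb_flush_pending */
	static atomic_int pte;             /* plays a page table entry */
	static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;

	static void *writer(void *unused)
	{
		atomic_store_explicit(&flush_pending, true,
				      memory_order_relaxed);

		/*
		 * Stand-in for smp_mb__before_spinlock(): a full fence
		 * keeps the store above from becoming visible after the
		 * pte store below, despite the ACQUIRE in between.
		 */
		atomic_thread_fence(memory_order_seq_cst);

		pthread_mutex_lock(&ptl);
		atomic_store_explicit(&pte, 1, memory_order_relaxed);
		pthread_mutex_unlock(&ptl);
		return NULL;
	}

	static void *reader(void *unused)
	{
		/* Lock-free observer, like a CPU deciding whether a
		 * TLB flush is still pending for this mm. */
		while (!atomic_load_explicit(&pte, memory_order_relaxed))
			;
		atomic_thread_fence(memory_order_seq_cst);
		if (!atomic_load_explicit(&flush_pending,
					  memory_order_relaxed))
			printf("pte update seen before flush_pending\n");
		return NULL;
	}

	int main(void)
	{
		pthread_t w, r;

		pthread_create(&r, NULL, reader, NULL);
		pthread_create(&w, NULL, writer, NULL);
		pthread_join(w, NULL);
		pthread_join(r, NULL);
		return 0;
	}

With the fence in place the message can never print; remove it and
the reordering becomes possible on weakly ordered hardware, which is
the window this patch closes.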

Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
include/linux/mm_types.h

index 49f0ada525a81c12f7cbae0102b7d795d039e9df..10a9a17342fcd83be68560170b1528262880a8fc 100644 (file)
@@ -480,7 +480,12 @@ static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
 static inline void set_tlb_flush_pending(struct mm_struct *mm)
 {
        mm->tlb_flush_pending = true;
-       barrier();
+
+       /*
+        * Guarantee that the tlb_flush_pending store does not leak into the
+        * critical section updating the page tables
+        */
+       smp_mb__before_spinlock();
 }
 /* Clearing is done after a TLB flush, which also provides a barrier. */
 static inline void clear_tlb_flush_pending(struct mm_struct *mm)
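
For context, a hedged sketch of the reader side this barrier pairs
with (abridged from do_huge_pmd_numa_page() in the same series; not
part of this patch): the check runs with the page table lock held, so
the LOCK taken there is what the new smp_mb__before_spinlock() orders
the tlb_flush_pending store against.

	/*
	 * Abridged reader side, for illustration only: if a flush is
	 * still pending for this mm, flush before relying on the PTE.
	 */
	if (mm_tlb_flush_pending(mm))
		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);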