rcu: Prevent rcu_barrier() from starting needless grace periods
author	Paul E. McKenney <paulmck@linux.vnet.ibm.com>
	Mon, 10 Apr 2017 22:40:35 +0000 (15:40 -0700)
committer	Paul E. McKenney <paulmck@linux.vnet.ibm.com>
	Thu, 8 Jun 2017 15:25:22 +0000 (08:25 -0700)
Currently rcu_barrier() uses call_rcu() to enqueue new callbacks
on each CPU with a non-empty callback list.  This works, but means
that rcu_barrier() forces grace periods that are not otherwise needed.
The key point is that rcu_barrier() never needs to wait for a grace
period, but instead only for all pre-existing callbacks to be invoked.
This means that rcu_barrier()'s new callbacks should be placed in
the callback-list segment containing the last pre-existing callback.

This commit makes this change using the new rcu_segcblist_entrain()
function.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
include/trace/events/rcu.h
kernel/rcu/tree.c

index e3facb356838c912e86e1e4c7a9f25c2f10555c2..91dc089d65b7e3abfd9d22bf4ac5e4bc593237cc 100644 (file)
@@ -742,6 +742,7 @@ TRACE_EVENT(rcu_torture_read,
  *     "OnlineQ": _rcu_barrier() found online CPU with callbacks.
  *     "OnlineNQ": _rcu_barrier() found online CPU, no callbacks.
  *     "IRQ": An rcu_barrier_callback() callback posted on remote CPU.
+ *     "IRQNQ": An rcu_barrier_callback() callback found no callbacks.
  *     "CB": An rcu_barrier_callback() invoked a callback, not the last.
  *     "LastCB": An rcu_barrier_callback() invoked the last callback.
  *     "Inc2": _rcu_barrier() piggyback check counter incremented.
index e354e475e645ff4f0fab186913a274e931da0db3..657056c3e0cdf599b432ff0ba031ccbe48e3ab2f 100644 (file)
@@ -3578,8 +3578,14 @@ static void rcu_barrier_func(void *type)
        struct rcu_data *rdp = raw_cpu_ptr(rsp->rda);
 
        _rcu_barrier_trace(rsp, "IRQ", -1, rsp->barrier_sequence);
-       atomic_inc(&rsp->barrier_cpu_count);
-       rsp->call(&rdp->barrier_head, rcu_barrier_callback);
+       rdp->barrier_head.func = rcu_barrier_callback;
+       debug_rcu_head_queue(&rdp->barrier_head);
+       if (rcu_segcblist_entrain(&rdp->cblist, &rdp->barrier_head, 0)) {
+               atomic_inc(&rsp->barrier_cpu_count);
+       } else {
+               debug_rcu_head_unqueue(&rdp->barrier_head);
+               _rcu_barrier_trace(rsp, "IRQNQ", -1, rsp->barrier_sequence);
+       }
 }
 
 /*