9 years ago  rcu,locking: Privatize smp_mb__after_unlock_lock()
Paul E. McKenney [Wed, 15 Jul 2015 01:35:23 +0000 (18:35 -0700)]
rcu,locking: Privatize smp_mb__after_unlock_lock()

RCU is the only thing that uses smp_mb__after_unlock_lock(), and is
likely the only thing that ever will use it, so this commit makes this
macro private to RCU.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: "linux-arch@vger.kernel.org" <linux-arch@vger.kernel.org>
9 years ago  Merge branches 'doc.2015.07.15a' and 'torture.2015.07.15a' into HEAD
Paul E. McKenney [Tue, 4 Aug 2015 15:42:02 +0000 (08:42 -0700)]
Merge branches 'doc.2015.07.15a' and 'torture.2015.07.15a' into HEAD

doc.2015.07.15a: Documentation updates.
torture.2015.07.15a: Torture-test updates.

9 years ago  Merge branches 'fixes.2015.07.22a' and 'initexp.2015.08.04a' into HEAD
Paul E. McKenney [Tue, 4 Aug 2015 15:40:58 +0000 (08:40 -0700)]
Merge branches 'fixes.2015.07.22a' and 'initexp.2015.08.04a' into HEAD

fixes.2015.07.22a: Miscellaneous fixes.
initexp.2015.08.04a: Initialization and expedited updates.
(Single branch due to conflicts.)

9 years ago  rcu: Silence lockdep false positive for expedited grace periods
Paul E. McKenney [Sun, 19 Jul 2015 22:13:40 +0000 (15:13 -0700)]
rcu: Silence lockdep false positive for expedited grace periods

In a CONFIG_PREEMPT=y kernel, synchronize_rcu_expedited()
acquires the ->exp_funnel_mutex in rcu_preempt_state, then invokes
synchronize_sched_expedited(), which acquires the ->exp_funnel_mutex in
rcu_sched_state.  There can be no deadlock because rcu_preempt_state
->exp_funnel_mutex acquisition always precedes that of rcu_sched_state.
But lockdep does not know that, so it gives false-positive splats.

This commit therefore associates a separate lock_class_key structure
with the rcu_sched_state structure's ->exp_funnel_mutex, allowing
lockdep to see the lock ordering, avoiding the false positives.
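
As a rough sketch (not necessarily the exact code of this change), giving
one of the two mutexes its own lockdep class might look like the following,
where the key name is an assumption:

	static struct lock_class_key rcu_exp_sched_class;	/* assumed name */

	mutex_init(&rsp->exp_funnel_mutex);
	if (rsp == &rcu_sched_state)
		lockdep_set_class(&rsp->exp_funnel_mutex, &rcu_exp_sched_class);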

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Don't disable CPU hotplug during OOM notifiers
Paul E. McKenney [Tue, 14 Jul 2015 23:24:14 +0000 (16:24 -0700)]
rcu: Don't disable CPU hotplug during OOM notifiers

RCU's rcu_oom_notify() disables CPU hotplug in order to stabilize the
list of online CPUs, which it traverses.  However, this is completely
pointless because smp_call_function_single() will quietly fail if invoked
on an offline CPU.  Because the count of requests is incremented in the
rcu_oom_notify_cpu() function that is remotely invoked, everything works
nicely even in the face of concurrent CPU-hotplug operations.

Furthermore, in recent kernels, invoking get_online_cpus() from an OOM
notifier can result in deadlock.  This commit therefore removes the
call to get_online_cpus() and put_online_cpus() from rcu_oom_notify().
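
A minimal sketch of the resulting pattern (simplified from the description
above, not the literal diff):

	int cpu;

	for_each_online_cpu(cpu)
		/* Quietly does nothing if the CPU goes offline meanwhile. */
		smp_call_function_single(cpu, rcu_oom_notify_cpu, NULL, 1);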

Reported-by: Marcin Ślusarz <marcin.slusarz@gmail.com>
Reported-by: David Rientjes <rientjes@google.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: David Rientjes <rientjes@google.com>
Tested-by: Marcin Ślusarz <marcin.slusarz@gmail.com>
9 years ago  scripts: Make checkpatch.pl warn on expedited RCU grace periods
Paul E. McKenney [Thu, 2 Jul 2015 18:55:40 +0000 (11:55 -0700)]
scripts: Make checkpatch.pl warn on expedited RCU grace periods

The synchronize_rcu_expedited() and synchronize_sched_expedited()
expedited-grace-period primitives induce OS jitter, which can degrade
real-time response.  This commit therefore adds a checkpatch.pl warning
on any patch adding them.

Note that this patch does not warn on synchronize_srcu_expedited()
because it does not induce OS jitter, courtesy of its otherwise
much-maligned read-side memory barriers.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Andy Whitcroft <apw@canonical.com>
Cc: Joe Perches <joe@perches.com>
9 years ago  rcu: Update MAINTAINERS entry
Lai Jiangshan [Wed, 1 Jul 2015 07:26:27 +0000 (15:26 +0800)]
rcu: Update MAINTAINERS entry

This commit updates Lai Jiangshan's email address because the old
laijs@cn.fujitsu.com address will expire after July 10, 2015.

Signed-off-by: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Clarify CONFIG_RCU_EQS_DEBUG help text
Paul E. McKenney [Tue, 30 Jun 2015 16:56:31 +0000 (09:56 -0700)]
rcu: Clarify CONFIG_RCU_EQS_DEBUG help text

Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Fix backwards RCU_LOCKDEP_WARN() in synchronize_rcu_tasks()
Paul E. McKenney [Tue, 30 Jun 2015 15:17:40 +0000 (08:17 -0700)]
rcu: Fix backwards RCU_LOCKDEP_WARN() in synchronize_rcu_tasks()

The RCU_LOCKDEP_WARN() in synchronize_rcu_tasks() triggers if the
scheduler is active, which is backwards.  This commit therefore
negates the test.

Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Rename rcu_lockdep_assert() to RCU_LOCKDEP_WARN()
Paul E. McKenney [Thu, 18 Jun 2015 22:50:02 +0000 (15:50 -0700)]
rcu: Rename rcu_lockdep_assert() to RCU_LOCKDEP_WARN()

This commit renames rcu_lockdep_assert() to RCU_LOCKDEP_WARN() for
consistency with the WARN() series of macros.  This also requires
inverting the sense of the conditional, which this commit also does.
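
To make the inverted sense concrete, a hedged before/after sketch (the
condition shown is illustrative):

	/* Old, assert-style: complain unless the condition holds. */
	rcu_lockdep_assert(rcu_read_lock_held(), "need rcu_read_lock()");

	/* New, WARN-style: complain when the bad condition holds. */
	RCU_LOCKDEP_WARN(!rcu_read_lock_held(), "need rcu_read_lock()");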

Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
9 years ago  rcu: Make rcu_is_watching() really notrace
Alexei Starovoitov [Tue, 16 Jun 2015 17:35:18 +0000 (10:35 -0700)]
rcu: Make rcu_is_watching() really notrace

Although rcu_is_watching() is marked notrace, it invokes preempt_disable()
and preempt_enable(), both of which can be traced.  This defeats the
purpose of the notrace on rcu_is_watching(), so this commit substitutes
preempt_disable_notrace() and preempt_enable_notrace().
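
A sketch of the result, close in spirit to (though not necessarily identical
with) the actual change:

notrace bool rcu_is_watching(void)
{
	bool ret;

	preempt_disable_notrace();	/* was: preempt_disable() */
	ret = __rcu_is_watching();	/* per-CPU dynticks check */
	preempt_enable_notrace();	/* was: preempt_enable() */
	return ret;
}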

Signed-off-by: Alexei Starovoitov <ast@plumgrid.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
9 years ago  cpu: Wait for RCU grace periods concurrently
Paul E. McKenney [Wed, 10 Jun 2015 20:34:41 +0000 (13:34 -0700)]
cpu: Wait for RCU grace periods concurrently

In kernels built with CONFIG_PREEMPT, _cpu_down() waits for RCU and
RCU-sched grace periods back-to-back, incurring quite a bit more latency
than required.  This commit therefore uses the new synchronize_rcu_mult()
to allow waiting for both grace periods concurrently.
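
The call in _cpu_down() then reduces to something like this (sketch):

	/* Wait concurrently for RCU and RCU-sched grace periods. */
	synchronize_rcu_mult(call_rcu, call_rcu_sched);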

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Create a synchronize_rcu_mult()
Paul E. McKenney [Wed, 10 Jun 2015 19:53:06 +0000 (12:53 -0700)]
rcu: Create a synchronize_rcu_mult()

There have been several requests for a primitive that waits for
grace periods for several RCU flavors concurrently, so this
commit creates it.  This is a variadic macro, and you pass in
the call_rcu() functions of the flavors of RCU that you wish to
wait for.

Note that you cannot pass in call_srcu() for two reasons: (1) This
would result in a type mismatch and (2) You need to specify which
srcu_struct you want to use.  Handle this by creating a wrapper
function for your SRCU domain, for example:

void call_srcu_mine(struct rcu_head *head, rcu_callback_t func)
{
	call_srcu(&ss_mine, head, func);
}

You can then do something like this:

synchronize_rcu_mult(call_srcu_mine, call_rcu, call_rcu_sched);

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Fix obsolete priority-boosting comment
Paul E. McKenney [Sat, 6 Jun 2015 15:11:43 +0000 (08:11 -0700)]
rcu: Fix obsolete priority-boosting comment

Tasks are no longer migrated to the root rcu_node, so there is no
longer any need for a boost kthread for the root rcu_node, and there no
longer is such a kthread.  This commit therefore fixes the comment in
rcu_boost_kthread()'s header to reflect this new reality.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Use WRITE_ONCE in RCU_INIT_POINTER
Peter Zijlstra [Tue, 2 Jun 2015 15:26:48 +0000 (17:26 +0200)]
rcu: Use WRITE_ONCE in RCU_INIT_POINTER

For the paranoid amongst us: GCC would be within its rights to use byte
stores to write our NULL value, so tell it not to do that.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Hide RCU_NOCB_CPU behind RCU_EXPERT
Paul E. McKenney [Tue, 2 Jun 2015 12:29:18 +0000 (05:29 -0700)]
rcu: Hide RCU_NOCB_CPU behind RCU_EXPERT

This commit prevents Kconfig from asking the user about RCU_NOCB_CPU
unless the user really wants to be asked.

Reported-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
9 years ago  rcu: Add RCU-sched flavors of get-state and cond-sync
Paul E. McKenney [Sat, 30 May 2015 17:11:24 +0000 (10:11 -0700)]
rcu: Add RCU-sched flavors of get-state and cond-sync

The get_state_synchronize_rcu() and cond_synchronize_rcu() functions
allow polling for grace-period completion, with an actual wait for a
grace period occurring only when cond_synchronize_rcu() is called too
soon after the corresponding get_state_synchronize_rcu().  However,
these functions work only for vanilla RCU.  This commit adds the
get_state_synchronize_sched() and cond_synchronize_sched(), which provide
the same capability for RCU-sched.
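
A hedged usage sketch of the new RCU-sched variants (the intervening work
is hypothetical):

	unsigned long gp_snap;

	gp_snap = get_state_synchronize_sched();
	do_something_lengthy();			/* hypothetical work */
	cond_synchronize_sched(gp_snap);	/* waits only if no GP elapsed */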

Reported-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Add fastpath bypassing funnel locking
Paul E. McKenney [Sat, 11 Jul 2015 23:24:45 +0000 (16:24 -0700)]
rcu: Add fastpath bypassing funnel locking

In the common case, there will be only one expedited grace period in
the system at a given time, in which case it is not helpful to use
funnel locking.  This commit therefore adds a fastpath that bypasses
funnel locking when the root ->exp_funnel_mutex is not held.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Rename RCU_GP_DONE_FQS to RCU_GP_DOING_FQS
Paul E. McKenney [Thu, 2 Jul 2015 19:27:31 +0000 (12:27 -0700)]
rcu: Rename RCU_GP_DONE_FQS to RCU_GP_DOING_FQS

The grace-period kthread sleeps waiting to do a force-quiescent-state
scan, and when awakened sets rsp->gp_state to RCU_GP_DONE_FQS.
However, this is confusing because the kthread has not done the
force-quiescent-state, but is instead just starting to do it.  This commit
therefore renames RCU_GP_DONE_FQS to RCU_GP_DOING_FQS in order to make
things a bit easier on reviewers.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Pull out wait_event*() condition into helper function
Paul E. McKenney [Wed, 1 Jul 2015 20:50:28 +0000 (13:50 -0700)]
rcu: Pull out wait_event*() condition into helper function

The condition for the wait_event_interruptible_timeout() that waits
to do the next force-quiescent-state scan is a bit ornate:

((gf = READ_ONCE(rsp->gp_flags)) &
 RCU_GP_FLAG_FQS) ||
(!READ_ONCE(rnp->qsmask) &&
 !rcu_preempt_blocked_readers_cgp(rnp))

This commit therefore pulls this condition out into a helper function
and comments its component conditions.
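
A sketch of what such a helper might look like; the name and exact signature
are assumptions:

static bool rcu_gp_fqs_check_wake(struct rcu_state *rsp,
				  struct rcu_node *rnp, int *gfp)
{
	/* Someone requested a force-quiescent-state scan. */
	*gfp = READ_ONCE(rsp->gp_flags);
	if (*gfp & RCU_GP_FLAG_FQS)
		return true;

	/* The current grace period has completed. */
	if (!READ_ONCE(rnp->qsmask) && !rcu_preempt_blocked_readers_cgp(rnp))
		return true;

	return false;
}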

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  documentation: Describe new expedited stall warnings
Paul E. McKenney [Tue, 30 Jun 2015 21:54:09 +0000 (14:54 -0700)]
documentation: Describe new expedited stall warnings

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Add stall warnings to synchronize_sched_expedited()
Paul E. McKenney [Tue, 30 Jun 2015 18:14:32 +0000 (11:14 -0700)]
rcu: Add stall warnings to synchronize_sched_expedited()

Although synchronize_sched_expedited() historically has no RCU CPU stall
warnings, the availability of the rcupdate.rcu_expedited boot parameter
invalidates the old assumption that synchronize_sched()'s stall warnings
would suffice.  This commit therefore adds RCU CPU stall warnings to
synchronize_sched_expedited().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Extend expedited funnel locking to rcu_data structure
Paul E. McKenney [Tue, 30 Jun 2015 00:06:39 +0000 (17:06 -0700)]
rcu: Extend expedited funnel locking to rcu_data structure

The strictly rcu_node based funnel-locking scheme works well in many
cases, but systems with CONFIG_RCU_FANOUT_LEAF=64 won't necessarily get
all that much concurrency.  This commit therefore extends the funnel
locking into the per-CPU rcu_data structure, providing concurrency equal
to the number of CPUs.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Consolidate last open-coded expedited memory barrier
Paul E. McKenney [Sat, 27 Jun 2015 16:36:29 +0000 (09:36 -0700)]
rcu: Consolidate last open-coded expedited memory barrier

One of the requirements on RCU grace periods is that if there is a
causal chain of operations that starts after one grace period and
ends before another grace period, then the two grace periods must
be serialized.  There has been (and might still be) code that relies
on this, for example, certain types of reference-counting code that
does a call_rcu() within an RCU callback function.

This requirement is why there is an smp_mb() at the end of both
synchronize_sched_expedited() and synchronize_rcu_expedited().
However, this is the only smp_mb() in these functions, so it would
be nicer to consolidate it into rcu_exp_gp_seq_end().  This commit
does just that.
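
A sketch of the consolidated helper (the sequence-field name is an
assumption):

static void rcu_exp_gp_seq_end(struct rcu_state *rsp)
{
	rcu_seq_end(&rsp->expedited_sequence);	/* assumed field name */
	smp_mb(); /* Ensure that consecutive expedited grace periods serialize. */
}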

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Apply rcu_seq operations to _rcu_barrier()
Paul E. McKenney [Fri, 26 Jun 2015 18:20:00 +0000 (11:20 -0700)]
rcu: Apply rcu_seq operations to _rcu_barrier()

The rcu_seq operations were open-coded in _rcu_barrier(), so this commit
replaces the open-coding with the shiny new rcu_seq operations.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Use funnel locking for synchronize_rcu_expedited()'s polling loop
Paul E. McKenney [Fri, 26 Jun 2015 02:03:16 +0000 (19:03 -0700)]
rcu: Use funnel locking for synchronize_rcu_expedited()'s polling loop

This commit gets rid of synchronize_rcu_expedited()'s mutex_trylock()
polling loop in favor of the funnel-locking scheme that was abstracted
from synchronize_sched_expedited().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Fix synchronize_sched_expedited() type error for "s"
Paul E. McKenney [Thu, 25 Jun 2015 23:35:03 +0000 (16:35 -0700)]
rcu: Fix synchronize_sched_expedited() type error for "s"

The type of "s" has been "long" rather than the correct "unsigned long"
for quite some time.  This commit fixes this type error.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Abstract funnel locking from synchronize_sched_expedited()
Paul E. McKenney [Thu, 25 Jun 2015 23:30:54 +0000 (16:30 -0700)]
rcu: Abstract funnel locking from synchronize_sched_expedited()

This commit abstracts funnel locking from synchronize_sched_expedited()
so that it may be used by synchronize_rcu_expedited().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Make synchronize_rcu_expedited() use sequence-counter scheme
Paul E. McKenney [Thu, 25 Jun 2015 22:52:50 +0000 (15:52 -0700)]
rcu: Make synchronize_rcu_expedited() use sequence-counter scheme

Although synchronize_rcu_expedited() uses a sequence-counter scheme, it
is based on a single increment per grace period, which means that tasks
piggybacking off of concurrent grace periods may be forced to wait longer
than necessary.  This commit therefore applies the new sequence-count
functions developed for synchronize_sched_expedited() to speed things
up a bit and to consolidate the sequence-counter implementation.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Abstract sequence counting from synchronize_sched_expedited()
Paul E. McKenney [Thu, 25 Jun 2015 22:00:58 +0000 (15:00 -0700)]
rcu: Abstract sequence counting from synchronize_sched_expedited()

This commit creates rcu_exp_gp_seq_start() and rcu_exp_gp_seq_end() to
bracket an expedited grace period, rcu_exp_gp_seq_snap() to snapshot the
sequence counter, and rcu_exp_gp_seq_done() to check to see if a full
expedited grace period has elapsed since the snapshot.  These will be
applied to synchronize_rcu_expedited().  These are defined in terms of
underlying rcu_seq_start(), rcu_seq_end(), rcu_seq_snap(), rcu_seq_done(),
which will be applied to _rcu_barrier().

One reason that this commit doesn't use the seqcount primitives themselves
is that the smp_wmb() in those primitives is insufficient because
expedited grace periods do reads as well as writes.  In addition,
the read-side seqcount primitives detect a potentially partial change,
whereas the expedited primitives instead need a guaranteed full change.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Make expedited GP CPU stoppage asynchronous
Peter Zijlstra [Thu, 25 Jun 2015 18:27:10 +0000 (11:27 -0700)]
rcu: Make expedited GP CPU stoppage asynchronous

Sequentially stopping the CPUs slows down expedited grace periods by
at least a factor of two, based on rcutorture's grace-period-per-second
rate.  This is a conservative measure because rcutorture uses unusually
long RCU read-side critical sections and because rcutorture periodically
quiesces the system in order to test RCU's ability to ramp down to and
up from the idle state.  This commit therefore replaces the stop_one_cpu()
with stop_one_cpu_nowait(), using an atomic-counter scheme to determine
when all CPUs have passed through the stopped state.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Get rid of synchronize_sched_expedited()'s polling loop
Paul E. McKenney [Wed, 24 Jun 2015 21:20:08 +0000 (14:20 -0700)]
rcu: Get rid of synchronize_sched_expedited()'s polling loop

This commit gets rid of synchronize_sched_expedited()'s mutex_trylock()
polling loop in favor of a funnel-locking scheme based on the rcu_node
tree.  The work-done check is done at each level of the tree, allowing
high-contention situations to be resolved quickly with reasonable levels
of mutex contention.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Rework synchronize_sched_expedited() counter handling
Paul E. McKenney [Wed, 24 Jun 2015 17:46:30 +0000 (10:46 -0700)]
rcu: Rework synchronize_sched_expedited() counter handling

Now that synchronize_sched_expedited() has a mutex, it can use a simpler
work-already-done detection scheme.  This commit simplifies this scheme
by using something similar to the sequence-locking counter scheme.
A counter is incremented before and after each grace period, so that
the counter is odd in the midst of the grace period and even otherwise.
So if the counter has advanced to the second even number that is
greater than or equal to the snapshot, the required grace period has
already happened.
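
A hedged sketch of the even/odd arithmetic described above (not the kernel's
exact helpers):

static unsigned long exp_seq;	/* odd: GP in progress, even: idle */

static unsigned long exp_snap(void)
{
	/* Second even number greater than or equal to the current value. */
	return (READ_ONCE(exp_seq) + 3) & ~0x1UL;
}

static bool exp_done(unsigned long snap)
{
	return READ_ONCE(exp_seq) >= snap;	/* wraparound ignored for clarity */
}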

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Switch synchronize_sched_expedited() to stop_one_cpu()
Peter Zijlstra [Wed, 24 Jun 2015 02:03:45 +0000 (19:03 -0700)]
rcu: Switch synchronize_sched_expedited() to stop_one_cpu()

The synchronize_sched_expedited() currently invokes try_stop_cpus(),
which schedules the stopper kthreads on each online non-idle CPU,
and waits until all those kthreads are running before letting any
of them stop.  This is disastrous for real-time workloads, which
get hit with a preemption that is as long as the longest scheduling
latency on any CPU, including any non-realtime housekeeping CPUs.
This commit therefore switches to using stop_one_cpu() on each CPU
in turn.  This avoids inflicting the worst-case scheduling latency
on the worst-case CPU onto all other CPUs, and also simplifies the
code a little bit.
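
The per-CPU loop then looks roughly like the following sketch, where the
stopper callback name is hypothetical:

	int cpu;

	for_each_online_cpu(cpu) {
		/* Preempts only this CPU, only while the callback runs. */
		stop_one_cpu(cpu, synchronize_sched_expedited_cpu_stop, NULL);
	}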

Follow-up commits will simplify the counter-snapshotting algorithm
and convert a number of the counters that are now protected by the
new ->expedited_mutex to non-atomic.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
[ paulmck: Kept stop_one_cpu(), dropped disabling of "guardrails". ]
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Remove CONFIG_RCU_CPU_STALL_INFO
Paul E. McKenney [Thu, 11 Jun 2015 22:22:43 +0000 (15:22 -0700)]
rcu: Remove CONFIG_RCU_CPU_STALL_INFO

The CONFIG_RCU_CPU_STALL_INFO has been default-y for a couple of
releases with no complaints, so it is time to eliminate this Kconfig
option entirely, so that the long-form RCU CPU stall warnings cannot
be disabled.  This commit does just that.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Stop disabling CPU hotplug in synchronize_rcu_expedited()
Paul E. McKenney [Thu, 11 Jun 2015 21:50:22 +0000 (14:50 -0700)]
rcu: Stop disabling CPU hotplug in synchronize_rcu_expedited()

The fact that tasks could be migrated from leaf to root rcu_node
structures meant that synchronize_rcu_expedited() had to disable
CPU hotplug.  However, tasks now stay put, so this commit removes the
CPU-hotplug disabling from synchronize_rcu_expedited().

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Reset rcu_fanout_leaf if out of bounds
Paul E. McKenney [Thu, 4 Jun 2015 17:06:01 +0000 (10:06 -0700)]
rcu: Reset rcu_fanout_leaf if out of bounds

Currently if the rcu_fanout_leaf boot parameter is out of bounds (that
is, less than RCU_FANOUT_LEAF or greater than the number of bits in an
unsigned long), a warning is issued and execution continues with the
out-of-bounds value.  This can result in all manner of failures, so this
patch resets rcu_fanout_leaf to RCU_FANOUT_LEAF when an out-of-bounds
condition is detected.
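
A sketch of the reset described above:

	if (rcu_fanout_leaf < RCU_FANOUT_LEAF ||
	    rcu_fanout_leaf > sizeof(unsigned long) * 8) {
		rcu_fanout_leaf = RCU_FANOUT_LEAF;
		WARN_ON(1);
		return;
	}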

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Shut up bogus gcc array bounds warning
Alexander Gordeev [Thu, 9 Jul 2015 13:34:23 +0000 (15:34 +0200)]
rcu: Shut up bogus gcc array bounds warning

Because gcc does not realize that the following loop might never be entered
(i.e., in the case of rcu_num_lvls == 1):

  for (i = 1; i < rcu_num_lvls; i++)
          rsp->level[i] = rsp->level[i - 1] + levelcnt[i - 1];

some compiler versions (pre-5.x?) give a bogus warning:

  kernel/rcu/tree.c: In function ‘rcu_init_one.isra.55’:
  kernel/rcu/tree.c:4108:13: warning: array subscript is above array bounds [-Warray-bounds]
     rsp->level[i] = rsp->level[i - 1] + rsp->levelcnt[i - 1];
               ^
Fix that warning by adding an extra item to the rcu_state::level[]
array.  Once the bogus warning is fixed in gcc and the kernel drops
support for older versions, the dummy item may be removed from
the array.
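
In other words, the fix amounts to one extra, never-referenced slot
(sketch; the declaration details may differ):

	struct rcu_node *level[RCU_NUM_LVLS + 1];  /* +1 only to placate -Warray-bounds */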

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcutorture: Enable lockdep-RCU on TASKS01
Paul E. McKenney [Tue, 30 Jun 2015 16:14:01 +0000 (09:14 -0700)]
rcutorture: Enable lockdep-RCU on TASKS01

Currently none of the RCU-tasks scenarios enables lockdep-RCU, which
causes bugs to be missed.  This commit therefore enables lockdep-RCU
on TASKS01.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcutorture: Add RCU-tasks qualifier to dereference
Paul E. McKenney [Tue, 30 Jun 2015 15:57:57 +0000 (08:57 -0700)]
rcutorture: Add RCU-tasks qualifier to dereference

Although RCU-tasks isn't really designed to support rcu_dereference()
and list manipulation, that is how rcutorture tests it.  Which means
that lockdep-RCU complains about the rcu_dereference_check() invocations
because RCU-tasks doesn't have read-side markers.  This commit therefore
creates a torturing_tasks() helper to silence the lockdep-RCU complaints from
rcu_dereference_check() when RCU-tasks is being tortured.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcutorture: Fix rcu_torture_cbflood() for callback-free RCU
Paul E. McKenney [Tue, 23 Jun 2015 01:11:31 +0000 (18:11 -0700)]
rcutorture: Fix rcu_torture_cbflood() for callback-free RCU

The rcu_torture_cbflood() function correctly checks for flavors of
RCU that lack analogs to call_rcu() and rcu_barrier(), but in that
case it fails to terminate correctly.  In fact, it terminates so
incorrectly that segfaults can result.  This commit therefore causes
rcu_torture_cbflood() to do the proper wait-for-stop procedure.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcutorture: Bounds-check rcutorture.shuffle_interval
Paul E. McKenney [Thu, 14 May 2015 23:55:45 +0000 (16:55 -0700)]
rcutorture: Bounds-check rcutorture.shuffle_interval

Specifying a negative rcutorture.shuffle_interval value will cause a
negative value to be used as a sleep time.  This commit therefore
refuses to start shuffling unless the rcutorture.shuffle_interval
value is greater than zero.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcutorture: Check nfakewriters parameter
Paul E. McKenney [Thu, 14 May 2015 22:42:40 +0000 (15:42 -0700)]
rcutorture: Check nfakewriters parameter

Currently, a negative value for rcutorture.nfakewriters= can cause
rcutorture to pass a negative size to the memory allocator, which
is not really a particularly good thing to do.  This commit therefore
adds bounds checking to this parameter, so that values that are less
than or equal to zero disable fake writing.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcutorture: Better bounds checking for n_barrier_cbs
Paul E. McKenney [Thu, 14 May 2015 22:35:43 +0000 (15:35 -0700)]
rcutorture: Better bounds checking for n_barrier_cbs

A negative value for rcutorture.n_barrier_cbs can pass a negative value
to the memory allocator, so this commit instead causes rcu_barrier()
testing to be disabled in this case.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Simplify arithmetic to calculate number of RCU nodes
Alexander Gordeev [Wed, 3 Jun 2015 06:18:31 +0000 (08:18 +0200)]
rcu: Simplify arithmetic to calculate number of RCU nodes

This update makes the arithmetic that calculates the number of
RCU nodes more straightforward and easier to read.

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Limit count of static data to the number of RCU levels
Alexander Gordeev [Wed, 3 Jun 2015 06:18:30 +0000 (08:18 +0200)]
rcu: Limit count of static data to the number of RCU levels

Although the number of RCU levels may be less than the current
maximum of four, some static data associated with each level
are allocated for all four levels.  As a result, the extra data
never get accessed and just waste memory.  This update limits
the count of allocated items to the number of RCU levels actually used.

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Remove unnecessary fields from rcu_state structure
Alexander Gordeev [Wed, 3 Jun 2015 06:18:29 +0000 (08:18 +0200)]
rcu: Remove unnecessary fields from rcu_state structure

Members rcu_state::levelcnt[] and rcu_state::levelspread[]
are only used at init. There is no reason to keep them
afterwards.

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Limit rcu_capacity[] size to RCU_NUM_LVLS items
Alexander Gordeev [Wed, 3 Jun 2015 06:18:28 +0000 (08:18 +0200)]
rcu: Limit rcu_capacity[] size to RCU_NUM_LVLS items

The number of items in the rcu_capacity[] array is defined by the
MAX_RCU_LVLS macro.  However, that array is never accessed beyond
index RCU_NUM_LVLS.  Therefore, we can limit the array to
RCU_NUM_LVLS items and eliminate MAX_RCU_LVLS.  As a result,
memory is conserved in most cases.

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Limit rcu_state::levelcnt[] to RCU_NUM_LVLS items
Alexander Gordeev [Wed, 3 Jun 2015 06:18:27 +0000 (08:18 +0200)]
rcu: Limit rcu_state::levelcnt[] to RCU_NUM_LVLS items

The variable rcu_num_lvls is limited by the RCU_NUM_LVLS macro.
In turn, the rcu_state::levelcnt[] array is never accessed
beyond rcu_num_lvls, so rcu_state::levelcnt[] is safe
to limit to RCU_NUM_LVLS items.

Since rcu_num_lvls can change during boot (as a result of a
rcutree.rcu_fanout_leaf kernel-parameter update), one might
assume that a new value could overflow RCU_NUM_LVLS.
However, that is not the case, because leaf-level fanout is only
permitted to increase, which can only cause rcu_num_lvls to
decrease.

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Simplify rcu_init_geometry() capacity arithmetics
Alexander Gordeev [Wed, 3 Jun 2015 06:18:26 +0000 (08:18 +0200)]
rcu: Simplify rcu_init_geometry() capacity arithmetics

The current code suggests that introducing the extra level to the
rcu_capacity[] array makes some of the arithmetic easier.
In fact, it appears rather confusing and unnecessary.

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Cleanup rcu_init_geometry() code and arithmetics
Alexander Gordeev [Wed, 3 Jun 2015 06:18:25 +0000 (08:18 +0200)]
rcu: Cleanup rcu_init_geometry() code and arithmetics

This update simplifies the rcu_init_geometry() code flow
and makes the calculation of the total number of rcu_node
structures easier to read.

The update relies on the fact that num_rcu_lvl[] is never
accessed beyond index rcu_num_lvls by the rest of the
code.  Therefore, there is no need to initialize the whole of
num_rcu_lvl[].

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Remove superfluous local variable in rcu_init_geometry()
Alexander Gordeev [Wed, 3 Jun 2015 06:18:24 +0000 (08:18 +0200)]
rcu: Remove superfluous local variable in rcu_init_geometry()

The local variable 'n' mimics 'nr_cpu_ids' while both are
used within one function.  There is no reason for 'n' to
exist whatsoever.

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Panic if RCU tree can not accommodate all CPUs
Alexander Gordeev [Wed, 3 Jun 2015 06:18:23 +0000 (08:18 +0200)]
rcu: Panic if RCU tree can not accommodate all CPUs

Currently, the condition in which the RCU tree is unable to accommodate
the configured number of CPUs is not permitted and causes
a fall back to compile-time values.  However, the code has no
means to exceed the RCU tree capacity either at compile time
or at run time.  Therefore, if this condition is met at run
time, it indicates a serious problem elsewhere and should
be handled with a panic.
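
A sketch of such a check (the message wording is illustrative):

	if (nr_cpu_ids > rcu_capacity[RCU_NUM_LVLS - 1])
		panic("rcu_init_geometry: rcu_capacity[] is too small");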

Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Provide more diagnostics for stalled GP kthread
Paul E. McKenney [Tue, 19 May 2015 21:16:52 +0000 (14:16 -0700)]
rcu: Provide more diagnostics for stalled GP kthread

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Change return type to bool
Nicholas Mc Guire [Wed, 27 May 2015 06:56:25 +0000 (08:56 +0200)]
rcu: Change return type to bool

Type-checking coccinelle spatches are being used to locate type mismatches
between function signatures and return values; in this case this produced:
./kernel/rcu/srcu.c:271 WARNING: return of wrong type
        int != unsigned long,

srcu_readers_active() returns an int that is the sum of per_cpu unsigned
long counters, but the only user is cleanup_srcu_struct(), which uses it as a
boolean (condition) to see whether there are any readers, rather than actually
using the approximate number of readers.  The theoretically possible
unsigned long overflow case does not need to be handled explicitly - if
we had 4G++ readers then something else went wrong a long time ago.

proposal: change the return type to boolean. The function name is left
          unchanged as it fits the naming expectation for a boolean.

patch was compile tested for x86_64_defconfig (implies CONFIG_SRCU=y)

patch is against 4.1-rc5 (localversion-next is -next-20150525)

Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Deinline rcu_read_lock_sched_held() if DEBUG_LOCK_ALLOC
Denys Vlasenko [Tue, 26 May 2015 15:48:34 +0000 (17:48 +0200)]
rcu: Deinline rcu_read_lock_sched_held() if DEBUG_LOCK_ALLOC

DEBUG_LOCK_ALLOC=y is not a production setting, but it is
not very unusual either. Many developers routinely
use kernels built with it enabled.

Apart from being selected by hand, it is also auto-selected by
PROVE_LOCKING "Lock debugging: prove locking correctness" and
LOCK_STAT "Lock usage statistics" config options.
LOCK STAT is necessary for "perf lock" to work.

I wouldn't spend too much time optimizing it, but this particular
function has a very large cost in code size: when it is deinlined,
code size decreases by 830,000 bytes:

    text     data      bss       dec     hex filename
85674192 22294776 20627456 128596424 7aa39c8 vmlinux.before
84837612 22294424 20627456 127759492 79d7484 vmlinux

(with this config: http://busybox.net/~vda/kernel_config)

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
CC: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
CC: Josh Triplett <josh@joshtriplett.org>
CC: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
CC: Lai Jiangshan <laijs@cn.fujitsu.com>
CC: Tejun Heo <tj@kernel.org>
CC: Oleg Nesterov <oleg@redhat.com>
CC: linux-kernel@vger.kernel.org
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  doc: Call out smp_mb__after_unlock_lock() transitivity
Paul E. McKenney [Mon, 13 Jul 2015 22:55:52 +0000 (15:55 -0700)]
doc: Call out smp_mb__after_unlock_lock() transitivity

Although "full barrier" should be interpreted as providing transitivity,
it is worth eliminating any possible confusion.  This commit therefore
adds "(including transitivity)" to eliminate any possible confusion.

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  documentation: Replace ACCESS_ONCE() by READ_ONCE() and WRITE_ONCE()
Paul E. McKenney [Thu, 18 Jun 2015 21:33:24 +0000 (14:33 -0700)]
documentation: Replace ACCESS_ONCE() by READ_ONCE() and WRITE_ONCE()

Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  documentation: Fix variable-name typo in memory-barriers.txt
Paul E. McKenney [Tue, 19 May 2015 01:27:42 +0000 (18:27 -0700)]
documentation: Fix variable-name typo in memory-barriers.txt

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  documentation: Fix spelling of "operators"
Paul E. McKenney [Fri, 15 May 2015 00:31:07 +0000 (17:31 -0700)]
documentation: Fix spelling of "operators"

Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  documentation: Bring rcutorture parameters up to date
Paul E. McKenney [Fri, 15 May 2015 00:29:51 +0000 (17:29 -0700)]
documentation: Bring rcutorture parameters up to date

This commit changes the documentation of the rcutorture parameters to
better match reality.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
9 years ago  rcu: Drop RCU_USER_QS in favor of NO_HZ_FULL
Paul E. McKenney [Wed, 13 May 2015 17:41:58 +0000 (10:41 -0700)]
rcu: Drop RCU_USER_QS in favor of NO_HZ_FULL

The RCU_USER_QS Kconfig parameter is now just a synonym for NO_HZ_FULL,
so this commit eliminates RCU_USER_QS, replacing all uses with NO_HZ_FULL.

Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Frederic Weisbecker <fweisbec@gmail.com>
9 years ago  Linux 4.2-rc1
Linus Torvalds [Sun, 5 Jul 2015 18:01:52 +0000 (11:01 -0700)]
Linux 4.2-rc1

9 years ago  Merge tag 'platform-drivers-x86-v4.2-2' of git://git.infradead.org/users/dvhart/linux...
Linus Torvalds [Sun, 5 Jul 2015 17:54:09 +0000 (10:54 -0700)]
Merge tag 'platform-drivers-x86-v4.2-2' of git://git.infradead.org/users/dvhart/linux-platform-drivers-x86

Pull late x86 platform driver updates from Darren Hart:
 "The following came in a bit later and I wanted them to bake in next a
  few more days before submitting, thus the second pull.

  A new intel_pmc_ipc driver, a symmetrical allocation and free fix in
  dell-laptop, a couple minor fixes, and some updated documentation in
  the dell-laptop comments.

  intel_pmc_ipc:
   - Add Intel Apollo Lake PMC IPC driver

  tc1100-wmi:
   - Delete an unnecessary check before the function call "kfree"

  dell-laptop:
   - Fix allocating & freeing SMI buffer page
   - Show info about WiGig and UWB in debugfs
   - Update information about wireless control"

* tag 'platform-drivers-x86-v4.2-2' of git://git.infradead.org/users/dvhart/linux-platform-drivers-x86:
  intel_pmc_ipc: Add Intel Apollo Lake PMC IPC driver
  tc1100-wmi: Delete an unnecessary check before the function call "kfree"
  dell-laptop: Fix allocating & freeing SMI buffer page
  dell-laptop: Show info about WiGig and UWB in debugfs
  dell-laptop: Update information about wireless control

9 years ago  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds [Sun, 5 Jul 2015 02:36:06 +0000 (19:36 -0700)]
Merge branch 'for-linus' of git://git./linux/kernel/git/viro/vfs

Pull more vfs updates from Al Viro:
 "Assorted VFS fixes and related cleanups (IMO the most interesting in
  that part are f_path-related things and Eric's descriptor-related
  stuff).  UFS regression fixes (it got broken last cycle).  9P fixes.
  fs-cache series, DAX patches, Jan's file_remove_suid() work"

[ I'd say this is much more than "fixes and related cleanups".  The
  file_table locking rule change by Eric Dumazet is a rather big and
  fundamental update even if the patch isn't huge.   - Linus ]

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (49 commits)
  9p: cope with bogus responses from server in p9_client_{read,write}
  p9_client_write(): avoid double p9_free_req()
  9p: forgetting to cancel request on interrupted zero-copy RPC
  dax: bdev_direct_access() may sleep
  block: Add support for DAX reads/writes to block devices
  dax: Use copy_from_iter_nocache
  dax: Add block size note to documentation
  fs/file.c: __fget() and dup2() atomicity rules
  fs/file.c: don't acquire files->file_lock in fd_install()
  fs:super:get_anon_bdev: fix race condition could cause dev exceed its upper limitation
  vfs: avoid creation of inode number 0 in get_next_ino
  namei: make set_root_rcu() return void
  make simple_positive() public
  ufs: use dir_pages instead of ufs_dir_pages()
  pagemap.h: move dir_pages() over there
  remove the pointless include of lglock.h
  fs: cleanup slight list_entry abuse
  xfs: Correctly lock inode when removing suid and file capabilities
  fs: Call security_ops->inode_killpriv on truncate
  fs: Provide function telling whether file_remove_privs() will do anything
  ...

9 years ago  bluetooth: fix list handling
Linus Torvalds [Sun, 5 Jul 2015 02:11:33 +0000 (19:11 -0700)]
bluetooth: fix list handling

Commit 835a6a2f8603 ("Bluetooth: Stop sabotaging list poisoning")
thought that the code was sabotaging the list poisoning when NULL'ing
out the list pointers and removed it.

But what was going on was that the bluetooth code was using NULL
pointers for the list as a way to mark it empty, and that commit just
broke it (and replaced the test against NULL with a "list_empty()" test on
an uninitialized list instead, breaking things even further).

So fix it all up to use the regular and real list_empty() handling
(which does not use NULL, but a pointer to itself), also making sure to
initialize the list properly (the previous NULL case was initialized
implicitly by the session being allocated with kzalloc()).

This is a combination of patches by Marcel Holtmann and Tedd Ho-Jeong
An.

[ I would normally expect to get this through the bt tree, but I'm going
  to release -rc1, so I'm just committing this directly   - Linus ]

Reported-and-tested-by: Jörg Otte <jrg.otte@gmail.com>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Original-by: Tedd Ho-Jeong An <tedd.an@intel.com>
Original-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
9 years ago  Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target...
Linus Torvalds [Sat, 4 Jul 2015 21:13:43 +0000 (14:13 -0700)]
Merge branch 'for-next' of git://git./linux/kernel/git/nab/target-pending

Pull SCSI target updates from Nicholas Bellinger:
 "It's been a busy development cycle for target-core in a number of
  different areas.

  The fabric API usage for se_node_acl allocation is now within
  target-core code, dropping the external API callers for all fabric
  drivers tree-wide.

  There is a new conversion to RCU hlists for se_node_acl and
  se_portal_group LUN mappings, which turns fast-path LUN lookup into a
  completely lockless code-path.  It also removes the original
  hard-coded limitation of 256 LUNs per fabric endpoint.

  The configfs attributes for backends can now be shared between core
  and driver code, allowing existing drivers to use common code while
  still allowing flexibility for new backend provided attributes.

  The highlights include:

   - Merge sbc_verify_dif_* into common code (sagi)
   - Remove iscsi-target support for obsolete IFMarker/OFMarker
     (Christophe Vu-Brugier)
   - Add bidi support in target/user backend (ilias + vangelis + agover)
   - Move se_node_acl allocation into target-core code (hch)
   - Add crc_t10dif_update common helper (akinobu + mkp)
   - Handle target-core odd SGL mapping for data transfer memory
     (akinobu)
   - Move transport ID handling into target-core (hch)
   - Move task tag into struct se_cmd + support 64-bit tags (bart)
   - Convert se_node_acl->device_list[] to RCU hlist (nab + hch +
     paulmck)
   - Convert se_portal_group->tpg_lun_list[] to RCU hlist (nab + hch +
     paulmck)
   - Simplify target backend driver registration (hch)
   - Consolidate + simplify target backend attribute implementations
     (hch + nab)
   - Subsume se_port + t10_alua_tg_pt_gp_member into se_lun (hch)
   - Drop lun_sep_lock for se_lun->lun_se_dev RCU usage (hch + nab)
   - Drop unnecessary core_tpg_register TFO parameter (nab)
   - Use 64-bit LUNs tree-wide (hannes)
   - Drop left-over TARGET_MAX_LUNS_PER_TRANSPORT limit (hannes)"

* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending: (76 commits)
  target: Bump core version to v5.0
  target: remove target_core_configfs.h
  target: remove unused TARGET_CORE_CONFIG_ROOT define
  target: consolidate version defines
  target: implement WRITE_SAME with UNMAP bit using ->execute_unmap
  target: simplify UNMAP handling
  target: replace se_cmd->execute_rw with a protocol_data field
  target/user: Fix inconsistent kmap_atomic/kunmap_atomic
  target: Send UA when changing LUN inventory
  target: Send UA upon LUN RESET tmr completion
  target: Send UA on ALUA target port group change
  target: Convert se_lun->lun_deve_lock to normal spinlock
  target: use 'se_dev_entry' when allocating UAs
  target: Remove 'ua_nacl' pointer from se_ua structure
  target_core_alua: Correct UA handling when switching states
  xen-scsiback: Fix compile warning for 64-bit LUN
  target: Remove TARGET_MAX_LUNS_PER_TRANSPORT
  target: use 64-bit LUNs
  target: Drop duplicate + unused se_dev_check_wce
  target: Drop unnecessary core_tpg_register TFO parameter
  ...

9 years ago  Merge tag 'ntb-4.2' of git://github.com/jonmason/ntb
Linus Torvalds [Sat, 4 Jul 2015 21:07:47 +0000 (14:07 -0700)]
Merge tag 'ntb-4.2' of git://github.com/jonmason/ntb

Pull NTB updates from Jon Mason:
 "This includes a pretty significant reworking of the NTB core code, but
  has already produced some significant performance improvements.

  An abstraction layer was added to allow the hardware and clients to be
  easily added.  This required rewriting the NTB transport layer for
  this abstraction layer.  This modification will allow future "high
  performance" NTB clients.

  In addition to this change, a number of performance modifications were
  added.  These changes include NUMA enablement, using CPU memcpy
  instead of asyncdma, and modification of NTB layer MTU size"

* tag 'ntb-4.2' of git://github.com/jonmason/ntb: (22 commits)
  NTB: Add split BAR output for debugfs stats
  NTB: Change WARN_ON_ONCE to pr_warn_once on unsafe
  NTB: Print driver name and version in module init
  NTB: Increase transport MTU to 64k from 16k
  NTB: Rename Intel code names to platform names
  NTB: Default to CPU memcpy for performance
  NTB: Improve performance with write combining
  NTB: Use NUMA memory in Intel driver
  NTB: Use NUMA memory and DMA chan in transport
  NTB: Rate limit ntb_qp_link_work
  NTB: Add tool test client
  NTB: Add ping pong test client
  NTB: Add parameters for Intel SNB B2B addresses
  NTB: Reset transport QP link stats on down
  NTB: Do not advance transport RX on link down
  NTB: Differentiate transport link down messages
  NTB: Check the device ID to set errata flags
  NTB: Enable link for Intel root port mode in probe
  NTB: Read peer info from local SPAD in transport
  NTB: Split ntb_hw_intel and ntb_transport drivers
  ...

9 years ago  9p: cope with bogus responses from server in p9_client_{read,write}
Al Viro [Sat, 4 Jul 2015 20:17:39 +0000 (16:17 -0400)]
9p: cope with bogus responses from server in p9_client_{read,write}

If the server claims to have written/read more than we'd told it to,
warn and cap the claimed byte count to avoid advancing more than
we are ready to.

9 years ago  p9_client_write(): avoid double p9_free_req()
Al Viro [Sat, 4 Jul 2015 20:11:05 +0000 (16:11 -0400)]
p9_client_write(): avoid double p9_free_req()

Braino in "9p: switch p9_client_write() to passing it struct iov_iter *";
if response is impossible to parse and we discard the request, get the
out of the loop right there.

Cc: stable@vger.kernel.org
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
9 years ago  9p: forgetting to cancel request on interrupted zero-copy RPC
Al Viro [Sat, 4 Jul 2015 20:04:19 +0000 (16:04 -0400)]
9p: forgetting to cancel request on interrupted zero-copy RPC

If we'd already sent a request and decide to abort it, we *must*
issue TFLUSH properly and not just blindly reuse the tag, or
we'll get seriously screwed when the response eventually arrives
and we confuse it with the response to a later request that had reused
the same tag.

Cc: stable@vger.kernel.org # v3.2 and later
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
9 years ago  dax: bdev_direct_access() may sleep
Matthew Wilcox [Fri, 3 Jul 2015 14:40:43 +0000 (10:40 -0400)]
dax: bdev_direct_access() may sleep

The brd driver is the only in-tree driver that may sleep currently.
After some discussion on linux-fsdevel, we decided that any driver
may choose to sleep in its ->direct_access method.  To ensure that all
callers of bdev_direct_access() are prepared for this, add a call
to might_sleep().

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
9 years ago  block: Add support for DAX reads/writes to block devices
Matthew Wilcox [Fri, 3 Jul 2015 14:40:42 +0000 (10:40 -0400)]
block: Add support for DAX reads/writes to block devices

If a block device supports the ->direct_access methods, bypass the normal
DIO path and use DAX to go straight to memcpy() instead of allocating
a DIO and a BIO.

Includes support for the DIO_SKIP_DIO_COUNT flag in DAX, as is done in
do_blockdev_direct_IO().

Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
9 years ago  dax: Use copy_from_iter_nocache
Matthew Wilcox [Fri, 3 Jul 2015 14:40:39 +0000 (10:40 -0400)]
dax: Use copy_from_iter_nocache

When userspace does a write, there's no need for the written data to
pollute the CPU cache.  This matches the original XIP code.

Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
9 years ago  dax: Add block size note to documentation
Matthew Wilcox [Fri, 3 Jul 2015 14:40:38 +0000 (10:40 -0400)]
dax: Add block size note to documentation

For block devices which are small enough, mkfs will default to creating
a filesystem with block sizes smaller than page size.

Signed-off-by: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
9 years ago  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds [Sat, 4 Jul 2015 18:29:59 +0000 (11:29 -0700)]
Merge tag 'for-linus' of git://git./virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
 "Except for the preempt notifiers fix, these are all small bugfixes
  that could have waited for -rc2.  Sending them now since I was
  taking care of Peter's patch anyway"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  kvm: add hyper-v crash msrs values
  KVM: x86: remove data variable from kvm_get_msr_common
  KVM: s390: virtio-ccw: don't overwrite config space values
  KVM: x86: keep track of LVT0 changes under APICv
  KVM: x86: properly restore LVT0
  KVM: x86: make vapics_in_nmi_mode atomic
  sched, preempt_notifier: separate notifier registration from static_key inc/dec

9 years ago  NTB: Add split BAR output for debugfs stats
Dave Jiang [Thu, 18 Jun 2015 09:17:30 +0000 (05:17 -0400)]
NTB: Add split BAR output for debugfs stats

When split BAR is enabled, the driver needs to dump out the split BAR
registers rather than the original 64bit BAR registers.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Change WARN_ON_ONCE to pr_warn_once on unsafe
Dave Jiang [Mon, 15 Jun 2015 12:22:30 +0000 (08:22 -0400)]
NTB: Change WARN_ON_ONCE to pr_warn_once on unsafe

Unsafe doorbell and scratchpad access should display a reason when
WARN is called.  Otherwise we get a stack dump without any explanation.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Print driver name and version in module init
Dave Jiang [Mon, 15 Jun 2015 12:21:33 +0000 (08:21 -0400)]
NTB: Print driver name and version in module init

Print the driver name and version to indicate what is being loaded.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Increase transport MTU to 64k from 16k
Dave Jiang [Wed, 3 Jun 2015 15:29:38 +0000 (11:29 -0400)]
NTB: Increase transport MTU to 64k from 16k

Benchmarking showed a significant performance increase with the MTU size
set to 64k instead of 16k.  Change the driver default to 64k.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Rename Intel code names to platform names
Dave Jiang [Wed, 20 May 2015 16:55:47 +0000 (12:55 -0400)]
NTB: Rename Intel code names to platform names

Instead of using the platform code names, use the correct platform names
to identify the respective Intel NTB hardware.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Default to CPU memcpy for performance
Dave Jiang [Tue, 19 May 2015 20:52:04 +0000 (16:52 -0400)]
NTB: Default to CPU memcpy for performance

Disable DMA usage by default, since the CPU provides much better
performance with write combining.  Provide a module parameter to enable
DMA usage when offloading the memcpy is preferred.
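
A hedged sketch of such a module parameter (the parameter name is an
assumption):

static bool use_dma;	/* default off: CPU memcpy */
module_param(use_dma, bool, 0644);
MODULE_PARM_DESC(use_dma, "Use a DMA engine to offload the memcpy");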

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Improve performance with write combining
Dave Jiang [Tue, 19 May 2015 20:45:46 +0000 (16:45 -0400)]
NTB: Improve performance with write combining

Changing the memory window BAR mappings to write combining significantly
boosts the performance.  We will also use a memcpy that uses non-temporal
stores, which showed a performance improvement when doing non-cached
memcpys.
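
A sketch of the mapping change (structure and field names are assumptions):

	/* Map the memory window write-combined rather than uncached. */
	mw->vbase = ioremap_wc(mw->phys_addr, mw->phys_size);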

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Use NUMA memory in Intel driver
Allen Hubbe [Tue, 19 May 2015 16:04:52 +0000 (12:04 -0400)]
NTB: Use NUMA memory in Intel driver

Allocate memory for the NUMA node of the NTB device.
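
A minimal sketch of node-aware allocation (variable and field names are
assumptions):

	int node = dev_to_node(&ndev->pdev->dev);

	buf = kzalloc_node(size, GFP_KERNEL, node);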

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Use NUMA memory and DMA chan in transport
Allen Hubbe [Mon, 18 May 2015 10:20:47 +0000 (06:20 -0400)]
NTB: Use NUMA memory and DMA chan in transport

Allocate memory and request the DMA channel for the same NUMA node as
the NTB device.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Rate limit ntb_qp_link_work
Allen Hubbe [Mon, 11 May 2015 14:08:26 +0000 (10:08 -0400)]
NTB: Rate limit ntb_qp_link_work

When the ntb transport is connecting and waiting for the peer, the debug
console receives lots of debug level messages about the remote qp link
status being down.  Rate limit those messages.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Add tool test client
Allen Hubbe [Thu, 21 May 2015 06:51:39 +0000 (02:51 -0400)]
NTB: Add tool test client

This is a simple debugging driver that enables the doorbell and
scratch pad registers to be read and written from the debugfs.  This
tool enables more complicated debugging to be scripted from user space.
This driver may be used to test that your ntb hardware and drivers are
functioning at a basic level.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Add ping pong test client
Allen Hubbe [Wed, 15 Apr 2015 15:12:41 +0000 (11:12 -0400)]
NTB: Add ping pong test client

This is a simple ping pong driver that exercises the scratch pads and
doorbells of the ntb hardware.  This driver may be used to test that
your ntb hardware and drivers are functioning at a basic level.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Add parameters for Intel SNB B2B addresses
Allen Hubbe [Mon, 11 May 2015 09:45:30 +0000 (05:45 -0400)]
NTB: Add parameters for Intel SNB B2B addresses

Add module parameters for the addresses to be used in B2B topology.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Reset transport QP link stats on down
Allen Hubbe [Tue, 12 May 2015 12:09:15 +0000 (08:09 -0400)]
NTB: Reset transport QP link stats on down

Reset the link stats when the link goes down.  In particular, the TX and
RX index and count must be reset, or else the TX side will be sending
packets to the RX side where the RX side is not expecting them.  Reset
all the stats, to be consistent.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Do not advance transport RX on link down
Allen Hubbe [Tue, 12 May 2015 10:24:27 +0000 (06:24 -0400)]
NTB: Do not advance transport RX on link down

On link down, don't advance RX index to the next entry.  The next entry
should never be valid after receiving the link down flag.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Differentiate transport link down messages
Allen Hubbe [Tue, 12 May 2015 10:55:44 +0000 (06:55 -0400)]
NTB: Differentiate transport link down messages

The same message "qp %d: Link Down\n" was printed at two locations in
ntb_transport.  Change the messages so they are distinct.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Check the device ID to set errata flags
Dave Jiang [Fri, 8 May 2015 16:24:40 +0000 (12:24 -0400)]
NTB: Check the device ID to set errata flags

Set errata flags for the specific device IDs to which they apply,
instead of the whole Xeon hardware class.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Enable link for Intel root port mode in probe
Dave Jiang [Tue, 19 May 2015 20:59:34 +0000 (16:59 -0400)]
NTB: Enable link for Intel root port mode in probe

Link training should be enabled in the driver probe for root port mode.
We should not have to wait for transport to be loaded for this to
happen.  Otherwise the ntb device will not show up on the transparent
bridge side of the link.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Read peer info from local SPAD in transport
Dave Jiang [Tue, 2 Jun 2015 07:45:07 +0000 (03:45 -0400)]
NTB: Read peer info from local SPAD in transport

The transport was writing and then reading the peer scratch pad,
essentially reading what it just wrote instead of exchanging any
information with the peer.  The transport expects the peer values to be
the same as the local values, so this issue was not obvious.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Split ntb_hw_intel and ntb_transport drivers
Allen Hubbe [Thu, 9 Apr 2015 14:33:20 +0000 (10:33 -0400)]
NTB: Split ntb_hw_intel and ntb_transport drivers

Change ntb_hw_intel to use the new NTB hardware abstraction layer.

Split ntb_transport into its own driver.  Change it to use the new NTB
hardware abstraction layer.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  NTB: Add NTB hardware abstraction layer
Allen Hubbe [Thu, 9 Apr 2015 14:33:20 +0000 (10:33 -0400)]
NTB: Add NTB hardware abstraction layer

Abstract the NTB device behind a programming interface, so that it can
support different hardware and client drivers.

Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
9 years ago  Merge branch 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sat, 4 Jul 2015 16:22:51 +0000 (09:22 -0700)]
Merge branch 'irq-urgent-for-linus' of git://git./linux/kernel/git/tip/tip

Pull irq update from Thomas Gleixner:
 "The last update for 4.2 is just moving a macro from a local header to
  the global one, so it can be used in architecture code as well.

  Cleanup of the now empty local header is 4.3 material"

* 'irq-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip: Move IRQCHIP_DECLARE macro to include/linux/irqchip.h

9 years ago  Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sat, 4 Jul 2015 15:58:50 +0000 (08:58 -0700)]
Merge branch 'x86-urgent-for-linus' of git://git./linux/kernel/git/tip/tip

Pull x86 fixes from Ingo Molnar:
 "Two FPU rewrite related fixes.  This addresses all known x86
  regressions at this stage.  Also some other misc fixes"

* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/fpu: Fix boot crash in the early FPU code
  x86/asm/entry/64: Update path names
  x86/fpu: Fix FPU related boot regression when CPUID masking BIOS feature is enabled
  x86/boot/setup: Clean up the e820_reserve_setup_data() code
  x86/kaslr: Fix typo in the KASLR_FLAG documentation

9 years ago  Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sat, 4 Jul 2015 15:56:53 +0000 (08:56 -0700)]
Merge branch 'sched-urgent-for-linus' of git://git./linux/kernel/git/tip/tip

Pull scheduler fixes from Ingo Molnar:
 "Debug info and other statistics fixes and related enhancements"

* 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched/numa: Fix numa balancing stats in /proc/pid/sched
  sched/numa: Show numa_group ID in /proc/sched_debug task listings
  sched/debug: Move print_cfs_rq() declaration to kernel/sched/sched.h
  sched/stat: Expose /proc/pid/schedstat if CONFIG_SCHED_INFO=y
  sched/stat: Simplify the sched_info accounting dependency