Stefan Agner [Wed, 4 Jul 2018 15:07:45 +0000 (17:07 +0200)]
mmc: sdhci-esdhc-imx: allow 1.8V modes without 100/200MHz pinctrl states
commit
92748beac07c471d995fbec642b63572dc01b3dc upstream.
If pinctrl nodes for 100/200MHz are missing, the controller should
not select any mode which needs signal frequencies of 100MHz or higher.
To prevent such speed modes the driver currently uses the quirk flag
SDHCI_QUIRK2_NO_1_8_V. This works nicely for SD cards, since 1.8V
signaling is required for all faster modes and slower modes use 3.3V
signaling only.
However, there are eMMC modes which use 1.8V signaling and run below
100MHz, e.g. DDR52 at 1.8V. Using SDHCI_QUIRK2_NO_1_8_V prevents this
mode. When using a fixed 1.8V regulator as vqmmc-supply
the stack has no valid mode to use. In this tenuous situation the
kernel continuously prints voltage switching errors:
mmc1: Switching to 3.3V signalling voltage failed
Avoid using SDHCI_QUIRK2_NO_1_8_V and prevent faster modes by
altering the SDHCI capability register. With that the stack is able
to select 1.8V modes even if no faster pinctrl states are available:
# cat /sys/kernel/debug/mmc1/ios
...
timing spec: 8 (mmc DDR52)
signal voltage: 1 (1.80 V)
...
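A minimal sketch of that approach (capability bit names are the standard ones from drivers/mmc/host/sdhci.h; the imx_data field names and the exact hook used upstream are assumptions): mask the 100MHz+ modes out of the value read back for SDHCI_CAPABILITIES_1 when the fast pinctrl states are missing, instead of setting SDHCI_QUIRK2_NO_1_8_V:
  /* Sketch: in the esdhc register-read accessor, hide only the modes
   * that need 100MHz+ signalling; 1.8V DDR52 remains selectable. */
  if (unlikely(reg == SDHCI_CAPABILITIES_1) &&
      esdhc_is_usdhc(imx_data) && IS_ERR_OR_NULL(imx_data->pins_100mhz)) {
      val &= ~(SDHCI_SUPPORT_SDR50 |
               SDHCI_SUPPORT_SDR104 |
               SDHCI_SUPPORT_DDR50);
  }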
Link: http://lkml.kernel.org/r/20180628081331.13051-1-stefan@agner.ch
Signed-off-by: Stefan Agner <stefan@agner.ch>
Fixes: ad93220de7da ("mmc: sdhci-esdhc-imx: change pinctrl state according to uhs mode")
Cc: <stable@vger.kernel.org> # v4.13+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Burton [Thu, 5 Jul 2018 21:37:52 +0000 (14:37 -0700)]
MIPS: Fix ioremap() RAM check
commit
523402fa9101090c91d2033b7ebdfdcf65880488 upstream.
We currently attempt to check whether a physical address range provided
to __ioremap() may be in use by the page allocator by examining the
value of PageReserved for each page in the region - lowmem pages not
marked reserved are presumed to be in use by the page allocator, and
requests to ioremap them fail.
The way we check this has been broken since commit 92923ca3aace ("mm:
meminit: only set page reserved in the memblock region"), because
memblock will typically not have any knowledge of non-RAM pages and
therefore those pages will not have the PageReserved flag set. Thus when
we attempt to ioremap a region outside of RAM we incorrectly fail
believing that the region is RAM that may be in use.
In most cases ioremap() on MIPS will take a fast-path to use the
unmapped kseg1 or xkphys virtual address spaces and never hit this path,
so the only way to hit it is for a MIPS32 system to attempt to ioremap()
an address range in lowmem with flags other than _CACHE_UNCACHED.
Perhaps the most straightforward way to do this is using
ioremap_uncached_accelerated(), which is how the problem was discovered.
Fix this by making use of walk_system_ram_range() to test the address
range provided to __ioremap() against only RAM pages, rather than all
lowmem pages. This means that if we have a lowmem I/O region, which is
very common for MIPS systems, we're free to ioremap() address ranges
within it. A nice bonus is that the test is no longer limited to lowmem.
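A sketch of that scheme (helper name and pfn arithmetic are illustrative; walk_system_ram_range() only invokes the callback for System RAM resources, and returns a negative value if the range contains no RAM at all):
  /* Non-zero if any RAM page in this chunk may be in use by the page
   * allocator, i.e. is valid and not marked reserved. */
  static int __ioremap_check_ram(unsigned long start_pfn,
                                 unsigned long nr_pages, void *arg)
  {
      unsigned long i;

      for (i = 0; i < nr_pages; i++)
          if (pfn_valid(start_pfn + i) &&
              !PageReserved(pfn_to_page(start_pfn + i)))
              return 1;

      return 0;
  }

  /* In __ioremap(): only RAM pages are tested, so lowmem I/O regions
   * (and highmem) can be remapped. */
  if (walk_system_ram_range(PFN_DOWN(phys_addr),
                            PFN_UP(phys_addr + size) - PFN_DOWN(phys_addr),
                            NULL, __ioremap_check_ram) == 1)
      return NULL;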
The approach here matches the way x86 performed the same test after
commit c81c8a1eeede ("x86, ioremap: Speed up check for RAM pages") until
x86 moved towards a slightly more complicated check using walk_mem_res()
for unrelated reasons with commit 0e4c12b45aa8 ("x86/mm, resource: Use
PAGE_KERNEL protection for ioremap of memory pages").
Signed-off-by: Paul Burton <paul.burton@mips.com>
Reported-by: Serge Semin <fancer.lancer@gmail.com>
Tested-by: Serge Semin <fancer.lancer@gmail.com>
Fixes: 92923ca3aace ("mm: meminit: only set page reserved in the memblock region")
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.2+
Patchwork: https://patchwork.linux-mips.org/patch/19786/
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Burton [Fri, 22 Jun 2018 17:55:46 +0000 (10:55 -0700)]
MIPS: Use async IPIs for arch_trigger_cpumask_backtrace()
commit
b63e132b6433a41cf311e8bc382d33fd2b73b505 upstream.
The current MIPS implementation of arch_trigger_cpumask_backtrace() is
broken because it attempts to use synchronous IPIs despite the fact that
it may be run with interrupts disabled.
This means that when arch_trigger_cpumask_backtrace() is invoked, for
example by the RCU CPU stall watchdog, we may:
- Deadlock due to use of synchronous IPIs with interrupts disabled,
causing the CPU that's attempting to generate the backtrace output
to hang itself.
- Not succeed in generating the desired output from remote CPUs.
- Produce warnings about this from smp_call_function_many(), for
example:
[42760.526910] INFO: rcu_sched detected stalls on CPUs/tasks:
[42760.535755] 0-...!: (1 GPs behind) idle=ade/140000000000000/0 softirq=526944/526945 fqs=0
[42760.547874] 1-...!: (0 ticks this GP) idle=e4a/140000000000000/0 softirq=547885/547885 fqs=0
[42760.559869] (detected by 2, t=2162 jiffies, g=266689, c=266688, q=33)
[42760.568927] ------------[ cut here ]------------
[42760.576146] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:416 smp_call_function_many+0x88/0x20c
[42760.587839] Modules linked in:
[42760.593152] CPU: 2 PID: 1216 Comm: sh Not tainted 4.15.4-00373-gee058bb4d0c2 #2
[42760.603767] Stack : 8e09bd20 8e09bd20 8e09bd20 fffffff0 00000007 00000006 00000000 8e09bca8
[42760.616937]         95b2b379 95b2b379 807a0080 00000007 81944518 0000018a 00000032 00000000
[42760.630095]         00000000 00000030 80000000 00000000 806eca74 00000009 8017e2b8 000001a0
[42760.643169]         00000000 00000002 00000000 8e09baa4 00000008 808b8008 86d69080 8e09bca0
[42760.656282]         8e09ad50 805e20aa 00000000 00000000 00000000 8017e2b8 00000009 801070ca
[42760.669424] ...
[42760.673919] Call Trace:
[42760.678672] [<27fde568>] show_stack+0x70/0xf0
[42760.685417] [<84751641>] dump_stack+0xaa/0xd0
[42760.692188] [<699d671c>] __warn+0x80/0x92
[42760.698549] [<68915d41>] warn_slowpath_null+0x28/0x36
[42760.705912] [<f7c76c1c>] smp_call_function_many+0x88/0x20c
[42760.713696] [<6bbdfc2a>] arch_trigger_cpumask_backtrace+0x30/0x4a
[42760.722216] [<f845bd33>] rcu_dump_cpu_stacks+0x6a/0x98
[42760.729580] [<796e7629>] rcu_check_callbacks+0x672/0x6ac
[42760.737476] [<059b3b43>] update_process_times+0x18/0x34
[42760.744981] [<6eb94941>] tick_sched_handle.isra.5+0x26/0x38
[42760.752793] [<478d3d70>] tick_sched_timer+0x1c/0x50
[42760.759882] [<e56ea39f>] __hrtimer_run_queues+0xc6/0x226
[42760.767418] [<e88bbcae>] hrtimer_interrupt+0x88/0x19a
[42760.775031] [<6765a19e>] gic_compare_interrupt+0x2e/0x3a
[42760.782761] [<0558bf5f>] handle_percpu_devid_irq+0x78/0x168
[42760.790795] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
[42760.798117] [<1b6d462c>] gic_handle_local_int+0x38/0x86
[42760.805545] [<b2ada1c7>] gic_irq_dispatch+0xa/0x14
[42760.812534] [<90c11ba2>] generic_handle_irq+0x1e/0x2c
[42760.820086] [<c7521934>] do_IRQ+0x16/0x20
[42760.826274] [<9aef3ce6>] plat_irq_dispatch+0x62/0x94
[42760.833458] [<6a94b53c>] except_vec_vi_end+0x70/0x78
[42760.840655] [<22284043>] smp_call_function_many+0x1ba/0x20c
[42760.848501] [<54022b58>] smp_call_function+0x1e/0x2c
[42760.855693] [<ab9fc705>] flush_tlb_mm+0x2a/0x98
[42760.862730] [<0844cdd0>] tlb_flush_mmu+0x1c/0x44
[42760.869628] [<cb259b74>] arch_tlb_finish_mmu+0x26/0x3e
[42760.877021] [<1aeaaf74>] tlb_finish_mmu+0x18/0x66
[42760.883907] [<b3fce717>] exit_mmap+0x76/0xea
[42760.890428] [<c4c8a2f6>] mmput+0x80/0x11a
[42760.896632] [<a41a08f4>] do_exit+0x1f4/0x80c
[42760.903158] [<ee01cef6>] do_group_exit+0x20/0x7e
[42760.909990] [<13fa8d54>] __wake_up_parent+0x0/0x1e
[42760.917045] [<46cf89d0>] smp_call_function_many+0x1a2/0x20c
[42760.924893] [<8c21a93b>] syscall_common+0x14/0x1c
[42760.931765] ---[ end trace 02aa09da9dc52a60 ]---
[42760.938342] ------------[ cut here ]------------
[42760.945311] WARNING: CPU: 2 PID: 1216 at kernel/smp.c:291 smp_call_function_single+0xee/0xf8
...
This patch switches MIPS' arch_trigger_cpumask_backtrace() to use async
IPIs & smp_call_function_single_async() in order to resolve this
problem. We ensure use of the pre-allocated call_single_data_t
structures is serialized by maintaining a cpumask indicating that
they're busy, and refusing to attempt to send an IPI when a CPU's bit is
set in this mask. This should only happen if a CPU hasn't responded to a
previous backtrace IPI - ie. if it's hung - and we print a warning to
the console in this case.
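A condensed sketch of that scheme (the per-CPU and helper names below follow the description above but should be read as illustrative): each CPU gets a pre-allocated call_single_data_t, and a busy cpumask prevents re-sending before the previous backtrace IPI has completed:
  static DEFINE_PER_CPU(call_single_data_t, backtrace_csd);
  static cpumask_t backtrace_csd_busy;

  static void handle_backtrace(void *info)
  {
      nmi_cpu_backtrace(get_irq_regs());
      cpumask_clear_cpu(smp_processor_id(), &backtrace_csd_busy);
  }

  static void raise_backtrace(cpumask_t *mask)
  {
      call_single_data_t *csd;
      int cpu;

      for_each_cpu(cpu, mask) {
          /* If a previous backtrace IPI is still outstanding the target
           * CPU is presumably hung; warn instead of deadlocking. */
          if (cpumask_test_and_set_cpu(cpu, &backtrace_csd_busy)) {
              pr_warn("Unable to send backtrace IPI to CPU%d - perhaps it hung?\n",
                      cpu);
              continue;
          }

          csd = &per_cpu(backtrace_csd, cpu);
          csd->func = handle_backtrace;
          smp_call_function_single_async(cpu, csd);
      }
  }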
I've marked this for stable branches as far back as v4.9, to which it
applies cleanly. Strictly speaking the faulty MIPS implementation can be
traced further back to commit 856839b76836 ("MIPS: Add
arch_trigger_all_cpu_backtrace() function") in v3.19, but kernel
versions v3.19 through v4.8 will require further work to backport due to
the rework performed in commit 9a01c3ed5cdb ("nmi_backtrace: add more
trigger_*_cpu_backtrace() methods").
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19597/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.9+
Fixes: 856839b76836 ("MIPS: Add arch_trigger_all_cpu_backtrace() function")
Fixes: 9a01c3ed5cdb ("nmi_backtrace: add more trigger_*_cpu_backtrace() methods")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Burton [Fri, 22 Jun 2018 17:55:45 +0000 (10:55 -0700)]
MIPS: Call dump_stack() from show_regs()
commit
5a267832c2ec47b2dad0fdb291a96bb5b8869315 upstream.
The generic nmi_cpu_backtrace() function calls show_regs() when a struct
pt_regs is available, and dump_stack() otherwise. If we were to make use
of the generic nmi_cpu_backtrace() with MIPS' current implementation of
show_regs() this would mean that we see only register data with no
accompanying stack information, in contrast with our current
implementation which calls dump_stack() regardless of whether register
state is available.
In preparation for making use of the generic nmi_cpu_backtrace() to
implement arch_trigger_cpumask_backtrace(), have our implementation of
show_regs() call dump_stack() and drop the explicit dump_stack() call in
arch_dump_stack() which is invoked by arch_trigger_cpumask_backtrace().
This will allow the output we produce to remain the same after a later
patch switches to using nmi_cpu_backtrace(). It may mean that we produce
extra stack output in other uses of show_regs(), but this:
1) Seems harmless.
2) Is good for consistency between arch_trigger_cpumask_backtrace()
and other users of show_regs().
3) Matches the behaviour of the ARM & PowerPC architectures.
Marked for stable back to v4.9 as a prerequisite of the following patch
"MIPS: Use async IPIs for arch_trigger_cpumask_backtrace()".
Signed-off-by: Paul Burton <paul.burton@mips.com>
Patchwork: https://patchwork.linux-mips.org/patch/19596/
Cc: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Huacai Chen <chenhc@lemote.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # v4.9+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kai Chieh Chuang [Fri, 27 Apr 2018 02:11:35 +0000 (10:11 +0800)]
ASoC: mediatek: preallocate pages use platform device
commit
5845e6155d8f4a4a9bae2d4c1d1bb4a4d9a925c2 upstream.
Preallocated pages should be allocated using the platform device,
since we set the DMA mask for the platform device.
Signed-off-by: KaiChieh Chuang <kaichieh.chuang@mediatek.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sean Young [Sun, 8 Apr 2018 10:36:40 +0000 (06:36 -0400)]
media: rc: mce_kbd decoder: fix stuck keys
commit
63039c29f7a4ce8a8bd165173840543c0098d7b0 upstream.
The MCE Remote sends a 0 scancode when keys are released. If this is not
received or decoded, then keys can get "stuck"; the keyup event is not
sent since the input_sync() is missing from the timeout handler.
Cc: stable@vger.kernel.org
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Wed, 11 Jul 2018 14:29:25 +0000 (16:29 +0200)]
Linux 4.14.55
Sebastian Andrzej Siewior [Thu, 28 Jun 2018 06:26:05 +0000 (23:26 -0700)]
Revert mm/vmstat.c: fix vmstat_update() preemption BUG
commit
28557cc106e6d2aa8b8c5c7687ea9f8055ff3911 upstream.
Revert commit c7f26ccfb2c3 ("mm/vmstat.c: fix vmstat_update() preemption
BUG"). Steven saw a "using smp_processor_id() in preemptible" message
and added a preempt_disable() section around it to keep it quiet. This
is not the right thing to do; it does not fix the real problem.
vmstat_update() is invoked by a kworker on a specific CPU. This worker
is bound to that CPU. The name of the worker was "kworker/1:1", so it
should have been a worker which was bound to CPU1. A worker which can
run on any CPU would have a `u' before the first digit.
smp_processor_id() can be used in a preempt-enabled region as long as
the task is bound to a single CPU, which is the case here. If it could
run on an arbitrary CPU then that would be the real problem and the one
we should seek to resolve.
Not only must smp_processor_id() not be migrated to another CPU, but
also refresh_cpu_vm_stats(), which might otherwise access the wrong
per-CPU variables. Not to mention that other code relies on the fact
that such a worker runs on one specific CPU only.
Therefore revert that commit; we should instead look at what broke the
affinity mask of the kworker.
Link: http://lkml.kernel.org/r/20180504104451.20278-1-bigeasy@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Steven J. Hill <steven.hill@cavium.com>
Cc: Tejun Heo <htejun@gmail.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Andrzej Siewior [Thu, 24 May 2018 13:26:48 +0000 (15:26 +0200)]
sched, tracing: Fix trace_sched_pi_setprio() for deboosting
commit
4ff648decf4712d39f184fc2df3163f43975575a upstream.
Since the following commit:
b91473ff6e97 ("sched,tracing: Update trace_sched_pi_setprio()")
the sched_pi_setprio trace point shows the "newprio" during a deboost:
|futex sched_pi_setprio: comm=futex_requeue_p pid=2234 oldprio=98 newprio=98
|futex sched_switch: prev_comm=futex_requeue_p prev_pid=2234 prev_prio=120
This patch open codes __rt_effective_prio() in the tracepoint as the
'newprio' to get the old behaviour back / the correct priority:
|futex sched_pi_setprio: comm=futex_requeue_p pid=2220 oldprio=98 newprio=120
|futex sched_switch: prev_comm=futex_requeue_p prev_pid=2220 prev_prio=120
Peter suggested to open code the new priority so people using tracehook
could get the deadline data out.
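The gist of the change inside the tracepoint's TP_fast_assign(), sketched (the exact expression in the upstream patch may differ slightly, and SCHED_DEADLINE is not covered):
  TP_fast_assign(
      memcpy(__entry->comm, tsk->comm, TASK_COMM_LEN);
      __entry->pid      = tsk->pid;
      __entry->oldprio  = tsk->prio;
      /* Open-coded __rt_effective_prio(): on a deboost pi_task is NULL
       * (or lower priority), so newprio falls back to the task's normal
       * priority instead of repeating the old boosted prio. */
      __entry->newprio  = pi_task ?
              min(tsk->normal_prio, pi_task->prio) :
              tsk->normal_prio;
  ),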
Reported-by: Mansky Christian <man@keba.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: b91473ff6e97 ("sched,tracing: Update trace_sched_pi_setprio()")
Link: http://lkml.kernel.org/r/20180524132647.gg6ziuogczdmjjzu@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Carpenter [Tue, 5 Jun 2018 09:36:30 +0000 (12:36 +0300)]
staging: comedi: quatech_daqp_cs: fix no-op loop daqp_ao_insn_write()
commit
1376b0a2160319125c3a2822e8c09bd283cd8141 upstream.
There is a '>' vs '<' typo so this loop is a no-op.
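For illustration (variable names are assumed), the inverted comparison makes the loop body unreachable:
  /* Buggy: 'i > insn->n' is false on entry, so no sample is written. */
  for (i = 0; i > insn->n; i++)
      ...
  /* Fixed: iterate over all requested samples. */
  for (i = 0; i < insn->n; i++)
      ...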
Fixes: d35dcc89fc93 ("staging: comedi: quatech_daqp_cs: fix daqp_ao_insn_write()")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jann Horn [Mon, 25 Jun 2018 15:22:00 +0000 (17:22 +0200)]
netfilter: nf_log: don't hold nf_log_mutex during user access
commit
ce00bf07cc95a57cd20b208e02b3c2604e532ae8 upstream.
The old code would indefinitely block other users of nf_log_mutex if
a userspace access in proc_dostring() blocked e.g. due to a userfaultfd
region. Fix it by moving proc_dostring() out of the locked region.
This is a followup to commit 266d07cb1c9a ("netfilter: nf_log: fix
sleeping function called from invalid context"), which changed this code
from using rcu_read_lock() to taking nf_log_mutex.
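The shape of the fix, sketched with simplified locking and naming (the real nf_log_proc_dostring() works on a temporary ctl_table; the snippet below is only an approximation of that flow): snapshot the logger name under nf_log_mutex, drop the mutex, and only then let proc_dostring() touch user memory:
  char buf[NFLOGGER_NAME_LEN];
  const struct nf_logger *logger;

  mutex_lock(&nf_log_mutex);
  logger = rcu_dereference_protected(net->nf.nf_loggers[tindex],
                                     lockdep_is_held(&nf_log_mutex));
  strlcpy(buf, logger ? logger->name : "NONE", sizeof(buf));
  mutex_unlock(&nf_log_mutex);

  table->data = buf;
  /* The userspace access (which can block indefinitely, e.g. on a
   * userfaultfd-backed mapping) now runs without nf_log_mutex held. */
  r = proc_dostring(table, write, buffer, lenp, ppos);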
Fixes: 266d07cb1c9a ("netfilter: nf_log: fix sleeping function calle[...]")
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tokunori Ikegami [Wed, 30 May 2018 09:32:29 +0000 (18:32 +0900)]
mtd: cfi_cmdset_0002: Change erase functions to check chip good only
commit
79ca484b613041ca223f74b34608bb6f5221724b upstream.
Currently the erase functions check both chip ready and chip good.
But chip ready alone is not enough to determine the operation status,
so change them to check chip good instead. The retry handling is kept
so that the error handling remains in place.
Signed-off-by: Tokunori Ikegami <ikegami@allied-telesis.co.jp>
Reviewed-by: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
Cc: Chris Packham <chris.packham@alliedtelesis.co.nz>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Boris Brezillon <boris.brezillon@free-electrons.com>
Cc: Marek Vasut <marek.vasut@gmail.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Cyrille Pitchen <cyrille.pitchen@wedev4u.fr>
Cc: linux-mtd@lists.infradead.org
Cc: stable@vger.kernel.org
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tokunori Ikegami [Wed, 30 May 2018 09:32:28 +0000 (18:32 +0900)]
mtd: cfi_cmdset_0002: Change erase functions to retry for error
commit
45f75b8a919a4255f52df454f1ffdee0e42443b2 upstream.
The word write functions retry on error, but no such retry is
implemented for the erase functions. Change the erase functions to
retry in the same way.
This is needed to work around a flash erase error that occurs only
once; it was caused by the error case of chip_good() in
do_erase_oneblock(). It was observed on the MACRONIX flash device
MX29GL512FHT2I-11G, although the error is currently not reproducible.
The flash controller is the parallel flash interface integrated on
BCM53003.
Signed-off-by: Tokunori Ikegami <ikegami@allied-telesis.co.jp>
Reviewed-by: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
Cc: Chris Packham <chris.packham@alliedtelesis.co.nz>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Boris Brezillon <boris.brezillon@free-electrons.com>
Cc: Marek Vasut <marek.vasut@gmail.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Cyrille Pitchen <cyrille.pitchen@wedev4u.fr>
Cc: linux-mtd@lists.infradead.org
Cc: stable@vger.kernel.org
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tokunori Ikegami [Wed, 30 May 2018 09:32:27 +0000 (18:32 +0900)]
mtd: cfi_cmdset_0002: Change definition naming to retry write operation
commit
85a82e28b023de9b259a86824afbd6ba07bd6475 upstream.
The definition can be used for other program and erase operations also.
So change the naming to MAX_RETRIES from MAX_WORD_RETRIES.
Signed-off-by: Tokunori Ikegami <ikegami@allied-telesis.co.jp>
Reviewed-by: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
Cc: Chris Packham <chris.packham@alliedtelesis.co.nz>
Cc: Brian Norris <computersforpeace@gmail.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Boris Brezillon <boris.brezillon@free-electrons.com>
Cc: Marek Vasut <marek.vasut@gmail.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Cyrille Pitchen <cyrille.pitchen@wedev4u.fr>
Cc: linux-mtd@lists.infradead.org
Cc: stable@vger.kernel.org
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ross Zwisler [Tue, 26 Jun 2018 22:30:41 +0000 (16:30 -0600)]
dm: prevent DAX mounts if not supported
commit
dbc626597c39b24cefce09fbd8e9dea85869a801 upstream.
Currently device_supports_dax() just checks to see if the QUEUE_FLAG_DAX
flag is set on the device's request queue to decide whether or not the
device supports filesystem DAX. Really we should be using
bdev_dax_supported() like filesystems do at mount time. This performs
other tests like checking to make sure the dax_direct_access() path works.
We also explicitly clear QUEUE_FLAG_DAX on the DM device's request queue if
any of the underlying devices do not support DAX. This makes the handling
of QUEUE_FLAG_DAX consistent with the setting/clearing of most other flags
in dm_table_set_restrictions().
Now that bdev_dax_supported() explicitly checks for QUEUE_FLAG_DAX, this
will ensure that filesystems built upon DM devices will only be able to
mount with DAX if all underlying devices also support DAX.
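A sketch of the iterate_devices callback after the change (using the usual dm callback signature): every underlying device must pass the same probe filesystems use at mount time, rather than merely having QUEUE_FLAG_DAX set:
  static int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
                                 sector_t start, sector_t len, void *data)
  {
      /* Run the full mount-time DAX probe on each member device. */
      return bdev_dax_supported(dev->bdev, PAGE_SIZE);
  }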
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Fixes: commit 545ed20e6df6 ("dm: add infrastructure for DAX support")
Cc: stable@vger.kernel.org
Acked-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mike Snitzer [Tue, 5 Dec 2017 04:28:32 +0000 (23:28 -0500)]
dm: set QUEUE_FLAG_DAX accordingly in dm_table_set_restrictions()
commit
ad3793fc3945173f64d82d05d3ecde41f6c0435c upstream.
Set QUEUE_FLAG_DAX in dm_table_set_restrictions(), rather than having
DAX support be unique by setting it based on table type in
dm_setup_md_queue().
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ross Zwisler [Tue, 26 Jun 2018 22:30:40 +0000 (16:30 -0600)]
dax: check for QUEUE_FLAG_DAX in bdev_dax_supported()
commit
15256f6cc4b44f2e70503758150267fd2a53c0d6 upstream.
Add an explicit check for QUEUE_FLAG_DAX to __bdev_dax_supported(). This
is needed for DM configurations where the first element in the dm-linear or
dm-stripe target supports DAX, but other elements do not. Without this
check __bdev_dax_supported() will pass for such devices, letting a
filesystem on that device mount with the DAX option.
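A sketch of the added check at the top of __bdev_dax_supported() (error reporting elided):
  struct request_queue *q = bdev_get_queue(bdev);

  /* Fail fast if the whole device did not advertise DAX support,
   * e.g. a dm-linear table mixing DAX and non-DAX members. */
  if (!q || !blk_queue_dax(q))
      return false;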
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Mike Snitzer <snitzer@redhat.com>
Fixes: commit 545ed20e6df6 ("dm: add infrastructure for DAX support")
Cc: stable@vger.kernel.org
Acked-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Jiang [Wed, 30 May 2018 20:03:46 +0000 (13:03 -0700)]
dax: change bdev_dax_supported() to support boolean returns
commit
80660f20252d6f76c9f203874ad7c7a4a8508cf8 upstream.
The function's return values are confusing given the way the function
is named. We expect a true or false return value, but it actually
returns 0/-errno. This makes the code very confusing. Change the
return value to a bool: return true if DAX is supported and false if
it is not.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Darrick J. Wong [Wed, 30 May 2018 20:03:45 +0000 (13:03 -0700)]
fs: allow per-device dax status checking for filesystems
commit
ba23cba9b3bdc967aabdc6ff1e3e9b11ce05bb4f upstream.
Change bdev_dax_supported so it takes a bdev parameter. This enables
multi-device filesystems like xfs to check that a dax device can work for
the particular filesystem. Once that's in place, actually fix all the
parts of XFS where we need to be able to distinguish between datadev and
rtdev.
This patch fixes the problem where we screw up the dax support checking
in xfs if the datadev and rtdev have different dax capabilities.
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
[rez: Re-added __bdev_dax_supported() for !CONFIG_FS_DAX cases]
Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Martin Kaiser [Mon, 18 Jun 2018 20:41:03 +0000 (22:41 +0200)]
mtd: rawnand: mxc: set spare area size register explicitly
commit
3f77f244d8ec28e3a0a81240ffac7d626390060c upstream.
The v21 version of the NAND flash controller contains a Spare Area Size
Register (SPAS) at offset 0x10. Its setting defaults to the maximum
spare area size of 218 bytes. The size that is set in this register is
used by the controller when it calculates the ECC bytes internally in
hardware.
Usually, this register is updated from settings in the IIM fuses when
the system is booting from NAND flash. For other boot media, however,
the SPAS register remains at the default setting, which may not work for
the particular flash chip on the board. The same goes for flash chips
whose configuration cannot be set in the IIM fuses (e.g. chips with 2k
sector size and 128 bytes spare area size can't be configured in the IIM
fuses on imx25 systems).
Set the SPAS register explicitly during the preset operation. Derive the
register value from mtd->oobsize that was detected during probe by
decoding the flash chip's ID bytes.
While at it, rename the define for the spare area register's offset to
NFC_V21_RSLTSPARE_AREA. The register at offset 0x10 on v1 controllers is
different from the register on v21 controllers.
Fixes: d484018 ("mtd: mxc_nand: set NFC registers after reset")
Cc: stable@vger.kernel.org
Signed-off-by: Martin Kaiser <martin@kaiser.cx>
Reviewed-by: Sascha Hauer <s.hauer@pengutronix.de>
Reviewed-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brad Love [Tue, 6 Mar 2018 19:15:34 +0000 (14:15 -0500)]
media: cx25840: Use subdev host data for PLL override
commit
3ee9bc12342cf546313d300808ff47d7dbb8e7db upstream.
The cx25840 driver currently configures the 885, 887, and 888 using
default divisors for each chip. This change checks whether the cx23885
driver has passed the cx25840 a non-default clock rate for a specific
chip. If a cx23885 board has left clk_freq at 0, the default clock
values will be used to configure the PLLs.
This patch only has an effect on 888 boards which set clk_freq to 25M.
Signed-off-by: Brad Love <brad@nextdimension.cc>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rasmus Villemoes [Sun, 8 Apr 2018 21:35:28 +0000 (23:35 +0200)]
Kbuild: fix # escaping in .cmd files for future Make
commit
9564a8cf422d7b58f6e857e3546d346fa970191e upstream.
I tried building using a freshly built Make (4.2.1-69-g8a731d1), but
already the objtool build broke with
orc_dump.c: In function ‘orc_dump’:
orc_dump.c:106:2: error: ‘elf_getshnum’ is deprecated [-Werror=deprecated-declarations]
if (elf_getshdrnum(elf, &nr_sections)) {
Turns out that with that new Make, the backslash was not removed, so cpp
didn't see a #include directive, grep found nothing, and
-DLIBELF_USE_DEPRECATED was wrongly put in CFLAGS.
Now, that new Make behaviour is documented in their NEWS file:
* WARNING: Backward-incompatibility!
Number signs (#) appearing inside a macro reference or function invocation
no longer introduce comments and should not be escaped with backslashes:
thus a call such as:
foo := $(shell echo '#')
is legal. Previously the number sign needed to be escaped, for example:
foo := $(shell echo '\#')
Now this latter will resolve to "\#". If you want to write makefiles
portable to both versions, assign the number sign to a variable:
C := \#
foo := $(shell echo '$C')
This was claimed to be fixed in 3.81, but wasn't, for some reason.
To detect this change search for 'nocomment' in the .FEATURES variable.
This also fixes up the two make-cmd instances to replace # with $(pound)
rather than with \#. There might very well be other places that need
similar fixup in preparation for whatever future Make release contains
the above change, but at least this builds an x86_64 defconfig with the
new make.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=197847
Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Tue, 10 Jul 2018 14:01:30 +0000 (16:01 +0200)]
Revert "dpaa_eth: fix error in dpaa_remove()"
This reverts commit 5bbb99d2fde047df596379be6c58e265e2ddbe1f which is
commit 88075256ee817041d68c2387f29065b5cb2b342a upstream.
Jiri writes that this was an incorrect fix, and Madalin-cristian says it
was fixed differently in a later patch. So just revert this from
4.14.y.
Reported-by: Jiri Slaby <jslaby@suse.cz>
Cc: Madalin Bucur <madalin.bucur@nxp.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jaegeuk Kim [Sat, 31 Mar 2018 00:58:13 +0000 (17:58 -0700)]
f2fs: truncate preallocated blocks in error case
commit
dc7a10ddee0c56c6d891dd18de5c4ee9869545e0 upstream.
If write is failed, we must deallocate the blocks that we couldn't write.
Cc: stable@vger.kernel.org
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sakari Ailus [Fri, 2 Feb 2018 10:08:59 +0000 (05:08 -0500)]
media: vb2: core: Finish buffers at the end of the stream
commit
03703ed1debf777ea845aa9b50ba2e80a5e7dd3c upstream.
If buffers were prepared or queued and then released without starting
the queue, the finish mem op (corresponding to the prepare mem op) was
never called on the buffers.
Before commit a136f59c0a1f there was no need to do this, as in that
case the prepare mem op had not been called yet. Address the problem by
explicitly calling the finish mem op when the queue is stopped if the
buffer is in either the prepared or queued state.
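A sketch of the added loop in the queue-cancel path (call_void_memop() is the wrapper the vb2 core already uses for memops):
  /* In __vb2_queue_cancel(): */
  for (i = 0; i < q->num_buffers; ++i) {
      struct vb2_buffer *vb = q->bufs[i];

      if (vb->state == VB2_BUF_STATE_PREPARED ||
          vb->state == VB2_BUF_STATE_QUEUED) {
          unsigned int plane;

          /* Undo the cache synchronisation done by the prepare memop
           * when the buffer was prepared. */
          for (plane = 0; plane < vb->num_planes; ++plane)
              call_void_memop(vb, finish,
                              vb->planes[plane].mem_priv);
      }
  }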
Fixes: a136f59c0a1f ("[media] vb2: Move buffer cache synchronisation to prepare from queue")
Cc: stable@vger.kernel.org # for v4.13 and up
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Tested-by: Devin Heitmueller <dheitmueller@kernellabs.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Naoya Horiguchi [Thu, 5 Apr 2018 23:23:05 +0000 (16:23 -0700)]
mm: hwpoison: disable memory error handling on 1GB hugepage
commit
31286a8484a85e8b4e91ddb0f5415aee8a416827 upstream.
Recently the following BUG was reported:
Injecting memory failure for pfn 0x3c0000 at process virtual address 0x7fe300000000
Memory failure: 0x3c0000: recovery action for huge page: Recovered
BUG: unable to handle kernel paging request at ffff8dfcc0003000
IP: gup_pgd_range+0x1f0/0xc20
PGD 17ae72067 P4D 17ae72067 PUD 0
Oops: 0000 [#1] SMP PTI
...
CPU: 3 PID: 5467 Comm: hugetlb_1gb Not tainted 4.15.0-rc8-mm1-abc+ #3
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.9.3-1.fc25 04/01/2014
You can easily reproduce this by calling madvise(MADV_HWPOISON) twice on
a 1GB hugepage. This happens because get_user_pages_fast() is not aware
of a migration entry on pud that was created in the 1st madvise() event.
I think that conversion to pud-aligned migration entry is working, but
other MM code walking over page table isn't prepared for it. We need
some time and effort to make all this work properly, so this patch
avoids the reported bug by just disabling error handling for 1GB
hugepage.
[n-horiguchi@ah.jp.nec.com: v2]
Link: http://lkml.kernel.org/r/1517284444-18149-1-git-send-email-n-horiguchi@ah.jp.nec.com
Link: http://lkml.kernel.org/r/1517207283-15769-1-git-send-email-n-horiguchi@ah.jp.nec.com
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Punit Agrawal <punit.agrawal@arm.com>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rakib Mullick [Wed, 1 Nov 2017 04:14:51 +0000 (10:14 +0600)]
irq/core: Fix boot crash when the irqaffinity= boot parameter is passed on CPUMASK_OFFSTACK=y kernels(v1)
commit
10d94ff4d558b96bfc4f55bb0051ae4d938246fe upstream.
When the irqaffinity= kernel parameter is passed in a CPUMASK_OFFSTACK=y
kernel, it fails to boot, because zalloc_cpumask_var() cannot be used before
initializing the slab allocator to allocate a cpumask.
So, use alloc_bootmem_cpumask_var() instead.
Also do some cleanups while at it: in init_irq_default_affinity() remove
an #ifdef by using cpumask_available().
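Roughly, the resulting init path looks like this (a sketch of the shape of the change; the CONFIG_SMP/CONFIG_CPUMASK_OFFSTACK guards are omitted):
  static int __init irq_affinity_setup(char *str)
  {
      /* Parsed before the slab allocator is up, so use the bootmem
       * variant rather than zalloc_cpumask_var(). */
      alloc_bootmem_cpumask_var(&irq_default_affinity);
      cpulist_parse(str, irq_default_affinity);
      return 1;
  }
  __setup("irqaffinity=", irq_affinity_setup);

  static void __init init_irq_default_affinity(void)
  {
      if (!cpumask_available(irq_default_affinity))
          zalloc_cpumask_var(&irq_default_affinity, GFP_NOWAIT);
      if (cpumask_empty(irq_default_affinity))
          cpumask_setall(irq_default_affinity);
  }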
Signed-off-by: Rakib Mullick <rakib.mullick@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20171026045800.27087-1-rakib.mullick@gmail.com
Link: http://lkml.kernel.org/r/20171101041451.12581-1-rakib.mullick@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Janne Huttunen <janne.huttunen@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Rosenberg [Mon, 2 Jul 2018 23:59:37 +0000 (16:59 -0700)]
HID: debug: check length before copy_to_user()
commit
717adfdaf14704fd3ec7fa2c04520c0723247eac upstream.
If our length is greater than the size of the buffer, we overflow the
buffer.
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Rosenberg <drosen@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gustavo A. R. Silva [Fri, 29 Jun 2018 22:08:44 +0000 (17:08 -0500)]
HID: hiddev: fix potential Spectre v1
commit
4f65245f2d178b9cba48350620d76faa4a098841 upstream.
uref->field_index, uref->usage_index, finfo.field_index and cinfo.index can be
indirectly controlled by user-space, hence leading to a potential exploitation
of the Spectre variant 1 vulnerability.
This issue was detected with the help of Smatch:
drivers/hid/usbhid/hiddev.c:473 hiddev_ioctl_usage() warn: potential spectre issue 'report->field' (local cap)
drivers/hid/usbhid/hiddev.c:477 hiddev_ioctl_usage() warn: potential spectre issue 'field->usage' (local cap)
drivers/hid/usbhid/hiddev.c:757 hiddev_ioctl() warn: potential spectre issue 'report->field' (local cap)
drivers/hid/usbhid/hiddev.c:801 hiddev_ioctl() warn: potential spectre issue 'hid->collection' (local cap)
Fix this by sanitizing such structure fields before using them to index
report->field, field->usage and hid->collection
Notice that given that speculation windows are large, the policy is
to kill the speculation on the first load and not worry if it can be
completed with a dependent load/store [1].
[1] https://marc.info/?l=linux-kernel&m=152449131114778&w=2
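The sanitization pattern, sketched for one of the four sites (the others are analogous):
  #include <linux/nospec.h>

  if (uref->field_index >= report->maxfield)
      goto inval;
  /* Clamp the index under speculation before it is used to index
   * report->field[], killing the Spectre-v1 gadget on the first load. */
  uref->field_index = array_index_nospec(uref->field_index,
                                         report->maxfield);
  field = report->field[uref->field_index];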
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason Andryuk [Fri, 22 Jun 2018 16:25:49 +0000 (12:25 -0400)]
HID: i2c-hid: Fix "incomplete report" noise
commit
ef6eaf27274c0351f7059163918f3795da13199c upstream.
Commit ac75a041048b ("HID: i2c-hid: fix size check and type usage")
started writing messages when the ret_size from i2c_master_recv() is <= 2.
However, my device i2c-DLL07D1 returns 2 for a short period of time
(~0.5s) after I stop moving the pointing stick or touchpad. It varies,
but you get ~50 messages each time, which spams the log hard.
[ 95.925055] i2c_hid i2c-DLL07D1:01: i2c_hid_get_input: incomplete report (83/2)
This has also been observed with a i2c-ALP0017.
[ 1781.266353] i2c_hid i2c-ALP0017:00: i2c_hid_get_input: incomplete report (30/2)
Only print the message when ret_size is totally invalid and less than 2 to cut
down on the log spam.
Fixes: ac75a041048b ("HID: i2c-hid: fix size check and type usage")
Reported-by: John Smith <john-s-84@gmx.net>
Cc: stable@vger.kernel.org
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilya Dryomov [Mon, 16 Oct 2017 13:59:10 +0000 (15:59 +0200)]
block: cope with WRITE ZEROES failing in blkdev_issue_zeroout()
commit
d5ce4c31d6df518dd8f63bbae20d7423c5018a6c upstream.
sd_config_write_same() ignores ->max_ws_blocks == 0 and resets it to
permit trying WRITE SAME on older SCSI devices, unless ->no_write_same
is set. Because REQ_OP_WRITE_ZEROES is implemented in terms of WRITE
SAME, blkdev_issue_zeroout() may fail with -EREMOTEIO:
$ fallocate -zn -l 1k /dev/sdg
fallocate: fallocate failed: Remote I/O error
$ fallocate -zn -l 1k /dev/sdg # OK
$ fallocate -zn -l 1k /dev/sdg # OK
The following calls succeed because sd_done() sets ->no_write_same in
response to a sense that would become BLK_STS_TARGET/-EREMOTEIO, causing
__blkdev_issue_zeroout() to fall back to generating ZERO_PAGE bios.
This means blkdev_issue_zeroout() must cope with WRITE ZEROES failing
and fall back to manually zeroing, unless BLKDEV_ZERO_NOFALLBACK is
specified. For BLKDEV_ZERO_NOFALLBACK case, return -EOPNOTSUPP if
sd_done() has just set ->no_write_same thus indicating lack of offload
support.
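A simplified sketch of the resulting fallback flow in blkdev_issue_zeroout() (the real function also retries and frees the bio chain; __blkdev_issue_zero_pages() is the helper factored out in the following entry):
  ret = __blkdev_issue_write_zeroes(bdev, sector, nr_sects, gfp_mask,
                                    &bio, flags);
  if (ret == 0 && bio) {
      ret = submit_bio_wait(bio);
      bio_put(bio);
  }

  /* WRITE ZEROES may only fail once the I/O completes (e.g. an older
   * SCSI target rejecting WRITE SAME); unless the caller passed
   * BLKDEV_ZERO_NOFALLBACK, fall back to writing zero pages. */
  if (ret && !(flags & BLKDEV_ZERO_NOFALLBACK)) {
      bio = NULL;
      ret = __blkdev_issue_zero_pages(bdev, sector, nr_sects,
                                      gfp_mask, &bio);
      if (ret == 0 && bio) {
          ret = submit_bio_wait(bio);
          bio_put(bio);
      }
  }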
Fixes: c20cfc27a473 ("block: stop using blkdev_issue_write_same for zeroing")
Cc: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Janne Huttunen <janne.huttunen@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilya Dryomov [Mon, 16 Oct 2017 13:59:09 +0000 (15:59 +0200)]
block: factor out __blkdev_issue_zero_pages()
commit
425a4dba7953e35ffd096771973add6d2f40d2ed upstream.
blkdev_issue_zeroout() will use this in !BLKDEV_ZERO_NOFALLBACK case.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Janne Huttunen <janne.huttunen@nokia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jon Derrick [Mon, 2 Jul 2018 22:45:18 +0000 (18:45 -0400)]
ext4: check superblock mapped prior to committing
commit
a17712c8e4be4fa5404d20e9cd3b2b21eae7bc56 upstream.
This patch attempts to close a hole leading to a BUG seen with hot
removals during writes [1].
A block device (NVME namespace in this test case) is formatted to EXT4
without partitions. It's mounted and write I/O is run to a file, then
the device is hot-removed from the slot. An attempt is then made to
write the superblock to the drive, which is no longer present.
The typical chain of events leading to the BUG:
ext4_commit_super()
__sync_dirty_buffer()
submit_bh()
submit_bh_wbc()
BUG_ON(!buffer_mapped(bh));
This fix checks for the superblock's buffer head being mapped prior to
syncing.
[1] https://www.spinics.net/lists/linux-ext4/msg56527.html
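The guard, sketched in ext4_commit_super() (sbh is the superblock buffer head the function already uses; "error" is its existing return variable):
  struct buffer_head *sbh = EXT4_SB(sb)->s_sbh;
  ...
  /* The device may have been hot-removed; if so the superblock bh is
   * no longer mapped and submitting it would hit the
   * BUG_ON(!buffer_mapped(bh)) in submit_bh_wbc(). */
  if (!buffer_mapped(sbh))
      return error;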
Signed-off-by: Jon Derrick <jonathan.derrick@intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Sun, 17 Jun 2018 22:11:20 +0000 (18:11 -0400)]
ext4: add more mount time checks of the superblock
commit
bfe0a5f47ada40d7984de67e59a7d3390b9b9ecc upstream.
The kernel's ext4 mount-time checks were more permissive than
e2fsprogs's libext2fs checks when opening a file system. If the
superblock is considered too insane for debugfs or e2fsck to operate
on, the kernel has no business trying to mount it.
This will make file system fuzzing tools work harder, but the failure
cases that they find will be more useful and be easier to evaluate.
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Sun, 17 Jun 2018 04:41:14 +0000 (00:41 -0400)]
ext4: add more inode number paranoia checks
commit
c37e9e013469521d9adb932d17a1795c139b36db upstream.
If there is a directory entry pointing to a system inode (such as a
journal inode), complain and declare the file system to be corrupted.
Also, if the superblock's first inode number field is too small,
refuse to mount the file system.
This addresses CVE-2018-10882.
https://bugzilla.kernel.org/show_bug.cgi?id=200069
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Sun, 17 Jun 2018 03:41:59 +0000 (23:41 -0400)]
ext4: avoid running out of journal credits when appending to an inline file
commit
8bc1379b82b8e809eef77a9fedbb75c6c297be19 upstream.
Use a separate journal transaction if it turns out that we need to
convert an inline file to use a data block. Otherwise we could end
up failing due to not having journal credits.
This addresses CVE-2018-10883.
https://bugzilla.kernel.org/show_bug.cgi?id=200071
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Sat, 16 Jun 2018 19:40:48 +0000 (15:40 -0400)]
ext4: never move the system.data xattr out of the inode body
commit
8cdb5240ec5928b20490a2bb34cb87e9a5f40226 upstream.
When expanding the extra isize space, we must never move the
system.data xattr out of the inode body. For performance reasons, it
doesn't make any sense, and the inline data implementation assumes
that system.data xattr is never in the external xattr block.
This addresses CVE-2018-10880
https://bugzilla.kernel.org/show_bug.cgi?id=200005
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Fri, 15 Jun 2018 16:28:16 +0000 (12:28 -0400)]
ext4: clear i_data in ext4_inode_info when removing inline data
commit
6e8ab72a812396996035a37e5ca4b3b99b5d214b upstream.
When converting an inode from storing its data in-line to a data
block, ext4_destroy_inline_data_nolock() was only clearing the on-disk
copy of the i_blocks[] array. It was not clearing the copy of
i_blocks[] in ext4_inode_info, in i_data[], which is the copy actually
used by ext4_map_blocks().
This didn't matter much if we are using extents, since the extents
header would be invalid and thus the extents code would re-initialize
the extents tree. But if we are using indirect blocks, the previous
contents of the i_blocks array will be treated as block numbers, with
potentially catastrophic results to the file system integrity and/or
user data.
This gets worse if the file system is using a 1k block size and
s_first_data_block is zero, but even without this, the file system can get
quite badly corrupted.
This addresses CVE-2018-10881.
https://bugzilla.kernel.org/show_bug.cgi?id=200015
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Fri, 15 Jun 2018 16:27:16 +0000 (12:27 -0400)]
ext4: include the illegal physical block in the bad map ext4_error msg
commit
bdbd6ce01a70f02e9373a584d0ae9538dcf0a121 upstream.
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Thu, 14 Jun 2018 16:55:10 +0000 (12:55 -0400)]
ext4: verify the depth of extent tree in ext4_find_extent()
commit
bc890a60247171294acc0bd67d211fa4b88d40ba upstream.
If there is a corupted file system where the claimed depth of the
extent tree is -1, this can cause a massive buffer overrun leading to
sadness.
This addresses CVE-2018-10877.
https://bugzilla.kernel.org/show_bug.cgi?id=199417
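The added sanity check in ext4_find_extent(), roughly (the upstream patch bounds the on-disk depth with a small constant; the name below follows that patch but should be treated as illustrative):
  depth = ext_depth(inode);
  if (depth < 0 || depth > EXT4_MAX_EXTENT_DEPTH) {
      EXT4_ERROR_INODE(inode, "inode has invalid extent depth: %d",
                       depth);
      ret = -EFSCORRUPTED;
      goto err;
  }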
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Thu, 14 Jun 2018 04:58:00 +0000 (00:58 -0400)]
ext4: only look at the bg_flags field if it is valid
commit
8844618d8aa7a9973e7b527d038a2a589665002c upstream.
The bg_flags field in the block group descriptors is only valid if the
uninit_bg or metadata_csum feature is enabled. We were not
consistently looking at this field; fix this.
Also block group #0 must never have uninitialized allocation bitmaps,
or need to be zeroed, since that's where the root inode, and other
special inodes are set up. Check for these conditions and mark the
file system as corrupted if they are detected.
This addresses CVE-2018-10876.
https://bugzilla.kernel.org/show_bug.cgi?id=199403
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Thu, 14 Jun 2018 03:00:48 +0000 (23:00 -0400)]
ext4: always check block group bounds in ext4_init_block_bitmap()
commit
819b23f1c501b17b9694325471789e6b5cc2d0d2 upstream.
Regardless of whether the flex_bg feature is set, we should always
check to make sure the bits we are setting in the block bitmap are
within the block group bounds.
https://bugzilla.kernel.org/show_bug.cgi?id=199865
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Thu, 14 Jun 2018 03:08:26 +0000 (23:08 -0400)]
ext4: make sure bitmaps and the inode table don't overlap with bg descriptors
commit
77260807d1170a8cf35dbb06e07461a655f67eee upstream.
It's really bad when the allocation bitmaps and the inode table
overlap with the block group descriptors, since it causes random
corruption of the bg descriptors. So we really want to head those off
at the pass.
https://bugzilla.kernel.org/show_bug.cgi?id=199865
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Wed, 13 Jun 2018 04:51:28 +0000 (00:51 -0400)]
ext4: always verify the magic number in xattr blocks
commit
513f86d73855ce556ea9522b6bfd79f87356dc3a upstream.
If an inode points to a block which is also some other type of
metadata block (such as a block allocation bitmap), the
buffer_verified flag can be set when it was validated as that other
metadata block type; however, it would make a really terrible external
attribute block. The reason why we use the verified flag is to avoid
constantly reverifying the block. However, it doesn't take much
overhead to make sure the magic number of the xattr block is correct,
and this will avoid potential crashes.
This addresses CVE-2018-10879.
https://bugzilla.kernel.org/show_bug.cgi?id=200001
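The cheap recheck, sketched (BHDR() is the existing helper in fs/ext4/xattr.c that casts the buffer data to an ext4_xattr_header):
  /* buffer_verified() may have been set by a different metadata
   * checker; the magic number test is cheap, so always perform it. */
  if (BHDR(bh)->h_magic != cpu_to_le32(EXT4_XATTR_MAGIC))
      return -EFSCORRUPTED;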
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Wed, 13 Jun 2018 04:23:11 +0000 (00:23 -0400)]
ext4: add corruption check in ext4_xattr_set_entry()
commit
5369a762c882c0b6e9599e4ebbb3a9ba9eee7e2d upstream.
In theory this should have been caught earlier when the xattr list was
verified, but in case it got missed, it's simple enough to add a check
to make sure we don't overrun the xattr buffer.
This addresses CVE-2018-10879.
https://bugzilla.kernel.org/show_bug.cgi?id=200001
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Sun, 17 Jun 2018 00:21:45 +0000 (20:21 -0400)]
jbd2: don't mark block as modified if the handle is out of credits
commit
e09463f220ca9a1a1ecfda84fcda658f99a1f12a upstream.
The b_modified flag in a block's journal head should not be set until
after we're sure that jbd2_journal_dirty_metadata() will not abort with
an error due to there not being enough space reserved in the jbd2
handle.
Otherwise, future attempts to modify the buffer may lead to a large
number of spurious errors and warnings.
This addresses CVE-2018-10883.
https://bugzilla.kernel.org/show_bug.cgi?id=200071
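The reordering in jbd2_journal_dirty_metadata(), sketched (label name taken from the surrounding function; treat the exact placement as illustrative): the credits check has to come before b_modified is set:
  if (jh->b_modified == 0) {
      /* Bail out before touching b_modified if the handle has no
       * buffer credits left; otherwise a later retry would skip the
       * accounting and trigger spurious warnings. */
      if (handle->h_buffer_credits <= 0) {
          ret = -ENOSPC;
          goto out_unlock_bh;
      }
      jh->b_modified = 1;
      handle->h_buffer_credits--;
  }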
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Sun, 3 Jun 2018 14:40:54 +0000 (16:40 +0200)]
drm/udl: fix display corruption of the last line
commit
99ec9e77511dea55d81729fc80b6c63a61bfa8e0 upstream.
The displaylink hardware has the peculiarity that it doesn't render a
command until the next command is received. This produces occasional
corruption; for example, when setting a 22x11 font on the console, only
the first line of the cursor will be blinking if the cursor is located
at some specific columns.
When we end up with a repeating pixel, the driver has a bug where it
leaves one uninitialized byte after the command (and this byte is
enough to flush the command and render it - thus it fixes the screen
corruption); however, when we end up with a non-repeating pixel, there
is no byte appended and this results in temporary screen corruption.
This patch fixes the screen corruption by always appending a byte 0xAF at
the end of URB. It also removes the uninitialized byte.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michel Dänzer [Fri, 29 Jun 2018 14:27:10 +0000 (16:27 +0200)]
drm: Use kvzalloc for allocating blob property memory
commit
718b5406cd76f1aa6434311241b7febf0e8571ff upstream.
The property size may be controlled by userspace, can be large (I've
seen failure with order 4, i.e. 16 pages / 64 KB) and doesn't need to be
physically contiguous.
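The allocation/free pair, sketched: use kvzalloc()/kvfree() so that large, userspace-sized blobs can fall back to vmalloc memory instead of requiring a high-order contiguous allocation:
  /* length comes from userspace and may be tens of kilobytes. */
  blob = kvzalloc(sizeof(struct drm_property_blob) + length, GFP_KERNEL);
  if (!blob)
      return ERR_PTR(-ENOMEM);
  ...
  /* and in the corresponding free path: */
  kvfree(blob);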
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629142710.2069-1-michel@daenzer.net
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stefano Brivio [Thu, 5 Jul 2018 09:46:42 +0000 (11:46 +0200)]
cifs: Fix slab-out-of-bounds in send_set_info() on SMB2 ACE setting
commit
f46ecbd97f508e68a7806291a139499794874f3d upstream.
A "small" CIFS buffer is not big enough in general to hold a
setacl request for SMB2, and we end up overflowing the buffer in
send_set_info(). For instance:
# mount.cifs //127.0.0.1/test /mnt/test -o username=test,password=test,nounix,cifsacl
# touch /mnt/test/acltest
# getcifsacl /mnt/test/acltest
REVISION:0x1
CONTROL:0x9004
OWNER:S-1-5-21-2926364953-924364008-418108241-1000
GROUP:S-1-22-2-1001
ACL:S-1-5-21-2926364953-924364008-418108241-1000:ALLOWED/0x0/0x1e01ff
ACL:S-1-22-2-1001:ALLOWED/0x0/R
ACL:S-1-22-2-1001:ALLOWED/0x0/R
ACL:S-1-5-21-2926364953-924364008-418108241-1000:ALLOWED/0x0/0x1e01ff
ACL:S-1-1-0:ALLOWED/0x0/R
# setcifsacl -a "ACL:S-1-22-2-1004:ALLOWED/0x0/R" /mnt/test/acltest
this setacl will cause the following KASAN splat:
[ 330.777927] BUG: KASAN: slab-out-of-bounds in send_set_info+0x4dd/0xc20 [cifs]
[ 330.779696] Write of size 696 at addr ffff88010d5e2860 by task setcifsacl/1012
[ 330.781882] CPU: 1 PID: 1012 Comm: setcifsacl Not tainted 4.18.0-rc2+ #2
[ 330.783140] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 330.784395] Call Trace:
[ 330.784789] dump_stack+0xc2/0x16b
[ 330.786777] print_address_description+0x6a/0x270
[ 330.787520] kasan_report+0x258/0x380
[ 330.788845] memcpy+0x34/0x50
[ 330.789369] send_set_info+0x4dd/0xc20 [cifs]
[ 330.799511] SMB2_set_acl+0x76/0xa0 [cifs]
[ 330.801395] set_smb2_acl+0x7ac/0xf30 [cifs]
[ 330.830888] cifs_xattr_set+0x963/0xe40 [cifs]
[ 330.840367] __vfs_setxattr+0x84/0xb0
[ 330.842060] __vfs_setxattr_noperm+0xe6/0x370
[ 330.843848] vfs_setxattr+0xc2/0xd0
[ 330.845519] setxattr+0x258/0x320
[ 330.859211] path_setxattr+0x15b/0x1b0
[ 330.864392] __x64_sys_setxattr+0xc0/0x160
[ 330.866133] do_syscall_64+0x14e/0x4b0
[ 330.876631] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 330.878503] RIP: 0033:0x7ff2e507db0a
[ 330.880151] Code: 48 8b 0d 89 93 2c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 bc 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 56 93 2c 00 f7 d8 64 89 01 48
[ 330.878503] RSP: 002b:00007ffdc4903c18 EFLAGS: 00000246 ORIG_RAX: 00000000000000bc
[ 330.887733] RAX: ffffffffffffffda RBX: 000055d1170de140 RCX: 00007ff2e507db0a
[ 330.890067] RDX: 000055d1170de7d0 RSI: 000055d115b39184 RDI: 00007ffdc4904818
[ 330.892410] RBP: 0000000000000001 R08: 0000000000000000 R09: 000055d1170de7e4
[ 330.894785] R10: 00000000000002b8 R11: 0000000000000246 R12: 0000000000000007
[ 330.897148] R13: 000055d1170de0c0 R14: 0000000000000008 R15: 000055d1170de550
[ 330.901057] Allocated by task 1012:
[ 330.902888] kasan_kmalloc+0xa0/0xd0
[ 330.904714] kmem_cache_alloc+0xc8/0x1d0
[ 330.906615] mempool_alloc+0x11e/0x380
[ 330.908496] cifs_small_buf_get+0x35/0x60 [cifs]
[ 330.910510] smb2_plain_req_init+0x4a/0xd60 [cifs]
[ 330.912551] send_set_info+0x198/0xc20 [cifs]
[ 330.914535] SMB2_set_acl+0x76/0xa0 [cifs]
[ 330.916465] set_smb2_acl+0x7ac/0xf30 [cifs]
[ 330.918453] cifs_xattr_set+0x963/0xe40 [cifs]
[ 330.920426] __vfs_setxattr+0x84/0xb0
[ 330.922284] __vfs_setxattr_noperm+0xe6/0x370
[ 330.924213] vfs_setxattr+0xc2/0xd0
[ 330.926008] setxattr+0x258/0x320
[ 330.927762] path_setxattr+0x15b/0x1b0
[ 330.929592] __x64_sys_setxattr+0xc0/0x160
[ 330.931459] do_syscall_64+0x14e/0x4b0
[ 330.933314] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 330.936843] Freed by task 0:
[ 330.938588] (stack is not available)
[ 330.941886] The buggy address belongs to the object at ffff88010d5e2800
 which belongs to the cache cifs_small_rq of size 448
[ 330.946362] The buggy address is located 96 bytes inside of
 448-byte region [ffff88010d5e2800, ffff88010d5e29c0)
[ 330.950722] The buggy address belongs to the page:
[ 330.952789] page:ffffea0004357880 count:1 mapcount:0 mapping:ffff880108fdca80 index:0x0 compound_mapcount: 0
[ 330.955665] flags: 0x17ffffc0008100(slab|head)
[ 330.957760] raw: 0017ffffc0008100 dead000000000100 dead000000000200 ffff880108fdca80
[ 330.960356] raw: 0000000000000000 0000000080100010 00000001ffffffff 0000000000000000
[ 330.963005] page dumped because: kasan: bad access detected
[ 330.967039] Memory state around the buggy address:
[ 330.969255]  ffff88010d5e2880: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 330.971833]  ffff88010d5e2900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
[ 330.974397] >ffff88010d5e2980: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
[ 330.976956]                                            ^
[ 330.979226]  ffff88010d5e2a00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 330.981755]  ffff88010d5e2a80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 330.984225] ==================================================================
Fix this by allocating a regular CIFS buffer in
smb2_plain_req_init() if the request command is SMB2_SET_INFO.
Reported-by: Jianhong Yin <jiyin@redhat.com>
Fixes: 366ed846df60 ("cifs: Use smb 2 - 3 and cifsacl mount options setacl function")
CC: Stable <stable@vger.kernel.org>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-and-tested-by: Aurelien Aptel <aaptel@suse.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paulo Alcantara [Thu, 5 Jul 2018 16:46:34 +0000 (13:46 -0300)]
cifs: Fix infinite loop when using hard mount option
commit
7ffbe65578b44fafdef577a360eb0583929f7c6e upstream.
For every request we send, whether it is SMB1 or SMB2+, we attempt to
reconnect tcon (cifs_reconnect_tcon or smb2_reconnect) before carrying
out the request.
So, while server->tcpStatus != CifsNeedReconnect, we wait for the
reconnection to succeed on wait_event_interruptible_timeout(). If it
returns, that means that either the condition was evaluated to true, or
timeout elapsed, or it was interrupted by a signal.
Since we're not handling the case where the process woke up due to a
received signal (-ERESTARTSYS), the next call to
wait_event_interruptible_timeout() will _always_ fail and we end up
looping forever inside either cifs_reconnect_tcon() or smb2_reconnect().
Here's an example of how to trigger that:
$ mount.cifs //foo/share /mnt/test -o
username=foo,password=foo,vers=1.0,hard
(break connection to server before executing the below cmd)
$ stat -f /mnt/test & sleep 140
[1] 2511
$ ps -aux -q 2511
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2511 0.0 0.0 12892 1008 pts/0 S 12:24 0:00 stat -f
/mnt/test
$ kill -9 2511
(wait for a while; process is stuck in the kernel)
$ ps -aux -q 2511
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 2511 83.2 0.0 12892 1008 pts/0 R 12:24 30:01 stat -f
/mnt/test
Using the 'hard' mount option means that cifs.ko will keep retrying
indefinitely; however, we must allow the process to be killed,
otherwise it would hang the system.
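The shape of the fix, as a rough sketch (variable names taken from the surrounding reconnect code, not the verbatim patch), is to treat a negative return from the wait as a fatal signal and bail out:
  rc = wait_event_interruptible_timeout(server->response_q,
                                        server->tcpStatus != CifsNeedReconnect,
                                        10 * HZ);
  if (rc < 0) {
          /* woken by a fatal signal: stop retrying instead of looping forever */
          cifs_dbg(FYI, "%s: aborting reconnect due to a received signal\n",
                   __func__);
          return -ERESTARTSYS;
  }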
Signed-off-by: Paulo Alcantara <palcantara@suse.de>
Cc: stable@vger.kernel.org
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paulo Alcantara [Wed, 4 Jul 2018 17:16:16 +0000 (14:16 -0300)]
cifs: Fix memory leak in smb2_set_ea()
commit
6aa0c114eceec8cc61715f74a4ce91b048d7561c upstream.
This patch fixes a memory leak when doing a setxattr(2) in SMB2+.
Signed-off-by: Paulo Alcantara <palcantara@suse.de>
Cc: stable@vger.kernel.org
Signed-off-by: Steve French <stfrench@microsoft.com>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lars Persson [Mon, 25 Jun 2018 12:05:25 +0000 (14:05 +0200)]
cifs: Fix use after free of a mid_q_entry
commit
696e420bb2a6624478105651d5368d45b502b324 upstream.
With protocol version 2.0 mounts we have seen crashes with corrupt mid
entries. Either the server->pending_mid_q list becomes corrupt with a
cyclic reference in one element or a mid object fetched by the
demultiplexer thread becomes overwritten during use.
Code review identified a race between the demultiplexer thread and the
request issuing thread. The demultiplexer thread seems to be written
with the assumption that it is the sole user of the mid object until
it calls the mid callback which either wakes the issuer task or
deletes the mid.
This assumption is not true because the issuer task can be woken up
earlier by a signal. If the demultiplexer thread has proceeded as far
as setting the mid_state to MID_RESPONSE_RECEIVED then the issuer
thread will happily end up calling cifs_delete_mid while the
demultiplexer thread still is using the mid object.
Inserting a delay in the cifs demultiplexer thread widens the race
window and makes reproduction of the race very easy:
if (server->large_buf)
buf = server->bigbuf;
+ usleep_range(500, 4000);
server->lstrp = jiffies;
To resolve this I think the proper solution involves putting a
reference count on the mid object. This patch makes sure that the
demultiplexer thread holds a reference until it has finished
processing the transaction.
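A sketch of that idea (helper and field names assumed, not the exact upstream change): give struct mid_q_entry a kref that the demultiplexer holds while it works on the entry.
  /* assumes a new "struct kref refcount;" member in struct mid_q_entry */
  static void __release_mid(struct kref *refcount)
  {
          struct mid_q_entry *mid = container_of(refcount, struct mid_q_entry,
                                                 refcount);
          mempool_free(mid, cifs_mid_poolp);
  }
  /* demultiplexer: kref_get(&mid->refcount) before handling the response,
   * kref_put(&mid->refcount, __release_mid) when done, so a racing
   * cifs_delete_mid() from the woken issuer cannot free the mid underneath us */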
Cc: stable@vger.kernel.org
Signed-off-by: Lars Persson <larper@axis.com>
Acked-by: Paulo Alcantara <palcantara@suse.de>
Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <stfrench@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason Gunthorpe [Fri, 29 Jun 2018 17:31:50 +0000 (11:31 -0600)]
vfio: Use get_user_pages_longterm correctly
commit
bb94b55af3461e26b32f0e23d455abeae0cfca5d upstream.
The patch noted in the fixes below converted get_user_pages_fast() to
get_user_pages_longterm(), however the two calls differ in a few ways.
First _fast() is documented to not require the mmap_sem, while _longterm()
is documented to need it. Hold the mmap sem as required.
Second, _fast accepts an 'int write' while _longterm uses 'unsigned int
gup_flags', so the expression '!!(prot & IOMMU_WRITE)' is only working by
luck as FOLL_WRITE is currently == 0x1. Use the expected FOLL_WRITE
constant instead.
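Put together, the call site ends up looking roughly like this sketch (vaddr, pages and prot are assumed from the vaddr_get_pfn() context):
  unsigned int flags = 0;
  long ret;

  if (prot & IOMMU_WRITE)
          flags |= FOLL_WRITE;    /* the named gup flag, not a bare 0x1 */

  down_read(&current->mm->mmap_sem);
  ret = get_user_pages_longterm(vaddr, 1, flags, pages, NULL);
  up_read(&current->mm->mmap_sem);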
Fixes:
94db151dc892 ("vfio: disable filesystem-dax page pinning")
Cc: <stable@vger.kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Acked-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lars Ellenberg [Mon, 25 Jun 2018 09:39:52 +0000 (11:39 +0200)]
drbd: fix access after free
commit
64dafbc9530c10300acffc57fae3269d95fa8f93 upstream.
We have
struct drbd_requests { ... struct bio *private_bio; ... }
to hold a bio clone for local submission.
On local IO completion, we put that bio, and in case we want to use the
result later, we overload that member to hold the ERR_PTR() of the
completion result,
which, before v4.3, used to be the passed-in "int error",
so we could first bio_put(), then assign.
v4.3-rc1~100^2~21
4246a0b63bd8 block: add a bi_error field to struct bio
changed that:
bio_put(req->private_bio);
- req->private_bio = ERR_PTR(error);
+ req->private_bio = ERR_PTR(bio->bi_error);
Which introduces an access after free,
because it was non-obvious that req->private_bio == bio.
Impact of that was mostly unnoticeable, because we only use that value
in a multiple-failure case, and even then map any "unexpected" error
code to EIO, so worst case we could potentially mask a more specific
error with EIO in a multiple failure case.
Unless the pointed to memory region was unmapped, as is the case with
CONFIG_DEBUG_PAGEALLOC, in which case this results in
BUG: unable to handle kernel paging request
v4.13-rc1~70^2~75
4e4cbee93d56 block: switch bios to blk_status_t
changes it further to
bio_put(req->private_bio);
req->private_bio = ERR_PTR(blk_status_to_errno(bio->bi_status));
And blk_status_to_errno() now contains a WARN_ON_ONCE() for unexpected
values, which catches this "sometimes", if the memory has been reused
quickly enough for other things.
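The fix therefore captures the completion status before the final bio_put(); a minimal sketch:
  blk_status_t status = bio->bi_status;

  bio_put(req->private_bio);      /* may free the bio, and bio == req->private_bio */
  req->private_bio = ERR_PTR(blk_status_to_errno(status));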
Should also go into stable since 4.3, with the trivial change around 4.13.
Cc: stable@vger.kernel.org
Fixes:
4246a0b63bd8 block: add a bi_error field to struct bio
Reported-by: Sarah Newman <srn@prgmr.com>
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christian Borntraeger [Thu, 21 Jun 2018 12:49:38 +0000 (14:49 +0200)]
s390: Correct register corruption in critical section cleanup
commit
891f6a726cacbb87e5b06076693ffab53bd378d7 upstream.
In the critical section cleanup we must not mess with r1. For march=z9
or older, larl + ex (instead of exrl) are used with r1 as a temporary
register. This can clobber r1 in several interrupt handlers. Fix this by
using r11 as a temp register. r11 is being saved by all callers of
cleanup_critical.
Fixes:
6dd85fbb87 ("s390: move expoline assembler macros to a header")
Cc: stable@vger.kernel.org #v4.16
Reported-by: Oliver Kurz <okurz@suse.com>
Reported-by: Petr Tesařík <ptesarik@suse.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Hendrik Brueckner <brueckner@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Disseldorp [Tue, 19 Jun 2018 15:58:24 +0000 (17:58 +0200)]
scsi: target: Fix truncated PR-in ReadKeys response
commit
63ce3c384db26494615e3c8972bcd419ed71f4c4 upstream.
SPC5r17 states that the contents of the ADDITIONAL LENGTH field are not
altered based on the allocation length, so always calculate and pack the
full key list length even if the list itself is truncated.
According to Maged:
Yes it fixes the "Storage Spaces Persistent Reservation" test in the
Windows 2016 Server Failover Cluster validation suites when having
many connections that result in more than 8 registrations. I tested
your patch on 4.17 with iblock.
This behaviour can be tested using the libiscsi PrinReadKeys.Truncate test.
Cc: stable@vger.kernel.org
Signed-off-by: David Disseldorp <ddiss@suse.de>
Reviewed-by: Mike Christie <mchristi@redhat.com>
Tested-by: Maged Mokhtar <mmokhtar@petasan.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jann Horn [Mon, 25 Jun 2018 14:25:44 +0000 (16:25 +0200)]
scsi: sg: mitigate read/write abuse
commit
26b5b874aff5659a7e26e5b1997e3df2c41fa7fd upstream.
As Al Viro noted in commit
128394eff343 ("sg_write()/bsg_write() is not fit
to be called under KERNEL_DS"), sg improperly accesses userspace memory
outside the provided buffer, permitting kernel memory corruption via
splice(). But it doesn't just do it on ->write(), also on ->read().
As a band-aid, make sure that the ->read() and ->write() handlers can not
be called in weird contexts (kernel context or credentials different from
file opener), like for ib_safe_file_access().
If someone needs to use these interfaces from different security contexts,
a new interface should be written that goes through the ->ioctl() handler.
I've mostly copypasted ib_safe_file_access() over as sg_safe_file_access()
because I couldn't find a good common header - please tell me if you know a
better way.
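In rough form (modelled on ib_safe_file_access(), not necessarily the final code), the helper is a pair of context checks:
  static int sg_check_file_access(struct file *filp, const char *caller)
  {
          if (filp->f_cred != current_real_cred()) {
                  pr_err_once("%s: process %d (%s) changed security contexts after opening file descriptor\n",
                              caller, task_tgid_vnr(current), current->comm);
                  return -EPERM;
          }
          if (uaccess_kernel()) {
                  pr_err_once("%s: process %d (%s) called from kernel context, this is not allowed\n",
                              caller, task_tgid_vnr(current), current->comm);
                  return -EACCES;
          }
          return 0;
  }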
[mkp: s/_safe_/_check_/]
Fixes:
1da177e4c3f4 ("Linux-2.6.12-rc2")
Cc: <stable@vger.kernel.org>
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Changbin Du [Wed, 31 Jan 2018 15:48:49 +0000 (23:48 +0800)]
tracing: Fix missing return symbol in function_graph output
commit
1fe4293f4b8de75824935f8d8e9a99c7fc6873da upstream.
The function_graph tracer does not show the interrupt return marker for the
leaf entry. On leaf entries, we see an unbalanced interrupt marker (the
interrupt was entered, but never left).
Before:
1) | SyS_write() {
1) | __fdget_pos() {
1) 0.061 us | __fget_light();
1) 0.289 us | }
1) | vfs_write() {
1) 0.049 us | rw_verify_area();
1) + 15.424 us | __vfs_write();
1) ==========> |
1) 6.003 us | smp_apic_timer_interrupt();
1) 0.055 us | __fsnotify_parent();
1) 0.073 us | fsnotify();
1) + 23.665 us | }
1) + 24.501 us | }
After:
0) | SyS_write() {
0) | __fdget_pos() {
0) 0.052 us | __fget_light();
0) 0.328 us | }
0) | vfs_write() {
0) 0.057 us | rw_verify_area();
0) | __vfs_write() {
0) ==========> |
0) 8.548 us | smp_apic_timer_interrupt();
0) <========== |
0) + 36.507 us | } /* __vfs_write */
0) 0.049 us | __fsnotify_parent();
0) 0.066 us | fsnotify();
0) + 50.064 us | }
0) + 50.952 us | }
Link: http://lkml.kernel.org/r/1517413729-20411-1-git-send-email-changbin.du@intel.com
Cc: stable@vger.kernel.org
Fixes:
f8b755ac8e0cc ("tracing/function-graph-tracer: Output arrows signal on hardirq call/return")
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cannon Matthews [Wed, 4 Jul 2018 00:02:43 +0000 (17:02 -0700)]
mm: hugetlb: yield when prepping struct pages
commit
520495fe96d74e05db585fc748351e0504d8f40d upstream.
When booting with very large numbers of gigantic (i.e. 1G) pages, the
operations in the loop of gather_bootmem_prealloc, and specifically
prep_compound_gigantic_page, take a very long time, and can cause a
softlockup if enough pages are requested at boot.
For example booting with 3844 1G pages requires prepping
(set_compound_head, init the count) over 1 billion 4K tail pages, which
takes considerable time.
Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to
prevent this lockup.
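An illustrative sketch of the loop with the added rescheduling point (simplified from the real gather_bootmem_prealloc()):
  struct huge_bootmem_page *m;

  list_for_each_entry(m, &huge_boot_pages, list) {
          struct page *page = virt_to_page(m);
          struct hstate *h = m->hstate;

          prep_compound_gigantic_page(page, huge_page_order(h));
          prep_new_huge_page(h, page, page_to_nid(page));
          /* each 1G page has ~262144 tail pages to initialize; let the
           * scheduler run between pages so the watchdog stays quiet */
          cond_resched();
  }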
Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844 and
no softlockup is reported, and the hugepages are reported as
successfully setup.
Link: http://lkml.kernel.org/r/20180627214447.260804-1-cannonmatthews@google.com
Signed-off-by: Cannon Matthews <cannonmatthews@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Peter Feiner <pfeiner@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Janosch Frank [Wed, 4 Jul 2018 00:02:39 +0000 (17:02 -0700)]
userfaultfd: hugetlbfs: fix userfaultfd_huge_must_wait() pte access
commit
1e2c043628c7736dd56536d16c0ce009bc834ae7 upstream.
Use huge_ptep_get() to translate huge ptes to normal ptes so we can
check them with the huge_pte_* functions. Otherwise some architectures
will check the wrong values and will not wait for userspace to bring in
the memory.
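The check then reads the pte once through huge_ptep_get() and tests that value; roughly (ptep, ret and reason are assumed from the userfaultfd_huge_must_wait() context):
  pte_t pte;

  /* translate the huge pte into the format the huge_pte_* helpers expect */
  pte = huge_ptep_get(ptep);
  if (huge_pte_none(pte))
          ret = true;
  else if (!huge_pte_write(pte) && (reason & VM_UFFD_WP))
          ret = true;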
Link: http://lkml.kernel.org/r/20180626132421.78084-1-frankja@linux.ibm.com
Fixes:
369cd2121be4 ("userfaultfd: hugetlbfs: userfaultfd_huge_must_wait for hugepmd ranges")
Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Sun, 8 Jul 2018 13:30:53 +0000 (15:30 +0200)]
Linux 4.14.54
Damien Thébault [Thu, 31 May 2018 07:04:01 +0000 (07:04 +0000)]
net: dsa: b53: Add BCM5389 support
[ Upstream commit
a95691bc54af1ac4b12c354f91e9cabf1cb068df ]
This patch adds support for the BCM5389 switch connected through MDIO.
Signed-off-by: Damien Thébault <damien.thebault@vitec.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Finn Thain [Wed, 30 May 2018 03:03:51 +0000 (13:03 +1000)]
net/sonic: Use dma_mapping_error()
[ Upstream commit
26de0b76d9ba3200f09c6cb9d9618bda338be5f7 ]
With CONFIG_DMA_API_DEBUG=y, calling sonic_open() produces the
message, "DMA-API: device driver failed to check map error".
Add the missing dma_mapping_error() call.
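The usual pattern for streaming mappings applies; a sketch (names assumed, not the exact driver change):
  dma_addr_t laddr;

  laddr = dma_map_single(lp->device, skb->data, SONIC_RBSIZE, DMA_FROM_DEVICE);
  if (dma_mapping_error(lp->device, laddr)) {
          /* mapping failed: drop the buffer rather than hand a bogus
           * address to the hardware */
          dev_kfree_skb(skb);
          return -ENOMEM;
  }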
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
João Paulo Rechi Vita [Tue, 22 May 2018 21:30:15 +0000 (14:30 -0700)]
platform/x86: asus-wmi: Fix NULL pointer dereference
[ Upstream commit
32ffd6e8d1f6cef94bedca15dfcdebdeb590499d ]
Do not perform the rfkill cleanup routine when
(asus->driver->wlan_ctrl_by_user && ashs_present()) is true, since
nothing is registered with the rfkill subsystem in that case. Doing so
leads to the following kernel NULL pointer dereference:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff816c7348>] __mutex_lock_slowpath+0x98/0x120
PGD 1a3aa8067 PUD 1a3b3d067 PMD 0
Oops: 0002 [#1] PREEMPT SMP
Modules linked in: bnep ccm binfmt_misc uvcvideo videobuf2_vmalloc videobuf2_memops videobuf2_v4l2 videobuf2_core hid_a4tech videodev x86_pkg_temp_thermal intel_powerclamp coretemp ath3k btusb btrtl btintel bluetooth kvm_intel snd_hda_codec_hdmi kvm snd_hda_codec_realtek snd_hda_codec_generic irqbypass crc32c_intel arc4 i915 snd_hda_intel snd_hda_codec ath9k ath9k_common ath9k_hw ath i2c_algo_bit snd_hwdep mac80211 ghash_clmulni_intel snd_hda_core snd_pcm snd_timer cfg80211 ehci_pci xhci_pci drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops drm xhci_hcd ehci_hcd asus_nb_wmi(-) asus_wmi sparse_keymap r8169 rfkill mxm_wmi serio_raw snd mii mei_me lpc_ich i2c_i801 video soundcore mei i2c_smbus wmi i2c_core mfd_core
CPU: 3 PID: 3275 Comm: modprobe Not tainted 4.9.34-gentoo #34
Hardware name: ASUSTeK COMPUTER INC. K56CM/K56CM, BIOS K56CM.206 08/21/2012
task: ffff8801a639ba00 task.stack: ffffc900014cc000
RIP: 0010:[<ffffffff816c7348>] [<ffffffff816c7348>] __mutex_lock_slowpath+0x98/0x120
RSP: 0018:ffffc900014cfce0 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffff8801a54315b0 RCX: 00000000c0000100
RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff8801a54315b4
RBP: ffffc900014cfd30 R08: 0000000000000000 R09: 0000000000000002
R10: 0000000000000000 R11: 0000000000000000 R12: ffff8801a54315b4
R13: ffff8801a639ba00 R14: 00000000ffffffff R15: ffff8801a54315b8
FS:  00007faa254fb700(0000) GS:ffff8801aef80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 00000001a3b1b000 CR4: 00000000001406e0
Stack:
ffff8801a54315b8 0000000000000000 ffffffff814733ae ffffc900014cfd28
ffffffff8146a28c ffff8801a54315b0 0000000000000000 ffff8801a54315b0
ffff8801a66f3820 0000000000000000 ffffc900014cfd48 ffffffff816c73e7
Call Trace:
[<ffffffff814733ae>] ? acpi_ut_release_mutex+0x5d/0x61
[<ffffffff8146a28c>] ? acpi_ns_get_node+0x49/0x52
[<ffffffff816c73e7>] mutex_lock+0x17/0x30
[<ffffffffa00a3bb4>] asus_rfkill_hotplug+0x24/0x1a0 [asus_wmi]
[<ffffffffa00a4421>] asus_wmi_rfkill_exit+0x61/0x150 [asus_wmi]
[<ffffffffa00a49f1>] asus_wmi_remove+0x61/0xb0 [asus_wmi]
[<ffffffff814a5128>] platform_drv_remove+0x28/0x40
[<ffffffff814a2901>] __device_release_driver+0xa1/0x160
[<ffffffff814a29e3>] device_release_driver+0x23/0x30
[<ffffffff814a1ffd>] bus_remove_device+0xfd/0x170
[<ffffffff8149e5a9>] device_del+0x139/0x270
[<ffffffff814a5028>] platform_device_del+0x28/0x90
[<ffffffff814a50a2>] platform_device_unregister+0x12/0x30
[<ffffffffa00a4209>] asus_wmi_unregister_driver+0x19/0x30 [asus_wmi]
[<ffffffffa00da0ea>] asus_nb_wmi_exit+0x10/0xf26 [asus_nb_wmi]
[<ffffffff8110c692>] SyS_delete_module+0x192/0x270
[<ffffffff810022b2>] ? exit_to_usermode_loop+0x92/0xa0
[<ffffffff816ca560>] entry_SYSCALL_64_fastpath+0x13/0x94
Code: e8 5e 30 00 00 8b 03 83 f8 01 0f 84 93 00 00 00 48 8b 43 10 4c 8d 7b 08 48 89 63 10 41 be ff ff ff ff 4c 89 3c 24 48 89 44 24 08 <48> 89 20 4c 89 6c 24 10 eb 1d 4c 89 e7 49 c7 45 08 02 00 00 00
RIP [<ffffffff816c7348>] __mutex_lock_slowpath+0x98/0x120
RSP <ffffc900014cfce0>
CR2: 0000000000000000
---[ end trace 8d484233fa7cb512 ]---
note: modprobe[3275] exited with preempt_count 2
https://bugzilla.kernel.org/show_bug.cgi?id=196467
Reported-by: red.f0xyz@gmail.com
Signed-off-by: João Paulo Rechi Vita <jprvita@endlessm.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Burton [Sat, 26 May 2018 15:46:47 +0000 (08:46 -0700)]
sched/core: Require cpu_active() in select_task_rq(), for user tasks
[ Upstream commit
7af443ee1697607541c6346c87385adab2214743 ]
select_task_rq() is used in a few paths to select the CPU upon which a
thread should be run - for example it is used by try_to_wake_up() & by
fork or exec balancing. As-is it allows use of any online CPU that is
present in the task's cpus_allowed mask.
This presents a problem because there is a period whilst CPUs are
brought online where a CPU is marked online, but is not yet fully
initialized - ie. the period where CPUHP_AP_ONLINE_IDLE <= state <
CPUHP_ONLINE. Usually we don't run any user tasks during this window,
but there are corner cases where this can happen. An example observed
is:
- Some user task A, running on CPU X, forks to create task B.
- sched_fork() calls __set_task_cpu() with cpu=X, setting task B's
task_struct::cpu field to X.
- CPU X is offlined.
- Task A, currently somewhere between the __set_task_cpu() in
copy_process() and the call to wake_up_new_task(), is migrated to
CPU Y by migrate_tasks() when CPU X is offlined.
- CPU X is onlined, but still in the CPUHP_AP_ONLINE_IDLE state. The
scheduler is now active on CPU X, but there are no user tasks on
the runqueue.
- Task A runs on CPU Y & reaches wake_up_new_task(). This calls
select_task_rq() with cpu=X, taken from task B's task_struct,
and select_task_rq() allows CPU X to be returned.
- Task A enqueues task B on CPU X's runqueue, via activate_task() &
enqueue_task().
- CPU X now has a user task on its runqueue before it has reached the
CPUHP_ONLINE state.
In most cases, the user tasks that schedule on the newly onlined CPU
have no idea that anything went wrong, but one case observed to be
problematic is if the task goes on to invoke the sched_setaffinity
syscall. The newly onlined CPU reaches the CPUHP_AP_ONLINE_IDLE state
before the CPU that brought it online calls stop_machine_unpark(). This
means that for a portion of the window of time between
CPUHP_AP_ONLINE_IDLE & CPUHP_ONLINE the newly onlined CPU's struct
cpu_stopper has its enabled field set to false. If a user thread is
executed on the CPU during this window and it invokes sched_setaffinity
with a CPU mask that does not include the CPU it's running on, then when
__set_cpus_allowed_ptr() calls stop_one_cpu() intending to invoke
migration_cpu_stop() and perform the actual migration away from the CPU
it will simply return -ENOENT rather than calling migration_cpu_stop().
We then return from the sched_setaffinity syscall back to the user task
that is now running on a CPU which it just asked not to run on, and
which is not present in its cpus_allowed mask.
This patch resolves the problem by having select_task_rq() enforce that
user tasks run on CPUs that are active - the same requirement that
select_fallback_rq() already enforces. This should ensure that newly
onlined CPUs reach the CPUHP_AP_ACTIVE state before being able to
schedule user tasks, and also implies that bringup_wait_for_ap() will
have called stop_machine_unpark() which resolves the sched_setaffinity
issue above.
I haven't yet investigated them, but it may be of interest to review
whether any of the actions performed by hotplug states between
CPUHP_AP_ONLINE_IDLE & CPUHP_AP_ACTIVE could have similar unintended
effects on user tasks that might schedule before they are reached, which
might widen the scope of the problem from just affecting the behaviour
of sched_setaffinity.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20180526154648.11635-2-paul.burton@mips.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Zijlstra [Tue, 25 Jul 2017 16:58:21 +0000 (18:58 +0200)]
sched/core: Fix rules for running on online && !active CPUs
[ Upstream commit
175f0e25abeaa2218d431141ce19cf1de70fa82d ]
As already enforced by the WARN() in __set_cpus_allowed_ptr(), the rules
for running on an online && !active CPU are stricter than just being a
kthread, you need to be a per-cpu kthread.
If you're not strictly per-CPU, you have better CPUs to run on and
don't need the partially booted one to get your work done.
The exception is to allow smpboot threads to bootstrap the CPU itself
and get kernel 'services' initialized before we allow userspace on it.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes:
955dbdf4ce87 ("sched: Allow migrating kthreads into online but inactive CPUs")
Link: http://lkml.kernel.org/r/20170725165821.cejhb7v2s3kecems@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Darrick J. Wong [Thu, 31 May 2018 02:43:53 +0000 (19:43 -0700)]
fs: clear writeback errors in inode_init_always
[ Upstream commit
829bc787c1a0403e4d886296dd4d90c5f9c1744a ]
In inode_init_always(), we clear the inode mapping flags, which clears
any retained error (AS_EIO, AS_ENOSPC) bits. Unfortunately, we do not
also clear wb_err, which means that old mapping errors can leak through
to new inodes.
This is crucial for the XFS inode allocation path because we recycle old
in-core inodes and we do not want error state from an old file to leak
into the new file. This bug was discovered by running generic/036 and
generic/047 in a loop and noticing that the EIOs generated by the
collision of direct and buffered writes in generic/036 would survive the
remount between 036 and 047, and get reported to the fsyncs (on
different files!) in generic/047.
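The fix amounts to resetting the errseq_t alongside the other mapping state in inode_init_always(); roughly:
  mapping->flags = 0;
  mapping->wb_err = 0;    /* don't leak writeback errors from a prior user of this inode */
  atomic_set(&mapping->i_mmap_writable, 0);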
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
YueHaibing [Fri, 11 May 2018 11:21:42 +0000 (19:21 +0800)]
perf bpf: Fix NULL return handling in bpf__prepare_load()
[ Upstream commit
ab4e32ff5aa797eaea551dbb67946e2fcb56cc7e ]
bpf_object__open()/bpf_object__open_buffer can return error pointer or
NULL, check the return values with IS_ERR_OR_NULL() in bpf__prepare_load
and bpf__prepare_load_buffer
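In sketch form the open path then reads (simplified from the tools/perf code named above):
  obj = bpf_object__open(filename);
  if (IS_ERR_OR_NULL(obj)) {
          pr_debug("bpf: failed to load %s\n", filename);
          return ERR_PTR(-EINVAL);
  }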
Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: netdev@vger.kernel.org
Link: https://lkml.kernel.org/n/tip-psf4xwc09n62al2cb9s33v9h@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Richter [Mon, 28 May 2018 07:36:57 +0000 (09:36 +0200)]
perf test: "Session topology" dumps core on s390
[ Upstream commit
d121109100bda84bbbb199dab97f9d56432ab235 ]
The "perf test Session topology" entry fails with core dump on s390. The root
cause is a NULL pointer dereference in function check_cpu_topology() line 76
(or line 82 without -v).
The session->header.env.cpu variable is NULL because on s390 function
process_cpu_topology() returns with error:
socket_id number is too big.
You may need to upgrade the perf tool.
and releases the env.cpu variable via zfree() and sets it to NULL.
Here is the gdb output:
(gdb) n
76 pr_debug("CPU %d, core %d, socket %d\n", i,
(gdb) n
Program received signal SIGSEGV, Segmentation fault.
0x00000000010f4d9e in check_cpu_topology (path=0x3ffffffd6c8
"/tmp/perf-test-J6CHMa", map=0x14a1740) at tests/topology.c:76
76 pr_debug("CPU %d, core %d, socket %d\n", i,
(gdb)
Make sure the env.cpu variable is not used when it is NULL.
Test for NULL pointer and return TEST_SKIP if so.
Output before:
[root@p23lp27 perf]# ./perf test -F 39
39: Session topology :Segmentation fault (core dumped)
[root@p23lp27 perf]#
Output after:
[root@p23lp27 perf]# ./perf test -vF 39
39: Session topology :
--- start ---
templ file: /tmp/perf-test-Ajx59D
socket_id number is too big.You may need to upgrade the perf tool.
---- end ----
Session topology: Skip
[root@p23lp27 perf]#
Signed-off-by: Thomas Richter <tmricht@linux.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Link: http://lkml.kernel.org/r/20180528073657.11743-1-tmricht@linux.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Josh Hill [Mon, 28 May 2018 00:10:41 +0000 (20:10 -0400)]
net: qmi_wwan: Add Netgear Aircard 779S
[ Upstream commit
2415f3bd059fe050eb98aedf93664d000ceb4e92 ]
Add support for Netgear Aircard 779S
Signed-off-by: Josh Hill <josh@joshuajhill.com>
Acked-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ivan Bornyakov [Fri, 25 May 2018 17:49:52 +0000 (20:49 +0300)]
atm: zatm: fix memcmp casting
[ Upstream commit
f9c6442a8f0b1dde9e755eb4ff6fa22bcce4eabc ]
memcmp() returns int, but eprom_try_esi() casts it to unsigned char. One
can lose significant bits and get 0 from a non-0 value returned by the
memcmp().
Signed-off-by: Ivan Bornyakov <brnkv.i1@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hao Wei Tee [Tue, 29 May 2018 07:25:17 +0000 (10:25 +0300)]
iwlwifi: pcie: compare with number of IRQs requested for, not number of CPUs
[ Upstream commit
ab1068d6866e28bf6427ceaea681a381e5870a4a ]
When there are 16 or more logical CPUs, we request only
`IWL_MAX_RX_HW_QUEUES` (16) IRQs, since we limit ourselves to that number
of IRQs, but later on we compare the number of IRQs returned to
nr_online_cpus+2 instead of max_irqs, the latter being what we
actually asked for. This ends up setting num_rx_queues to 17, which
causes lots of out-of-bounds array accesses later on.
Compare to max_irqs instead, and also add an assertion in case
num_rx_queues > IWL_MAX_RX_HW_QUEUES.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=199551
Fixes:
2e5d4a8f61dc ("iwlwifi: pcie: Add new configuration to enable MSIX")
Signed-off-by: Hao Wei Tee <angelsl@in04.sg>
Tested-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Julian Anastasov [Sat, 19 May 2018 15:22:35 +0000 (18:22 +0300)]
ipvs: fix buffer overflow with sync daemon and service
[ Upstream commit
52f96757905bbf0edef47f3ee6c7c784e7f8ff8a ]
syzkaller reports a buffer overflow for the interface name
when starting sync daemons [1].
What we do is copy the user structure into a larger stack
buffer, but later we search for the NUL past the end of the stack buffer.
The same happens for sched_name when adding/editing virtual server.
We are restricted by IP_VS_SCHEDNAME_MAXLEN and IP_VS_IFNAME_MAXLEN
being used as size in include/uapi/linux/ip_vs.h, so they
include the space for NUL.
As using strlcpy is wrong for unsafe source, replace it with
strscpy and add checks to return EINVAL if source string is not
NUL-terminated. The incomplete strlcpy fix comes from 2.6.13.
For the netlink interface reduce the len parameter for
IPVS_DAEMON_ATTR_MCAST_IFN and IPVS_SVC_ATTR_SCHED_NAME,
so that we get proper EINVAL.
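Conceptually each copy becomes something like this sketch (taken from the sync daemon path; the error-handling style is assumed):
  /* reject names that are not NUL-terminated within the uapi-sized field */
  ret = strscpy(cfg.mcast_ifn, dm->mcast_ifn, sizeof(cfg.mcast_ifn));
  if (ret <= 0)
          return -EINVAL;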
[1]
kernel BUG at lib/string.c:1052!
invalid opcode: 0000 [#1] SMP KASAN
Dumping ftrace buffer:
(ftrace buffer empty)
Modules linked in:
CPU: 1 PID: 373 Comm: syz-executor936 Not tainted 4.17.0-rc4+ #45
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
RIP: 0010:fortify_panic+0x13/0x20 lib/string.c:1051
RSP: 0018:ffff8801c976f800 EFLAGS: 00010282
RAX: 0000000000000022 RBX: 0000000000000040 RCX: 0000000000000000
RDX: 0000000000000022 RSI: ffffffff8160f6f1 RDI: ffffed00392edef6
RBP: ffff8801c976f800 R08: ffff8801cf4c62c0 R09: ffffed003b5e4fb0
R10: ffffed003b5e4fb0 R11: ffff8801daf27d87 R12: ffff8801c976fa20
R13: ffff8801c976fae4 R14: ffff8801c976fae0 R15: 000000000000048b
FS:  00007fd99f75e700(0000) GS:ffff8801daf00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000200001c0 CR3: 00000001d6843000 CR4: 00000000001406e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
strlen include/linux/string.h:270 [inline]
strlcpy include/linux/string.h:293 [inline]
do_ip_vs_set_ctl+0x31c/0x1d00 net/netfilter/ipvs/ip_vs_ctl.c:2388
nf_sockopt net/netfilter/nf_sockopt.c:106 [inline]
nf_setsockopt+0x7d/0xd0 net/netfilter/nf_sockopt.c:115
ip_setsockopt+0xd8/0xf0 net/ipv4/ip_sockglue.c:1253
udp_setsockopt+0x62/0xa0 net/ipv4/udp.c:2487
ipv6_setsockopt+0x149/0x170 net/ipv6/ipv6_sockglue.c:917
tcp_setsockopt+0x93/0xe0 net/ipv4/tcp.c:3057
sock_common_setsockopt+0x9a/0xe0 net/core/sock.c:3046
__sys_setsockopt+0x1bd/0x390 net/socket.c:1903
__do_sys_setsockopt net/socket.c:1914 [inline]
__se_sys_setsockopt net/socket.c:1911 [inline]
__x64_sys_setsockopt+0xbe/0x150 net/socket.c:1911
do_syscall_64+0x1b1/0x800 arch/x86/entry/common.c:287
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x447369
RSP: 002b:00007fd99f75dda8 EFLAGS: 00000246 ORIG_RAX: 0000000000000036
RAX: ffffffffffffffda RBX: 00000000006e39e4 RCX: 0000000000447369
RDX: 000000000000048b RSI: 0000000000000000 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000018 R09: 0000000000000000
R10: 00000000200001c0 R11: 0000000000000246 R12: 00000000006e39e0
R13: 75a1ff93f0896195 R14: 6f745f3168746576 R15: 0000000000000001
Code: 08 5b 41 5c 41 5d 41 5e 41 5f 5d c3 0f 0b 48 89 df e8 d2 8f 48 fa eb de 55 48 89 fe 48 c7 c7 60 65 64 88 48 89 e5 e8 91 dd f3 f9 <0f> 0b 90 90 90 90 90 90 90 90 90 90 90 55 48 89 e5 41 57 41 56
RIP: fortify_panic+0x13/0x20 lib/string.c:1051 RSP: ffff8801c976f800
Reported-and-tested-by: syzbot+aac887f77319868646df@syzkaller.appspotmail.com
Fixes:
e4ff67513096 ("ipvs: add sync_maxlen parameter for the sync daemon")
Fixes:
4da62fc70d7c ("[IPVS]: Fix for overflows")
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Acked-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pablo Neira Ayuso [Wed, 16 May 2018 20:58:33 +0000 (22:58 +0200)]
netfilter: nft_limit: fix packet ratelimiting
[ Upstream commit
3e0f64b7dd3149f75e8652ff1df56cffeedc8fc1 ]
Credit calculations for the packet ratelimiting are not correct, as per
the applied ratelimit of 25/second and burst 8, a total of 33 packets
should have been accepted. This is true in iptables(33) but not in
nftables (~65). For packet ratelimiting, use:
div_u64(limit->nsecs, limit->rate) * limit->burst;
to calculate credit, just like in iptables' xt_limit does.
Moreover, use default burst in iptables, users are expecting similar
behaviour.
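A simplified sketch of the packet-mode credit setup (field names assumed; the formula is the one quoted above):
  /* per-packet cost is nsecs/rate; grant a full burst worth of credit
   * up front, as xt_limit does */
  limit->tokens = div_u64(limit->nsecs, limit->rate) * limit->burst;
  limit->tokens_max = limit->tokens;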
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Ott [Tue, 15 May 2018 12:05:13 +0000 (14:05 +0200)]
s390/dasd: use blk_mq_rq_from_pdu for per request data
[ Upstream commit
f0f59a2fab8e52b9d582b39da39f22230ca80aee ]
Dasd uses completion_data from struct request to store per request
private data - this is problematic since this member is part of a
union which is also used by IO schedulers.
Let the block layer maintain space for per request data behind each
struct request.
Fixes crashes on block layer timeouts like this one:
Unable to handle kernel pointer dereference in virtual kernel address space
Failing address: 0000000000000000 TEID: 0000000000000483
Fault in home space mode while using kernel ASCE.
AS:0000000001308007 R3:00000000fffc8007 S:00000000fffcc000 P:000000000000013d
Oops: 0004 ilc:2 [#1] PREEMPT SMP
Modules linked in: [...]
CPU: 0 PID: 1480 Comm: kworker/0:2H Not tainted
4.17.0-rc4-00046-gaa3bcd43b5af #203
Hardware name: IBM 3906 M02 702 (LPAR)
Workqueue: kblockd blk_mq_timeout_work
Krnl PSW : 0000000067ac406b 00000000b6960308 (do_raw_spin_trylock+0x30/0x78)
           R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:2 PM:0 RI:0 EA:3
Krnl GPRS: 0000000000000c00 0000000000000000 0000000000000000 0000000000000001
0000000000b9d3c8 0000000000000000 0000000000000001 00000000cf9639d8
0000000000000000 0700000000000000 0000000000000000 000000000099f09e
0000000000000000 000000000076e9d0 000000006247bb08 000000006247bae0
Krnl Code: 00000000001c159c: b90400c2	lgr	%r12,%r2
           00000000001c15a0: a7180000	lhi	%r1,0
          #00000000001c15a4: 583003a4	l	%r3,932
          >00000000001c15a8: ba132000	cs	%r1,%r3,0(%r2)
           00000000001c15ac: a7180001	lhi	%r1,1
           00000000001c15b0: a784000b	brc	8,1c15c6
           00000000001c15b4: c0e5004e72aa	brasl	%r14,b8fb08
           00000000001c15ba: 1812	lr	%r1,%r2
Call Trace:
([<0700000000000000>] 0x700000000000000)
 [<0000000000b9d3d2>] _raw_spin_lock_irqsave+0x7a/0xb8
 [<000000000099f09e>] dasd_times_out+0x46/0x278
 [<000000000076ea6e>] blk_mq_terminate_expired+0x9e/0x108
 [<000000000077497a>] bt_for_each+0x102/0x130
 [<0000000000774e54>] blk_mq_queue_tag_busy_iter+0x74/0xd8
 [<000000000076fea0>] blk_mq_timeout_work+0x260/0x320
 [<0000000000169dd4>] process_one_work+0x3bc/0x708
 [<000000000016a382>] worker_thread+0x262/0x408
 [<00000000001723a8>] kthread+0x160/0x178
 [<0000000000b9e73a>] kernel_thread_starter+0x6/0xc
 [<0000000000b9e734>] kernel_thread_starter+0x0/0xc
INFO: lockdep is turned off.
Last Breaking-Event-Address:
[<0000000000b9d3cc>] _raw_spin_lock_irqsave+0x74/0xb8
Kernel panic - not syncing: Fatal exception: panic_on_oops
Signed-off-by: Sebastian Ott <sebott@linux.ibm.com>
Reviewed-by: Stefan Haberland <sth@linux.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paolo Abeni [Fri, 27 Apr 2018 08:45:31 +0000 (10:45 +0200)]
netfilter: ebtables: handle string from userspace with care
[ Upstream commit
94c752f99954797da583a84c4907ff19e92550a4 ]
strlcpy() can't be safely used on a user-space provided string,
as it can try to read beyond the buffer's end, if the latter is
not NULL terminated.
Leveraging the above, syzbot has been able to trigger the following
splat:
BUG: KASAN: stack-out-of-bounds in strlcpy include/linux/string.h:300
[inline]
BUG: KASAN: stack-out-of-bounds in compat_mtw_from_user
net/bridge/netfilter/ebtables.c:1957 [inline]
BUG: KASAN: stack-out-of-bounds in ebt_size_mwt
net/bridge/netfilter/ebtables.c:2059 [inline]
BUG: KASAN: stack-out-of-bounds in size_entry_mwt
net/bridge/netfilter/ebtables.c:2155 [inline]
BUG: KASAN: stack-out-of-bounds in compat_copy_entries+0x96c/0x14a0
net/bridge/netfilter/ebtables.c:2194
Write of size 33 at addr ffff8801b0abf888 by task syz-executor0/4504
CPU: 0 PID: 4504 Comm: syz-executor0 Not tainted 4.17.0-rc2+ #40
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x1b9/0x294 lib/dump_stack.c:113
print_address_description+0x6c/0x20b mm/kasan/report.c:256
kasan_report_error mm/kasan/report.c:354 [inline]
kasan_report.cold.7+0x242/0x2fe mm/kasan/report.c:412
check_memory_region_inline mm/kasan/kasan.c:260 [inline]
check_memory_region+0x13e/0x1b0 mm/kasan/kasan.c:267
memcpy+0x37/0x50 mm/kasan/kasan.c:303
strlcpy include/linux/string.h:300 [inline]
compat_mtw_from_user net/bridge/netfilter/ebtables.c:1957 [inline]
ebt_size_mwt net/bridge/netfilter/ebtables.c:2059 [inline]
size_entry_mwt net/bridge/netfilter/ebtables.c:2155 [inline]
compat_copy_entries+0x96c/0x14a0 net/bridge/netfilter/ebtables.c:2194
compat_do_replace+0x483/0x900 net/bridge/netfilter/ebtables.c:2285
compat_do_ebt_set_ctl+0x2ac/0x324 net/bridge/netfilter/ebtables.c:2367
compat_nf_sockopt net/netfilter/nf_sockopt.c:144 [inline]
compat_nf_setsockopt+0x9b/0x140 net/netfilter/nf_sockopt.c:156
compat_ip_setsockopt+0xff/0x140 net/ipv4/ip_sockglue.c:1279
inet_csk_compat_setsockopt+0x97/0x120 net/ipv4/inet_connection_sock.c:1041
compat_tcp_setsockopt+0x49/0x80 net/ipv4/tcp.c:2901
compat_sock_common_setsockopt+0xb4/0x150 net/core/sock.c:3050
__compat_sys_setsockopt+0x1ab/0x7c0 net/compat.c:403
__do_compat_sys_setsockopt net/compat.c:416 [inline]
__se_compat_sys_setsockopt net/compat.c:413 [inline]
__ia32_compat_sys_setsockopt+0xbd/0x150 net/compat.c:413
do_syscall_32_irqs_on arch/x86/entry/common.c:323 [inline]
do_fast_syscall_32+0x345/0xf9b arch/x86/entry/common.c:394
entry_SYSENTER_compat+0x70/0x7f arch/x86/entry/entry_64_compat.S:139
RIP: 0023:0xf7fb3cb9
RSP: 002b:00000000fff0c26c EFLAGS: 00000282 ORIG_RAX: 000000000000016e
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 0000000000000000
RDX: 0000000000000080 RSI: 0000000020000300 RDI: 00000000000005f4
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
The buggy address belongs to the page:
page:ffffea0006c2afc0 count:0 mapcount:0 mapping:0000000000000000 index:0x0
flags: 0x2fffc0000000000()
raw: 02fffc0000000000 0000000000000000 0000000000000000 00000000ffffffff
raw: 0000000000000000 ffffea0006c20101 0000000000000000 0000000000000000
page dumped because: kasan: bad access detected
Fix the issue replacing the unsafe function with strscpy() and
taking care of possible errors.
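In sketch form (destination and source field names assumed), the copy in compat_mtw_from_user() becomes:
  /* strscpy() never reads past the destination size and reports truncation,
   * so a non-NUL-terminated user string is rejected instead of overread */
  if (strscpy(name, mwt->u.name, sizeof(name)) < 0)
          return -EINVAL;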
Fixes:
81e675c227ec ("netfilter: ebtables: add CONFIG_COMPAT support")
Reported-and-tested-by: syzbot+4e42a04e0bc33cb6c087@syzkaller.appspotmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Howells [Wed, 16 May 2018 20:25:46 +0000 (21:25 +0100)]
afs: Fix directory permissions check
[ Upstream commit
378831e4daec75fbba6d3612bcf3b4dd00ddbf08 ]
Doing faccessat("/afs/some/directory", 0) triggers a BUG in the permissions
check code.
Fix this by just removing the BUG section. If no permissions are asked
for, just return okay if the file exists.
Also:
(1) Split up the directory check so that it has separate if-statements
rather than if-else-if (e.g. checking for MAY_EXEC shouldn't skip the
check for MAY_READ and MAY_WRITE).
(2) Check for MAY_CHDIR as MAY_EXEC.
Without the main fix, the following BUG may occur:
kernel BUG at fs/afs/security.c:386!
invalid opcode: 0000 [#1] SMP PTI
...
RIP: 0010:afs_permission+0x19d/0x1a0 [kafs]
...
Call Trace:
? inode_permission+0xbe/0x180
? do_faccessat+0xdc/0x270
? do_syscall_64+0x60/0x1f0
? entry_SYSCALL_64_after_hwframe+0x49/0xbe
Fixes:
00d3b7a4533e ("[AFS]: Add security support.")
Reported-by: Jonathan Billings <jsbillings@jsbillings.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Sat, 12 May 2018 09:49:30 +0000 (02:49 -0700)]
xfrm6: avoid potential infinite loop in _decode_session6()
[ Upstream commit
d9f92772e8ec388d070752ee8f187ef8fa18621f ]
syzbot found a way to trigger an infinite loop by overflowing the
@offset variable, which has been forced to use u16 for some very
obscure reason in the past.
We probably want to look at NEXTHDR_FRAGMENT handling which looks
wrong, in a separate patch.
In net-next, we shall try to use skb_header_pointer() instead of
pskb_may_pull().
watchdog: BUG: soft lockup - CPU#1 stuck for 134s! [syz-executor738:4553]
Modules linked in:
irq event stamp: 13885653
hardirqs last enabled at (13885652): [<ffffffff878009d5>] restore_regs_and_return_to_kernel+0x0/0x2b
hardirqs last disabled at (13885653): [<ffffffff87800905>] interrupt_entry+0xb5/0xf0 arch/x86/entry/entry_64.S:625
softirqs last enabled at (13614028): [<ffffffff84df0809>] tun_napi_alloc_frags drivers/net/tun.c:1478 [inline]
softirqs last enabled at (13614028): [<ffffffff84df0809>] tun_get_user+0x1dd9/0x4290 drivers/net/tun.c:1825
softirqs last disabled at (13614032): [<ffffffff84df1b6f>] tun_get_user+0x313f/0x4290 drivers/net/tun.c:1942
CPU: 1 PID: 4553 Comm: syz-executor738 Not tainted 4.17.0-rc3+ #40
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:check_kcov_mode kernel/kcov.c:67 [inline]
RIP: 0010:__sanitizer_cov_trace_pc+0x20/0x50 kernel/kcov.c:101
RSP: 0018:ffff8801d8cfe250 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
RAX: ffff8801d88a8080 RBX: ffff8801d7389e40 RCX: 0000000000000006
RDX: 0000000000000000 RSI: ffffffff868da4ad RDI: ffff8801c8a53277
RBP: ffff8801d8cfe250 R08: ffff8801d88a8080 R09: ffff8801d8cfe3e8
R10: ffffed003b19fc87 R11: ffff8801d8cfe43f R12: ffff8801c8a5327f
R13: 0000000000000000 R14: ffff8801c8a4e5fe R15: ffff8801d8cfe3e8
FS:  0000000000d88940(0000) GS:ffff8801daf00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffff600400 CR3: 00000001acab3000 CR4: 00000000001406e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
_decode_session6+0xc1d/0x14f0 net/ipv6/xfrm6_policy.c:150
__xfrm_decode_session+0x71/0x140 net/xfrm/xfrm_policy.c:2368
xfrm_decode_session_reverse include/net/xfrm.h:1213 [inline]
icmpv6_route_lookup+0x395/0x6e0 net/ipv6/icmp.c:372
icmp6_send+0x1982/0x2da0 net/ipv6/icmp.c:551
icmpv6_send+0x17a/0x300 net/ipv6/ip6_icmp.c:43
ip6_input_finish+0x14e1/0x1a30 net/ipv6/ip6_input.c:305
NF_HOOK include/linux/netfilter.h:288 [inline]
ip6_input+0xe1/0x5e0 net/ipv6/ip6_input.c:327
dst_input include/net/dst.h:450 [inline]
ip6_rcv_finish+0x29c/0xa10 net/ipv6/ip6_input.c:71
NF_HOOK include/linux/netfilter.h:288 [inline]
ipv6_rcv+0xeb8/0x2040 net/ipv6/ip6_input.c:208
__netif_receive_skb_core+0x2468/0x3650 net/core/dev.c:4646
__netif_receive_skb+0x2c/0x1e0 net/core/dev.c:4711
netif_receive_skb_internal+0x126/0x7b0 net/core/dev.c:4785
napi_frags_finish net/core/dev.c:5226 [inline]
napi_gro_frags+0x631/0xc40 net/core/dev.c:5299
tun_get_user+0x3168/0x4290 drivers/net/tun.c:1951
tun_chr_write_iter+0xb9/0x154 drivers/net/tun.c:1996
call_write_iter include/linux/fs.h:1784 [inline]
do_iter_readv_writev+0x859/0xa50 fs/read_write.c:680
do_iter_write+0x185/0x5f0 fs/read_write.c:959
vfs_writev+0x1c7/0x330 fs/read_write.c:1004
do_writev+0x112/0x2f0 fs/read_write.c:1039
__do_sys_writev fs/read_write.c:1112 [inline]
__se_sys_writev fs/read_write.c:1109 [inline]
__x64_sys_writev+0x75/0xb0 fs/read_write.c:1109
do_syscall_64+0x1b1/0x800 arch/x86/entry/common.c:287
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Fixes:
1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Reported-by: syzbot+0053c8...@syzkaller.appspotmail.com
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Abhishek Sahu [Wed, 13 Jun 2018 09:02:36 +0000 (14:32 +0530)]
mtd: rawnand: fix return value check for bad block status
commit
e9893e6fa932f42c90c4ac5849fa9aa0f0f00a34 upstream.
A positive return value from read_oob() is creating false BAD
blocks. For some of the NAND controllers, OOB bytes will be
protected with ECC and read_oob() will return the number of bitflips.
If there is any bitflip in the ECC-protected OOB bytes of the BAD block
status page, then that block gets treated as BAD.
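The bad-block scan should therefore treat only negative values as errors; sketched:
  res = chip->ecc.read_oob(mtd, chip, page);
  if (res < 0)            /* a real I/O or ECC failure */
          return res;
  /* a positive value is just the corrected-bitflip count; the OOB data in
   * chip->oob_poi is still valid for the bad-block marker check */
  bad = chip->oob_poi[chip->badblockpos];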
Fixes:
c120e75e0e7d ("mtd: nand: use read_oob() instead of cmdfunc() for bad block check")
Cc: <stable@vger.kernel.org>
Signed-off-by: Abhishek Sahu <absahu@codeaurora.org>
Reviewed-by: Miquel Raynal <miquel.raynal@bootlin.com>
Signed-off-by: Boris Brezillon <boris.brezillon@bootlin.com>
[backported to 4.14.y]
Signed-off-by: Abhishek Sahu <absahu@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sean Nyekjaer [Tue, 22 May 2018 17:45:09 +0000 (19:45 +0200)]
ARM: dts: imx6q: Use correct SDMA script for SPI5 core
commit
df07101e1c4a29e820df02f9989a066988b160e6 upstream.
According to the reference manual the shp_2_mcu / mcu_2_shp
scripts must be used for devices connected through the SPBA.
This fixes an issue we saw with DMA transfers.
Sometimes the SPI controller RX FIFO was not empty after a DMA
transfer and the driver got stuck in the next PIO transfer when
it read one word more than expected.
commit
dd4b487b32a35 ("ARM: dts: imx6: Use correct SDMA script
for SPI cores") is fixing the same issue but only for SPI1 - 4.
Fixes:
677940258dd8e ("ARM: dts: imx6q: enable dma for ecspi5")
Signed-off-by: Sean Nyekjaer <sean.nyekjaer@prevas.dk>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Shawn Guo <shawnguo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Taehee Yoo [Mon, 11 Jun 2018 13:16:33 +0000 (22:16 +0900)]
netfilter: nf_tables: use WARN_ON_ONCE instead of BUG_ON in nft_do_chain()
commit
adc972c5b88829d38ede08b1069718661c7330ae upstream.
When the depth of a chain is bigger than NFT_JUMP_STACK_SIZE, nft_do_chain()
crashes. But there is no need to crash hard here.
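The jump-stack check becomes a warn-and-drop instead of a BUG; roughly:
  /* a chain nested deeper than the jump stack is a ruleset problem, but
   * dropping the packet is enough; no need to take down the machine */
  if (WARN_ON_ONCE(stackptr >= NFT_JUMP_STACK_SIZE))
          return NF_DROP;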
Suggested-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vincent Bernat [Sun, 20 May 2018 11:03:38 +0000 (13:03 +0200)]
netfilter: ip6t_rpfilter: provide input interface for route lookup
commit
cede24d1b21d68d84ac5a36c44f7d37daadcc258 upstream.
In commit
47b7e7f82802, this bit was removed at the same time the
RT6_LOOKUP_F_IFACE flag was removed. However, it is needed when
link-local addresses are used, which is a very common case: when
packets are routed, neighbor solicitations are done using link-local
addresses. For example, the following neighbor solicitation is not
matched by "-m rpfilter":
IP6 fe80::5254:33ff:fe00:1 > ff02::1:ff00:3: ICMP6, neighbor
solicitation, who has 2001:db8::5254:33ff:fe00:3, length 32
Commit
47b7e7f82802 doesn't quite explain why we shouldn't use
RT6_LOOKUP_F_IFACE in the rpfilter case. I suppose the interface check
later in the function would make it redundant. However, the remaining
of the routing code is using RT6_LOOKUP_F_IFACE when there is no
source address (which matches rpfilter's case with a non-unicast
destination, like with neighbor solicitation).
Signed-off-by: Vincent Bernat <vincent@bernat.im>
Fixes:
47b7e7f82802 ("netfilter: don't set F_IFACE on ipv6 fib lookups")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Westphal [Wed, 14 Feb 2018 23:23:05 +0000 (00:23 +0100)]
netfilter: don't set F_IFACE on ipv6 fib lookups
commit
47b7e7f82802dced3ac73658bf4b77584a63063f upstream.
"fib" starts to behave strangely when an ipv6 default route is
added - the FIB lookup returns a route using 'oif' in this case.
This behaviour was inherited from ip6tables rpfilter so change
this as well.
Bugzilla: https://bugzilla.netfilter.org/show_bug.cgi?id=1221
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Thu, 19 Oct 2017 01:49:15 +0000 (12:49 +1100)]
md: remove special meaning of ->quiesce(.., 2)
commit
b03e0ccb5ab9df3efbe51c87843a1ffbecbafa1f upstream.
The '2' argument means "wake up anything that is waiting".
This is an inelegant part of the design and was added
to help support management of suspend_lo/suspend_hi setting.
Now that suspend_lo/hi is managed in mddev_suspend/resume,
that need is gone.
There are still a couple of places where we call 'quiesce'
with an argument of '2', but they can safely be changed to
call ->quiesce(.., 1); ->quiesce(.., 0), which
achieves the same result at the small cost of pausing IO
briefly.
This removes a small "optimization" from suspend_{hi,lo}_store,
but it isn't clear that optimization served a useful purpose.
The code now is a lot clearer.
Suggested-by: Shaohua Li <shli@kernel.org>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jack Wang <jinpu.wang@profitbricks.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Tue, 17 Oct 2017 02:46:43 +0000 (13:46 +1100)]
md: allow metadata update while suspending.
commit
35bfc52187f6df8779d0f1cebdb52b7f797baf4e upstream.
There are various deadlocks that can occur
when a thread holds reconfig_mutex and calls
->quiesce(mddev, 1).
As some write requests block waiting for
metadata to be updated (e.g. to record a device
failure), and as the md thread updates the metadata
while the reconfig mutex is held, holding the mutex
can stop write requests from completing, and this prevents
->quiesce(mddev, 1) from completing.
->quiesce() is now usually called from mddev_suspend(),
and it is always called with reconfig_mutex held. So
at this time it is safe for the thread to update metadata
without explicitly taking the lock.
So add 2 new flags, one which says that unlocked updates are
allowed, and one which says that one is happening. Then allow it
while the quiesce completes, and then wait for it to finish.
Reported-and-tested-by: Xiao Ni <xni@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jack Wang <jinpu.wang@profitbricks.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Tue, 17 Oct 2017 02:46:43 +0000 (13:46 +1100)]
md: use mddev_suspend/resume instead of ->quiesce()
commit
9e1cc0a54556a6c63dc0cfb7cd7d60d43337bba6 upstream.
mddev_suspend() is a more general interface than
calling ->quiesce() and is so more extensible. A
future patch will make use of this.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jack Wang <jinpu.wang@profitbricks.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Tue, 17 Oct 2017 02:46:43 +0000 (13:46 +1100)]
md: move suspend_hi/lo handling into core md code
commit
b3143b9a38d5039bcd1f2d1c94039651bfba8043 upstream.
Responding to ->suspend_lo and ->suspend_hi is similar
to responding to ->suspended. It is best to wait in
the common core code without incrementing ->active_io.
This allows mddev_suspend()/mddev_resume() to work while
requests are waiting for suspend_lo/hi to change.
This will be important after a subsequent patch
which uses mddev_suspend() to synchronize updating for
suspend_lo/hi.
So move the code for testing suspend_lo/hi out of raid1.c
and raid5.c, and place it in md.c
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jack Wang <jinpu.wang@profitbricks.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Tue, 17 Oct 2017 02:46:43 +0000 (13:46 +1100)]
md: don't call bitmap_create() while array is quiesced.
commit
52a0d49de3d592a3118e13f35985e3d99eaf43df upstream.
bitmap_create() allocates memory with GFP_KERNEL and
so can wait for IO.
If called while the array is quiesced, it could wait indefinitely
for write out to the array - deadlock.
So call bitmap_create() before quiescing the array.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jack Wang <jinpu.wang@profitbricks.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Thu, 19 Oct 2017 01:17:16 +0000 (12:17 +1100)]
md: always hold reconfig_mutex when calling mddev_suspend()
commit
4d5324f760aacaefeb721b172aa14bf66045c332 upstream.
Most often mddev_suspend() is called with
reconfig_mutex held. Make this a requirement in
preparation a subsequent patch. Also require
reconfig_mutex to be held for mddev_resume(),
partly for symmetry and partly to guarantee
no races with incr/decr of mddev->suspend.
Taking the mutex in r5c_disable_writeback_async() is
a little tricky as this is called from a work queue
via log->disable_writeback_work, and flush_work()
is called on that while holding ->reconfig_mutex.
If the work item hasn't run before flush_work()
is called, the work function will not be able to
get the mutex.
So we use mddev_trylock() inside the wait_event() call, and have that
abort when conf->log is set to NULL, which happens before
flush_work() is called.
We wait in mddev->sb_wait and ensure this is woken
when any of the conditions change. This requires
waking mddev->sb_wait in mddev_unlock(). This is only
like to trigger extra wake_ups of threads that needn't
be woken when metadata is being written, and that
doesn't happen often enough that the cost would be
noticeable.
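A condensed sketch of that wait (simplified, not the verbatim raid5-cache code):

  bool locked = false;

  wait_event(mddev->sb_wait,
             conf->log == NULL || (locked = mddev_trylock(mddev)));
  if (locked) {
          mddev_suspend(mddev);
          /* ... switch the log to write-through ... */
          mddev_resume(mddev);
          mddev_unlock(mddev);
  }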
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jack Wang <jinpu.wang@profitbricks.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Taehee Yoo [Mon, 28 May 2018 16:13:45 +0000 (01:13 +0900)]
netfilter: nf_tables: fix NULL-ptr in nf_tables_dump_obj()
commit
360cc79d9d299ce297b205508276285ceffc5fa8 upstream.
The table field in nft_obj_filter is not an array. In order to check
the table name, we should check whether the pointer is set.
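In other words, the dump filter should test the pointer rather than index into it; roughly (shape assumed from the description):

  /* filter->table is a pointer, not an embedded array: check it for NULL
   * instead of reading filter->table[0]. */
  if (filter && filter->table &&
      strcmp(filter->table, table->name))
          goto cont;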
Test commands:
%nft add table ip filter
%nft add counter ip filter ct1
%nft reset counters
Splat looks like:
[ 306.510504] kasan: CONFIG_KASAN_INLINE enabled
[ 306.516184] kasan: GPF could be caused by NULL-ptr deref or user memory access
[ 306.524775] general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI
[ 306.528284] Modules linked in: nft_objref nft_counter nf_tables nfnetlink ip_tables x_tables
[ 306.528284] CPU: 0 PID: 1488 Comm: nft Not tainted 4.17.0-rc4+ #17
[ 306.528284] Hardware name: To be filled by O.E.M. To be filled by O.E.M./Aptio CRB, BIOS 5.6.5 07/08/2015
[ 306.528284] RIP: 0010:nf_tables_dump_obj+0x52c/0xa70 [nf_tables]
[ 306.528284] RSP: 0018:ffff8800b6cb7520 EFLAGS: 00010246
[ 306.528284] RAX: 0000000000000000 RBX: ffff8800b6c49820 RCX: 0000000000000000
[ 306.528284] RDX: 0000000000000000 RSI: dffffc0000000000 RDI: ffffed0016d96e9a
[ 306.528284] RBP: ffff8800b6cb75c0 R08: ffffed00236fce7c R09: ffffed00236fce7b
[ 306.528284] R10: ffffffff9f6241e8 R11: ffffed00236fce7c R12: ffff880111365108
[ 306.528284] R13: 0000000000000000 R14: ffff8800b6c49860 R15: ffff8800b6c49860
[ 306.528284] FS: 00007f838b007700(0000) GS:ffff88011b600000(0000) knlGS:0000000000000000
[ 306.528284] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 306.528284] CR2: 00007ffeafabcf78 CR3: 00000000b6cbe000 CR4: 00000000001006f0
[ 306.528284] Call Trace:
[ 306.528284] netlink_dump+0x470/0xa20
[ 306.528284] __netlink_dump_start+0x5ae/0x690
[ 306.528284] ? nf_tables_getobj+0x1b3/0x740 [nf_tables]
[ 306.528284] nf_tables_getobj+0x2f5/0x740 [nf_tables]
[ 306.528284] ? nft_obj_notify+0x100/0x100 [nf_tables]
[ 306.528284] ? nf_tables_getobj+0x740/0x740 [nf_tables]
[ 306.528284] ? nf_tables_dump_flowtable_done+0x70/0x70 [nf_tables]
[ 306.528284] ? nft_obj_notify+0x100/0x100 [nf_tables]
[ 306.528284] nfnetlink_rcv_msg+0x8ff/0x932 [nfnetlink]
[ 306.528284] ? nfnetlink_rcv_msg+0x216/0x932 [nfnetlink]
[ 306.528284] netlink_rcv_skb+0x1c9/0x2f0
[ 306.528284] ? nfnetlink_bind+0x1d0/0x1d0 [nfnetlink]
[ 306.528284] ? debug_check_no_locks_freed+0x270/0x270
[ 306.528284] ? netlink_ack+0x7a0/0x7a0
[ 306.528284] ? ns_capable_common+0x6e/0x110
[ ... ]
Fixes:
e46abbcc05aa8 ("netfilter: nf_tables: Allow table names of up to 255 chars")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Acked-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Westphal [Tue, 20 Mar 2018 14:25:37 +0000 (15:25 +0100)]
netfilter: nf_tables: add missing netlink attrs to policies
commit
467697d289e7e6e1b15910d99096c0da08c56d5b upstream.
Fixes:
8aeff920dcc9 ("netfilter: nf_tables: add stateful object reference to set elements")
Fixes:
f25ad2e907f1 ("netfilter: nf_tables: prepare for expressions associated to set elements")
Fixes:
1a94e38d254b ("netfilter: nf_tables: add NFTA_RULE_ID attribute")
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Colin Ian King [Wed, 9 May 2018 12:22:56 +0000 (13:22 +0100)]
netfilter: nf_tables: fix memory leak on error exit return
commit
f0dfd7a2b35b02030949100247d851b793cb275f upstream.
Currently the -EBUSY error return path is not freeing resources
allocated earlier, leaving a memory leak. Fix this by exiting via the
error exit label err5, which performs the necessary resource
cleanup.
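Schematically, the duplicate-element check now unwinds through the existing labels instead of returning directly (the extension names shown are illustrative):

  if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA) ^
      nft_set_ext_exists(ext2, NFT_SET_EXT_DATA)) {
          err = -EBUSY;
          goto err5;      /* previously: return -EBUSY, skipping cleanup */
  }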
Detected by CoverityScan, CID#1432975 ("Resource leak")
Fixes:
9744a6fcefcb ("netfilter: nf_tables: check if same extensions are set when adding elements")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Taehee Yoo [Mon, 28 May 2018 16:14:12 +0000 (01:14 +0900)]
netfilter: nf_tables: increase nft_counters_enabled in nft_chain_stats_replace()
commit
bbb8c61f97e3a2dd91b30d3e57b7964a67569d11 upstream.
When a chain is updated, a counter can be attached. If so,
nft_counters_enabled should be increased.
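A sketch of the replace path with the missing increment (simplified; freeing of any old per-cpu stats is omitted):

  if (chain->stats) {
          /* replacing existing counters: jump label already accounted */
          rcu_assign_pointer(chain->stats, newstats);
  } else {
          /* first counters on this chain: bump the static key as well */
          rcu_assign_pointer(chain->stats, newstats);
          static_branch_inc(&nft_counters_enabled);
  }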
Test commands:
%nft add table ip filter
%nft add chain ip filter input { type filter hook input priority 4\; }
%iptables-compat -Z input
%nft delete chain ip filter input
We can see the messages below.
[ 286.443720] jump label: negative count!
[ 286.448278] WARNING: CPU: 0 PID: 1459 at kernel/jump_label.c:197 __static_key_slow_dec_cpuslocked+0x6f/0xf0
[ 286.449144] Modules linked in: nf_tables nfnetlink ip_tables x_tables
[ 286.449144] CPU: 0 PID: 1459 Comm: nft Tainted: G W 4.17.0-rc2+ #12
[ 286.449144] RIP: 0010:__static_key_slow_dec_cpuslocked+0x6f/0xf0
[ 286.449144] RSP: 0018:ffff88010e5176f0 EFLAGS: 00010286
[ 286.449144] RAX: 000000000000001b RBX: ffffffffc0179500 RCX: ffffffffb8a82522
[ 286.449144] RDX: 0000000000000001 RSI: 0000000000000008 RDI: ffff88011b7e5eac
[ 286.449144] RBP: 0000000000000000 R08: ffffed00236fce5c R09: ffffed00236fce5b
[ 286.449144] R10: ffffffffc0179503 R11: ffffed00236fce5c R12: 0000000000000000
[ 286.449144] R13: ffff88011a28e448 R14: ffff88011a28e470 R15: dffffc0000000000
[ 286.449144] FS: 00007f0384328700(0000) GS:ffff88011b600000(0000) knlGS:0000000000000000
[ 286.449144] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 286.449144] CR2: 00007f038394bf10 CR3: 0000000104a86000 CR4: 00000000001006f0
[ 286.449144] Call Trace:
[ 286.449144] static_key_slow_dec+0x6a/0x70
[ 286.449144] nf_tables_chain_destroy+0x19d/0x210 [nf_tables]
[ 286.449144] nf_tables_commit+0x1891/0x1c50 [nf_tables]
[ 286.449144] nfnetlink_rcv+0x1148/0x13d0 [nfnetlink]
[ ... ]
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pablo Neira Ayuso [Sun, 27 May 2018 19:08:13 +0000 (21:08 +0200)]
netfilter: nf_tables: disable preemption in nft_update_chain_stats()
commit
ad9d9e85072b668731f356be0a3750a3ba22a607 upstream.
This patch fixes the following splat.
[118709.054937] BUG: using smp_processor_id() in preemptible [00000000] code: test/1571
[118709.054970] caller is nft_update_chain_stats.isra.4+0x53/0x97 [nf_tables]
[118709.054980] CPU: 2 PID: 1571 Comm: test Not tainted 4.17.0-rc6+ #335
[...]
[118709.054992] Call Trace:
[118709.055011] dump_stack+0x5f/0x86
[118709.055026] check_preemption_disabled+0xd4/0xe4
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Taehee Yoo [Thu, 17 May 2018 13:49:49 +0000 (22:49 +0900)]
netfilter: nft_meta: fix wrong value dereference in nft_meta_set_eval
commit
97a0549b15a0b466c47f6a0143a490a082c64b4e upstream.
In nft_meta_set_eval(), the nftrace value is dereferenced as a u32 from sreg,
but the correct type is u8, so an incorrect value is sometimes dereferenced.
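A sketch of the corrected load (the nft_reg_load8() helper is an assumption here, not a quote from the patch):

  case NFT_META_NFTRACE:
          /* nftrace is a u8 flag: load one byte from the source register
           * instead of reinterpreting the whole slot as a u32. */
          value8 = nft_reg_load8(sreg);
          skb->nf_trace = !!value8;
          break;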
Steps to reproduce:
%nft add table ip filter
%nft add chain ip filter input { type filter hook input priority 4\; }
%nft add rule ip filter input nftrace set 0
%nft monitor
Sometimes, we can see trace messages.
trace id 16767227 ip filter input packet: iif "enp2s0"
ether saddr xx:xx:xx:xx:xx:xx ether daddr xx:xx:xx:xx:xx:xx
ip saddr 192.168.0.1 ip daddr 255.255.255.255 ip dscp cs0
ip ecn not-ect ip
trace id 16767227 ip filter input rule nftrace set 0 (verdict continue)
trace id 16767227 ip filter input verdict continue
trace id 16767227 ip filter input
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pablo Neira Ayuso [Tue, 8 May 2018 00:43:57 +0000 (02:43 +0200)]
netfilter: nf_tables: bogus EBUSY in chain deletions
commit
bb7b40aecbf778c0c83a5bd62b0f03ca9f49a618 upstream.
When removing, in one batch, both a rule that jumps to a chain and
that chain itself, this bogusly hits EBUSY. Add activate and deactivate
operations to expressions, which can be called from the preparation and
the commit/abort phases.
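The new callbacks roughly take this shape (signatures assumed from the description; existing members omitted):

  struct nft_expr_ops {
          /* ... existing callbacks ... */
          void    (*activate)(const struct nft_ctx *ctx,
                              const struct nft_expr *expr);
          void    (*deactivate)(const struct nft_ctx *ctx,
                                const struct nft_expr *expr);
  };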
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Westphal [Sat, 5 May 2018 22:47:20 +0000 (00:47 +0200)]
netfilter: nf_tables: don't assume chain stats are set when jumplabel is set
commit
009240940e84c1c089af88b454f7e804a4c5bd1b upstream.
nft_chain_stats_replace() and all other spots assume ->stats can be
NULL, but nft_update_chain_stats() does not. It must do this check:
just because the jump label is set doesn't mean all base chains have
stats assigned.
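A sketch of the guarded update described above (simplified):

  base_chain = nft_base_chain(chain);
  if (!base_chain->stats)
          return;         /* jump label on, but this chain has no stats */

  stats = this_cpu_ptr(rcu_dereference(base_chain->stats));
  u64_stats_update_begin(&stats->syncp);
  stats->pkts++;
  stats->bytes += pkt->skb->len;
  u64_stats_update_end(&stats->syncp);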
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Westphal [Mon, 7 May 2018 13:22:36 +0000 (15:22 +0200)]
netfilter: nft_compat: fix handling of large matchinfo size
commit
732a8049f365f514d0607e03938491bf6cb0d620 upstream.
Currently matchinfo gets stored in the expression, but some xt matches
are very large.
To handle those we either need to switch the nft core to kvmalloc and
increase the size limit, or allocate the info blob of large matches
separately.
This does the latter, which limits the scope of the changes to
nft_compat.
I picked a threshold of 192; this allows most matches to work as before
and handles only a few (cgroup, u32, sctp, rt) via a separate allocation.
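Schematically, match setup then picks inline or out-of-line storage (macro and field names here are illustrative, not quoted from the patch):

  #define NFT_MATCH_LARGE_THRESH  192

  if (m->matchsize > NFT_MATCH_LARGE_THRESH) {
          struct nft_xt_match_priv *priv = nft_expr_priv(expr);

          priv->info = kmalloc(XT_ALIGN(m->matchsize), GFP_KERNEL);
          if (!priv->info)
                  return -ENOMEM;
          info = priv->info;              /* large matchinfo lives out of line */
  } else {
          info = nft_expr_priv(expr);     /* small matchinfo stays inline */
  }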
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Westphal [Mon, 7 May 2018 13:22:35 +0000 (15:22 +0200)]
netfilter: nft_compat: prepare for indirect info storage
commit
8bdf164744b2c7f63561846c01cff3db597f282d upstream.
The next patch will make it possible for *info to be stored in
a separate allocation instead of the expr private area.
This removes the 'expr priv area is info blob' assumption
from the match init/destroy/eval functions.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Westphal [Wed, 2 May 2018 12:07:42 +0000 (14:07 +0200)]
netfilter: nf_tables: nft_compat: fix refcount leak on xt module
commit
b8e9dc1c75714ceb53615743e1036f76e00f5a17 upstream.
Taehee Yoo reported the following bug:
iptables-compat -I OUTPUT -m cpu --cpu 0
iptables-compat -F
lsmod |grep xt_cpu
xt_cpu 16384 1
Quote:
"When above command is given, a netlink message has two expressions that
are the cpu compat and the nft_counter.
The nft_expr_type_get() in the nf_tables_expr_parse() successes
first expression then, calls select_ops callback.
(allocates memory and holds module)
But, second nft_expr_type_get() in the nf_tables_expr_parse()
returns -EAGAIN because of request_module().
In that point, by the 'goto err1',
the 'module_put(info[i].ops->type->owner)' is called.
There is no release routine."
The core problem is that, unlike all other expressions,
nft_compat's select_ops has side effects.
1. It allocates dynamic memory which holds an nft ops struct.
In all other expressions, ops has static storage duration.
2. It grabs references to the xt module that it is supposed to
invoke.
Depending on where things go wrong, error unwinding doesn't
always do the right thing.
In the above scenario, a new nft_compat_expr is created and
the xt_cpu module gets loaded with a refcount of 1.
Due to -EAGAIN, the netlink messages get re-parsed.
When that happens, nft_compat finds that xt_cpu is already present
and increments the module refcount again.
This fixes the problem by making select_ops have no visible
side effects and by removing all extra module_get/put calls.
When select_ops creates a new nft_compat expression, the new
expression has a refcount of 0, and the xt module gets its refcount
incremented.
When an error happens, the next call finds the existing entry, but will no
longer increase the reference count -- the presence of an existing
nft_xt means we already hold a module reference.
Because nft_xt_put is only called from the nft_compat destroy hook,
it will never see the initial zero reference count:
->destroy can only be called after ->init(), and that will have increased
the refcount.
Lastly, we now free the nft_xt struct with kfree_rcu.
Otherwise, we get a use-after-free in nf_tables_rule_destroy:
  while (expr != nft_expr_last(rule) && expr->ops) {
          nf_tables_expr_destroy(ctx, expr);
          expr = nft_expr_next(expr); // here
nft_expr_next() dereferences expr->ops. This is safe
for all users, as ops have static storage duration.
In the nft_compat case, however, its ->destroy callback can
free the memory that holds the ops structure.
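A sketch of the deferred free described above (struct and field names assumed):

  static void nft_xt_put(struct nft_xt *xt)
  {
          if (refcount_dec_and_test(&xt->refcnt)) {
                  list_del(&xt->head);
                  kfree_rcu(xt, rcu_head);        /* was: kfree(xt) */
          }
  }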
Tested-by: Taehee Yoo <ap420073@gmail.com>
Reported-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>