GitHub/mt8127/android_kernel_alcatel_ttab.git
6 years ago  Input: mpr121 - set missing event capability
Akinobu Mita [Sun, 15 Jan 2017 22:44:05 +0000 (14:44 -0800)]
Input: mpr121 - set missing event capability

commit 9723ddc8fe0d76ce41fe0dc16afb241ec7d0a29d upstream.

This driver reports misc scan input events when the sensor's status
register changes.  But the event capability for them was not set during
device initialization, so these events were ignored.

This change adds the missing event capability.

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Oliver Neukum <ONeukum@suse.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  Input: mpr121 - handle multiple bits change of status register
Akinobu Mita [Sun, 15 Jan 2017 22:44:30 +0000 (14:44 -0800)]
Input: mpr121 - handle multiple bits change of status register

commit 08fea55e37f58371bffc5336a59e55d1f155955a upstream.

This driver reports input events from its interrupt handler, which is
triggered by changes in the sensor's status register.  But only a single
bit change is reported per interrupt, so if multiple bits change at
almost the same time, the other press or release events are ignored.

This fixes it by detecting all changed bits in the status register.
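
As a rough illustration of the idea (a sketch only, with hypothetical
helper and field names, not the upstream diff), the interrupt path can
XOR the previous and current status and walk every changed bit:

    #include <linux/bitops.h>
    #include <linux/input.h>

    /* Sketch: report every key whose status bit changed since last time. */
    static void report_changed_keys(struct input_dev *input,
                                    const unsigned short *keycodes,
                                    unsigned long prev, unsigned long curr,
                                    unsigned int nr_keys)
    {
            unsigned long changed = prev ^ curr;
            unsigned int bit;

            for_each_set_bit(bit, &changed, nr_keys)
                    input_report_key(input, keycodes[bit],
                                     curr & BIT(bit) ? 1 : 0);

            input_sync(input);
    }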

Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Oliver Neukum <ONeukum@suse.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  Input: tca8418 - use the interrupt trigger from the device tree
Maxime Ripard [Tue, 17 Jan 2017 21:24:22 +0000 (13:24 -0800)]
Input: tca8418 - use the interrupt trigger from the device tree

commit 259b77ef853cc375a5c9198cf81f9b79fc19413c upstream.

The TCA8418 might be used with different interrupt triggers on various
boards. This has not been working so far because the current code forces
a falling-edge trigger.

The device tree already provides a trigger type, so let's use whatever it
sets up, and since we can be loaded without DT, keep the old behaviour for
the non-DT case.
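
A minimal sketch of that logic (assuming the usual i2c_client/devm
request pattern; names are illustrative, not the verbatim upstream
change):

    #include <linux/i2c.h>
    #include <linux/interrupt.h>
    #include <linux/irq.h>

    /* Sketch: use the trigger already configured (e.g. from DT), and only
     * fall back to the old falling-edge default when none is set. */
    static unsigned long tca8418_irq_flags(struct i2c_client *client)
    {
            unsigned long irqflags = irq_get_trigger_type(client->irq);

            if (!irqflags)
                    irqflags = IRQF_TRIGGER_FALLING;

            return irqflags | IRQF_ONESHOT;
    }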

Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Oliver Neukum <ONeukum@suse.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  Input: joydev - do not report stale values on first open
Raphael Assenat [Thu, 29 Dec 2016 18:23:09 +0000 (10:23 -0800)]
Input: joydev - do not report stale values on first open

commit 45536d373a21d441bd488f618b6e3e9bfae839f3 upstream.

Postpone axis initialization to the first open instead of doing it
in joydev_connect. This is to make sure the generated startup events
are representative of the current joystick state rather than what
it was when joydev_connect() was called, potentially much earlier.
Once the first user is connected to joydev node we'll be updating
joydev->abs[] values and subsequent clients will be getting correct
initial states as well.

This solves issues with joystick driven menus that start scrolling
up each time they are started, until the user moves the joystick to
generate events. In emulator menu setups where the menu program is
restarted every time the game exits, the repeated need to move the
joystick to stop the unintended scrolling gets old rather quickly...

Signed-off-by: Raphael Assenat <raph@raphnet.net>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: Oliver Neukum <ONeukum@suse.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  Input: kbtab - validate number of endpoints before using them
Johan Hovold [Thu, 16 Mar 2017 18:41:55 +0000 (11:41 -0700)]
Input: kbtab - validate number of endpoints before using them

commit cb1b494663e037253337623bf1ef2df727883cb7 upstream.

Make sure to check the number of endpoints to avoid dereferencing a
NULL-pointer should a malicious device lack endpoints.

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  Input: iforce - validate number of endpoints before using them
Johan Hovold [Thu, 16 Mar 2017 18:34:02 +0000 (11:34 -0700)]
Input: iforce - validate number of endpoints before using them

commit 59cf8bed44a79ec42303151dd014fdb6434254bb upstream.

Make sure to check the number of endpoints to avoid dereferencing a
NULL pointer or accessing memory that lies beyond the end of the endpoint
array should a malicious device lack the expected endpoints.
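
The shape of such a guard in a USB driver's probe path is roughly the
following (illustrative sketch, not the exact diff of either driver):

    #include <linux/usb.h>

    /* Sketch: refuse to bind when the interface has fewer endpoints than
     * the driver is about to dereference. */
    static int check_num_endpoints(struct usb_interface *intf,
                                   unsigned int needed)
    {
            if (intf->cur_altsetting->desc.bNumEndpoints < needed)
                    return -ENODEV;

            return 0;
    }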

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  Input: i8042 - add noloop quirk for Dell Embedded Box PC 3000
Kai-Heng Feng [Tue, 7 Mar 2017 17:31:29 +0000 (09:31 -0800)]
Input: i8042 - add noloop quirk for Dell Embedded Box PC 3000

commit 45838660e34d90db8d4f7cbc8fd66e8aff79f4fe upstream.

The AUX port does not get detected without the noloop quirk, so an
external PS/2 mouse cannot work as a result.

With this quirk applied, the PS/2 mouse works.
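
The quirk itself is applied via DMI matching; the added table entry
looks roughly like this (a sketch; the exact DMI strings are an
assumption here):

    #include <linux/dmi.h>

    static const struct dmi_system_id i8042_noloop_sketch[] = {
            {
                    /* Dell Embedded Box PC 3000 */
                    .matches = {
                            DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
                            DMI_MATCH(DMI_PRODUCT_NAME, "Embedded Box PC 3000"),
                    },
            },
            { }
    };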

BugLink: https://bugs.launchpad.net/bugs/1591053
Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Reviewed-by: Marcos Paulo de Souza <marcos.souza.org@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  Input: xpad - use correct product id for x360w controllers
Pavel Rojtberg [Tue, 27 Dec 2016 19:44:51 +0000 (11:44 -0800)]
Input: xpad - use correct product id for x360w controllers

commit b6fc513da50c5dbc457a8ad6b58b046a6a68fd9d upstream.

Currently the controllers get the same product ID as the wireless
receiver. However, the controllers actually have their own product ID.

The patch makes the driver expose the same product ID as the Windows
driver.

This improves compatibility when running applications with WINE.

see https://github.com/paroj/xpad/issues/54

Signed-off-by: Pavel Rojtberg <rojtberg@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  HID: hid-cypress: validate length of report
Greg Kroah-Hartman [Fri, 6 Jan 2017 14:33:36 +0000 (15:33 +0100)]
HID: hid-cypress: validate length of report

commit 1ebb71143758f45dc0fa76e2f48429e13b16d110 upstream.

Make sure we have enough of a report structure to validate before
looking at it.

Reported-by: Benoit Camredon <benoit.camredon@airbus.com>
Tested-by: Benoit Camredon <benoit.camredon@airbus.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  igmp: Make igmp group member RFC 3376 compliant
Michal Tesar [Mon, 2 Jan 2017 13:38:36 +0000 (14:38 +0100)]
igmp: Make igmp group member RFC 3376 compliant

commit 7ababb782690e03b78657e27bd051e20163af2d6 upstream.

5.2. Action on Reception of a Query

 When a system receives a Query, it does not respond immediately.
 Instead, it delays its response by a random amount of time, bounded
 by the Max Resp Time value derived from the Max Resp Code in the
 received Query message.  A system may receive a variety of Queries on
 different interfaces and of different kinds (e.g., General Queries,
 Group-Specific Queries, and Group-and-Source-Specific Queries), each
 of which may require its own delayed response.

 Before scheduling a response to a Query, the system must first
 consider previously scheduled pending responses and in many cases
 schedule a combined response.  Therefore, the system must be able to
 maintain the following state:

 o A timer per interface for scheduling responses to General Queries.

 o A per-group and interface timer for scheduling responses to Group-
   Specific and Group-and-Source-Specific Queries.

 o A per-group and interface list of sources to be reported in the
   response to a Group-and-Source-Specific Query.

 When a new Query with the Router-Alert option arrives on an
 interface, provided the system has state to report, a delay for a
 response is randomly selected in the range (0, [Max Resp Time]) where
 Max Resp Time is derived from Max Resp Code in the received Query
 message.  The following rules are then used to determine if a Report
 needs to be scheduled and the type of Report to schedule.  The rules
 are considered in order and only the first matching rule is applied.

 1. If there is a pending response to a previous General Query
    scheduled sooner than the selected delay, no additional response
    needs to be scheduled.

 2. If the received Query is a General Query, the interface timer is
    used to schedule a response to the General Query after the
    selected delay.  Any previously pending response to a General
    Query is canceled.
--8<--

Currently the timer is rearmed with a new random expiration time for
every incoming query, regardless of a possibly already pending report,
which is not aligned with the RFC excerpt above.
It can also happen that a higher rate of incoming queries postpones the
report past the expiration time of the first query, causing loss of
group membership.

Now the per-interface general query timer is rearmed only when there is
no pending report already scheduled on that interface, or when the newly
selected expiration time is before that of the already scheduled report.
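
Conceptually the rearm logic becomes something like the sketch below
(simplified, with generic names; the real code lives in net/ipv4/igmp.c
and keys off the interface's pending-report state):

    #include <linux/jiffies.h>
    #include <linux/timer.h>

    /* Sketch: arm the general query timer only if no report is pending,
     * or if the freshly drawn random delay would expire earlier. */
    static void gq_start_timer(struct timer_list *timer, bool report_pending,
                               unsigned long random_delay)
    {
            unsigned long exp = jiffies + random_delay;

            if (report_pending && time_after_eq(exp, timer->expires))
                    return;         /* keep the earlier scheduled report */

            mod_timer(timer, exp);
    }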

Signed-off-by: Michal Tesar <mtesar@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  drop_monitor: consider inserted data in genlmsg_end
Reiter Wolfgang [Tue, 3 Jan 2017 00:39:10 +0000 (01:39 +0100)]
drop_monitor: consider inserted data in genlmsg_end

commit 3b48ab2248e61408910e792fe84d6ec466084c1a upstream.

Final nlmsg_len field update must reflect inserted net_dm_drop_point
data.

This patch depends on previous patch:
"drop_monitor: add missing call to genlmsg_end"

Signed-off-by: Reiter Wolfgang <wr0112358@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  drop_monitor: add missing call to genlmsg_end
Reiter Wolfgang [Sat, 31 Dec 2016 20:11:57 +0000 (21:11 +0100)]
drop_monitor: add missing call to genlmsg_end

commit 4200462d88f47f3759bdf4705f87e207b0f5b2e4 upstream.

Update the nlmsg_len field with genlmsg_end to enable userspace
processing using the nlmsg_next helper. Also add error handling.
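
For context, a generic netlink message built with genlmsg_put() is only
well-formed for userspace iteration once genlmsg_end() has fixed up
nlmsg_len; a minimal sketch (helper names here are illustrative):

    #include <net/genetlink.h>
    #include <linux/net_dropmon.h>

    /* Sketch: start a drop-monitor alert message ... */
    static void *start_alert(struct sk_buff *skb, struct genl_family *family)
    {
            return genlmsg_put(skb, 0, 0, family, 0, NET_DM_CMD_ALERT);
    }

    /* ... and close it so nlmsg_len covers all attributes added since. */
    static void finish_alert(struct sk_buff *skb, void *hdr)
    {
            genlmsg_end(skb, hdr);
    }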

Signed-off-by: Reiter Wolfgang <wr0112358@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  netvsc: reduce maximum GSO size
stephen hemminger [Tue, 6 Dec 2016 21:43:54 +0000 (13:43 -0800)]
netvsc: reduce maximum GSO size

commit a50af86dd49ee1851d1ccf06dd0019c05b95e297 upstream.

Hyper-V (and Azure) support using NVGRE which requires some extra space
for encapsulation headers. Because of this the largest allowed TSO
packet is reduced.

For older releases, hard code a fixed reduced value.  For next release,
there is a better solution which uses result of host offload
negotiation.

Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  tick/broadcast: Prevent NULL pointer dereference
Thomas Gleixner [Thu, 15 Dec 2016 11:10:37 +0000 (12:10 +0100)]
tick/broadcast: Prevent NULL pointer dereference

commit c1a9eeb938b5433947e5ea22f89baff3182e7075 upstream.

When a dysfunctional timer, e.g. a dummy timer, is installed, the tick
core tries to set up the broadcast timer.

If no broadcast device is installed, the kernel crashes with a NULL pointer
dereference in tick_broadcast_setup_oneshot() because the function has no
sanity check.

Reported-by: Mason <slash.tmp@free.fr>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Anna-Maria Gleixner <anna-maria@linutronix.de>
Cc: Richard Cochran <rcochran@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
Cc: Sebastian Frias <sf84@laposte.net>
Cc: Thibaud Cornic <thibaud_cornic@sigmadesigns.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Link: http://lkml.kernel.org/r/1147ef90-7877-e4d2-bb2b-5c4fa8d3144b@free.fr
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  net: ti: cpmac: Fix compiler warning due to type confusion
Paul Burton [Fri, 2 Sep 2016 14:22:48 +0000 (15:22 +0100)]
net: ti: cpmac: Fix compiler warning due to type confusion

commit 2f5281ba2a8feaf6f0aee93356f350855bb530fc upstream.

cpmac_start_xmit() used the max() macro on skb->len (an unsigned int)
and ETH_ZLEN (a signed int literal). This led to the following compiler
warning:

  In file included from include/linux/list.h:8:0,
                   from include/linux/module.h:9,
                   from drivers/net/ethernet/ti/cpmac.c:19:
  drivers/net/ethernet/ti/cpmac.c: In function 'cpmac_start_xmit':
  include/linux/kernel.h:748:17: warning: comparison of distinct pointer
  types lacks a cast
    (void) (&_max1 == &_max2);  \
                   ^
  drivers/net/ethernet/ti/cpmac.c:560:8: note: in expansion of macro 'max'
    len = max(skb->len, ETH_ZLEN);
          ^

On top of this, it assigned the result of the max() macro to a signed
integer whilst all further uses of it result in it being cast to varying
widths of unsigned integer.

Fix this up by using max_t to ensure the comparison is performed as
unsigned integers, and for consistency change the type of the len
variable to unsigned int.
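
The change boils down to an unsigned comparison, roughly (sketch):

    #include <linux/if_ether.h>
    #include <linux/kernel.h>
    #include <linux/skbuff.h>

    /* Sketch: compare as unsigned and keep the result unsigned. */
    static unsigned int cpmac_tx_len(const struct sk_buff *skb)
    {
            return max_t(unsigned int, skb->len, ETH_ZLEN);
    }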

Signed-off-by: Paul Burton <paul.burton@imgtec.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  cred/userns: define current_user_ns() as a function
Arnd Bergmann [Tue, 22 Mar 2016 21:27:11 +0000 (14:27 -0700)]
cred/userns: define current_user_ns() as a function

commit 0335695dfa4df01edff5bb102b9a82a0668ee51e upstream.

The current_user_ns() macro currently returns &init_user_ns when user
namespaces are disabled, and that causes several warnings when building
with gcc-6.0 in code that compares the result of the macro to
&init_user_ns itself:

  fs/xfs/xfs_ioctl.c: In function 'xfs_ioctl_setattr_check_projid':
  fs/xfs/xfs_ioctl.c:1249:22: error: self-comparison always evaluates to true [-Werror=tautological-compare]
    if (current_user_ns() == &init_user_ns)

This is a legitimate warning in principle, but here it isn't really
helpful, so I'm rephrasing the definition in a way that shuts up the
warning.  Apparently gcc only warns when comparing identical literals,
but it can figure out that the result of an inline function can be
identical to a constant expression in order to optimize a condition, yet
not warn about the fact that the condition is known at compile time.
This is exactly what we want here, and it looks reasonable because we
generally prefer inline functions over macros anyway.
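
The !CONFIG_USER_NS fallback goes from a macro to an inline function
along these lines (sketch of the header change, shown without its
surrounding context):

    /* Before: #define current_user_ns() (&init_user_ns) */
    static inline struct user_namespace *current_user_ns(void)
    {
            return &init_user_ns;
    }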

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Yaowei Bai <baiyaowei@cmss.chinamobile.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  ftrace/x86: Set ftrace_stub to weak to prevent gcc from using short jumps to it
Steven Rostedt [Tue, 17 May 2016 03:00:35 +0000 (23:00 -0400)]
ftrace/x86: Set ftrace_stub to weak to prevent gcc from using short jumps to it

commit 8329e818f14926a6040df86b2668568bde342ebf upstream.

Matt Fleming reported seeing crashes when enabling and disabling
function profiling which uses function graph tracer. Later Namhyung Kim
hit a similar issue and he found that the issue was due to the jmp to
ftrace_stub in ftrace_graph_call was only two bytes, and when it was
changed to jump to the tracing code, it overwrote the ftrace_stub that
was after it.

Masami Hiramatsu bisected this down to a binutils change:

8dcea93252a9ea7dff57e85220a719e2a5e8ab41 is the first bad commit
commit 8dcea93252a9ea7dff57e85220a719e2a5e8ab41
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Fri May 15 03:17:31 2015 -0700

    Add -mshared option to x86 ELF assembler

    This patch adds -mshared option to x86 ELF assembler.  By default,
    assembler will optimize out non-PLT relocations against defined non-weak
    global branch targets with default visibility.  The -mshared option tells
    the assembler to generate code which may go into a shared library
    where all non-weak global branch targets with default visibility can
    be preempted.  The resulting code is slightly bigger.  This option
    only affects the handling of branch instructions.

Declaring ftrace_stub as a weak call prevents gas from using two byte
jumps to it, which would be converted to a jump to the function graph
code.

Link: http://lkml.kernel.org/r/20160516230035.1dbae571@gandalf.local.home
Reported-by: Matt Fleming <matt@codeblueprint.co.uk>
Reported-by: Namhyung Kim <namhyung@kernel.org>
Tested-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  sg_write()/bsg_write() is not fit to be called under KERNEL_DS
Al Viro [Fri, 16 Dec 2016 18:42:06 +0000 (13:42 -0500)]
sg_write()/bsg_write() is not fit to be called under KERNEL_DS

commit 128394eff343fc6d2f32172f03e24829539c5835 upstream.

Both damn things interpret userland pointers embedded into the payload;
worse, they are actually traversing those.  Leaving aside the bad
API design, this is very much _not_ safe to call with KERNEL_DS.
Bail out early if that happens.
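
The bail-out amounts to an address-space check at the top of the write
paths, roughly (a sketch using the uaccess helpers of that era):

    #include <linux/uaccess.h>

    /* Sketch: refuse to interpret user pointers from the payload when the
     * caller is running under set_fs(KERNEL_DS). */
    static bool called_with_kernel_ds(void)
    {
            return segment_eq(get_fs(), KERNEL_DS);
    }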

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  powerpc/ps3: Fix system hang with GCC 5 builds
Geoff Levand [Tue, 29 Nov 2016 18:47:32 +0000 (10:47 -0800)]
powerpc/ps3: Fix system hang with GCC 5 builds

commit 6dff5b67054e17c91bd630bcdda17cfca5aa4215 upstream.

GCC 5 generates different code for this bootwrapper null check that
causes the PS3 to hang very early in its bootup. This check is of
limited value, so just get rid of it.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  nfs_write_end(): fix handling of short copies
Al Viro [Tue, 6 Sep 2016 01:42:32 +0000 (21:42 -0400)]
nfs_write_end(): fix handling of short copies

commit c0cf3ef5e0f47e385920450b245d22bead93e7ad upstream.

What matters when deciding if we should make a page uptodate is
not how much we _wanted_ to copy, but how much we actually have
copied.  As it is, on architectures that do not zero tail on
short copy we can leave uninitialized data in page marked uptodate.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  libceph: verify authorize reply on connect
Ilya Dryomov [Fri, 2 Dec 2016 15:35:09 +0000 (16:35 +0100)]
libceph: verify authorize reply on connect

commit 5c056fdc5b474329037f2aa18401bd73033e0ce0 upstream.

After sending an authorizer (ceph_x_authorize_a + ceph_x_authorize_b),
the client gets back a ceph_x_authorize_reply, which it is supposed to
verify to ensure the authenticity and protect against replay attacks.
The code for doing this is there (ceph_x_verify_authorizer_reply(),
ceph_auth_verify_authorizer_reply() + plumbing), but it is never
invoked by the messenger.

AFAICT this goes back to 2009, when ceph authentication protocols
support was added to the kernel client in 4e7a5dcd1bba ("ceph:
negotiate authentication protocol; implement AUTH_NONE protocol").

The second param of ceph_connection_operations::verify_authorizer_reply
is unused all the way down.  Pass 0 to facilitate backporting, and kill
it in the next commit.

Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  s390/vmlogrdr: fix IUCV buffer allocation
Gerald Schaefer [Mon, 21 Nov 2016 11:13:58 +0000 (12:13 +0100)]
s390/vmlogrdr: fix IUCV buffer allocation

commit 5457e03de918f7a3e294eb9d26a608ab8a579976 upstream.

The buffer for iucv_message_receive() needs to be below 2 GB. In
__iucv_message_receive(), the buffer address is cast to a u32, which
would result in either memory corruption or an addressing exception when
using addresses >= 2 GB.

Fix this by using GFP_DMA for the buffer allocation.
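
In other words, the receive buffer allocation needs the DMA zone; a
sketch:

    #include <linux/slab.h>

    /* Sketch: IUCV receive buffers must live below 2 GB, so allocate them
     * from ZONE_DMA instead of ordinary kernel memory. */
    static void *alloc_iucv_buffer(size_t len)
    {
            return kzalloc(len, GFP_KERNEL | GFP_DMA);
    }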

Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  scsi: sd: Fix capacity calculation with 32-bit sector_t
Martin K. Petersen [Tue, 4 Apr 2017 14:42:30 +0000 (10:42 -0400)]
scsi: sd: Fix capacity calculation with 32-bit sector_t

commit 7c856152cb92f8eee2df29ef325a1b1f43161aff upstream.

We previously made sure that the reported disk capacity was less than
0xffffffff blocks when the kernel was not compiled with large sector_t
support (CONFIG_LBDAF). However, this check assumed that the capacity
was reported in units of 512 bytes.

Add a sanity check function to ensure that we only enable disks if the
entire reported capacity can be expressed in terms of sector_t.

Reported-by: Steve Magnani <steve.magnani@digidescorp.com>
Cc: Bart Van Assche <Bart.VanAssche@sandisk.com>
Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  scsi: sr: Sanity check returned mode data
Martin K. Petersen [Fri, 17 Mar 2017 12:47:14 +0000 (08:47 -0400)]
scsi: sr: Sanity check returned mode data

commit a00a7862513089f17209b732f230922f1942e0b9 upstream.

Kefeng Wang discovered that old versions of the QEMU CD driver would
return mangled mode data causing us to walk off the end of the buffer in
an attempt to parse it. Sanity check the returned mode sense data.

Reported-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  scsi: lpfc: Add shutdown method for kexec
Anton Blanchard [Sun, 12 Feb 2017 21:49:20 +0000 (08:49 +1100)]
scsi: lpfc: Add shutdown method for kexec

commit 85e8a23936ab3442de0c42da97d53b29f004ece1 upstream.

We see lpfc devices regularly fail during kexec. Fix this by adding a
shutdown method which mirrors the remove method.

Signed-off-by: Anton Blanchard <anton@samba.org>
Reviewed-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Tested-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  target/pscsi: Fix TYPE_TAPE + TYPE_MEDIUM_CHANGER export
Nicholas Bellinger [Fri, 4 Nov 2016 06:06:53 +0000 (23:06 -0700)]
target/pscsi: Fix TYPE_TAPE + TYPE_MEDIUM_CHANGER export

commit a04e54f2c35823ca32d56afcd5cea5b783e2f51a upstream.

The following fixes a divide-by-zero OOPS with TYPE_TAPE
due to pscsi_tape_read_blocksize() failing, causing a zero
sd->sector_size to be propagated up via dev_attrib.hw_block_size.

It also fixes another long-standing bug where TYPE_TAPE and
TYPE_MEDIUM_CHANGER were using pscsi_create_type_other(),
which does not call scsi_device_get() to take the device
reference.  Instead, rename pscsi_create_type_rom() to
pscsi_create_type_nondisk() and use it for all cases.

Finally, also drop a dump_stack() in pscsi_get_blocks() for
non TYPE_DISK, which in modern target-core can get invoked
via target_sense_desc_format() during CHECK_CONDITION.

[js] cast max_sectors to unsigned to avoid warnings

Reported-by: Malcolm Haak <insanemal@gmail.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  scsi: storvsc: properly set residual data length on errors
Long Li [Thu, 15 Dec 2016 02:46:03 +0000 (18:46 -0800)]
scsi: storvsc: properly set residual data length on errors

commit 40630f462824ee24bc00d692865c86c3828094e0 upstream.

On I/O errors, the Windows driver doesn't set data_transfer_length
on error conditions other than SRB_STATUS_DATA_OVERRUN.
In these cases we need to set data_transfer_length to 0,
indicating there is no data transferred. On SRB_STATUS_DATA_OVERRUN,
data_transfer_length is set by the Windows driver to the actual data transferred.

Reported-by: Shiva Krishna <Shiva.Krishna@nimblestorage.com>
Signed-off-by: Long Li <longli@microsoft.com>
Reviewed-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  scsi: storvsc: properly handle SRB_ERROR when sense message is present
Long Li [Thu, 15 Dec 2016 02:46:02 +0000 (18:46 -0800)]
scsi: storvsc: properly handle SRB_ERROR when sense message is present

commit bba5dc332ec2d3a685cb4dae668c793f6a3713a3 upstream.

When a sense message is present on error, we should pass it along to the
upper layer to decide how to deal with the error.
This patch fixes connectivity issues with Fibre Channel devices.

Signed-off-by: Long Li <longli@microsoft.com>
Reviewed-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  scsi: don't BUG_ON() empty DMA transfers
Johannes Thumshirn [Tue, 31 Jan 2017 09:16:00 +0000 (10:16 +0100)]
scsi: don't BUG_ON() empty DMA transfers

commit fd3fc0b4d7305fa7246622dcc0dec69c42443f45 upstream.

Don't crash the machine just because of an empty transfer. Use WARN_ON()
combined with returning an error.

Found by Dmitry Vyukov and syzkaller.

[ Changed to "WARN_ON_ONCE()". Al has a patch that should fix the root
  cause, but a BUG_ON() is not acceptable in any case, and a WARN_ON()
  might still be a cause of excessive log spamming.

  NOTE! If this warning ever triggers, we may end up leaking resources,
  since this doesn't bother to try to clean the command up. So this
  WARN_ON_ONCE() triggering does imply real problems. But BUG_ON() is
  much worse.

  People really need to stop using BUG_ON() for "this shouldn't ever
  happen". It makes pretty much any bug worse.     - Linus ]

Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: James Bottomley <jejb@linux.vnet.ibm.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  scsi: move the nr_phys_segments assert into scsi_init_io
Christoph Hellwig [Sat, 28 Jun 2014 09:51:01 +0000 (11:51 +0200)]
scsi: move the nr_phys_segments assert into scsi_init_io

commit 635d98b1d0cfc2ba3426a701725d31a6102c059a upstream.

scsi_init_io should only be called for requests that transfer data,
so move the assert that a request has segments from the callers into
scsi_init_io.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  scsi: avoid a permanent stop of the scsi device's request queue
Wei Fang [Tue, 13 Dec 2016 01:25:21 +0000 (09:25 +0800)]
scsi: avoid a permanent stop of the scsi device's request queue

commit d2a145252c52792bc59e4767b486b26c430af4bb upstream.

A race between scanning and fc_remote_port_delete() may result in a
permanent stop if the device gets blocked before scsi_sysfs_add_sdev()
and unblocked after.  The reason is that blocking a device sets both the
SDEV_BLOCKED state and the QUEUE_FLAG_STOPPED.  However,
scsi_sysfs_add_sdev() unconditionally sets SDEV_RUNNING which causes the
device to be ignored by scsi_target_unblock() and thus never have its
QUEUE_FLAG_STOPPED cleared leading to a device which is apparently
running but has a stopped queue.

We actually have two places where SDEV_RUNNING is set: once in
scsi_add_lun() which respects the blocked flag and once in
scsi_sysfs_add_sdev() which doesn't.  Since the second set is entirely
spurious, simply remove it to fix the problem.

Reported-by: Zengxi Chen <chenzengxi@huawei.com>
Signed-off-by: Wei Fang <fangwei1@huawei.com>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  drivers/gpu/drm/ast: Fix infinite loop if read fails
Russell Currey [Thu, 15 Dec 2016 05:12:41 +0000 (16:12 +1100)]
drivers/gpu/drm/ast: Fix infinite loop if read fails

commit 298360af3dab45659810fdc51aba0c9f4097e4f6 upstream.

ast_get_dram_info() configures a window in order to access BMC memory.
A BMC register can be configured to disallow this, and if so, causes
an infinite loop in the ast driver which renders the system unusable.

Fix this by erroring out if an error is detected.  On powerpc systems with
EEH, this leads to the device being fenced and the system continuing to
operate.

Signed-off-by: Russell Currey <ruscur@russell.cc>
Reviewed-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161215051241.20815-1-ruscur@russell.cc
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  ssb: Fix error routine when fallback SPROM fails
Larry Finger [Sat, 5 Nov 2016 19:08:57 +0000 (14:08 -0500)]
ssb: Fix error routine when fallback SPROM fails

commit 8052d7245b6089992343c80b38b14dbbd8354651 upstream.

When there is a CRC error in the SPROM read from the device, the code
attempts to handle a fallback SPROM. When this also fails, the driver
returns zero rather than an error code.

Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  xfs: clear _XBF_PAGES from buffers when readahead page
Darrick J. Wong [Thu, 26 Jan 2017 04:24:57 +0000 (20:24 -0800)]
xfs: clear _XBF_PAGES from buffers when readahead page

commit 2aa6ba7b5ad3189cc27f14540aa2f57f0ed8df4b upstream.

If we try to allocate memory pages to back an xfs_buf that we're trying
to read, it's possible that we'll be so short on memory that the page
allocation fails.  For a blocking read we'll just wait, but for
readahead we simply dump all the pages we've collected so far.

Unfortunately, after dumping the pages we neglect to clear the
_XBF_PAGES state, which means that the subsequent call to xfs_buf_free
thinks that b_pages still points to pages we own.  It then double-frees
the b_pages pages.

This results in screaming about negative page refcounts from the memory
manager, which xfs oughtn't be triggering.  To reproduce this case,
mount a filesystem where the size of the inodes far outweighs the
available memory (a ~500M inode filesystem on a VM with 300MB memory
did the trick here) and run bulkstat in parallel with other memory
eating processes to put a huge load on the system.  The "check summary"
phase of xfs_scrub also works for this purpose.
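
The readahead bail-out path then ends up looking roughly like this (a
sketch of the error path in fs/xfs/xfs_buf.c, not the verbatim diff):

    out_free_pages:
            for (i = 0; i < bp->b_page_count; i++)
                    __free_page(bp->b_pages[i]);
            bp->b_flags &= ~_XBF_PAGES;     /* pages are gone, say so */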

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Cc: Ivan Kozik <ivan@ludios.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  xfs: set AGI buffer type in xlog_recover_clear_agi_bucket
Eric Sandeen [Mon, 5 Dec 2016 01:31:06 +0000 (12:31 +1100)]
xfs: set AGI buffer type in xlog_recover_clear_agi_bucket

commit 6b10b23ca94451fae153a5cc8d62fd721bec2019 upstream.

xlog_recover_clear_agi_bucket didn't set the
type to XFS_BLFT_AGI_BUF, so we got a warning during log
replay (or an ASSERT on a debug build).

    XFS (md0): Unknown buffer type 0!
    XFS (md0): _xfs_buf_ioapply: no ops on block 0xaea8802/0x1

Fix this, as was done in f19b872b for 2 other locations
with the same problem.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  arm/xen: Use alloc_percpu rather than __alloc_percpu
Julien Grall [Wed, 7 Dec 2016 12:24:40 +0000 (12:24 +0000)]
arm/xen: Use alloc_percpu rather than __alloc_percpu

commit 24d5373dda7c00a438d26016bce140299fae675e upstream.

The function xen_guest_init is using __alloc_percpu with an alignment
which is not a power of two.

However, the percpu allocator never supported alignments which are not a
power of two and has always behaved incorrectly in this case.

Commit 3ca45a4 "percpu: ensure requested alignment is power of two"
introduced a check which triggers a warning [1] when booting linux-next
on Xen. But in reality this bug was always present.

This can be fixed by replacing the call to __alloc_percpu with
alloc_percpu. The latter will use an alignment which is a power of two.
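
The replacement amounts to the following (sketch; xen_vcpu_info stands
in for the driver's per-cpu pointer):

    #include <linux/errno.h>
    #include <linux/init.h>
    #include <linux/percpu.h>
    #include <xen/interface/xen.h>

    static struct vcpu_info __percpu *xen_vcpu_info;

    /* Sketch: alloc_percpu() derives a power-of-two alignment from the
     * type, unlike the old __alloc_percpu(sizeof(...), sizeof(...)) call
     * whose 48-byte alignment the percpu allocator rejects. */
    static int __init sketch_guest_init(void)
    {
            xen_vcpu_info = alloc_percpu(struct vcpu_info);
            return xen_vcpu_info ? 0 : -ENOMEM;
    }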

[1]

[    0.023921] illegal size (48) or align (48) for percpu allocation
[    0.024167] ------------[ cut here ]------------
[    0.024344] WARNING: CPU: 0 PID: 1 at linux/mm/percpu.c:892 pcpu_alloc+0x88/0x6c0
[    0.024584] Modules linked in:
[    0.024708]
[    0.024804] CPU: 0 PID: 1 Comm: swapper/0 Not tainted
4.9.0-rc7-next-20161128 #473
[    0.025012] Hardware name: Foundation-v8A (DT)
[    0.025162] task: ffff80003d870000 task.stack: ffff80003d844000
[    0.025351] PC is at pcpu_alloc+0x88/0x6c0
[    0.025490] LR is at pcpu_alloc+0x88/0x6c0
[    0.025624] pc : [<ffff00000818e678>] lr : [<ffff00000818e678>]
pstate: 60000045
[    0.025830] sp : ffff80003d847cd0
[    0.025946] x29: ffff80003d847cd0 x28: 0000000000000000
[    0.026147] x27: 0000000000000000 x26: 0000000000000000
[    0.026348] x25: 0000000000000000 x24: 0000000000000000
[    0.026549] x23: 0000000000000000 x22: 00000000024000c0
[    0.026752] x21: ffff000008e97000 x20: 0000000000000000
[    0.026953] x19: 0000000000000030 x18: 0000000000000010
[    0.027155] x17: 0000000000000a3f x16: 00000000deadbeef
[    0.027357] x15: 0000000000000006 x14: ffff000088f79c3f
[    0.027573] x13: ffff000008f79c4d x12: 0000000000000041
[    0.027782] x11: 0000000000000006 x10: 0000000000000042
[    0.027995] x9 : ffff80003d847a40 x8 : 6f697461636f6c6c
[    0.028208] x7 : 6120757063726570 x6 : ffff000008f79c84
[    0.028419] x5 : 0000000000000005 x4 : 0000000000000000
[    0.028628] x3 : 0000000000000000 x2 : 000000000000017f
[    0.028840] x1 : ffff80003d870000 x0 : 0000000000000035
[    0.029056]
[    0.029152] ---[ end trace 0000000000000000 ]---
[    0.029297] Call trace:
[    0.029403] Exception stack(0xffff80003d847b00 to
                               0xffff80003d847c30)
[    0.029621] 7b00: 0000000000000030 0001000000000000
ffff80003d847cd0 ffff00000818e678
[    0.029901] 7b20: 0000000000000002 0000000000000004
ffff000008f7c060 0000000000000035
[    0.030153] 7b40: ffff000008f79000 ffff000008c4cd88
ffff80003d847bf0 ffff000008101778
[    0.030402] 7b60: 0000000000000030 0000000000000000
ffff000008e97000 00000000024000c0
[    0.030647] 7b80: 0000000000000000 0000000000000000
0000000000000000 0000000000000000
[    0.030895] 7ba0: 0000000000000035 ffff80003d870000
000000000000017f 0000000000000000
[    0.031144] 7bc0: 0000000000000000 0000000000000005
ffff000008f79c84 6120757063726570
[    0.031394] 7be0: 6f697461636f6c6c ffff80003d847a40
0000000000000042 0000000000000006
[    0.031643] 7c00: 0000000000000041 ffff000008f79c4d
ffff000088f79c3f 0000000000000006
[    0.031877] 7c20: 00000000deadbeef 0000000000000a3f
[    0.032051] [<ffff00000818e678>] pcpu_alloc+0x88/0x6c0
[    0.032229] [<ffff00000818ece8>] __alloc_percpu+0x18/0x20
[    0.032409] [<ffff000008d9606c>] xen_guest_init+0x174/0x2f4
[    0.032591] [<ffff0000080830f8>] do_one_initcall+0x38/0x130
[    0.032783] [<ffff000008d90c34>] kernel_init_freeable+0xe0/0x248
[    0.032995] [<ffff00000899a890>] kernel_init+0x10/0x100
[    0.033172] [<ffff000008082ec0>] ret_from_fork+0x10/0x50

Reported-by: Wei Chen <wei.chen@arm.com>
Link: https://lkml.org/lkml/2016/11/28/669
Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  USB: UHCI: report non-PME wakeup signalling for Intel hardware
Alan Stern [Fri, 21 Oct 2016 20:49:07 +0000 (16:49 -0400)]
USB: UHCI: report non-PME wakeup signalling for Intel hardware

commit ccdb6be9ec6580ef69f68949ebe26e0fb58a6fb0 upstream.

The UHCI controllers in Intel chipsets rely on a platform-specific non-PME
mechanism for wakeup signalling.  They can generate wakeup signals even
though they don't support PME.

We need to let the USB core know this so that it will enable runtime
suspend for UHCI controllers.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  usb: gadget: composite: correctly initialize ep->maxpacket
Felipe Balbi [Wed, 28 Sep 2016 07:38:11 +0000 (10:38 +0300)]
usb: gadget: composite: correctly initialize ep->maxpacket

commit e8f29bb719b47a234f33b0af62974d7a9521a52c upstream.

usb_endpoint_maxp() returns wMaxPacketSize in its
raw form, without taking into consideration that it
also contains other bits reserved for isochronous
endpoints.

This patch fixes one occasion where this is a
problem by making sure that we initialize
ep->maxpacket only with the lower 10 bits of the
value returned by usb_endpoint_maxp(). Note that
separate patches will be necessary to audit all call
sites of usb_endpoint_maxp() and make sure that
usb_endpoint_maxp() only returns the lower 10 bits
of wMaxPacketSize.
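
Sketched, the initialization keeps only the packet-size bits of
wMaxPacketSize and strips the multiplier bits that sit above them (the
exact mask below is illustrative, not the verbatim diff):

    #include <linux/usb/ch9.h>
    #include <linux/usb/gadget.h>

    /* Sketch: seed ep->maxpacket from the packet-size bits only. */
    static void set_ep_maxpacket(struct usb_ep *ep,
                                 const struct usb_endpoint_descriptor *desc)
    {
            ep->maxpacket = usb_endpoint_maxp(desc) & 0x7ff;
    }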

Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  usb: hub: Wait for connection to be reestablished after port reset
Guenter Roeck [Thu, 1 Dec 2016 21:49:59 +0000 (13:49 -0800)]
usb: hub: Wait for connection to be reestablished after port reset

commit 22547c4cc4fe20698a6a85a55b8788859134b8e4 upstream.

On a system with a defective USB device connected to a USB hub,
an endless sequence of port connect events was observed. The sequence
of events as observed is as follows:

- Port reports connected event (port status=USB_PORT_STAT_CONNECTION).
- Event handler debounces port and resets it by calling hub_port_reset().
- hub_port_reset() calls hub_port_wait_reset() to wait for the reset
  to complete.
- The reset completes, but USB_PORT_STAT_CONNECTION is not immediately
  set in the port status register.
- hub_port_wait_reset() returns -ENOTCONN.
- Port initialization sequence is aborted.
- A few milliseconds later, the port again reports a connected event,
  and the sequence repeats.

This continues either forever or, randomly, stops if the connection
is already re-established when the port status is read. It results in
a high rate of udev events. This in turn destabilizes userspace since
the above sequence holds the device mutex pretty much continuously
and prevents userspace from actually reading the device status.

To prevent the problem from happening, let's wait for the connection
to be re-established after a port reset. If the device was actually
disconnected, the code will still return an error, but it will do so
only after the long reset timeout.

Cc: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  usb: dwc3: gadget: delay unmap of bounced requests
Janusz Dziedzic [Mon, 13 Mar 2017 12:11:32 +0000 (14:11 +0200)]
usb: dwc3: gadget: delay unmap of bounced requests

commit de288e36fe33f7e06fa272bc8e2f85aa386d99aa upstream.

In the case of bounced ep0 requests, we must delay the DMA operation until
after ->complete(), otherwise we might overwrite the contents of req->buf.

This caused problems with RNDIS gadget.

Signed-off-by: Janusz Dziedzic <januszx.dziedzic@intel.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  usb: host: xhci-plat: Fix timeout on removal of hot pluggable xhci controllers
Guenter Roeck [Thu, 9 Mar 2017 13:39:37 +0000 (15:39 +0200)]
usb: host: xhci-plat: Fix timeout on removal of hot pluggable xhci controllers

commit dcc7620cad5ad1326a78f4031a7bf4f0e5b42984 upstream.

Upstream commit 98d74f9ceaef ("xhci: fix 10 second timeout on removal of
PCI hotpluggable xhci controllers") fixes a problem with hot pluggable PCI
xhci controllers which can result in excessive timeouts, to the point where
the system reports a deadlock.

The same problem is seen with hot pluggable xhci controllers using the
xhci-plat driver, such as the driver used for Type-C ports on rk3399.
Similar to hot-pluggable PCI controllers, the driver for this chip
removes the xhci controller from the system when the Type-C cable is
disconnected.

The solution for PCI devices works just as well for non-PCI devices
and avoids the problem.

Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  usb: dwc3: gadget: make Set Endpoint Configuration macros safe
Felipe Balbi [Tue, 31 Jan 2017 11:24:54 +0000 (13:24 +0200)]
usb: dwc3: gadget: make Set Endpoint Configuration macros safe

commit 7369090a9fb57c3fc705ce355d2e4523a5a24716 upstream.

Some gadget drivers are bad, bad boys. We noticed
that ADB was passing a bad Burst Size, which caused
the top bits of param0 to be overwritten and
confused DWC3 when running this command.

In order to avoid future issues, we're going to make
sure values passed by macros are always safe for the
controller. Note that ADB still needs a fix to *not*
pass bad values.

Reported-by: Mohamed Abbas <mohamed.abbas@intel.com>
Suggested-by: Adam Andruszak <adam.andruszak@intel.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  USB: cdc-acm: fix failed open not being detected
Johan Hovold [Mon, 26 May 2014 17:23:43 +0000 (19:23 +0200)]
USB: cdc-acm: fix failed open not being detected

commit 8727bf689a77a79816065e23a7a58a474ad544f9 upstream.

Fix errors during open not being returned to userspace. Specifically,
failed control-line manipulations or control or read urb submissions
would not be detected.

Fixes: 7fb57a019f94 ("USB: cdc-acm: Fix potential deadlock (lockdep
warning)")

Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  USB: cdc-acm: fix open and suspend race
Johan Hovold [Mon, 26 May 2014 17:23:42 +0000 (19:23 +0200)]
USB: cdc-acm: fix open and suspend race

commit 703df3297fb1950b0aa53e656108eb936d3f21d9 upstream.

We must not do the usb_autopm_put_interface() before submitting the read
urbs or we might end up doing I/O to a suspended device.

Fixes: 088c64f81284 ("USB: cdc-acm: re-write read processing")
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  USB: cdc-acm: fix double usb_autopm_put_interface() in acm_port_activate()
Alexey Khoroshilov [Fri, 11 Apr 2014 22:10:45 +0000 (02:10 +0400)]
USB: cdc-acm: fix double usb_autopm_put_interface() in acm_port_activate()

commit 070c0b17f6a1ba39dff9be112218127e7e8fd456 upstream.

If acm_submit_read_urbs() fails in acm_port_activate(), error handling
code calls usb_autopm_put_interface() while it is already called
before acm_submit_read_urbs(). The patch reorganizes error handling code
to avoid double decrement of USB interface's PM-usage counter.

Found by Linux Driver Verification project (linuxtesting.org).

Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
Acked-by: Oliver Neukum <oliver@neukum.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  usb: gadget: composite: always set ep->mult to a sensible value
Felipe Balbi [Wed, 28 Sep 2016 09:33:31 +0000 (12:33 +0300)]
usb: gadget: composite: always set ep->mult to a sensible value

commit eaa496ffaaf19591fe471a36cef366146eeb9153 upstream.

ep->mult is supposed to be set to the Isochronous
and Interrupt Endpoint's multiplier value. This
value is computed from different places depending on
the link speed.

If we're dealing with HighSpeed, then it's part of
bits [12:11] of wMaxPacketSize. This case wasn't
taken into consideration before.

While at that, also make sure the ep->mult defaults
to one so drivers can use it unconditionally and
assume they'll never multiply ep->maxpacket by zero.
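
For the high-speed case, the multiplier can be derived roughly like this
(a sketch; bits 12:11 of wMaxPacketSize encode 0..2 additional
transactions per microframe):

    #include <linux/usb/ch9.h>

    /* Sketch: transactions per microframe for a HS isoc/int endpoint,
     * never less than one. */
    static unsigned int hs_ep_mult(const struct usb_endpoint_descriptor *desc)
    {
            return ((usb_endpoint_maxp(desc) >> 11) & 0x3) + 1;
    }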

Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  USB: serial: io_ti: bind to interface after fw download
Johan Hovold [Tue, 3 Jan 2017 15:39:46 +0000 (16:39 +0100)]
USB: serial: io_ti: bind to interface after fw download

commit e35d6d7c4e6532a89732cf4bace0e910ee684c88 upstream.

Bind to the interface, but do not register any ports, after having
downloaded the firmware. The device will still disconnect and
re-enumerate, but this way we avoid error messages from being logged
as part of the process:

io_ti: probe of 1-1.3:1.0 failed with error -5

Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  xhci: free xhci virtual devices with leaf nodes first
Mathias Nyman [Tue, 3 Jan 2017 16:28:43 +0000 (18:28 +0200)]
xhci: free xhci virtual devices with leaf nodes first

commit ee8665e28e8d90ce69d4abe5a469c14a8707ae0e upstream.

The tt_info provided by a HS hub might be in use by a child device.
Make sure we free the devices in the correct order.

This is needed in special cases such as when xhci controller is
reset when resuming from hibernate, and all virt_devices are freed.

Also free the virt_devices starting from the max slot_id, as children
commonly have a higher slot_id than their parent.

Reported-by: Guenter Roeck <groeck@chromium.org>
Tested-by: Guenter Roeck <groeck@chromium.org>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  USB: gadgetfs: fix checks of wTotalLength in config descriptors
Alan Stern [Fri, 9 Dec 2016 20:24:24 +0000 (15:24 -0500)]
USB: gadgetfs: fix checks of wTotalLength in config descriptors

commit 1c069b057dcf64fada952eaa868d35f02bb0cfc2 upstream.

Andrey Konovalov's fuzz testing of gadgetfs showed that we should
improve the driver's checks for valid configuration descriptors passed
in by the user.  In particular, the driver needs to verify that the
wTotalLength value in the descriptor is not too short (smaller
than USB_DT_CONFIG_SIZE).  And the check for whether wTotalLength is
too large has to be changed, because the driver assumes there is
always enough room remaining in the buffer to hold a device descriptor
(at least USB_DT_DEVICE_SIZE bytes).

This patch adds the additional check and fixes the existing check.  It
may do a little more than strictly necessary, but one extra check
won't hurt.
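
In simplified form, the added validation looks something like this (a
sketch, not the driver's exact checks):

    #include <linux/usb/ch9.h>

    /* Sketch: wTotalLength must at least cover the config descriptor and
     * must not exceed what the user actually provided. */
    static bool config_total_ok(const struct usb_config_descriptor *cfg,
                                unsigned int remaining)
    {
            unsigned int total = le16_to_cpu(cfg->wTotalLength);

            return total >= USB_DT_CONFIG_SIZE && total <= remaining;
    }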

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
CC: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  USB: gadgetfs: fix use-after-free bug
Alan Stern [Fri, 9 Dec 2016 20:18:43 +0000 (15:18 -0500)]
USB: gadgetfs: fix use-after-free bug

commit add333a81a16abbd4f106266a2553677a165725f upstream.

Andrey Konovalov reports that fuzz testing with syzkaller causes a
KASAN use-after-free bug report in gadgetfs:

BUG: KASAN: use-after-free in gadgetfs_setup+0x208a/0x20e0 at addr ffff88003dfe5bf2
Read of size 2 by task syz-executor0/22994
CPU: 3 PID: 22994 Comm: syz-executor0 Not tainted 4.9.0-rc7+ #16
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
 ffff88006df06a18 ffffffff81f96aba ffffffffe0528500 1ffff1000dbe0cd6
 ffffed000dbe0cce ffff88006df068f0 0000000041b58ab3 ffffffff8598b4c8
 ffffffff81f96828 1ffff1000dbe0ccd ffff88006df06708 ffff88006df06748
Call Trace:
 <IRQ> [  201.343209]  [<     inline     >] __dump_stack lib/dump_stack.c:15
 <IRQ> [  201.343209]  [<ffffffff81f96aba>] dump_stack+0x292/0x398 lib/dump_stack.c:51
 [<ffffffff817e4dec>] kasan_object_err+0x1c/0x70 mm/kasan/report.c:159
 [<     inline     >] print_address_description mm/kasan/report.c:197
 [<ffffffff817e5080>] kasan_report_error+0x1f0/0x4e0 mm/kasan/report.c:286
 [<     inline     >] kasan_report mm/kasan/report.c:306
 [<ffffffff817e562a>] __asan_report_load_n_noabort+0x3a/0x40 mm/kasan/report.c:337
 [<     inline     >] config_buf drivers/usb/gadget/legacy/inode.c:1298
 [<ffffffff8322c8fa>] gadgetfs_setup+0x208a/0x20e0 drivers/usb/gadget/legacy/inode.c:1368
 [<ffffffff830fdcd0>] dummy_timer+0x11f0/0x36d0 drivers/usb/gadget/udc/dummy_hcd.c:1858
 [<ffffffff814807c1>] call_timer_fn+0x241/0x800 kernel/time/timer.c:1308
 [<     inline     >] expire_timers kernel/time/timer.c:1348
 [<ffffffff81482de6>] __run_timers+0xa06/0xec0 kernel/time/timer.c:1641
 [<ffffffff814832c1>] run_timer_softirq+0x21/0x80 kernel/time/timer.c:1654
 [<ffffffff84f4af8b>] __do_softirq+0x2fb/0xb63 kernel/softirq.c:284

The cause of the bug is subtle.  The dev_config() routine gets called
twice by the fuzzer.  The first time, the user data contains both a
full-speed configuration descriptor and a high-speed config
descriptor, causing dev->hs_config to be set.  But it also contains an
invalid device descriptor, so the buffer containing the descriptors is
deallocated and dev_config() returns an error.

The second time dev_config() is called, the user data contains only a
full-speed config descriptor.  But dev->hs_config still has the stale
pointer remaining from the first call, causing the routine to think
that there is a valid high-speed config.  Later on, when the driver
dereferences the stale pointer to copy that descriptor, we get a
use-after-free access.

The fix is simple: Clear dev->hs_config if the passed-in data does not
contain a high-speed config descriptor.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  USB: gadgetfs: fix unbounded memory allocation bug
Alan Stern [Fri, 9 Dec 2016 20:17:46 +0000 (15:17 -0500)]
USB: gadgetfs: fix unbounded memory allocation bug

commit faab50984fe6636e616c7cc3d30308ba391d36fd upstream.

Andrey Konovalov reports that fuzz testing with syzkaller causes a
KASAN warning in gadgetfs:

BUG: KASAN: slab-out-of-bounds in dev_config+0x86f/0x1190 at addr ffff88003c47e160
Write of size 65537 by task syz-executor0/6356
CPU: 3 PID: 6356 Comm: syz-executor0 Not tainted 4.9.0-rc7+ #19
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
 ffff88003c107ad8 ffffffff81f96aba ffffffff3dc11ef0 1ffff10007820eee
 ffffed0007820ee6 ffff88003dc11f00 0000000041b58ab3 ffffffff8598b4c8
 ffffffff81f96828 ffffffff813fb4a0 ffff88003b6eadc0 ffff88003c107738
Call Trace:
 [<     inline     >] __dump_stack lib/dump_stack.c:15
 [<ffffffff81f96aba>] dump_stack+0x292/0x398 lib/dump_stack.c:51
 [<ffffffff817e4dec>] kasan_object_err+0x1c/0x70 mm/kasan/report.c:159
 [<     inline     >] print_address_description mm/kasan/report.c:197
 [<ffffffff817e5080>] kasan_report_error+0x1f0/0x4e0 mm/kasan/report.c:286
 [<ffffffff817e5705>] kasan_report+0x35/0x40 mm/kasan/report.c:306
 [<     inline     >] check_memory_region_inline mm/kasan/kasan.c:308
 [<ffffffff817e3fb9>] check_memory_region+0x139/0x190 mm/kasan/kasan.c:315
 [<ffffffff817e4044>] kasan_check_write+0x14/0x20 mm/kasan/kasan.c:326
 [<     inline     >] copy_from_user arch/x86/include/asm/uaccess.h:689
 [<     inline     >] ep0_write drivers/usb/gadget/legacy/inode.c:1135
 [<ffffffff83228caf>] dev_config+0x86f/0x1190 drivers/usb/gadget/legacy/inode.c:1759
 [<ffffffff817fdd55>] __vfs_write+0x5d5/0x760 fs/read_write.c:510
 [<ffffffff817ff650>] vfs_write+0x170/0x4e0 fs/read_write.c:560
 [<     inline     >] SYSC_write fs/read_write.c:607
 [<ffffffff81803a5b>] SyS_write+0xfb/0x230 fs/read_write.c:599
 [<ffffffff84f47ec1>] entry_SYSCALL_64_fastpath+0x1f/0xc2

Indeed, there is a comment saying that the value of len is restricted
to a 16-bit integer, but the code doesn't actually do this.

This patch fixes the warning.  It replaces the comment with a
computation that forces the amount of data copied from the user in
ep0_write() to be no larger than the wLength size for the control
transfer, which is a 16-bit quantity.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  usb: gadgetfs: restrict upper bound on device configuration size
Greg Kroah-Hartman [Tue, 6 Dec 2016 07:36:29 +0000 (08:36 +0100)]
usb: gadgetfs: restrict upper bound on device configuration size

commit 0994b0a257557e18ee8f0b7c5f0f73fe2b54eec1 upstream.

Andrey Konovalov reported that we were not properly checking the upper
limit of a device configuration size before calling
memdup_user(), which could cause some problems.

So set the upper limit to PAGE_SIZE * 4, which should be good enough for
all devices.
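
The bound check sits right in front of the copy, roughly (a sketch; the
helper name is made up):

    #include <linux/err.h>
    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    /* Sketch: cap the user-supplied configuration size before memdup_user()
     * so a huge length cannot force an oversized allocation. */
    static void *copy_config_from_user(const void __user *buf, size_t len)
    {
            if (len > PAGE_SIZE * 4)
                    return ERR_PTR(-EINVAL);

            return memdup_user(buf, len);
    }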

Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  ALSA: usb-audio: Add QuickCam Communicate Deluxe/S7500 to volume_control_quirks
Con Kolivas [Fri, 9 Dec 2016 04:15:57 +0000 (15:15 +1100)]
ALSA: usb-audio: Add QuickCam Communicate Deluxe/S7500 to volume_control_quirks

commit 82ffb6fc637150b279f49e174166d2aa3853eaf4 upstream.

The Logitech QuickCam Communicate Deluxe/S7500 microphone fails with the
following warning.

[    6.778995] usb 2-1.2.2.2: Warning! Unlikely big volume range (=3072),
cval->res is probably wrong.
[    6.778996] usb 2-1.2.2.2: [5] FU [Mic Capture Volume] ch = 1, val =
4608/7680/1

Adding it to the list of devices in volume_control_quirks makes it work
properly, also fixing a related typo.

Signed-off-by: Con Kolivas <kernel@kolivas.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  ALSA: seq: Don't break snd_use_lock_sync() loop by timeout
Takashi Iwai [Sun, 9 Apr 2017 08:41:27 +0000 (10:41 +0200)]
ALSA: seq: Don't break snd_use_lock_sync() loop by timeout

commit 4e7655fd4f47c23e5249ea260dc802f909a64611 upstream.

The snd_use_lock_sync() call (thus its implementation,
snd_use_lock_sync_helper()) has a 5-second timeout to break out of
the sync loop.  It was introduced from the beginning, just to be
"safer", in terms of avoiding stupid bugs.

However, as Ben Hutchings suggested, this timeout rather introduces a
potential leak or use-after-free that was apparently fixed by the
commit 2d7d54002e39 ("ALSA: seq: Fix race during FIFO resize"):
for example, snd_seq_fifo_event_in() -> snd_seq_event_dup() ->
copy_from_user() could block for a long time, and snd_use_lock_sync()
goes timeout and still leaves the cell at releasing the pool.

For fixing such a problem, we remove the break by the timeout while
still keeping the warning.

Suggested-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years ago  ALSA: seq: Fix race during FIFO resize
Takashi Iwai [Fri, 24 Mar 2017 16:07:57 +0000 (17:07 +0100)]
ALSA: seq: Fix race during FIFO resize

commit 2d7d54002e396c180db0c800c1046f0a3c471597 upstream.

When a new event is queued while processing to resize the FIFO in
snd_seq_fifo_clear(), it may lead to a use-after-free, as the old pool
that is being queued gets removed.  For avoiding this race, we need to
close the pool to be deleted and sync its usage before actually
deleting it.

The issue was spotted by syzkaller.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoALSA: seq: Fix racy cell insertions during snd_seq_pool_done()
Takashi Iwai [Tue, 21 Mar 2017 12:56:04 +0000 (13:56 +0100)]
ALSA: seq: Fix racy cell insertions during snd_seq_pool_done()

commit c520ff3d03f0b5db7146d9beed6373ad5d2a5e0e upstream.

When snd_seq_pool_done() is called, it marks the closing flag to
refuse further cell insertions.  But snd_seq_pool_done() itself
doesn't clear the cells but just waits until all cells are cleared by
the caller side.  That is, it's racy, and this leads to an endless
stall, as syzkaller spotted.

This patch addresses the race by splitting the setup of the
pool->closing flag out of snd_seq_pool_done(), and calling it properly
before snd_seq_pool_done().

BugLink: http://lkml.kernel.org/r/CACT4Y+aqqy8bZA1fFieifNxR2fAfFQQABcBHj801+u5ePV0URw@mail.gmail.com
Reported-and-tested-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoALSA: seq: Fix link corruption by event error handling
Takashi Iwai [Tue, 28 Feb 2017 21:15:51 +0000 (22:15 +0100)]
ALSA: seq: Fix link corruption by event error handling

commit f3ac9f737603da80c2da3e84b89e74429836bb6d upstream.

The sequencer FIFO management has a bug that may lead to a corruption
(shortage) of the cell linked list.  When a sequencer client faces an
error at the event delivery, it tries to put back the dequeued cell.
When the first cell of the queue was put back, the tail pointer
tracking was forgotten, and the link got screwed up.

Although there is no memory corruption, the sequencer client may stall
forever at exit while flushing the pending FIFO cells in
snd_seq_pool_done(), as spotted by syzkaller.

This patch addresses the missing tail pointer tracking at
snd_seq_fifo_cell_putback().  Also the patch makes sure to clear the
cell->next pointer at snd_seq_fifo_event_in() for avoiding a similar
mess-up of the FIFO linked list.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoALSA: timer: Reject user params with too small ticks
Takashi Iwai [Tue, 28 Feb 2017 13:49:07 +0000 (14:49 +0100)]
ALSA: timer: Reject user params with too small ticks

commit 71321eb3f2d0df4e6c327e0b936eec4458a12054 upstream.

When a user sets too small a tick value with a fine-grained timer like
hrtimer, the kernel tries to fire up the timer irq too frequently.
This may lead to heavy lock contention and eventually a kernel spinlock
lockup with warnings.

For avoiding such a situation, we define a lower limit of the
resolution, namely 1ms.  When the user passes a too small tick value
that results in less than that, the kernel returns -EINVAL now.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
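As a hedged sketch, the check amounts to rejecting any period shorter
than one millisecond (names are illustrative, not the exact fields of
the timer params structure):

  /* ticks * resolution must stay at or above 1 ms (1000000 ns) */
  if ((u64)params->ticks * resolution_ns < 1000000)
          return -EINVAL;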
6 years agoALSA: seq: Don't handle loop timeout at snd_seq_pool_done()
Takashi Iwai [Mon, 6 Feb 2017 14:09:48 +0000 (15:09 +0100)]
ALSA: seq: Don't handle loop timeout at snd_seq_pool_done()

commit 37a7ea4a9b81f6a864c10a7cb0b96458df5310a3 upstream.

snd_seq_pool_done() syncs with closing of all opened threads, but it
aborts the wait loop with a timeout, and proceeds to releasing the
resources even if not all threads have been closed.  The timeout was 5
seconds, and if you run crazy stuff, it can easily be exceeded, which
may result in accesses to invalid memory addresses -- this is what
syzkaller detected in a bug report.

As a fix, let the code graduate from naiveness, simply remove the loop
timeout.

BugLink: http://lkml.kernel.org/r/CACT4Y+YdhDV2H5LLzDTJDVF-qiYHUHhtRaW4rbb4gUhTCQB81w@mail.gmail.com
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoALSA: seq: Fix race at creating a queue
Takashi Iwai [Wed, 8 Feb 2017 11:35:39 +0000 (12:35 +0100)]
ALSA: seq: Fix race at creating a queue

commit 4842e98f26dd80be3623c4714a244ba52ea096a8 upstream.

When a sequencer queue is created in snd_seq_queue_alloc(), it adds the
new queue element to the public list before referencing it.  Thus the
queue might be deleted before the call of snd_seq_queue_use(), and it
results in the use-after-free error, as spotted by syzkaller.

The fix is to reference the queue object at the right time.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoALSA: hda - Fix up GPIO for ASUS ROG Ranger
Takashi Iwai [Tue, 6 Dec 2016 15:20:36 +0000 (16:20 +0100)]
ALSA: hda - Fix up GPIO for ASUS ROG Ranger

commit 85bcf96caba8b4a7c0805555638629ba3c67ea0c upstream.

ASUS ROG Ranger VIII with ALC1150 codec requires the extra GPIO pin to
be up for the front panel.  Just use the existing fixup for setting up
the GPIO pins.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=189411
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agocan: usb_8dev: Fix memory leak of priv->cmd_msg_buffer
Marc Kleine-Budde [Thu, 2 Mar 2017 11:03:40 +0000 (12:03 +0100)]
can: usb_8dev: Fix memory leak of priv->cmd_msg_buffer

commit 7c42631376306fb3f34d51fda546b50a9b6dd6ec upstream.

The priv->cmd_msg_buffer is allocated in the probe function, but never
kfree()d.  This patch converts the kzalloc() to a resource-managed
devm_kzalloc(), so the buffer is freed automatically on detach.

Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
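A sketch of what such a conversion typically looks like in the USB probe
path; the size argument and error label are assumptions, not the exact
usb_8dev diff:

  /* resource-managed allocation: freed automatically when the
   * interface is unbound, so no explicit kfree() is needed */
  priv->cmd_msg_buffer = devm_kzalloc(&intf->dev,
                                      sizeof(struct usb_8dev_cmd_msg),
                                      GFP_KERNEL);
  if (!priv->cmd_msg_buffer)
          goto cleanup_candev;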
6 years agocan: bcm: fix hrtimer/tasklet termination in bcm op removal
Oliver Hartkopp [Wed, 18 Jan 2017 20:30:51 +0000 (21:30 +0100)]
can: bcm: fix hrtimer/tasklet termination in bcm op removal

commit a06393ed03167771246c4c43192d9c264bc48412 upstream.

When removing a bcm tx operation either a hrtimer or a tasklet might run.
As the hrtimer triggers its associated tasklet and vice versa we need to
take care to mutually terminate both handlers.

Reported-by: Michael Josenhans <michael.josenhans@web.de>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Tested-by: Michael Josenhans <michael.josenhans@web.de>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
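Conceptually the mutual termination looks like the following sketch; the
field names approximate the bcm op structure and are not guaranteed to
match the actual diff:

  /* stop the timer side, then make sure the tasklet it may have
   * scheduled has finished, for both the tx and throttle pair */
  hrtimer_cancel(&op->timer);
  tasklet_kill(&op->tsklet);
  hrtimer_cancel(&op->thrtimer);
  tasklet_kill(&op->thrtsklet);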
6 years agocan: ti_hecc: add missing prepare and unprepare of the clock
Yegor Yefremov [Wed, 18 Jan 2017 10:35:57 +0000 (11:35 +0100)]
can: ti_hecc: add missing prepare and unprepare of the clock

commit befa60113ce7ea270cb51eada28443ca2756f480 upstream.

In order to make the driver work with the common clock framework, this
patch converts the clk_enable()/clk_disable() to
clk_prepare_enable()/clk_disable_unprepare().

Also add error checking for clk_prepare_enable().

Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
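The conversion pattern, shown as a hedged sketch (the error label is an
assumption):

  err = clk_prepare_enable(priv->clk);
  if (err) {
          dev_err(&pdev->dev, "clk_prepare_enable() failed\n");
          goto probe_exit_free;
  }
  ...
  /* remove/teardown path */
  clk_disable_unprepare(priv->clk);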
6 years agocan: c_can_pci: fix null-pointer-deref in c_can_start() - set device pointer
Einar Jón [Fri, 12 Aug 2016 11:50:41 +0000 (13:50 +0200)]
can: c_can_pci: fix null-pointer-deref in c_can_start() - set device pointer

commit c97c52be78b8463ac5407f1cf1f22f8f6cf93a37 upstream.

The priv->device pointer for c_can_pci is never set, but it is used
without a NULL check in c_can_start(). Setting it in c_can_pci_probe()
like c_can_plat_probe() prevents c_can_pci.ko from crashing, with and
without CONFIG_PM.

This might also cause the pm_runtime_*() functions in c_can.c to
actually be executed for c_can_pci devices - they are the only other
place where priv->device is used, but they all contain a null check.

Signed-off-by: Einar Jón <tolvupostur@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
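The missing assignment amounts to something like this sketch in
c_can_pci_probe(), mirroring what c_can_plat_probe() already does:

  priv = netdev_priv(dev);
  priv->device = &pdev->dev;   /* used by c_can_start() and pm_runtime_*() */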
6 years agocan: peak: fix bad memory access and free sequence
추지호 [Thu, 8 Dec 2016 12:01:13 +0000 (12:01 +0000)]
can: peak: fix bad memory access and free sequence

commit b67d0dd7d0dc9e456825447bbeb935d8ef43ea7c upstream.

Fix for bad memory access while disconnecting: the netdev is freed
before the private data, and dev is accessed after freeing the netdev.

This causes a SLUB problem, and it raises a kernel oops with the SLUB
debugger config enabled.

Signed-off-by: Jiho Chu <jiho.chu@samsung.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agocan: raw: raw_setsockopt: limit number of can_filter that can be set
Marc Kleine-Budde [Mon, 5 Dec 2016 10:44:23 +0000 (11:44 +0100)]
can: raw: raw_setsockopt: limit number of can_filter that can be set

commit 332b05ca7a438f857c61a3c21a88489a21532364 upstream.

This patch adds a check to limit the number of can_filters that can be
set via setsockopt on CAN_RAW sockets. Otherwise allocations > MAX_ORDER
are not prevented, resulting in a warning.

Reference: https://lkml.org/lkml/2016/12/2/230

Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
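A hedged sketch of such a limit in raw_setsockopt(); the exact cap used
upstream is not reproduced here, 512 is an assumed value:

  /* keep the filter allocation well below MAX_ORDER */
  if (optlen % sizeof(struct can_filter) != 0)
          return -EINVAL;
  if (optlen > 512 * sizeof(struct can_filter))
          return -EINVAL;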
6 years agoocfs2: fix BUG_ON() in ocfs2_ci_checkpointed()
Tariq Saeed [Fri, 4 Sep 2015 22:44:31 +0000 (15:44 -0700)]
ocfs2: fix BUG_ON() in ocfs2_ci_checkpointed()

commit 3d46a44a0c01b15d385ccaae24b56f619613c256 upstream.

PID: 614    TASK: ffff882a739da580  CPU: 3   COMMAND: "ocfs2dc"
  #0 [ffff882ecc3759b0] machine_kexec at ffffffff8103b35d
  #1 [ffff882ecc375a20] crash_kexec at ffffffff810b95b5
  #2 [ffff882ecc375af0] oops_end at ffffffff815091d8
  #3 [ffff882ecc375b20] die at ffffffff8101868b
  #4 [ffff882ecc375b50] do_trap at ffffffff81508bb0
  #5 [ffff882ecc375ba0] do_invalid_op at ffffffff810165e5
  #6 [ffff882ecc375c40] invalid_op at ffffffff815116fb
     [exception RIP: ocfs2_ci_checkpointed+208]
     RIP: ffffffffa0a7e940  RSP: ffff882ecc375cf0  RFLAGS: 00010002
     RAX: 0000000000000001  RBX: 000000000000654b  RCX: ffff8812dc83f1f8
     RDX: 00000000000017d9  RSI: ffff8812dc83f1f8  RDI: ffffffffa0b2c318
     RBP: ffff882ecc375d20   R8: ffff882ef6ecfa60   R9: ffff88301f272200
     R10: 0000000000000000  R11: 0000000000000000  R12: ffffffffffffffff
     R13: ffff8812dc83f4f0  R14: 0000000000000000  R15: ffff8812dc83f1f8
     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
  #7 [ffff882ecc375d28] ocfs2_check_meta_downconvert at ffffffffa0a7edbd [ocfs2]
  #8 [ffff882ecc375d38] ocfs2_unblock_lock at ffffffffa0a84af8 [ocfs2]
  #9 [ffff882ecc375dc8] ocfs2_process_blocked_lock at ffffffffa0a85285 [ocfs2]
The assert is tripped because the transaction is not checkpointed and the lock level is PR.

Some time ago, a chmod command had been executed.  As a result, the
following call chain left the inode cluster lock in PR state, later on
causing the assert.
system_call_fastpath
  -> my_chmod
   -> sys_chmod
    -> sys_fchmodat
     -> notify_change
      -> ocfs2_setattr
       -> posix_acl_chmod
        -> ocfs2_iop_set_acl
         -> ocfs2_set_acl
          -> ocfs2_acl_set_mode
Here is how.
1119 int ocfs2_setattr(struct dentry *dentry, struct iattr *attr)
1120 {
1247         ocfs2_inode_unlock(inode, 1); <<< WRONG thing to do.
..
1258         if (!status && attr->ia_valid & ATTR_MODE) {
1259                 status =  posix_acl_chmod(inode, inode->i_mode);

519 posix_acl_chmod(struct inode *inode, umode_t mode)
520 {
..
539         ret = inode->i_op->set_acl(inode, acl, ACL_TYPE_ACCESS);

287 int ocfs2_iop_set_acl(struct inode *inode, struct posix_acl *acl, ...
288 {
289         return ocfs2_set_acl(NULL, inode, NULL, type, acl, NULL, NULL);

224 int ocfs2_set_acl(handle_t *handle,
225                          struct inode *inode, ...
231 {
..
252                                 ret = ocfs2_acl_set_mode(inode, di_bh,
253                                                          handle, mode);

168 static int ocfs2_acl_set_mode(struct inode *inode, struct buffer_head ...
170 {
183         if (handle == NULL) {
                    >>> BUG: inode lock not held in ex at this point <<<
184                 handle = ocfs2_start_trans(OCFS2_SB(inode->i_sb),
185                                            OCFS2_INODE_UPDATE_CREDITS);

At ocfs2_setattr.#1247 we unlock and at #1259 call posix_acl_chmod.  When
we reach ocfs2_acl_set_mode.#181 and do the trans, the inode cluster lock
is not held in EX mode (it should be).  How could this have happened?

We are the lock master, were holding the lock EX and have released it in
ocfs2_setattr.#1247.  Note that there are no holders of this lock at
this point.  Another node needs the lock in PR, and we downconvert from
EX to PR.  So the inode lock is PR when we do the trans in
ocfs2_acl_set_mode.#184.  The trans stays in core (not flushed to disk).
Now another node wants the lock in EX, the downconvert thread gets kicked
(the one that tripped the assert above), finds an unflushed trans but the
lock is not EX (it is PR).  If the lock was at EX, it would have flushed
the trans via ocfs2_ci_checkpointed -> ocfs2_start_checkpoint before
downconverting (to NULL) for the request.

ocfs2_setattr must not drop the inode lock ex in this code path.  If it
does, and takes it again before the trans, say in ocfs2_set_acl, another
cluster node can get in between, execute another setattr, overwriting
the one in progress on this node, resulting in a mode/acl/size combo
that is a mix of the two.

Orabug: 20189959
Signed-off-by: Tariq Saeed <tariq.x.saeed@oracle.com>
Reviewed-by: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Joseph Qi <joseph.qi@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoocfs2: fix crash caused by stale lvb with fsdlm plugin
Eric Ren [Wed, 11 Jan 2017 00:57:33 +0000 (16:57 -0800)]
ocfs2: fix crash caused by stale lvb with fsdlm plugin

commit e7ee2c089e94067d68475990bdeed211c8852917 upstream.

The crash happens rather often when we reset some cluster nodes while
nodes contend fiercely to do truncate and append.

The crash backtrace is below:

   dlm: C21CBDA5E0774F4BA5A9D4F317717495: dlm_recover_grant 1 locks on 971 resources
   dlm: C21CBDA5E0774F4BA5A9D4F317717495: dlm_recover 9 generation 5 done: 4 ms
   ocfs2: Begin replay journal (node 318952601, slot 2) on device (253,18)
   ocfs2: End replay journal (node 318952601, slot 2) on device (253,18)
   ocfs2: Beginning quota recovery on device (253,18) for slot 2
   ocfs2: Finishing quota recovery on device (253,18) for slot 2
   (truncate,30154,1):ocfs2_truncate_file:470 ERROR: bug expression: le64_to_cpu(fe->i_size) != i_size_read(inode)
   (truncate,30154,1):ocfs2_truncate_file:470 ERROR: Inode 290321, inode i_size = 732 != di i_size = 937, i_flags = 0x1
   ------------[ cut here ]------------
   kernel BUG at /usr/src/linux/fs/ocfs2/file.c:470!
   invalid opcode: 0000 [#1] SMP
   Modules linked in: ocfs2_stack_user(OEN) ocfs2(OEN) ocfs2_nodemanager ocfs2_stackglue(OEN) quota_tree dlm(OEN) configfs fuse sd_mod    iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi af_packet iscsi_ibft iscsi_boot_sysfs softdog xfs libcrc32c ppdev parport_pc pcspkr parport      joydev virtio_balloon virtio_net i2c_piix4 acpi_cpufreq button processor ext4 crc16 jbd2 mbcache ata_generic cirrus virtio_blk ata_piix               drm_kms_helper ahci syscopyarea libahci sysfillrect sysimgblt fb_sys_fops ttm floppy libata drm virtio_pci virtio_ring uhci_hcd virtio ehci_hcd       usbcore serio_raw usb_common sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_dh_alua scsi_mod autofs4
   Supported: No, Unsupported modules are loaded
   CPU: 1 PID: 30154 Comm: truncate Tainted: G           OE   N  4.4.21-69-default #1
   Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20151112_172657-sheep25 04/01/2014
   task: ffff88004ff6d240 ti: ffff880074e68000 task.ti: ffff880074e68000
   RIP: 0010:[<ffffffffa05c8c30>]  [<ffffffffa05c8c30>] ocfs2_truncate_file+0x640/0x6c0 [ocfs2]
   RSP: 0018:ffff880074e6bd50  EFLAGS: 00010282
   RAX: 0000000000000074 RBX: 000000000000029e RCX: 0000000000000000
   RDX: 0000000000000001 RSI: 0000000000000246 RDI: 0000000000000246
   RBP: ffff880074e6bda8 R08: 000000003675dc7a R09: ffffffff82013414
   R10: 0000000000034c50 R11: 0000000000000000 R12: ffff88003aab3448
   R13: 00000000000002dc R14: 0000000000046e11 R15: 0000000000000020
   FS:  00007f839f965700(0000) GS:ffff88007fc80000(0000) knlGS:0000000000000000
   CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
   CR2: 00007f839f97e000 CR3: 0000000036723000 CR4: 00000000000006e0
   Call Trace:
     ocfs2_setattr+0x698/0xa90 [ocfs2]
     notify_change+0x1ae/0x380
     do_truncate+0x5e/0x90
     do_sys_ftruncate.constprop.11+0x108/0x160
     entry_SYSCALL_64_fastpath+0x12/0x6d
   Code: 24 28 ba d6 01 00 00 48 c7 c6 30 43 62 a0 8b 41 2c 89 44 24 08 48 8b 41 20 48 c7 c1 78 a3 62 a0 48 89 04 24 31 c0 e8 a0 97 f9 ff <0f> 0b 3d 00 fe ff ff 0f 84 ab fd ff ff 83 f8 fc 0f 84 a2 fd ff
   RIP  [<ffffffffa05c8c30>] ocfs2_truncate_file+0x640/0x6c0 [ocfs2]

It's because ocfs2_inode_lock() gets us a stale LVB in which the i_size is
not equal to the disk i_size.  We mistakenly trust the LVB because the
underlying fsdlm dlm_lock() doesn't set lkb_sbflags with
DLM_SBF_VALNOTVALID properly for us.  But, why?

The current code tries to downconvert the lock without the DLM_LKF_VALBLK
flag to tell o2cb not to update the RSB's LVB if it's a PR->NULL conversion,
even if the lock resource type needs an LVB.  This is not the right way for
fsdlm.

The fsdlm plugin behaves differently on DLM_LKF_VALBLK: it depends on
DLM_LKF_VALBLK to decide if we care about the LVB in the LKB.  If
DLM_LKF_VALBLK is not set, fsdlm will skip recovering RSB's LVB from
this lkb and set the right DLM_SBF_VALNOTVALID appropriately when node
failure happens.

The following diagram briefly illustrates how this crash happens:

RSB1 is inode metadata lock resource with LOCK_TYPE_USES_LVB;

The 1st round:

             Node1                                    Node2
RSB1: PR
                                                  RSB1(master): NULL->EX
ocfs2_downconvert_lock(PR->NULL, set_lvb==0)
  ocfs2_dlm_lock(no DLM_LKF_VALBLK)

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

dlm_lock(no DLM_LKF_VALBLK)
  convert_lock(overwrite lkb->lkb_exflags
               with no DLM_LKF_VALBLK)

RSB1: NULL                                        RSB1: EX
                                                  reset Node2
dlm_recover_rsbs()
  recover_lvb()

/* The LVB is not trustable if the node with EX fails and
 * no lock >= PR is left. We should set RSB_VALNOTVALID for RSB1.
 */

 if (!(lkb_exflags & DLM_LKF_VALBLK)) /* This means we miss the chance
           return;                     * to invalidate the LVB here.
                                       */

The 2nd round:

         Node 1                                Node2
RSB1(become master from recovery)

ocfs2_setattr()
  ocfs2_inode_lock(NULL->EX)
    /* dlm_lock() return the stale lvb without setting DLM_SBF_VALNOTVALID */
    ocfs2_meta_lvb_is_trustable() return 1 /* so we don't refresh inode from disk */
  ocfs2_truncate_file()
      mlog_bug_on_msg(disk isize != i_size_read(inode))  /* crash! */

The fix is quite straightforward.  We keep setting the DLM_LKF_VALBLK flag
for dlm_lock() if the lock resource type needs an LVB and the fsdlm plugin
is used.

Link: http://lkml.kernel.org/r/1481275846-6604-1-git-send-email-zren@suse.com
Signed-off-by: Eric Ren <zren@suse.com>
Reviewed-by: Joseph Qi <jiangqi903@gmail.com>
Cc: Mark Fasheh <mfasheh@versity.com>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agocifs: Do not send echoes before Negotiate is complete
Sachin Prabhu [Sun, 16 Apr 2017 19:37:24 +0000 (20:37 +0100)]
cifs: Do not send echoes before Negotiate is complete

commit 62a6cfddcc0a5313e7da3e8311ba16226fe0ac10 upstream.

commit 4fcd1813e640 ("Fix reconnect to not defer smb3 session reconnect
long after socket reconnect") added support for Negotiate requests to
be initiated by echo calls.

To avoid delays in calling echo after a reconnect, I added the patch
introduced by the commit b8c600120fc8 ("Call echo service immediately
after socket reconnect").

This has however caused a regression with cifs shares which do not have
support for echo calls to trigger Negotiate requests. On connections
which need to call Negotiation, the echo calls trigger an error which
triggers a reconnect which in turn triggers another echo call. This
results in a loop which is only broken when an operation is performed on
the cifs share. For an idle share, it can DOS a server.

The patch uses the smb_operation can_echo() for cifs so that it is
called only if the connection has already been set up.

kernel bz: 194531

Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Tested-by: Jonathan Liu <net147@gmail.com>
Acked-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
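The shape of such a can_echo hook, as a sketch (the connection-state test
is the key idea; the exact field usage may differ):

  static bool cifs_can_echo(struct TCP_Server_Info *server)
  {
          /* only send echoes once the connection is fully established */
          if (server->tcpStatus == CifsGood)
                  return true;

          return false;
  }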
6 years agofs/cifs: make share unaccessible at root level mountable
Aurelien Aptel [Thu, 26 Jan 2017 13:25:49 +0000 (14:25 +0100)]
fs/cifs: make share unaccessible at root level mountable

commit a6b5058fafdf508904bbf16c29b24042cef3c496 upstream.

If, when mounting //HOST/share/sub/dir/foo, we can query /sub/dir/foo but
not any of the path components above:

- store the /sub/dir/foo prefix in the cifs super_block info
- in the superblock, set root dentry to the subpath dentry (instead of
  the share root)
- set a flag in the superblock to remember it
- use prefixpath when building path from a dentry

fixes bso#8950

Signed-off-by: Aurelien Aptel <aaptel@suse.com>
Reviewed-by: Pavel Shilovsky <pshilovsky@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoCIFS: remove bad_network_name flag
Germano Percossi [Fri, 7 Apr 2017 11:29:37 +0000 (12:29 +0100)]
CIFS: remove bad_network_name flag

commit a0918f1ce6a43ac980b42b300ec443c154970979 upstream.

STATUS_BAD_NETWORK_NAME can be received during node failover,
causing the flag to be set and making the reconnect thread
always unsuccessful, thereafter.

Once the only place where it is set is removed, the remaining
bits are rendered moot.

Removing it does not prevent "mount" from failing when a non-existent
share is passed.

What happens when the share really ceases to exist while the
share is mounted is undefined now as much as it was before.

Signed-off-by: Germano Percossi <germano.percossi@citrix.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoCIFS: Fix a possible memory corruption in push locks
Pavel Shilovsky [Wed, 30 Nov 2016 00:14:43 +0000 (16:14 -0800)]
CIFS: Fix a possible memory corruption in push locks

commit e3d240e9d505fc67f8f8735836df97a794bbd946 upstream.

If maxBuf is not 0 but less than the size of the SMB2 lock structure,
we can end up with a memory corruption.

Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
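The guard boils down to a check along these lines in the SMB2 push-locks
path (a sketch, not the exact diff):

  /* an SMB2 lock element must fit into the negotiated buffer size,
   * otherwise the later buffer arithmetic goes wrong */
  if (max_buf < sizeof(struct smb2_lock_element))
          return -EINVAL;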
6 years agoCIFS: Fix missing nls unload in smb2_reconnect()
Pavel Shilovsky [Tue, 29 Nov 2016 19:30:58 +0000 (11:30 -0800)]
CIFS: Fix missing nls unload in smb2_reconnect()

commit 4772c79599564bd08ee6682715a7d3516f67433f upstream.

Acked-by: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoCIFS: Fix a possible memory corruption during reconnect
Pavel Shilovsky [Fri, 4 Nov 2016 18:50:31 +0000 (11:50 -0700)]
CIFS: Fix a possible memory corruption during reconnect

commit 53e0e11efe9289535b060a51d4cf37c25e0d0f2b upstream.

We can not unlock/lock cifs_tcp_ses_lock while walking through ses
and tcon lists because it can corrupt list iterator pointers and
a tcon structure can be released if we don't hold an extra reference.
Fix it by moving a reconnect process to a separate delayed work
and acquiring a reference to every tcon that needs to be reconnected.
Also do not send an echo request on newly established connections.

Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agomd linear: fix a race between linear_add() and linear_congested()
colyli@suse.de [Sat, 28 Jan 2017 13:11:49 +0000 (21:11 +0800)]
md linear: fix a race between linear_add() and linear_congested()

commit 03a9e24ef2aaa5f1f9837356aed79c860521407a upstream.

Recently I received a bug report that on a Linux v3.0 based kernel, hot
adding a disk to an md linear device causes a kernel crash at
linear_congested(). From the crash image analysis, I find that in
linear_congested(), mddev->raid_disks contains the value N, but
conf->disks[] only has N-1 pointers available. Then a NULL pointer
dereference crashes the kernel.

There is a race between linear_add() and linear_congested(); the RCU
stuff used in these two functions cannot avoid the race. Since Linux
v4.0 the RCU code is replaced by introducing mddev_suspend().  After
checking the upstream code, it seems linear_congested() is not called in
the generic_make_request() code path, so mddev_suspend() cannot prevent
it from being called. The possible race still exists.

Here I explain how the race still exists in the current code.  On a
machine with many CPUs, on one CPU, linear_add() is called to add a hard
disk to an md linear device; at the same time on another CPU,
linear_congested() is called to detect whether this md linear device is
congested before issuing an I/O request onto it.

Now I use a possible code execution time sequence to demo how the possible
race happens,

seq    linear_add()                linear_congested()
 0                                 conf=mddev->private
 1   oldconf=mddev->private
 2   mddev->raid_disks++
 3                              for (i=0; i<mddev->raid_disks;i++)
 4                                bdev_get_queue(conf->disks[i].rdev->bdev)
 5   mddev->private=newconf

In linear_add() mddev->raid_disks is increased in time seq 2, and on
another CPU in linear_congested() the for-loop iterates conf->disks[i] by
the increased mddev->raid_disks in time seq 3,4. But conf->disks[], which
should have one more element (a pointer to struct dev_info), is not
updated yet, so accessing its structure member in time seq 4 will cause a
NULL pointer dereference fault.

To fix this race, there are 2 parts of modification in the patch,
 1) Add 'int raid_disks' in struct linear_conf, as a copy of
    mddev->raid_disks. It is initialized in linear_conf(), always being
    consistent with pointers number of 'struct dev_info disks[]'. When
    iterating conf->disks[] in linear_congested(), use conf->raid_disks to
    replace mddev->raid_disks in the for-loop, then NULL pointer deference
    will not happen again.
 2) RCU stuffs are back again, and use kfree_rcu() in linear_add() to
    free oldconf memory. Because oldconf may be referenced as mddev->private
    in linear_congested(), kfree_rcu() makes sure that its memory will not
    be released until no one uses it any more.
Also some code comments are added in this patch, to make this
modification easier to understand.

This patch can be applied for kernels since v4.0 after commit:
3be260cc18f8 ("md/linear: remove rcu protections in favour of
suspend/resume"). But this bug is reported on a Linux v3.0 based kernel,
so people who maintain kernels before Linux v4.0 need to do some
backporting of this patch.

Changelog:
 - V3: add 'int raid_disks' in struct linear_conf, and use kfree_rcu() to
       replace rcu_call() in linear_add().
 - v2: add RCU stuffs by suggestion from Shaohua and Neil.
 - v1: initial effort.

Signed-off-by: Coly Li <colyli@suse.de>
Cc: Shaohua Li <shli@fb.com>
Cc: Neil Brown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
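Part 1) of the fix above can be sketched as follows; the struct layout is
approximate:

  struct linear_conf {
          struct rcu_head         rcu;
          sector_t                array_sectors;
          int                     raid_disks;     /* copy of mddev->raid_disks,
                                                   * always matches disks[] */
          struct dev_info         disks[0];
  };

  /* linear_congested(): iterate with the conf's own, consistent count */
  for (i = 0; i < conf->raid_disks && !ret; i++) {
          struct request_queue *q = bdev_get_queue(conf->disks[i].rdev->bdev);

          ret |= bdi_congested(&q->backing_dev_info, bits);
  }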
6 years agomd:raid1: fix a dead loop when read from a WriteMostly disk
Wei Fang [Mon, 21 Mar 2016 11:18:32 +0000 (19:18 +0800)]
md:raid1: fix a dead loop when read from a WriteMostly disk

commit 816b0acf3deb6d6be5d0519b286fdd4bafade905 upstream.

If first_bad == this_sector when we get the WriteMostly disk
in read_balance(), a valid disk will be returned with zero
max_sectors.  This leads to a dead loop in make_request(), and
OOM will happen because of the endless allocation of struct bio.

Since we can't get data from this disk in this case, continue
with another disk.

Signed-off-by: Wei Fang <fangwei1@huawei.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agomd/raid5: limit request size according to implementation limits
Konstantin Khlebnikov [Sun, 27 Nov 2016 16:32:32 +0000 (19:32 +0300)]
md/raid5: limit request size according to implementation limits

commit e8d7c33232e5fdfa761c3416539bc5b4acd12db5 upstream.

The current implementation employs a 16-bit counter of active stripes in
the lower bits of bio->bi_phys_segments. If a request is big enough to
overflow this counter, the bio will be completed and freed too early.

Fortunately this does not happen in the default configuration because
several other limits prevent it: stripe_cache_size * nr_disks effectively
limits the count of active stripes, and the small max_sectors_kb at lower
disks prevents it during normal read/write operations.

Overflow easily happens in discard if it's enabled by the module parameter
"devices_handle_discard_safely" and stripe_cache_size is set big enough.

This patch limits the request size to 256MB - 8KB to prevent overflows.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Shaohua Li <shli@kernel.org>
Cc: Neil Brown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agodm space map metadata: fix 'struct sm_metadata' leak on failed create
Benjamin Marzinski [Wed, 30 Nov 2016 23:56:14 +0000 (17:56 -0600)]
dm space map metadata: fix 'struct sm_metadata' leak on failed create

commit 314c25c56c1ee5026cf99c570bdfe01847927acb upstream.

In dm_sm_metadata_create() we temporarily change the dm_space_map
operations from 'ops' (whose .destroy function deallocates the
sm_metadata) to 'bootstrap_ops' (whose .destroy function doesn't).

If dm_sm_metadata_create() fails in sm_ll_new_metadata() or
sm_ll_extend(), it exits back to dm_tm_create_internal(), which calls
dm_sm_destroy() with the intention of freeing the sm_metadata, but it
doesn't (because the dm_space_map operations is still set to
'bootstrap_ops').

Fix this by setting the dm_space_map operations back to 'ops' if
dm_sm_metadata_create() fails when it is set to 'bootstrap_ops'.

[js] no nr_blocks test in 3.12 yet

Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
Acked-by: Joe Thornber <ejt@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
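A sketch of the error-path restoration described above; member names
approximate dm-space-map-metadata.c:

  memcpy(&smm->sm, &bootstrap_ops, sizeof(smm->sm));
  r = sm_ll_new_metadata(&smm->ll, tm);
  if (!r)
          r = sm_ll_extend(&smm->ll, nr_blocks);
  if (r) {
          /* put the real ops back so dm_sm_destroy() frees sm_metadata */
          memcpy(&smm->sm, &ops, sizeof(smm->sm));
          return r;
  }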
6 years agodm crypt: mark key as invalid until properly loaded
Ondrej Kozina [Wed, 2 Nov 2016 14:02:08 +0000 (15:02 +0100)]
dm crypt: mark key as invalid until properly loaded

commit 265e9098bac02bc5e36cda21fdbad34cb5b2f48d upstream.

In crypt_set_key(), if a failure occurs while replacing the old key
(e.g. tfm->setkey() fails) the key must not have DM_CRYPT_KEY_VALID flag
set.  Otherwise, the crypto layer would have an invalid key that still
has DM_CRYPT_KEY_VALID flag set.

Signed-off-by: Ondrej Kozina <okozina@redhat.com>
Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
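Conceptually the ordering becomes the following sketch;
crypt_setkey_allcpus() stands in for the per-cpu setkey helper of that
era and its exact name is an assumption:

  /* drop the valid bit first; set it again only once setkey succeeded */
  clear_bit(DM_CRYPT_KEY_VALID, &cc->flags);
  r = crypt_setkey_allcpus(cc);
  if (!r)
          set_bit(DM_CRYPT_KEY_VALID, &cc->flags);
  return r;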
6 years agoblock: fix del_gendisk() vs blkdev_ioctl crash
Dan Williams [Tue, 29 Dec 2015 22:02:29 +0000 (14:02 -0800)]
block: fix del_gendisk() vs blkdev_ioctl crash

commit ac34f15e0c6d2fd58480052b6985f6991fb53bcc upstream.

When tearing down a block device early in its lifetime, userspace may
still be performing discovery actions like blkdev_ioctl() to re-read
partitions.

The nvdimm_revalidate_disk() implementation depends on
disk->driverfs_dev to be valid at entry.  However, it is set to NULL in
del_gendisk() and fatally this is happening *before* the disk device is
deleted from userspace view.

There's no reason for del_gendisk() to clear ->driverfs_dev.  That
device is the parent of the disk.  It is guaranteed to not be freed
until the disk, as a child, drops its ->parent reference.

We could also fix this issue locally in nvdimm_revalidate_disk() by
using disk_to_dev(disk)->parent, but let's fix it globally since
->driverfs_dev follows the lifetime of the parent.  Longer term we
should probably just add a @parent parameter to add_disk(), and stop
carrying this pointer in the gendisk.

 BUG: unable to handle kernel NULL pointer dereference at           (null)
 IP: [<ffffffffa00340a8>] nvdimm_revalidate_disk+0x18/0x90 [libnvdimm]
 CPU: 2 PID: 538 Comm: systemd-udevd Tainted: G           O    4.4.0-rc5 #2257
 [..]
 Call Trace:
  [<ffffffff8143e5c7>] rescan_partitions+0x87/0x2c0
  [<ffffffff810f37f9>] ? __lock_is_held+0x49/0x70
  [<ffffffff81438c62>] __blkdev_reread_part+0x72/0xb0
  [<ffffffff81438cc5>] blkdev_reread_part+0x25/0x40
  [<ffffffff8143982d>] blkdev_ioctl+0x4fd/0x9c0
  [<ffffffff811246c9>] ? current_kernel_time64+0x69/0xd0
  [<ffffffff812916dd>] block_ioctl+0x3d/0x50
  [<ffffffff81264c38>] do_vfs_ioctl+0x308/0x560
  [<ffffffff8115dbd1>] ? __audit_syscall_entry+0xb1/0x100
  [<ffffffff810031d6>] ? do_audit_syscall_entry+0x66/0x70
  [<ffffffff81264f09>] SyS_ioctl+0x79/0x90
  [<ffffffff81902672>] entry_SYSCALL_64_fastpath+0x12/0x76

Cc: Jan Kara <jack@suse.cz>
Cc: Jens Axboe <axboe@fb.com>
Reported-by: Robert Hu <robert.hu@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoblock: allow WRITE_SAME commands with the SG_IO ioctl
Mauricio Faria de Oliveira [Sat, 25 Mar 2017 16:18:14 +0000 (21:48 +0530)]
block: allow WRITE_SAME commands with the SG_IO ioctl

commit 25cdb64510644f3e854d502d69c73f21c6df88a9 upstream.

The WRITE_SAME commands are not present in the blk_default_cmd_filter
write_ok list, and thus are failed with -EPERM when the SG_IO ioctl()
is executed without CAP_SYS_RAWIO capability (e.g., unprivileged users).
[ sg_io() -> blk_fill_sghdr_rq() > blk_verify_command() -> -EPERM ]

The problem can be reproduced with the sg_write_same command

  # sg_write_same --num 1 --xferlen 512 /dev/sda
  #

  # capsh --drop=cap_sys_rawio -- -c \
    'sg_write_same --num 1 --xferlen 512 /dev/sda'
    Write same: pass through os error: Operation not permitted
  #

For comparison, the WRITE_VERIFY command does not observe this problem,
since it is in that list:

  # capsh --drop=cap_sys_rawio -- -c \
    'sg_write_verify --num 1 --ilen 512 --lba 0 /dev/sda'
  #

So, this patch adds the WRITE_SAME commands to the list, in order
for the SG_IO ioctl to finish successfully:

  # capsh --drop=cap_sys_rawio -- -c \
    'sg_write_same --num 1 --xferlen 512 /dev/sda'
  #

That case happens to be exercised by QEMU KVM guests with 'scsi-block' devices
(qemu "-device scsi-block" [1], libvirt "<disk type='block' device='lun'>" [2]),
which employs the SG_IO ioctl() and runs as an unprivileged user (libvirt-qemu).

In that scenario, when a filesystem (e.g., ext4) performs its zero-out calls,
which are translated to write-same calls in the guest kernel, and then into
SG_IO ioctls to the host kernel, SCSI I/O errors may be observed in the guest:

  [...] sd 0:0:0:0: [sda] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
  [...] sd 0:0:0:0: [sda] tag#0 Sense Key : Aborted Command [current]
  [...] sd 0:0:0:0: [sda] tag#0 Add. Sense: I/O process terminated
  [...] sd 0:0:0:0: [sda] tag#0 CDB: Write Same(10) 41 00 01 04 e0 78 00 00 08 00
  [...] blk_update_request: I/O error, dev sda, sector 17096824

Links:
[1] http://git.qemu.org/?p=qemu.git;a=commit;h=336a6915bc7089fb20fea4ba99972ad9a97c5f52
[2] https://libvirt.org/formatdomain.html#elementsDisks (see 'disk' -> 'device')

Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Signed-off-by: Brahadambal Srinivasan <latha@linux.vnet.ibm.com>
Reported-by: Manjunatha H R <manjuhr1@in.ibm.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
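The change amounts to adding the WRITE SAME opcodes to the default
write-allowed bitmap, roughly as in this sketch (the 32-byte variant is
omitted here):

  /* in blk_set_cmd_filter_defaults(), next to the existing write_ok bits */
  __set_bit(WRITE_SAME, filter->write_ok);
  __set_bit(WRITE_SAME_16, filter->write_ok);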
6 years agoblock: fix use-after-free in sys_ioprio_get()
Omar Sandoval [Fri, 1 Jul 2016 07:39:35 +0000 (00:39 -0700)]
block: fix use-after-free in sys_ioprio_get()

commit 8ba8682107ee2ca3347354e018865d8e1967c5f4 upstream.

get_task_ioprio() accesses the task->io_context without holding the task
lock and thus can race with exit_io_context(), leading to a
use-after-free. The reproducer below hits this within a few seconds on
my 4-core QEMU VM:

int main(int argc, char **argv)
{
        pid_t pid, child;
        long nproc, i;

        /* ioprio_set(IOPRIO_WHO_PROCESS, 0, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0)); */
        syscall(SYS_ioprio_set, 1, 0, 0x6000);

        nproc = sysconf(_SC_NPROCESSORS_ONLN);

        for (i = 0; i < nproc; i++) {
                pid = fork();
                assert(pid != -1);
                if (pid == 0) {
                        for (;;) {
                                pid = fork();
                                assert(pid != -1);
                                if (pid == 0) {
                                        _exit(0);
                                } else {
                                        child = wait(NULL);
                                        assert(child == pid);
                                }
                        }
                }

                pid = fork();
                assert(pid != -1);
                if (pid == 0) {
                        for (;;) {
                                /* ioprio_get(IOPRIO_WHO_PGRP, 0); */
                                syscall(SYS_ioprio_get, 2, 0);
                        }
                }
        }

        for (;;) {
                /* ioprio_get(IOPRIO_WHO_PGRP, 0); */
                syscall(SYS_ioprio_get, 2, 0);
        }

        return 0;
}

This gets us KASAN dumps like this:

[   35.526914] ==================================================================
[   35.530009] BUG: KASAN: out-of-bounds in get_task_ioprio+0x7b/0x90 at addr ffff880066f34e6c
[   35.530009] Read of size 2 by task ioprio-gpf/363
[   35.530009] =============================================================================
[   35.530009] BUG blkdev_ioc (Not tainted): kasan: bad access detected
[   35.530009] -----------------------------------------------------------------------------

[   35.530009] Disabling lock debugging due to kernel taint
[   35.530009] INFO: Allocated in create_task_io_context+0x2b/0x370 age=0 cpu=0 pid=360
[   35.530009]  ___slab_alloc+0x55d/0x5a0
[   35.530009]  __slab_alloc.isra.20+0x2b/0x40
[   35.530009]  kmem_cache_alloc_node+0x84/0x200
[   35.530009]  create_task_io_context+0x2b/0x370
[   35.530009]  get_task_io_context+0x92/0xb0
[   35.530009]  copy_process.part.8+0x5029/0x5660
[   35.530009]  _do_fork+0x155/0x7e0
[   35.530009]  SyS_clone+0x19/0x20
[   35.530009]  do_syscall_64+0x195/0x3a0
[   35.530009]  return_from_SYSCALL_64+0x0/0x6a
[   35.530009] INFO: Freed in put_io_context+0xe7/0x120 age=0 cpu=0 pid=1060
[   35.530009]  __slab_free+0x27b/0x3d0
[   35.530009]  kmem_cache_free+0x1fb/0x220
[   35.530009]  put_io_context+0xe7/0x120
[   35.530009]  put_io_context_active+0x238/0x380
[   35.530009]  exit_io_context+0x66/0x80
[   35.530009]  do_exit+0x158e/0x2b90
[   35.530009]  do_group_exit+0xe5/0x2b0
[   35.530009]  SyS_exit_group+0x1d/0x20
[   35.530009]  entry_SYSCALL_64_fastpath+0x1a/0xa4
[   35.530009] INFO: Slab 0xffffea00019bcd00 objects=20 used=4 fp=0xffff880066f34ff0 flags=0x1fffe0000004080
[   35.530009] INFO: Object 0xffff880066f34e58 @offset=3672 fp=0x0000000000000001
[   35.530009] ==================================================================

Fix it by grabbing the task lock while we poke at the io_context.

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Acked-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
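The fixed helper then looks roughly like this sketch:

  static int get_task_ioprio(struct task_struct *p)
  {
          int ret;

          ret = security_task_getioprio(p);
          if (ret)
                  goto out;
          ret = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, IOPRIO_NORM);
          task_lock(p);   /* keeps exit_io_context() from freeing it under us */
          if (p->io_context)
                  ret = p->io_context->ioprio;
          task_unlock(p);
  out:
          return ret;
  }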
6 years agoext4: fix inode checksum calculation problem if i_extra_size is small
Daeho Jeong [Thu, 1 Dec 2016 16:49:12 +0000 (11:49 -0500)]
ext4: fix inode checksum calculation problem if i_extra_size is small

commit 05ac5aa18abd7db341e54df4ae2b4c98ea0e43b7 upstream.

We've fixed the race condition problem in calculating the ext4 checksum
value in commit b47820edd163 ("ext4: avoid modifying checksum fields
directly during checksum verification").  However, with this change,
when calculating the checksum value of an inode whose i_extra_size is
less than 4, we couldn't calculate the checksum value in a proper way.
This problem was found and reported by Nix, thank you.

Reported-by: Nix <nix@esperi.org.uk>
Signed-off-by: Daeho Jeong <daeho.jeong@samsung.com>
Signed-off-by: Youngjin Gil <youngjin.gil@samsung.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoext4: return EROFS if device is r/o and journal replay is needed
Theodore Ts'o [Sun, 5 Feb 2017 06:26:48 +0000 (01:26 -0500)]
ext4: return EROFS if device is r/o and journal replay is needed

commit 4753d8a24d4588657bc0a4cd66d4e282dff15c8c upstream.

If the file system requires journal recovery, and the device is
read-only, return EROFS to the mount system call.  This allows xfstests
generic/050 to pass.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoext4: preserve the needs_recovery flag when the journal is aborted
Theodore Ts'o [Sun, 5 Feb 2017 04:38:06 +0000 (23:38 -0500)]
ext4: preserve the needs_recovery flag when the journal is aborted

commit 97abd7d4b5d9c48ec15c425485f054e1c15e591b upstream.

If the journal is aborted, the needs_recovery feature flag should not
be removed.  Otherwise, the journal might not get replayed and this
could lead to more data getting lost.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoext4: trim allocation requests to group size
Jan Kara [Fri, 27 Jan 2017 19:34:30 +0000 (14:34 -0500)]
ext4: trim allocation requests to group size

commit cd648b8a8fd5071d232242d5ee7ee3c0815776af upstream.

If filesystem groups are artificially small (using parameter -g to
mkfs.ext4), ext4_mb_normalize_request() can result in a request that is
larger than a block group. Trim the request size to not confuse
allocation code.

Reported-by: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
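The trim can be sketched as a clamp inside ext4_mb_normalize_request();
the exact placement is an assumption:

  /* never request more than one block group's worth of blocks */
  if (size > EXT4_BLOCKS_PER_GROUP(ac->ac_sb))
          size = EXT4_BLOCKS_PER_GROUP(ac->ac_sb);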
6 years agoext4: fix fencepost in s_first_meta_bg validation
Theodore Ts'o [Wed, 15 Feb 2017 06:26:39 +0000 (01:26 -0500)]
ext4: fix fencepost in s_first_meta_bg validation

commit 2ba3e6e8afc9b6188b471f27cf2b5e3cf34e7af2 upstream.

It is OK for s_first_meta_bg to be equal to the number of block group
descriptor blocks.  (It rarely happens, but it shouldn't cause any
problems.)

https://bugzilla.kernel.org/show_bug.cgi?id=194567

Fixes: 3a4b77cd47bb837b8557595ec7425f281f2ca1fe
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agojbd2: don't leak modified metadata buffers on an aborted journal
Theodore Ts'o [Sun, 5 Feb 2017 04:14:19 +0000 (23:14 -0500)]
jbd2: don't leak modified metadata buffers on an aborted journal

commit e112666b4959b25a8552d63bc564e1059be703e8 upstream.

If the journal has been aborted, we shouldn't mark the underlying
buffer head as dirty, since that will cause the metadata block to get
modified.  And if the journal has been aborted, we shouldn't allow
this since it will almost certainly lead to a corrupted file system.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoext4: validate s_first_meta_bg at mount time
Eryu Guan [Thu, 1 Dec 2016 20:08:37 +0000 (15:08 -0500)]
ext4: validate s_first_meta_bg at mount time

commit 3a4b77cd47bb837b8557595ec7425f281f2ca1fe upstream.

Ralf Spenneberg reported that he hit a kernel crash when mounting a
modified ext4 image.  It turns out that the kernel crashed when
calculating fs overhead (ext4_calculate_overhead()); this is because
the image has a very large s_first_meta_bg (debug code shows it's
842150400), and ext4 overruns the PAGE_SIZE bitmap buffer in
count_overhead() when setting bits in it.

ext4_calculate_overhead():
  buf = get_zeroed_page(GFP_NOFS);  <=== PAGE_SIZE buffer
  blks = count_overhead(sb, i, buf);

count_overhead():
  for (j = ext4_bg_num_gdb(sb, grp); j > 0; j--) { <=== j = 842150400
          ext4_set_bit(EXT4_B2C(sbi, s++), buf);   <=== buffer overrun
          count++;
  }

This can be reproduced easily for me by this script:

  #!/bin/bash
  rm -f fs.img
  mkdir -p /mnt/ext4
  fallocate -l 16M fs.img
  mke2fs -t ext4 -O bigalloc,meta_bg,^resize_inode -F fs.img
  debugfs -w -R "ssv first_meta_bg 842150400" fs.img
  mount -o loop fs.img /mnt/ext4

Fix it by validating s_first_meta_bg first at mount time, and
refusing to mount if its value exceeds the largest possible meta_bg
number.

[js] use EXT4_HAS_INCOMPAT_FEATURE instead of new
     ext4_has_feature_meta_bg

Reported-by: Ralf Spenneberg <ralf@os-t.de>
Signed-off-by: Eryu Guan <guaneryu@gmail.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
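Together with the fencepost fix listed earlier (equality with the group
descriptor block count is allowed), the mount-time check is roughly the
following sketch, using the 3.12-era feature macro mentioned in the [js]
note:

  if (EXT4_HAS_INCOMPAT_FEATURE(sb, EXT4_FEATURE_INCOMPAT_META_BG) &&
      le32_to_cpu(es->s_first_meta_bg) > db_count) {
          ext4_msg(sb, KERN_WARNING,
                   "first meta block group too large: %u "
                   "(group descriptor block count %u)",
                   le32_to_cpu(es->s_first_meta_bg), db_count);
          goto failed_mount;
  }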
6 years agoext4: add sanity checking to count_overhead()
Theodore Ts'o [Fri, 18 Nov 2016 18:37:47 +0000 (13:37 -0500)]
ext4: add sanity checking to count_overhead()

commit c48ae41bafe31e9a66d8be2ced4e42a6b57fa814 upstream.

The commit "ext4: sanity check the block and cluster size at mount
time" should prevent any problems, but in case the superblock is
modified while the file system is mounted, add an extra safety check
to make sure we won't overrun the allocated buffer.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoext4: fix in-superblock mount options processing
Theodore Ts'o [Fri, 18 Nov 2016 18:24:26 +0000 (13:24 -0500)]
ext4: fix in-superblock mount options processing

commit 5aee0f8a3f42c94c5012f1673420aee96315925a upstream.

Fix a large number of problems with how we handle mount options in the
superblock.  For one, if the string in the superblock is long enough
that it is not null terminated, we could run off the end of the string
and try to interpret superblock fields as characters.  It's unlikely
this will cause a security problem, but it could result in an invalid
parse.  Also, parse_options is destructive to the string, so in some
cases if there is a comma-separated string, it would be modified in
the superblock.  (Fortunately it only happens on file systems with a
1k block size.)

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
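One way to address both issues is to parse a bounded, NUL-terminated copy
instead of the live superblock field, sketched below with the
parse_options() arguments abbreviated:

  char *s_mount_opts;

  s_mount_opts = kstrndup(sbi->s_es->s_mount_opts,
                          sizeof(sbi->s_es->s_mount_opts), GFP_KERNEL);
  if (!s_mount_opts)
          goto failed_mount;
  if (!parse_options(s_mount_opts, sb, ...))
          ext4_msg(sb, KERN_WARNING,
                   "failed to parse options in superblock: %s",
                   s_mount_opts);
  kfree(s_mount_opts);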
6 years agoext4: use more strict checks for inodes_per_block on mount
Theodore Ts'o [Fri, 18 Nov 2016 18:28:30 +0000 (13:28 -0500)]
ext4: use more strict checks for inodes_per_block on mount

commit cd6bb35bf7f6d7d922509bf50265383a0ceabe96 upstream.

Centralize the checks for inodes_per_block and be more strict to make
sure the inodes_per_block_group can't end up being zero.

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoBtrfs: fix memory leak in reading btree blocks
Liu Bo [Wed, 3 Aug 2016 19:33:01 +0000 (12:33 -0700)]
Btrfs: fix memory leak in reading btree blocks

commit 2571e739677f1e4c0c63f5ed49adcc0857923625 upstream.

So we can read a btree block via readahead or intentional read,
and we can end up with a memory leak when something happens as
follows,
1) readahead starts to read block A but does not wait for read
   completion,
2) btree_readpage_end_io_hook finds that block A is corrupted,
   and it needs to clear all block A's pages' uptodate bit.
3) meanwhile an intentional read kicks in and checks block A's
   pages' uptodate to decide which page needs to be read.
4) since some pages have the uptodate bit set during 3)'s check,
   3) doesn't count them for eb->io_pages, but they are later
   cleared by 2), so we have to readpage on those pages; we get
   the wrong eb->io_pages, which results in a memory leak of
   this block.

This fixes the problem by first taking all the pages' locks and
then checking the pages' uptodate bits.

   t1(readahead)                              t2(readahead endio)                                       t3(the following read)
read_extent_buffer_pages                    end_bio_extent_readpage
  for pg in eb:                                for page 0,1,2 in eb:
      if pg is uptodate:                           btree_readpage_end_io_hook(pg)
          num_reads++                              if uptodate:
  eb->io_pages = num_reads                             SetPageUptodate(pg)              _______________
  for pg in eb:                                for page 3 in eb:                                     read_extent_buffer_pages
       if pg is NOT uptodate:                      btree_readpage_end_io_hook(pg)                       for pg in eb:
           __extent_read_full_page(pg)                 sanity check reports something wrong                 if pg is uptodate:
                                                       clear_extent_buffer_uptodate(eb)                         num_reads++
                                                           for pg in eb:                                eb->io_pages = num_reads
                                                               ClearPageUptodate(page)  _______________
                                                                                                        for pg in eb:
                                                                                                            if pg is NOT uptodate:
                                                                                                                __extent_read_full_page(pg)

So t3's eb->io_pages is not consistent with the number of pages it's reading,
and during endio(), atomic_dec_and_test(&eb->io_pages) will get a negative
number so that we're not able to free the eb.

Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoRevert "Btrfs: don't delay inode ref updates during log replay"
Jeff Mahoney [Sat, 3 Dec 2016 03:21:55 +0000 (22:21 -0500)]
Revert "Btrfs: don't delay inode ref updates during log replay"

commit 081fafddc3ff1e86e36024b0177c08e340b19a12 upstream.

This reverts commit 644d10716875b24388680925d6c7502420987bfe, upstream
commit 6f8960541b1eb6054a642da48daae2320fddba93.

The original patch for mainline, 6f8960541b1 (Btrfs: don't delay
inode ref updates during log replay) lists 1d52c78afbb (Btrfs: try
not to ENOSPC on log replay) as the only pre-3.18 dependency, but it
also depends on 67de11769bd (Btrfs: introduce the delayed inode ref
deletion for the single link inode), which was introduced in 3.14
and isn't in 3.12.y.

The -stable commit added the check to btrfs_delayed_update_inode,
which may look similar to btrfs_delayed_delete_inode_ref, but it's
only superficial.  The tops of both functions handle typical
delayed node boilerplate.  The upshot is that the patch is harmless
since the caller already checks to see if we're doing log recovery,
so we're not breaking anything.  It should be reverted because it
makes it appear as if this issue was fixed for users who did
backport 67de11769bd, when it is not.

Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
6 years agoLinux 3.10.106
Willy Tarreau [Thu, 15 Jun 2017 17:56:54 +0000 (19:56 +0200)]
Linux 3.10.106

6 years agodccp: fix freeing skb too early for IPV6_RECVPKTINFO
Andrey Konovalov [Thu, 16 Feb 2017 16:22:46 +0000 (17:22 +0100)]
dccp: fix freeing skb too early for IPV6_RECVPKTINFO

[ Upstream commit 5edabca9d4cff7f1f2b68f0bac55ef99d9798ba4 ]

In the current DCCP implementation an skb for a DCCP_PKT_REQUEST packet
is forcibly freed via __kfree_skb in dccp_rcv_state_process if
dccp_v6_conn_request successfully returns.

However, if IPV6_RECVPKTINFO is set on a socket, the address of the skb
is saved to ireq->pktopts and the ref count for skb is incremented in
dccp_v6_conn_request, so skb is still in use. Nevertheless, it gets freed
in dccp_rcv_state_process.

Fix by calling consume_skb instead of doing goto discard and therefore
calling __kfree_skb.

Similar fixes for TCP:

fb7e2399ec17f1004c0e0ccfd17439f8759ede01 [TCP]: skb is unexpectedly freed.
0aea76d35c9651d55bbaf746e7914e5f9ae5a25d tcp: SYN packets are now
simply consumed

Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
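In the DCCP_LISTEN/DCCP_PKT_REQUEST path the change boils down to this
sketch:

  if (inet_csk(sk)->icsk_af_ops->conn_request(sk, skb) < 0)
          return 1;
  /* conn_request() may keep its own reference to skb (IPV6_RECVPKTINFO),
   * so release ours without calling __kfree_skb() */
  consume_skb(skb);
  return 0;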
6 years agochar: lp: fix possible integer overflow in lp_setup()
Willy Tarreau [Tue, 16 May 2017 17:18:55 +0000 (19:18 +0200)]
char: lp: fix possible integer overflow in lp_setup()

commit 3e21f4af170bebf47c187c1ff8bf155583c9f3b1 upstream.

The lp_setup() code doesn't apply any bounds checking when passing
"lp=none", and only in this case, resulting in an overflow of the
parport_nr[] array. All versions in Git history are affected.

Reported-By: Roee Hay <roee.hay@hcl.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
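The missing bound check amounts to something like this sketch in
lp_setup(), where LP_NO is the driver's maximum port count:

  } else if (!strcmp(str, "none")) {
          if (parport_ptr < LP_NO)
                  parport_nr[parport_ptr++] = LP_PARPORT_NONE;
  }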
6 years agodccp/tcp: do not inherit mc_list from parent
Eric Dumazet [Tue, 9 May 2017 13:29:19 +0000 (06:29 -0700)]
dccp/tcp: do not inherit mc_list from parent

commit 657831ffc38e30092a2d5f03d385d710eb88b09a upstream.

syzkaller found a way to trigger double frees from ip_mc_drop_socket()

It turns out that we leave a copy of the parent mc_list at accept()
time, which is very bad.

Very similar to commit 8b485ce69876 ("tcp: do not inherit
fastopen_req from parent")

Initial report from Pray3r, completed by Andrey.
Thanks a lot to them!

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Pray3r <pray3r.z@gmail.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
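The fix is essentially one line in the child-socket clone path; its exact
placement is an assumption here:

  /* the child must start with an empty multicast list instead of
   * aliasing the parent's (which would later be freed twice) */
  inet_sk(newsk)->mc_list = NULL;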
6 years agomm/huge_memory.c: respect FOLL_FORCE/FOLL_COW for thp
Keno Fischer [Tue, 24 Jan 2017 23:17:48 +0000 (15:17 -0800)]
mm/huge_memory.c: respect FOLL_FORCE/FOLL_COW for thp

commit 8310d48b125d19fcd9521d83b8293e63eb1646aa upstream.

In commit 19be0eaffa3a ("mm: remove gup_flags FOLL_WRITE games from
__get_user_pages()"), the mm code was changed from unsetting FOLL_WRITE
after a COW was resolved to setting the (newly introduced) FOLL_COW
instead.  Simultaneously, the check in gup.c was updated to still allow
writes with FOLL_FORCE set if FOLL_COW had also been set.

However, a similar check in huge_memory.c was forgotten.  As a result,
remote memory writes to ro regions of memory backed by transparent huge
pages cause an infinite loop in the kernel (handle_mm_fault sets
FOLL_COW and returns 0 causing a retry, but follow_trans_huge_pmd bails
out immediately because `(flags & FOLL_WRITE) && !pmd_write(*pmd)` is
true).

While in this state the process is still SIGKILLable, but little else
works (e.g.  no ptrace attach, no other signals).  This is easily
reproduced with the following code (assuming THP is set to always):

    #include <assert.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define TEST_SIZE 5 * 1024 * 1024

    int main(void) {
      int status;
      pid_t child;
      int fd = open("/proc/self/mem", O_RDWR);
      void *addr = mmap(NULL, TEST_SIZE, PROT_READ,
                        MAP_ANONYMOUS | MAP_PRIVATE, 0, 0);
      assert(addr != MAP_FAILED);
      pid_t parent_pid = getpid();
      if ((child = fork()) == 0) {
        void *addr2 = mmap(NULL, TEST_SIZE, PROT_READ | PROT_WRITE,
                           MAP_ANONYMOUS | MAP_PRIVATE, 0, 0);
        assert(addr2 != MAP_FAILED);
        memset(addr2, 'a', TEST_SIZE);
        pwrite(fd, addr2, TEST_SIZE, (uintptr_t)addr);
        return 0;
      }
      assert(child == waitpid(child, &status, 0));
      assert(WIFEXITED(status) && WEXITSTATUS(status) == 0);
      return 0;
    }

Fix this by updating follow_trans_huge_pmd in huge_memory.c analogously
to the update in gup.c in the original commit.  The same pattern exists
in follow_devmap_pmd.  However, we should not be able to reach that
check with FOLL_COW set, so add WARN_ONCE to make sure we notice if we
ever do.

[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/20170106015025.GA38411@juliacomputing.com
Signed-off-by: Keno Fischer <keno@juliacomputing.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Willy Tarreau <w@1wt.eu>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[bwh: Backported to 3.2:
 - Drop change to follow_devmap_pmd()
 - pmd_dirty() is not available; check the page flags as in
   can_follow_write_pte()
 - Adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
[mhocko:
  This has been forward ported from the 3.2 stable tree.
  And fixed to return NULL.]
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
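For reference, a hedged sketch of a can_follow_write_pmd() helper
mirroring the pte-side check; per the backport notes above the page flags
are used instead of pmd_dirty(), and the exact form may differ:

  /*
   * FOLL_FORCE may write to even unwritable pmd's, but only after a COW
   * cycle has happened (FOLL_COW set) and the page looks like broken COW.
   */
  static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
                                          unsigned int flags)
  {
          return pmd_write(pmd) ||
                 ((flags & FOLL_FORCE) && (flags & FOLL_COW) &&
                  page && PageAnon(page));
  }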