Marcin Nowakowski [Thu, 6 Jul 2017 22:35:31 +0000 (15:35 -0700)]
kernel/extable.c: mark core_kernel_text notrace
commit
c0d80ddab89916273cb97114889d3f337bc370ae upstream.
core_kernel_text is used by MIPS in its function graph trace processing,
so having this method traced leads to an infinite set of recursive calls
such as:
Call Trace:
ftrace_return_to_handler+0x50/0x128
core_kernel_text+0x10/0x1b8
prepare_ftrace_return+0x6c/0x114
ftrace_graph_caller+0x20/0x44
return_to_handler+0x10/0x30
return_to_handler+0x0/0x30
return_to_handler+0x0/0x30
ftrace_ops_no_ops+0x114/0x1bc
core_kernel_text+0x10/0x1b8
core_kernel_text+0x10/0x1b8
core_kernel_text+0x10/0x1b8
ftrace_ops_no_ops+0x114/0x1bc
core_kernel_text+0x10/0x1b8
prepare_ftrace_return+0x6c/0x114
ftrace_graph_caller+0x20/0x44
(...)
Mark the function notrace to avoid it being traced.
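The change itself is a one-word annotation on the definition in kernel/extable.c; a simplified sketch of the shape (the real function also covers init text during boot):

    /* notrace keeps ftrace/function-graph from instrumenting this helper */
    int notrace core_kernel_text(unsigned long addr)
    {
            if (addr >= (unsigned long)_stext &&
                addr <= (unsigned long)_etext)
                    return 1;
            return 0;
    }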
Link: http://lkml.kernel.org/r/1498028607-6765-1-git-send-email-marcin.nowakowski@imgtec.com
Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Meyer <thomas@m3y3r.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Tue, 27 Jun 2017 14:02:20 +0000 (07:02 -0700)]
net: prevent sign extension in dev_get_stats()
commit
6f64ec74515925cced6df4571638b5a099a49aae upstream.
Similar to the fix provided by Dominik Heidler in commit 9b3dc0a17d73 ("l2tp: cast l2tp traffic counter to unsigned"), we need to take care of 32bit kernels in dev_get_stats().
When using atomic_long_read(), we add a 'long' to a u64 and might misinterpret the high order bit, unless we cast to unsigned.
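A sketch of the resulting pattern in dev_get_stats() (counter names taken from the Fixes tags below; not the verbatim diff):

    /* on 32-bit, atomic_long_read() yields a signed 32-bit long; cast before
     * adding it into the u64 counters to avoid sign extension */
    storage->rx_dropped   += (unsigned long)atomic_long_read(&dev->rx_dropped);
    storage->tx_dropped   += (unsigned long)atomic_long_read(&dev->tx_dropped);
    storage->rx_nohandler += (unsigned long)atomic_long_read(&dev->rx_nohandler);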
Fixes:
caf586e5f23ce ("net: add a core netdev->rx_dropped counter")
Fixes:
015f0688f57ca ("net: net: add a core netdev->tx_dropped counter")
Fixes:
6e7333d315a76 ("net: add rx_nohandler stat counter")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jarod Wilson <jarod@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jan Kara [Mon, 22 May 2017 02:33:23 +0000 (22:33 -0400)]
ext4: fix SEEK_HOLE
commit
7d95eddf313c88b24f99d4ca9c2411a4b82fef33 upstream.
Currently, the SEEK_HOLE implementation in ext4 may both report a hole at an offset that already contains data and skip some holes during a search for the next hole. The first problem is demonstrated by:
xfs_io -c "falloc 0 256k" -c "pwrite 0 56k" -c "seek -h 0" file
wrote 57344/57344 bytes at offset 0
56 KiB, 14 ops; 0.0000 sec (2.054 GiB/sec and 538461.5385 ops/sec)
Whence Result
HOLE 0
Where we can see that SEEK_HOLE wrongly returned offset 0 as containing
a hole although we have written data there. The second problem can be
demonstrated by:
xfs_io -c "falloc 0 256k" -c "pwrite 0 56k" -c "pwrite 128k 8k"
-c "seek -h 0" file
wrote 57344/57344 bytes at offset 0
56 KiB, 14 ops; 0.0000 sec (1.978 GiB/sec and 518518.5185 ops/sec)
wrote 8192/8192 bytes at offset 131072
8 KiB, 2 ops; 0.0000 sec (2 GiB/sec and 500000.0000 ops/sec)
Whence Result
HOLE 139264
Where we can see that hole at offsets 56k..128k has been ignored by the
SEEK_HOLE call.
The underlying problem is in ext4_find_unwritten_pgoff(), which is simply buggy. In some cases it fails to update the returned offset when it finds a hole (when no pages are found or when the first found page has a higher index than expected); in other cases the conditions for detecting a hole are missing altogether (we fail to detect a situation where the indices of the returned pages are not contiguous).
Fix ext4_find_unwritten_pgoff() to properly detect non-contiguous page indices, and handle all cases where we got fewer pages than expected in one place and handle them properly there.
Fixes:
c8c0df241cc2719b1262e627f999638411934f60
CC: Zheng Liu <wenqing.lz@taobao.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Ilya Matveychikov [Fri, 23 Jun 2017 22:08:49 +0000 (15:08 -0700)]
lib/cmdline.c: fix get_options() overflow while parsing ranges
commit
a91e0f680bcd9e10c253ae8b62462a38bd48f09f upstream.
When using get_options() it's possible to specify a range of numbers, like 1-100500. The problem is that it doesn't track the array size while internally calling get_range(), which iterates over the range and fills the memory with numbers.
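For illustration only, a small user-space analogue of the missing bound check (names invented, not the kernel code): the range expansion has to stop once the destination array is full.

    #include <stdio.h>

    /* expand start..end into ints[], never writing more than nints entries */
    static int fill_range(int *ints, int nints, int start, int end)
    {
            int n = 0;

            while (start <= end && n < nints)      /* track remaining space */
                    ints[n++] = start++;
            return n;
    }

    int main(void)
    {
            int buf[8];
            int n = fill_range(buf, 8, 1, 100500); /* huge range, tiny buffer */

            printf("stored %d values, last = %d\n", n, buf[n - 1]);
            return 0;
    }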
Link: http://lkml.kernel.org/r/2613C75C-B04D-4BFF-82A6-12F97BA0F620@gmail.com
Signed-off-by: Ilya V. Matveychikov <matvejchikov@gmail.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jason Yan [Fri, 10 Mar 2017 03:27:23 +0000 (11:27 +0800)]
md: fix super_offset endianness in super_1_rdev_size_change
commit
3fb632e40d7667d8bedfabc28850ac06d5493f54 upstream.
The sb->super_offset is stored little-endian on disk, but rdev->sb_start is in host byte order, so fix this by adding cpu_to_le64.
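The fix boils down to converting the host-order value as it is stored into the on-disk field; a sketch of the assignment in super_1_rdev_size_change() (surrounding context assumed):

    sb->super_offset = cpu_to_le64(rdev->sb_start);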
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Shaohua Li [Thu, 23 Feb 2017 20:26:41 +0000 (12:26 -0800)]
md/raid10: submit bio directly to replacement disk
commit
6d399783e9d4e9bd44931501948059d24ad96ff8 upstream.
Commit 57c67df ("md/raid10: submit IO from originating thread instead of md thread") submits bio directly for normal disks but not for replacement disks. There is no reason we shouldn't do this for replacement disks as well.
Cc: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Liping Zhang [Sat, 25 Mar 2017 00:53:12 +0000 (08:53 +0800)]
netfilter: invoke synchronize_rcu after set the _hook_ to NULL
commit
3b7dabf029478bb80507a6c4500ca94132a2bc0b upstream.
Otherwise, another CPU may access the invalid pointer. For example:
CPU0                                CPU1
                                    - rcu_read_lock();
                                    - pfunc = _hook_;
_hook_ = NULL;                      -
mod unload                          -
                                    - pfunc(); // invalid, panic
                                    - rcu_read_unlock();
So we must call synchronize_rcu() to wait for the rcu readers to finish.
Also note, in nf_nat_snmp_basic_fini, synchronize_rcu() will be invoked by the later nf_conntrack_helper_unregister, but I'm inclined to add an explicit synchronize_rcu after setting the nf_nat_snmp_hook to NULL. Depending on such obscure assumptions is not a good idea.
Last, in nfnetlink_cttimeout, we use kfree_rcu to free the time object,
so in cttimeout_exit, invoking rcu_barrier() is not necessary at all,
remove it too.
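A sketch of the unload-side pattern this establishes (generic; the exact hook pointer differs per affected module, nf_nat_snmp_hook is used here because it is named above):

    /* module exit path of a netfilter extension providing a hook */
    RCU_INIT_POINTER(nf_nat_snmp_hook, NULL);
    synchronize_rcu();      /* wait for readers still running the old hook */
    /* only now may the module text go away */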
Signed-off-by: Liping Zhang <zlpnobody@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Biggers [Mon, 9 Oct 2017 19:43:20 +0000 (12:43 -0700)]
lib/digsig: fix dereference of NULL user_key_payload
commit
192cabd6a296cbc57b3d8c05c4c89d87fc102506 upstream.
digsig_verify() requests a user key, then accesses its payload.
However, a revoked key has a NULL payload, and we failed to check for
this. request_key() *does* skip revoked keys, but there is still a
window where the key can be revoked before we acquire its semaphore.
Fix it by checking for a NULL payload, treating it like a key which was
already revoked at the time it was requested.
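A sketch of the added check after taking the key semaphore (helper and label names assumed; they vary slightly across kernel versions):

    down_read(&key->sem);
    ukp = user_key_payload(key);
    if (!ukp) {
            /* key was revoked before we acquired its semaphore */
            err = -EKEYREVOKED;
            goto err1;
    }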
Fixes:
051dbb918c7f ("crypto: digital signature verification support")
Reviewed-by: James Morris <james.l.morris@oracle.com>
Cc: <stable@vger.kernel.org> [v3.3+]
Cc: Dmitry Kasatkin <dmitry.kasatkin@intel.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
NeilBrown [Thu, 31 Aug 2017 00:23:25 +0000 (10:23 +1000)]
md/bitmap: disable bitmap_resize for file-backed bitmaps.
commit
e8a27f836f165c26f867ece7f31eb5c811692319 upstream.
bitmap_resize() does not work for file-backed bitmaps.
The buffer_heads are allocated and initialized when
the bitmap is read from the file, but resize doesn't
read from the file, it loads from the internal bitmap.
When it comes time to write the new bitmap, the bh is
non-existent and we crash.
The common case when growing an array involves making the array larger,
and that normally means making the bitmap larger. Doing
that inside the kernel is possible, but would need more code.
It is probably easier to require people who use file-backed
bitmaps to remove them and re-add after a reshape.
So this patch disables the resizing of arrays which have
file-backed bitmaps. This is better than crashing.
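A sketch of the guard this adds near the top of bitmap_resize() (field names as in the md bitmap code of that era; exact placement assumed):

    if (bitmap->storage.file && !init) {
            pr_info("md: cannot resize file-based bitmap\n");
            return -EINVAL;
    }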
Reported-by: Zhilong Liu <zlliu@suse.com>
Fixes:
d60b479d177a ("md/bitmap: add bitmap_resize function to allow bitmap resizing.")
Cc: stable@vger.kernel.org (v3.5+).
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Biggers [Mon, 9 Oct 2017 19:37:49 +0000 (12:37 -0700)]
KEYS: encrypted: fix dereference of NULL user_key_payload
commit
13923d0865ca96312197962522e88bc0aedccd74 upstream.
A key of type "encrypted" references a "master key" which is used to
encrypt and decrypt the encrypted key's payload. However, when we
accessed the master key's payload, we failed to handle the case where
the master key has been revoked, which sets the payload pointer to NULL.
Note that request_key() *does* skip revoked keys, but there is still a
window where the key can be revoked before we acquire its semaphore.
Fix it by checking for a NULL payload, treating it like a key which was
already revoked at the time it was requested.
This was an issue for master keys of type "user" only. Master keys can
also be of type "trusted", but those cannot be revoked.
Fixes:
7e70cb497850 ("keys: add new key-type encrypted")
Reviewed-by: James Morris <james.l.morris@oracle.com>
Cc: <stable@vger.kernel.org> [v2.6.38+]
Cc: Mimi Zohar <zohar@linux.vnet.ibm.com>
Cc: David Safford <safford@us.ibm.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Biggers [Mon, 18 Sep 2017 18:37:03 +0000 (11:37 -0700)]
KEYS: prevent creating a different user's keyrings
commit
237bbd29f7a049d310d907f4b2716a7feef9abf3 upstream.
It was possible for an unprivileged user to create the user and user
session keyrings for another user. For example:
sudo -u '#3000' sh -c 'keyctl add keyring _uid.4000 "" @u
keyctl add keyring _uid_ses.4000 "" @u
sleep 15' &
sleep 1
sudo -u '#4000' keyctl describe @u
sudo -u '#4000' keyctl describe @us
This is problematic because these "fake" keyrings won't have the right
permissions. In particular, the user who created them first will own
them and will have full access to them via the possessor permissions,
which can be used to compromise the security of a user's keys:
-4: alswrv-----v------------ 3000 0 keyring: _uid.4000
-5: alswrv-----v------------ 3000 0 keyring: _uid_ses.4000
Fix it by marking user and user session keyrings with a flag
KEY_FLAG_UID_KEYRING. Then, when searching for a user or user session
keyring by name, skip all keyrings that don't have the flag set.
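A sketch of the lookup-side half of the fix: when the kernel resolves a user or user-session keyring by name, keyrings without the new flag are skipped (loop context in the by-name search assumed):

    if (!test_bit(KEY_FLAG_UID_KEYRING, &keyring->flags))
            continue;       /* ignore look-alikes created by other users */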
Fixes:
69664cf16af4 ("keys: don't generate user and user session keyrings unless they're accessed")
Cc: <stable@vger.kernel.org> [v2.6.26+]
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
[wt: adjust context]
Signed-off-by: Willy Tarreau <w@1wt.eu>
James Hogan [Wed, 31 May 2017 15:19:47 +0000 (16:19 +0100)]
MIPS: Fix mips_atomic_set() retry condition
commit
2ec420b26f7b6ff332393f0bb5a7d245f7ad87f0 upstream.
The inline asm retry check in the MIPS_ATOMIC_SET operation of the
sysmips system call has been backwards since commit
f1e39a4a616c ("MIPS:
Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler")
merged in v2.6.32, resulting in the non R10000_LLSC_WAR case retrying
until the operation was inatomic, before returning the new value that
was probably just written multiple times instead of the old value.
Invert the branch condition to fix that particular issue.
Fixes:
f1e39a4a616c ("MIPS: Rewrite sysmips(MIPS_ATOMIC_SET, ...) in C with inline assembler")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org
Patchwork: https://patchwork.linux-mips.org/patch/16148/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Konstantin Khlebnikov [Mon, 22 May 2017 02:36:23 +0000 (22:36 -0400)]
ext4: keep existing extra fields when inode expands
commit
887a9730614727c4fff7cb756711b190593fc1df upstream.
ext4_expand_extra_isize() should clear only the space between the old and new size.
Fixes:
6dd4ee7cab7e # v2.6.23
Cc: stable@vger.kernel.org
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Biggers [Mon, 9 Oct 2017 19:40:00 +0000 (12:40 -0700)]
FS-Cache: fix dereference of NULL user_key_payload
commit
d124b2c53c7bee6569d2a2d0b18b4a1afde00134 upstream.
When the file /proc/fs/fscache/objects (available with
CONFIG_FSCACHE_OBJECT_LIST=y) is opened, we request a user key with
description "fscache:objlist", then access its payload. However, a
revoked key has a NULL payload, and we failed to check for this.
request_key() *does* skip revoked keys, but there is still a window
where the key can be revoked before we access its payload.
Fix it by checking for a NULL payload, treating it like a key which was
already revoked at the time it was requested.
Fixes:
4fbf4291aa15 ("FS-Cache: Allow the current state of all objects to be dumped")
Reviewed-by: James Morris <james.l.morris@oracle.com>
Cc: <stable@vger.kernel.org> [v2.6.32+]
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
David Howells [Thu, 12 Oct 2017 15:00:41 +0000 (16:00 +0100)]
KEYS: don't let add_key() update an uninstantiated key
commit
60ff5b2f547af3828aebafd54daded44cfb0807a upstream.
Currently, when passed a key that already exists, add_key() will call the
key's ->update() method if such exists. But this is heavily broken in the
case where the key is uninstantiated because it doesn't call
__key_instantiate_and_link(). Consequently, it doesn't do most of the
things that are supposed to happen when the key is instantiated, such as
setting the instantiation state, clearing KEY_FLAG_USER_CONSTRUCT and
awakening tasks waiting on it, and incrementing key->user->nikeys.
It also never takes key_construction_mutex, which means that
->instantiate() can run concurrently with ->update() on the same key. In
the case of the "user" and "logon" key types this causes a memory leak, at
best. Maybe even worse, the ->update() methods of the "encrypted" and
"trusted" key types actually just dereference a NULL pointer when passed an
uninstantiated key.
Change key_create_or_update() to wait interruptibly for the key to finish
construction before continuing.
This patch only affects *uninstantiated* keys. For now we still allow a
negatively instantiated key to be updated (thereby positively
instantiating it), although that's broken too (the next patch fixes it)
and I'm not sure that anyone actually uses that functionality either.
Here is a simple reproducer for the bug using the "encrypted" key type
(requires CONFIG_ENCRYPTED_KEYS=y), though as noted above the bug
pertained to more than just the "encrypted" key type:
  #include <stdlib.h>
  #include <unistd.h>
  #include <keyutils.h>

  int main(void)
  {
          int ringid = keyctl_join_session_keyring(NULL);

          if (fork()) {
                  for (;;) {
                          const char payload[] = "update user:foo 32";

                          usleep(rand() % 10000);
                          add_key("encrypted", "desc", payload, sizeof(payload), ringid);
                          keyctl_clear(ringid);
                  }
          } else {
                  for (;;)
                          request_key("encrypted", "desc", "callout_info", ringid);
          }
  }
It causes:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
IP: encrypted_update+0xb0/0x170
PGD 7a178067 P4D 7a178067 PUD 77269067 PMD 0
PREEMPT SMP
CPU: 0 PID: 340 Comm: reproduce Tainted: G      D 4.14.0-rc1-00025-g428490e38b2e #796
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff8a467a39a340 task.stack: ffffb15c40770000
RIP: 0010:encrypted_update+0xb0/0x170
RSP: 0018:ffffb15c40773de8 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff8a467a275b00 RCX: 0000000000000000
RDX: 0000000000000005 RSI: ffff8a467a275b14 RDI: ffffffffb742f303
RBP: ffffb15c40773e20 R08: 0000000000000000 R09: ffff8a467a275b17
R10: 0000000000000020 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: ffff8a4677057180 R15: ffff8a467a275b0f
FS:  00007f5d7fb08700(0000) GS:ffff8a467f200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000018 CR3: 0000000077262005 CR4: 00000000001606f0
Call Trace:
 key_create_or_update+0x2bc/0x460
 SyS_add_key+0x10c/0x1d0
 entry_SYSCALL_64_fastpath+0x1f/0xbe
RIP: 0033:0x7f5d7f211259
RSP: 002b:00007ffed03904c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000f8
RAX: ffffffffffffffda RBX: 000000003b2a7955 RCX: 00007f5d7f211259
RDX: 00000000004009e4 RSI: 00000000004009ff RDI: 0000000000400a04
RBP: 0000000068db8bad R08: 000000003b2a7955 R09: 0000000000000004
R10: 000000000000001a R11: 0000000000000246 R12: 0000000000400868
R13: 00007ffed03905d0 R14: 0000000000000000 R15: 0000000000000000
Code: 77 28 e8 64 34 1f 00 45 31 c0 31 c9 48 8d 55 c8 48 89 df 48 8d 75 d0 e8 ff f9 ff ff 85 c0 41 89 c4 0f 88 84 00 00 00 4c 8b 7d c8 <49> 8b 75 18 4c 89 ff e8 24 f8 ff ff 85 c0 41 89 c4 78 6d 49 8b
RIP: encrypted_update+0xb0/0x170 RSP: ffffb15c40773de8
CR2: 0000000000000018
Cc: <stable@vger.kernel.org> # v2.6.12+
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Eric Biggers <ebiggers@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Johan Hovold [Wed, 4 Oct 2017 09:01:13 +0000 (11:01 +0200)]
USB: serial: console: fix use-after-free after failed setup
commit
299d7572e46f98534033a9e65973f13ad1ce9047 upstream.
Make sure to reset the USB-console port pointer when console setup fails
in order to avoid having the struct usb_serial be prematurely freed by
the console code when the device is later disconnected.
Fixes:
73e487fdb75f ("[PATCH] USB console: fix disconnection issues")
Cc: stable <stable@vger.kernel.org> # 2.6.18
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Andreas Gruenbacher [Mon, 9 Oct 2017 09:13:18 +0000 (11:13 +0200)]
direct-io: Prevent NULL pointer access in submit_page_section
commit
899f0429c7d3eed886406cd72182bee3b96aa1f9 upstream.
In the code added to function submit_page_section by commit b1058b981, sdio->bio can currently be NULL when calling dio_bio_submit. This then leads to a NULL pointer access in dio_bio_submit, so check for a NULL bio in submit_page_section before trying to submit it instead.
Fixes xfstest generic/250 on gfs2.
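The guard is small; a sketch of the added check in submit_page_section():

    if (sdio->bio)
            dio_bio_submit(dio, sdio);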
Cc: stable@vger.kernel.org # v3.10+
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Joerg Roedel [Fri, 13 Oct 2017 12:32:37 +0000 (14:32 +0200)]
iommu/amd: Finish TLB flush in amd_iommu_unmap()
commit
ce76353f169a6471542d999baf3d29b121dce9c0 upstream.
The function only sends the flush command to the IOMMU(s),
but does not wait for its completion when it returns. Fix
that.
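A sketch of the fix in amd_iommu_unmap(): keep issuing the flush, then wait for the IOMMU to signal completion (helper names internal to the AMD IOMMU driver, assumed here):

    domain_flush_tlb_pde(domain);
    domain_flush_complete(domain);  /* wait until the flush command finished */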
Fixes:
601367d76bd1 ('x86/amd-iommu: Remove iommu_flush_domain function')
Cc: stable@vger.kernel.org # >= 2.6.33
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Yoshihiro Shimoda [Wed, 27 Sep 2017 09:47:13 +0000 (18:47 +0900)]
usb: renesas_usbhs: fix usbhsf_fifo_clear() for RX direction
commit
0a2ce62b61f2c76d0213edf4e37aaf54a8ddf295 upstream.
This patch fixes an issue where usbhsf_fifo_clear() can cause a 10 msec delay if the pipe is an RX pipe and is empty, because the FRDY bit will never be set to 1 in that case.
Fixes:
e8d548d54968 ("usb: renesas_usbhs: fifo became independent from pipe.")
Cc: <stable@vger.kernel.org> # v3.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Yoshihiro Shimoda [Wed, 27 Sep 2017 09:47:12 +0000 (18:47 +0900)]
usb: renesas_usbhs: fix the BCLR setting condition for non-DCP pipe
commit
6124607acc88fffeaadf3aacfeb3cc1304c87387 upstream.
This patch fixes an issue where the driver sets the BCLR bit of the {C,Dn}FIFOCTR register to 1 even when it's a non-DCP pipe and the FRDY bit of the {C,Dn}FIFOCTR register is set to 1.
Fixes:
e8d548d54968 ("usb: renesas_usbhs: fifo became independent from pipe.")
Cc: <stable@vger.kernel.org> # v3.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steffen Maier [Fri, 28 Jul 2017 10:30:57 +0000 (12:30 +0200)]
scsi: zfcp: trace HBA FSF response by default on dismiss or timedout late response
commit
fdb7cee3b9e3c561502e58137a837341f10cbf8b upstream.
At the default trace level, we only trace unsuccessful events including
FSF responses.
zfcp_dbf_hba_fsf_response() only used protocol status and FSF status to
decide on an unsuccessful response. However, this is only one of multiple
possible sources determining a failed struct zfcp_fsf_req.
An FSF request can also "fail" if its response runs into an ERP timeout
or if it gets dismissed because a higher level recovery was triggered
[trace tags "erscf_1" or "erscf_2" in zfcp_erp_strategy_check_fsfreq()].
FSF requests with ERP timeout are:
FSF_QTCB_EXCHANGE_CONFIG_DATA, FSF_QTCB_EXCHANGE_PORT_DATA,
FSF_QTCB_OPEN_PORT_WITH_DID or FSF_QTCB_CLOSE_PORT or
FSF_QTCB_CLOSE_PHYSICAL_PORT for target ports,
FSF_QTCB_OPEN_LUN, FSF_QTCB_CLOSE_LUN.
One example is slow queue processing which can cause follow-on errors,
e.g. FSF_PORT_ALREADY_OPEN after FSF_QTCB_OPEN_PORT_WITH_DID timed out.
In order to see the root cause, we need to see late responses even if the
channel presented them successfully with FSF_PROT_GOOD and FSF_GOOD.
Example trace records formatted with zfcpdbf from the s390-tools package:
Timestamp : ...
Area : REC
Subarea : 00
Level : 1
Exception : -
CPU ID : ..
Caller : ...
Record ID : 1
Tag : fcegpf1
LUN : 0xffffffffffffffff
WWPN : 0x<WWPN>
D_ID : 0x00<D_ID>
Adapter status : 0x5400050b
Port status : 0x41200000
LUN status : 0x00000000
Ready count : 0x00000001
Running count : 0x...
ERP want : 0x02 ZFCP_ERP_ACTION_REOPEN_PORT
ERP need : 0x02 ZFCP_ERP_ACTION_REOPEN_PORT
|
Timestamp : ... 30 seconds later
Area : REC
Subarea : 00
Level : 1
Exception : -
CPU ID : ..
Caller : ...
Record ID : 2
Tag : erscf_2
LUN : 0xffffffffffffffff
WWPN : 0x<WWPN>
D_ID : 0x00<D_ID>
Adapter status : 0x5400050b
Port status : 0x41200000
LUN status : 0x00000000
Request ID : 0x<request_ID>
ERP status : 0x10000000 ZFCP_STATUS_ERP_TIMEDOUT
ERP step : 0x0800 ZFCP_ERP_STEP_PORT_OPENING
ERP action : 0x02 ZFCP_ERP_ACTION_REOPEN_PORT
ERP count : 0x00
|
Timestamp : ... later than previous record
Area : HBA
Subarea : 00
Level : 5 > default level => 3 <= default level
Exception : -
CPU ID : 00
Caller : ...
Record ID : 1
Tag : fs_qtcb => fs_rerr
Request ID : 0x<request_ID>
Request status : 0x00001010 ZFCP_STATUS_FSFREQ_DISMISSED
| ZFCP_STATUS_FSFREQ_CLEANUP
FSF cmnd : 0x00000005
FSF sequence no: 0x...
FSF issued : ... > 30 seconds ago
FSF stat : 0x00000000 FSF_GOOD
FSF stat qual :
00000000 00000000 00000000 00000000
Prot stat : 0x00000001 FSF_PROT_GOOD
Prot stat qual :
00000000 00000000 00000000 00000000
Port handle : 0x...
LUN handle : 0x00000000
QTCB log length: ...
QTCB log info : ...
In case of problems detecting that new responses are waiting on the input
queue, we sooner or later trigger adapter recovery due to an FSF request
timeout (trace tag "fsrth_1").
FSF requests with FSF request timeout are:
typically FSF_QTCB_ABORT_FCP_CMND; but theoretically also
FSF_QTCB_EXCHANGE_CONFIG_DATA or FSF_QTCB_EXCHANGE_PORT_DATA via sysfs,
FSF_QTCB_OPEN_PORT_WITH_DID or FSF_QTCB_CLOSE_PORT for WKA ports,
FSF_QTCB_FCP_CMND for task management function (LUN / target reset).
One or more pending requests can meanwhile have FSF_PROT_GOOD and FSF_GOOD
because the channel filled in the response via DMA into the request's QTCB.
In a theoretical case, inject code can create an erroneous FSF request
on purpose. If data router is enabled, it uses deferred error reporting.
A READ SCSI command can succeed with FSF_PROT_GOOD, FSF_GOOD, and
SAM_STAT_GOOD. But on writing the read data to host memory via DMA,
it can still fail, e.g. if an intentionally wrong scatter list does not
provide enough space. Rather than getting an unsuccessful response,
we get a QDIO activate check which in turn triggers adapter recovery.
One or more pending requests can meanwhile have FSF_PROT_GOOD and FSF_GOOD
because the channel filled in the response via DMA into the request's QTCB.
Example trace records formatted with zfcpdbf from the s390-tools package:
Timestamp : ...
Area : HBA
Subarea : 00
Level : 6 > default level => 3 <= default level
Exception : -
CPU ID : ..
Caller : ...
Record ID : 1
Tag : fs_norm => fs_rerr
Request ID : 0x<request_ID2>
Request status : 0x00001010 ZFCP_STATUS_FSFREQ_DISMISSED
| ZFCP_STATUS_FSFREQ_CLEANUP
FSF cmnd : 0x00000001
FSF sequence no: 0x...
FSF issued : ...
FSF stat : 0x00000000 FSF_GOOD
FSF stat qual :
00000000 00000000 00000000 00000000
Prot stat : 0x00000001 FSF_PROT_GOOD
Prot stat qual : ........ ........
00000000 00000000
Port handle : 0x...
LUN handle : 0x...
|
Timestamp : ...
Area : SCSI
Subarea : 00
Level : 3
Exception : -
CPU ID : ..
Caller : ...
Record ID : 1
Tag : rsl_err
Request ID : 0x<request_ID2>
SCSI ID : 0x...
SCSI LUN : 0x...
SCSI result : 0x000e0000 DID_TRANSPORT_DISRUPTED
SCSI retries : 0x00
SCSI allowed : 0x05
SCSI scribble : 0x<request_ID2>
SCSI opcode : 28... Read(10)
FCP rsp inf cod: 0x00
FCP rsp IU :
00000000 00000000 00000000 00000000
^^ SAM_STAT_GOOD
00000000 00000000
Only with luck in both above cases, we could see a follow-on trace record
of an unsuccessful event following a successful but late FSF response with
FSF_PROT_GOOD and FSF_GOOD. Typically this was the case for I/O requests
resulting in a SCSI trace record "rsl_err" with DID_TRANSPORT_DISRUPTED
[On ZFCP_STATUS_FSFREQ_DISMISSED, zfcp_fsf_protstatus_eval() sets
ZFCP_STATUS_FSFREQ_ERROR seen by the request handler functions as failure].
However, the reason for this follow-on trace was invisible because the
corresponding HBA trace record was missing at the default trace level
(by default hidden records with tags "fs_norm", "fs_qtcb", or "fs_open").
On adapter recovery, after we had shut down the QDIO queues, we perform
unsuccessful pseudo completions with flag ZFCP_STATUS_FSFREQ_DISMISSED
for each pending FSF request in zfcp_fsf_req_dismiss_all().
In order to find the root cause, we need to see all pseudo responses even
if the channel presented them successfully with FSF_PROT_GOOD and FSF_GOOD.
Therefore, check zfcp_fsf_req.status for ZFCP_STATUS_FSFREQ_DISMISSED
or ZFCP_STATUS_FSFREQ_ERROR and trace with a new tag "fs_rerr".
It does not matter that there are numerous places which set
ZFCP_STATUS_FSFREQ_ERROR after the location where we trace an FSF response
early. These cases are based on protocol status != FSF_PROT_GOOD or
== FSF_PROT_FSF_STATUS_PRESENTED and are thus already traced by default
as trace tag "fs_perr" or "fs_ferr" respectively.
NB: The trace record with tag "fssrh_1" for status read buffers on dismiss
all remains. zfcp_fsf_req_complete() handles this and returns early.
All other FSF request types are handled separately and as described above.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes:
8a36e4532ea1 ("[SCSI] zfcp: enhancement of zfcp debug features")
Fixes:
2e261af84cdb ("[SCSI] zfcp: Only collect FSF/HBA debug data for matching trace levels")
Cc: <stable@vger.kernel.org> #2.6.38+
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steffen Maier [Fri, 28 Jul 2017 10:30:56 +0000 (12:30 +0200)]
scsi: zfcp: fix payload with full FCP_RSP IU in SCSI trace records
commit
12c3e5754c8022a4f2fd1e9f00d19e99ee0d3cc1 upstream.
If the FCP_RSP IU has optional parts (FCP_SNS_INFO or FCP_RSP_INFO) and thus does not fit into the fsp_rsp field built into a SCSI trace record, trace the full FCP_RSP IU with all optional parts as payload record instead of just FCP_SNS_INFO as payload and a 1 byte RSP_INFO_CODE part of FCP_RSP_INFO built into the SCSI record.
That way we would also get the full FCP_SNS_INFO in case a
target would ever send more than
min(SCSI_SENSE_BUFFERSIZE==96, ZFCP_DBF_PAY_MAX_REC==256)==96.
The mandatory part of FCP_RSP IU is only 24 bytes.
PAYload costs at least one full PAY record of 256 bytes anyway.
We cap to the hardware response size which is only FSF_FCP_RSP_SIZE==128.
So we can just put the whole FCP_RSP IU with any optional parts into
PAYload similarly as we do for SAN PAY since v4.9 commit aceeffbb59bb ("zfcp: trace full payload of all SAN records (req,resp,iels)").
This does not cause any additional trace records wasting memory.
Decoded trace records were confusing because they showed a hard-coded
sense data length of 96 even if the FCP_RSP_IU field FCP_SNS_LEN showed
actually less.
Since the same commit, we set pl_len for SAN traces to the full length of a
request/response even if we cap the corresponding trace.
In contrast, here for SCSI traces we set pl_len to the pre-computed
length of FCP_RSP IU considering SNS_LEN or RSP_LEN if valid.
Nonetheless we trace a hardcoded payload of length FSF_FCP_RSP_SIZE==128
if there were optional parts.
This makes it easier for the zfcpdbf tool to format only the relevant
part of the long FCP_RSP IU buffer. And any trailing information is still
available in the payload trace record just in case.
Rename the payload record tag from "fcp_sns" to "fcp_riu" to make the new
content explicit to zfcpdbf which can then pick a suitable field name such
as "FCP rsp IU all:" instead of "Sense info :"
Also, the same zfcpdbf can still be backwards compatible with "fcp_sns".
Old example trace record before this fix, formatted with the tool zfcpdbf
from s390-tools:
Timestamp : ...
Area : SCSI
Subarea : 00
Level : 3
Exception : -
CPU id : ..
Caller : 0x...
Record id : 1
Tag : rsl_err
Request id : 0x<request_id>
SCSI ID : 0x...
SCSI LUN : 0x...
SCSI result : 0x00000002
SCSI retries : 0x00
SCSI allowed : 0x05
SCSI scribble : 0x<request_id>
SCSI opcode :
00000000 00000000 00000000 00000000
FCP rsp inf cod: 0x00
FCP rsp IU :
00000000 00000000 00000202 00000000
^^==FCP_SNS_LEN_VALID
00000020 00000000
^^^^^^^^==FCP_SNS_LEN==32
Sense len : 96 <==min(SCSI_SENSE_BUFFERSIZE,ZFCP_DBF_PAY_MAX_REC)
Sense info :
70000600 00000018 00000000 29000000
00000400 00000000 00000000 00000000
00000000 00000000 00000000 00000000<==superfluous
00000000 00000000 00000000 00000000<==superfluous
00000000 00000000 00000000 00000000<==superfluous
00000000 00000000 00000000 00000000<==superfluous
New example trace records with this fix:
Timestamp : ...
Area : SCSI
Subarea : 00
Level : 3
Exception : -
CPU ID : ..
Caller : 0x...
Record ID : 1
Tag : rsl_err
Request ID : 0x<request_id>
SCSI ID : 0x...
SCSI LUN : 0x...
SCSI result : 0x00000002
SCSI retries : 0x00
SCSI allowed : 0x03
SCSI scribble : 0x<request_id>
SCSI opcode :
a30c0112 00000000 02000000 00000000
FCP rsp inf cod: 0x00
FCP rsp IU :
00000000 00000000 00000a02 00000200
00000020 00000000
FCP rsp IU len : 56
FCP rsp IU all :
00000000 00000000 00000a02 00000200
^^=FCP_RESID_UNDER|FCP_SNS_LEN_VALID
00000020 00000000 70000500 00000018
^^^^^^^^==FCP_SNS_LEN
^^^^^^^^^^^^^^^^^
00000000 240000cb 00011100 00000000
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
00000000 00000000
^^^^^^^^^^^^^^^^^==FCP_SNS_INFO
Timestamp : ...
Area : SCSI
Subarea : 00
Level : 1
Exception : -
CPU ID : ..
Caller : 0x...
Record ID : 1
Tag : lr_okay
Request ID : 0x<request_id>
SCSI ID : 0x...
SCSI LUN : 0x...
SCSI result : 0x00000000
SCSI retries : 0x00
SCSI allowed : 0x05
SCSI scribble : 0x<request_id>
SCSI opcode : <CDB of unrelated SCSI command passed to eh handler>
FCP rsp inf cod: 0x00
FCP rsp IU :
00000000 00000000 00000100 00000000
00000000 00000008
FCP rsp IU len : 32
FCP rsp IU all :
00000000 00000000 00000100 00000000
^^==FCP_RSP_LEN_VALID
00000000 00000008 00000000 00000000
^^^^^^^^==FCP_RSP_LEN
^^^^^^^^^^^^^^^^^==FCP_RSP_INFO
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes:
250a1352b95e ("[SCSI] zfcp: Redesign of the debug tracing for SCSI records.")
Cc: <stable@vger.kernel.org> #2.6.38+
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steffen Maier [Fri, 28 Jul 2017 10:30:55 +0000 (12:30 +0200)]
scsi: zfcp: fix missing trace records for early returns in TMF eh handlers
commit
1a5d999ebfc7bfe28deb48931bb57faa8e4102b6 upstream.
For problem determination we need to see that we were in scsi_eh
as well as whether and why we were successful or not.
The following commits introduced new early returns without adding
a trace record:
v2.6.35 commit a1dbfddd02d2 ("[SCSI] zfcp: Pass return code from fc_block_scsi_eh to scsi eh") on fc_block_scsi_eh() returning != 0 which is FAST_IO_FAIL,
v2.6.30 commit 63caf367e1c9 ("[SCSI] zfcp: Improve reliability of SCSI eh handlers in zfcp") on not having gotten an FSF request after the maximum number of retry attempts and thus could not issue a TMF and has to return FAILED.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes:
a1dbfddd02d2 ("[SCSI] zfcp: Pass return code from fc_block_scsi_eh to scsi eh")
Fixes:
63caf367e1c9 ("[SCSI] zfcp: Improve reliability of SCSI eh handlers in zfcp")
Cc: <stable@vger.kernel.org> #2.6.38+
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Benjamin Block [Fri, 28 Jul 2017 10:30:52 +0000 (12:30 +0200)]
scsi: zfcp: add handling for FCP_RESID_OVER to the fcp ingress path
commit
a099b7b1fc1f0418ab8d79ecf98153e1e134656e upstream.
Up until now zfcp would just ignore the FCP_RESID_OVER flag in the FCP response IU. When this flag is set, it is possible, with regard to the FCP standard, that the storage server processes the command normally up to the point where data is missing and simply ignores the rest.
In this case no CHECK CONDITION would be set, and because we ignored the FCP_RESID_OVER flag we ended up with at least data loss or even data corruption as a follow-up error, depending on how the applications/layers on top behave. To prevent this, we now set the host byte of the corresponding scsi_cmnd to DID_ERROR.
Other storage-behaviors, where the same condition results in a CHECK
CONDITION set in the answer, don't need to be changed as they are
handled in the mid-layer already.
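A sketch of the added handling in the FCP response evaluation path (struct and flag names from the SCSI FCP definitions; the exact spot in zfcp is assumed):

    if (unlikely(fcp_rsp->resp.fr_flags & FCP_RESID_OVER))
            set_host_byte(scsi_cmnd, DID_ERROR);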
Following is an example trace record decoded with zfcpdbf from the
s390-tools package. We forcefully injected a fc_dl which is one byte too
small:
Timestamp : ...
Area : SCSI
Subarea : 00
Level : 3
Exception : -
CPU ID : ..
Caller : 0x...
Record ID : 1
Tag : rsl_err
Request ID : 0x...
SCSI ID : 0x...
SCSI LUN : 0x...
SCSI result : 0x00070000
^^DID_ERROR
SCSI retries : 0x..
SCSI allowed : 0x..
SCSI scribble : 0x...
SCSI opcode :
2a000000 00000000 08000000 00000000
FCP rsp inf cod: 0x00
FCP rsp IU :
00000000 00000000 00000400 00000001
^^fr_flags==FCP_RESID_OVER
^^fr_status==SAM_STAT_GOOD
^^^^^^^^fr_resid
00000000 00000000
As of now, we don't actively handle the possibility that a response IU has both flags - FCP_RESID_OVER and FCP_RESID_UNDER - set at once.
Reported-by: Luke M. Hopkins <lmhopkin@us.ibm.com>
Reviewed-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes:
553448f6c483 ("[SCSI] zfcp: Message cleanup")
Fixes:
ea127f975424 ("[PATCH] s390 (7/7): zfcp host adapter.") (tglx/history.git)
Cc: <stable@vger.kernel.org> #2.6.33+
Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Steffen Maier [Fri, 28 Jul 2017 10:30:51 +0000 (12:30 +0200)]
scsi: zfcp: fix queuecommand for scsi_eh commands when DIX enabled
commit
71b8e45da51a7b64a23378221c0a5868bd79da4f upstream.
Since commit
db007fc5e20c ("[SCSI] Command protection operation"),
scsi_eh_prep_cmnd() saves scmd->prot_op and temporarily resets it to
SCSI_PROT_NORMAL.
Other FCP LLDDs such as qla2xxx and lpfc shield their queuecommand()
to only access any of scsi_prot_sg...() if
(scsi_get_prot_op(cmd) != SCSI_PROT_NORMAL).
Do the same thing for zfcp, which introduced DIX support with
commit
ef3eb71d8ba4 ("[SCSI] zfcp: Introduce experimental support for
DIF/DIX").
Otherwise, TUR SCSI commands as part of scsi_eh likely fail in zfcp,
because the regular SCSI command with DIX protection data, that scsi_eh
re-uses in scsi_send_eh_cmnd(), of course still has
(scsi_prot_sg_count() != 0) and so zfcp sends down bogus requests to the
FCP channel hardware.
This causes scsi_eh_test_devices() to have (finish_cmds == 0)
[not SCSI device is online or not scsi_eh_tur() failed]
so regular SCSI commands, that caused / were affected by scsi_eh,
are moved to work_q and scsi_eh_test_devices() itself returns false.
In turn, it unnecessarily escalates in our case in scsi_eh_ready_devs()
beyond host reset to finally scsi_eh_offline_sdevs()
which sets affected SCSI devices offline with the following kernel message:
"kernel: sd H:0:T:L: Device offlined - not ready after error recovery"
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Fixes:
ef3eb71d8ba4 ("[SCSI] zfcp: Introduce experimental support for DIF/DIX")
Cc: <stable@vger.kernel.org> #2.6.36+
Reviewed-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Benjamin Block <bblock@linux.vnet.ibm.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Mateusz Jurczyk [Wed, 7 Jun 2017 10:26:49 +0000 (12:26 +0200)]
fuse: initialize the flock flag in fuse_file on allocation
commit
68227c03cba84a24faf8a7277d2b1a03c8959c2c upstream.
Before the patch, the flock flag could remain uninitialized for the
lifespan of the fuse_file allocation. Unless set to true in
fuse_file_flock(), it would remain in an indeterminate state until read in
an if statement in fuse_release_common(). This could consequently lead to
taking an unexpected branch in the code.
The bug was discovered by a runtime instrumentation designed to detect use
of uninitialized memory in the kernel.
Signed-off-by: Mateusz Jurczyk <mjurczyk@google.com>
Fixes:
37fb3a30b462 ("fuse: fix flock")
Cc: <stable@vger.kernel.org> # v3.1+
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Nicholas Bellinger [Mon, 27 Mar 2017 23:12:43 +0000 (16:12 -0700)]
target: Avoid mappedlun symlink creation during lun shutdown
commit
49cb77e297dc611a1b795cfeb79452b3002bd331 upstream.
This patch closes a race between se_lun deletion during configfs
unlink in target_fabric_port_unlink() -> core_dev_del_lun()
-> core_tpg_remove_lun(), when transport_clear_lun_ref() blocks
waiting for percpu_ref RCU grace period to finish, but a new
NodeACL mappedlun is added before the RCU grace period has
completed.
This can happen in target_fabric_mappedlun_link() because it
only checks for se_lun->lun_se_dev, which is not cleared until
after transport_clear_lun_ref() percpu_ref RCU grace period
finishes.
This bug originally manifested as NULL pointer dereference
OOPsen in target_stat_scsi_att_intr_port_show_attr_dev() on
v4.1.y code, because it dereferences lun->lun_se_dev without
a explicit NULL pointer check.
In post v4.1 code with target-core RCU conversion, the code
in target_stat_scsi_att_intr_port_show_attr_dev() no longer
uses se_lun->lun_se_dev, but the same race still exists.
To address the bug, go ahead and set se_lun->lun_shutdown as
early as possible in core_tpg_remove_lun(), and ensure new
NodeACL mappedlun creation in target_fabric_mappedlun_link()
fails during se_lun shutdown.
Reported-by: James Shen <jcs@datera.io>
Cc: James Shen <jcs@datera.io>
Tested-by: James Shen <jcs@datera.io>
Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Prabhakar Lad [Thu, 20 Jul 2017 12:02:09 +0000 (08:02 -0400)]
media: platform: davinci: return -EINVAL for VPFE_CMD_S_CCDC_RAW_PARAMS ioctl
commit
da05d52d2f0f6bd61094a0cd045fed94bf7d673a upstream.
This patch makes sure the VPFE_CMD_S_CCDC_RAW_PARAMS ioctl no longer works for the vpfe_capture driver, with a minimal patch suitable for backporting.
- This ioctl was never in public api and was only defined in kernel header.
- The function set_params constantly mixes up pointers and phys_addr_t
numbers.
- This is part of a 'VPFE_CMD_S_CCDC_RAW_PARAMS' ioctl command that is
described as an 'experimental ioctl that will change in future kernels'.
- The code to allocate the table never gets called after we copy_from_user
the user input over the kernel settings, and then compare them
for inequality.
- We then go on to use an address provided by user space as both the
__user pointer for input and pass it through phys_to_virt to come up
with a kernel pointer to copy the data to. This looks like a trivially
exploitable root hole.
Due to these reasons we make sure this ioctl now returns -EINVAL and backport
this patch as far as possible.
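A sketch of the minimal change in the vpfe_capture ioctl handler (warning text and surrounding handler layout assumed):

    case VPFE_CMD_S_CCDC_RAW_PARAMS:
            ret = -EINVAL;
            v4l2_warn(&vpfe_dev->v4l2_dev,
                      "VPFE_CMD_S_CCDC_RAW_PARAMS not supported\n");
            break;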
Fixes:
5f15fbb68fd7 ("V4L/DVB (12251): v4l: dm644x ccdc module for vpfe capture driver")
Signed-off-by: Lad, Prabhakar <prabhakar.csengg@gmail.com>
Cc: <stable@vger.kernel.org> # for v3.7 and up
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jerry Lee [Sun, 6 Aug 2017 05:18:31 +0000 (01:18 -0400)]
ext4: fix overflow caused by missing cast in ext4_resize_fs()
commit
aec51758ce10a9c847a62a48a168f8c804c6e053 upstream.
On a 32-bit platform, the value of n_blocks_count may be wrong while the file system is resized to a size larger than 2^32 blocks. This may cause the superblock to be corrupted with a zero blocks count.
Fixes:
1c6bd7173d66
Signed-off-by: Jerry Lee <jerrylee@qnap.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org # 3.7+
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jan Kara [Sat, 5 Aug 2017 21:43:24 +0000 (17:43 -0400)]
ext4: fix SEEK_HOLE/SEEK_DATA for blocksize < pagesize
commit
fcf5ea10992fbac3c7473a1db33d56a139333cd1 upstream.
ext4_find_unwritten_pgoff() does not properly handle a situation when
starting index is in the middle of a page and blocksize < pagesize. The
following command shows the bug on filesystem with 1k blocksize:
xfs_io -f -c "falloc 0 4k" \
-c "pwrite 1k 1k" \
-c "pwrite 3k 1k" \
-c "seek -a -r 0" foo
In this example, neither lseek(fd, 1024, SEEK_HOLE) nor lseek(fd, 2048,
SEEK_DATA) will return the correct result.
Fix the problem by neglecting buffers in a page before starting offset.
Reported-by: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Jan Kara <jack@suse.cz>
CC: stable@vger.kernel.org # 3.8+
Signed-off-by: Willy Tarreau <w@1wt.eu>
Tejun Heo [Tue, 18 Jul 2017 22:41:52 +0000 (18:41 -0400)]
workqueue: restore WQ_UNBOUND/max_active==1 to be ordered
commit
5c0338c68706be53b3dc472e4308961c36e4ece1 upstream.
The combination of WQ_UNBOUND and max_active == 1 used to imply
ordered execution. After NUMA affinity
4c16bd327c74 ("workqueue:
implement NUMA affinity for unbound workqueues"), this is no longer
true due to per-node worker pools.
While the right way to create an ordered workqueue is
alloc_ordered_workqueue(), the documentation has been misleading for a
long time and people do use WQ_UNBOUND and max_active == 1 for ordered
workqueues which can lead to subtle bugs which are very difficult to
trigger.
It's unlikely that we'd see noticeable performance impact by enforcing
ordering on WQ_UNBOUND / max_active == 1 workqueues. Let's
automatically set __WQ_ORDERED for those workqueues.
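A sketch of the enforcement in the workqueue allocation path (placement assumed; alloc_ordered_workqueue() remains the preferred API for new code):

    /* WQ_UNBOUND + max_active == 1 historically meant "ordered" */
    if ((flags & WQ_UNBOUND) && max_active == 1)
            flags |= __WQ_ORDERED;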
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Christoph Hellwig <hch@infradead.org>
Reported-by: Alexei Potashnik <alexei@purestorage.com>
Fixes:
4c16bd327c74 ("workqueue: implement NUMA affinity for unbound workqueues")
Cc: stable@vger.kernel.org # v3.10+
Signed-off-by: Willy Tarreau <w@1wt.eu>
Dan Carpenter [Wed, 19 Jul 2017 10:06:41 +0000 (13:06 +0300)]
libata: array underflow in ata_find_dev()
commit
59a5e266c3f5c1567508888dd61a45b86daed0fa upstream.
My static checker complains that "devno" can be negative, meaning that we read before the start of the loop. I've looked at the code, and I think the warning is right. This comes from /proc so it's root only, or it would be quite a serious bug. The call tree looks like this:
proc_scsi_write() <- gets id and channel from simple_strtoul()
-> scsi_add_single_device() <- calls shost->transportt->user_scan()
-> ata_scsi_user_scan()
-> ata_find_dev()
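A sketch of the bound check added in ata_find_dev() (surrounding PMP/non-PMP logic simplified):

    if (devno < 0 || devno >= ata_link_max_devices(&ap->link))
            return NULL;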
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: stable@vger.kernel.org # all versions at this point
Signed-off-by: Willy Tarreau <w@1wt.eu>
Maciej W. Rozycki [Thu, 15 Jun 2017 23:05:08 +0000 (00:05 +0100)]
MIPS: math-emu: Prevent wrong ISA mode instruction emulation
commit
13769ebad0c42738831787e27c7c7f982e7da579 upstream.
Terminate FPU emulation immediately whenever an ISA mode switch has been
observed. This is so that we do not interpret machine code in the wrong
mode, for example when a regular MIPS FPU instruction has been placed in
a delay slot of a jump that switches into the MIPS16 mode, as with the
following code (taken from a GCC test suite case):
00400650 <set_fast_math>:
  400650:	3c020100 	lui	v0,0x100
  400654:	03e00008 	jr	ra
  400658:	44c2f800 	ctc1	v0,c1_fcsr
  40065c:	00000000 	nop
[...]
004012d0 <__libc_csu_init>:
4012d0: f000 6a02 li v0,2
4012d4: f150 0b1c la v1,3f9430 <_DYNAMIC-0x6df0>
4012d8: f400 3240 sll v0,16
4012dc: e269 addu v0,v1
4012de: 659a move gp,v0
4012e0: f00c 64f6 save a0-a2,48,ra,s0-s1
4012e4: 673c move s1,gp
4012e6: f010 9978 lw v1,-32744(s1)
4012ea: d204 sw v0,16(sp)
4012ec: eb40 jalr v1
4012ee: 653b move t9,v1
4012f0: f010 997c lw v1,-32740(s1)
4012f4: f030 9920 lw s1,-32736(s1)
4012f8: e32f subu v1,s1
4012fa: 326b sra v0,v1,2
4012fc: d206 sw v0,24(sp)
4012fe: 220c beqz v0,401318 <__libc_csu_init+0x48>
401300: 6800 li s0,0
401302: 99e0 lw a3,0(s1)
401304: 4801 addiu s0,1
401306: 960e lw a2,56(sp)
401308: 4904 addiu s1,4
40130a: 950d lw a1,52(sp)
40130c: 940c lw a0,48(sp)
40130e: ef40 jalr a3
401310: 653f move t9,a3
401312: 9206 lw v0,24(sp)
401314: ea0a cmp v0,s0
401316: 61f5 btnez 401302 <__libc_csu_init+0x32>
401318: 6476 restore 48,ra,s0-s1
40131a: e8a0 jrc ra
Here `set_fast_math' is called from `40130e' (`40130f' with the ISA bit)
and emulation triggers for the CTC1 instruction. As it is in a jump
delay slot emulation continues from `401312' (`401313' with the ISA
bit). However we have no path to handle MIPS16 FPU code emulation,
because there are no MIPS16 FPU instructions. So the default emulation
path is taken, interpreting a 32-bit word fetched by `get_user' from
`401313' as a regular MIPS instruction, which is:
  401313:	f5ea0a92 	sdc1	$f10,2706(t7)
This makes the FPU emulator proceed with the supposed SDC1 instruction
and consequently makes the program considered here terminate with
SIGSEGV.
A similar although less severe issue exists with pure-microMIPS
processors in the case where similarly an FPU instruction is emulated in
a delay slot of a register jump that (incorrectly) switches into the
regular MIPS mode. A subsequent instruction fetch from the jump's
target is supposed to cause an Address Error exception, however instead
we proceed with regular MIPS FPU emulation.
For simplicity then, always terminate the emulation loop whenever a mode
change is detected, denoted by an ISA mode bit flip. As from commit
377cb1b6c16a ("MIPS: Disable MIPS16/microMIPS crap for platforms not
supporting these ASEs.") the result of `get_isa16_mode' can be hardcoded
to 0, so we need to examine the ISA mode bit by hand.
This complements commit
102cedc32a6e ("MIPS: microMIPS: Floating point
support.") which added JALX decoding to FPU emulation.
Fixes:
102cedc32a6e ("MIPS: microMIPS: Floating point support.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/16393/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Maciej W. Rozycki [Thu, 15 Jun 2017 23:07:34 +0000 (00:07 +0100)]
MIPS: Fix unaligned PC interpretation in `compute_return_epc'
commit
11a3799dbeb620bf0400b1fda5cc2c6bea55f20a upstream.
Fix a regression introduced with commit
fb6883e5809c ("MIPS: microMIPS:
Support handling of delay slots.") and defer to `__compute_return_epc'
if the ISA bit is set in EPC with non-MIPS16, non-microMIPS hardware,
which will then arrange for a SIGBUS due to an unaligned instruction
reference. Returning EPC here is never correct as the API defines this
function's result to be either a negative error code on failure or one
of 0 and BRANCH_LIKELY_TAKEN on success.
Fixes:
fb6883e5809c ("MIPS: microMIPS: Support handling of delay slots.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/16395/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Maciej W. Rozycki [Thu, 15 Jun 2017 23:06:19 +0000 (00:06 +0100)]
MIPS: Actually decode JALX in `__compute_return_epc_for_insn'
commit
a9db101b735a9d49295326ae41f610f6da62b08c upstream.
Complement commit
fb6883e5809c ("MIPS: microMIPS: Support handling of
delay slots.") and actually decode the regular MIPS JALX major
instruction opcode, the handling of which has been added with the said
commit for EPC calculation in `__compute_return_epc_for_insn'.
Fixes:
fb6883e5809c ("MIPS: microMIPS: Support handling of delay slots.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 3.9+
Patchwork: https://patchwork.linux-mips.org/patch/16394/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Yoshihiro Shimoda [Wed, 19 Jul 2017 07:16:54 +0000 (16:16 +0900)]
usb: renesas_usbhs: fix usbhsc_resume() for !USBHSF_RUNTIME_PWCTRL
commit
59a0879a0e17b2e43ecdc5e3299da85b8410d7ce upstream.
This patch fixes an issue that some registers may be not initialized
after resume if the USBHSF_RUNTIME_PWCTRL is not set. Otherwise,
if a cable is not connected, the driver will not enable INTENB0.VBSE
after resume. And then, the driver cannot detect the VBUS.
Fixes:
ca8a282a5373 ("usb: gadget: renesas_usbhs: add suspend/resume support")
Cc: <stable@vger.kernel.org> # v3.2+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Oliver O'Halloran [Thu, 6 Jul 2017 08:46:43 +0000 (18:46 +1000)]
powerpc/asm: Mark cr0 as clobbered in mftb()
commit
2400fd822f467cb4c886c879d8ad99feac9cf319 upstream.
The workaround for the CELL timebase bug does not correctly mark cr0 as
being clobbered. This means GCC doesn't know that the asm block changes cr0 and
might leave the result of an unrelated comparison in cr0 across the block, which
we then trash, leading to basically random behaviour.
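A sketch of what the fix amounts to, simplified from the real mftb() macro: the CELL workaround's compare must list cr0 in the clobbers so GCC knows the field is trashed.

    unsigned long rval;

    asm volatile(
            "90:    mftb %0;\n"
            "       cmpwi %0,0;\n"  /* CELL workaround clobbers cr0 */
            "       beq- 90b;\n"
            : "=r" (rval)
            :
            : "cr0");               /* the previously missing clobber */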
Fixes:
859deea949c3 ("[POWERPC] Cell timebase bug workaround")
Cc: stable@vger.kernel.org # v2.6.19+
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
[mpe: Tweak change log and flag for stable]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Anton Blanchard [Wed, 14 Jun 2017 23:46:39 +0000 (09:46 +1000)]
powerpc: Fix emulation of mfocrf in emulate_step()
commit
64e756c55aa46fc18fd53e8f3598b73b528d8637 upstream.
From POWER4 onwards, mfocrf() only places the specified CR field into
the destination GPR, and the rest of it is set to 0. The PowerPC AS
from version 3.0 now requires this behaviour.
The emulation code currently puts the entire CR into the destination GPR.
Fix it.
Fixes:
6888199f7fe5 ("[POWERPC] Emulate more instructions in software")
Cc: stable@vger.kernel.org # v2.6.22+
Signed-off-by: Anton Blanchard <anton@samba.org>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Michael Ellerman [Tue, 11 Jul 2017 12:10:54 +0000 (22:10 +1000)]
powerpc/64: Fix atomic64_inc_not_zero() to return an int
commit
01e6a61aceb82e13bec29502a8eb70d9574f97ad upstream.
Although it's not documented anywhere, there is an expectation that
atomic64_inc_not_zero() returns a result which fits in an int. This is
the behaviour implemented on all arches except powerpc.
This has caused at least one bug in practice, in the percpu-refcount
code, where the long result from our atomic64_inc_not_zero() was
truncated to an int leading to lost references and stuck systems. That
was worked around in that code in commit
966d2b04e070 ("percpu-refcount:
fix reference leak during percpu-atomic transition").
To the best of my grepping abilities there are no other callers
in-tree which truncate the value, but we should fix it anyway. Because
the breakage is subtle and potentially very harmful I'm also tagging
it for stable.
Code generation is largely unaffected because in most cases the
callers are just using the result for a test anyway. In particular the
case of fget() that was mentioned in commit a6cf7ed5119f ("powerpc/atomic: Implement atomic*_inc_not_zero") generates exactly
the same code.
Fixes:
a6cf7ed5119f ("powerpc/atomic: Implement atomic*_inc_not_zero")
Cc: stable@vger.kernel.org # v3.4
Noticed-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Krzysztof Kozlowski [Wed, 28 Jun 2017 14:56:18 +0000 (16:56 +0200)]
PM / Domains: Fix unsafe iteration over modified list of device links
commit
c6e83cac3eda5f7dd32ee1453df2f7abb5c6cd46 upstream.
pm_genpd_remove_subdomain() iterates over domain's master_links list and
removes matching element thus it has to use safe version of list
iteration.
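A sketch of the change in pm_genpd_remove_subdomain(): use the _safe iterator because the loop may delete the element it stands on (list and field names as in the genpd code of that era; surrounding checks omitted):

    struct gpd_link *link, *l;

    list_for_each_entry_safe(link, l, &genpd->master_links, master_node) {
            if (link->slave != subdomain)
                    continue;
            list_del(&link->master_node);
            list_del(&link->slave_node);
            kfree(link);
    }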
Fixes:
f721889ff65a ("PM / Domains: Support for generic I/O PM domains (v8)")
Cc: 3.1+ <stable@vger.kernel.org> # 3.1+
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Martin Hicks [Tue, 2 May 2017 13:38:35 +0000 (09:38 -0400)]
crypto: talitos - Extend max key length for SHA384/512-HMAC and AEAD
commit
03d2c5114c95797c0aa7d9f463348b171a274fd4 upstream.
An updated patch that also handles the additional key length requirements
for the AEAD algorithms.
The max keysize is not 96. For SHA384/512 it's 128, and for the AEAD
algorithms it's longer still. Extend the max keysize for the
AEAD size for AES256 + HMAC(SHA512).
Cc: <stable@vger.kernel.org> # 3.6+
Fixes:
357fb60502ede ("crypto: talitos - add sha224, sha384 and sha512 to existing AEAD algorithms")
Signed-off-by: Martin Hicks <mort@bork.org>
Acked-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Adam Borowski [Sat, 3 Jun 2017 07:35:06 +0000 (09:35 +0200)]
vt: fix unchecked __put_user() in tioclinux ioctls
commit
6987dc8a70976561d22450b5858fc9767788cc1c upstream.
Only read access is checked before this call.
Actually, at the moment this is not an issue, as every in-tree arch does
the same manual checks for VERIFY_READ vs VERIFY_WRITE, relying on the MMU
to tell them apart, but this wasn't the case in the past and may happen
again on some odd arch in the future.
If anyone cares about 3.7 and earlier, this is a security hole (untested)
on real 80386 CPUs.
Signed-off-by: Adam Borowski <kilobyte@angband.pl>
CC: stable@vger.kernel.org # v3.7-
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Arend van Spriel [Fri, 7 Jul 2017 20:09:06 +0000 (21:09 +0100)]
brcmfmac: fix possible buffer overflow in brcmf_cfg80211_mgmt_tx()
commit
8f44c9a41386729fea410e688959ddaa9d51be7c upstream.
The lower level nl80211 code in cfg80211 ensures that "len" is between
25 and NL80211_ATTR_FRAME (2304). We subtract DOT11_MGMT_HDR_LEN (24) from
"len" so thats's max of 2280. However, the action_frame->data[] buffer is
only BRCMF_FIL_ACTION_FRAME_SIZE (1800) bytes long so this memcpy() can
overflow.
memcpy(action_frame->data, &buf[DOT11_MGMT_HDR_LEN],
le16_to_cpu(action_frame->len));
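Not the literal patch; a hedged sketch of the kind of bounds check the commit message implies:
	/* Illustrative only: validate the frame length against the destination
	 * buffer before copying.
	 */
	u16 mgmt_len = le16_to_cpu(action_frame->len);

	if (mgmt_len > BRCMF_FIL_ACTION_FRAME_SIZE)
		return -EINVAL;
	memcpy(action_frame->data, &buf[DOT11_MGMT_HDR_LEN], mgmt_len);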
Cc: stable@vger.kernel.org # 3.9.x
Fixes:
18e2f61db3b70 ("brcmfmac: P2P action frame tx.")
Reported-by: "freenerguo(郭大兴)" <freenerguo@tencent.com>
Signed-off-by: Arend van Spriel <arend.vanspriel@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[wt: s/cfg80211.c/wl_cfg80211.c in 3.10]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Ian Abbott [Fri, 16 Jun 2017 18:35:34 +0000 (19:35 +0100)]
staging: comedi: fix clean-up of comedi_class in comedi_init()
commit
a9332e9ad09c2644c99058fcf6ae2f355e93ce74 upstream.
There is a clean-up bug in the core comedi module initialization
function, `comedi_init()`. If the `comedi_num_legacy_minors` module
parameter is non-zero (and valid), it creates that many "legacy" devices
and registers them in SysFS. A failure causes the function to clean up
and return an error. Unfortunately, it fails to destroy the "comedi"
class that was created earlier. Fix it by adding a call to
`class_destroy(comedi_class)` at the appropriate place in the clean-up
sequence.
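Not the exact patch; a hedged sketch of the error-path ordering, with a hypothetical helper name standing in for the legacy-minor registration:
	/* comedi_init() error path sketch: undo earlier steps in reverse order;
	 * the fix adds the class_destroy() before bailing out.
	 */
	comedi_class = class_create(THIS_MODULE, "comedi");
	if (IS_ERR(comedi_class))
		goto out_unregister_chrdev;	/* pre-existing cleanup */

	retval = create_legacy_minors();	/* hypothetical helper */
	if (retval) {
		class_destroy(comedi_class);	/* the missing clean-up */
		goto out_unregister_chrdev;
	}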
Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
Cc: <stable@vger.kernel.org> # 3.9+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Naveen N. Rao [Thu, 1 Jun 2017 10:48:15 +0000 (16:18 +0530)]
powerpc/kprobes: Pause function_graph tracing during jprobes handling
commit
a9f8553e935f26cb5447f67e280946b0923cd2dc upstream.
This fixes a crash when function_graph and jprobes are used together.
This is essentially commit
237d28db036e ("ftrace/jprobes/x86: Fix
conflict between jprobes and function graph tracing"), but for powerpc.
Jprobes breaks function_graph tracing since the jprobe hook needs to use
jprobe_return(), which never returns back to the hook, but instead to
the original jprobe'd function. The solution is to momentarily pause
function_graph tracing before invoking the jprobe hook and re-enable it
when returning back to the original jprobe'd function.
Fixes:
6794c78243bf ("powerpc64: port of the function graph tracer")
Cc: stable@vger.kernel.org # v2.6.30+
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Tomasz Wilczyński [Sun, 11 Jun 2017 08:28:39 +0000 (17:28 +0900)]
cpufreq: conservative: Allow down_threshold to take values from 1 to 10
commit
b8e11f7d2791bd9320be1c6e772a60b2aa093e45 upstream.
Commit
27ed3cd2ebf4 (cpufreq: conservative: Fix the logic in frequency
decrease checking) removed the 10 point subtraction when comparing the
load against down_threshold but did not remove the related limit for the
down_threshold value. As a result, down_threshold lower than 11 is not
allowed even though values from 1 to 10 do work correctly too. The
comment ("cannot be lower than 11 otherwise freq will not fall") also
does not apply after removing the subtraction.
For this reason, allow down_threshold to take any value from 1 to 99
and fix the related comment.
Fixes:
27ed3cd2ebf4 (cpufreq: conservative: Fix the logic in frequency decrease checking)
Signed-off-by: Tomasz Wilczyński <twilczynski@naver.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 3.10+ <stable@vger.kernel.org> # 3.10+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Michael Thalmeier [Thu, 18 May 2017 14:14:14 +0000 (16:14 +0200)]
usb: chipidea: debug: check before accessing ci_role
commit
0340ff83cd4475261e7474033a381bc125b45244 upstream.
ci_role BUGs when the role is >= CI_ROLE_END.
Cc: stable@vger.kernel.org #v3.10+
Signed-off-by: Michael Thalmeier <michael.thalmeier@hale.at>
Signed-off-by: Peter Chen <peter.chen@nxp.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Biggers [Thu, 8 Jun 2017 13:48:40 +0000 (14:48 +0100)]
KEYS: fix dereferencing NULL payload with nonzero length
commit
5649645d725c73df4302428ee4e02c869248b4c5 upstream.
sys_add_key() and the KEYCTL_UPDATE operation of sys_keyctl() allowed a
NULL payload with nonzero length to be passed to the key type's
->preparse(), ->instantiate(), and/or ->update() methods. Various key
types including asymmetric, cifs.idmap, cifs.spnego, and pkcs7_test did
not handle this case, allowing an unprivileged user to trivially cause a
NULL pointer dereference (kernel oops) if one of these key types was
present. Fix it by doing the copy_from_user() when 'plen' is nonzero
rather than when '_payload' is non-NULL, causing the syscall to fail
with EFAULT as expected when an invalid buffer is specified.
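Not verbatim; a hedged, simplified sketch of keying the copy on 'plen' instead of on '_payload':
	/* Copy the payload only when a nonzero length was supplied; a NULL
	 * pointer with plen > 0 now fails with -EFAULT in copy_from_user()
	 * instead of reaching the key type's ->preparse()/->instantiate().
	 */
	payload = NULL;
	if (plen) {
		ret = -ENOMEM;
		payload = kmalloc(plen, GFP_KERNEL);
		if (!payload)
			goto error;

		ret = -EFAULT;
		if (copy_from_user(payload, _payload, plen) != 0)
			goto error2;
	}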
Cc: stable@vger.kernel.org # 2.6.10+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Johan Hovold [Wed, 26 Apr 2017 10:24:21 +0000 (12:24 +0200)]
serial: ifx6x60: fix use-after-free on module unload
commit
1e948479b3d63e3ac0ecca13cbf4921c7d17c168 upstream.
Make sure to deregister the SPI driver before releasing the tty driver
to avoid use-after-free in the SPI remove callback where the tty
devices are deregistered.
Fixes:
72d4724ea54c ("serial: ifx6x60: Add modem power off function in the platform reboot process")
Cc: stable <stable@vger.kernel.org> # 3.8
Cc: Jun Chen <jun.d.chen@intel.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Maciej W. Rozycki [Thu, 15 Jun 2017 23:08:29 +0000 (00:08 +0100)]
MIPS: Send SIGILL for BPOSGE32 in `__compute_return_epc_for_insn'
commit
7b82c1058ac1f8f8b9f2b8786b1f710a57a870a8 upstream.
Fix commit
e50c0a8fa60d ("Support the MIPS32 / MIPS64 DSP ASE.") and
send SIGILL rather than SIGBUS whenever an unimplemented BPOSGE32 DSP
ASE instruction has been encountered in `__compute_return_epc_for_insn'
as our Reserved Instruction exception handler would in response to an
attempt to actually execute the instruction. Sending SIGBUS only makes
sense for the unaligned PC case, since moved to `__compute_return_epc'.
Adjust function documentation accordingly, correct formatting and use
`pr_info' rather than `printk' as the other exit path already does.
Fixes:
e50c0a8fa60d ("Support the MIPS32 / MIPS64 DSP ASE.")
Signed-off-by: Maciej W. Rozycki <macro@imgtec.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: linux-mips@linux-mips.org
Cc: stable@vger.kernel.org # 2.6.14+
Patchwork: https://patchwork.linux-mips.org/patch/16396/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Takashi Iwai [Mon, 9 Oct 2017 09:09:20 +0000 (11:09 +0200)]
ALSA: seq: Fix use-after-free at creating a port
commit
71105998845fb012937332fe2e806d443c09e026 upstream.
There is a potential race window opened at creating and deleting a
port via ioctl, as spotted by fuzzing. snd_seq_create_port() creates
a port object and returns its pointer, but it doesn't take the
refcount, thus it can be deleted immediately by another thread.
Meanwhile, snd_seq_ioctl_create_port() still calls the function
snd_seq_system_client_ev_port_start() with the created port object
that is being deleted, and this triggers use-after-free like:
BUG: KASAN: use-after-free in snd_seq_ioctl_create_port+0x504/0x630 [snd_seq] at addr ffff8801f2241cb1
=============================================================================
BUG kmalloc-512 (Tainted: G B ): kasan: bad access detected
-----------------------------------------------------------------------------
INFO: Allocated in snd_seq_create_port+0x94/0x9b0 [snd_seq] age=1 cpu=3 pid=4511
___slab_alloc+0x425/0x460
__slab_alloc+0x20/0x40
kmem_cache_alloc_trace+0x150/0x190
snd_seq_create_port+0x94/0x9b0 [snd_seq]
snd_seq_ioctl_create_port+0xd1/0x630 [snd_seq]
snd_seq_do_ioctl+0x11c/0x190 [snd_seq]
snd_seq_ioctl+0x40/0x80 [snd_seq]
do_vfs_ioctl+0x54b/0xda0
SyS_ioctl+0x79/0x90
entry_SYSCALL_64_fastpath+0x16/0x75
INFO: Freed in port_delete+0x136/0x1a0 [snd_seq] age=1 cpu=2 pid=4717
__slab_free+0x204/0x310
kfree+0x15f/0x180
port_delete+0x136/0x1a0 [snd_seq]
snd_seq_delete_port+0x235/0x350 [snd_seq]
snd_seq_ioctl_delete_port+0xc8/0x180 [snd_seq]
snd_seq_do_ioctl+0x11c/0x190 [snd_seq]
snd_seq_ioctl+0x40/0x80 [snd_seq]
do_vfs_ioctl+0x54b/0xda0
SyS_ioctl+0x79/0x90
entry_SYSCALL_64_fastpath+0x16/0x75
Call Trace:
[<ffffffff81b03781>] dump_stack+0x63/0x82
[<ffffffff81531b3b>] print_trailer+0xfb/0x160
[<ffffffff81536db4>] object_err+0x34/0x40
[<ffffffff815392d3>] kasan_report.part.2+0x223/0x520
[<ffffffffa07aadf4>] ? snd_seq_ioctl_create_port+0x504/0x630 [snd_seq]
[<ffffffff815395fe>] __asan_report_load1_noabort+0x2e/0x30
[<ffffffffa07aadf4>] snd_seq_ioctl_create_port+0x504/0x630 [snd_seq]
[<ffffffffa07aa8f0>] ? snd_seq_ioctl_delete_port+0x180/0x180 [snd_seq]
[<ffffffff8136be50>] ? taskstats_exit+0xbc0/0xbc0
[<ffffffffa07abc5c>] snd_seq_do_ioctl+0x11c/0x190 [snd_seq]
[<ffffffffa07abd10>] snd_seq_ioctl+0x40/0x80 [snd_seq]
[<ffffffff8136d433>] ? acct_account_cputime+0x63/0x80
[<ffffffff815b515b>] do_vfs_ioctl+0x54b/0xda0
.....
We may fix this in a few different ways, and in this patch, it's fixed
simply by taking the refcount properly at snd_seq_create_port() and
letting the caller unref the object after use. Also, there is another
potential use-after-free by sprintf() call in snd_seq_create_port(),
and this is moved inside the lock.
This fix covers CVE-2017-15265.
Reported-and-tested-by: Michael23 Yu <ycqzsy@gmail.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Vladis Dronov [Tue, 12 Sep 2017 22:21:21 +0000 (00:21 +0200)]
nl80211: check for the required netlink attributes presence
commit
e785fa0a164aa11001cba931367c7f94ffaff888 upstream.
nl80211_set_rekey_data() does not check if the required attributes
NL80211_REKEY_DATA_{REPLAY_CTR,KEK,KCK} are present when processing
NL80211_CMD_SET_REKEY_OFFLOAD request. This request can be issued by
users with CAP_NET_ADMIN privilege and may result in NULL dereference
and a system crash. Add a check for the required attributes presence.
This patch is based on the patch by bo Zhang.
This fixes CVE-2017-12153.
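Not quoted from the patch; a hedged sketch of the presence check, using the attribute names from the commit message:
	/* Reject NL80211_CMD_SET_REKEY_OFFLOAD unless all three rekey
	 * attributes were actually supplied by userspace.
	 */
	if (!tb[NL80211_REKEY_DATA_REPLAY_CTR] ||
	    !tb[NL80211_REKEY_DATA_KEK] ||
	    !tb[NL80211_REKEY_DATA_KCK])
		return -EINVAL;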
References: https://bugzilla.redhat.com/show_bug.cgi?id=1491046
Fixes:
e5497d766ad ("cfg80211/nl80211: support GTK rekey offload")
Cc: <stable@vger.kernel.org> # v3.1-rc1
Reported-by: bo Zhang <zhangbo5891001@gmail.com>
Signed-off-by: Vladis Dronov <vdronov@redhat.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Vladis Dronov [Wed, 2 Aug 2017 17:50:14 +0000 (19:50 +0200)]
xfrm: policy: check policy direction value
commit
7bab09631c2a303f87a7eb7e3d69e888673b9b7e upstream.
The 'dir' parameter in xfrm_migrate() is a user-controlled byte which is used
as an array index. This can lead to an out-of-bound access, kernel lockup and
DoS. Add a check for the 'dir' value.
This fixes CVE-2017-11600.
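Not quoted from the patch; a hedged sketch of validating the user-controlled direction before it is used as an array index:
	/* 'dir' comes from userspace and indexes fixed-size per-direction
	 * arrays, so bound it against XFRM_POLICY_MAX before any lookup.
	 */
	if (dir >= XFRM_POLICY_MAX)
		return -EINVAL;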
References: https://bugzilla.redhat.com/show_bug.cgi?id=1474928
Fixes:
80c9abaabf42 ("[XFRM]: Extension for dynamic update of endpoint address(es)")
Cc: <stable@vger.kernel.org> # v2.6.21-rc1
Reported-by: "bo Zhang" <zhangbo5891001@gmail.com>
Signed-off-by: Vladis Dronov <vdronov@redhat.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
David Howells [Wed, 14 Jun 2017 23:12:24 +0000 (00:12 +0100)]
rxrpc: Fix several cases where a padded len isn't checked in ticket decode
commit
5f2f97656ada8d811d3c1bef503ced266fcd53a0 upstream.
This fixes CVE-2017-7482.
When a kerberos 5 ticket is being decoded so that it can be loaded into an
rxrpc-type key, there are several places in which the length of a
variable-length field is checked to make sure that it's not going to
overrun the available data - but the data is padded to the nearest
four-byte boundary and the code doesn't check for this extra. This could
lead to the size-remaining variable wrapping and the data pointer going
over the end of the buffer.
Fix this by making the various variable-length data checks use the padded
length.
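Not quoted from the patch; a hedged sketch of checking the padded length so the remaining-size counter cannot wrap:
	/* Fields are padded to a 4-byte boundary on the wire; consume and
	 * bound-check the padded size, not the raw length.
	 */
	unsigned int paddedlen = (len + 3) & ~3u;

	if (paddedlen > toklen)
		return -EINVAL;
	toklen -= paddedlen;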
Reported-by: 石磊 <shilei-c@360.cn>
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Marc Dionne <marc.c.dionne@auristor.com>
Reviewed-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Kees Cook [Fri, 23 Jun 2017 22:08:57 +0000 (15:08 -0700)]
fs/exec.c: account for argv/envp pointers
commit
98da7d08850fb8bdeb395d6368ed15753304aa0c upstream.
When limiting the argv/envp strings during exec to 1/4 of the stack limit,
the storage of the pointers to the strings was not included. This means
that an exec with huge numbers of tiny strings could eat 1/4 of the stack
limit in strings and then additional space would be later used by the
pointers to the strings.
For example, on 32-bit with an 8MB stack rlimit, an exec with 1677721
single-byte strings would consume less than 2MB of stack, the max (8MB / 4)
amount allowed, but the pointers to the strings would consume the remaining
additional stack space (1677721 * 4 == 6710884). The result (1677721 +
6710884 == 8388605) would exhaust stack space entirely. Controlling this
stack exhaustion could result in pathological behavior in setuid binaries
(CVE-2017-1000365).
[akpm@linux-foundation.org: additional commenting from Kees]
Fixes:
b6a2fea39318 ("mm: variable length argument support")
Link: http://lkml.kernel.org/r/20170622001720.GA32173@beast
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Rik van Riel <riel@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Qualys Security Advisory <qsa@qualys.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Kazuya Mizuguchi [Mon, 2 Oct 2017 05:01:41 +0000 (14:01 +0900)]
usb: renesas_usbhs: Fix DMAC sequence for receiving zero-length packet
commit
29c7f3e68eec4ae94d85ad7b5dfdafdb8089f513 upstream.
On R-Car SoCs, the DREQE bit of the DnFIFOSEL register should be set to 1
only after the DE bit of the USB-DMAC has been set to 1, once the USB-DMAC
has received a zero-length packet. Otherwise, the USB-DMAC's transfer
completion interrupt doesn't happen. Even if the driver changes the sequence,
normal operations (transmit/receive without zero-length packet) will
not cause any side-effects. So, this patch fixes the sequence anyway.
Signed-off-by: Kazuya Mizuguchi <kazuya.mizuguchi.ks@renesas.com>
[shimoda: revise the commit log]
Fixes:
e73a9891b3a1 ("usb: renesas_usbhs: add DMAEngine support")
Cc: <stable@vger.kernel.org> # v3.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Yoshihiro Shimoda [Thu, 12 Mar 2015 06:35:19 +0000 (15:35 +0900)]
usb: renesas_usbhs: fix the sequence in xfer_work()
commit
9b53d9af7aac09cf249d72bfbf15f08e47c4f7fe upstream.
This patch fixes the setup sequence in xfer_work(). Otherwise,
sometimes a usb transaction will get stuck.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Yoshihiro Shimoda [Fri, 22 Aug 2014 11:13:50 +0000 (20:13 +0900)]
usb: renesas_usbhs: fix the behavior of some usbhs_pkt_handle
commit
8355b2b3082d302091506703d2e4e239f7deed7f upstream.
Some gadget drivers will call usb_ep_queue() more than once before
the first request has finished. However, this driver didn't handle
that correctly. So, this patch fixes the behavior of some
usbhs_pkt_handle using the "running" flag. Otherwise, the oops below
happens if we use g_ncm driver and when the "iperf -u -c host -b 200M"
is running.
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = c0004000
[00000000] *pgd=00000000
Internal error: Oops: 80000007 [#1] SMP ARM
Modules linked in: usb_f_ncm g_ncm libcomposite u_ether
CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 3.17.0-rc1-00008-g8b2be8a-dirty #20
task: c051c7e0 ti: c0512000 task.ti: c0512000
PC is at 0x0
LR is at usbhsf_pkt_handler+0xa8/0x114
pc : [<00000000>] lr : [<c0278fb4>] psr: 60000193
sp : c0513ce8 ip : c0513c58 fp : c0513d24
r10: 00000001 r9 : 00000193 r8 : eebec4a0
r7 : eebec410 r6 : eebe0c6c r5 : 00000000 r4 : ee4a2774
r3 : 00000000 r2 : ee251e00 r1 : c0513cf4 r0 : ee4a2774
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Al Viro [Sat, 30 Sep 2017 04:28:05 +0000 (05:28 +0100)]
leak in O_DIRECT readv past the EOF
In all versions from 2.5.62 to 3.15, on each iteration through the loop
by iovec array in do_blockdev_direct_IO() we used to do this:
sdio.head = 0;
sdio.tail = 0;
...
retval = do_direct_IO(dio, &sdio, &map_bh);
if (retval) {
dio_cleanup(dio, &sdio);
break;
}
with another dio_cleanup() done after the loop, catching the situation when
retval had been 0. Consider the situation when e.g. the 3rd iovec in 4-iovec
array passed to readv() has crossed the EOF. do_direct_IO() returns 0 and
buggers off *without* exhausting the page array. The loop proceeds to the
next iovec without calling dio_cleanup() and resets sdio.head and sdio.tail.
That reset of sdio.{head,tail} has prevented the eventual dio_cleanup() from
seeing anything and the page references end up leaking.
Commit
7b2c99d15559 (new helper: iov_iter_get_pages()) in 3.16 had eliminated
the loop by iovec array, along with sdio.head and sdio.tail resets. Backporting
that is too much work - the minimal fix is simply to make sure that the only case
when do_direct_IO() buggers off early without returning non-zero will not skip
dio_cleanup().
The fix applies to all versions from 2.5.62 to 3.15.
Reported-and-tested-by: Venki Pallipadi <venki@cohesity.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Josh Poimboeuf [Tue, 25 Oct 2016 14:51:14 +0000 (09:51 -0500)]
mm/page_alloc: Remove kernel address exposure in free_reserved_area()
commit
adb1fe9ae2ee6ef6bc10f3d5a588020e7664dfa7 upstream.
Linus suggested we try to remove some of the low-hanging fruit related
to kernel address exposure in dmesg. The only leaks I see on my local
system are:
Freeing SMP alternatives memory: 32K (ffffffff9e309000 - ffffffff9e311000)
Freeing initrd memory: 10588K (ffffa0b736b42000 - ffffa0b737599000)
Freeing unused kernel memory: 3592K (ffffffff9df87000 - ffffffff9e309000)
Freeing unused kernel memory: 1352K (ffffa0b7288ae000 - ffffa0b728a00000)
Freeing unused kernel memory: 632K (ffffa0b728d62000 - ffffa0b728e00000)
Linus says:
"I suspect we should just remove [the addresses in the 'Freeing'
messages]. I'm sure they are useful in theory, but I suspect they
were more useful back when the whole "free init memory" was
originally done.
These days, if we have a use-after-free, I suspect the init-mem
situation is the easiest situation by far. Compared to all the dynamic
allocations which are much more likely to show it anyway. So having
debug output for that case is likely not all that productive."
With this patch the freeing messages now look like this:
Freeing SMP alternatives memory: 32K
Freeing initrd memory: 10588K
Freeing unused kernel memory: 3592K
Freeing unused kernel memory: 1352K
Freeing unused kernel memory: 632K
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/6836ff90c45b71d38e5d4405aec56fa9e5d1d4b2.1477405374.git.jpoimboe@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Neal Cardwell [Thu, 27 Jul 2017 01:43:27 +0000 (21:43 -0400)]
tcp: fix xmit timer to only be reset if data ACKed/SACKed
commit
df92c8394e6ea0469e8056946ef8add740ab8046 upstream.
Fix a TCP loss recovery performance bug raised recently on the netdev
list, in two threads:
(i) July 26, 2017: netdev thread "TCP fast retransmit issues"
(ii) July 26, 2017: netdev thread:
"[PATCH V2 net-next] TLP: Don't reschedule PTO when there's one
outstanding TLP retransmission"
The basic problem is that incoming TCP packets that did not indicate
forward progress could cause the xmit timer (TLP or RTO) to be rearmed
and pushed back in time. In certain corner cases this could result in
the following problems noted in these threads:
- Repeated ACKs coming in with bogus SACKs corrupted by middleboxes
could cause TCP to repeatedly schedule TLPs forever. We kept
sending TLPs after every ~200ms, which elicited bogus SACKs, which
caused more TLPs, ad infinitum; we never fired an RTO to fill in
the holes.
- Incoming data segments could, in some cases, cause us to reschedule
our RTO or TLP timer further out in time, for no good reason. This
could cause repeated inbound data to result in stalls in outbound
data, in the presence of packet loss.
This commit fixes these bugs by changing the TLP and RTO ACK
processing to:
(a) Only reschedule the xmit timer once per ACK.
(b) Only reschedule the xmit timer if tcp_clean_rtx_queue() deems the
ACK indicates sufficient forward progress (a packet was
cumulatively ACKed, or we got a SACK for a packet that was sent
before the most recent retransmit of the write queue head).
This brings us back into closer compliance with the RFCs, since, as
the comment for tcp_rearm_rto() notes, we should only restart the RTO
timer after forward progress on the connection. Previously we were
restarting the xmit timer even in these cases where there was no
forward progress.
As a side benefit, this commit simplifies and speeds up the TCP timer
arming logic. We had been calling inet_csk_reset_xmit_timer() three
times on normal ACKs that cumulatively acknowledged some data:
1) Once near the top of tcp_ack() to switch from TLP timer to RTO:
if (icsk->icsk_pending == ICSK_TIME_LOSS_PROBE)
tcp_rearm_rto(sk);
2) Once in tcp_clean_rtx_queue(), to update the RTO:
if (flag & FLAG_ACKED) {
tcp_rearm_rto(sk);
3) Once in tcp_ack() after tcp_fastretrans_alert() to switch from RTO
to TLP:
if (icsk->icsk_pending == ICSK_TIME_RETRANS)
tcp_schedule_loss_probe(sk);
This commit, by only rescheduling the xmit timer once per ACK,
simplifies the code and reduces CPU overhead.
This commit was tested in an A/B test with Google web server
traffic. SNMP stats and request latency metrics were within noise
levels, substantiating that for normal web traffic patterns this is a
rare issue. This commit was also tested with packetdrill tests to
verify that it fixes the timer behavior in the corner cases discussed
in the netdev threads mentioned above.
This patch is a bug fix patch intended to be queued for -stable
releases.
[This version of the commit was compiled and briefly tested
based on top of v3.10.107.]
Change-Id: If0417380fd59290b65cf04a415373aa13dd1dad7
Fixes:
6ba8a3b19e76 ("tcp: Tail loss probe (TLP)")
Reported-by: Klavs Klavsen <kl@vsen.dk>
Reported-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Neal Cardwell [Thu, 27 Jul 2017 14:09:30 +0000 (10:09 -0400)]
tcp: enable xmit timer fix by having TLP use time when RTO should fire
commit
a2815817ffa68c7933a43eb55836d6e789bd4389 upstream.
Have tcp_schedule_loss_probe() base the TLP scheduling decision based
on when the RTO *should* fire. This is to enable the upcoming xmit
timer fix in this series, where tcp_schedule_loss_probe() cannot
assume that the last timer installed was an RTO timer (because we are
no longer doing the "rearm RTO, rearm RTO, rearm TLP" dance on every
ACK). So tcp_schedule_loss_probe() must independently figure out when
an RTO would want to fire.
In the new TLP implementation following in this series, we cannot
assume that icsk_timeout was set based on an RTO; after processing a
cumulative ACK the icsk_timeout we see can be from a previous TLP or
RTO. So we need to independently recalculate the RTO time (instead of
reading it out of icsk_timeout). Removing this dependency on the
nature of icsk_timeout makes things a little easier to reason about
anyway.
Note that the old and new code should be equivalent, since they are
both saying: "if the RTO is in the future, but at an earlier time than
the normal TLP time, then set the TLP timer to fire when the RTO would
have fired".
[This version of the commit was compiled and briefly tested
based on top of v3.10.107.]
Change-Id: I597ad6446edde15bf2cea8e56d603a2c52f8221b
Fixes:
6ba8a3b19e76 ("tcp: Tail loss probe (TLP)")
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Neal Cardwell [Thu, 27 Jul 2017 14:01:20 +0000 (10:01 -0400)]
tcp: introduce tcp_rto_delta_us() helper for xmit timer fix
commit
e1a10ef7fa876f8510aaec36ea5c0cf34baba410 upstream.
Pure refactor. This helper will be required in the xmit timer fix
later in the patch series. (Because the TLP logic will want to make
this calculation.)
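Not the backported code itself; a hedged sketch of what such a helper computes, assuming 3.10-era jiffies-based timestamps (TCP_SKB_CB()->when and tcp_time_stamp):
/* Time until the RTO would fire, measured from the send time of the oldest
 * unacked skb; a non-positive value means that moment is already in the past.
 */
static inline s32 tcp_rto_delta(const struct sock *sk)
{
	const struct sk_buff *head = tcp_write_queue_head(sk);
	u32 rto_fire = TCP_SKB_CB(head)->when + inet_csk(sk)->icsk_rto;

	return (s32)(rto_fire - tcp_time_stamp);
}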
[This version of the commit was compiled and briefly tested
based on top of v3.10.107.]
Change-Id: I1ccfba0b00465454bf5ce22e6fef5f7b5dd94d15
Fixes:
6ba8a3b19e76 ("tcp: Tail loss probe (TLP)")
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Nandita Dukkipati <nanditad@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Al Viro [Fri, 19 Dec 2014 06:20:58 +0000 (06:20 +0000)]
Bluetooth: cmtp: cmtp_add_connection() should verify that it's dealing with l2cap socket
commit
96c26653ce65bf84f3212f8b00d4316c1efcbf4c upstream.
... rather than relying on ciptool(8) never passing it anything else. Give
it e.g. an AF_UNIX connected socket (from socketpair(2)) and it'll oops,
trying to evaluate &l2cap_pi(sock->sk)->chan->dst...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Al Viro [Fri, 19 Dec 2014 06:20:59 +0000 (06:20 +0000)]
Bluetooth: bnep: bnep_add_connection() should verify that it's dealing with l2cap socket
commit
71bb99a02b32b4cc4265118e85f6035ca72923f0 upstream.
same story as cmtp
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willem de Bruijn [Thu, 10 Aug 2017 16:29:19 +0000 (12:29 -0400)]
udp: consistently apply ufo or fragmentation
commit
85f1bd9a7b5a79d5baa8bf44af19658f7bf77bfa upstream.
When iteratively building a UDP datagram with MSG_MORE and that
datagram exceeds MTU, consistently choose UFO or fragmentation.
Once skb_is_gso, always apply ufo. Conversely, once a datagram is
split across multiple skbs, do not consider ufo.
Sendpage already maintains the first invariant, only add the second.
IPv6 does not have a sendpage implementation to modify.
A gso skb must have a partial checksum, do not follow sk_no_check_tx
in udp_send_skb.
Found by syzkaller.
[gregkh - tweaks for 3.18 for ipv6, hopefully they are correct...]
[wt: s/skb_is_gso/skb_has_frags for 3.10]
Fixes:
e89e9cf539a2 ("[IPv4/IPv6]: UFO Scatter-gather approach")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Cheah Kok Cheong [Thu, 3 Aug 2017 12:14:41 +0000 (13:14 +0100)]
Staging: comedi: comedi_fops: Avoid orphaned proc entry
commit
bf279ece37d2a3eaaa9813fcd7a1d8a81eb29c20 upstream.
Move comedi_proc_init to the end to avoid orphaned proc entry
if module loading failed.
Signed-off-by: Cheah Kok Cheong <thrust73@gmail.com>
Reviewed-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Fri, 3 Feb 2017 22:29:42 +0000 (14:29 -0800)]
net: skb_needs_check() accepts CHECKSUM_NONE for tx
commit
6e7bc478c9a006c701c14476ec9d389a484b4864 upstream.
My recent change missed fact that UFO would perform a complete
UDP checksum before segmenting in frags.
In this case skb->ip_summed is set to CHECKSUM_NONE.
We need to add this valid case to skb_needs_check()
Fixes:
b2504a5dbef3 ("net: reduce skb_warn_bad_offload() noise")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Tue, 31 Jan 2017 18:20:32 +0000 (10:20 -0800)]
net: reduce skb_warn_bad_offload() noise
commit
b2504a5dbef3305ef41988ad270b0e8ec289331c upstream.
Dmitry reported warnings occurring in __skb_gso_segment() [1]
All SKB_GSO_DODGY producers can allow user space to feed
packets that trigger the current check.
We could prevent them from doing so, rejecting packets, but
this might add regressions to existing programs.
It turns out our SKB_GSO_DODGY handlers properly set up checksum
information that is needed anyway when packets needs to be segmented.
By checking again skb_needs_check() after skb_mac_gso_segment(),
we should remove these pesky warnings, at a very minor cost.
With help from Willem de Bruijn
[1]
WARNING: CPU: 1 PID: 6768 at net/core/dev.c:2439 skb_warn_bad_offload+0x2af/0x390 net/core/dev.c:2434
lo: caps=(0x000000a2803b7c69, 0x0000000000000000) len=138 data_len=0 gso_size=15883 gso_type=4 ip_summed=0
Kernel panic - not syncing: panic_on_warn set ...
CPU: 1 PID: 6768 Comm: syz-executor1 Not tainted 4.9.0 #5
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
ffff8801c063ecd8 ffffffff82346bdf ffffffff00000001 1ffff100380c7d2e
ffffed00380c7d26 0000000041b58ab3 ffffffff84b37e38 ffffffff823468f1
ffffffff84820740 ffffffff84f289c0 dffffc0000000000 ffff8801c063ee20
Call Trace:
[<ffffffff82346bdf>] __dump_stack lib/dump_stack.c:15 [inline]
[<ffffffff82346bdf>] dump_stack+0x2ee/0x3ef lib/dump_stack.c:51
[<ffffffff81827e34>] panic+0x1fb/0x412 kernel/panic.c:179
[<ffffffff8141f704>] __warn+0x1c4/0x1e0 kernel/panic.c:542
[<ffffffff8141f7e5>] warn_slowpath_fmt+0xc5/0x100 kernel/panic.c:565
[<ffffffff8356cbaf>] skb_warn_bad_offload+0x2af/0x390 net/core/dev.c:2434
[<ffffffff83585cd2>] __skb_gso_segment+0x482/0x780 net/core/dev.c:2706
[<ffffffff83586f19>] skb_gso_segment include/linux/netdevice.h:3985 [inline]
[<ffffffff83586f19>] validate_xmit_skb+0x5c9/0xc20 net/core/dev.c:2969
[<ffffffff835892bb>] __dev_queue_xmit+0xe6b/0x1e70 net/core/dev.c:3383
[<ffffffff8358a2d7>] dev_queue_xmit+0x17/0x20 net/core/dev.c:3424
[<ffffffff83ad161d>] packet_snd net/packet/af_packet.c:2930 [inline]
[<ffffffff83ad161d>] packet_sendmsg+0x32ed/0x4d30 net/packet/af_packet.c:2955
[<ffffffff834f0aaa>] sock_sendmsg_nosec net/socket.c:621 [inline]
[<ffffffff834f0aaa>] sock_sendmsg+0xca/0x110 net/socket.c:631
[<ffffffff834f329a>] ___sys_sendmsg+0x8fa/0x9f0 net/socket.c:1954
[<ffffffff834f5e58>] __sys_sendmsg+0x138/0x300 net/socket.c:1988
[<ffffffff834f604d>] SYSC_sendmsg net/socket.c:1999 [inline]
[<ffffffff834f604d>] SyS_sendmsg+0x2d/0x50 net/socket.c:1995
[<ffffffff84371941>] entry_SYSCALL_64_fastpath+0x1f/0xc2
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Julian Anastasov [Mon, 17 Jul 2017 06:59:52 +0000 (09:59 +0300)]
ipvs: SNAT packet replies only for NATed connections
commit
3c5ab3f395d66a9e4e937fcfdf6ebc63894f028b upstream.
We do not check if packet from real server is for NAT
connection before performing SNAT. This causes problems
for setups that use DR/TUN and allow local clients to
access the real server directly, for example:
- local client in director creates IPVS-DR/TUN connection
CIP->VIP and the request packets are routed to RIP.
Talks are finished but IPVS connection is not expired yet.
- second local client creates non-IPVS connection CIP->RIP
with same reply tuple RIP->CIP and when replies are received
on LOCAL_IN we wrongly assign them for the first client
connection because RIP->CIP matches the reply direction.
As result, IPVS SNATs replies for non-IPVS connections.
The problem is more visible to local UDP clients but in rare
cases it can happen also for TCP or remote clients when the
real server sends the reply traffic via the director.
So, better to be more precise for the reply traffic.
As replies are not expected for DR/TUN connections, better
to not touch them.
Reported-by: Nick Moriarty <nick.moriarty@york.ac.uk>
Signed-off-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willy Tarreau [Tue, 27 Jun 2017 09:49:32 +0000 (11:49 +0200)]
Linux 3.10.107
Helge Deller [Mon, 19 Jun 2017 15:34:05 +0000 (17:34 +0200)]
Allow stack to grow up to address space limit
commit
bd726c90b6b8ce87602208701b208a208e6d5600 upstream.
Fix expand_upwards() on architectures with an upward-growing stack (parisc,
metag and partly IA-64) to allow the stack to reliably grow exactly up to
the address space limit given by TASK_SIZE.
Signed-off-by: Helge Deller <deller@gmx.de>
Acked-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Hugh Dickins [Tue, 20 Jun 2017 09:10:44 +0000 (02:10 -0700)]
mm: fix new crash in unmapped_area_topdown()
commit
f4cb767d76cf7ee72f97dd76f6cfa6c76a5edc89 upstream.
Trinity gets kernel BUG at mm/mmap.c:1963! in about 3 minutes of
mmap testing. That's the VM_BUG_ON(gap_end < gap_start) at the
end of unmapped_area_topdown(). Linus points out how MAP_FIXED
(which does not have to respect our stack guard gap intentions)
could result in gap_end below gap_start there. Fix that, and
the similar case in its alternative, unmapped_area().
Cc: stable@vger.kernel.org
Fixes:
1be7107fbe18 ("mm: larger stack guard gap, between vmas")
Reported-by: Dave Jones <davej@codemonkey.org.uk>
Debugged-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Hugh Dickins [Mon, 19 Jun 2017 11:03:24 +0000 (04:03 -0700)]
mm: larger stack guard gap, between vmas
commit
1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.
Stack guard page is a useful feature to reduce a risk of stack smashing
into a different mapping. We have been using a single page gap which
is sufficient to prevent having stack adjacent to a different mapping.
But this seems to be insufficient in the light of the stack usage in
userspace. E.g. glibc uses as large as 64kB alloca() in many commonly
used functions. Others use constructs like gid_t buffer[NGROUPS_MAX]
which is 256kB or stack strings with MAX_ARG_STRLEN.
This will become especially dangerous for suid binaries and the default
no limit for the stack size limit because those applications can be
tricked to consume a large portion of the stack and a single glibc call
could jump over the guard page. These attacks are not theoretical,
unfortunately.
Make those attacks less probable by increasing the stack guard gap
to 1MB (on systems with 4k pages; but make it depend on the page size
because systems with larger base pages might cap stack allocations in
the PAGE_SIZE units) which should cover larger alloca() and VLA stack
allocations. It is obviously not a full fix because the problem is
somehow inherent, but it should reduce attack space a lot.
One could argue that the gap size should be configurable from userspace,
but that can be done later when somebody finds that the new 1MB is wrong
for some special case applications. For now, add a kernel command line
option (stack_guard_gap) to specify the stack gap size (in page units).
Implementation wise, first delete all the old code for stack guard page:
because although we could get away with accounting one extra page in a
stack vma, accounting a larger gap can break userspace - case in point,
a program run with "ulimit -S -v 20000" failed when the 1MB gap was
counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK
and strict non-overcommit mode.
Instead of keeping gap inside the stack vma, maintain the stack guard
gap as a gap between vmas: using vm_start_gap() in place of vm_start
(or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few
places which need to respect the gap - mainly arch_get_unmapped_area(),
and the vma tree's subtree_gap support for that.
Original-patch-by: Oleg Nesterov <oleg@redhat.com>
Original-patch-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
[wt: backport to 4.11: adjust context]
[wt: backport to 4.9: adjust context ; kernel doc was not in admin-guide]
[wt: backport to 4.4: adjust context ; drop ppc hugetlb_radix changes]
[wt: backport to 3.18: adjust context ; no FOLL_POPULATE ;
s390 uses generic arch_get_unmapped_area()]
[wt: backport to 3.16: adjust context]
[wt: backport to 3.10: adjust context ; code logic in PARISC's
arch_get_unmapped_area() wasn't found ; code inserted into
expand_upwards() and expand_downwards() runs under anon_vma lock;
changes for gup.c:faultin_page go to memory.c:__get_user_pages();
included Hugh Dickins' fixes]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Hector Marco-Gisbert [Thu, 10 Mar 2016 19:51:00 +0000 (20:51 +0100)]
x86/mm/32: Enable full randomization on i386 and X86_32
commit
8b8addf891de8a00e4d39fc32f93f7c5eb8feceb upstream.
Currently on i386 and on X86_64 when emulating X86_32 in legacy mode, only
the stack and the executable are randomized but not other mmapped files
(libraries, vDSO, etc.). This patch enables randomization for the
libraries, vDSO and mmap requests on i386 and in X86_32 in legacy mode.
By default on i386 there are 8 bits for the randomization of the libraries,
vDSO and mmaps which only uses 1MB of VA.
This patch preserves the original randomness, using 1MB of VA out of 3GB or
4GB. We think that 1MB out of 3GB is not a big cost for having the ASLR.
The first obvious security benefit is that all objects are randomized (not
only the stack and the executable) in legacy mode which highly increases
the ASLR effectiveness, otherwise the attackers may use these
non-randomized areas. But also sensitive setuid/setgid applications are
more secure because currently, attackers can disable the randomization of
these applications by setting the ulimit stack to "unlimited". This is a
very old and widely known trick to disable the ASLR in i386 which has been
allowed for too long.
Another trick used to disable the ASLR was to set the ADDR_NO_RANDOMIZE
personality flag, but fortunately this doesn't work on setuid/setgid
applications because there are security checks which clear security-relevant
flags.
This patch always randomizes the mmap_legacy_base address, removing the
possibility to disable the ASLR by setting the stack to "unlimited".
Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
Acked-by: Ismael Ripoll Ripoll <iripoll@upv.es>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: kees Cook <keescook@chromium.org>
Link: http://lkml.kernel.org/r/1457639460-5242-1-git-send-email-hecmargi@upv.es
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Kees Cook [Tue, 14 Apr 2015 22:47:45 +0000 (15:47 -0700)]
x86: standardize mmap_rnd() usage
commit
82168140bc4cec7ec9bad39705518541149ff8b7 upstream.
In preparation for splitting out ET_DYN ASLR, this refactors the use of
mmap_rnd() to be used similarly to arm, and extracts the checking of
PF_RANDOMIZE.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jamie Bainbridge [Wed, 26 Apr 2017 00:43:27 +0000 (10:43 +1000)]
ipv6: check raw payload size correctly in ioctl
commit
105f5528b9bbaa08b526d3405a5bcd2ff0c953c8 upstream.
In situations where an skb is paged, the transport header pointer and
tail pointer can be the same because the skb contents are in frags.
This results in ioctl(SIOCINQ/FIONREAD) incorrectly returning a
length of 0 when the length to receive is actually greater than zero.
skb->len is already correctly set in ip6_input_finish() with
pskb_pull(), so use skb->len as it always returns the correct result
for both linear and paged data.
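Not quoted from the patch; a hedged, simplified sketch of the ioctl path after the change:
	case SIOCINQ: {
		struct sk_buff *skb;
		int amount = 0;

		spin_lock_bh(&sk->sk_receive_queue.lock);
		skb = skb_peek(&sk->sk_receive_queue);
		if (skb)
			/* Correct for linear and paged skbs alike, unlike
			 * skb_tail_pointer() - skb_transport_header().
			 */
			amount = skb->len;
		spin_unlock_bh(&sk->sk_receive_queue.lock);
		return put_user(amount, (int __user *)arg);
	}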
Signed-off-by: Jamie Bainbridge <jbainbri@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Sergey Senozhatsky [Sat, 18 Feb 2017 11:42:54 +0000 (03:42 -0800)]
printk: use rcuidle console tracepoint
commit
fc98c3c8c9dcafd67adcce69e6ce3191d5306c9c upstream.
Use rcuidle console tracepoint because, apparently, it may be issued
from an idle CPU:
hw-breakpoint: Failed to enable monitor mode on CPU 0.
hw-breakpoint: CPU 0 failed to disable vector catch
===============================
[ ERR: suspicious RCU usage. ]
4.10.0-rc8-next-20170215+ #119 Not tainted
-------------------------------
./include/trace/events/printk.h:32 suspicious rcu_dereference_check() usage!
other info that might help us debug this:
RCU used illegally from idle CPU!
rcu_scheduler_active = 2, debug_locks = 0
RCU used illegally from extended quiescent state!
2 locks held by swapper/0/0:
#0: (cpu_pm_notifier_lock){......}, at: [<c0237e2c>] cpu_pm_exit+0x10/0x54
#1: (console_lock){+.+.+.}, at: [<c01ab350>] vprintk_emit+0x264/0x474
stack backtrace:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.10.0-rc8-next-20170215+ #119
Hardware name: Generic OMAP4 (Flattened Device Tree)
console_unlock
vprintk_emit
vprintk_default
printk
reset_ctrl_regs
dbg_cpu_pm_notify
notifier_call_chain
cpu_pm_exit
omap_enter_idle_coupled
cpuidle_enter_state
cpuidle_enter_state_coupled
do_idle
cpu_startup_entry
start_kernel
This RCU warning, however, is suppressed by lockdep_off() in printk().
lockdep_off() increments the ->lockdep_recursion counter and thus
disables RCU_LOCKDEP_WARN() and debug_lockdep_rcu_enabled(), which want
lockdep to be enabled "current->lockdep_recursion == 0".
Link: http://lkml.kernel.org/r/20170217015932.11898-1-sergey.senozhatsky@gmail.com
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Tony Lindgren <tony@atomide.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Russell King <rmk@armlinux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[wt: changes are in kernel/printk.c in 3.10]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Willem de Bruijn [Fri, 3 Feb 2017 23:20:48 +0000 (18:20 -0500)]
tun: read vnet_hdr_sz once
commit
e1edab87faf6ca30cd137e0795bc73aa9a9a22ec upstream.
When IFF_VNET_HDR is enabled, a virtio_net header must precede data.
Data length is verified to be greater than or equal to expected header
length tun->vnet_hdr_sz before copying.
Read this value once and cache locally, as it can be updated between
the test and use (TOCTOU).
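Not quoted from the driver; a hedged sketch of the read-once pattern (the 3.10 backport spells it ACCESS_ONCE(), per the note below):
	/* Snapshot the header size once so the length check and the later
	 * header copy cannot observe two different values (TOCTOU).
	 */
	int vnet_hdr_sz = ACCESS_ONCE(tun->vnet_hdr_sz);

	if (len < vnet_hdr_sz)
		return -EINVAL;
	len -= vnet_hdr_sz;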
[js] we have TUN_VNET_HDR in 3.12
Signed-off-by: Willem de Bruijn <willemb@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
CC: Eric Dumazet <edumazet@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
[wt: s/READ_ONCE/ACCESS_ONCE]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Jim Mattson [Mon, 12 Dec 2016 19:01:37 +0000 (11:01 -0800)]
kvm: nVMX: Allow L1 to intercept software exceptions (#BP and #OF)
commit
ef85b67385436ddc1998f45f1d6a210f935b3388 upstream.
When L2 exits to L0 due to "exception or NMI", software exceptions
(#BP and #OF) for which L1 has requested an intercept should be
handled by L1 rather than L0. Previously, only hardware exceptions
were forwarded to L1.
Signed-off-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Josh Poimboeuf [Thu, 13 Apr 2017 22:53:55 +0000 (17:53 -0500)]
ftrace/x86: Fix triple fault with graph tracing and suspend-to-ram
commit
34a477e5297cbaa6ecc6e17c042a866e1cbe80d6 upstream.
On x86-32, with CONFIG_FIRMWARE and multiple CPUs, if you enable function
graph tracing and then suspend to RAM, it will triple fault and reboot when
it resumes.
The first fault happens when booting a secondary CPU:
startup_32_smp()
load_ucode_ap()
prepare_ftrace_return()
ftrace_graph_is_dead()
(accesses 'kill_ftrace_graph')
The early head_32.S code calls into load_ucode_ap(), which has an
ftrace hook, so it calls prepare_ftrace_return(), which calls
ftrace_graph_is_dead(), which tries to access the global
'kill_ftrace_graph' variable with a virtual address, causing a fault
because the CPU is still in real mode.
The fix is to add a check in prepare_ftrace_return() to make sure it's
running in protected mode before continuing. The check makes sure the
stack pointer is a virtual kernel address. It's a bit of a hack, but
it's not very intrusive and it works well enough.
For reference, here are a few other (more difficult) ways this could
have potentially been fixed:
- Move startup_32_smp()'s call to load_ucode_ap() down to *after* paging
is enabled. (No idea what that would break.)
- Track down load_ucode_ap()'s entire callee tree and mark all the
functions 'notrace'. (Probably not realistic.)
- Pause graph tracing in ftrace_suspend_notifier_call() or bringup_cpu()
or __cpu_up(), and ensure that the pause facility can be queried from
real mode.
Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Paul Menzel <pmenzel@molgen.mpg.de>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: "Rafael J . Wysocki" <rjw@rjwysocki.net>
Cc: linux-acpi@vger.kernel.org
Cc: Borislav Petkov <bp@alien8.de>
Cc: Len Brown <lenb@kernel.org>
Link: http://lkml.kernel.org/r/5c1272269a580660703ed2eccf44308e790c7a98.1492123841.git.jpoimboe@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
J. Bruce Fields [Fri, 21 Apr 2017 20:10:18 +0000 (16:10 -0400)]
nfsd: check for oversized NFSv2/v3 arguments
commit
e6838a29ecb484c97e4efef9429643b9851fba6e upstream.
A client can append random data to the end of an NFSv2 or NFSv3 RPC call
without our complaining; we'll just stop parsing at the end of the
expected data and ignore the rest.
Encoded arguments and replies are stored together in an array of pages,
and if a call is too large it could leave inadequate space for the
reply. This is normally OK because NFS RPC's typically have either
short arguments and long replies (like READ) or long arguments and short
replies (like WRITE). But a client that sends an incorrectly long reply
can violate those assumptions. This was observed to cause crashes.
Also, several operations increment rq_next_page in the decode routine
before checking the argument size, which can leave rq_next_page pointing
well past the end of the page array, causing trouble later in
svc_free_pages.
So, following a suggestion from Neil Brown, add a central check to
enforce our expectation that no NFSv2/v3 call has both a large call and
a large reply.
As followup we may also want to rewrite the encoding routines to check
more carefully that they aren't running off the end of the page array.
We may also consider rejecting calls that have any extra garbage
appended. That would be safer, and within our rights by spec, but given
the age of our server and the NFS protocol, and the fact that we've
never enforced this before, we may need to balance that against the
possibility of breaking some oddball client.
Reported-by: Tuomas Haanpää <thaan@synopsys.com>
Reported-by: Ari Kauppi <ari@synopsys.com>
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Al Viro [Fri, 14 Apr 2017 21:22:18 +0000 (17:22 -0400)]
p9_client_readdir() fix
commit
71d6ad08379304128e4bdfaf0b4185d54375423e upstream.
Don't assume that server is sane and won't return more data than
asked for.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Stefano Stabellini [Sat, 16 Apr 2016 01:23:00 +0000 (18:23 -0700)]
xen/x86: don't lose event interrupts
commit
c06b6d70feb32d28f04ba37aa3df17973fd37b6b upstream.
On slow platforms with unreliable TSC, such as QEMU emulated machines,
it is possible for the kernel to request the next event in the past. In
that case, in the current implementation of xen_vcpuop_clockevent, we
simply return -ETIME. To be precise the Xen returns -ETIME and we pass
it on. However the result of this is a missed event, which simply causes
the kernel to hang.
Instead it is better to always ask the hypervisor for a timer event,
even if the timeout is in the past. That way there are no lost
interrupts and the kernel survives. To do that, remove the
VCPU_SSHOTTMR_future flag.
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Juergen Gross <jgross@suse.com>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
santosh.shilimkar@oracle.com [Thu, 14 Apr 2016 17:43:27 +0000 (10:43 -0700)]
RDS: Fix the atomicity for congestion map update
commit
e47db94e10447fc467777a40302f2b393e9af2fa upstream.
Two different threads with different rds sockets may be in
rds_recv_rcvbuf_delta() via receive path. If their ports
both map to the same word in the congestion map, then
using non-atomic ops to update it could cause the map to
be incorrect. Let's use atomics to avoid such an issue.
Full credit to Wengang <wen.gang.wang@oracle.com> for
finding the issue, analysing it and also pointing out
to offending code with spin lock based fix.
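Not quoted from the patch; a hedged sketch of the atomic vs non-atomic bitop distinction on a shared map word:
#include <linux/bitops.h>

/* When two threads update bits that live in the same unsigned long, the
 * non-atomic __set_bit()/__clear_bit() read-modify-write can lose one of
 * the updates; set_bit()/clear_bit() are atomic and cannot.
 */
static void mark_port(unsigned long *map, unsigned int port, int congested)
{
	if (congested)
		set_bit(port, map);
	else
		clear_bit(port, map);
}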
Reviewed-by: Leon Romanovsky <leon@leon.nu>
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Corey Minyard [Mon, 11 Apr 2016 14:10:19 +0000 (09:10 -0500)]
MIPS: Fix crash registers on non-crashing CPUs
commit
c80e1b62ffca52e2d1d865ee58bc79c4c0c55005 upstream.
As part of handling a crash on an SMP system, an IPI is sent to
all other CPUs to save their current registers and stop. It was
using task_pt_regs(current) to get the registers, but that will
only be accurate if the CPU was interrupted running in userland.
Instead allow the architecture to pass in the registers (all
pass NULL now, but allow for the future) and then use get_irq_regs()
which should be accurate as we are in an interrupt. Fall back to
task_pt_regs(current) if nothing else is available.
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Cc: David Daney <ddaney@caviumnetworks.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/13050/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Nikolay Aleksandrov [Fri, 21 Apr 2017 17:42:16 +0000 (20:42 +0300)]
ip6mr: fix notification device destruction
commit
723b929ca0f79c0796f160c2eeda4597ee98d2b8 upstream.
Andrey Konovalov reported a BUG caused by the ip6mr code which is caused
because we call unregister_netdevice_many for a device that is already
being destroyed. In IPv4's ipmr that has been resolved by two commits
long time ago by introducing the "notify" parameter to the delete
function and avoiding the unregister when called from a notifier, so
let's do the same for ip6mr.
The trace from Andrey:
------------[ cut here ]------------
kernel BUG at net/core/dev.c:6813!
invalid opcode: 0000 [#1] SMP KASAN
Modules linked in:
CPU: 1 PID: 1165 Comm: kworker/u4:3 Not tainted 4.11.0-rc7+ #251
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs
01/01/2011
Workqueue: netns cleanup_net
task: ffff880069208000 task.stack: ffff8800692d8000
RIP: 0010:rollback_registered_many+0x348/0xeb0 net/core/dev.c:6813
RSP: 0018:ffff8800692de7f0 EFLAGS: 00010297
RAX: ffff880069208000 RBX: 0000000000000002 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88006af90569
RBP: ffff8800692de9f0 R08: ffff8800692dec60 R09: 0000000000000000
R10: 0000000000000006 R11: 0000000000000000 R12: ffff88006af90070
R13: ffff8800692debf0 R14: dffffc0000000000 R15: ffff88006af90000
FS:  0000000000000000(0000) GS:ffff88006cb00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe7e897d870 CR3: 00000000657e7000 CR4: 00000000000006e0
Call Trace:
unregister_netdevice_many.part.105+0x87/0x440 net/core/dev.c:7881
unregister_netdevice_many+0xc8/0x120 net/core/dev.c:7880
ip6mr_device_event+0x362/0x3f0 net/ipv6/ip6mr.c:1346
notifier_call_chain+0x145/0x2f0 kernel/notifier.c:93
__raw_notifier_call_chain kernel/notifier.c:394
raw_notifier_call_chain+0x2d/0x40 kernel/notifier.c:401
call_netdevice_notifiers_info+0x51/0x90 net/core/dev.c:1647
call_netdevice_notifiers net/core/dev.c:1663
rollback_registered_many+0x919/0xeb0 net/core/dev.c:6841
unregister_netdevice_many.part.105+0x87/0x440 net/core/dev.c:7881
unregister_netdevice_many net/core/dev.c:7880
default_device_exit_batch+0x4fa/0x640 net/core/dev.c:8333
ops_exit_list.isra.4+0x100/0x150 net/core/net_namespace.c:144
cleanup_net+0x5a8/0xb40 net/core/net_namespace.c:463
process_one_work+0xc04/0x1c10 kernel/workqueue.c:2097
worker_thread+0x223/0x19c0 kernel/workqueue.c:2231
kthread+0x35e/0x430 kernel/kthread.c:231
ret_from_fork+0x31/0x40 arch/x86/entry/entry_64.S:430
Code: 3c 32 00 0f 85 70 0b 00 00 48 b8 00 02 00 00 00 00 ad de 49 89
47 78 e9 93 fe ff ff 49 8d 57 70 49 8d 5f 78 eb 9e e8 88 7a 14 fe <0f>
0b 48 8b 9d 28 fe ff ff e8 7a 7a 14 fe 48 b8 00 00 00 00 00
RIP: rollback_registered_many+0x348/0xeb0 RSP: ffff8800692de7f0
---[ end trace e0b29c57e9b3292c ]---
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Xin Long [Thu, 6 Apr 2017 05:10:52 +0000 (13:10 +0800)]
sctp: listen on the sock only when it's state is listening or closed
commit
34b2789f1d9bf8dcca9b5cb553d076ca2cd898ee upstream.
Currently sctp doesn't check the sock's state before listening on it, so
a sock in any state can be turned into a listening sock by calling
sctp_listen.
This patch fixes it by checking the sock's state in sctp_listen, so that
we only listen on a sock that is in the right state.
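A minimal sketch of the kind of state check being described, in plain C
with made-up names (the real check lives in sctp's listen path and uses
sctp's own state helpers):
#include <errno.h>
#include <stdio.h>
enum sock_state { SS_CLOSED, SS_ESTABLISHED, SS_LISTENING };
/* Only allow listen() on a sock that is closed or already listening. */
static int do_listen(enum sock_state state, int backlog)
{
        if (state != SS_LISTENING && state != SS_CLOSED)
                return -EINVAL;
        /* ...start listening, or just update the backlog... */
        (void)backlog;
        return 0;
}
int main(void)
{
        printf("closed: %d\n", do_listen(SS_CLOSED, 8));           /* 0 */
        printf("established: %d\n", do_listen(SS_ESTABLISHED, 8)); /* -EINVAL */
        return 0;
}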
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Eric Dumazet [Thu, 23 Mar 2017 19:39:21 +0000 (12:39 -0700)]
net: neigh: guard against NULL solicit() method
commit
48481c8fa16410ffa45939b13b6c53c2ca609e5f upstream.
Dmitry posted a nice reproducer of a bug triggering in neigh_probe()
when dereferencing a NULL neigh->ops->solicit method.
This can happen for arp_direct_ops/ndisc_direct_ops and similar,
which can be used for NUD_NOARP neighbours (created when dev->header_ops
is NULL). An admin can then force nud_state into some other state that
would fire the neigh timer.
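The fix amounts to guarding the indirect call; a small stand-alone
sketch of that pattern (simplified, not the kernel's neighbour code):
#include <stddef.h>
#include <stdio.h>
struct neigh_ops_sketch { void (*solicit)(const char *target); };
static void arp_solicit_sketch(const char *target)
{
        printf("soliciting %s\n", target);
}
/* Probe a neighbour only if the ops table actually provides solicit(). */
static void neigh_probe_sketch(const struct neigh_ops_sketch *ops,
                               const char *target)
{
        if (ops->solicit)
                ops->solicit(target);
        /* otherwise: nothing to send for NUD_NOARP-style neighbours */
}
int main(void)
{
        struct neigh_ops_sketch arp = { .solicit = arp_solicit_sketch };
        struct neigh_ops_sketch direct = { .solicit = NULL };
        neigh_probe_sketch(&arp, "192.0.2.1");
        neigh_probe_sketch(&direct, "192.0.2.2");   /* no NULL dereference */
        return 0;
}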
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Arnd Bergmann [Tue, 26 Jan 2016 18:08:10 +0000 (13:08 -0500)]
gfs2: avoid uninitialized variable warning
commit
67893f12e5374bbcaaffbc6e570acbc2714ea884 upstream.
We get a bogus warning about a potential uninitialized variable
use in gfs2, because the compiler does not figure out that we
never use the leaf number if get_leaf_nr() returns an error:
fs/gfs2/dir.c: In function 'get_first_leaf':
fs/gfs2/dir.c:802:9: warning: 'leaf_no' may be used uninitialized in this function [-Wmaybe-uninitialized]
fs/gfs2/dir.c: In function 'dir_split_leaf':
fs/gfs2/dir.c:1021:8: warning: 'leaf_no' may be used uninitialized in this function [-Wmaybe-uninitialized]
Changing the 'if (!error)' to 'if (!IS_ERR_VALUE(error))' is
sufficient to let gcc understand that this is exactly the same
condition as in IS_ERR() so it can optimize the code path enough
to understand it.
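For reference, IS_ERR_VALUE() just tests whether an integer falls in the
last-page errno range, the same test IS_ERR() applies to pointers; a
stand-alone sketch of the macro (simplified from the kernel's version,
where MAX_ERRNO is 4095):
#include <stdio.h>
#define MAX_ERRNO       4095
#define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)
int main(void)
{
        long leaf_no = 12345;   /* a valid leaf number */
        long err = -5;          /* -EIO */
        printf("valid: use it? %d\n", !IS_ERR_VALUE(leaf_no));  /* 1 */
        printf("error: use it? %d\n", !IS_ERR_VALUE(err));      /* 0 */
        return 0;
}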
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Arnd Bergmann [Thu, 28 Jan 2016 21:58:28 +0000 (22:58 +0100)]
hostap: avoid uninitialized variable use in hfa384x_get_rid
commit
48dc5fb3ba53b20418de8514700f63d88c5de3a3 upstream.
The driver reads a value from hfa384x_from_bap(), which may fail,
and then assigns the value to a local variable. gcc detects that
in the failure case, the 'rlen' variable now contains
uninitialized data:
In file included from ../drivers/net/wireless/intersil/hostap/hostap_pci.c:220:0:
drivers/net/wireless/intersil/hostap/hostap_hw.c: In function 'hfa384x_get_rid':
drivers/net/wireless/intersil/hostap/hostap_hw.c:842:5: warning: 'rec' may be used uninitialized in this function [-Wmaybe-uninitialized]
if (le16_to_cpu(rec.len) == 0) {
This restructures the function as suggested by Russell King, to
make it more readable and get more reliable error handling, by
handling each failure mode using a goto.
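The goto style referred to here is the usual kernel error-handling
idiom: each failure jumps to a label that undoes only what has already
been set up. An illustrative userspace sketch (hypothetical helper, not
the hostap code):
#include <stdio.h>
#include <stdlib.h>
static int get_record_sketch(size_t len, void **out)
{
        unsigned char *buf = malloc(len ? len : 1);
        if (!buf)
                goto fail;
        if (len == 0)           /* stands in for "device returned a bad length" */
                goto free_buf;
        /* ...read len bytes from the device into buf... */
        *out = buf;
        return 0;
free_buf:
        free(buf);
fail:
        return -1;
}
int main(void)
{
        void *rec = NULL;
        printf("bad length: %d\n", get_record_sketch(0, &rec));   /* -1 */
        printf("ok:         %d\n", get_record_sketch(64, &rec));  /* 0 */
        free(rec);
        return 0;
}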
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Arnd Bergmann [Mon, 25 Jan 2016 21:54:56 +0000 (22:54 +0100)]
tty: nozomi: avoid a harmless gcc warning
commit
a4f642a8a3c2838ad09fe8313d45db46600e1478 upstream.
The nozomi wireless data driver has its own helper function to
transfer data from a FIFO, doing an extra byte swap on big-endian
architectures, presumably to bring the data back into byte-serial
order after readw() or readl() perform their implicit byteswap.
This helper function is used in the receive_data() function to
first read the length into a 32-bit variable, which causes
a compile-time warning:
drivers/tty/nozomi.c: In function 'receive_data':
drivers/tty/nozomi.c:857:9: warning: 'size' may be used uninitialized in this function [-Wmaybe-uninitialized]
The problem is that gcc is unsure whether the data was actually
read or not. We know that it is at this point, so we can replace
it with a single readl() to shut up that warning.
I am leaving the byteswap in there to preserve the existing behavior,
even though this seems fishy: reading the length of the data into a
cpu-endian variable should normally not need a second byteswap on
big-endian systems, unless the hardware is aware of the CPU endianness.
There appears to be a lot more confusion about endianness in this
driver, so it probably has not worked on big-endian systems in a long
time, if ever, and I have no way to test it. It's quite possible that
this driver has not been used by anyone in a while; the last patch that
looks like it was tested on the hardware is from 2008.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Andrey Konovalov [Wed, 29 Mar 2017 14:11:22 +0000 (16:11 +0200)]
net/packet: fix overflow in check for tp_reserve
commit
bcc5364bdcfe131e6379363f089e7b4108d35b70 upstream.
When calculating po->tp_hdrlen + po->tp_reserve the result can overflow.
Fix by checking that tp_reserve <= INT_MAX on assign.
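The underlying issue is that a 32-bit sum can wrap; bounding tp_reserve
by INT_MAX at the time it is set makes the later addition safe. A
stand-alone illustration with made-up values (not the packet socket code
itself):
#include <limits.h>
#include <stdio.h>
int main(void)
{
        unsigned int tp_hdrlen = 32;
        unsigned int tp_reserve = UINT_MAX - 16;    /* attacker-chosen */
        /* Without a bound, the sum silently wraps to a small value: */
        printf("wrapped sum: %u\n", tp_hdrlen + tp_reserve);
        /* The fix: refuse a reserve larger than INT_MAX up front. */
        if (tp_reserve > INT_MAX)
                printf("rejected: tp_reserve too large\n");
        return 0;
}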
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Andrey Konovalov [Wed, 29 Mar 2017 14:11:21 +0000 (16:11 +0200)]
net/packet: fix overflow in check for tp_frame_nr
commit
8f8d28e4d6d815a391285e121c3a53a0b6cb9e7b upstream.
When calculating rb->frames_per_block * req->tp_block_nr the result
can overflow.
Add a check that tp_block_size * tp_block_nr <= UINT_MAX.
Since frames_per_block <= tp_block_size, the expression would
never overflow.
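One way to see why the check is sufficient: validate the product in
64-bit arithmetic so the check itself cannot wrap, then rely on
frames_per_block <= tp_block_size for the real product. A small
illustration with made-up values:
#include <limits.h>
#include <stdint.h>
#include <stdio.h>
int main(void)
{
        uint32_t tp_block_size = 1u << 20;   /* user-supplied */
        uint32_t tp_block_nr   = 1u << 13;   /* user-supplied */
        /* Widen before multiplying so this comparison cannot overflow. */
        if ((uint64_t)tp_block_size * tp_block_nr > UINT_MAX) {
                printf("rejected: ring too large\n");
                return 1;
        }
        /* Since frames_per_block <= tp_block_size,
         * frames_per_block * tp_block_nr <= tp_block_size * tp_block_nr,
         * so the later 32-bit product cannot overflow either. */
        printf("accepted\n");
        return 0;
}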
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Michael Ellerman [Thu, 23 Apr 2015 07:27:12 +0000 (17:27 +1000)]
powerpc: Reject binutils 2.24 when building little endian
commit
60e065f70bdb0b0e916389024922ad40f3270c96 upstream.
There is a bug in binutils 2.24 which causes miscompilation if we're
building little endian and using weak symbols (which the kernel does).
It is fixed in binutils commit
57fa7b8c7e59 "Correct elf_merge_st_other
arguments for weak symbols", which is in binutils 2.25 and has been
backported to the binutils 2.24 branch and has been picked up by most
distros it seems.
However if we're running stock 2.24 (no extra version) then the bug is
present, so check for that and bail.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Yazen Ghannam [Thu, 30 Mar 2017 11:17:14 +0000 (13:17 +0200)]
x86/mce/AMD: Give a name to MCA bank 3 when accessed with legacy MSRs
commit
29f72ce3e4d18066ec75c79c857bee0618a3504b upstream.
MCA bank 3 is reserved on systems pre-Fam17h, so it didn't have a name.
However, MCA bank 3 is defined on Fam17h systems and can be accessed
using legacy MSRs. Without a name we get a stack trace on Fam17h systems
when trying to register sysfs files for bank 3 on kernels that don't
recognize Scalable MCA.
Call MCA bank 3 "decode_unit" since this is what it represents on
Fam17h. This will allow kernels without SMCA support to see this bank on
Fam17h+ and prevent the stack trace. This will not affect older systems
since this bank is reserved on them, i.e. it'll be ignored.
Tested on AMD Fam15h and Fam17h systems.
WARNING: CPU: 26 PID: 1 at lib/kobject.c:210 kobject_add_internal
kobject: (ffff88085bb256c0): attempted to be registered with empty name!
...
Call Trace:
kobject_add_internal
kobject_add
kobject_create_and_add
threshold_create_device
threshold_init_device
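Conceptually the fix just gives the previously-empty slot in the
bank-name table a real name, so the sysfs kobject is never registered
with an empty string. A simplified stand-alone sketch (the table here is
illustrative, not the kernel's actual array):
#include <stdio.h>
#include <string.h>
static const char * const bank_names[] = {
        "load_store", "insn_fetch", "combined_unit", "decode_unit",
};
static int register_bank(unsigned int bank)
{
        if (!strlen(bank_names[bank])) {
                printf("bank %u: refusing to register an empty name\n", bank);
                return -1;
        }
        printf("bank %u: registered as %s\n", bank, bank_names[bank]);
        return 0;
}
int main(void)
{
        /* With "" in slot 3 this would trip the warning; now it has a name. */
        return register_bank(3);
}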
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1490102285-3659-1-git-send-email-Yazen.Ghannam@amd.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Sebastian Siewior [Wed, 22 Feb 2017 16:15:21 +0000 (17:15 +0100)]
ubi/upd: Always flush after prepared for an update
commit
9cd9a21ce070be8a918ffd3381468315a7a76ba6 upstream.
In commit
6afaf8a484cb ("UBI: flush wl before clearing update marker") I
managed to trigger and fix a similar bug. Here is another variant which
I assumed wouldn't matter back then, but it turns out UBI has a check
for it and will error out like this:
|ubi0 warning: validate_vid_hdr: inconsistent used_ebs
|ubi0 error: validate_vid_hdr: inconsistent VID header at PEB 592
All you need to trigger this is "ubiupdatevol /dev/ubi0_0 file" plus a
powercut in the middle of the operation.
ubi_start_update() sets the update-marker and puts all EBs on the erase
list. After that userland can proceed to write new data while the old
EBs aren't erased completely. A powercut at this point is usually not that
much of a tragedy. UBI won't give read access to the static volume
because it has the update marker. It will most likely set the corrupted
flag because it misses some EBs.
So we are all good, unless the size of the image that has been written
differs from the old image by at least one EB. In that
case UBI will find two different values for `used_ebs' and refuse to
attach the image with the error message mentioned above.
So in order not to get into this situation, the patch ensures that we
wait until everything is removed before we try to write any data.
The alternative would be to detect such a case and remove all EBs at
attach time, after we have processed the volume table and seen the
update marker set. Such a patch would be bigger and I doubt it is worth
it, since the write() will usually have to wait for a new EB from time
to time anyway, as there are usually not that many spare EBs that can be
used.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Vitaly Kuznetsov [Fri, 10 Jun 2016 00:08:56 +0000 (17:08 -0700)]
Drivers: hv: get rid of timeout in vmbus_open()
commit
396e287fa2ff46e83ae016cdcb300c3faa3b02f6 upstream.
vmbus_teardown_gpadl() can result in an infinite wait when it is called
on the 5 second timeout path in vmbus_open(). The issue is that the
gpadl teardown operation won't ever succeed for an opened channel, and
the timeout isn't always enough anyway. As a guest, we can always trust
the host to respond to our request (and there is nothing we can do if it
doesn't).
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Vitaly Kuznetsov [Sat, 4 Jun 2016 00:09:24 +0000 (17:09 -0700)]
Drivers: hv: don't leak memory in vmbus_establish_gpadl()
commit
7cc80c98070ccc7940fc28811c92cca0a681015d upstream.
In some cases create_gpadl_header() allocates submessages but we never
free them.
[sumits] Note for stable:
Upstream commit
4d63763296ab7865a98bc29cc7d77145815ef89f:
(Drivers: hv: get rid of redundant messagecount in create_gpadl_header())
changes the list usage to initialize the list header in all cases; that
patch isn't in stable, so the current patch is modified slightly from
the upstream commit to check whether the list is valid.
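The leak is the familiar pattern of partially-built message lists
abandoned on an error path; a generic stand-alone sketch of freeing such
a list (plain C, not the Hyper-V code):
#include <stdio.h>
#include <stdlib.h>
struct submsg { struct submsg *next; /* payload would follow */ };
static void free_submsgs(struct submsg *head)
{
        while (head) {
                struct submsg *next = head->next;
                free(head);
                head = next;
        }
}
int main(void)
{
        struct submsg *head = NULL;
        for (int i = 0; i < 3; i++) {
                struct submsg *m = calloc(1, sizeof(*m));
                if (!m) {               /* allocation failure mid-way */
                        free_submsgs(head);
                        return 1;
                }
                m->next = head;
                head = m;
        }
        free_submsgs(head);             /* error or teardown path: no leak */
        puts("freed all submessages");
        return 0;
}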
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Sumit Semwal <sumit.semwal@linaro.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Mantas M [Fri, 16 Dec 2016 08:30:59 +0000 (10:30 +0200)]
net: ipv6: check route protocol when deleting routes
commit
c2ed1880fd61a998e3ce40254a99a2ad000f1a7d upstream.
The protocol field is checked when deleting IPv4 routes, but ignored for
IPv6, which causes problems with routing daemons accidentally deleting
externally set routes (observed by multiple bird6 users).
This can be verified using `ip -6 route del <prefix> proto something`.
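The semantics being restored match IPv4: a delete request that names a
protocol only matches routes carrying that protocol, while protocol 0
(RTPROT_UNSPEC) matches any. A tiny illustration of that matching rule
(stand-alone, not the kernel's fib code):
#include <stdio.h>
/* Skip a route when the request names a protocol and it doesn't match. */
static int proto_matches(unsigned char req_proto, unsigned char rt_proto)
{
        return req_proto == 0 || req_proto == rt_proto;
}
int main(void)
{
        unsigned char rtprot_bird = 12, rtprot_static = 4;
        printf("bird vs static: %d\n", proto_matches(rtprot_bird, rtprot_static)); /* 0: keep */
        printf("any vs static:  %d\n", proto_matches(0, rtprot_static));           /* 1: delete */
        return 0;
}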
Signed-off-by: Mantas Mikulėnas <grawity@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Willy Tarreau <w@1wt.eu>