Kalderon, Michal [Mon, 5 Mar 2018 08:50:11 +0000 (10:50 +0200)]
RDMA/qedr: Fix iWARP write and send with immediate
[ Upstream commit
551e1c67b4207455375a2e7a285dea1c7e8fc361 ]
iWARP does not support RDMA WRITE or SEND with immediate data.
The driver should check this before submitting to the FW and return an
immediate error.
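A minimal sketch of such a check, placed in the driver's post_send path (the function name, device handle and port number below are illustrative, not the exact qedr code):

	static int check_iwarp_imm(struct ib_device *ibdev, u8 port,
				   const struct ib_send_wr *wr)
	{
		/* iWARP has no wire format for immediate data */
		if (rdma_protocol_iwarp(ibdev, port) &&
		    (wr->opcode == IB_WR_SEND_WITH_IMM ||
		     wr->opcode == IB_WR_RDMA_WRITE_WITH_IMM))
			return -EINVAL;	/* fail before anything reaches the FW */
		return 0;
	}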
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kalderon, Michal [Mon, 5 Mar 2018 08:50:10 +0000 (10:50 +0200)]
RDMA/qedr: Fix kernel panic when running fio over NFSoRDMA
[ Upstream commit
e3fd112cbf21d049faf64ba1471d72b93c22109a ]
Race in qedr_poll_cq: latest_cqe wasn't protected by a lock, leading
to a case where two contexts accessing poll_cq at the same time left
one of them with a pointer to an old latest_cqe, reading an invalid
cqe element.
Signed-off-by: Amit Radzi <Amit.Radzi@cavium.com>
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com>
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Davidlohr Bueso [Mon, 22 Jan 2018 17:21:37 +0000 (09:21 -0800)]
ia64/err-inject: Use get_user_pages_fast()
[ Upstream commit
69c907022a7d9325cdc5c9dd064571e445df9a47 ]
At the point of sysfs callback, the call to gup is
done without mmap_sem (or any lock for that matter).
This is racy. As such, use the get_user_pages_fast()
alternative and safely avoid taking the lock, if possible.
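Schematically, the change is (the page pointer and the 4.x-era get_user_pages_fast() signature with a write flag are shown for illustration):

	/* before: gup without mmap_sem held by this sysfs path -- racy */
	ret = get_user_pages(virt_addr, 1, FOLL_WRITE, &page, NULL);

	/* after: lockless fast variant */
	ret = get_user_pages_fast(virt_addr, 1, 1 /* write */, &page);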
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pierre-Yves Kerbrat [Fri, 26 Jan 2018 10:24:12 +0000 (11:24 +0100)]
e1000e: allocate ring descriptors with dma_zalloc_coherent
[ Upstream commit
aea3fca005fb45f80869f2e8d56fd4e64c1d1fdb ]
Descriptor rings were not zero-initialized when allocated.
When the area contained garbage data, it caused skb_over_panic in
e1000_clean_rx_irq (if the data had the E1000_RXD_STAT_DD bit set).
This patch makes use of dma_zalloc_coherent to make sure the
ring is memset to 0, preventing the area from containing garbage.
Following is the signature of the panic:
IODDR0@0.0: skbuff: skb_over_panic: text:80407b20 len:64010 put:64010 head:ab46d800 data:ab46d842 tail:0xab47d24c end:0xab46df40 dev:eth0
IODDR0@0.0: BUG: failure at net/core/skbuff.c:105/skb_panic()!
IODDR0@0.0: Kernel panic - not syncing: BUG!
IODDR0@0.0:
IODDR0@0.0: Process swapper/0 (pid: 0, threadinfo=81728000, task=8173cc00 ,cpu: 0)
IODDR0@0.0: SP = <815a1c0c>
IODDR0@0.0: Stack: 00000001 b2d89800 815e33ac ea73c040 00000001 60040003 0000fa0a 00000002
IODDR0@0.0:        804540c0 815a1c70 b2744000 602ac070 815a1c44 b2d89800 8173cc00 815a1c08
IODDR0@0.0:        00000006 815a1b50 00000000 80079434 00000001 ab46df40 b2744000 b2d89800
IODDR0@0.0:        0000fa0a 8045745c 815a1c88 0000fa0a 80407b20 b2789f80 00000005 80407b20
IODDR0@0.0: Call Trace:
IODDR0@0.0: [<804540bc>] skb_panic+0xa4/0xa8
IODDR0@0.0: [<80079430>] console_unlock+0x2f8/0x6d0
IODDR0@0.0: [<80457458>] skb_put+0xa0/0xc0
IODDR0@0.0: [<80407b1c>] e1000_clean_rx_irq+0x2dc/0x3e8
IODDR0@0.0: [<80407b1c>] e1000_clean_rx_irq+0x2dc/0x3e8
IODDR0@0.0: [<804079c8>] e1000_clean_rx_irq+0x188/0x3e8
IODDR0@0.0: [<80407b1c>] e1000_clean_rx_irq+0x2dc/0x3e8
IODDR0@0.0: [<80468b48>] __dev_kfree_skb_any+0x88/0xa8
IODDR0@0.0: [<804101ac>] e1000e_poll+0x94/0x288
IODDR0@0.0: [<8046e9d4>] net_rx_action+0x19c/0x4e8
IODDR0@0.0: ...
IODDR0@0.0: Maximum depth to print reached. Use kstack=<maximum_depth_to_print> To specify a custom value (where 0 means to display the full backtrace)
IODDR0@0.0: ---[ end Kernel panic - not syncing: BUG!
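A sketch of the allocation change (ring field names as commonly used in e1000e; treat this as illustrative):

	/* before: descriptor memory returned with stale contents */
	ring->desc = dma_alloc_coherent(&pdev->dev, ring->size,
					&ring->dma, GFP_KERNEL);

	/* after: descriptor memory is zeroed, so no stale DD bits are seen */
	ring->desc = dma_zalloc_coherent(&pdev->dev, ring->size,
					 &ring->dma, GFP_KERNEL);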
Signed-off-by: Pierre-Yves Kerbrat <pkerbrat@kalray.eu>
Signed-off-by: Marius Gligor <mgligor@kalray.eu>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Reviewed-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Benjamin Poirier [Tue, 20 Feb 2018 06:12:00 +0000 (15:12 +0900)]
e1000e: Fix check_for_link return value with autoneg off
[ Upstream commit
4e7dc08e57c95673d2edaba8983c3de4dd1f65f5 ]
When autoneg is off, the .check_for_link callback functions clear the
get_link_status flag and systematically return a "pseudo-error". This means
that the link is not detected as up until the next execution of the
e1000_watchdog_task() 2 seconds later.
Fixes:
19110cfbb34d ("e1000e: Separate signaling for link check/link up")
Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Acked-by: Sasha Neftin <sasha.neftin@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Lüssing [Sun, 4 Mar 2018 12:08:17 +0000 (13:08 +0100)]
batman-adv: Fix multicast packet loss with a single WANT_ALL_IPV4/6 flag
[ Upstream commit
74c12c630fe310eb7fcae1b292257d47781fff0a ]
As the kernel doc also describes, the code is supposed to skip adding
multicast TT entries if both the WANT_ALL_IPV4 and WANT_ALL_IPV6 flags
are present.
Unfortunately, the current code skips adding multicast TT entries even
if only one of WANT_ALL_IPV4 or WANT_ALL_IPV6 is present.
This could lead to IPv6 multicast packet loss if only an IGMP but not an
MLD querier is present, for instance, or vice versa.
Fixes:
687937ab3489 ("batman-adv: Add multicast optimization support for bridged setups")
Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jayachandran C [Wed, 28 Feb 2018 10:52:20 +0000 (02:52 -0800)]
watchdog: sbsa: use 32-bit read for WCV
[ Upstream commit
93ac3deb7c220cbcec032a967220a1f109d58431 ]
According to SBSA spec v3.1 section 5.3:
All registers are 32 bits in size and should be accessed using
32-bit reads and writes. If an access size other than 32 bits
is used then the results are IMPLEMENTATION DEFINED.
[...]
The Generic Watchdog is little-endian
The current code uses readq to read the watchdog compare register
which does a 64-bit access. This fails on ThunderX2 which does not
implement 64-bit access to this register.
Fix this by using lo_hi_readq() that does two 32-bit reads.
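Schematically (register offset and struct field named as in the sbsa_gwdt driver; shown as a sketch, not the exact diff):

	#include <linux/io-64-nonatomic-lo-hi.h>

	/* before: one 64-bit read, IMPLEMENTATION DEFINED per SBSA */
	timeleft += readq(gwdt->control_base + SBSA_GWDT_WCV);

	/* after: two 32-bit reads, low word then high word */
	timeleft += lo_hi_readq(gwdt->control_base + SBSA_GWDT_WCV);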
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Igor Pylypiv [Wed, 28 Feb 2018 08:59:12 +0000 (00:59 -0800)]
watchdog: f71808e_wdt: Fix magic close handling
[ Upstream commit
7bd3e7b743956afbec30fb525bc3c5e22e3d475c ]
Watchdog close is "expected" when any byte written is 'V', not just the last one.
Writing "V" to the device fails because the last byte written is the trailing newline rather than 'V':
$ echo V > /dev/watchdog
f71808e_wdt: Unexpected close, not stopping watchdog!
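A sketch of the corrected scan over the write buffer (variable names are illustrative, not the exact driver code):

	/* any 'V' in the buffer arms the expected-close flag */
	for (i = 0; i < count; i++) {
		char c;

		if (get_user(c, buf + i))
			return -EFAULT;
		if (c == 'V')
			expect_close = true;
	}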
Signed-off-by: Igor Pylypiv <igor.pylypiv@gmail.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sara Sharon [Tue, 2 Jan 2018 09:40:15 +0000 (11:40 +0200)]
iwlwifi: mvm: fix TX of CCMP 256
[ Upstream commit
de04d4fbf87b769ab18c480e4f020c53e74bbdd2 ]
We don't have enough room in the TX command for a CCMP 256
key, and need to use the key from the table instead.
Fixes:
3264bf032bd9 ("[BUGFIX] iwlwifi: mvm: Fix CCMP IV setting")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Mackerras [Fri, 2 Mar 2018 04:38:04 +0000 (15:38 +1100)]
KVM: PPC: Book3S HV: Fix VRMA initialization with 2MB or 1GB memory backing
[ Upstream commit
debd574f4195e205ba505b25e19b2b797f4bcd94 ]
The current code for initializing the VRMA (virtual real memory area)
for HPT guests requires the page size of the backing memory to be one
of 4kB, 64kB or 16MB. With a radix host we have the possibility that
the backing memory page size can be 2MB or 1GB. In these cases, if the
guest switches to HPT mode, KVM will not initialize the VRMA and the
guest will fail to run.
In fact it is not necessary that the VRMA page size is the same as the
backing memory page size; any VRMA page size less than or equal to the
backing memory page size is acceptable. Therefore we now choose the
largest page size out of the set {4k, 64k, 16M} which is not larger
than the backing memory page size.
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Ellerman [Mon, 26 Feb 2018 04:22:22 +0000 (15:22 +1100)]
selftests/powerpc: Skip the subpage_prot tests if the syscall is unavailable
[ Upstream commit
cd4a6f3ab4d80cb919d15897eb3cbc85c2009d4b ]
The subpage_prot syscall is only functional when the system is using
the Hash MMU. Since commit 5b2b80714796 ("powerpc/mm: Invalidate
subpage_prot() system call on radix platforms") it returns ENOENT when
the Radix MMU is active. Currently this just makes the test fail.
Additionally the syscall is not available if the kernel is built with
4K pages, or if CONFIG_PPC_SUBPAGE_PROT=n, in which case it returns
ENOSYS because the syscall is missing entirely.
So check explicitly for ENOENT and ENOSYS and skip if we see either of
those.
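A sketch of the skip logic in the test (addr, the length and map are whatever the test set up earlier; how a skip is reported depends on the selftest harness):

	rc = syscall(__NR_subpage_prot, addr, 0x4000, map);
	if (rc == -1 && (errno == ENOENT || errno == ENOSYS)) {
		/* Radix MMU, 4K pages, or CONFIG_PPC_SUBPAGE_PROT=n */
		printf("subpage_prot syscall unavailable, skipping\n");
		return 0;	/* report a skip instead of a failure */
	}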
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Filipe Manana [Tue, 6 Feb 2018 20:39:20 +0000 (20:39 +0000)]
Btrfs: send, fix issuing write op when processing hole in no data mode
[ Upstream commit
d4dfc0f4d39475ccbbac947880b5464a74c30b99 ]
When doing an incremental send of a filesystem with the no-holes feature
enabled, we end up issuing a write operation when using the no data mode
send flag, instead of issuing an update extent operation. Fix this by
issuing the update extent operation instead.
Trivial reproducer:
$ mkfs.btrfs -f -O no-holes /dev/sdc
$ mkfs.btrfs -f /dev/sdd
$ mount /dev/sdc /mnt/sdc
$ mount /dev/sdd /mnt/sdd
$ xfs_io -f -c "pwrite -S 0xab 0 32K" /mnt/sdc/foobar
$ btrfs subvolume snapshot -r /mnt/sdc /mnt/sdc/snap1
$ xfs_io -c "fpunch 8K 8K" /mnt/sdc/foobar
$ btrfs subvolume snapshot -r /mnt/sdc /mnt/sdc/snap2
$ btrfs send /mnt/sdc/snap1 | btrfs receive /mnt/sdd
$ btrfs send --no-data -p /mnt/sdc/snap1 /mnt/sdc/snap2 \
| btrfs receive -vv /mnt/sdd
Before this change the output of the second receive command is:
receiving snapshot snap2 uuid=f6922049-8c22-e544-9ff9-fc6755918447...
utimes
write foobar, offset 8192, len 8192
utimes foobar
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=f6922049-8c22-e544-9ff9-...
After this change it is:
receiving snapshot snap2 uuid=564d36a3-ebc8-7343-aec9-bf6fda278e64...
utimes
update_extent foobar: offset=8192, len=8192
utimes foobar
BTRFS_IOC_SET_RECEIVED_SUBVOL uuid=564d36a3-ebc8-7343-aec9-bf6fda278e64...
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Giulio Benetti [Wed, 28 Feb 2018 16:46:53 +0000 (17:46 +0100)]
drm/sun4i: Fix dclk_set_phase
[ Upstream commit
e64b6afa98f3629d0c0c46233bbdbe8acdb56f06 ]
The phase value is not shifted before being written.
Shift it left by 28 bits so it lands in the correct bit field.
Signed-off-by: Giulio Benetti <giulio.benetti@micronovasrl.com>
Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1519836413-35023-1-git-send-email-giulio.benetti@micronovasrl.com
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Roger Pau Monne [Wed, 28 Feb 2018 09:19:03 +0000 (09:19 +0000)]
xen/pirq: fix error path cleanup when binding MSIs
[ Upstream commit
910f8befdf5bccf25287d9f1743e3e546bcb7ce0 ]
Current cleanup in the error path of xen_bind_pirq_msi_to_irq is
wrong. First of all there's an off-by-one in the cleanup loop, which
can lead to unbinding wrong IRQs.
Secondly IRQs not bound won't be freed, thus leaking IRQ numbers.
Note that there's no need to differentiate between bound and unbound
IRQs when freeing them, __unbind_from_irq will deal with both of them
correctly.
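Roughly, the corrected error path walks back over every IRQ allocated for the group and lets __unbind_from_irq() deal with bound and unbound ones alike (a sketch of the shape of the fix, not a verbatim diff):

error_irq:
	while (nvec--)
		__unbind_from_irq(irq + nvec);
	mutex_unlock(&irq_mapping_update_lock);
	return ret;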
Fixes:
4892c9b4ada9f9 ("xen: add support for MSI message groups")
Reported-by: Hooman Mirhadi <mirhadih@amazon.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Amit Shah <aams@amazon.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Max Gurtovoy [Wed, 24 Jan 2018 15:31:45 +0000 (17:31 +0200)]
nvmet: fix PSDT field check in command format
[ Upstream commit
bffd2b61670feef18d2535e9b53364d270a1c991 ]
PSDT field section according to NVM_Express-1.3:
"This field specifies whether PRPs or SGLs are used for any data
transfer associated with the command. PRPs shall be used for all
Admin commands for NVMe over PCIe. SGLs shall be used for all Admin
and I/O commands for NVMe over Fabrics. This field shall be set to
01b for NVMe over Fabrics 1.0 implementations."
Suggested-by: Idan Burstein <idanb@mellanox.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Joey Pabalinas [Wed, 28 Feb 2018 08:05:53 +0000 (22:05 -1000)]
net/tcp/illinois: replace broken algorithm reference link
[ Upstream commit
ecc832758a654e375924ebf06a4ac971acb5ce60 ]
The link to the pdf containing the algorithm description is now a
dead link; it seems http://www.ifp.illinois.edu/~srikant/ has been
moved to https://sites.google.com/a/illinois.edu/srikant/ and none of
the original papers can be found there...
I have replaced it with the only working copy I was able to find.
n.b. there is also a copy available at:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.296.6350&rep=rep1&type=pdf
However, this seems to only be a *cached* version, so I am unsure
exactly how reliable that link can be expected to remain over time
and have decided against using that one.
Signed-off-by: Joey Pabalinas <joeypabalinas@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Claudiu Manoil [Tue, 27 Feb 2018 15:33:10 +0000 (17:33 +0200)]
gianfar: Fix Rx byte accounting for ndev stats
[ Upstream commit
590399ddf9561f2ed0839311c8ae1be21597ba68 ]
Don't include in the Rx byte count of the packet sent up the stack:
the FCB (frame control block), the padding bytes inserted by
the controller into the frame payload, or the FCS. All of these are
being pulled out of the skb by gfar_process_frame().
This issue is old, likely from the driver's beginnings, however
it was amplified by the recent
commit d903ec77118c ("gianfar: simplify FCS handling and fix memory leak"),
which basically added the FCS to the Rx byte count and so brought
this to my attention.
Signed-off-by: Claudiu Manoil <claudiu.manoil@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Fri, 23 Feb 2018 20:55:59 +0000 (12:55 -0800)]
powerpc/boot: Fix random libfdt related build errors
[ Upstream commit
64c3f648c25d108f346fdc96c15180c6b7d250e9 ]
Once in a while I see build errors similar to the following
when building images from a clean tree.
Building powerpc:virtex-ml507:44x/virtex5_defconfig ... failed
------------
Error log:
arch/powerpc/boot/treeboot-akebono.c:37:20: fatal error:
libfdt.h: No such file or directory
Building powerpc:bamboo:smpdev:44x/bamboo_defconfig ... failed
------------
Error log:
arch/powerpc/boot/treeboot-akebono.c:37:20: fatal error:
libfdt.h: No such file or directory
arch/powerpc/boot/treeboot-currituck.c:35:20: fatal error:
libfdt.h: No such file or directory
Rebuilds will succeed.
Turns out that several source files in arch/powerpc/boot/ include
libfdt.h, but Makefile dependencies are incomplete. Let's fix that.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Fainelli [Tue, 27 Feb 2018 01:00:35 +0000 (17:00 -0800)]
ARM: dts: NSP: Fix amount of RAM on BCM958625HR
[ Upstream commit
0a5aff64f20d92c5a6e9aeed7b5950b0b817bcd9 ]
Jon attempted to fix the amount of RAM on the BCM958625HR in commit
c53beb47f621 ("ARM: dts: NSP: Correct RAM amount for BCM958625HR board"),
but it seems like we tripped over some poorly documented schematics.
The top-level page of the schematics says the board has 2GB, but when
you end up scrolling to page 6, you see two 4Gbit (512MB) chips; what
the bootloader really initializes is only 512MB, and any attempt to use
more than that results in data aborts. Fix this again back to 512MB.
Fixes:
c53beb47f621 ("ARM: dts: NSP: Correct RAM amount for BCM958625HR board")
Acked-by: Jon Mason <jon.mason@broadcom.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xin Long [Tue, 27 Feb 2018 11:19:41 +0000 (19:19 +0800)]
sit: fix IFLA_MTU ignored on NEWLINK
[ Upstream commit
2b3957c34b6d7f03544b12ebbf875eee430745db ]
Commit 128bb975dc3c ("ip6_gre: init dev->mtu and dev->hard_header_len
correctly") fixed IFLA_MTU being ignored on NEWLINK for ip6_gre. The same
mtu fix is also needed for sit.
Note that the dev->hard_header_len setting for sit works fine, no need to
fix it. sit is actually an ipv4 tunnel; it can't call ip6_tnl_change_mtu
to set the mtu.
Reported-by: Jianlin Shi <jishi@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xin Long [Tue, 27 Feb 2018 11:19:40 +0000 (19:19 +0800)]
ip6_tunnel: fix IFLA_MTU ignored on NEWLINK
[ Upstream commit
a6aa80446234ec0ad38eecdb8efc59e91daae565 ]
Commit 128bb975dc3c ("ip6_gre: init dev->mtu and dev->hard_header_len
correctly") fixed IFLA_MTU ignored on NEWLINK for ip6_gre. The same
mtu fix is also needed for ip6_tunnel.
Note that dev->hard_header_len setting for ip6_tunnel works fine,
no need to fix it.
Reported-by: Jianlin Shi <jishi@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tang Junhui [Tue, 27 Feb 2018 17:49:30 +0000 (09:49 -0800)]
bcache: fix kcrashes with fio in RAID5 backend dev
[ Upstream commit
60eb34ec5526e264c2bbaea4f7512d714d791caf ]
Kernel crashed when running fio on a RAID5 backend bcache device; the call
trace is below:
[ 440.012034] kernel BUG at block/blk-ioc.c:146!
[ 440.012696] invalid opcode: 0000 [#1] SMP NOPTI
[ 440.026537] CPU: 2 PID: 2205 Comm: md127_raid5 Not tainted 4.15.0 #8
[ 440.027441] Hardware name: HP ProLiant MicroServer Gen8, BIOS J06 07/16/2015
[ 440.028615] RIP: 0010:put_io_context+0x8b/0x90
[ 440.029246] RSP: 0018:ffffa8c882b43af8 EFLAGS: 00010246
[ 440.029990] RAX: 0000000000000000 RBX: ffffa8c88294fca0 RCX: 00000000000f4240
[ 440.031006] RDX: 0000000000000004 RSI: 0000000000000286 RDI: ffffa8c88294fca0
[ 440.032030] RBP: ffffa8c882b43b10 R08: 0000000000000003 R09: ffff949cb80c1700
[ 440.033206] R10: 0000000000000104 R11: 000000000000b71c R12: 0000000000001000
[ 440.034222] R13: 0000000000000000 R14: ffff949cad84db70 R15: ffff949cb11bd1e0
[ 440.035239] FS:  0000000000000000(0000) GS:ffff949cba280000(0000) knlGS:0000000000000000
[ 440.060190] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 440.084967] CR2: 00007ff0493ef000 CR3: 00000002f1e0a002 CR4: 00000000001606e0
[ 440.110498] Call Trace:
[ 440.135443] bio_disassociate_task+0x1b/0x60
[ 440.160355] bio_free+0x1b/0x60
[ 440.184666] bio_put+0x23/0x30
[ 440.208272] search_free+0x23/0x40 [bcache]
[ 440.231448] cached_dev_write_complete+0x31/0x70 [bcache]
[ 440.254468] closure_put+0xb6/0xd0 [bcache]
[ 440.277087] request_endio+0x30/0x40 [bcache]
[ 440.298703] bio_endio+0xa1/0x120
[ 440.319644] handle_stripe+0x418/0x2270 [raid456]
[ 440.340614] ? load_balance+0x17b/0x9c0
[ 440.360506] handle_active_stripes.isra.58+0x387/0x5a0 [raid456]
[ 440.380675] ? __release_stripe+0x15/0x20 [raid456]
[ 440.400132] raid5d+0x3ed/0x5d0 [raid456]
[ 440.419193] ? schedule+0x36/0x80
[ 440.437932] ? schedule_timeout+0x1d2/0x2f0
[ 440.456136] md_thread+0x122/0x150
[ 440.473687] ? wait_woken+0x80/0x80
[ 440.491411] kthread+0x102/0x140
[ 440.508636] ? find_pers+0x70/0x70
[ 440.524927] ? kthread_associate_blkcg+0xa0/0xa0
[ 440.541791] ret_from_fork+0x35/0x40
[ 440.558020] Code: c2 48 00 5b 41 5c 41 5d 5d c3 48 89 c6 4c 89 e7 e8 bb c2 48 00 48 8b 3d bc 36 4b 01 48 89 de e8 7c f7 e0 ff 5b 41 5c 41 5d 5d c3 <0f> 0b 0f 1f 00 0f 1f 44 00 00 55 48 8d 47 b8 48 89 e5 41 57 41
[ 440.610020] RIP: put_io_context+0x8b/0x90 RSP: ffffa8c882b43af8
[ 440.628575] ---[ end trace a1fd79d85643a73e ]--
The crash happens when a bypass IO comes in; in that scenario
s->iop.bio points to s->orig_bio. In search_free(), the code finishes
s->orig_bio by calling bio_complete(), and after that s->iop.bio becomes
invalid, so the kernel crashes when calling bio_put(). Maybe it is the
upper layer's fault, since the bio should not be freed before we call
bio_put(), but we had better call bio_put() first, before calling
bio_complete() to notify the upper layer that this bio has ended.
This patch moves bio_complete() below bio_put() to avoid the kernel crash.
[mlyle: fixed commit subject for character limits]
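A sketch of the reordering in search_free() (field names abbreviated from the bcache code; treat as illustrative):

	static void search_free(struct closure *cl)
	{
		struct search *s = container_of(cl, struct search, cl);

		if (s->iop.bio)
			bio_put(s->iop.bio);	/* drop the reference first ...    */

		bio_complete(s);		/* ... then notify the upper layer */
		closure_debug_destroy(cl);
		mempool_free(s, s->d->c->search);
	}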
Reported-by: Matthias Ferdinand <bcache@mfedv.net>
Tested-by: Matthias Ferdinand <bcache@mfedv.net>
Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yoshihiro Shimoda [Wed, 14 Feb 2018 09:40:12 +0000 (18:40 +0900)]
dmaengine: rcar-dmac: fix max_chunk_size for R-Car Gen3
[ Upstream commit
d716d9b702bb759dd6fb50804f10a174bd156d71 ]
According to the R-Car Gen3 Rev.0.80 manual, the DMATCR can be set to
16,777,215 at maximum. So, this patch fixes max_chunk_size for
safety on all SoCs. Otherwise, a system may hang if the DMATCR
is set to 0 on R-Car Gen3.
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Airlie [Wed, 21 Feb 2018 01:50:03 +0000 (11:50 +1000)]
virtio-gpu: fix ioctl and expose the fixed status to userspace.
[ Upstream commit
9a191b114906457c4b2494c474f58ae4142d4e67 ]
This exposes to mesa that it can use the fixed ioctl for querying
later cap sets, cap set 1 is forever frozen in time.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20180221015003.22884-1-airlied@gmail.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Mon, 26 Feb 2018 03:12:10 +0000 (19:12 -0800)]
r8152: fix tx packets accounting
[ Upstream commit
4c27bf3c5b7434ccb9ab962301da661c26b467a4 ]
r8152 driver handles TSO packets (limited to ~16KB) quite well,
but pretends each TSO logical packet is a single packet on the wire.
There is also some error since headers are accounted once, but
error rate is small enough that we do not care.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ramon Fried [Sun, 25 Feb 2018 07:49:37 +0000 (09:49 +0200)]
qrtr: add MODULE_ALIAS macro to smd
[ Upstream commit
c77f5fbbefc04612755117775e8555c2a7006cac ]
Added MODULE_ALIAS("rpmsg:IPCRTR") to ensure qrtr-smd and qrtr will load
when IPCRTR channel is detected.
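The addition itself is a one-liner at the bottom of the qrtr-smd source:

	MODULE_ALIAS("rpmsg:IPCRTR");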
Signed-off-by: Ramon Fried <rfried@codeaurora.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Mon, 26 Feb 2018 18:41:47 +0000 (13:41 -0500)]
ARM: orion5x: Revert commit 4904dbda41c8.
[ Upstream commit
13a55372b64e00e564a08d785ca87bd9d454ba30 ]
It is not valid for orion5x to use mac_pton().
First of all, the orion5x buffer is not NULL terminated. mac_pton()
has no business operating on non-NULL terminated buffers because
only the caller can know that this is valid and in what manner it
is ok to parse this NULL'less buffer.
Second of all, orion5x operates on an __iomem pointer, which cannot
be dereferenced using normal C pointer operations. Accesses to
such areas must be performed with the proper iomem accessors.
Fixes:
4904dbda41c8 ("ARM: orion5x: use mac_pton() helper")
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chengguang Xu [Fri, 9 Feb 2018 12:40:59 +0000 (20:40 +0800)]
ceph: fix dentry leak when failing to init debugfs
[ Upstream commit
18106734b512664a8541026519ce4b862498b6c3 ]
When failing from ceph_fs_debugfs_init() in ceph_real_mount(),
the dput of root_dentry is missing and this causes slab errors,
so change the calling order of ceph_fs_debugfs_init() and
open_root_dentry() and do some cleanups to avoid this issue.
Signed-off-by: Chengguang Xu <cgxu519@icloud.com>
Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Colin Ian King [Mon, 26 Feb 2018 11:36:14 +0000 (11:36 +0000)]
clocksource/drivers/fsl_ftm_timer: Fix error return checking
[ Upstream commit
f287eb9013ccf199cbfa4eabd80c36fedfc15a73 ]
The error checks on freq for a negative error return always fail because
freq is unsigned and can never be negative. Fix this by making freq a
signed long.
Detected with Coccinelle:
drivers/clocksource/fsl_ftm_timer.c:287:5-9: WARNING: Unsigned expression compared with zero: freq <= 0
drivers/clocksource/fsl_ftm_timer.c:291:5-9: WARNING: Unsigned expression compared with zero: freq <= 0
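A sketch of the type change that makes those checks meaningful (helper name as used in the driver, shown here for illustration):

	long freq;	/* was: unsigned long freq, so "freq <= 0" was never true */

	freq = ftm_clk_init(np);
	if (freq <= 0)
		return -EINVAL;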
Fixes:
2529c3a33079 ("clocksource: Add Freescale FlexTimer Module (FTM) timer support")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: kernel-janitors@vger.kernel.org
Link: https://lkml.kernel.org/r/20180226113614.3092-1-colin.king@canonical.com
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jianchao Wang [Thu, 15 Feb 2018 11:13:41 +0000 (19:13 +0800)]
nvme-pci: Fix nvme queue cleanup if IRQ setup fails
[ Upstream commit
f25a2dfc20e3a3ed8fe6618c331799dd7bd01190 ]
This patch fixes nvme queue cleanup if requesting an IRQ handler for
the queue's vector fails. It does this by resetting the cq_vector to
the uninitialized value of -1 so it is ignored for a controller reset.
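Schematically (queue_request_irq() and cq_vector as in nvme-pci; the placement is shown as a sketch, not the exact diff):

	result = queue_request_irq(nvmeq);
	if (result < 0) {
		/* mark the queue as having no IRQ vector so a reset skips it */
		nvmeq->cq_vector = -1;
		goto release_sq;
	}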
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
[changelog updates, removed misc whitespace changes]
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sven Eckelmann [Sat, 24 Feb 2018 11:03:37 +0000 (12:03 +0100)]
batman-adv: Fix netlink dumping of BLA backbones
[ Upstream commit
fce672db548ff19e76a08a32a829544617229bc2 ]
The function batadv_bla_backbone_dump_bucket must be able to handle
non-complete dumps of a single bucket. It tries to do that by saving the
latest dumped index in *idx_skip to inform the caller about the current
state.
But the caller only assumes that buckets were not completely dumped when
the return code is non-zero. This function must therefore also return a
non-zero index when the dumping of an entry failed. Otherwise the caller
will just skip all remaining buckets.
And the function must also reset *idx_skip back to zero when it finished a
bucket. Otherwise it will skip the same number of entries in the next
bucket as the previous one had.
Fixes:
ea4152e11716 ("batman-adv: add backbone table netlink support")
Reported-by: Linus Lüssing <linus.luessing@c0d3.blue>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sven Eckelmann [Sat, 24 Feb 2018 11:03:36 +0000 (12:03 +0100)]
batman-adv: Fix netlink dumping of BLA claims
[ Upstream commit
b0264ecdfeab5f889b02ec54af7ca8cc1c245e2f ]
The function batadv_bla_claim_dump_bucket must be able to handle
non-complete dumps of a single bucket. It tries to do that by saving the
latest dumped index in *idx_skip to inform the caller about the current
state.
But the caller only assumes that buckets were not completely dumped when
the return code is non-zero. This function must therefore also return a
non-zero index when the dumping of an entry failed. Otherwise the caller
will just skip all remaining buckets.
And the function must also reset *idx_skip back to zero when it finished a
bucket. Otherwise it will skip the same number of entries in the next
bucket as the previous one had.
Fixes:
04f3f5bf1883 ("batman-adv: add B.A.T.M.A.N. Dump BLA claims via netlink")
Reported-by: Linus Lüssing <linus.luessing@c0d3.blue>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sven Eckelmann [Mon, 19 Feb 2018 13:08:53 +0000 (14:08 +0100)]
batman-adv: Ignore invalid batadv_v_gw during netlink send
[ Upstream commit
011c935fceae5252619ef730baa610c655281dda ]
The function batadv_v_gw_dump stops the processing loop when
batadv_v_gw_dump_entry returns a non-0 return code. This should only
happen when the buffer is full. Otherwise, an empty message may be
returned by batadv_gw_dump. This empty message will then stop the netlink
dumping of gateway entries. At worst, not a single entry is returned to
userspace even when plenty of possible gateways exist.
Fixes:
b71bb6f924fe ("batman-adv: add B.A.T.M.A.N. V bat_gw_dump implementations")
Signed-off-by: Sven Eckelmann <sven.eckelmann@openmesh.com>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sven Eckelmann [Mon, 19 Feb 2018 13:08:52 +0000 (14:08 +0100)]
batman-adv: Ignore invalid batadv_iv_gw during netlink send
[ Upstream commit
10d570284258a30dc104c50787c5289ec49f3d23 ]
The function batadv_iv_gw_dump stops the processing loop when
batadv_iv_gw_dump_entry returns a non-0 return code. This should only
happen when the buffer is full. Otherwise, an empty message may be
returned by batadv_gw_dump. This empty message will then stop the netlink
dumping of gateway entries. At worst, not a single entry is returned to
userspace even when plenty of possible gateways exist.
Fixes:
efb766af06e3 ("batman-adv: add B.A.T.M.A.N. IV bat_gw_dump implementations")
Signed-off-by: Sven Eckelmann <sven.eckelmann@openmesh.com>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Westphal [Mon, 19 Feb 2018 00:24:53 +0000 (01:24 +0100)]
netfilter: ebtables: convert BUG_ONs to WARN_ONs
[ Upstream commit
fc6a5d0601c5ac1d02f283a46f60b87b2033e5ca ]
All of these conditions are not fatal and should have
been WARN_ONs from the get-go.
Convert them to WARN_ONs and bail out.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matthias Schiffer [Tue, 23 Jan 2018 09:59:50 +0000 (10:59 +0100)]
batman-adv: invalidate checksum on fragment reassembly
[ Upstream commit
3bf2a09da956b43ecfaa630a2ef9a477f991a46a ]
A more sophisticated implementation could try to combine fragment checksums
when all fragments have CHECKSUM_COMPLETE and are split at even offsets.
For now, we just set ip_summed to CHECKSUM_NONE to avoid "hw csum failure"
warnings in the kernel log when fragmented frames are received. In
consequence, skb_pull_rcsum() can be replaced with skb_pull().
Note that in usual setups, packets don't reach batman-adv with
CHECKSUM_COMPLETE (I assume NICs bail out of checksumming when they see
batadv's ethtype?), which is why the log messages do not occur on every
system using batman-adv. I could reproduce this issue by stacking
batman-adv on top of a VXLAN interface.
Fixes:
610bfc6bc99b ("batman-adv: Receive fragmented packets and merge")
Tested-by: Maximilian Wilhelm <max@sdn.clinic>
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matthias Schiffer [Tue, 23 Jan 2018 09:59:49 +0000 (10:59 +0100)]
batman-adv: fix packet checksum in receive path
[ Upstream commit
abd6360591d3f8259f41c34e31ac4826dfe621b8 ]
eth_type_trans() internally calls skb_pull(), which does not adjust the
skb checksum; skb_postpull_rcsum() is necessary to avoid log spam of the
form "bat0: hw csum failure" when packets with CHECKSUM_COMPLETE are
received.
Note that in usual setups, packets don't reach batman-adv with
CHECKSUM_COMPLETE (I assume NICs bail out of checksumming when they see
batadv's ethtype?), which is why the log messages do not occur on every
system using batman-adv. I could reproduce this issue by stacking
batman-adv on top of a VXLAN interface.
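A sketch of the receive-path adjustment (shown schematically around the eth_type_trans() call in the soft-interface receive function):

	skb->protocol = eth_type_trans(skb, soft_iface);
	/* eth_type_trans() pulled ETH_HLEN without touching the checksum;
	 * fold the pulled Ethernet header out of a CHECKSUM_COMPLETE value.
	 */
	skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);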
Fixes:
c6c8fea29769 ("net: Add batman-adv meshing protocol")
Tested-by: Maximilian Wilhelm <max@sdn.clinic>
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net>
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yufen Yu [Sat, 24 Feb 2018 04:05:56 +0000 (12:05 +0800)]
md/raid1: fix NULL pointer dereference
[ Upstream commit
3de59bb9d551428cbdc76a9ea57883f82e350b4d ]
In handle_write_finished(), if r1_bio->bios[m] != NULL, it thinks
the corresponding conf->mirrors[m].rdev is also not NULL. But, it
is not always true.
Even if some IO holds the replacement rdev (i.e. rdev->nr_pending.count > 0),
raid1_remove_disk() can still set the rdev to NULL. That means
bios[m] != NULL but mirrors[m].rdev is NULL, resulting in a NULL
pointer dereference in handle_write_finished and sync_request_write.
This patch can fix BUGs as follows:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000140
IP: [<ffffffff815bbbbd>] raid1d+0x2bd/0xfc0
PGD 12ab52067 PUD 12f587067 PMD 0
Oops: 0000 [#1] SMP
CPU: 1 PID: 2008 Comm: md3_raid1 Not tainted 4.1.44+ #130
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1.fc26 04/01/2014
Call Trace:
? schedule+0x37/0x90
? prepare_to_wait_event+0x83/0xf0
md_thread+0x144/0x150
? wake_atomic_t_function+0x70/0x70
? md_start_sync+0xf0/0xf0
kthread+0xd8/0xf0
? kthread_worker_fn+0x160/0x160
ret_from_fork+0x42/0x70
? kthread_worker_fn+0x160/0x160
BUG: unable to handle kernel NULL pointer dereference at 00000000000000b8
IP: sync_request_write+0x9e/0x980
PGD 800000007c518067 P4D 800000007c518067 PUD 8002b067 PMD 0
Oops: 0000 [#1] SMP PTI
CPU: 24 PID: 2549 Comm: md3_raid1 Not tainted 4.15.0+ #118
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1.fc26 04/01/2014
Call Trace:
? sched_clock+0x5/0x10
? sched_clock_cpu+0xc/0xb0
? flush_pending_writes+0x3a/0xd0
? pick_next_task_fair+0x4d5/0x5f0
? __switch_to+0xa2/0x430
raid1d+0x65a/0x870
? find_pers+0x70/0x70
? find_pers+0x70/0x70
? md_thread+0x11c/0x160
md_thread+0x11c/0x160
? finish_wait+0x80/0x80
kthread+0x111/0x130
? kthread_create_worker_on_cpu+0x70/0x70
? do_syscall_64+0x6f/0x190
? SyS_exit_group+0x10/0x10
ret_from_fork+0x35/0x40
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Shaohua Li <sh.li@alibaba-inc.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
BingJing Chang [Thu, 22 Feb 2018 05:34:46 +0000 (13:34 +0800)]
md: fix a potential deadlock of raid5/raid10 reshape
[ Upstream commit
8876391e440ba615b10eef729576e111f0315f87 ]
There is a potential deadlock if mount/umount happens when
raid5_finish_reshape() tries to grow the size of emulated disk.
How the deadlock happens?
1) The raid5 resync thread finished reshape (expanding array).
2) The mount or umount thread holds the VFS sb->s_umount lock and tries to
write critical data through to the raid5 emulated block device. So it
waits for the raid5 kernel thread handling stripes in order to finish its
I/Os.
3) In the routine of raid5 kernel thread, md_check_recovery() will be
called first in order to reap the raid5 resync thread. That is,
raid5_finish_reshape() will be called. In this function, it will try
to update conf and call VFS revalidate_disk() to grow the raid5
emulated block device. It will try to acquire VFS sb->s_umount lock.
The raid5 kernel thread cannot continue, so no one can handle mount/
umount I/Os (stripes). Once the write-through I/Os cannot be finished,
mount/umount will not release sb->s_umount lock. The deadlock happens.
The raid5 kernel thread backs an emulated block device. It is responsible
for handling I/Os (stripes) from upper layers. The emulated block device
should not request any I/Os on itself. That is, it should not call VFS
layer functions. (If it did, it would try to acquire VFS locks to
guarantee the I/O sequence.) So we have the resync thread to send
resync I/O requests and to wait for the results.
For solving this potential deadlock, we can put the size growth of the
emulated block device as the final step of reshape thread.
2017/12/29:
Thanks to Guoqing Jiang <gqjiang@suse.com>,
we confirmed that there is the same deadlock issue in raid10. It's
reproducible and can be fixed by this patch. For raid10.c, we can remove
the similar code to prevent deadlock as well, since it has already been
called before.
Reported-by: Alex Wu <alexwu@synology.com>
Reviewed-by: Alex Wu <alexwu@synology.com>
Reviewed-by: Chung-Chiang Cheng <cccheng@synology.com>
Signed-off-by: BingJing Chang <bingjingc@synology.com>
Signed-off-by: Shaohua Li <sh.li@alibaba-inc.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Will Deacon [Mon, 19 Feb 2018 14:55:55 +0000 (14:55 +0000)]
fs: dcache: Use READ_ONCE when accessing i_dir_seq
[ Upstream commit
8cc07c808c9d595e81cbe5aad419b7769eb2e5c9 ]
i_dir_seq is subject to concurrent modification by a cmpxchg or
store-release operation, so ensure that the relaxed access in
d_alloc_parallel uses READ_ONCE.
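The relaxed read in question becomes, schematically (retry-loop context as quoted in the next entry below):

	if (unlikely(READ_ONCE(parent->d_inode->i_dir_seq) != seq)) {
		hlist_bl_unlock(b);
		goto retry;
	}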
Reported-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Will Deacon [Mon, 19 Feb 2018 14:55:54 +0000 (14:55 +0000)]
fs: dcache: Avoid livelock between d_alloc_parallel and __d_add
[ Upstream commit
015555fd4d2930bc0c86952c46ad88b3392f66e4 ]
If d_alloc_parallel runs concurrently with __d_add, it is possible for
d_alloc_parallel to continuously retry whilst i_dir_seq has been
incremented to an odd value by __d_add:
CPU0:
__d_add
n = start_dir_add(dir);
cmpxchg(&dir->i_dir_seq, n, n + 1) == n
CPU1:
d_alloc_parallel
retry:
seq = smp_load_acquire(&parent->d_inode->i_dir_seq) & ~1;
hlist_bl_lock(b);
bit_spin_lock(0, (unsigned long *)b); // Always succeeds
CPU0:
__d_lookup_done(dentry)
hlist_bl_lock
bit_spin_lock(0, (unsigned long *)b); // Never succeeds
CPU1:
if (unlikely(parent->d_inode->i_dir_seq != seq)) {
hlist_bl_unlock(b);
goto retry;
}
Since the simple bit_spin_lock used to implement hlist_bl_lock does not
provide any fairness guarantees, then CPU1 can starve CPU0 of the lock
and prevent it from reaching end_dir_add(dir), therefore CPU1 cannot
exit its retry loop because the sequence number always has the bottom
bit set.
This patch resolves the livelock by not taking hlist_bl_lock in
d_alloc_parallel if the sequence counter is odd, since any subsequent
masked comparison with i_dir_seq will fail anyway.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Reported-by: Naresh Madhusudana <naresh.madhusudana@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Ott [Thu, 22 Feb 2018 12:05:41 +0000 (13:05 +0100)]
kvm: fix warning for CONFIG_HAVE_KVM_EVENTFD builds
[ Upstream commit
076467490b8176eb96eddc548a14d4135c7b5852 ]
Move the kvm_arch_irq_routing_update() prototype outside of
ifdef CONFIG_HAVE_KVM_EVENTFD guards to fix the following sparse warning:
arch/s390/kvm/../../../virt/kvm/irqchip.c:171:28: warning: symbol 'kvm_arch_irq_routing_update' was not declared. Should it be static?
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexey Kodanev [Thu, 22 Feb 2018 15:20:30 +0000 (18:20 +0300)]
macvlan: fix use-after-free in macvlan_common_newlink()
[ Upstream commit
4e14bf4236490306004782813b8b4494b18f5e60 ]
The following use-after-free was reported by KASan when running
LTP macvtap01 test on 4.16-rc2:
[10642.528443] BUG: KASAN: use-after-free in macvlan_common_newlink+0x12ef/0x14a0 [macvlan]
[10642.626607] Read of size 8 at addr ffff880ba49f2100 by task ip/18450
...
[10642.963873] Call Trace:
[10642.994352] dump_stack+0x5c/0x7c
[10643.035325] print_address_description+0x75/0x290
[10643.092938] kasan_report+0x28d/0x390
[10643.137971] ? macvlan_common_newlink+0x12ef/0x14a0 [macvlan]
[10643.207963] macvlan_common_newlink+0x12ef/0x14a0 [macvlan]
[10643.275978] macvtap_newlink+0x171/0x260 [macvtap]
[10643.334532] rtnl_newlink+0xd4f/0x1300
...
[10646.256176] Allocated by task 18450:
[10646.299964] kasan_kmalloc+0xa6/0xd0
[10646.343746] kmem_cache_alloc_trace+0xf1/0x210
[10646.397826] macvlan_common_newlink+0x6de/0x14a0 [macvlan]
[10646.464386] macvtap_newlink+0x171/0x260 [macvtap]
[10646.522728] rtnl_newlink+0xd4f/0x1300
...
[10647.022028] Freed by task 18450:
[10647.061549] __kasan_slab_free+0x138/0x180
[10647.111468] kfree+0x9e/0x1c0
[10647.147869] macvlan_port_destroy+0x3db/0x650 [macvlan]
[10647.211411] rollback_registered_many+0x5b9/0xb10
[10647.268715] rollback_registered+0xd9/0x190
[10647.319675] register_netdevice+0x8eb/0xc70
[10647.370635] macvlan_common_newlink+0xe58/0x14a0 [macvlan]
[10647.437195] macvtap_newlink+0x171/0x260 [macvtap]
Commit d02fd6e7d293 ("macvlan: Fix one possible double free") handles
the case when register_netdevice() invokes ndo_uninit() on error and
as a result frees the port. But the 'macvlan_port_get_rtnl(dev)' check
(it returns dev->rx_handler_data), which was added by this commit in order
to prevent a double free, is not quite correct:
* for macvlan it always returns NULL because 'lowerdev' is the one that
was used to register rx handler (port) in macvlan_port_create() as
well as to unregister it in macvlan_port_destroy().
* for macvtap it always returns a valid pointer because macvtap registers
its own rx handler before macvlan_common_newlink().
Fixes:
d02fd6e7d293 ("macvlan: Fix one possible double free")
Signed-off-by: Alexey Kodanev <alexey.kodanev@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pratyush Anand [Mon, 5 Feb 2018 13:28:01 +0000 (14:28 +0100)]
arm64: fix unwind_frame() for filtered out fn for function graph tracing
[ Upstream commit
9f416319f40cd857d2bb517630e5855a905ef3fb ]
do_task_stat() calls get_wchan(), which further does unwind_frame().
unwind_frame() restores frame->pc to original value in case function
graph tracer has modified a return address (LR) in a stack frame to hook
a function return. However, if function graph tracer has hit a filtered
function, then we can't unwind it as ftrace_push_return_trace() has
biased the index(frame->graph) with a 'huge negative'
offset(-FTRACE_NOTRACE_DEPTH).
Moreover, the arm64 stack walker defines the index (frame->graph) as an
unsigned int, which cannot be compared against a negative number.
Similar problem we can have with calling of walk_stackframe() from
save_stack_trace_tsk() or dump_backtrace().
This patch fixes unwind_frame() to test the index for -ve value and
restore index accordingly before we can restore frame->pc.
Reproducer:
cd /sys/kernel/debug/tracing/
echo schedule > set_graph_notrace
echo 1 > options/display-graph
echo wakeup > current_tracer
ps -ef | grep -i agent
Above commands result in:
Unable to handle kernel paging request at virtual address ffff801bd3d1e000
pgd = ffff8003cbe97c00
[ffff801bd3d1e000] *pgd=0000000000000000, *pud=0000000000000000
Internal error: Oops: 96000006 [#1] SMP
[...]
CPU: 5 PID: 11696 Comm: ps Not tainted 4.11.0+ #33
[...]
task: ffff8003c21ba000 task.stack: ffff8003cc6c0000
PC is at unwind_frame+0x12c/0x180
LR is at get_wchan+0xd4/0x134
pc : [<ffff00000808892c>] lr : [<ffff0000080860b8>] pstate: 60000145
sp : ffff8003cc6c3ab0
x29: ffff8003cc6c3ab0 x28: 0000000000000001
x27: 0000000000000026 x26: 0000000000000026
x25: 00000000000012d8 x24: 0000000000000000
x23: ffff8003c1c04000 x22: ffff000008c83000
x21: ffff8003c1c00000 x20: 000000000000000f
x19: ffff8003c1bc0000 x18: 0000fffffc593690
x17: 0000000000000000 x16: 0000000000000001
x15: 0000b855670e2b60 x14: 0003e97f22cf1d0f
x13: 0000000000000001 x12: 0000000000000000
x11: 00000000e8f4883e x10: 0000000154f47ec8
x9 : 0000000070f367c0 x8 : 0000000000000000
x7 : 00008003f7290000 x6 : 0000000000000018
x5 : 0000000000000000 x4 : ffff8003c1c03cb0
x3 : ffff8003c1c03ca0 x2 : 00000017ffe80000
x1 : ffff8003cc6c3af8 x0 : ffff8003d3e9e000
Process ps (pid: 11696, stack limit = 0xffff8003cc6c0000)
Stack: (0xffff8003cc6c3ab0 to 0xffff8003cc6c4000)
[...]
[<ffff00000808892c>] unwind_frame+0x12c/0x180
[<ffff000008305008>] do_task_stat+0x864/0x870
[<ffff000008305c44>] proc_tgid_stat+0x3c/0x48
[<ffff0000082fde0c>] proc_single_show+0x5c/0xb8
[<ffff0000082b27e0>] seq_read+0x160/0x414
[<ffff000008289e6c>] __vfs_read+0x58/0x164
[<ffff00000828b164>] vfs_read+0x88/0x144
[<ffff00000828c2e8>] SyS_read+0x60/0xc0
[<ffff0000080834a0>] __sys_trace_return+0x0/0x4
Fixes:
20380bb390a4 (arm64: ftrace: fix a stack tracer's output under function graph tracer)
Signed-off-by: Pratyush Anand <panand@redhat.com>
Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
[catalin.marinas@arm.com: replace WARN_ON with WARN_ON_ONCE]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Felix Fietkau [Fri, 23 Feb 2018 09:06:03 +0000 (10:06 +0100)]
mac80211: drop frames with unexpected DS bits from fast-rx to slow path
[ Upstream commit
b323ac19b7734a1c464b2785a082ee50bccd3b91 ]
Fixes rx for 4-addr packets in AP mode. These may be used for setting
up a 4-addr link for stations that are allowed to do so.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Samuel Neves [Wed, 21 Feb 2018 20:50:36 +0000 (20:50 +0000)]
x86/topology: Update the 'cpu cores' field in /proc/cpuinfo correctly across CPU hotplug operations
[ Upstream commit
4596749339e06dc7a424fc08a15eded850ed78b7 ]
Without this fix, /proc/cpuinfo will display an incorrect number
of CPU cores after bringing them offline and online again, as
exemplified below:
$ cat /proc/cpuinfo | grep cores
cpu cores : 4
cpu cores : 8
cpu cores : 8
cpu cores : 20
cpu cores : 4
cpu cores : 3
cpu cores : 2
cpu cores : 2
This patch fixes this by always zeroing the booted_cores variable
upon turning off a logical CPU.
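Schematically, in the CPU-offline teardown path (a sketch; the exact function in smpboot.c is not reproduced here):

	struct cpuinfo_x86 *c = &cpu_data(cpu);

	c->booted_cores = 0;	/* recomputed when the CPU is onlined again */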
Tested-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: jgross@suse.com
Cc: luto@kernel.org
Cc: prarit@redhat.com
Cc: vkuznets@redhat.com
Link: http://lkml.kernel.org/r/20180221205036.5244-1-sneves@dei.uc.pt
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrea Parri [Thu, 22 Feb 2018 09:24:48 +0000 (10:24 +0100)]
locking/xchg/alpha: Fix xchg() and cmpxchg() memory ordering bugs
[ Upstream commit
472e8c55cf6622d1c112dc2bc777f68bbd4189db ]
Successful RMW operations are supposed to be fully ordered, but
Alpha's xchg() and cmpxchg() do not meet this requirement.
Will Deacon noticed the bug:
> So MP using xchg:
>
> WRITE_ONCE(x, 1)
> xchg(y, 1)
>
> smp_load_acquire(y) == 1
> READ_ONCE(x) == 0
>
> would be allowed.
... which thus violates the above requirement.
Fix it by adding a leading smp_mb() to the xchg() and cmpxchg() implementations.
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519291488-5752-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Randy Dunlap [Tue, 13 Feb 2018 01:26:20 +0000 (17:26 -0800)]
integrity/security: fix digsig.c build error with header file
[ Upstream commit
120f3b11ef88fc38ce1d0ff9c9a4b37860ad3140 ]
security/integrity/digsig.c has build errors on some $ARCH due to a
missing header file, so add it.
security/integrity/digsig.c:146:2: error: implicit declaration of function 'vfree' [-Werror=implicit-function-declaration]
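The fix is the missing include at the top of security/integrity/digsig.c:

	#include <linux/vmalloc.h>	/* for vfree() */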
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Mimi Zohar <zohar@linux.vnet.ibm.com>
Cc: linux-integrity@vger.kernel.org
Link: http://kisskb.ellerman.id.au/kisskb/head/13396/
Signed-off-by: James Morris <james.morris@microsoft.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Thu, 22 Feb 2018 19:55:28 +0000 (20:55 +0100)]
regulatory: add NUL to request alpha2
[ Upstream commit
657308f73e674e86b60509a430a46e569bf02846 ]
Similar to the ancient commit a5fe8e7695dc ("regulatory: add NUL
to alpha2"), add another byte to alpha2 in the request struct so
that when we use nla_put_string(), we don't overrun anything.
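Schematically, in struct regulatory_request (only the affected member shown):

	char alpha2[3];	/* was alpha2[2]; the extra NUL keeps nla_put_string() in bounds */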
Fixes:
73d54c9e74c4 ("cfg80211: add regulatory netlink multicast group")
Reported-by: Kees Cook <keescook@google.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Wed, 21 Feb 2018 05:42:26 +0000 (21:42 -0800)]
smsc75xx: fix smsc75xx_set_features()
[ Upstream commit
88e80c62671ceecdbb77c902731ec95a4bfa62f9 ]
If an attempt is made to disable RX checksums, the USB adapter is changed
but netdev->features is not, because smsc75xx_set_features() returns a
non-zero value.
This throws errors from netdev_rx_csum_fault() :
<devname>: hw csum failure
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Steve Glendinning <steve.glendinning@shawell.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tony Lindgren [Thu, 22 Feb 2018 18:02:49 +0000 (10:02 -0800)]
ARM: OMAP: Fix dmtimer init for omap1
[ Upstream commit
ba6887836178d43b3665b9da075c2c5dfe1d207c ]
We need to enable PM runtime on omap1 also as otherwise we
will get errors:
omap_timer omap_timer.1: omap_dm_timer_probe: pm_runtime_get_sync failed!
omap_timer: probe of omap_timer.1 failed with error -13
...
We are checking for OMAP_TIMER_NEEDS_RESET flag elsewhere so this is
safe to do.
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: Keerthy <j-keerthy@ti.com>
Cc: Ladislav Michl <ladis@linux-mips.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Biggers [Thu, 22 Feb 2018 14:38:33 +0000 (14:38 +0000)]
PKCS#7: fix direct verification of SignerInfo signature
[ Upstream commit
6459ae386699a5fe0dc52cf30255f75274fa43a4 ]
If none of the certificates in a SignerInfo's certificate chain match a
trusted key, nor is the last certificate signed by a trusted key, then
pkcs7_validate_trust_one() tries to check whether the SignerInfo's
signature was made directly by a trusted key. But, it actually fails to
set the 'sig' variable correctly, so it actually verifies the last
signature seen. That will only be the SignerInfo's signature if the
certificate chain is empty; otherwise it will actually be the last
certificate's signature.
This is not by itself a security problem, since verifying any of the
certificates in the chain should be sufficient to verify the SignerInfo.
Still, it's not working as intended so it should be fixed.
Fix it by setting 'sig' correctly for the direct verification case.
Fixes:
757932e6da6d ("PKCS#7: Handle PKCS#7 messages that contain no X.509 certs")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Ott [Mon, 12 Feb 2018 11:01:03 +0000 (12:01 +0100)]
s390/cio: clear timer when terminating driver I/O
[ Upstream commit
410d5e13e7638bc146321671e223d56495fbf3c7 ]
When we terminate driver I/O (because we need to stop using a certain
channel path) we also need to ensure that a timer (which may have been
set up using ccw_device_start_timeout) is cleared.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Ott [Wed, 7 Feb 2018 12:18:19 +0000 (13:18 +0100)]
s390/cio: fix return code after missing interrupt
[ Upstream commit
770b55c995d171f026a9efb85e71e3b1ea47b93d ]
When a timeout occurs for users of ccw_device_start_timeout
we will stop the IO and call the drivers int handler with
the irb pointer set to ERR_PTR(-ETIMEDOUT). Sometimes
however we'd set the irb pointer to ERR_PTR(-EIO) which is
not intended. Just set the correct value in all codepaths.
Reported-by: Julian Wiedmann <jwi@linux.vnet.ibm.com>
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Ott [Tue, 6 Feb 2018 13:59:43 +0000 (14:59 +0100)]
s390/cio: fix ccw_device_start_timeout API
[ Upstream commit
f97a6b6c47d2f329a24f92cc0ca3c6df5727ba73 ]
There are cases a device driver can't start IO because the device is
currently in use by cio. In this case the device driver is notified
when the device is usable again.
Using ccw_device_start_timeout we would set the timeout (and change
an existing timeout) before we test for internal usage. Worst case
this could lead to an unexpected timer deletion.
Fix this by setting the timeout after we test for internal usage.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark Lord [Tue, 20 Feb 2018 19:49:20 +0000 (14:49 -0500)]
powerpc/bpf/jit: Fix 32-bit JIT for seccomp_data access
[ Upstream commit
083b20907185b076f21c265b30fe5b5f24c03d8c ]
I am using SECCOMP to filter syscalls on a ppc32 platform, and noticed
that the JIT compiler was failing on the BPF even though the
interpreter was working fine.
The issue was that the compiler was missing one of the instructions
used by SECCOMP, so here is a patch to enable JIT for that
instruction.
Fixes:
eb84bab0fb38 ("ppc: Kconfig: Enable BPF JIT on ppc32")
Signed-off-by: Mark Lord <mlord@pobox.com>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Rientjes [Wed, 21 Feb 2018 22:45:32 +0000 (14:45 -0800)]
kernel/relay.c: limit kmalloc size to KMALLOC_MAX_SIZE
[ Upstream commit
88913bd8ea2a75d7e460a4bed5f75e1c32660d7e ]
chan->n_subbufs is set by the user and relay_create_buf() does a kmalloc()
of chan->n_subbufs * sizeof(size_t *).
kmalloc_slab() will generate a warning when this fails if
chan->n_subbufs * sizeof(size_t *) > KMALLOC_MAX_SIZE.
Limit chan->n_subbufs to the maximum allowed kmalloc() size.
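A minimal sketch of the bound (assuming it sits next to the existing n_subbufs validation):

  /* n_subbufs is user-controlled; reject counts whose pointer array
   * would exceed the largest kmalloc()-able size */
  if (chan->n_subbufs > KMALLOC_MAX_SIZE / sizeof(size_t *))
          return NULL;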
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1802061216100.122576@chino.kir.corp.google.com
Fixes:
f6302f1bcd75 ("relay: prevent integer overflow in relay_open()")
Signed-off-by: David Rientjes <rientjes@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Tue, 20 Feb 2018 13:09:11 +0000 (14:09 +0100)]
md: raid5: avoid string overflow warning
[ Upstream commit
53b8d89ddbdbb0e4625a46d2cdbb6f106c52f801 ]
gcc warns about a possible overflow of the kmem_cache string, when adding
four characters to a string of the same length:
drivers/md/raid5.c: In function 'setup_conf':
drivers/md/raid5.c:2207:34: error: '-alt' directive writing 4 bytes into a region of size between 1 and 32 [-Werror=format-overflow=]
sprintf(conf->cache_name[1], "%s-alt", conf->cache_name[0]);
^~~~
drivers/md/raid5.c:2207:2: note: 'sprintf' output between 5 and 36 bytes into a destination of size 32
sprintf(conf->cache_name[1], "%s-alt", conf->cache_name[0]);
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If I'm counting correctly, we need 11 characters for the fixed part
of the string and 18 characters for a 64-bit pointer (when no gendisk
is used), so that leaves three characters for conf->level, which should
always be sufficient.
This makes the code use snprintf() with the correct length, to
make the code more robust against changes, and to get the compiler
to shut up.
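Roughly, the sprintf() calls shown above become bounded writes (a sketch, not the verbatim hunk):

  snprintf(conf->cache_name[1], sizeof(conf->cache_name[1]),
           "%s-alt", conf->cache_name[0]);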
In commit
f4be6b43f1ac ("md/raid5: ensure we create a unique name for
kmem_cache when mddev has no gendisk") from 2010, Neil said that
the pointer could be removed "shortly" once devices without gendisk
are disallowed. I have no idea if that happened, but if it did, that
should probably be changed as well.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Shaohua Li <sh.li@alibaba-inc.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrea Parri [Tue, 20 Feb 2018 18:45:56 +0000 (19:45 +0100)]
locking/xchg/alpha: Add unconditional memory barrier to cmpxchg()
[ Upstream commit
cb13b424e986aed68d74cbaec3449ea23c50e167 ]
Continuing along with the fight against smp_read_barrier_depends() [1]
(or rather, against its improper use), add an unconditional barrier to
cmpxchg. This guarantees that dependency ordering is preserved when a
dependency is headed by an unsuccessful cmpxchg. As it turns out, the
change could enable further simplification of LKMM as proposed in [2].
[1] https://marc.info/?l=linux-kernel&m=150884953419377&w=2
    https://marc.info/?l=linux-kernel&m=150884946319353&w=2
    https://marc.info/?l=linux-kernel&m=151215810824468&w=2
    https://marc.info/?l=linux-kernel&m=151215816324484&w=2
[2] https://marc.info/?l=linux-kernel&m=151881978314872&w=2
Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-alpha@vger.kernel.org
Link: http://lkml.kernel.org/r/1519152356-4804-1-git-send-email-parri.andrea@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wolfram Sang [Mon, 5 Feb 2018 20:09:59 +0000 (21:09 +0100)]
drm/exynos: fix comparison to bitshift when dealing with a mask
[ Upstream commit
1293b6191010672c0c9dacae8f71c6f3e4d70cbe ]
Due to a typo, the mask was destroyed by a comparison instead of a bit
shift.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Wed, 17 Jan 2018 17:01:21 +0000 (18:01 +0100)]
drm/exynos: g2d: use monotonic timestamps
[ Upstream commit
a588a8bb7b25a3fb4f7fed00feb7aec541fc2632 ]
The exynos DRM driver uses real-time 'struct timeval' values
for exporting its timestamps to user space. This has multiple
problems:
1. signed seconds overflow in y2038
2. the 'struct timeval' definition is deprecated in the kernel
3. time may jump or go backwards after a 'settimeofday()' syscall
4. other DRM timestamps are in CLOCK_MONOTONIC domain, so they
can't be compared
5. exporting microseconds requires a division by 1000, which may
be slow on some architectures.
The code existed in two places before, but the IPP portion was
removed in
8ded59413ccc ("drm/exynos: ipp: Remove Exynos DRM
IPP subsystem"), so we no longer need to worry about it.
Ideally timestamps should just use 64-bit nanoseconds instead, but
of course we can't change that now. Instead, this tries to address
the first four points above by using monotonic 'timespec' values.
According to Tobias Jakobi, user space doesn't care about the
timestamp at the moment, so we can change the format. Even if
there is something looking at them, it will work just fine with
monotonic times as long as the application only looks at the
relative values between two events.
Link: https://patchwork.kernel.org/patch/10038593/
Cc: Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yufen Yu [Tue, 6 Feb 2018 09:39:15 +0000 (17:39 +0800)]
md raid10: fix NULL dereference in handle_write_completed()
[ Upstream commit
01a69cab01c184d3786af09e9339311123d63d22 ]
In the case of 'recover', an r10bio with R10BIO_WriteError &
R10BIO_IsRecover will be progressed by handle_write_completed().
This function traverses all r10bio->devs[copies].
If devs[m].repl_bio != NULL, it thinks conf->mirrors[dev].replacement
is also not NULL. However, this is not always true.
When an rdev of raid10 has a replacement, each r10bio
->devs[m].repl_bio in conf->r10buf_pool is != NULL. However, in 'recover',
even if the corresponding replacement is NULL, r10bio
->devs[m].repl_bio is not cleared, resulting in a NULL dereference on the
replacement.
This bug was introduced when replacement support for raid10 was
added in Linux 3.3.
As NeilBrown suggested:
Elsewhere the determination of "is this device part of the
resync/recovery" is made by testing bio->bi_end_io.
If this is end_sync_write, then we tried to write here.
If it is NULL, then we didn't try to write.
Fixes:
9ad1aefc8ae8 ("md/raid10: Handle replacement devices during resync.")
Cc: stable (V3.3+)
Suggested-by: NeilBrown <neilb@suse.com>
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Signed-off-by: Shaohua Li <sh.li@alibaba-inc.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilan Peer [Mon, 19 Feb 2018 12:48:42 +0000 (14:48 +0200)]
mac80211: Do not disconnect on invalid operating class
[ Upstream commit
191da271ac260700db3e5b4bb982a17ca78769d6 ]
Some APs include a non global operating class in their extended channel
switch information element. In such a case, as the operating class is not
known, mac80211 would decide to disconnect.
However the specification states that the operating class needs to be
taken from Annex E, but it does not specify from which table it should be
taken, so it is valid for an AP to use a non global operating class.
To avoid possibly unneeded disconnection, in such a case ignore the
operating class and assume that the current band is used, and if the
resulting channel and band configuration is invalid disconnect.
Signed-off-by: Ilan Peer <ilan.peer@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sara Sharon [Mon, 19 Feb 2018 12:48:37 +0000 (14:48 +0200)]
mac80211: fix calling sleeping function in atomic context
[ Upstream commit
95f3ce6a77893ac828ba841df44421620de4314b ]
sta_info_alloc can be called from atomic paths (such as RX path)
so we need to call pcpu_alloc with the correct gfp.
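A hedged sketch of the idea: thread the caller's gfp through to the per-CPU allocation rather than an implicit GFP_KERNEL (the field and struct names are from memory and should be treated as assumptions):

  sta->pcpu_rx_stats = alloc_percpu_gfp(struct ieee80211_sta_rx_stats, gfp);
  if (!sta->pcpu_rx_stats)
          goto free;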
Fixes:
c9c5962b56c1 ("mac80211: enable collecting station statistics per-CPU")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sara Sharon [Mon, 19 Feb 2018 12:48:35 +0000 (14:48 +0200)]
mac80211: fix a possible leak of station stats
[ Upstream commit
d78d9ee9d40aca4781d2c5334972544601a4c3a2 ]
If sta_info_alloc fails after allocating the per CPU statistics,
they are not properly freed.
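The shape of the fix is simply to release the per-CPU stats on the sta_info_alloc() error path; as above, the field name is an assumption:

  free:
          free_percpu(sta->pcpu_rx_stats);  /* a no-op if the allocation never happened */
          kfree(sta);
          return NULL;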
Fixes:
c9c5962b56c1 ("mac80211: enable collecting station statistics per-CPU")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Felix Fietkau [Sat, 10 Feb 2018 12:20:34 +0000 (13:20 +0100)]
mac80211: round IEEE80211_TX_STATUS_HEADROOM up to multiple of 4
[ Upstream commit
651b9920d7a694ffb1f885aef2bbb068a25d9d66 ]
This ensures that mac80211 allocated management frames are properly
aligned, which makes copying them more efficient.
For instance, mt76 uses iowrite32_copy to copy beacon frames to beacon
template memory on the chip.
Misaligned 32-bit accesses cause CPU exceptions on MIPS and should be
avoided.
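A sketch of the rounding (the base value of 14 bytes is an assumption here; the point is the ALIGN() to a multiple of 4):

  #define IEEE80211_TX_STATUS_HEADROOM  ALIGN(14, 4)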
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Howells [Thu, 15 Feb 2018 22:59:00 +0000 (22:59 +0000)]
rxrpc: Work around usercopy check
[ Upstream commit
a16b8d0cf2ec1e626d24bc2a7b9e64ace6f7501d ]
Due to a check recently added to copy_to_user(), it's now not permitted to
copy from slab-held data to userspace unless the slab is whitelisted. This
affects rxrpc_recvmsg() when it attempts to place an RXRPC_USER_CALL_ID
control message in the userspace control message buffer. A warning is
generated by usercopy_warn() because the source is the copy of the
user_call_ID retained in the rxrpc_call struct.
Work around the issue by copying the user_call_ID to a variable on the
stack and passing that to put_cmsg().
The warning generated looks like:
Bad or missing usercopy whitelist? Kernel memory exposure attempt detected from SLUB object 'dmaengine-unmap-128' (offset 680, size 8)!
WARNING: CPU: 0 PID: 1401 at mm/usercopy.c:81 usercopy_warn+0x7e/0xa0
...
RIP: 0010:usercopy_warn+0x7e/0xa0
...
Call Trace:
__check_object_size+0x9c/0x1a0
put_cmsg+0x98/0x120
rxrpc_recvmsg+0x6fc/0x1010 [rxrpc]
? finish_wait+0x80/0x80
___sys_recvmsg+0xf8/0x240
? __clear_rsb+0x25/0x3d
? __clear_rsb+0x15/0x3d
? __clear_rsb+0x25/0x3d
? __clear_rsb+0x15/0x3d
? __clear_rsb+0x25/0x3d
? __clear_rsb+0x15/0x3d
? __clear_rsb+0x25/0x3d
? __clear_rsb+0x15/0x3d
? finish_task_switch+0xa6/0x2b0
? trace_hardirqs_on_caller+0xed/0x180
? _raw_spin_unlock_irq+0x29/0x40
? __sys_recvmsg+0x4e/0x90
__sys_recvmsg+0x4e/0x90
do_syscall_64+0x7a/0x220
entry_SYSCALL_64_after_hwframe+0x26/0x9b
Reported-by: Jonathan Billings <jsbillings@jsbillings.org>
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Kees Cook <keescook@chromium.org>
Tested-by: Jonathan Billings <jsbillings@jsbillings.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kees Cook [Wed, 14 Feb 2018 23:45:07 +0000 (15:45 -0800)]
NFC: llcp: Limit size of SDP URI
[ Upstream commit
fe9c842695e26d8116b61b80bfb905356f07834b ]
The tlv_len is u8, so we need to limit the size of the SDP URI. Enforce
this both in the NLA policy and in the code that performs the allocation
and copy, to avoid writing past the end of the allocated buffer.
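Illustrative only (the exact limit and the spot where it is enforced are assumptions): since tlv_len is a u8, the URI length has to be capped so the whole TLV still fits in that byte, along the lines of:

  /* leave room for the TLV header inside the u8 length field */
  if (uri_len > U8_MAX - 4)
          return -EINVAL;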
Fixes:
d9b8d8e19b073 ("NFC: llcp: Service Name Lookup netlink interface")
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Naftali Goldstein [Thu, 28 Dec 2017 13:53:04 +0000 (15:53 +0200)]
iwlwifi: mvm: always init rs with 20mhz bandwidth rates
[ Upstream commit
6b7a5aea71b342ec0593d23b08383e1f33da4c9a ]
In AP mode, when a new station associates, rs is initialized immediately
upon association completion, before the phy context is updated with the
association parameters, so the sta bandwidth might be wider than the phy
context allows.
To avoid this issue, always initialize rs with 20mhz bandwidth rate, and
after authorization, when the phy context is already up-to-date, re-init
rs with the correct bw.
Signed-off-by: Naftali Goldstein <naftali.goldstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sara Sharon [Tue, 29 Mar 2016 07:56:57 +0000 (10:56 +0300)]
iwlwifi: mvm: fix security bug in PN checking
[ Upstream commit
5ab2ba931255d8bf03009c06d58dce97de32797c ]
A previous patch allowed the same PN for packets originating from the
same AMSDU by copying PN only for the last packet in the series.
This however is bogus since we cannot assume the last frame will be
received on the same queue, and if it is received on a different queue
we will end up not incrementing the PN, possibly letting the next
packet have the same PN and pass through.
Change the logic so that the driver explicitly indicates that the second
subframe onwards is allowed to have the same PN as the first
subframe. Indicate it to mac80211 as well for the fallback queue.
Fixes:
f1ae02b186d9 ("iwlwifi: mvm: allow same PN for de-aggregated AMSDU")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Falcon [Wed, 14 Feb 2018 00:23:42 +0000 (18:23 -0600)]
ibmvnic: Free RX socket buffer in case of adapter error
[ Upstream commit
4b9b0f01350500173f17e2b2e65beb4df4ef99c7 ]
If a RX buffer is returned to the client driver with an error, free the
corresponding socket buffer before continuing.
Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Geert Uytterhoeven [Tue, 2 Jan 2018 15:25:35 +0000 (16:25 +0100)]
ARM: OMAP1: clock: Fix debugfs_create_*() usage
[ Upstream commit
8cbbf1745dcde7ba7e423dc70619d223de90fd43 ]
When exposing data access through debugfs, the correct
debugfs_create_*() functions must be used, depending on data type.
Remove all casts from data pointers passed to debugfs_create_*()
functions, as such casts prevent the compiler from flagging bugs.
Correct all wrong usage:
- clk.rate is unsigned long, not u32,
- clk.flags is u8, not u32, which exposed the successive
clk.rate_offset and clk.src_offset fields.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Acked-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tony Lindgren [Fri, 9 Feb 2018 16:15:53 +0000 (08:15 -0800)]
ARM: OMAP3: Fix prm wake interrupt for resume
[ Upstream commit
d3be6d2a08bd26580562d9714d3d97ea9ba22c73 ]
For platform_suspend_ops, the finish call is too late to re-enable wake
irqs; we need to re-enable wake irqs in the wake call instead.
Otherwise noirq resume for devices has already happened. And then
dev_pm_disarm_wake_irq() has already disabled the dedicated wake irqs
when the interrupt triggers and the wake irq is never handled.
For devices that are already in PM runtime suspended state when we
enter suspend this means that a possible wake irq will never trigger.
And this can lead into a situation where a device has a pending padconf
wake irq, and the device will stay unresponsive to any further wake
irqs.
This issue can be easily reproduced by setting serial console log level
to zero, letting the serial console idle, and suspend the system from
an ssh terminal. Then try to wake up the system by typing to the serial
console.
Note that this affects only omap3 PRM interrupt as that's currently
the only omap variant that does anything in omap_pm_wake().
In general, for the wake irqs to work, the interrupt must have either
IRQF_NO_SUSPEND or IRQF_EARLY_RESUME set for it to trigger before
dev_pm_disarm_wake_irq() disables the wake irqs.
Reported-by: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: Tero Kristo <t-kristo@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Qi Hou [Thu, 11 Jan 2018 04:54:43 +0000 (12:54 +0800)]
ARM: OMAP2+: timer: fix a kmemleak caused in omap_get_timer_dt
[ Upstream commit
db35340c536f1af0108ec9a0b2126a05d358d14a ]
When more than one GP timer is used as a kernel system timer and the
corresponding nodes in the device tree are marked with the same "disabled"
property, the "attr" field of the property is initialized
more than once as the property is added to the sys file system via
__of_add_property_sysfs().
In __of_add_property_sysfs(), the "name" field of pp->attr.attr is set
directly to the return value of safe_name(), without checking
whether it is already a valid pointer to a memory block. If it is, its
old value is always overwritten by the new one, the memory block
allocated before becomes a "ghost", and a kmemleak results.
Adding the same "disabled" property to different nodes of the device
tree therefore causes that kind of kmemleak overhead, at least once.
To fix it, allocate the property dynamically and delete the static one.
Signed-off-by: Qi Hou <qi.hou@windriver.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Anders Roxell [Tue, 6 Feb 2018 22:20:44 +0000 (16:20 -0600)]
selftests: memfd: add config fragment for fuse
[ Upstream commit
9a606f8d55cfc932ec02172aaed4124fdc150047 ]
The memfd test requires the fuse module (CONFIG_FUSE_FS) to be inserted.
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Daniel Díaz <daniel.diaz@linaro.org>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Naresh Kamboju [Wed, 7 Feb 2018 09:17:20 +0000 (14:47 +0530)]
selftests: pstore: Adding config fragment CONFIG_PSTORE_RAM=m
[ Upstream commit
9a379e77033f02c4a071891afdf0f0a01eff8ccb ]
pstore_tests and pstore_post_reboot_tests need CONFIG_PSTORE_RAM=m
Signed-off-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dong Bo [Fri, 26 Jan 2018 03:21:49 +0000 (11:21 +0800)]
libata: Fix compile warning with ATA_DEBUG enabled
[ Upstream commit
0d3e45bc6507bd1f8728bf586ebd16c2d9e40613 ]
This fixes the following compile warnings with ATA_DEBUG enabled,
which were detected by Linaro GCC 5.2-2015.11:
drivers/ata/libata-scsi.c: In function 'ata_scsi_dump_cdb':
./include/linux/kern_levels.h:5:18: warning: format '%d' expects
argument of type 'int', but argument 6 has type 'u64 {aka long
long unsigned int}' [-Wformat=]
tj: Patch hand-applied and description trimmed.
Signed-off-by: Dong Bo <dongbo4@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason Wang [Sun, 11 Feb 2018 03:28:12 +0000 (11:28 +0800)]
ptr_ring: prevent integer overflow when calculating size
[ Upstream commit
54e02162d4454a99227f520948bf4494c3d972d0 ]
Switch to division to prevent integer overflow when size is too
big to calculate the allocation size properly.
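A minimal sketch of the check, mirroring the description (dividing so the comparison itself cannot wrap):

  if (size > KMALLOC_MAX_SIZE / sizeof(void *))
          return NULL;
  return kcalloc(size, sizeof(void *), gfp);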
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Fixes:
6e6e41c31122 ("ptr_ring: fail early if queue occupies more than KMALLOC_MAX_SIZE")
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ulf Magnusson [Mon, 5 Feb 2018 01:21:31 +0000 (02:21 +0100)]
ARC: Fix malformed ARC_EMUL_UNALIGNED default
[ Upstream commit
827cc2fa024dd6517d62de7a44c7b42f32af371b ]
'default N' should be 'default n', though they happen to have the same
effect here, due to undefined symbols (N in this case) evaluating to n
in a tristate sense.
Remove the default from ARC_EMUL_UNALIGNED instead of changing it. bool
and tristate symbols implicitly default to n.
Discovered with the
https://github.com/ulfalizer/Kconfiglib/blob/master/examples/list_undefined.py
script.
Signed-off-by: Ulf Magnusson <ulfalizer@gmail.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark Salter [Fri, 2 Feb 2018 14:20:29 +0000 (09:20 -0500)]
irqchip/gic-v3: Change pr_debug message to pr_devel
[ Upstream commit
b6dd4d83dc2f78cebc9a7e6e7e4bc2be4d29b94d ]
The pr_debug() in gic-v3 gic_send_sgi() can trigger a circular locking
warning:
GICv3: CPU10: ICC_SGI1R_EL1 5000400
======================================================
WARNING: possible circular locking dependency detected
4.15.0+ #1 Tainted: G W
------------------------------------------------------
dynamic_debug01/1873 is trying to acquire lock:
((console_sem).lock){-...}, at: [<0000000099c891ec>] down_trylock+0x20/0x4c
but task is already holding lock:
(&rq->lock){-.-.}, at: [<00000000842e1587>] __task_rq_lock+0x54/0xdc
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&rq->lock){-.-.}:
__lock_acquire+0x3b4/0x6e0
lock_acquire+0xf4/0x2a8
_raw_spin_lock+0x4c/0x60
task_fork_fair+0x3c/0x148
sched_fork+0x10c/0x214
copy_process.isra.32.part.33+0x4e8/0x14f0
_do_fork+0xe8/0x78c
kernel_thread+0x48/0x54
rest_init+0x34/0x2a4
start_kernel+0x45c/0x488
-> #1 (&p->pi_lock){-.-.}:
__lock_acquire+0x3b4/0x6e0
lock_acquire+0xf4/0x2a8
_raw_spin_lock_irqsave+0x58/0x70
try_to_wake_up+0x48/0x600
wake_up_process+0x28/0x34
__up.isra.0+0x60/0x6c
up+0x60/0x68
__up_console_sem+0x4c/0x7c
console_unlock+0x328/0x634
vprintk_emit+0x25c/0x390
dev_vprintk_emit+0xc4/0x1fc
dev_printk_emit+0x88/0xa8
__dev_printk+0x58/0x9c
_dev_info+0x84/0xa8
usb_new_device+0x100/0x474
hub_port_connect+0x280/0x92c
hub_event+0x740/0xa84
process_one_work+0x240/0x70c
worker_thread+0x60/0x400
kthread+0x110/0x13c
ret_from_fork+0x10/0x18
-> #0 ((console_sem).lock){-...}:
validate_chain.isra.34+0x6e4/0xa20
__lock_acquire+0x3b4/0x6e0
lock_acquire+0xf4/0x2a8
_raw_spin_lock_irqsave+0x58/0x70
down_trylock+0x20/0x4c
__down_trylock_console_sem+0x3c/0x9c
console_trylock+0x20/0xb0
vprintk_emit+0x254/0x390
vprintk_default+0x58/0x90
vprintk_func+0xbc/0x164
printk+0x80/0xa0
__dynamic_pr_debug+0x84/0xac
gic_raise_softirq+0x184/0x18c
smp_cross_call+0xac/0x218
smp_send_reschedule+0x3c/0x48
resched_curr+0x60/0x9c
check_preempt_curr+0x70/0xdc
wake_up_new_task+0x310/0x470
_do_fork+0x188/0x78c
SyS_clone+0x44/0x50
__sys_trace_return+0x0/0x4
other info that might help us debug this:
Chain exists of:
(console_sem).lock --> &p->pi_lock --> &rq->lock
Possible unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&rq->lock);
                               lock(&p->pi_lock);
                               lock(&rq->lock);
  lock((console_sem).lock);
*** DEADLOCK ***
2 locks held by dynamic_debug01/1873:
#0: (&p->pi_lock){-.-.}, at: [<000000001366df53>] wake_up_new_task+0x40/0x470
#1: (&rq->lock){-.-.}, at: [<00000000842e1587>] __task_rq_lock+0x54/0xdc
stack backtrace:
CPU: 10 PID: 1873 Comm: dynamic_debug01 Tainted: G W 4.15.0+ #1
Hardware name: GIGABYTE R120-T34-00/MT30-GS2-00, BIOS T48 10/02/2017
Call trace:
dump_backtrace+0x0/0x188
show_stack+0x24/0x2c
dump_stack+0xa4/0xe0
print_circular_bug.isra.31+0x29c/0x2b8
check_prev_add.constprop.39+0x6c8/0x6dc
validate_chain.isra.34+0x6e4/0xa20
__lock_acquire+0x3b4/0x6e0
lock_acquire+0xf4/0x2a8
_raw_spin_lock_irqsave+0x58/0x70
down_trylock+0x20/0x4c
__down_trylock_console_sem+0x3c/0x9c
console_trylock+0x20/0xb0
vprintk_emit+0x254/0x390
vprintk_default+0x58/0x90
vprintk_func+0xbc/0x164
printk+0x80/0xa0
__dynamic_pr_debug+0x84/0xac
gic_raise_softirq+0x184/0x18c
smp_cross_call+0xac/0x218
smp_send_reschedule+0x3c/0x48
resched_curr+0x60/0x9c
check_preempt_curr+0x70/0xdc
wake_up_new_task+0x310/0x470
_do_fork+0x188/0x78c
SyS_clone+0x44/0x50
__sys_trace_return+0x0/0x4
GICv3: CPU0: ICC_SGI1R_EL1 12000
This could be fixed with printk_deferred() but that might lessen its
usefulness for debugging. So change it to pr_devel to keep it out of
production kernels. Developers working on gic-v3 can enable it as
needed in their kernels.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Kelley [Wed, 14 Feb 2018 02:54:03 +0000 (02:54 +0000)]
cpumask: Make for_each_cpu_wrap() available on UP as well
[ Upstream commit
d207af2eab3f8668b95ad02b21930481c42806fd ]
for_each_cpu_wrap() was originally added in the #else half of a
large "#if NR_CPUS == 1" statement, but was omitted in the #if
half. This patch adds the missing #if half to prevent compile
errors when NR_CPUS is 1.
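A sketch of what the UP stub can look like, mirroring the neighbouring NR_CPUS == 1 iterators; the exact definition is an assumption:

  #define for_each_cpu_wrap(cpu, mask, start)                                \
          for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)(mask), (void)(start))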
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Michael Kelley <mhkelley@outlook.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kys@microsoft.com
Cc: martin.petersen@oracle.com
Cc: mikelley@microsoft.com
Fixes:
c743f0a5c50f ("sched/fair, cpumask: Export for_each_cpu_wrap()")
Link: http://lkml.kernel.org/r/SN6PR1901MB2045F087F59450507D4FCC17CBF50@SN6PR1901MB2045.namprd19.prod.outlook.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen Boyd [Thu, 1 Feb 2018 17:03:29 +0000 (09:03 -0800)]
irqchip/gic-v3: Ignore disabled ITS nodes
[ Upstream commit
95a2562590c2f64a0398183f978d5cf3db6d0284 ]
On some platforms there's an ITS available but it's not enabled
because reading or writing the registers is denied by the
firmware. In fact, reading or writing them will cause the system
to reset. We could remove the node from DT in such a case, but
it's better to skip nodes that are marked as "disabled" in DT so
that we can describe the hardware that exists and use the status
property to indicate how the firmware has configured things.
Cc: Stuart Yoder <stuyoder@gmail.com>
Cc: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Rajendra Nayak <rnayak@codeaurora.org>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Will Deacon [Tue, 13 Feb 2018 13:22:57 +0000 (13:22 +0000)]
locking/qspinlock: Ensure node->count is updated before initialising node
[ Upstream commit
11dc13224c975efcec96647a4768a6f1bb7a19a8 ]
When queuing on the qspinlock, the count field for the current CPU's head
node is incremented. This needn't be atomic because locking in e.g. IRQ
context is balanced and so an IRQ will return with node->count as it
found it.
However, the compiler could in theory reorder the initialisation of
node[idx] before the increment of the head node->count, causing an
IRQ to overwrite the initialised node and potentially corrupt the lock
state.
Avoid the potential for this harmful compiler reordering by placing a
barrier() between the increment of the head node->count and the subsequent
node initialisation.
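A sketch of where the barrier lands, based on the description above (the surrounding lines are paraphrased, not quoted):

  idx = node->count++;
  tail = encode_tail(smp_processor_id(), idx);

  /* keep the compiler from hoisting the node[idx] initialisation above
   * the count increment; an IRQ nesting here must see the new count */
  barrier();

  node += idx;
  node->locked = 0;
  node->next = NULL;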
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1518528177-19169-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jia Zhang [Mon, 12 Feb 2018 14:44:53 +0000 (22:44 +0800)]
vfs/proc/kcore, x86/mm/kcore: Fix SMAP fault when dumping vsyscall user page
[ Upstream commit
595dd46ebfc10be041a365d0a3fa99df50b6ba73 ]
Commit:
df04abfd181a ("fs/proc/kcore.c: Add bounce buffer for ktext data")
... introduced a bounce buffer to work around CONFIG_HARDENED_USERCOPY=y.
However, accessing the vsyscall user page will cause an SMAP fault.
Replacing memcpy() with copy_from_user() fixes this bug, but adding
a common way to handle this sort of user page may be useful for the future.
Currently, only the vsyscall page requires KCORE_USER.
Signed-off-by: Jia Zhang <zhang.jia@linux.alibaba.com>
Reviewed-by: Jiri Olsa <jolsa@kernel.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: jolsa@redhat.com
Link: http://lkml.kernel.org/r/1518446694-21124-2-git-send-email-zhang.jia@linux.alibaba.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Borkmann [Fri, 9 Feb 2018 13:49:44 +0000 (14:49 +0100)]
bpf: fix rlimit in reuseport net selftest
[ Upstream commit
941ff6f11c020913f5cddf543a9ec63475d7c082 ]
Fix two issues in the reuseport_bpf selftests that were
reported by Linaro CI:
[...]
+ ./reuseport_bpf
---- IPv4 UDP ----
Testing EBPF mod 10...
Reprograming, testing mod 5...
./reuseport_bpf: ebpf error. log:
0: (bf) r6 = r1
1: (20) r0 = *(u32 *)skb[0]
2: (97) r0 %= 10
3: (95) exit
processed 4 insns
: Operation not permitted
+ echo FAIL
[...]
---- IPv4 TCP ----
Testing EBPF mod 10...
./reuseport_bpf: failed to bind send socket: Address already in use
+ echo FAIL
[...]
For the former adjust rlimit since this was the cause of
failure for loading the BPF prog, and for the latter add
SO_REUSEADDR.
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Link: https://bugs.linaro.org/show_bug.cgi?id=3502
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jesper Dangaard Brouer [Thu, 8 Feb 2018 11:48:32 +0000 (12:48 +0100)]
tools/libbpf: handle issues with bpf ELF objects containing .eh_frames
[ Upstream commit
e3d91b0ca523d53158f435a3e13df7f0cb360ea2 ]
V3: More generic skipping of relo-section (suggested by Daniel)
If clang >= 4.0.1 is missing the option '-target bpf', it will cause
llc/llvm to create two ELF sections for "Exception Frames", with
section names '.eh_frame' and '.rel.eh_frame'.
The BPF ELF loader library libbpf fails when loading files with these
sections. The other in-kernel BPF ELF loader in samples/bpf/bpf_load.c
handles this gracefully, and the iproute2 loader also seems to work with these
"eh" sections.
The issue in libbpf is caused by bpf_object__elf_collect() skipping
some sections, and later when performing relocation it will be
pointing to a skipped section, as these sections cannot be found by
bpf_object__find_prog_by_idx() in bpf_object__collect_reloc().
This is a general issue that also occurs for other sections, like
debug sections which are also skipped and can have relo section.
As suggested by Daniel, to avoid keeping state about all skipped
sections, instead perform a direct lookup in the ELF object. Look up
the section that the relo-section points to and check if it contains
executable machine instructions (denoted by the sh_flags
SHF_EXECINSTR). Use this check to also skip irrelevant relo-sections.
Note, for samples/bpf/ the '-target bpf' parameter to clang cannot be used
due to incompatibility with asm embedded headers, that some of the samples
include. This is explained in more details by Yonghong Song in bpf_devel_QA.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tang Junhui [Wed, 7 Feb 2018 19:41:45 +0000 (11:41 -0800)]
bcache: return attach error when no cache set exist
[ Upstream commit
7f4fc93d4713394ee8f1cd44c238e046e11b4f15 ]
I attached a back-end device to a cache set that was not
registered yet; the back-end device did not attach successfully, and no
error was returned:
[root]# echo 87859280-fec6-4bcc-20df7ca8f86b > /sys/block/sde/bcache/attach
[root]#
In sysfs_attach(), the return value "v" is initialized to "size" at
the beginning; if no cache set exists in bch_cache_sets, "v"
never changes, and when it is returned to sysfs, sysfs regards it as success
since "size" is a positive number.
This patch fixes the issue by initializing "v" to -ENOENT
instead.
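A paraphrased sketch of sysfs_attach() after the change (not the verbatim function):

  ssize_t v = -ENOENT;  /* was "size"; an empty bch_cache_sets list now reports an error */

  list_for_each_entry(c, &bch_cache_sets, list) {
          v = bch_cached_dev_attach(dc, c);
          if (!v)
                  return size;
  }
  return v;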
Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tang Junhui [Wed, 7 Feb 2018 19:41:46 +0000 (11:41 -0800)]
bcache: fix for data collapse after re-attaching an attached device
[ Upstream commit
73ac105be390c1de42a2f21643c9778a5e002930 ]
The back-end device sdm has already attached to a cache_set with ID
f67ebe1f-f8bc-4d73-bfe5-9dc88607f119; then try to attach it to
another cache set, and it returns an error:
[root]# cd /sys/block/sdm/bcache
[root]# echo 5ccd0a63-148e-48b8-afa2-aca9cbd6279f > attach
-bash: echo: write error: Invalid argument
After that, execute a command to modify the label of bcache
device:
[root]# echo data_disk1 > label
Then we reboot the system, when the system power on, the back-end
device can not attach to cache_set, a messages show in the log:
Feb 5 12:05:52 ceph152 kernel: [922385.508498] bcache:
bch_cached_dev_attach() couldn't find uuid for sdm in set
In sysfs_attach(), dc->sb.set_uuid was assigned the value
passed in through sysfs, regardless of whether bch_cached_dev_attach()
succeeded or not. For example, if the back-end
device has already attached to a cache set, bch_cached_dev_attach()
fails, but dc->sb.set_uuid is changed anyway. Then, when the
label of the bcache device is modified, bch_write_bdev_super() is called,
which writes dc->sb.set_uuid to the super block, so we
record a wrong cache set ID in the super block. After the system
reboots, the cache set cannot find the uuid of the back-end
device, so the bcache device can no longer come up or be used.
With this patch, we don't assign the cache set ID to dc->sb.set_uuid
in sysfs_attach() directly, but pass it into bch_cached_dev_attach(),
and assign dc->sb.set_uuid to the cache set ID only after the back-end
device has attached to the cache set successfully.
Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tang Junhui [Wed, 7 Feb 2018 19:41:43 +0000 (11:41 -0800)]
bcache: fix for allocator and register thread race
[ Upstream commit
682811b3ce1a5a4e20d700939a9042f01dbc66c4 ]
After long time running of random small IO writing,
I reboot the machine, and after the machine power on,
I found bcache got stuck, the stack is:
[root@ceph153 ~]# cat /proc/2510/task/*/stack
[<ffffffffa06b2455>] closure_sync+0x25/0x90 [bcache]
[<ffffffffa06b6be8>] bch_journal+0x118/0x2b0 [bcache]
[<ffffffffa06b6dc7>] bch_journal_meta+0x47/0x70 [bcache]
[<ffffffffa06be8f7>] bch_prio_write+0x237/0x340 [bcache]
[<ffffffffa06a8018>] bch_allocator_thread+0x3c8/0x3d0 [bcache]
[<ffffffff810a631f>] kthread+0xcf/0xe0
[<ffffffff8164c318>] ret_from_fork+0x58/0x90
[<ffffffffffffffff>] 0xffffffffffffffff
[root@ceph153 ~]# cat /proc/2038/task/*/stack
[<ffffffffa06b1abd>] __bch_btree_map_nodes+0x12d/0x150 [bcache]
[<ffffffffa06b1bd1>] bch_btree_insert+0xf1/0x170 [bcache]
[<ffffffffa06b637f>] bch_journal_replay+0x13f/0x230 [bcache]
[<ffffffffa06c75fe>] run_cache_set+0x79a/0x7c2 [bcache]
[<ffffffffa06c0cf8>] register_bcache+0xd48/0x1310 [bcache]
[<ffffffff812f702f>] kobj_attr_store+0xf/0x20
[<ffffffff8125b216>] sysfs_write_file+0xc6/0x140
[<ffffffff811dfbfd>] vfs_write+0xbd/0x1e0
[<ffffffff811e069f>] SyS_write+0x7f/0xe0
[<ffffffff8164c3c9>] system_call_fastpath+0x16/0x1
The stacks show the register thread and allocator thread
were getting stuck when registering the cache device.
I rebooted the machine several times; the issue always
exists on this machine.
I debugged the code and found the call trace as below:
register_bcache()
==>run_cache_set()
==>bch_journal_replay()
==>bch_btree_insert()
==>__bch_btree_map_nodes()
==>btree_insert_fn()
==>btree_split() //node need split
==>btree_check_reserve()
In btree_check_reserve(), it checks whether there are enough buckets
of RESERVE_BTREE type. Since the allocator thread is not working yet,
no buckets of RESERVE_BTREE type have been allocated, so the register thread
waits on c->btree_cache_wait and goes to sleep.
Then the allocator thread is initialized; the call trace is below:
bch_allocator_thread()
==>bch_prio_write()
==>bch_journal_meta()
==>bch_journal()
==>journal_wait_for_write()
In journal_wait_for_write(), it checks whether the journal is full via
journal_full(), but the long period of random small IO writes
has exhausted the journal buckets (journal.blocks_free=0).
In order to release journal buckets,
the allocator calls btree_flush_write() to flush keys to
btree nodes, and waits on c->journal.wait until the btree node writes
finish or some journal bucket space is already available, then the
allocator thread goes to sleep. But in btree_flush_write(), since
bch_journal_replay() is not finished, no btree nodes have a journal entry
(the condition "if (btree_current_write(b)->journal)" is never satisfied),
so we get no btree node to flush, no journal bucket is released,
and the allocator sleeps all the time.
Through the above analysis, we can see that:
1) The register thread waits for the allocator thread to allocate buckets of
RESERVE_BTREE type;
2) The allocator thread waits for the register thread to replay the journal, so it
can flush btree nodes and get a journal bucket.
So they both get stuck waiting for each other.
Hua Rui provided a patch for me that allocates some buckets of
RESERVE_BTREE type in advance, so the register thread can get a bucket
when a btree node splits and does not need to wait for the allocator
thread. I tested it; it has an effect, and the register thread runs a step
further, but finally still gets stuck. The reason is that only 8 buckets
of RESERVE_BTREE type were allocated, and in bch_journal_replay(),
after 2 btree node splits, only 4 buckets of RESERVE_BTREE type are left;
btree_check_reserve() is then not satisfied anymore, so it goes to sleep
again, and at the same time the allocator thread does not flush enough btree
nodes to release a journal bucket, so they both get stuck again.
So we need to allocate more buckets of RESERVE_BTREE type in advance,
but how many are enough? From experience and testing, I think it should be
as many as the journal buckets. So I modified the code as in this patch,
tested it on the machine, and it works.
This patch is based on Hua Rui's patch, and allocates more buckets
of RESERVE_BTREE type in advance to avoid the register thread and allocator
thread waiting for each other.
[patch v2] ca->sb.njournal_buckets would be 0 the first time after
cache creation, and no journal exists yet, so just 8 btree buckets is OK.
Signed-off-by: Hua Rui <huarui.dev@gmail.com>
Signed-off-by: Tang Junhui <tang.junhui@zte.com.cn>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Coly Li [Wed, 7 Feb 2018 19:41:41 +0000 (11:41 -0800)]
bcache: properly set task state in bch_writeback_thread()
[ Upstream commit
99361bbf26337186f02561109c17a4c4b1a7536a ]
Kernel thread routine bch_writeback_thread() has the following code block,
447 down_write(&dc->writeback_lock);
448~450 if (check conditions) {
451 up_write(&dc->writeback_lock);
452 set_current_state(TASK_INTERRUPTIBLE);
453
454 if (kthread_should_stop())
455 return 0;
456
457 schedule();
458 continue;
459 }
If the condition check is true, its task state is set to TASK_INTERRUPTIBLE
and schedule() is called to wait for others to wake it up.
There are 2 issues in the current code:
1, Task state is set to TASK_INTERRUPTIBLE after the condition checks; if
another process changes the condition and calls wake_up_process(dc->
writeback_thread), then at line 452 the task state is set back to
TASK_INTERRUPTIBLE and the writeback kernel thread will lose a chance to be
woken up.
2, At line 454 if kthread_should_stop() is true, writeback kernel thread
will return to kernel/kthread.c:kthread() with TASK_INTERRUPTIBLE and
call do_exit(). It is not good to enter do_exit() with task state
TASK_INTERRUPTIBLE, in following code path might_sleep() is called and a
warning message is reported by __might_sleep(): "WARNING: do not call
blocking ops when !TASK_RUNNING; state=1 set at [xxxx]".
For the first issue, the task state should be set before the condition checks.
Indeed, because dc->writeback_lock is required when modifying all the
conditions, calling set_current_state() inside the code block where dc->
writeback_lock is held is safe. But this is quite implicit, so I still move
set_current_state() before all the condition checks.
For the second issue, frankly speaking it does not hurt when the kernel thread
exits in the TASK_INTERRUPTIBLE state, but this warning message scares users,
making them feel there might be something risky with bcache that could hurt
their data. Setting the task state to TASK_RUNNING before returning fixes this
problem.
In alloc.c:allocator_wait(), there is also a similar issue, and is also
fixed in this patch.
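A sketch of the reordering for the writeback thread loop (the condition is folded into a hypothetical helper here; the real code tests the dirty/detaching state inline):

  down_write(&dc->writeback_lock);
  set_current_state(TASK_INTERRUPTIBLE);  /* before the checks, under the lock */
  if (!writeback_should_run(dc)) {        /* hypothetical condition helper */
          up_write(&dc->writeback_lock);

          if (kthread_should_stop()) {
                  set_current_state(TASK_RUNNING);  /* don't exit INTERRUPTIBLE */
                  return 0;
          }

          schedule();
          continue;
  }
  set_current_state(TASK_RUNNING);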
Changelog:
v3: merge two similar fixes into one patch
v2: fix the race issue in v1 patch.
v1: initial buggy fix.
Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Fri, 2 Feb 2018 15:48:47 +0000 (16:48 +0100)]
cifs: silence compiler warnings showing up with gcc-8.0.0
[ Upstream commit
ade7db991b47ab3016a414468164f4966bd08202 ]
This bug was fixed before, but came up again with the latest
compiler in another function:
fs/cifs/cifssmb.c: In function 'CIFSSMBSetEA':
fs/cifs/cifssmb.c:6362:3: error: 'strncpy' offset 8 is out of the bounds [0, 4] [-Werror=array-bounds]
strncpy(parm_data->list[0].name, ea_name, name_len);
Let's apply the same fix that was used for the other instances.
Fixes:
b2a3ad9ca502 ("cifs: silence compiler warnings showing up with gcc-4.7.0")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexey Dobriyan [Tue, 6 Feb 2018 23:36:59 +0000 (15:36 -0800)]
proc: fix /proc/*/map_files lookup
[ Upstream commit
ac7f1061c2c11bb8936b1b6a94cdb48de732f7a4 ]
Current code does:
if (sscanf(dentry->d_name.name, "%lx-%lx", start, end) != 2)
However sscanf() is broken garbage.
It silently accepts whitespace between format specifiers
(did you know that?).
It silently accepts valid strings which result in integer overflow.
Do not use sscanf() for any even remotely reliable parsing code.
OK
# readlink '/proc/1/map_files/55a23af39000-55a23b05b000'
/lib/systemd/systemd
broken
# readlink '/proc/1/map_files/55a23af39000-55a23b05b000'
/lib/systemd/systemd
broken
# readlink '/proc/1/map_files/55a23af39000-55a23b05b000 '
/lib/systemd/systemd
very broken
# readlink '/proc/1/map_files/1000000000000000055a23af39000-55a23b05b000'
/lib/systemd/systemd
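As an illustration of the strict parsing the message argues for, a hedged sketch (this is not the kernel's actual helper; the name and details are made up): accept only "<lowercase hex>-<lowercase hex>", with no sign, no whitespace and no silent overflow, so inputs like the "very broken" one above are rejected.

  static int parse_map_files_name(const char *name,
                                  unsigned long *start, unsigned long *end)
  {
          unsigned long v[2] = { 0, 0 };
          int digits[2] = { 0, 0 };
          int i = 0;

          for (; *name; name++) {
                  if (*name == '-' && i == 0) {
                          i = 1;
                          continue;
                  }
                  if (!isxdigit(*name) || isupper(*name))
                          return -EINVAL;  /* no spaces, no sign, no "0x" */
                  if (v[i] > ULONG_MAX / 16)
                          return -EINVAL;  /* no silent overflow */
                  v[i] = v[i] * 16 + hex_to_bin(*name);
                  digits[i]++;
          }
          if (i != 1 || !digits[0] || !digits[1])
                  return -EINVAL;
          *start = v[0];
          *end = v[1];
          return 0;
  }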
Andrei said:
: This patch breaks criu. It was a bug in criu. And this bug is on a minor
: path, which works when memfd_create() isn't available. It is a reason why
: I ask to not backport this patch to stable kernels.
:
: In CRIU this bug can be triggered, only if this patch will be backported
: to a kernel which version is lower than v3.16.
Link: http://lkml.kernel.org/r/20171120212706.GA14325@avx2
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Pavel Emelyanov <xemul@openvz.org>
Cc: Andrei Vagin <avagin@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Will Deacon [Wed, 31 Jan 2018 12:12:20 +0000 (12:12 +0000)]
arm64: spinlock: Fix theoretical trylock() A-B-A with LSE atomics
[ Upstream commit
202fb4ef81e3ec765c23bd1e6746a5c25b797d0e ]
If the spinlock "next" ticket wraps around between the initial LDR
and the cmpxchg in the LSE version of spin_trylock, then we can erroneously
think that we have successfully acquired the lock because we only check
whether the next ticket returned by the cmpxchg is equal to the owner ticket
in our updated lock word.
This patch fixes the issue by performing a full 32-bit check of the lock
word when trying to determine whether or not the CASA instruction updated
memory.
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guanglei Li [Tue, 6 Feb 2018 02:43:21 +0000 (10:43 +0800)]
RDS: IB: Fix null pointer issue
[ Upstream commit
2c0aa08631b86a4678dbc93b9caa5248014b4458 ]
Scenario:
1. Port down and do fail over
2. Ap do rds_bind syscall
PID: 47039 TASK: ffff89887e2fe640 CPU: 47 COMMAND: "kworker/u:6"
#0 [ffff898e35f159f0] machine_kexec at ffffffff8103abf9
#1 [ffff898e35f15a60] crash_kexec at ffffffff810b96e3
#2 [ffff898e35f15b30] oops_end at ffffffff8150f518
#3 [ffff898e35f15b60] no_context at ffffffff8104854c
#4 [ffff898e35f15ba0] __bad_area_nosemaphore at ffffffff81048675
#5 [ffff898e35f15bf0] bad_area_nosemaphore at ffffffff810487d3
#6 [ffff898e35f15c00] do_page_fault at ffffffff815120b8
#7 [ffff898e35f15d10] page_fault at ffffffff8150ea95
[exception RIP: unknown or invalid address]
RIP: 0000000000000000 RSP: ffff898e35f15dc8 RFLAGS: 00010282
RAX: 00000000fffffffe RBX: ffff889b77f6fc00 RCX: ffffffff81c99d88
RDX: 0000000000000000 RSI: ffff896019ee08e8 RDI: ffff889b77f6fc00
RBP: ffff898e35f15df0 R8: ffff896019ee08c8 R9: 0000000000000000
R10: 0000000000000400 R11: 0000000000000000 R12: ffff896019ee08c0
R13: ffff889b77f6fe68 R14: ffffffff81c99d80 R15: ffffffffa022a1e0
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#8 [ffff898e35f15dc8] cma_ndev_work_handler at ffffffffa022a228 [rdma_cm]
#9 [ffff898e35f15df8] process_one_work at ffffffff8108a7c6
#10 [ffff898e35f15e58] worker_thread at ffffffff8108bda0
#11 [ffff898e35f15ee8] kthread at ffffffff81090fe6
PID: 45659 TASK: ffff880d313d2500 CPU: 31 COMMAND: "oracle_45659_ap"
#0 [ffff881024ccfc98] __schedule at ffffffff8150bac4
#1 [ffff881024ccfd40] schedule at ffffffff8150c2cf
#2 [ffff881024ccfd50] __mutex_lock_slowpath at ffffffff8150cee7
#3 [ffff881024ccfdc0] mutex_lock at ffffffff8150cdeb
#4 [ffff881024ccfde0] rdma_destroy_id at ffffffffa022a027 [rdma_cm]
#5 [ffff881024ccfe10] rds_ib_laddr_check at ffffffffa0357857 [rds_rdma]
#6 [ffff881024ccfe50] rds_trans_get_preferred at ffffffffa0324c2a [rds]
#7 [ffff881024ccfe80] rds_bind at ffffffffa031d690 [rds]
#8 [ffff881024ccfeb0] sys_bind at ffffffff8142a670
PID: 45659                              PID: 47039
rds_ib_laddr_check
  /* create id_priv with a null event_handler */
  rdma_create_id
  rdma_bind_addr
    cma_acquire_dev
      /* add id_priv to cma_dev->id_list */
      cma_attach_to_dev
                                        cma_ndev_work_handler
                                          /* event_handler is null */
                                          id_priv->id.event_handler
Signed-off-by: Guanglei Li <guanglei.li@oracle.com>
Signed-off-by: Honglei Wang <honglei.wang@oracle.com>
Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Yanjun Zhu <yanjun.zhu@oracle.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Acked-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ross Lagerwall [Thu, 11 Jan 2018 09:36:37 +0000 (09:36 +0000)]
xen/grant-table: Use put_page instead of free_page
[ Upstream commit
3ac7292a25db1c607a50752055a18aba32ac2176 ]
The page given to gnttab_end_foreign_access() to free could be a
compound page so use put_page() instead of free_page() since it can
handle both compound and single pages correctly.
This bug was discovered when migrating a Xen VM with several VIFs and
CONFIG_DEBUG_VM enabled. It hits a BUG usually after fewer than 10
iterations. All netfront devices disconnect from the backend during a
suspend/resume and this will call gnttab_end_foreign_access() if a
netfront queue has an outstanding skb. The mismatch between calling
get_page() and free_page() on a compound page causes a reference
counting error which is detected when DEBUG_VM is enabled.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ross Lagerwall [Thu, 11 Jan 2018 09:36:38 +0000 (09:36 +0000)]
xen-netfront: Fix race between device setup and open
[ Upstream commit
f599c64fdf7d9c108e8717fb04bc41c680120da4 ]
When a netfront device is set up it registers a netdev fairly early on,
before it has set up the queues and is actually usable. A userspace tool
like NetworkManager will immediately try to open it and access its state
as soon as it appears. The bug can be reproduced by hotplugging VIFs
until the VM runs out of grant refs. It registers the netdev but fails
to set up any queues (since there are no more grant refs). In the
meantime, NetworkManager opens the device and the kernel crashes trying
to access the queues (of which there are none).
Fix this in two ways:
* For initial setup, register the netdev much later, after the queues
are setup. This avoids the race entirely.
* During a suspend/resume cycle, the frontend reconnects to the backend
and the queues are recreated. It is possible (though highly unlikely) to
race with something opening the device and accessing the queues after
they have been destroyed but before they have been recreated. Extend the
region covered by the rtnl semaphore to protect against this race. There
is a possibility that we fail to recreate the queues so check for this
in the open function.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matt Redfearn [Mon, 29 Jan 2018 11:26:45 +0000 (11:26 +0000)]
MIPS: TXx9: use IS_BUILTIN() for CONFIG_LEDS_CLASS
[ Upstream commit
0cde5b44a30f1daaef1c34e08191239dc63271c4 ]
When commit
b27311e1cace ("MIPS: TXx9: Add RBTX4939 board support")
added board support for the RBTX4939, it added a call to
led_classdev_register even if the LED class is built as a module.
Built-in arch code cannot call module code directly like this. Commit
b33b44073734 ("MIPS: TXX9: use IS_ENABLED() macro") subsequently
changed the inclusion of this code to a single check that
CONFIG_LEDS_CLASS is either builtin or a module, but the same issue
remains.
This leads to MIPS allmodconfig builds failing when CONFIG_MACH_TX49XX=y
is set:
arch/mips/txx9/rbtx4939/setup.o: In function `rbtx4939_led_probe':
setup.c:(.init.text+0xc0): undefined reference to `of_led_classdev_register'
make: *** [Makefile:999: vmlinux] Error 1
Fix this by using the IS_BUILTIN() macro instead.
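A short sketch of the distinction (illustrative function, not the actual
board file):

#include <linux/init.h>
#include <linux/kconfig.h>

/* IS_ENABLED() is true for =y and =m, so built-in board code guarded by
 * it can still reference symbols that only exist in the leds-class
 * module.  IS_BUILTIN() is true only for =y.
 */
static int __init rbtx4939_led_setup_sketch(void)
{
        if (!IS_BUILTIN(CONFIG_LEDS_CLASS))
                return 0;       /* LEDS_CLASS is =m or =n: leave it alone */

        /* only now is it safe for built-in code to call led_classdev_register() */
        return 0;
}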
Fixes:
b27311e1cace ("MIPS: TXx9: Add RBTX4939 board support")
Signed-off-by: Matt Redfearn <matt.redfearn@mips.com>
Reviewed-by: James Hogan <jhogan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/18544/
Signed-off-by: James Hogan <jhogan@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Hogan [Fri, 2 Feb 2018 22:14:09 +0000 (22:14 +0000)]
MIPS: generic: Fix machine compatible matching
[ Upstream commit
9a9ab3078e2744a1a55163cfaec73a5798aae33e ]
We now have a platform (Ranchu) in the "generic" platform code which
matches based on the FDT compatible string using
mips_machine_is_compatible(); however, that function doesn't stop at a
blank struct of_device_id::compatible entry, because compatible is an
array in the struct, not a pointer to a string.
Fix the loop completion to check the first byte of the compatible array
rather than the address of the compatible array in the struct.
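A standalone sketch of the corrected loop-termination test (the real
check lives in the generic platform code; the helper name is
illustrative):

#include <linux/of.h>

/* of_device_id::compatible is a char array, so its address is never
 * NULL; the sentinel entry must be detected by its first byte instead.
 */
static bool of_id_is_sentinel(const struct of_device_id *m)
{
        return m->compatible[0] == '\0';
}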
Fixes:
eed0eabd12ef ("MIPS: generic: Introduce generic DT-based board support")
Signed-off-by: James Hogan <jhogan@kernel.org>
Reviewed-by: Paul Burton <paul.burton@mips.com>
Reviewed-by: Matt Redfearn <matt.redfearn@mips.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/18580/
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yonghong Song [Sat, 3 Feb 2018 06:37:15 +0000 (22:37 -0800)]
bpf: fix selftests/bpf test_kmod.sh failure when CONFIG_BPF_JIT_ALWAYS_ON=y
[ Upstream commit
09584b406742413ac4c8d7e030374d4daa045b69 ]
With CONFIG_BPF_JIT_ALWAYS_ON defined in the config file,
tools/testing/selftests/bpf/test_kmod.sh fails as shown below:
[root@localhost bpf]# ./test_kmod.sh
sysctl: setting key "net.core.bpf_jit_enable": Invalid argument
[ JIT enabled:0 hardened:0 ]
[ 132.175681] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
[ 132.458834] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
[ JIT enabled:1 hardened:0 ]
[ 133.456025] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
[ 133.730935] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
[ JIT enabled:1 hardened:1 ]
[ 134.769730] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
[ 135.050864] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
[ JIT enabled:1 hardened:2 ]
[ 136.442882] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
[ 136.821810] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
[root@localhost bpf]#
The test_kmod.sh script loads/removes test_bpf.ko multiple times with
different settings for the sysctls net.core.bpf_jit_{enable,harden}. The
failing test #297 in test_bpf.ko is designed such that JIT always fails.
Commit
290af86629b2 (bpf: introduce BPF_JIT_ALWAYS_ON config)
introduced the following tightening logic:
        ...
        if (!bpf_prog_is_dev_bound(fp->aux)) {
                fp = bpf_int_jit_compile(fp);
#ifdef CONFIG_BPF_JIT_ALWAYS_ON
                if (!fp->jited) {
                        *err = -ENOTSUPP;
                        return fp;
                }
#endif
        ...
With this logic, Test #297 always gets return value -ENOTSUPP
when CONFIG_BPF_JIT_ALWAYS_ON is defined, causing the test failure.
This patch fixes the failure by marking test #297 as an expected failure
when CONFIG_BPF_JIT_ALWAYS_ON is defined.
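A hedged sketch of treating the JIT refusal as an expected failure;
FLAG_EXPECTED_FAIL follows test_bpf's existing FLAG_* naming, and the
helper is illustrative rather than the exact upstream hunk:

#include <linux/errno.h>
#include <linux/kconfig.h>
#include <linux/types.h>

#define FLAG_EXPECTED_FAIL      (1 << 1)        /* as in lib/test_bpf.c */

/* -ENOTSUPP from the runtime-selection path means the JIT refused the
 * program; with a mandatory JIT that is the expected outcome for #297.
 */
static bool jit_refusal_is_expected(int err, unsigned int test_flags)
{
        if (!IS_ENABLED(CONFIG_BPF_JIT_ALWAYS_ON))
                return false;

        return err == -ENOTSUPP && (test_flags & FLAG_EXPECTED_FAIL);
}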
Fixes:
290af86629b2 (bpf: introduce BPF_JIT_ALWAYS_ON config)
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hans de Goede [Fri, 26 Jan 2018 15:02:59 +0000 (16:02 +0100)]
ACPI / scan: Use acpi_bus_get_status() to initialize ACPI_TYPE_DEVICE devs
[ Upstream commit
63347db0affadcbccd5613116ea8431c70139b3e ]
The acpi_bus_get_status() wrapper for acpi_bus_get_status_handle() has
some code to handle certain device quirks; in some cases we also need
this quirk handling for the initial _STA call.
Specifically, on some devices, calling _STA before all _DEP dependencies
are met results in errors like these:
[    0.123579] ACPI Error: No handler for Region [ECRM] (00000000ba9edc4c) [GenericSerialBus] (20170831/evregion-166)
[    0.123601] ACPI Error: Region GenericSerialBus (ID=9) has no handler (20170831/exfldio-299)
[    0.123618] ACPI Error: Method parse/execution failed \_SB.I2C1.BAT1._STA, AE_NOT_EXIST (20170831/psparse-550)
acpi_bus_get_status() already has code to avoid this, so by using it we
also silence these errors for the initial _STA call.
Note that in order for the acpi_bus_get_status() handling of this to
work, we initialize dep_unmet to 1 until acpi_device_dep_initialize()
gets called; this means that battery devices will be instantiated with
an initial status of 0. This is not a problem: acpi_bus_attach() will be
called soon after instantiation anyway, and it will update the status as
its first order of business.
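A hedged sketch of the idea (not the upstream diff; the function name is
illustrative):

#include <linux/acpi.h>

/* Route the initial status read through acpi_bus_get_status(), which
 * carries the quirk handling, instead of the raw _STA handle helper.
 * With dep_unmet starting at 1, devices behind unmet _DEP dependencies
 * simply start out with a status of 0 rather than triggering _STA errors.
 */
static void acpi_init_status_sketch(struct acpi_device *adev)
{
        if (acpi_bus_get_status(adev))
                dev_dbg(&adev->dev, "initial _STA evaluation failed\n");
}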
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>