Fabio Estevam [Fri, 5 Dec 2014 18:16:07 +0000 (16:16 -0200)]
ARM: dts: imx25: Fix the SPI1 clocks
commit
7a87e9cbc3a2f0ff0955815335e08c9862359130 upstream.
From Documentation/devicetree/bindings/clock/imx25-clock.txt:
cspi1_ipg 78
cspi2_ipg 79
cspi3_ipg 80
, so fix the SPI1 clocks accordingly to avoid a kernel hang when trying to
access SPI1.
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dmitry Torokhov [Thu, 8 Jan 2015 22:53:23 +0000 (14:53 -0800)]
Input: I8042 - add Acer Aspire 7738 to the nomux list
commit
9333caeaeae4f831054e0e127a6ed3948b604d3e upstream.
When KBC is in active multiplexing mode the touchpad on this laptop does
not work.
Reported-by: Bilal Koc <koc.bilo@googlemail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
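For reference, nomux quirks of this kind are expressed as DMI match entries in the i8042 platform code. Below is a hedged sketch of what such an entry looks like; the exact vendor/product match strings for the Aspire 7738 are an assumption here, not quoted from the patch.
	/* Illustrative i8042 nomux table entry; DMI strings are assumed, not
	 * copied from the commit. */
	{
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 7738"),
		},
	},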
Srihari Vijayaraghavan [Thu, 8 Jan 2015 00:25:53 +0000 (16:25 -0800)]
Input: i8042 - reset keyboard to fix Elantech touchpad detection
commit
148e9a711e034e06310a8c36b64957934ebe30f2 upstream.
On some laptops, the keyboard needs to be reset in order to successfully detect
the touchpad (e.g., some Gigabyte laptop models with Elantech touchpads).
Without resetting the keyboard, the touchpad pretends to be completely dead.
Based on the original patch by Mateusz Jończyk, this version has been
expanded to include DMI-based detection and automatic application of the fix
on the affected laptop models. This has already been confirmed to fix the
problem by three users on three different laptop models.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=81331
Signed-off-by: Srihari Vijayaraghavan <linux.bug.reporting@gmail.com>
Acked-by: Mateusz Jończyk <mat.jonczyk@o2.pl>
Tested-by: Srihari Vijayaraghavan <linux.bug.reporting@gmail.com>
Tested-by: Zakariya Dehlawi <zdehlawi@gmail.com>
Tested-by: Guillaum Bouchard <guillaum.bouchard@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ahmed S. Darwish [Mon, 5 Jan 2015 17:57:13 +0000 (12:57 -0500)]
can: kvaser_usb: Don't send a RESET_CHIP for non-existing channels
commit
5e7e6e0c9b47a45576c38b4a72d67927a5e049f7 upstream.
Recent Leaf firmware versions (>= 3.1.557) do not allow sending
commands for non-existing channels. If a command is sent for a
non-existing channel, the firmware crashes.
Reported-by: Christopher Storah <Christopher.Storah@invetech.com.au>
Signed-off-by: Olivier Sobrie <olivier@sobrie.be>
Signed-off-by: Ahmed S. Darwish <ahmed.darwish@valeo.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ahmed S. Darwish [Mon, 5 Jan 2015 17:52:06 +0000 (12:52 -0500)]
can: kvaser_usb: Reset all URB tx contexts upon channel close
commit
889b77f7fd2bcc922493d73a4c51d8a851505815 upstream.
Flooding the Kvaser CAN to USB dongle with multiple reads and
writes in very high frequency (*), closing the CAN channel while
all the transmissions are on (#), opening the device again (@),
then sending a small number of packets would make the driver
enter an almost infinite loop of:
[....]
[15959.853988] kvaser_usb 4-3:1.0 can0: cannot find free context
[15959.853990] kvaser_usb 4-3:1.0 can0: cannot find free context
[15959.853991] kvaser_usb 4-3:1.0 can0: cannot find free context
[15959.853993] kvaser_usb 4-3:1.0 can0: cannot find free context
[15959.853994] kvaser_usb 4-3:1.0 can0: cannot find free context
[15959.853995] kvaser_usb 4-3:1.0 can0: cannot find free context
[....]
_dragging the whole system down_ in the process due to the
excessive logging output.
Initially, this has caused random panics in the kernel due to a
buggy error recovery path. That got fixed in an earlier commit.(%)
This patch aims at solving the root cause.
16 tx URBs and contexts are allocated per CAN channel per USB
device. Such URBs are protected by:
a) A simple atomic counter, up to a value of MAX_TX_URBS (16)
b) A flag in each URB context, stating if it's free
c) The fact that ndo_start_xmit calls are themselves protected
by the networking layers higher above
After grabbing one of the tx URBs, if the driver noticed that all
of them are now taken, it stops the netif transmission queue.
The queue is woken up again only when an acknowledgment is received
from the firmware for one of our earlier-sent frames.
Meanwhile, upon channel close (#), the driver sends a CMD_STOP_CHIP
to the firmware, effectively closing all further communication. In
the high traffic case, the atomic counter remains at MAX_TX_URBS,
and all the URB contexts remain marked as active. While opening
the channel again (@), it cannot send any further frames since no
more free tx URB contexts are available.
Reset all tx URB contexts upon CAN channel close.
(*) 50 parallel instances of `cangen0 -g 0 -ix`
(#) `ifconfig can0 down`
(@) `ifconfig can0 up`
(%) "can: kvaser_usb: Don't free packets when tight on URBs"
Signed-off-by: Ahmed S. Darwish <ahmed.darwish@valeo.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
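A minimal sketch of the pattern described above, with illustrative structure and field names (the real driver keeps its tx contexts and busy counter in the per-channel private data): on channel close, every context is marked free and the counter returns to zero, so a later open is never starved of tx contexts.
#define MAX_TX_URBS 16

struct tx_urb_context {
	bool free;				/* illustrative; the driver tracks an echo index */
};

static void reset_tx_urb_contexts(struct tx_urb_context *ctx, int *active_tx_urbs)
{
	int i;

	for (i = 0; i < MAX_TX_URBS; i++)
		ctx[i].free = true;		/* make every context reusable again */
	*active_tx_urbs = 0;			/* nothing is in flight after close */
}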
Ahmed S. Darwish [Mon, 5 Jan 2015 17:49:10 +0000 (12:49 -0500)]
can: kvaser_usb: Don't free packets when tight on URBs
commit
b442723fcec445fb0ae1104888dd22cd285e0a91 upstream.
Flooding the Kvaser CAN to USB dongle with multiple reads and
writes in high frequency caused seemingly-random panics in the
kernel.
On further inspection, it seems the driver erroneously freed the
to-be-transmitted packet upon getting tight on URBs and returning
NETDEV_TX_BUSY, leading to invalid memory writes and double frees
at a later point in time.
Note:
Finding no more URBs/transmit-contexts and returning NETDEV_TX_BUSY
is a driver bug in and of itself: it means that our start/stop
queue flow control is broken.
This patch only fixes the (buggy) error handling code; the root
cause shall be fixed in a later commit.
Acked-by: Olivier Sobrie <olivier@sobrie.be>
Signed-off-by: Ahmed S. Darwish <ahmed.darwish@valeo.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
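As a hedged sketch of the corrected error path (helper and structure names are illustrative; only the netdev API calls are real): when no tx context is free, the skb must be left untouched and NETDEV_TX_BUSY returned so the stack retries the very same skb later. Freeing it at that point is what led to the invalid writes and double frees.
static netdev_tx_t example_start_xmit(struct sk_buff *skb, struct net_device *netdev)
{
	struct tx_urb_context *ctx = example_find_free_context(netdev);

	if (!ctx) {
		netif_stop_queue(netdev);	/* apply flow control instead of dropping */
		return NETDEV_TX_BUSY;		/* do NOT free the skb here */
	}

	/* ... build and submit the URB; only a genuine submission failure
	 * consumes the skb, and that path still returns NETDEV_TX_OK ... */
	return NETDEV_TX_OK;
}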
Johan Hovold [Mon, 22 Dec 2014 17:39:39 +0000 (18:39 +0100)]
USB: keyspan: fix null-deref at probe
commit
b5122236bba8d7ef62153da5b55cc65d0944c61e upstream.
Fix null-pointer dereference during probe if the interface-status
completion handler is called before the individual ports have been set
up.
Fixes: f79b2d0fe81e ("USB: keyspan: fix NULL-pointer dereferences and memory leaks")
Reported-by: Richard <richjunk@pacbell.net>
Tested-by: Richard <richjunk@pacbell.net>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Peterson [Tue, 6 Jan 2015 15:00:52 +0000 (15:00 +0000)]
USB: cp210x: add IDs for CEL USB sticks and MeshWorks devices
commit
1ae78a4870989a354028cb17dabf819b595e70e3 upstream.
Added virtual com port VID/PID entries for CEL USB sticks and MeshWorks
devices.
Signed-off-by: David Peterson <david.peterson@cel.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Preston Fick [Sat, 27 Dec 2014 07:32:41 +0000 (01:32 -0600)]
USB: cp210x: fix ID for production CEL MeshConnect USB Stick
commit
90441b4dbe90ba0c38111ea89fa093a8c9627801 upstream.
Fixing typo for MeshConnect IDs. The original PID (0x8875) is not in
production and is not needed. Instead it has been changed to the
official production PID (0x8857).
Signed-off-by: Preston Fick <pffick@gmail.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Amit Virdi [Tue, 13 Jan 2015 08:57:21 +0000 (14:27 +0530)]
usb: dwc3: gadget: Stop TRB preparation after limit is reached
commit
39e60635a01520e8c8ed3946a28c2b98e6a46f79 upstream.
DWC3 gadget sets up a pool of 32 TRBs for each EP during initialization. This
means the maximum number of TRBs that can be submitted for an EP is fixed at 32.
Since the request queue for an EP is a linked list, any number of requests can be
queued to it by the gadget layer. However, the dwc3 driver must not submit more
TRBs than the pool it has created. This limit wasn't respected when SG was used,
resulting in more than the maximum number of TRBs being submitted and, eventually,
in the TRBs submitted over the limit never being transferred.
Root cause:
When SG is used, there are two loops iterating to prepare TRBs:
- Outer loop over the request_list
- Inner loop over the SG list
The code was missing a break to get out of the outer loop.
Fixes:
eeb720fb21d6 (usb: dwc3: gadget: add support for SG lists)
Signed-off-by: Amit Virdi <amit.virdi@st.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
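The control flow of the fix, sketched with illustrative names (the real code lives in the dwc3 TRB preparation path): once the TRB pool limit is hit inside the SG loop, the outer loop over the request list has to stop as well, which is the break that was missing.
	list_for_each_entry_safe(req, n, &dep->request_list, list) {
		bool trbs_left = true;

		for_each_sg(req->sg, s, req->num_mapped_sgs, i) {
			trbs_left = example_prepare_one_trb(dep, req, s);
			if (!trbs_left)
				break;		/* TRB pool (32 per EP) exhausted */
		}

		if (!trbs_left)
			break;			/* the previously missing break out
						 * of the outer request_list loop */
	}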
Amit Virdi [Tue, 13 Jan 2015 08:57:20 +0000 (14:27 +0530)]
usb: dwc3: gadget: Fix TRB preparation during SG
commit
ec512fb8e5611fed1df2895f90317ce6797d6b32 upstream.
When scatter gather (SG) is used, multiple TRBs are prepared from one DWC3
request (dwc3_request). So while preparing TRBs, the 'last' flag should be set
only when it is the last TRB being prepared from the last dwc3_request entry.
The current implementation uses list_is_last to check if the dwc3_request is the
last entry from the request_list. However, list_is_last returns false for the
last entry too. This is because, while preparing the first TRB from a request,
the function dwc3_prepare_one_trb modifies the request's next and prev pointers
while moving the URB to req_queued. Hence, list_is_last always returns false no
matter what.
The correct way is not to access the modified pointers of dwc3_request but to
use list_empty macro instead.
Fixes:
e5ba5ec833aa (usb: dwc3: gadget: fix scatter gather implementation)
Signed-off-by: Amit Virdi <amit.virdi@st.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
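Sketched in isolation (names abbreviated from the driver), the "last TRB" decision after the fix keys off whether the request list has drained, rather than off list_is_last() against pointers that dwc3_prepare_one_trb() has already relinked:
	/* while preparing the final TRB of the final SG entry: */
	if (list_empty(&dep->request_list))	/* nothing else queued behind us */
		last_one = true;		/* so mark this TRB as the real last one */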
Arseny Solokha [Sat, 6 Dec 2014 02:54:06 +0000 (09:54 +0700)]
OHCI: add a quirk for ULi M5237 blocking on reset
commit
56abcab833fafcfaeb2f5b25e0364c1dec45f53e upstream.
Commit 8dccddbc2368 ("OHCI: final fix for NVIDIA problems (I hope)"),
introduced in 3.1.9, broke boot on e.g. the Freescale P2020DS development
board. The code path that was previously specific to NVIDIA controllers
then became the path taken for all chips.
However, the M5237 installed on the board wedges solid when accessing
its base+OHCI_FMINTERVAL register, making it impossible to boot any
kernel newer than 3.1.8 on this particular and apparently other similar
machines.
Don't readl() and writel() base+OHCI_FMINTERVAL on PCI ID 10b9:5237.
The patch is suitable for the -next tree as well as all maintained
kernels up to 3.2 inclusive.
Signed-off-by: Arseny Solokha <asolokha@kb.kras.ru>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
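A hedged sketch of the quirk (abbreviated from the USB handoff code; variable names are illustrative): the OHCI_FMINTERVAL save/restore is simply skipped when the controller identifies as the ULi M5237, PCI ID 10b9:5237.
#define OHCI_FMINTERVAL	0x34	/* standard OHCI HcFmInterval register offset */

	if (!(pdev->vendor == 0x10b9 && pdev->device == 0x5237)) {
		fminterval = readl(base + OHCI_FMINTERVAL);
		/* ... reset the controller ... */
		writel(fminterval, base + OHCI_FMINTERVAL);
	}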
Hans Holmberg [Fri, 9 Jan 2015 08:40:43 +0000 (09:40 +0100)]
gpiolib: of: Correct error handling in of_get_named_gpiod_flags
commit
7b8792bbdffdff3abda704f89c6a45ea97afdc62 upstream.
of_get_named_gpiod_flags fails with -EPROBE_DEFER in cases
where the gpio chip is available and the GPIO translation fails.
This causes drivers to be re-probed erroneously, and hides the
real problem (i.e. the GPIO number being out of range).
Signed-off-by: Hans Holmberg <hans.holmberg@intel.com>
Reviewed-by: Alexandre Courbot <acourbot@nvidia.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Trond Myklebust [Fri, 2 Jan 2015 21:25:08 +0000 (16:25 -0500)]
NFSv4.1: Fix client id trunking on Linux
commit
1fc0703af3143914a389bfa081c7acb09502ed5d upstream.
Currently, our trunking code will check for session trunking, but will
fail to detect client id trunking. This is a problem, because it means
that the client will fail to recognise that the two connections represent
shared state, even if they do not permit a shared session.
By removing the check for the server minor id, and only checking the
major id, we will end up doing the right thing in both cases: we close
down the new nfs_client and fall back to using the existing one.
Fixes:
05f4c350ee02e ("NFS: Discover NFSv4 server trunking when mounting")
Cc: Chuck Lever <chuck.lever@oracle.com>
Tested-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steven Rostedt (Red Hat) [Mon, 12 Jan 2015 17:12:03 +0000 (12:12 -0500)]
ftrace/jprobes/x86: Fix conflict between jprobes and function graph tracing
commit
237d28db036e411f22c03cfd5b0f6dc2aa9bf3bc upstream.
If the function graph tracer traces a jprobe callback, the system will
crash. This can easily be demonstrated by compiling the jprobe
sample module that is in the kernel tree, loading it and running the
function graph tracer.
# modprobe jprobe_example.ko
# echo function_graph > /sys/kernel/debug/tracing/current_tracer
# ls
The first two commands end up in a nice crash after the first fork.
(do_fork has a jprobe attached to it, so "ls" just triggers that fork)
The problem is caused by the jprobe_return() that all jprobe callbacks
must end with. The way jprobes works is that the function a jprobe
is attached to has a breakpoint placed at the start of it (or it uses
ftrace if fentry is supported). The breakpoint handler (or ftrace callback)
will copy the stack frame and change the ip address to return to the
jprobe handler instead of the function. The jprobe handler must end
with jprobe_return() which swaps the stack and does an int3 (breakpoint).
This breakpoint handler will then put back the saved stack frame,
simulate the instruction at the beginning of the function it added
a breakpoint to, and then continue on.
For function tracing to work, it hijacks the return address from the
stack frame, and replaces it with a hook function that will trace
the end of the call. This hook function will restore the return
address of the function call.
If the function tracer traces the jprobe handler, the hook function
for that handler will not be called, and its saved return address
will be used for the next function. This will result in a kernel crash.
To solve this, pause function tracing before the jprobe handler is called
and unpause it before it returns back to the function it probed.
Some other updates:
Used a variable "saved_sp" to hold kcb->jprobe_saved_sp. This makes the
code look a bit cleaner and easier to understand (various tries to fix
this bug required this change).
Note, if fentry is being used, jprobes will change the ip address before
the function graph tracer runs and it will not be able to trace the
function that the jprobe is probing.
Link: http://lkml.kernel.org/r/20150114154329.552437962@goodmis.org
Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
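In outline, and hedged (the surrounding x86 kprobes code is abbreviated), the fix brackets the jprobe handler with the existing ftrace pause helpers so the graph tracer never hooks the handler's return address:
	/* before handing control to the jprobe handler: */
	pause_graph_tracing();
	regs->ip = (unsigned long)jp->entry;

	/* after jprobe_return() traps back into the break handler: */
	unpause_graph_tracing();
	/* ... restore the saved stack frame and resume the probed function ... */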
Wei Yang [Wed, 7 Jan 2015 17:29:11 +0000 (10:29 -0700)]
vfio-pci: Fix the check on pci device type in vfio_pci_probe()
commit
7c2e211f3c95b91912a92a8c6736343690042e2e upstream.
Current vfio-pci supports only normal PCI devices, so vfio_pci_probe() will
return if the PCI device is not a normal device. However, the current code makes a
mistake: PCI_HEADER_TYPE is the offset of the header type field in the device's
configuration space, but the code uses this value to mask the type value.
This patch fixes this by doing the check directly on pci_dev->hdr_type.
Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
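The corrected check, as described above, compares the header type directly instead of masking with the PCI_HEADER_TYPE config-space offset (0x0e); a sketch:
static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	if (pdev->hdr_type != PCI_HEADER_TYPE_NORMAL)
		return -EINVAL;		/* only plain endpoints are supported */

	/* ... */
}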
Takashi Iwai [Fri, 24 Oct 2014 08:10:20 +0000 (05:10 -0300)]
uvcvideo: Fix destruction order in uvc_delete()
commit
2228d80dd05a4fc5a410fde847677b8fb3eb23d7 upstream.
We've got a bug report at disconnecting a Webcam, where the kernel
spews warnings like below:
WARNING: CPU: 0 PID: 8385 at ../fs/sysfs/group.c:219 sysfs_remove_group+0x87/0x90()
sysfs group c0b2350c not found for kobject 'event3'
CPU: 0 PID: 8385 Comm: queue2:src Not tainted 3.16.2-1.gdcee397-default #1
Hardware name: ASUSTeK Computer INC. A7N8X-E/A7N8X-E, BIOS ASUS A7N8X-E Deluxe ACPI BIOS Rev 1013 11/12/2004
 c08d0705 ddc75cbc c0718c5b ddc75ccc c024b654 c08c6d44 ddc75ce8 000020c1
 c08d0705 000000db c03d1ec7 c03d1ec7 00000009 00000000 c0b2350c d62c9064
 ddc75cd4 c024b6a3 00000009 ddc75ccc c08c6d44 ddc75ce8 ddc75cfc c03d1ec7
Call Trace:
 [<c0205ba6>] try_stack_unwind+0x156/0x170
 [<c02046f3>] dump_trace+0x53/0x180
 [<c0205c06>] show_trace_log_lvl+0x46/0x50
 [<c0204871>] show_stack_log_lvl+0x51/0xe0
 [<c0205c67>] show_stack+0x27/0x50
 [<c0718c5b>] dump_stack+0x3e/0x4e
 [<c024b654>] warn_slowpath_common+0x84/0xa0
 [<c024b6a3>] warn_slowpath_fmt+0x33/0x40
 [<c03d1ec7>] sysfs_remove_group+0x87/0x90
 [<c05a2c54>] device_del+0x34/0x180
 [<c05e3989>] evdev_disconnect+0x19/0x50
 [<c05e06fa>] __input_unregister_device+0x9a/0x140
 [<c05e0845>] input_unregister_device+0x45/0x80
 [<f854b1d6>] uvc_delete+0x26/0x110 [uvcvideo]
 [<f84d66f8>] v4l2_device_release+0x98/0xc0 [videodev]
 [<c05a25bb>] device_release+0x2b/0x90
 [<c04ad8bf>] kobject_cleanup+0x6f/0x1a0
 [<f84d5453>] v4l2_release+0x43/0x70 [videodev]
 [<c0372f31>] __fput+0xb1/0x1b0
 [<c02650c1>] task_work_run+0x91/0xb0
 [<c024d845>] do_exit+0x265/0x910
 [<c024df64>] do_group_exit+0x34/0xa0
 [<c025a76f>] get_signal_to_deliver+0x17f/0x590
 [<c0201b6a>] do_signal+0x3a/0x960
 [<c02024f7>] do_notify_resume+0x67/0x90
 [<c071ebb5>] work_notifysig+0x30/0x3b
 [<b7739e60>] 0xb7739e5f
---[ end trace b1e56095a485b631 ]---
The cause is that uvc_status_cleanup() is called after usb_put_*() in
uvc_delete(). usb_put_*() removes the sysfs parent and eventually
removes the children recursively, so the later device_del() can't find
its sysfs entry. The fix is simply to rearrange the call order in
uvc_delete() so that the child is removed before the parent.
Bugzilla: https://bugzilla.suse.com/show_bug.cgi?id=897736
Reported-and-tested-by: Martin Pluskal <mpluskal@suse.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
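Schematically (the function is abbreviated here), the fixed uvc_delete() tears the children down while their sysfs parent still exists, and drops the USB references last:
static void uvc_delete(struct uvc_device *dev)
{
	/* ... */
	uvc_status_cleanup(dev);	/* unregister the input child first */
	uvc_ctrl_cleanup_device(dev);

	usb_put_intf(dev->intf);	/* the sysfs parent goes away last */
	usb_put_dev(dev->udev);
	/* ... */
}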
Sakari Ailus [Tue, 16 Sep 2014 18:57:07 +0000 (15:57 -0300)]
smiapp: Take mutex during PLL update in sensor initialisation
commit
f85698cd296f08218a7750f321e94607da128600 upstream.
The mutex does not serialise anything in this case but avoids a lockdep
warning from the control framework.
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Frank Schaefer [Mon, 29 Sep 2014 18:17:35 +0000 (15:17 -0300)]
af9005: fix kernel panic on init if compiled without IR
commit
2279948735609d0d17d7384e776b674619f792ef upstream.
This patch fixes an ancient bug in the dvb_usb_af9005 driver, which
has been reported at least in the following threads:
https://lkml.org/lkml/2009/2/4/350
https://lkml.org/lkml/2014/9/18/558
If the driver is compiled in without any IR support (neither
DVB_USB_AF9005_REMOTE nor custom symbols), the symbol_request calls in
af9005_usb_module_init() return pointers != NULL although the IR
symbols are not available.
This leads to the following oops:
...
[ 8.529751] usbcore: registered new interface driver dvb_usb_af9005
[ 8.531584] BUG: unable to handle kernel paging request at 02e00000
[ 8.533385] IP: [<7d9d67c6>] af9005_usb_module_init+0x6b/0x9d
[ 8.535613] *pde = 00000000
[ 8.536416] Oops: 0000 [#1] PREEMPT PREEMPT DEBUG_PAGEALLOCDEBUG_PAGEALLOC
[ 8.537863] CPU: 0 PID: 1 Comm: swapper Not tainted 3.15.0-rc6-00151-ga5c075c #1
[ 8.539827] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 8.541519] task: 89c9a670 ti: 89c9c000 task.ti: 89c9c000
[ 8.541519] EIP: 0060:[<7d9d67c6>] EFLAGS: 00010206 CPU: 0
[ 8.541519] EIP is at af9005_usb_module_init+0x6b/0x9d
[ 8.541519] EAX: 02e00000 EBX: 00000000 ECX: 00000006 EDX: 00000000
[ 8.541519] ESI: 00000000 EDI: 7da33ec8 EBP: 89c9df30 ESP: 89c9df2c
[ 8.541519] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068
[ 8.541519] CR0: 8005003b CR2: 02e00000 CR3: 05a54000 CR4: 00000690
[ 8.541519] Stack:
[ 8.541519]  7d9d675b 89c9df90 7d992a49 7d7d5914 89c9df4c 7be3a800 7d08c58c 8a4c3968
[ 8.541519]  89c9df80 7be3a966 00000192 00000006 00000006 7d7d3ff4 8a4c397a 00000200
[ 8.541519]  7d6b1280 8a4c3979 00000006 000009a6 7da32db8 b13eec81 00000006 000009a6
[ 8.541519] Call Trace:
[ 8.541519]  [<7d9d675b>] ? ttusb2_driver_init+0x16/0x16
[ 8.541519]  [<7d992a49>] do_one_initcall+0x77/0x106
[ 8.541519]  [<7be3a800>] ? parameqn+0x2/0x35
[ 8.541519]  [<7be3a966>] ? parse_args+0x113/0x25c
[ 8.541519]  [<7d992bc2>] kernel_init_freeable+0xea/0x167
[ 8.541519]  [<7cf01070>] kernel_init+0x8/0xb8
[ 8.541519]  [<7cf27ec0>] ret_from_kernel_thread+0x20/0x30
[ 8.541519]  [<7cf01068>] ? rest_init+0x10c/0x10c
[ 8.541519] Code: 08 c2 c7 05 44 ed f9 7d 00 00 e0 02 c7 05 40 ed f9 7d 00 00 e0 02 c7 05 3c ed f9 7d 00 00 e0 02 75 1f b8 00 00 e0 02 85 c0 74 16 <a1> 00 00 e0 02 c7 05 54 84 8e 7d 00 00 e0 02 a3 58 84 8e 7d eb
[ 8.541519] EIP: [<7d9d67c6>] af9005_usb_module_init+0x6b/0x9d SS:ESP 0068:89c9df2c
[ 8.541519] CR2: 0000000002e00000
[ 8.541519] ---[ end trace 768b6faf51370fc7 ]---
The preferred fix would be to convert the whole IR code to use the kernel IR
infrastructure (which wasn't available at the time this driver was created).
Until anyone who still has this old hardware steps up and does the conversion,
fix it by not making the symbol_request calls if the driver is compiled in
without the default IR symbols (CONFIG_DVB_USB_AF9005_REMOTE).
Since the IR-related pointers are NULL by default, IR support will then be disabled.
The downside of this solution is that it will no longer be possible to
compile custom IR symbols (not using CONFIG_DVB_USB_AF9005_REMOTE) in.
Please note that this patch has NOT been tested with all possible cases.
I don't have the hardware and could only verify that it fixes the reported
bug.
Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Frank Schäfer <fschaefer.oss@googlemail.com>
Acked-by: Luca Olivetti <luca@ventoso.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sakari Ailus [Tue, 1 Apr 2014 13:22:46 +0000 (10:22 -0300)]
smiapp-pll: Correct clock debug prints
commit
bc47150ab93988714d1fab7bc82fe5f505a107ad upstream.
The PLL flags were not used correctly.
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tomi Valkeinen [Thu, 18 Dec 2014 11:40:06 +0000 (13:40 +0200)]
video/logo: prevent use of logos after they have been freed
commit
92b004d1aa9f367c372511ca0330f58216b25703 upstream.
If the probe of an fb driver has been deferred due to missing
dependencies, and the probe is later ran when a module is loaded, the
fbdev framework will try to find a logo to use.
However, the logos are __initdata, and have already been freed. This
causes sometimes page faults, if the logo memory is not mapped,
sometimes other random crashes as the logo data is invalid, and
sometimes nothing, if the fbdev decides to reject the logo (e.g. the
random value depicting the logo's height is too big).
This patch adds a late_initcall function to mark the logos as freed. In
reality the logos are freed later, and fbdev probe may be ran between
this late_initcall and the freeing of the logos. In that case we will
miss drawing the logo, even if it would be possible.
Signed-off-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
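Roughly, the fix looks like the sketch below (abbreviated, not a quote of the patch): a late_initcall() sets a flag that the logo lookup checks, so a deferred fbdev probe that runs after init memory has been released simply gets no logo instead of dereferencing freed __initdata.
static bool logos_freed;

static int __init fb_logo_late_init(void)
{
	logos_freed = true;		/* from now on, treat the logos as gone */
	return 0;
}
late_initcall(fb_logo_late_init);

const struct linux_logo *fb_find_logo(int depth)
{
	if (logos_freed)
		return NULL;		/* too late: the __initdata logos are invalid */
	/* ... normal logo lookup ... */
}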
Long Li [Sat, 6 Dec 2014 03:38:18 +0000 (19:38 -0800)]
storvsc: ring buffer failures may result in I/O freeze
commit
e86fb5e8ab95f10ec5f2e9430119d5d35020c951 upstream.
When ring buffer returns an error indicating retry, storvsc may not
return a proper error code to SCSI when bounce buffer is not used.
This has introduced I/O freeze on RAID running atop storvsc devices.
This patch fixes it by always returning a proper error code.
Signed-off-by: Long Li <longli@microsoft.com>
Reviewed-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Fri, 21 Nov 2014 04:50:07 +0000 (20:50 -0800)]
iscsi-target: Fail connection on short sendmsg writes
commit
6bf6ca7515c1df06f5c03737537f5e0eb191e29e upstream.
This patch changes iscsit_do_tx_data() to fail on short writes
when kernel_sendmsg() returns a value different than requested
transfer length, returning -EPIPE and thus causing a connection
reset to occur.
This avoids a potential bug in the original code where a short
write would result in kernel_sendmsg() being called again with
the original iovec base + length.
In practice this has not been an issue because iscsit_do_tx_data()
is only used for transferring 48 byte headers + 4 byte digests,
along with seldom used control payloads from NOPIN + TEXT_RSP +
REJECT with less than 32k of data.
So following Al's audit of iovec consumers, go ahead and fail
the connection on short writes for now, and remove the bogus
logic ahead of his proper upstream fix.
Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
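A hedged sketch of the new behaviour (abbreviated from iscsit_do_tx_data(); variable names are illustrative): any kernel_sendmsg() result that differs from the requested length is turned into -EPIPE so the connection is reset, instead of retrying with the stale iovec.
	ret = kernel_sendmsg(conn->sock, &msg, iov, iov_count, tx_size);
	if (ret != tx_size) {
		if (ret >= 0)
			ret = -EPIPE;	/* short write: fail the connection */
		return ret;
	}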
Dominique Leuenberger [Thu, 13 Nov 2014 19:57:37 +0000 (20:57 +0100)]
hp_accel: Add support for HP ZBook 15
commit
6583659e0f92e38079a8dd081e0a1181a0f37747 upstream.
HP ZBook 15 laptop needs a non-standard mapping (x_inverted).
BugLink: http://bugzilla.opensuse.org/show_bug.cgi?id=905329
Signed-off-by: Dominique Leuenberger <dimstar@opensuse.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jouni Malinen [Thu, 11 Dec 2014 21:48:55 +0000 (23:48 +0200)]
cfg80211: Fix 160 MHz channels with 80+80 and 160 MHz drivers
commit
08f6f147773b23b765b94633a8eaa82e7defcf4c upstream.
The VHT supported channel width field is a two bit integer, not a
bitfield. cfg80211_chandef_usable() was interpreting it incorrectly and
ended up rejecting 160 MHz channel width if the driver indicated support
for both 160 and 80+80 MHz channels.
Fixes:
3d9d1d6656a73 ("nl80211/cfg80211: support VHT channel configuration")
(however, no real drivers had 160 MHz support until 3.16)
Signed-off-by: Jouni Malinen <jouni@qca.qualcomm.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vineet Gupta [Wed, 1 Oct 2014 08:58:36 +0000 (14:28 +0530)]
ARC: [nsimosci] move peripherals to match model to FPGA
commit
e8ef060b37c2d3cc5fd0c0edbe4e42ec1cb9768b upstream.
This allows the sdplite/Zebu images to run on OSCI simulation platform
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chris Wilson [Tue, 16 Dec 2014 08:44:32 +0000 (08:44 +0000)]
drm/i915: Force the CS stall for invalidate flushes
commit
add284a3a2481e759d6bec35f6444c32c8ddc383 upstream.
In order to act as a full command barrier by itself, we need to tell the
pipecontrol to actually stall the command streamer while the flush runs.
We require the full command barrier before operations like
MI_SET_CONTEXT, which currently rely on a prior invalidate flush.
References: https://bugs.freedesktop.org/show_bug.cgi?id=83677
Cc: Simon Farnsworth <simon@farnz.org.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chris Wilson [Tue, 16 Dec 2014 08:44:31 +0000 (08:44 +0000)]
drm/i915: Invalidate media caches on gen7
commit
148b83d0815a3778c8949e6a97cb798cbaa0efb3 upstream.
In the gen7 pipe control there is an extra bit to flush the media
caches, so let's set it during cache invalidation flushes.
v2: Rename to MEDIA_STATE_CLEAR to be more inline with spec.
Cc: Simon Farnsworth <simon@farnz.org.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Wed, 10 Dec 2014 14:42:10 +0000 (09:42 -0500)]
drm/radeon: properly filter DP1.2 4k modes on non-DP1.2 hw
commit
410cce2a6b82299b46ff316c6384e789ce275ecb upstream.
The check was already in place in the dp mode_valid check, but
radeon_dp_get_dp_link_clock() never returned the high clock
mode_valid was checking for because that function clipped the
clock based on the hw capabilities. Add an explicit check
in the mode_valid function.
bug:
https://bugs.freedesktop.org/show_bug.cgi?id=87172
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Wed, 3 Dec 2014 05:03:49 +0000 (00:03 -0500)]
drm/radeon: check the right ring in radeon_evict_flags()
commit
5e5c21cac1001089007260c48b0c89ebaace0e71 upstream.
Check that the ring we are using for copies is functional,
rather than checking the GFX ring. On newer asics we use the DMA
ring for bo moves.
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Hellstrom [Tue, 2 Dec 2014 11:36:57 +0000 (03:36 -0800)]
drm/vmwgfx: Fix fence event code
commit
89669e7a7f96be3ee8d9a22a071d7c0d3b4428fc upstream.
The commit "vmwgfx: Rework fence event action" introduced a number of bugs
that are fixed with this commit:
a) A forgotten return statement.
b) An if statement with identical branches.
Reported-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Jakob Bornecrantz <jakob@vmware.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Govindarajulu Varadarajan [Thu, 18 Dec 2014 10:28:42 +0000 (15:58 +0530)]
enic: fix rx skb checksum
[ Upstream commit
17e96834fd35997ca7cdfbf15413bcd5a36ad448 ]
Hardware always provides the complement of the IP pseudo checksum. The stack expects
the whole-packet checksum without the pseudo checksum if CHECKSUM_COMPLETE is set.
This causes checksum errors in nf & ovs.
kernel: qg-19546f09-f2: hw csum failure
kernel: CPU: 9 PID: 0 Comm: swapper/9 Tainted: GF O-------------- 3.10.0-123.8.1.el7.x86_64 #1
kernel: Hardware name: Cisco Systems Inc UCSB-B200-M3/UCSB-B200-M3, BIOS B200M3.2.2.3.0.080820141339 08/08/2014
kernel:  ffff881218f40000 df68243feb35e3a8 ffff881237a43ab8 ffffffff815e237b
kernel:  ffff881237a43ad0 ffffffff814cd4ca ffff8829ec71eb00 ffff881237a43af0
kernel:  ffffffff814c6232 0000000000000286 ffff8829ec71eb00 ffff881237a43b00
kernel: Call Trace:
kernel: <IRQ> [<ffffffff815e237b>] dump_stack+0x19/0x1b
kernel: [<ffffffff814cd4ca>] netdev_rx_csum_fault+0x3a/0x40
kernel: [<ffffffff814c6232>] __skb_checksum_complete_head+0x62/0x70
kernel: [<ffffffff814c6251>] __skb_checksum_complete+0x11/0x20
kernel: [<ffffffff8155a20c>] nf_ip_checksum+0xcc/0x100
kernel: [<ffffffffa049edc7>] icmp_error+0x1f7/0x35c [nf_conntrack_ipv4]
kernel: [<ffffffff814cf419>] ? netif_rx+0xb9/0x1d0
kernel: [<ffffffffa040eb7b>] ? internal_dev_recv+0xdb/0x130 [openvswitch]
kernel: [<ffffffffa04c8330>] nf_conntrack_in+0xf0/0xa80 [nf_conntrack]
kernel: [<ffffffff81509380>] ? inet_del_offload+0x40/0x40
kernel: [<ffffffffa049e302>] ipv4_conntrack_in+0x22/0x30 [nf_conntrack_ipv4]
kernel: [<ffffffff815005ca>] nf_iterate+0xaa/0xc0
kernel: [<ffffffff81509380>] ? inet_del_offload+0x40/0x40
kernel: [<ffffffff81500664>] nf_hook_slow+0x84/0x140
kernel: [<ffffffff81509380>] ? inet_del_offload+0x40/0x40
kernel: [<ffffffff81509dd4>] ip_rcv+0x344/0x380
Hardware verifies the IP and tcp/udp header checksums but does not provide the payload
checksum, so use CHECKSUM_UNNECESSARY. Set it only if it is a valid IP tcp/udp packet.
Cc: Jiri Benc <jbenc@redhat.com>
Cc: Stefan Assmann <sassmann@redhat.com>
Reported-by: Sunil Choudhary <schoudha@redhat.com>
Signed-off-by: Govindarajulu Varadarajan <_govind@gmx.com>
Reviewed-by: Jiri Benc <jbenc@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
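The receive-path change, sketched below (the descriptor flag names are assumptions): the driver stops advertising CHECKSUM_COMPLETE with the pseudo-header complement and instead claims CHECKSUM_UNNECESSARY, and only when the hardware validated both the IP and TCP/UDP checksums.
	if ((netdev->features & NETIF_F_RXCSUM) && tcp_udp_csum_ok && ipv4_csum_ok)
		skb->ip_summed = CHECKSUM_UNNECESSARY;	/* hw verified the headers */
	/* otherwise leave CHECKSUM_NONE and let the stack verify in software */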
Eric Dumazet [Sun, 11 Jan 2015 18:32:18 +0000 (10:32 -0800)]
alx: fix alx_poll()
[ Upstream commit
7a05dc64e2e4c611d89007b125b20c0d2a4d31a5 ]
Commit d75b1ade567f ("net: less interrupt masking in NAPI") uncovered
wrong alx_poll() behavior.
A NAPI poll() handler is supposed to return exactly the budget when/if
napi_complete() has not been called.
It is also supposed to return the number of frames that were received, so
that netdev_budget can have a meaning.
Also, in case of TX pressure, we still have to dequeue received
packets: alx_clean_rx_irq() has to be called even if
alx_clean_tx_irq(alx) returns false, otherwise the device is half duplex.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes:
d75b1ade567f ("net: less interrupt masking in NAPI")
Reported-by: Oded Gabbay <oded.gabbay@amd.com>
Bisected-by: Oded Gabbay <oded.gabbay@amd.com>
Tested-by: Oded Gabbay <oded.gabbay@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
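Shaped the way the description above demands, a NAPI poll handler looks roughly like this sketch (helper names are illustrative): RX is always cleaned, the full budget is returned whenever napi_complete() was not called, and otherwise the number of received frames is returned.
static int example_poll(struct napi_struct *napi, int budget)
{
	struct example_priv *priv = container_of(napi, struct example_priv, napi);
	bool tx_done = example_clean_tx_irq(priv);
	int work_done = example_clean_rx_irq(priv, budget);	/* always dequeue RX */

	if (!tx_done || work_done == budget)
		return budget;			/* not finished: NAPI will poll again */

	napi_complete(napi);
	example_enable_irqs(priv);
	return work_done;			/* lets netdev_budget mean something */
}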
Herbert Xu [Wed, 31 Dec 2014 13:39:23 +0000 (00:39 +1100)]
tcp: Do not apply TSO segment limit to non-TSO packets
[ Upstream commit
843925f33fcc293d80acf2c5c8a78adf3344d49b ]
Thomas Jarosch reported IPsec TCP stalls when a PMTU event occurs.
In fact the problem was completely unrelated to IPsec. The bug is
also reproducible if you just disable TSO/GSO.
The problem is that when the MSS goes down, existing queued packets
on the TX queue that have not been transmitted yet all look like
TSO packets and get treated as such.
This then triggers a bug where tcp_mss_split_point tells us to
generate a zero-sized packet on the TX queue. Once that happens
we're screwed because the zero-sized packet can never be removed
by ACKs.
Fixes:
1485348d242 ("tcp: Apply device TSO segment limit earlier")
Reported-by: Thomas Jarosch <thomas.jarosch@intra2net.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Prashant Sreedharan [Sat, 20 Dec 2014 20:16:17 +0000 (12:16 -0800)]
tg3: tg3_disable_ints using uninitialized mailbox value to disable interrupts
[ Upstream commit
05b0aa579397b734f127af58e401a30784a1e315 ]
During driver load in tg3_init_one, if the driver detects DMA activity before
initializing the chip, tg3_halt is called. As part of tg3_halt, interrupts are
disabled using the routine tg3_disable_ints. This routine was using a mailbox value
which was not initialized (default value is 0). As a result the driver was writing
0x00000001 to PCI config space register 0, which is the vendor id / device id.
This driver bug was exposed because of commit a7877b17a667 (PCI: Check only
the Vendor ID to identify Configuration Request Retry). Also, this issue is only
seen in older generation chipsets like 5722 because a config space write to offset
0 from the driver is possible. The newer generation chips ignore writes to offset 0.
Also, without commit a7877b17a667, for these older chips, when a GRC reset is
issued the Bootcode would reprogram the vendor id/device id, which is the reason
this bug was masked earlier.
Fixed by initializing the interrupt mailbox registers before calling tg3_halt.
Please queue for -stable.
Reported-by: Nils Holland <nholland@tisys.org>
Reported-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Prashant Sreedharan <prashant@broadcom.com>
Signed-off-by: Michael Chan <mchan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Graf [Thu, 18 Dec 2014 10:30:26 +0000 (10:30 +0000)]
netlink: Don't reorder loads/stores before marking mmap netlink frame as available
[ Upstream commit
a18e6a186f53af06937a2c268c72443336f4ab56 ]
Each mmap Netlink frame contains a status field which indicates
whether the frame is unused, reserved, contains data or needs to
be skipped. Both loads and stores may not be reordered and must
complete before the status field is changed and another CPU might
pick up the frame for use. Use an smp_mb() to cover needs of both
types of callers to netlink_set_status(), callers which have been
reading data frame from the frame, and callers which have been
filling or releasing and thus writing to the frame.
- Example code path requiring a smp_rmb():
memcpy(skb->data, (void *)hdr + NL_MMAP_HDRLEN, hdr->nm_len);
netlink_set_status(hdr, NL_MMAP_STATUS_UNUSED);
- Example code path requiring a smp_wmb():
hdr->nm_uid = from_kuid(sk_user_ns(sk), NETLINK_CB(skb).creds.uid);
hdr->nm_gid = from_kgid(sk_user_ns(sk), NETLINK_CB(skb).creds.gid);
netlink_frame_flush_dcache(hdr);
netlink_set_status(hdr, NL_MMAP_STATUS_VALID);
Fixes: f9c228 ("netlink: implement memory mapped recvmsg()")
Reported-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
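The barrier placement described above boils down to a full memory barrier immediately before the status store, covering both kinds of callers; a sketch of netlink_set_status() after the fix (abbreviated):
static void netlink_set_status(struct nl_mmap_hdr *hdr, unsigned int status)
{
	smp_mb();			/* order earlier loads and stores ... */
	hdr->nm_status = status;	/* ... before the frame changes hands */
}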
David Miller [Tue, 16 Dec 2014 22:58:17 +0000 (17:58 -0500)]
netlink: Always copy on mmap TX.
[ Upstream commit
4682a0358639b29cf69437ed909c6221f8c89847 ]
Checking the file f_count and the nlk->mapped count is not completely
sufficient to prevent the mmap'd area contents from changing from
under us during netlink mmap sendmsg() operations.
Be careful to sample the header's length field only once, because this
could change from under us as well.
Fixes:
5fd96123ee19 ("netlink: implement memory mapped sendmsg()")
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Acked-by: Thomas Graf <tgraf@suug.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Fri, 16 Jan 2015 15:00:00 +0000 (07:00 -0800)]
Linux 3.10.65
Linus Torvalds [Sun, 11 Jan 2015 19:33:57 +0000 (11:33 -0800)]
mm: Don't count the stack guard page towards RLIMIT_STACK
commit
690eac53daff34169a4d74fc7bfbd388c4896abb upstream.
Commit fee7e49d4514 ("mm: propagate error from stack expansion even for
guard page") made sure that we return the error properly for stack
growth conditions. It also theorized that counting the guard page
towards the stack limit might break something, but also said "Let's see
if anybody notices".
Somebody did notice. Apparently android-x86 sets the stack limit very
close to the limit indeed, and including the guard page in the rlimit
check causes the android 'zygote' process problems.
So this adds the (fairly trivial) code to make the stack rlimit check be
against the actual real stack size, rather than the size of the vma that
includes the guard page.
Reported-and-tested-by: Chih-Wei Huang <cwhuang@android-x86.org>
Cc: Jay Foad <jay.foad@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
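A hedged sketch of the rlimit comparison after the change (abbreviated from acct_stack_growth()): the guard page is subtracted from the vma size before the RLIMIT_STACK comparison, so the limit applies to the usable stack only.
	unsigned long actual_size = size;

	if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
		actual_size -= PAGE_SIZE;		/* don't count the guard page */

	if (actual_size > rlimit(RLIMIT_STACK))
		return -ENOMEM;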
Linus Torvalds [Tue, 6 Jan 2015 21:00:05 +0000 (13:00 -0800)]
mm: propagate error from stack expansion even for guard page
commit
fee7e49d45149fba60156f5b59014f764d3e3728 upstream.
Jay Foad reports that the address sanitizer test (asan) sometimes gets
confused by a stack pointer that ends up being outside the stack vma
that is reported by /proc/maps.
This happens due to an interaction between RLIMIT_STACK and the guard
page: when we do the guard page check, we ignore the potential error
from the stack expansion, which effectively results in a missing guard
page, since the expected stack expansion won't have been done.
And since /proc/maps explicitly ignores the guard page (commit d7824370e263:
"mm: fix up some user-visible effects of the stack guard page"), the stack
pointer ends up being outside the reported stack area.
This is the minimal patch: it just propagates the error. It also
effectively makes the guard page part of the stack limit, which in turn
means that the actual real stack is one page less than the stack limit.
Let's see if anybody notices. We could teach acct_stack_growth() to
allow an extra page for a grow-up/grow-down stack in the rlimit test,
but I don't want to add more complexity if it isn't needed.
Reported-and-tested-by: Jay Foad <jay.foad@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vlastimil Babka [Thu, 8 Jan 2015 22:32:40 +0000 (14:32 -0800)]
mm, vmscan: prevent kswapd livelock due to pfmemalloc-throttled process being killed
commit
9e5e3661727eaf960d3480213f8e87c8d67b6956 upstream.
Charles Shirron and Paul Cassella from Cray Inc have reported kswapd
stuck in a busy loop with nothing left to balance, but
kswapd_try_to_sleep() failing to sleep. Their analysis found the cause
to be a combination of several factors:
1. A process is waiting in throttle_direct_reclaim() on pgdat->pfmemalloc_wait
2. The process has been killed (by OOM in this case), but has not yet been
scheduled to remove itself from the waitqueue and die.
3. kswapd checks for throttled processes in prepare_kswapd_sleep():
if (waitqueue_active(&pgdat->pfmemalloc_wait)) {
wake_up(&pgdat->pfmemalloc_wait);
return false; // kswapd will not go to sleep
}
However, for a process that was already killed, wake_up() does not remove
the process from the waitqueue, since try_to_wake_up() checks its state
first and returns false when the process is no longer waiting.
4. kswapd is running on the same CPU as the only CPU that the process is
allowed to run on (through cpus_allowed, or possibly single-cpu system).
5. CONFIG_PREEMPT_NONE=y kernel is used. If there's nothing to balance, kswapd
encounters no voluntary preemption points and repeatedly fails
prepare_kswapd_sleep(), blocking the process from running and removing
itself from the waitqueue, which would let kswapd sleep.
So, the source of the problem is that we prevent kswapd from going to
sleep until there are processes waiting on the pfmemalloc_wait queue,
and a process waiting on a queue is guaranteed to be removed from the
queue only when it gets scheduled. This was done to make sure that no
process is left sleeping on pfmemalloc_wait when kswapd itself goes to
sleep.
However, it isn't necessary to postpone kswapd sleep until the
pfmemalloc_wait queue actually empties. To prevent processes from being
left sleeping, it's actually enough to guarantee that all processes
waiting on pfmemalloc_wait queue have been woken up by the time we put
kswapd to sleep.
This patch therefore fixes this issue by substituting 'wake_up' with
'wake_up_all' and removing 'return false' in the code snippet from
prepare_kswapd_sleep() above. Note that if any process puts itself in
the queue after this waitqueue_active() check, or after the wake up
itself, it means that the process will also wake up kswapd - and since
we are under prepare_to_wait(), the wake up won't be missed. Also we
update the comment prepare_kswapd_sleep() to hopefully more clearly
describe the races it is preventing.
Fixes:
5515061d22f0 ("mm: throttle direct reclaimers if PF_MEMALLOC reserves are low and swap is backed by network storage")
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
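Concretely, the snippet quoted above ends up as in the sketch below: every throttled waiter (including already-killed ones) is woken with wake_up_all(), and the early "return false" is gone, so kswapd is allowed to sleep.
	if (waitqueue_active(&pgdat->pfmemalloc_wait))
		wake_up_all(&pgdat->pfmemalloc_wait);

	/* no early "return false" any more: kswapd may go to sleep, and any
	 * process queueing up after this point will wake it again anyway */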
Jiri Olsa [Wed, 26 Nov 2014 15:39:31 +0000 (16:39 +0100)]
perf session: Do not fail on processing out of order event
commit
f61ff6c06dc8f32c7036013ad802c899ec590607 upstream.
Linus reported perf report command being interrupted due to processing
of 'out of order' event, with following error:
Timestamp below last timeslice flush
0x5733a8 [0x28]: failed to process type: 3
I could reproduce the issue and in my case it was caused by one CPU
(mmap) being behind during record and userspace mmap reader seeing the
data after other CPUs data were already stored.
This is expected under some circumstances because we need to limit the
number of events that we queue for reordering when we receive a
PERF_RECORD_FINISHED_ROUND or when we force flush due to memory
pressure.
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Fleming <matt.fleming@intel.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Stephane Eranian <eranian@google.com>
Link: http://lkml.kernel.org/r/1417016371-30249-1-git-send-email-jolsa@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
[zhangzhiqiang: backport to 3.10:
- adjust context
- in commit f61ff6c06d, struct events_stats was defined in tools/perf/util/event.h,
  while 3.10 stable defines it in tools/perf/util/hist.h
- 3.10 stable has no pr_oe_time(), which is used for debug
- after the above adjustments, this becomes the same as the original patch:
  https://github.com/torvalds/linux/commit/f61ff6c06dc8f32c7036013ad802c899ec590607
]
Signed-off-by: Zhiqiang Zhang <zhangzhiqiang.zhang@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Olsa [Wed, 10 Dec 2014 20:23:51 +0000 (21:23 +0100)]
perf: Fix events installation during moving group
commit
9fc81d87420d0d3fd62d5e5529972c0ad9eab9cc upstream.
We allow PMU driver to change the cpu on which the event
should be installed to. This happened in patch:
e2d37cd213dc ("perf: Allow the PMU driver to choose the CPU on which to install events")
This patch also forces all the group members to follow
the currently opened events cpu if the group happened
to be moved.
This and the change of event->cpu in perf_install_in_context()
function introduced in:
0cda4c023132 ("perf: Introduce perf_pmu_migrate_context()")
forces group members to change their event->cpu,
if the currently-opened-event's PMU changed the cpu
and there is a group move.
Above behaviour causes problem for breakpoint events,
which uses event->cpu to touch cpu specific data for
breakpoints accounting. By changing event->cpu, some
breakpoints slots were wrongly accounted for given
cpu.
Vinces's perf fuzzer hit this issue and caused following
WARN on my setup:
WARNING: CPU: 0 PID: 20214 at arch/x86/kernel/hw_breakpoint.c:119 arch_install_hw_breakpoint+0x142/0x150()
Can't find any breakpoint slot
[...]
This patch changes the group moving code to keep the event's
original cpu.
Reported-by: Vince Weaver <vince@deater.net>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Vince Weaver <vince@deater.net>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Link: http://lkml.kernel.org/r/1418243031-20367-3-git-send-email-jolsa@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Olsa [Wed, 10 Dec 2014 20:23:50 +0000 (21:23 +0100)]
perf/x86/intel/uncore: Make sure only uncore events are collected
commit
af91568e762d04931dcbdd6bef4655433d8b9418 upstream.
The uncore_collect_events function assumes that an event group
contains only uncore events, which is wrong, because it
might contain any type of event.
This bug leads to the uncore framework touching non-uncore events,
which could end up in all sorts of bugs.
One was triggered by Vince's perf fuzzer, when the uncore code
touched breakpoint event private event space as if it was uncore
event and caused BUG:
BUG: unable to handle kernel paging request at ffffffff82822068
IP: [<ffffffff81020338>] uncore_assign_events+0x188/0x250
...
The code in uncore_assign_events() function was looking for
event->hw.idx data while the event was initialized as a
breakpoint with different members in event->hw union.
This patch forces uncore_collect_events() to collect only uncore
events.
Reported-by: Vince Weaver <vince@deater.net>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Link: http://lkml.kernel.org/r/1418243031-20367-2-git-send-email-jolsa@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
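The filtering this describes can be sketched as below; the helper name and the task_ctx_nr test mirror the idea that uncore PMUs have no task context, but treat the exact form as an assumption rather than a quote of the patch.
static bool is_uncore_event(struct perf_event *event)
{
	/* uncore PMUs are per-package and never have a task context */
	return event->pmu->task_ctx_nr == perf_invalid_context;
}

	/* in uncore_collect_events(), while walking the group siblings: */
	if (!is_uncore_event(event))
		continue;		/* never touch breakpoint/software siblings */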
Chris Mason [Wed, 31 Dec 2014 17:18:29 +0000 (12:18 -0500)]
Btrfs: don't delay inode ref updates during log replay
commit
6f8960541b1eb6054a642da48daae2320fddba93 upstream.
Commit 1d52c78afbb (Btrfs: try not to ENOSPC on log replay) added a
check to skip delayed inode updates during log replay because it
confuses the enospc code. But the delayed processing will end up
ignoring delayed refs from log replay because the inode itself wasn't
put through the delayed code.
This can end up triggering a warning at commit time:
WARNING: CPU: 2 PID: 778 at fs/btrfs/delayed-inode.c:1410 btrfs_assert_delayed_root_empty+0x32/0x34()
Which is repeated for each commit because we never process the delayed
inode ref update.
The fix used here is to change btrfs_delayed_delete_inode_ref to return
an error if we're currently in log replay. The caller will do the ref
deletion immediately and everything will work properly.
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Petazzoni [Thu, 13 Nov 2014 09:38:57 +0000 (10:38 +0100)]
ARM: mvebu: disable I/O coherency on non-SMP situations on Armada 370/375/38x/XP
commit
e55355453600a33bb5ca4f71f2d7214875f3b061 upstream.
Enabling the hardware I/O coherency on Armada 370, Armada 375, Armada
38x and Armada XP requires a certain number of conditions:
- On Armada 370, the cache policy must be set to write-allocate.
- On Armada 375, 38x and XP, the cache policy must be set to
write-allocate, the pages must be mapped with the shareable
attribute, and the SMP bit must be set
Currently, on Armada XP, when CONFIG_SMP is enabled, those conditions
are met. However, when Armada XP is used in a !CONFIG_SMP kernel, none
of these conditions are met. With Armada 370, the situation is worse:
since the processor is single core, regardless of whether CONFIG_SMP
or !CONFIG_SMP is used, the cache policy will be set to write-back by
the kernel and not write-allocate.
Since solving this problem turns out to be quite complicated, and we
don't want to let users run a mainline kernel that is known to have
infrequent but real data corruptions, this commit proposes to
simply disable hardware I/O coherency in situations where it is known
not to work.
And basically, the is_smp() function of the kernel tells us whether it
is OK to enable hardware I/O coherency or not, so this commit slightly
refactors the coherency_type() function to return
COHERENCY_FABRIC_TYPE_NONE when is_smp() is false, or the appropriate
type of the coherency fabric in the other case.
Thanks to this, the I/O coherency fabric will no longer be used at all
in !CONFIG_SMP configurations. It will continue to be used in
CONFIG_SMP configurations on Armada XP, Armada 375 and Armada 38x
(which are multiple cores processors), but will no longer be used on
Armada 370 (which is a single core processor).
In the process, it simplifies the implementation of the
coherency_type() function, and adds a missing call to of_node_put().
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Fixes:
e60304f8cb7bb545e79fe62d9b9762460c254ec2 ("arm: mvebu: Add hardware I/O Coherency support")
Cc: <stable@vger.kernel.org> # v3.8+
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Link: https://lkml.kernel.org/r/1415871540-20302-3-git-send-email-thomas.petazzoni@free-electrons.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Wed, 10 Dec 2014 23:41:28 +0000 (15:41 -0800)]
scripts/kernel-doc: don't eat struct members with __aligned
commit
7b990789a4c3420fa57596b368733158e432d444 upstream.
The change from \d+ to .+ inside __aligned() means that the following
structure:
struct test {
u8 a __aligned(2);
u8 b __aligned(2);
};
essentially gets modified to
struct test {
u8 a;
};
for purposes of kernel-doc, thus dropping a struct member, which in
turn causes warnings and invalid kernel-doc generation.
Fix this by replacing the catch-all (".") with anything that's not a
semicolon ("[^;]").
Fixes:
9dc30918b23f ("scripts/kernel-doc: handle struct member __aligned without numbers")
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Cc: Nishanth Menon <nm@ti.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Michal Marek <mmarek@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ryusuke Konishi [Wed, 10 Dec 2014 23:54:34 +0000 (15:54 -0800)]
nilfs2: fix the nilfs_iget() vs. nilfs_new_inode() races
commit
705304a863cc41585508c0f476f6d3ec28cf7e00 upstream.
Same story as in commit 41080b5a2401 ("nfsd race fixes: ext2") (a similar
ext2 fix), except that nilfs2 needs to use insert_inode_locked4() instead
of insert_inode_locked(), and a bug in the check for dead inodes needs to
be fixed.
If nilfs_iget() is called from nfsd after nilfs_new_inode() calls
insert_inode_locked4(), nilfs_iget() will wait for unlock_new_inode() at
the end of nilfs_mkdir()/nilfs_create()/etc to unlock the inode.
If nilfs_iget() is called before nilfs_new_inode() calls
insert_inode_locked4(), it will create an in-core inode and read its
data from the on-disk inode. But, nilfs_iget() will find i_nlink equals
zero and fail at nilfs_read_inode_common(), which will lead it to call
iget_failed() and cleanly fail.
However, this sanity check doesn't work as expected for reused on-disk
inodes because they leave a non-zero value in i_mode field and it
hinders the test of i_nlink. This patch also fixes the issue by
removing the test on i_mode that nilfs2 doesn't need.
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Benjamin Coddington [Sun, 7 Dec 2014 21:05:47 +0000 (16:05 -0500)]
nfsd4: fix xdr4 inclusion of escaped char
commit
5a64e56976f1ba98743e1678c0029a98e9034c81 upstream.
Fix a bug where nfsd4_encode_components_esc() includes the esc_end char as
an additional string encoding.
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Fixes:
e7a0444aef4a "nfsd: add IPv6 addr escaping to fs_location hosts"
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rasmus Villemoes [Fri, 5 Dec 2014 15:40:07 +0000 (16:40 +0100)]
fs: nfsd: Fix signedness bug in compare_blob
commit
ef17af2a817db97d42dd2ec0a425231748e23dbc upstream.
Bugs similar to the one in acbbe6fbb240 (kcmp: fix standard comparison
bug) are in rich supply.
In this variant, the problem is that struct xdr_netobj::len has type
unsigned int, so the expression o1->len - o2->len _also_ has type
unsigned int; it has completely well-defined semantics, and the result
is some non-negative integer, which is always representable in a long
long. But this means that if the conditional triggers, we are
guaranteed to return a positive value from compare_blob.
In this case it could be fixed by
- res = o1->len - o2->len;
+ res = (long long)o1->len - (long long)o2->len;
but I'd rather eliminate the usually broken 'return a - b;' idiom.
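For illustration, a minimal standalone sketch of the broken idiom and one safe alternative (hypothetical names, not the nfsd code):

  #include <stdio.h>

  struct blob { unsigned int len; const unsigned char *data; };

  /* Broken: o1->len - o2->len is computed in unsigned arithmetic, so a
   * "negative" difference wraps to a huge positive value before the
   * implicit conversion to long long. */
  long long compare_len_broken(const struct blob *o1, const struct blob *o2)
  {
          return o1->len - o2->len;
  }

  /* Safe: return the sign explicitly instead of relying on subtraction. */
  int compare_len_fixed(const struct blob *o1, const struct blob *o2)
  {
          if (o1->len != o2->len)
                  return o1->len < o2->len ? -1 : 1;
          return 0;
  }

  int main(void)
  {
          struct blob a = { .len = 1 }, b = { .len = 2 };

          printf("broken: %lld\n", compare_len_broken(&a, &b)); /* 4294967295, not -1 */
          printf("fixed:  %d\n", compare_len_fixed(&a, &b));    /* -1 */
          return 0;
  }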
Reviewed-by: Jeff Layton <jlayton@primarydata.com>
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Robert Baldyga [Mon, 24 Nov 2014 06:56:21 +0000 (07:56 +0100)]
serial: samsung: wait for transfer completion before clock disable
commit
1ff383a4c3eda8893ec61b02831826e1b1f46b41 upstream.
This patch adds waiting until the transmit buffer and shifter are empty
before disabling the clock.
Without this fix it's possible to have the clock disabled while data has
not been transmitted yet, which causes an improper state of the TX line
and problems in the following data transfers.
Signed-off-by: Robert Baldyga <r.baldyga@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tejun Heo [Fri, 24 Oct 2014 19:38:21 +0000 (15:38 -0400)]
writeback: fix a subtle race condition in I_DIRTY clearing
commit
9c6ac78eb3521c5937b2dd8a7d1b300f41092f45 upstream.
After invoking ->dirty_inode(), __mark_inode_dirty() does smp_mb() and
tests inode->i_state locklessly to see whether it already has all the
necessary I_DIRTY bits set. The comment above the barrier doesn't
contain any useful information - memory barriers can't ensure "changes
are seen by all cpus" by itself.
And it sure enough was broken. Please consider the following
scenario.
CPU 0 CPU 1
-------------------------------------------------------------------------------
enters __writeback_single_inode()
grabs inode->i_lock
tests PAGECACHE_TAG_DIRTY which is clear
enters __set_page_dirty()
grabs mapping->tree_lock
sets PAGECACHE_TAG_DIRTY
releases mapping->tree_lock
leaves __set_page_dirty()
enters __mark_inode_dirty()
smp_mb()
sees I_DIRTY_PAGES set
leaves __mark_inode_dirty()
clears I_DIRTY_PAGES
releases inode->i_lock
Now @inode has dirty pages w/ I_DIRTY_PAGES clear. This doesn't seem
to lead to an immediately critical problem because requeue_inode()
later checks PAGECACHE_TAG_DIRTY instead of I_DIRTY_PAGES when
deciding whether the inode needs to be requeued for IO and there are
enough unintentional memory barriers inbetween, so while the inode
ends up with inconsistent I_DIRTY_PAGES flag, it doesn't fall off the
IO list.
The lack of explicit barrier may also theoretically affect the other
I_DIRTY bits which deal with metadata dirtiness. There is no
guarantee that a strong enough barrier exists between
I_DIRTY_[DATA]SYNC clearing and write_inode() writing out the dirtied
inode. Filesystem inode writeout path likely has enough stuff which
can behave as full barrier but it's theoretically possible that the
writeout may not see all the updates from ->dirty_inode().
Fix it by adding an explicit smp_mb() after I_DIRTY clearing. Note
that I_DIRTY_PAGES needs a special treatment as it always needs to be
cleared to be interlocked with the lockless test on
__mark_inode_dirty() side. It's cleared unconditionally and
reinstated after smp_mb() if the mapping still has dirty pages.
Also add comments explaining how and why the barriers are paired.
Lightly tested.
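The barrier pairing can be mirrored in a userspace sketch with C11 atomics; the names below are stand-ins for inode->i_state and PAGECACHE_TAG_DIRTY, not the real kernel state:

  #include <stdatomic.h>
  #include <stdbool.h>

  #define I_DIRTY_PAGES 0x1

  static atomic_uint i_state;              /* stands in for inode->i_state */
  static atomic_bool pagecache_tag_dirty;  /* stands in for PAGECACHE_TAG_DIRTY */

  /* Dirtying side, as in __mark_inode_dirty(): publish the dirty tag first,
   * then test i_state locklessly. */
  void mark_inode_dirty_pages(void)
  {
          atomic_store(&pagecache_tag_dirty, true);
          atomic_thread_fence(memory_order_seq_cst);   /* pairs with the fence below */
          if (atomic_load(&i_state) & I_DIRTY_PAGES)
                  return;                              /* already marked, nothing to do */
          atomic_fetch_or(&i_state, I_DIRTY_PAGES);
  }

  /* Writeback side: clear the flag unconditionally, issue a full fence, then
   * re-check the tag and reinstate the flag if pages were dirtied meanwhile.
   * The paired fences guarantee at least one side observes the other's store. */
  void writeback_single_inode(void)
  {
          atomic_fetch_and(&i_state, ~I_DIRTY_PAGES);
          atomic_thread_fence(memory_order_seq_cst);
          if (atomic_load(&pagecache_tag_dirty))
                  atomic_fetch_or(&i_state, I_DIRTY_PAGES);
  }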
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Mikulas Patocka <mpatocka@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oliver Neukum [Thu, 20 Nov 2014 13:54:35 +0000 (14:54 +0100)]
cdc-acm: memory leak in error case
commit
d908f8478a8d18e66c80a12adb27764920c1f1ca upstream.
If probe() fails not only the attributes need to be removed
but also the memory freed.
Reported-by: Ahmed Tamrawi <ahmedtamrawi@gmail.com>
Signed-off-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jens Axboe [Wed, 19 Nov 2014 20:06:22 +0000 (13:06 -0700)]
genhd: check for int overflow in disk_expand_part_tbl()
commit
5fabcb4c33fe11c7e3afdf805fde26c1a54d0953 upstream.
We can get here from blkdev_ioctl() -> blkpg_ioctl() -> add_partition()
with a user passed in partno value. If we pass in 0x7fffffff, the
new target in disk_expand_part_tbl() overflows the 'int' and we
access beyond the end of ptbl->part[] and even write to it when we
do the rcu_assign_pointer() to assign the new partition.
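A minimal userspace sketch of the overflow guard, assuming a hypothetical expand_part_tbl() stand-in rather than the real block-layer code:

  #include <limits.h>
  #include <stdio.h>

  int expand_part_tbl(int current_len, int partno)
  {
          int target;

          /* Reject values for which 'partno + 1' would overflow int;
           * partno comes straight from a blkpg ioctl, so it is
           * user-controlled. */
          if (partno < 0 || partno == INT_MAX)
                  return -1;

          target = partno + 1;            /* new minimum table length */
          if (target <= current_len)
                  return 0;               /* already big enough */

          printf("would reallocate table from %d to %d slots\n",
                 current_len, target);
          return 0;
  }

  int main(void)
  {
          expand_part_tbl(16, 31);                 /* ordinary grow */
          if (expand_part_tbl(16, INT_MAX) < 0)    /* rejected, no overflow */
                  printf("rejected out-of-range partno\n");
          return 0;
  }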
Reported-by: David Ramos <daramos@stanford.edu>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Fri, 7 Nov 2014 16:48:15 +0000 (08:48 -0800)]
USB: cdc-acm: check for valid interfaces
commit
403dff4e2c94f275e24fd85f40b2732ffec268a1 upstream.
We need to check that we have both a valid data and control interface for both
types of headers (union and non-union).
References: https://bugzilla.kernel.org/show_bug.cgi?id=83551
Reported-by: Simon Schubert <2+kernel@0x2c.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Mon, 5 Jan 2015 12:27:33 +0000 (13:27 +0100)]
ALSA: hda - Fix wrong gpio_dir & gpio_mask hint setups for IDT/STAC codecs
commit
c507de88f6a336bd7296c9ec0073b2d4af8b4f5e upstream.
stac_store_hints() masks the values for gpio_dir and gpio_data
incorrectly, likely due to copy&paste errors. Fortunately,
this feature is used very rarely, so the impact must be really small.
Reported-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Carpenter [Wed, 26 Nov 2014 22:34:43 +0000 (01:34 +0300)]
ALSA: hda - using uninitialized data
commit
69eba10e606a80665f8573221fec589430d9d1cb upstream.
In olden times the snd_hda_param_read() function always set "*start_id"
but in 2007 we introduced a new return and it causes uninitialized data
bugs in a couple of the callers: print_codec_info() and
hdmi_parse_codec().
Fixes:
e8a7f136f5ed ('[ALSA] hda-intel - Improve HD-audio codec probing robustness')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Jaburek [Thu, 18 Dec 2014 01:03:19 +0000 (02:03 +0100)]
ALSA: usb-audio: extend KEF X300A FU 10 tweak to Arcam rPAC
commit
d70a1b9893f820fdbcdffac408c909c50f2e6b43 upstream.
The Arcam rPAC seems to have the same problem - whenever anything
(alsamixer, udevd, 3.9+ kernel from
60af3d037eb8c, ..) attempts to
access mixer / control interface of the card, the firmware "locks up"
the entire device, resulting in
SNDRV_PCM_IOCTL_HW_PARAMS failed (-5): Input/output error
from alsa-lib.
Other operating systems can somehow read the mixer (there seems to be
playback volume/mute), but any manipulation is ignored by the device
(which has hardware volume controls).
Signed-off-by: Jiri Jaburek <jjaburek@redhat.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Williamson [Fri, 31 Oct 2014 17:13:07 +0000 (11:13 -0600)]
driver core: Fix unbalanced device reference in drivers_probe
commit
bb34cb6bbd287b57e955bc5cfd42fcde6aaca279 upstream.
bus_find_device_by_name() acquires a device reference which is never
released. This results in an object leak, which on older kernels
results in failure to release all resources of PCI devices. libvirt
uses drivers_probe to re-attach devices to the host after assignment
and is therefore a common trigger for this leak.
Example:
# cd /sys/bus/pci/
# dmesg -C
# echo 1 > devices/0000\:01\:00.0/sriov_numvfs
# echo 0 > devices/0000\:01\:00.0/sriov_numvfs
# dmesg | grep 01:10
pci 0000:01:10.0: [8086:10ca] type 00 class 0x020000
kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_add_internal: parent: '0000:00:01.0', set: 'devices'
kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_uevent_env
kobject: '0000:01:10.0' (ffff8801d79cd0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_uevent_env
kobject: '0000:01:10.0' (ffff8801d79cd0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_uevent_env
kobject: '0000:01:10.0' (ffff8801d79cd0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
kobject: '0000:01:10.0' (ffff8801d79cd0a8): kobject_cleanup, parent (null)
kobject: '0000:01:10.0' (ffff8801d79cd0a8): calling ktype release
kobject: '0000:01:10.0': free name
[kobject freed as expected]
# dmesg -C
# echo 1 > devices/0000\:01\:00.0/sriov_numvfs
# echo 0000:01:10.0 > drivers_probe
# echo 0 > devices/0000\:01\:00.0/sriov_numvfs
# dmesg | grep 01:10
pci 0000:01:10.0: [8086:10ca] type 00 class 0x020000
kobject: '0000:01:10.0' (ffff8801d79ce0a8): kobject_add_internal: parent: '0000:00:01.0', set: 'devices'
kobject: '0000:01:10.0' (ffff8801d79ce0a8): kobject_uevent_env
kobject: '0000:01:10.0' (ffff8801d79ce0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
kobject: '0000:01:10.0' (ffff8801d79ce0a8): kobject_uevent_env
kobject: '0000:01:10.0' (ffff8801d79ce0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
kobject: '0000:01:10.0' (ffff8801d79ce0a8): kobject_uevent_env
kobject: '0000:01:10.0' (ffff8801d79ce0a8): fill_kobj_path: path = '/devices/pci0000:00/0000:00:01.0/0000:01:10.0'
[no free]
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andy Lutomirski [Sun, 21 Dec 2014 16:57:46 +0000 (08:57 -0800)]
x86, vdso: Use asm volatile in __getcpu
commit
1ddf0b1b11aa8a90cef6706e935fc31c75c406ba upstream.
In Linux 3.18 and below, GCC hoists the lsl instructions in the
pvclock code all the way to the beginning of __vdso_clock_gettime,
slowing the non-paravirt case significantly. For unknown reasons,
presumably related to the removal of a branch, the performance issue
is gone as of
e76b027e6408 x86,vdso: Use LSL unconditionally for vgetcpu
but I don't trust GCC enough to expect the problem to stay fixed.
There should be no correctness issue, because the __getcpu calls in
__vdso_clock_gettime were never necessary in the first place.
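For illustration, a sketch of the kind of change involved (x86-64 GCC/Clang inline asm, simplified; not claimed to be the exact vdso source): without "volatile", an asm statement whose only effect is its outputs may be hoisted out of branches or merged by the compiler, which is exactly the hoisting described above.

  static inline unsigned int getcpu_segment(unsigned int seg_selector)
  {
          unsigned int p;

          /* LSL reads the segment limit; the vdso encodes the CPU number
           * there.  "volatile" pins the instruction where it is written. */
          asm volatile ("lsl %1, %0" : "=r" (p) : "r" (seg_selector));

          return p;
  }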
Note to stable maintainers: In 3.18 and below, depending on
configuration, gcc 4.9.2 generates code like this:
9c3: 44 0f 03 e8 lsl %ax,%r13d
9c7: 45 89 eb mov %r13d,%r11d
9ca: 0f 03 d8 lsl %ax,%ebx
This patch won't apply as is to any released kernel, but I'll send a
trivial backported version if needed.
[
Backported by Andy Lutomirski. Should apply to all affected
versions. This fixes a functionality bug as well as a performance
bug: buggy kernels can infinite loop in __vdso_clock_gettime on
affected compilers. See, for example:
https://bugzilla.redhat.com/show_bug.cgi?id=1178975
]
Fixes:
51c19b4f5927 x86: vdso: pvclock gettime support
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andy Lutomirski [Sat, 20 Dec 2014 00:04:11 +0000 (16:04 -0800)]
x86_64, vdso: Fix the vdso address randomization algorithm
commit
394f56fe480140877304d342dec46d50dc823d46 upstream.
The theory behind vdso randomization is that it's mapped at a random
offset above the top of the stack. To avoid wasting a page of
memory for an extra page table, the vdso isn't supposed to extend
past the lowest PMD into which it can fit. Other than that, the
address should be a uniformly distributed address that meets all of
the alignment requirements.
The current algorithm is buggy: the vdso has about a 50% probability
of being at the very end of a PMD. The current algorithm also has a
decent chance of failing outright due to incorrect handling of the
case where the top of the stack is near the top of its PMD.
This fixes the implementation. The paxtest estimate of vdso
"randomisation" improves from 11 bits to 18 bits. (Disclaimer: I
don't know what the paxtest code is actually calculating.)
It's worth noting that this algorithm is inherently biased: the vdso
is more likely to end up near the end of its PMD than near the
beginning. Ideally we would either nix the PMD sharing requirement
or jointly randomize the vdso and the stack to reduce the bias.
In the mean time, this is a considerable improvement with basically
no risk of compatibility issues, since the allowed outputs of the
algorithm are unchanged.
As an easy test, doing this:
for i in `seq 10000`
do grep -P vdso /proc/self/maps |cut -d- -f1
done |sort |uniq -d
used to produce lots of output (1445 lines on my most recent run).
A tiny subset looks like this:
7fffdfffe000
7fffe01fe000
7fffe05fe000
7fffe07fe000
7fffe09fe000
7fffe0bfe000
7fffe0dfe000
Note the suspicious fe000 endings. With the fix, I get a much more
palatable 76 repeated addresses.
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Giedrius Statkevičius [Fri, 26 Dec 2014 22:28:30 +0000 (00:28 +0200)]
HID: Add a new id 0x501a for Genius MousePen i608X
commit
2bacedada682d5485424f5227f27a3d5d6eb551c upstream.
New Genius MousePen i608X devices have a new id 0x501a instead of the
old 0x5011 so add a new #define with "_2" appended and change required
places.
The remaining two checkpatch warnings about line length
being over 80 characters are present in the original files too and this
patch was made in the same style (no line break).
Just adding a new id and changing the required places should make the
new device work without any issues according to the bug report in the
following url.
This patch was made according to and fixes:
https://bugzilla.kernel.org/show_bug.cgi?id=67111
Signed-off-by: Giedrius Statkevičius <giedrius.statkevicius@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Karl Relton [Tue, 16 Dec 2014 15:37:22 +0000 (15:37 +0000)]
HID: add battery quirk for USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO keyboard
commit
da940db41dcf8c04166f711646df2f35376010aa upstream.
Apple bluetooth wireless keyboard (sold in UK) has always reported zero
for battery strength no matter what condition the batteries are actually
in. With this patch applied (applying same quirk as other Apple
keyboards), the battery strength is now correctly reported.
Signed-off-by: Karl Relton <karllinuxtest.relton@ntlworld.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Carpenter [Fri, 9 Jan 2015 12:32:31 +0000 (15:32 +0300)]
HID: roccat: potential out of bounds in pyra_sysfs_write_settings()
commit
606185b20caf4c57d7e41e5a5ea4aff460aef2ab upstream.
This is a static checker fix. We write some binary settings to the
sysfs file. One of the settings is the "->startup_profile". There
isn't any checking to make sure it fits into the
pyra->profile_settings[] array in the profile_activated() function.
I added a check to pyra_sysfs_write_settings() in both places because
I wasn't positive that the other callers were correct.
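A minimal sketch of the pattern, with guessed names and sizes (PYRA_NUM_PROFILES and the struct layout are placeholders, not the driver's definitions): an index carried inside a user-supplied binary blob must be validated before it is ever used to index a fixed-size array.

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define PYRA_NUM_PROFILES 5   /* assumed size of profile_settings[] */

  struct settings_blob {
          uint16_t size;
          uint8_t startup_profile;   /* index chosen by the user-supplied blob */
          /* ... more fields ... */
  };

  /* Hypothetical sysfs write handler: reject the blob before anything uses
   * startup_profile as profile_settings[settings.startup_profile]. */
  int settings_write(const void *buf, size_t count)
  {
          struct settings_blob settings;

          if (count < sizeof(settings))
                  return -1;
          memcpy(&settings, buf, sizeof(settings));

          if (settings.startup_profile >= PYRA_NUM_PROFILES)
                  return -1;   /* the missing bounds check */

          /* ... apply settings, activate profile ... */
          return 0;
  }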
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gwendal Grignou [Fri, 12 Dec 2014 00:02:45 +0000 (16:02 -0800)]
HID: i2c-hid: prevent buffer overflow in early IRQ
commit
d1c7e29e8d276c669e8790bb8be9f505ddc48888 upstream.
Before ->start() is called, bufsize is set to HID_MIN_BUFFER_SIZE,
64 bytes. While processing the IRQ, we were asking to receive up to
wMaxInputLength bytes, which can be bigger than 64 bytes.
Later, when ->start is run, a proper bufsize will be calculated.
Given wMaxInputLength is said to be unreliable in other parts of the
code, set to receive only what we can, even if it results in truncated
reports.
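The clamp itself is simple; a small standalone sketch (names and sizes assumed, not the driver's code):

  #include <stdio.h>
  #include <stddef.h>

  #define HID_MIN_BUFFER_SIZE 64   /* bufsize before ->start() runs */

  /* Only request what the current buffer can hold, even if the descriptor
   * advertises a larger wMaxInputLength. */
  size_t irq_read_len(size_t bufsize, size_t w_max_input_length)
  {
          return w_max_input_length < bufsize ? w_max_input_length : bufsize;
  }

  int main(void)
  {
          /* Before ->start(): descriptor says 128, buffer is only 64 bytes. */
          printf("early IRQ reads %zu bytes\n", irq_read_len(HID_MIN_BUFFER_SIZE, 128));
          /* After ->start(): bufsize has been recalculated, full report fits. */
          printf("later IRQ reads %zu bytes\n", irq_read_len(128, 128));
          return 0;
  }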
Signed-off-by: Gwendal Grignou <gwendal@chromium.org>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jean-Baptiste Maneyrol [Wed, 19 Nov 2014 16:46:37 +0000 (00:46 +0800)]
HID: i2c-hid: fix race condition reading reports
commit
6296f4a8eb86f9abcc370fb7a1a116b8441c17fd upstream.
Current driver uses a common buffer for reading reports, either
synchronously in i2c_hid_get_raw_report() or asynchronously in
the interrupt handler.
There is a race condition if an interrupt arrives immediately after
the report is received in i2c_hid_get_raw_report(); the common
buffer is modified by the interrupt handler with the new report
and then i2c_hid_get_raw_report() proceeds using wrong data.
Fix it by using a separate buffer for synchronous reports.
Signed-off-by: Jean-Baptiste Maneyrol <jmaneyrol@invensense.com>
[Antonio Borneo: cleanup, rebase to v3.17, submit mainline]
Signed-off-by: Antonio Borneo <borneo.antonio@gmail.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiang Liu [Wed, 26 Nov 2014 01:42:10 +0000 (09:42 +0800)]
iommu/vt-d: Fix an off-by-one bug in __domain_mapping()
commit
cc4f14aa170d895c9a43bdb56f62070c8a6da908 upstream.
There's an off-by-one bug in function __domain_mapping(), which may
trigger the BUG_ON(nr_pages < lvl_pages) when
(nr_pages + 1) & superpage_mask == 0
The issue was introduced by commit
9051aa0268dc "intel-iommu: Combine
domain_pfn_mapping() and domain_sg_mapping()", which sets sg_res to
"nr_pages + 1" to avoid some of the 'sg_res==0' code paths.
It's safe to remove extra "+1" because sg_res is only used to calculate
page size now.
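The trigger arithmetic can be shown in isolation; the page-size selection below is heavily abstracted (only the "sg_res drives the choice" aspect matters, not the real superpage logic):

  #include <stdio.h>

  #define SUPERPAGE_PAGES 512UL   /* 2MiB superpage = 512 x 4KiB pages */

  /* Abstract stand-in for the level selection driven by sg_res. */
  unsigned long pick_lvl_pages(unsigned long res)
  {
          return (res & (SUPERPAGE_PAGES - 1)) == 0 ? SUPERPAGE_PAGES : 1;
  }

  int main(void)
  {
          unsigned long nr_pages = SUPERPAGE_PAGES - 1;   /* 511 pages left */

          /* Old code: sg_res = nr_pages + 1 = 512, so a full superpage is
           * chosen even though only 511 pages remain -> BUG_ON fires. */
          unsigned long lvl_old = pick_lvl_pages(nr_pages + 1);
          printf("old: lvl_pages=%lu nr_pages=%lu -> %s\n", lvl_old, nr_pages,
                 nr_pages < lvl_old ? "BUG_ON(nr_pages < lvl_pages)" : "ok");

          /* Fixed code: sg_res = nr_pages, the superpage is not chosen. */
          unsigned long lvl_new = pick_lvl_pages(nr_pages);
          printf("new: lvl_pages=%lu nr_pages=%lu -> %s\n", lvl_new, nr_pages,
                 nr_pages < lvl_new ? "BUG_ON" : "ok");
          return 0;
  }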
Reported-And-Tested-by: Sudeep Dutt <sudeep.dutt@intel.com>
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
Acked-By: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Richard Weinberger [Thu, 6 Nov 2014 15:47:49 +0000 (16:47 +0100)]
UBI: Fix double free after do_sync_erase()
commit
aa5ad3b6eb8feb2399a5d26c8fb0060561bb9534 upstream.
If the erase worker is unable to erase a PEB it will
free the ubi_wl_entry itself.
The failing ubi_wl_entry must not be free()'d again after
do_sync_erase() returns.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Richard Weinberger [Sun, 26 Oct 2014 23:46:11 +0000 (00:46 +0100)]
UBI: Fix invalid vfree()
commit
f38aed975c0c3645bbdfc5ebe35726e64caaf588 upstream.
The logic of vfree()'ing vol->upd_buf is tied to vol->updating.
In ubi_start_update() vol->updating is set long before vmalloc()'ing
vol->upd_buf. If we encounter a write failure in ubi_start_update()
before vmalloc() the UBI device release function will try to vfree()
vol->upd_buf because vol->updating is set.
Fix this by allocating vol->upd_buf directly after setting vol->updating.
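The patch reorders things so that no failure point sits between setting the flag and allocating the buffer it implies. A generic sketch of the same invariant, restored here by setting the flag last (simplified, hypothetical names):

  #include <stdlib.h>

  struct vol {
          int updating;     /* release path frees upd_buf when this is set */
          void *upd_buf;
  };

  int start_update(struct vol *vol, size_t bytes)
  {
          void *buf;

          /* ... checks that may fail go here, before any state is set ... */

          buf = malloc(bytes);     /* stands in for vmalloc(vol->upd_buf) */
          if (!buf)
                  return -1;

          vol->upd_buf = buf;
          vol->updating = 1;       /* set last: release may now free safely */
          return 0;
  }

  void release_vol(struct vol *vol)
  {
          if (vol->updating) {     /* never true with upd_buf unallocated */
                  free(vol->upd_buf);
                  vol->upd_buf = NULL;
                  vol->updating = 0;
          }
  }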
Fixes:
[ 31.559338] UBI warning: vol_cdev_release: update of volume 2 not finished, volume is damaged
[ 31.559340] ------------[ cut here ]------------
[ 31.559343] WARNING: CPU: 1 PID: 2747 at mm/vmalloc.c:1446 __vunmap+0xe3/0x110()
[ 31.559344] Trying to vfree() nonexistent vm area (ffffc90001f2b000)
[ 31.559345] Modules linked in:
[ 31.565620]  0000000000000bba ffff88002a0cbdb0 ffffffff818f0497 ffff88003b9ba148
[ 31.566347]  ffff88002a0cbde0 ffffffff8156f515 ffff88003b9ba148 0000000000000bba
[ 31.567073]  0000000000000000 0000000000000000 ffff88002a0cbe88 ffffffff8156c10a
[ 31.567793] Call Trace:
[ 31.568034] [<ffffffff818f0497>] dump_stack+0x4e/0x7a
[ 31.568510] [<ffffffff8156f515>] ubi_io_write_vid_hdr+0x155/0x160
[ 31.569084] [<ffffffff8156c10a>] ubi_eba_write_leb+0x23a/0x870
[ 31.569628] [<ffffffff81569b36>] vol_cdev_write+0x226/0x380
[ 31.570155] [<ffffffff81179265>] vfs_write+0xb5/0x1f0
[ 31.570627] [<ffffffff81179f8a>] SyS_pwrite64+0x6a/0xa0
[ 31.571123] [<ffffffff818fde12>] system_call_fastpath+0x16/0x1b
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tony Lindgren [Tue, 16 Sep 2014 20:50:01 +0000 (13:50 -0700)]
pstore-ram: Allow optional mapping with pgprot_noncached
commit
027bc8b08242c59e19356b4b2c189f2d849ab660 upstream.
On some ARMs the memory can be mapped pgprot_noncached() and still
be working for atomic operations. As pointed out by Colin Cross
<ccross@android.com>, in some cases you do want to use
pgprot_noncached() if the SoC supports it to see a debug printk
just before a write hanging the system.
On ARMs, the atomic operations on strongly ordered memory are
implementation defined. So let's provide an optional kernel parameter
for configuring pgprot_noncached(), and use pgprot_writecombine() by
default.
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Rob Herring <robherring2@gmail.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Anton Vorontsov <anton@enomsg.org>
Cc: Colin Cross <ccross@android.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rob Herring [Fri, 12 Sep 2014 18:32:24 +0000 (11:32 -0700)]
pstore-ram: Fix hangs by using write-combine mappings
commit
7ae9cb81933515dc7db1aa3c47ef7653717e3090 upstream.
Currently trying to use pstore on at least ARMs can hang as we're
mapping the persistent RAM with pgprot_noncached().
On ARMs, pgprot_noncached() will actually make the memory strongly
ordered, and as the atomic operations pstore uses are implementation
defined for strongly ordered memory, they may not work. So basically
atomic operations have undefined behavior on ARM for device or strongly
ordered memory types.
Let's fix the issue by using write-combine variants for mappings. This
corresponds to normal, non-cacheable memory on ARM. For many other
architectures, this change does not change the mapping type as by
default we have:
#define pgprot_writecombine pgprot_noncached
The reason why pgprot_noncached() was originally used for pstore
is because Colin Cross <ccross@android.com> had observed lost
debug prints right before a device hanging write operation on some
systems. For the platforms supporting pgprot_noncached(), we can
add an optional configuration option to support that. But let's
get pstore working first before adding new features.
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Anton Vorontsov <cbouatmailru@gmail.com>
Cc: Colin Cross <ccross@android.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: linux-kernel@vger.kernel.org
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Rob Herring <rob.herring@calxeda.com>
[tony@atomide.com: updated description]
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Myron Stowe [Thu, 30 Oct 2014 17:54:37 +0000 (11:54 -0600)]
PCI: Restore detection of read-only BARs
commit
36e8164882ca6d3c41cb91e6f09a3ed236841f80 upstream.
Commit
6ac665c63dca ("PCI: rewrite PCI BAR reading code") masked off
low-order bits from 'l', but not from 'sz'. Both are passed to pci_size(),
which compares 'base == maxbase' to check for read-only BARs. The masking
of 'l' means that comparison will never be 'true', so the check for
read-only BARs no longer works.
Resolve this by also masking off the low-order bits of 'sz' before passing
it into pci_size() as 'maxbase'. With this change, pci_size() will once
again catch the problems that have been encountered to date:
- AGP aperture BAR of AMD-7xx host bridges: if the AGP window is
disabled, this BAR is read-only and read as 0x00000008 [1]
- BARs 0-4 of ALi IDE controllers can be non-zero and read-only [1]
- Intel Sandy Bridge - Thermal Management Controller [8086:0103];
BAR 0 returning 0xfed98004 [2]
- Intel Xeon E5 v3/Core i7 Power Control Unit [8086:2fc0];
Bar 0 returning 0x00001a [3]
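A rough userspace model of the sizing check, using the Sandy Bridge example above (this models the read-only comparison only, it is not the probe.c code): a read-only BAR reads back the same value after the all-ones write, so the comparison only works if both values are masked the same way first.

  #include <stdint.h>
  #include <stdio.h>

  #define PCI_BASE_ADDRESS_MEM_MASK (~0x0fU)   /* low bits are flag bits */

  /* Model: a BAR that still reads back 'base' after the all-ones write is
   * read-only and has no usable size. */
  uint32_t bar_size_model(uint32_t base, uint32_t maxbase, uint32_t mask)
  {
          uint32_t size = maxbase & mask;

          if (!size)
                  return 0;
          if (base == maxbase)            /* reads back unchanged: read-only */
                  return 0;
          return size & ~(size - 1);      /* lowest set bit = decode size */
  }

  int main(void)
  {
          uint32_t l = 0xfed98004;        /* raw BAR value, flag bits included */
          uint32_t sz = 0xfed98004;       /* value read back after writing ~0 */
          uint32_t mask = PCI_BASE_ADDRESS_MEM_MASK;

          /* Broken: only 'l' is masked, base != maxbase, bogus size found. */
          printf("broken: size=%#x\n", bar_size_model(l & mask, sz, mask));
          /* Fixed: mask 'sz' too, base == maxbase, BAR correctly rejected. */
          printf("fixed:  size=%#x\n", bar_size_model(l & mask, sz & mask, mask));
          return 0;
  }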
Link: [1] https://git.kernel.org/cgit/linux/kernel/git/tglx/history.git/commit/drivers/pci/probe.c?id=1307ef6621991f1c4bc3cec1b5a4ebd6fd3d66b9 ("PCI: probing read-only BARs" (pre-git))
Link: [2] https://bugzilla.kernel.org/show_bug.cgi?id=43331
Link: [3] https://bugzilla.kernel.org/show_bug.cgi?id=85991
Reported-by: William Unruh <unruh@physics.ubc.ca>
Reported-by: Martin Lucina <martin@lucina.net>
Signed-off-by: Myron Stowe <myron.stowe@redhat.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: Matthew Wilcox <willy@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrew Jackson [Fri, 19 Dec 2014 16:18:05 +0000 (16:18 +0000)]
ASoC: dwc: Ensure FIFOs are flushed to prevent channel swap
commit
3475c3d034d7f276a474c8bd53f44b48c8bf669d upstream.
Flush the FIFOs when the stream is prepared for use. This avoids
an inadvertent swapping of the left/right channels if the FIFOs are
not empty at startup.
Signed-off-by: Andrew Jackson <Andrew.Jackson@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jarkko Nikula [Mon, 24 Nov 2014 13:32:36 +0000 (15:32 +0200)]
ASoC: max98090: Fix ill-defined sidetone route
commit
48826ee590da03e9882922edf96d8d27bdfe9552 upstream.
Commit
5fe5b767dc6f ("ASoC: dapm: Do not pretend to support controls for non
mixer/mux widgets") revealed ill-defined control in a route between
"STENL Mux" and DACs in max98090.c:
max98090 i2c-193C9890:00: Control not supported for path STENL Mux -> [NULL] -> DACL
max98090 i2c-193C9890:00: ASoC: no dapm match for STENL Mux --> NULL --> DACL
max98090 i2c-193C9890:00: ASoC: Failed to add route STENL Mux -> NULL -> DACL
max98090 i2c-193C9890:00: Control not supported for path STENL Mux -> [NULL] -> DACR
max98090 i2c-193C9890:00: ASoC: no dapm match for STENL Mux --> NULL --> DACR
max98090 i2c-193C9890:00: ASoC: Failed to add route STENL Mux -> NULL -> DACR
Since there is no control between "STENL Mux" and DACs the control name must
be NULL not "NULL".
Signed-off-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lars-Peter Clausen [Wed, 19 Nov 2014 17:29:02 +0000 (18:29 +0100)]
ASoC: sigmadsp: Refuse to load firmware files with a non-supported version
commit
50c0f21b42dd4cd02b51f82274f66912d9a7fa32 upstream.
Make sure to check the version field of the firmware header to make sure to
not accidentally try to parse a firmware file with a different layout.
Trying to do so can result in loading invalid firmware code to the device.
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Felix Fietkau [Sun, 30 Nov 2014 20:52:57 +0000 (21:52 +0100)]
ath5k: fix hardware queue index assignment
commit
9e4982f6a51a2442f1bb588fee42521b44b4531c upstream.
Like with ath9k, ath5k queues also need to be ordered by priority.
queue_info->tqi_subtype already contains the correct index, so use it
instead of relying on the order of ath5k_hw_setup_tx_queue calls.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stefano Stabellini [Fri, 21 Nov 2014 16:56:12 +0000 (16:56 +0000)]
swiotlb-xen: pass dev_addr to swiotlb_tbl_unmap_single
commit
2c3fc8d26dd09b9d7069687eead849ee81c78e46 upstream.
Need to pass the pointer within the swiotlb internal buffer to the
swiotlb library, that in the case of xen_unmap_single is dev_addr, not
paddr.
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephane Grosjean [Fri, 28 Nov 2014 13:08:48 +0000 (14:08 +0100)]
can: peak_usb: fix memset() usage
commit
dc50ddcd4c58a5a0226038307d6ef884bec9f8c2 upstream.
This patch fixes a misplaced call to memset() that fills the request
buffer with 0. The problem occurred when sending PCAN_USBPRO_REQ_FCT
requests: the content set by the caller was lost.
With this patch, the memory area is zeroed only when requesting info
from the device.
Signed-off-by: Stephane Grosjean <s.grosjean@peak-system.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephane Grosjean [Fri, 28 Nov 2014 12:49:10 +0000 (13:49 +0100)]
can: peak_usb: fix cleanup sequence order in case of error during init
commit
af35d0f1cce7a990286e2b94c260a2c2d2a0e4b0 upstream.
This patch sets the correct reverse order for the cleanup steps that run
when any failure occurs during the initialization steps.
It also adds the missing unregistration call of the can device if the
failure appears after having been registered.
Signed-off-by: Stephane Grosjean <s.grosjean@peak-system.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Felix Fietkau [Sun, 30 Nov 2014 19:38:41 +0000 (20:38 +0100)]
ath9k: fix BE/BK queue order
commit
78063d81d353e10cbdd279c490593113b8fdae1c upstream.
Hardware queues are ordered by priority. Use queue index 0 for BK, which
has lower priority than BE.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Felix Fietkau [Sun, 30 Nov 2014 19:38:40 +0000 (20:38 +0100)]
ath9k_hw: fix hardware queue allocation
commit
ad8fdccf9c197a89e2d2fa78c453283dcc2c343f upstream.
The driver passes the desired hardware queue index for a WMM data queue
in qinfo->tqi_subtype. This was ignored in ath9k_hw_setuptxqueue, which
instead relied on the order in which the function is called.
Reported-by: Hubert Feurstein <h.feurstein@gmail.com>
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Junxiao Bi [Fri, 19 Dec 2014 00:17:37 +0000 (16:17 -0800)]
ocfs2: fix journal commit deadlock
commit
136f49b9171074872f2a14ad0ab10486d1ba13ca upstream.
For buffer writes, the page lock is taken in write_begin and released in
write_end. In ocfs2_write_end_nolock(), before it unlocks the page in
ocfs2_free_write_ctxt(), it calls ocfs2_run_deallocs(), which asks
for the read lock of journal->j_trans_barrier. Holding the page lock
while asking for journal->j_trans_barrier breaks the locking order.
This will cause a deadlock with the journal commit threads: ocfs2cmt
takes the write lock of journal->j_trans_barrier first, then wakes up
kjournald2 to do the commit work, and finally waits until that is done. To
commit the journal, kjournald2 needs to flush data first, so it needs the
cache page lock.
Since some ocfs2 cluster locks are held by the write process, this
deadlock may hang the whole cluster.
Unlocking the pages before ocfs2_run_deallocs() fixes the locking order;
the unlock is also moved before ocfs2_commit_trans() so that the page lock
is released before j_trans_barrier, preserving the unlocking order.
Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
Reviewed-by: Wengang Wang <wen.gang.wang@oracle.com>
Reviewed-by: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Thu, 8 Jan 2015 17:58:30 +0000 (09:58 -0800)]
Linux 3.10.64
Filipe Manana [Sun, 7 Dec 2014 21:31:47 +0000 (21:31 +0000)]
Btrfs: fix fs corruption on transaction abort if device supports discard
commit
678886bdc6378c1cbd5072da2c5a3035000214e3 upstream.
When we abort a transaction we iterate over all the ranges marked as dirty
in fs_info->freed_extents[0] and fs_info->freed_extents[1], clear them
from those trees, add them back (unpin) to the free space caches and, if
the fs was mounted with "-o discard", perform a discard on those regions.
Also, after adding the regions to the free space caches, a fitrim ioctl call
can see those ranges in a block group's free space cache and perform a discard
on the ranges, so the same issue can happen without "-o discard" as well.
This causes corruption, affecting one or multiple btree nodes (in the worst
case leaving the fs unmountable) because some of those ranges (the ones in
the fs_info->pinned_extents tree) correspond to btree nodes/leafs that are
referred by the last committed super block - breaking the rule that anything
that was committed by a transaction is untouched until the next transaction
commits successfully.
I ran into this while running in a loop (for several hours) the fstest that
I recently submitted:
[PATCH] fstests: add btrfs test to stress chunk allocation/removal and fstrim
The corruption always happened when a transaction aborted and then fsck complained
like this:
_check_btrfs_filesystem: filesystem on /dev/sdc is inconsistent
*** fsck.btrfs output ***
Check tree block failed, want=94945280, have=0
Check tree block failed, want=94945280, have=0
Check tree block failed, want=94945280, have=0
Check tree block failed, want=94945280, have=0
Check tree block failed, want=94945280, have=0
read block failed check_tree_block
Couldn't open file system
In this case 94945280 corresponded to the root of a tree.
Using ftrace, what I observed was the following sequence of steps:
1) transaction N started, fs_info->pinned_extents pointed to
fs_info->freed_extents[0];
2) node/eb 94945280 is created;
3) eb is persisted to disk;
4) transaction N commit starts, fs_info->pinned_extents now points to
fs_info->freed_extents[1], and transaction N completes;
5) transaction N + 1 starts;
6) eb is COWed, and btrfs_free_tree_block() called for this eb;
7) eb range (94945280 to 94945280 + 16Kb) is added to fs_info->pinned_extents (fs_info->freed_extents[1]);
8) Something goes wrong in transaction N + 1, like hitting ENOSPC
for example, and the transaction is aborted, turning the fs into
readonly mode. The stack trace I got for example:
[112065.253935] [<ffffffff8140c7b6>] dump_stack+0x4d/0x66
[112065.254271] [<ffffffff81042984>] warn_slowpath_common+0x7f/0x98
[112065.254567] [<ffffffffa0325990>] ? __btrfs_abort_transaction+0x50/0x10b [btrfs]
[112065.261674] [<ffffffff810429e5>] warn_slowpath_fmt+0x48/0x50
[112065.261922] [<ffffffffa032949e>] ? btrfs_free_path+0x26/0x29 [btrfs]
[112065.262211] [<ffffffffa0325990>] __btrfs_abort_transaction+0x50/0x10b [btrfs]
[112065.262545] [<ffffffffa036b1d6>] btrfs_remove_chunk+0x537/0x58b [btrfs]
[112065.262771] [<ffffffffa033840f>] btrfs_delete_unused_bgs+0x1de/0x21b [btrfs]
[112065.263105] [<ffffffffa0343106>] cleaner_kthread+0x100/0x12f [btrfs]
(...)
[112065.264493] ---[ end trace dd7903a975a31a08 ]---
[112065.264673] BTRFS: error (device sdc) in btrfs_remove_chunk:2625: errno=-28 No space left
[112065.264997] BTRFS info (device sdc): forced readonly
9) The cleaner kthread sees that the BTRFS_FS_STATE_ERROR bit is set in
fs_info->fs_state and calls btrfs_cleanup_transaction(), which in
turn calls btrfs_destroy_pinned_extent();
10) Then btrfs_destroy_pinned_extent() iterates over all the ranges
marked as dirty in fs_info->freed_extents[], and for each one
it calls discard, if the fs was mounted with "-o discard", and
adds the range to the free space cache of the respective block
group;
11) btrfs_trim_block_group(), invoked from the fitrim ioctl code path,
sees the free space entries and performs a discard;
12) After an umount and mount (or fsck), our eb's location on disk was full
of zeroes, and it should have been untouched, because it was marked as
dirty in the fs_info->pinned_extents tree, and therefore used by the
trees that the last committed superblock points to.
Fix this by not performing a discard and not adding the ranges to the free space
caches - it's useless from this point since the fs is now in readonly mode and
we won't write free space caches to disk anymore (otherwise we would leak space)
nor any new superblock. By not adding the ranges to the free space caches, it
prevents other code paths from allocating that space and write to it as well,
therefore being safer and simpler.
This isn't a new problem, as it's been present since 2011 (git commit
acce952b0263825da32cf10489413dec78053347).
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Josef Bacik [Fri, 14 Nov 2014 21:16:30 +0000 (16:16 -0500)]
Btrfs: do not move em to modified list when unpinning
commit
a28046956c71985046474283fa3bcd256915fb72 upstream.
We use the modified list to keep track of which extents have been modified so we
know which ones are candidates for logging at fsync() time. Newly modified
extents are added to the list at modification time, around the same time the
ordered extent is created. We do this so that we don't have to wait for ordered
extents to complete before we know what we need to log. The problem is when
something like this happens
log extent 0-4k on inode 1
copy csum for 0-4k from ordered extent into log
sync log
commit transaction
log some other extent on inode 1
ordered extent for 0-4k completes and adds itself onto modified list again
log changed extents
see ordered extent for 0-4k has already been logged
at this point we assume the csum has been copied
sync log
crash
On replay we will see the extent 0-4k in the log, drop the original 0-4k extent
which is the same one that we are replaying which also drops the csum, and then
we won't find the csum in the log for that bytenr. This of course causes us to
have errors about not having csums for certain ranges of our inode. So remove
the modified list manipulation in unpin_extent_cache, any modified extents
should have been added well before now, and we don't want them re-logged. This
fixes my test that I could reliably reproduce this problem with. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Halcrow [Wed, 26 Nov 2014 17:09:16 +0000 (09:09 -0800)]
eCryptfs: Remove buggy and unnecessary write in file name decode routine
commit
942080643bce061c3dd9d5718d3b745dcb39a8bc upstream.
Dmitry Chernenkov used KASAN to discover that eCryptfs writes past the
end of the allocated buffer during encrypted filename decoding. This
fix corrects the issue by getting rid of the unnecessary 0 write when
the current bit offset is 2.
Signed-off-by: Michael Halcrow <mhalcrow@google.com>
Reported-by: Dmitry Chernenkov <dmitryc@google.com>
Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tyler Hicks [Tue, 7 Oct 2014 20:51:55 +0000 (15:51 -0500)]
eCryptfs: Force RO mount when encrypted view is enabled
commit
332b122d39c9cbff8b799007a825d94b2e7c12f2 upstream.
The ecryptfs_encrypted_view mount option greatly changes the
functionality of an eCryptfs mount. Instead of encrypting and decrypting
lower files, it provides a unified view of the encrypted files in the
lower filesystem. The presence of the ecryptfs_encrypted_view mount
option is intended to force a read-only mount and modifying files is not
supported when the feature is in use. See the following commit for more
information:
e77a56d [PATCH] eCryptfs: Encrypted passthrough
This patch forces the mount to be read-only when the
ecryptfs_encrypted_view mount option is specified by setting the
MS_RDONLY flag on the superblock. Additionally, this patch removes some
broken logic in ecryptfs_open() that attempted to prevent modifications
of files when the encrypted view feature was in use. The check in
ecryptfs_open() was not sufficient to prevent file modifications using
system calls that do not operate on a file descriptor.
Signed-off-by: Tyler Hicks <tyhicks@canonical.com>
Reported-by: Priya Bansal <p.bansal@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Fri, 19 Dec 2014 11:21:47 +0000 (12:21 +0100)]
udf: Verify symlink size before loading it
commit
a1d47b262952a45aae62bd49cfaf33dd76c11a2c upstream.
UDF specification allows arbitrarily large symlinks. However we support
only symlinks at most one block large. Check the length of the symlink
so that we don't access memory beyond the end of the symlink block.
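A sketch of the added sanity check, with simplified names and an assumed block size (not the udf code): an on-disk inode size larger than the single block holding the symlink body can never be valid and must not be used as a copy length.

  #include <string.h>
  #include <stddef.h>

  #define UDF_BLOCK_SIZE 2048u   /* assumed logical block size */

  int load_symlink(char *dst, size_t dst_len,
                   const char *block, unsigned long long inode_size)
  {
          /* Reject corrupted or hostile sizes before they become a length. */
          if (inode_size > UDF_BLOCK_SIZE || inode_size > dst_len)
                  return -1;     /* -EIO in the real code */

          memcpy(dst, block, (size_t)inode_size);
          return 0;
  }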
Reported-by: Carl Henrik Lunde <chlunde@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oleg Nesterov [Wed, 10 Dec 2014 23:55:25 +0000 (15:55 -0800)]
exit: pidns: alloc_pid() leaks pid_namespace if child_reaper is exiting
commit
24c037ebf5723d4d9ab0996433cee4f96c292a4d upstream.
alloc_pid() does get_pid_ns() beforehand but forgets to put_pid_ns() if it
fails because disable_pid_allocation() was called by the exiting
child_reaper.
We could simply move get_pid_ns() down to successful return, but this fix
tries to be as trivial as possible.
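The pattern generalizes: a reference taken up front must be dropped on every failure path that follows. A simplified userspace sketch (hypothetical allocator, not the pid code):

  #include <stdlib.h>

  struct pid_namespace { int refcount; int pid_allocation_disabled; };

  void get_pid_ns(struct pid_namespace *ns) { ns->refcount++; }
  void put_pid_ns(struct pid_namespace *ns) { ns->refcount--; }

  void *alloc_pid_sketch(struct pid_namespace *ns)
  {
          void *pid;

          get_pid_ns(ns);                    /* reference taken before any checks */

          if (ns->pid_allocation_disabled)
                  goto out_put;              /* child_reaper is exiting */

          pid = malloc(64);
          if (!pid)
                  goto out_put;

          return pid;

  out_put:
          put_pid_ns(ns);                    /* the drop the old code forgot */
          return NULL;
  }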
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Aaron Tomlin <atomlin@redhat.com>
Cc: Pavel Emelyanov <xemul@parallels.com>
Cc: Serge Hallyn <serge.hallyn@ubuntu.com>
Cc: Sterling Alexander <stalexan@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Wed, 10 Dec 2014 23:52:22 +0000 (15:52 -0800)]
ncpfs: return proper error from NCP_IOC_SETROOT ioctl
commit
a682e9c28cac152e6e54c39efcf046e0c8cfcf63 upstream.
If some error happens in NCP_IOC_SETROOT ioctl, the appropriate error
return value is then (in most cases) just overwritten before we return.
This can result in reporting success to userspace although an error happened.
This bug was introduced by commit
2e54eb96e2c8 ("BKL: Remove BKL from
ncpfs"). Propagate the errors correctly.
Coverity id: 1226925.
Fixes:
2e54eb96e2c80 ("BKL: Remove BKL from ncpfs")
Signed-off-by: Jan Kara <jack@suse.cz>
Cc: Petr Vandrovec <petr@vandrovec.name>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rabin Vincent [Fri, 19 Dec 2014 12:36:08 +0000 (13:36 +0100)]
crypto: af_alg - fix backlog handling
commit
7e77bdebff5cb1e9876c561f69710b9ab8fa1f7e upstream.
If a request is backlogged, its complete() handler will get called
twice: once with -EINPROGRESS, and once with the final error code.
af_alg's complete handler, unlike other users, does not handle the
-EINPROGRESS but instead always completes the completion that recvmsg()
is waiting on. This can lead to a return to user space while the
request is still pending in the driver. If userspace closes the sockets
before the requests are handled by the driver, this will lead to
use-after-frees (and potential crashes) in the kernel due to the tfm
having been freed.
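A sketch of the callback discipline, with stand-ins for the kernel's completion machinery (not the af_alg source): a backlogged request reports -EINPROGRESS once when it leaves the backlog, and only the second invocation carries the final status that may wake the waiter.

  #include <errno.h>

  struct completion_ctx {
          int err;
          int done;
  };

  void algif_complete(struct completion_ctx *ctx, int err)
  {
          if (err == -EINPROGRESS)
                  return;     /* intermediate notification, request still pending */

          ctx->err = err;
          ctx->done = 1;      /* stands in for complete(&ctx->completion) */
  }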
The crashes can be easily reproduced (for example) by reducing the max
queue length in cryptod.c and running the following (from
http://www.chronox.de/libkcapi.html) on AES-NI capable hardware:
$ while true; do kcapi -x 1 -e -c '__ecb-aes-aesni' \
-k
00000000000000000000000000000000 \
-p
00000000000000000000000000000000 >/dev/null & done
Signed-off-by: Rabin Vincent <rabin.vincent@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Tue, 2 Dec 2014 19:56:30 +0000 (13:56 -0600)]
userns: Unbreak the unprivileged remount tests
commit
db86da7cb76f797a1a8b445166a15cb922c6ff85 upstream.
A security fix caused the way the unprivileged remount tests were
using user namespaces to break. Tweak the way user namespaces are
being used so the test works again.
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Sat, 6 Dec 2014 01:36:04 +0000 (19:36 -0600)]
userns: Allow setting gid_maps without privilege when setgroups is disabled
commit
66d2f338ee4c449396b6f99f5e75cd18eb6df272 upstream.
Now that setgroups can be disabled and not reenabled, setting gid_map
without privilege can now be enabled when setgroups is disabled.
This restores most of the functionality that was lost when unprivileged
setting of gid_map was removed. Applications that use this functionality
will need to check to see if they use setgroups or init_groups, and if they
don't they can be fixed by simply disabling setgroups before writing to
gid_map.
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Tue, 2 Dec 2014 18:27:26 +0000 (12:27 -0600)]
userns: Add a knob to disable setgroups on a per user namespace basis
commit
9cc46516ddf497ea16e8d7cb986ae03a0f6b92f8 upstream.
- Expose the knob to user space through a proc file /proc/<pid>/setgroups
A value of "deny" means the setgroups system call is disabled in the
current processes user namespace and can not be enabled in the
future in this user namespace.
A value of "allow" means the segtoups system call is enabled.
- Descendant user namespaces inherit the value of setgroups from
their parents.
- A proc file is used (instead of a sysctl) as sysctls currently do
not allow checking the permissions at open time.
- Writing to the proc file is restricted to before the gid_map
for the user namespace is set.
This ensures that disabling setgroups at a user namespace
level will never remove the ability to call setgroups
from a process that already has that ability.
A process may opt in to the setgroups disable for itself by
creating, entering and configuring a user namespace or by calling
setns on an existing user namespace with setgroups disabled.
Processes without privileges already can not call setgroups so this
is a noop. Processes with privilege become processes without
privilege when entering a user namespace and as with any other path
to dropping privilege they would not have the ability to call
setgroups. So this remains within the bounds of what is possible
without a knob to disable setgroups permanently in a user namespace.
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Tue, 9 Dec 2014 20:03:14 +0000 (14:03 -0600)]
userns: Rename id_map_mutex to userns_state_mutex
commit
f0d62aec931e4ae3333c797d346dc4f188f454ba upstream.
Generalize id_map_mutex so it can be used for more state of a user namespace.
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Thu, 27 Nov 2014 05:22:14 +0000 (23:22 -0600)]
userns: Only allow the creator of the userns unprivileged mappings
commit
f95d7918bd1e724675de4940039f2865e5eec5fe upstream.
If you did not create the user namespace and are allowed
to write to uid_map or gid_map you should already have the necessary
privilege in the parent user namespace to establish any mapping
you want so this will not affect userspace in practice.
Limiting unprivileged uid mapping establishment to the creator of the
user namespace makes it easier to verify all credentials obtained with
the uid mapping can be obtained without the uid mapping without
privilege.
Limiting unprivileged gid mapping establishment (which is temporarily
absent) to the creator of the user namespace also ensures that the
combination of uid and gid can already be obtained without privilege.
This is part of the fix for CVE-2014-8989.
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Sat, 6 Dec 2014 00:26:30 +0000 (18:26 -0600)]
userns: Check euid no fsuid when establishing an unprivileged uid mapping
commit
80dd00a23784b384ccea049bfb3f259d3f973b9d upstream.
setresuid allows the euid to be set to any of uid, euid, suid, and
fsuid. Therefore it is safe to allow an unprivileged user to map
their euid and use CAP_SETUID privilege with exactly that uid,
as no new credentials can be obtained.
I can not find a combination of existing system calls that allows setting
uid, euid, suid, and fsuid from the fsuid, making the previous use
of fsuid for allowing unprivileged mappings a bug.
This is part of a fix for CVE-2014-8989.
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Sat, 6 Dec 2014 00:14:19 +0000 (18:14 -0600)]
userns: Don't allow unprivileged creation of gid mappings
commit
be7c6dba2332cef0677fbabb606e279ae76652c3 upstream.
As any gid mapping will allow, and must allow for backwards
compatibility, dropping groups, don't allow any gid mappings to be
established without CAP_SETGID in the parent user namespace.
For a small class of applications this change breaks userspace
and removes useful functionality. This small class of applications
includes tools/testing/selftests/mount/unprivilged-remount-test.c
Most of the removed functionality will be added back with the addition
of a one way knob to disable setgroups. Once setgroups is disabled
setting the gid_map becomes as safe as setting the uid_map.
For more common applications that set the uid_map and the gid_map
with privilege this change will have no effect.
This is part of a fix for CVE-2014-8989.
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Sat, 6 Dec 2014 00:01:11 +0000 (18:01 -0600)]
userns: Don't allow setgroups until a gid mapping has been established
commit
273d2c67c3e179adb1e74f403d1e9a06e3f841b5 upstream.
setgroups is unique in not needing a valid mapping before it can be called,
in the case of setgroups(0, NULL) which drops all supplemental groups.
The design of the user namespace assumes that CAP_SETGID can not actually
be used until a gid mapping is established. Therefore add a helper function
to see if the user namespace gid mapping has been established and call
that function in the setgroups permission check.
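A sketch of the shape of that helper, with simplified structures (the field names are stand-ins, not the actual uid_gid_map layout):

  #include <stdbool.h>

  struct gid_map { unsigned int nr_extents; };
  struct user_ns { struct gid_map gid_map; };

  /* A gid mapping with zero extents means "not yet established"; until then
   * CAP_SETGID must not permit setgroups(), otherwise supplemental groups
   * could be dropped before the mapping constrains what they may become. */
  bool may_setgroups(const struct user_ns *ns)
  {
          return ns->gid_map.nr_extents != 0;
  }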
This is part of the fix for CVE-2014-8989, being able to drop groups
without privilege using user namespaces.
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Fri, 5 Dec 2014 23:51:47 +0000 (17:51 -0600)]
userns: Document what the invariant required for safe unprivileged mappings.
commit
0542f17bf2c1f2430d368f44c8fcf2f82ec9e53e upstream.
The rule is simple. Don't allow anything that wouldn't be allowed
without unprivileged mappings.
It was previously overlooked that establishing gid mappings would
allow dropping groups and potentially gaining permission to files and
directories that had lesser permissions for a specific group than for
all other users.
This is the rule needed to fix CVE-2014-8989 and prevent any other
security issues with new_idmap_permitted.
The reason for this rule is that the unix permission model is old and
there are programs out there somewhere that take advantage of every
little corner of it. So allowing a uid or gid mapping to be
established without privilege that would allow anything that would not
be allowed without that mapping will result in expectations from some
code somewhere being violated. Violated expectations about the
behavior of the OS is a long way to say a security issue.
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>