GitHub/exynos8895/android_kernel_samsung_universal8895.git
8 years ago  ANDROID: arch: x86: disable pic for Android toolchain
Greg Hackmann [Thu, 23 Jul 2015 17:40:57 +0000 (10:40 -0700)]
ANDROID: arch: x86: disable pic for Android toolchain

Android toolchains enable PIC, so explicitly disable it with
-fno-pic (this is the upstream gcc default)

Signed-off-by: Greg Hackmann <ghackmann@google.com>
(cherry picked from commit 892606ece2bebfa5a1ed62e9552cc973707ae9d3)

Change-Id: I1e600363e5d18e459479fe4eb23d76855e16868d

8 years ago  ANDROID: goldfish_pipe: An implementation of more parallel pipe
Yurii Zubrytskyi [Fri, 29 Jul 2016 17:51:46 +0000 (10:51 -0700)]
ANDROID: goldfish_pipe: An implementation of more parallel pipe

This is the driver code for a redesigned Android pipe.
Currently it works for x86 and x64 emulators with the following
performance results:
  ADB push to /dev/null,
  Ubuntu,
  400 MB file,
  times are for 1/10/100 parallel adb commands
x86 adb push: (4.4s / 11.5s / 2m10s) -> (2.8s / 6s / 51s)
x64 adb push: (7s / 15s / too long, 6m+) -> (2.7s / 6.2s / 52s)

ADB pull and push to /data/ show the same percentage of speedup.
More importantly, I don't see any signs of slowdown when run
in parallel with the AnTuTu benchmark, so it is definitely doing
a much better job at multithreading.

The code features dynamic host detection: old emulator gets
the previous version of the pipe driver code.

Combine the following patch from android-goldfish-3.10:

b543285 [pipe] Increase the default pipe buffers size, make it configurable

Signed-off-by: "Yurii Zubrytskyi" <zyy@google.com>
Change-Id: I140d506204cab6e78dd503e5a43abc8886e4ffff

8 years ago  ANDROID: goldfish_pipe: bugfixes and performance improvements.
Yurii Zubrytskyi [Wed, 4 May 2016 20:05:38 +0000 (13:05 -0700)]
ANDROID: goldfish_pipe: bugfixes and performance improvements.

Combine the following patches from the android-goldfish-3.18 branch:

c0f015a [pipe] Fix the pipe driver for x64 platform + correct pages count
48e6bf5 [pipe] Use get_user_pages_fast() which is possibly faster
fb20f13 [goldfish] More pages in goldfish pipe
f180e6d goldfish_pipe: Return from read_write on signal and EIO
3dec3b7 [pipe] Fix a minor leak in setup_access_params_addr()

Change-Id: I1041fd65d7faaec123e6cedd3dbbc5a2fbb86c4d

8 years ago  ANDROID: goldfish: Add goldfish sync driver
Lingfeng Yang [Mon, 13 Jun 2016 16:24:07 +0000 (09:24 -0700)]
ANDROID: goldfish: Add goldfish sync driver

This is the kernel driver for controlling the Goldfish sync
device on the host. It is used to maintain ordering
of critical OpenGL state changes while using
GPU emulation.

The guest open()s the Goldfish sync device to create
a context for possibly maintaining a sync timeline and fences.
There is a 1:1 correspondence between such sync contexts
and the OpenGL contexts in the guest that need synchronization
(which, in turn, is anything involving swapping buffers,
SurfaceFlinger, or Hardware Composer).

The QUEUE_WORK ioctl takes a handle to a sync object
and attempts to tell the host GPU to wait on the sync object
and deal with signaling it. It may output
a fence FD that the Android components that use fences
(GLConsumer, SurfaceFlinger, anything employing
EGL_ANDROID_native_fence_sync) can wait on.

Design decisions and work log:

- New approach is to have the guest issue ioctls that
  trigger host wait, and then host increments timeline.
- We need the host's sync object handle and sync thread handle
  as the necessary information for that.
- ioctl() from guest can work simultaneously with the
  interrupt handling for commands from host.
- optimization: don't write back on timeline inc
- Change spin lock design to be much more lightweight;
  do not call sw_sync functions or loop too long
  anywhere.
- Send read/write commands in batches to minimize guest/host
  transitions.
- robustness: BUG if we will overrun the cmd buffer.
- robustness: return fd -1 if we cannot get an unused fd.
- correctness: remove global mutex
- cleanup pass done, incl. but not limited to:
    - removal of clear_upto and
    - switching to devm_***

This is part of a sequential, multi-CL change:

external/qemu:

https://android-review.googlesource.com/239442 <- host-side device's
host interface

https://android-review.googlesource.com/221593
https://android-review.googlesource.com/248563
https://android-review.googlesource.com/248564
https://android-review.googlesource.com/223032

external/qemu-android:

https://android-review.googlesource.com/238790 <- host-side device
implementation

kernel/goldfish:

https://android-review.googlesource.com/232631 <- needed
https://android-review.googlesource.com/238399 <- this CL

Also squash the following bug fixes from the android-goldfish-3.18 branch:

b44d486 goldfish_sync: provide a signal to detect reboot
ad1f597 goldfish_sync: fix stalls by avoiding early kfree()
de208e8 [goldfish-sync] Fix possible race between kernel and user space

Change-Id: I22f8a0e824717a7e751b1b0e1b461455501502b6
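
As a point of reference, a fence FD produced by the sync framework (as
mentioned above) can be waited on from userspace like any other file
descriptor; a minimal sketch, not part of this patch, with an
illustrative helper name:

  #include <poll.h>

  /* Returns >0 once the fence has signalled, 0 on timeout, <0 on error. */
  static int wait_fence(int fence_fd, int timeout_ms)
  {
          struct pollfd p = { .fd = fence_fd, .events = POLLIN };

          return poll(&p, 1, timeout_ms);
  }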

8 years ago  ANDROID: goldfish: add ranchu defconfigs
Jin Qian [Fri, 7 Oct 2016 23:20:47 +0000 (16:20 -0700)]
ANDROID: goldfish: add ranchu defconfigs

Change-Id: I73ef1b132b6203ae921a1e1d4850eaadf58f8926

8 years ago  ANDROID: goldfish_audio: Clear audio read buffer status after each read
Joshua Lang [Sat, 18 Jun 2016 00:30:55 +0000 (17:30 -0700)]
ANDROID: goldfish_audio: Clear audio read buffer status after each read

The buffer_status field is updated from the interrupt handler. After
every read request, the read status in buffer_status should be reset
so that on the next loop iteration we don't see a stale value and read
data before the device is ready.

Signed-off-by: "Joshua Lang" <joshualang@google.com>
Change-Id: I4943d5aaada1cad9c7e59a94a87c387578dabe86

8 years ago  ANDROID: goldfish_events: no extra EV_SYN; register goldfish
Lingfeng Yang [Fri, 18 Dec 2015 20:04:43 +0000 (12:04 -0800)]
ANDROID: goldfish_events: no extra EV_SYN; register goldfish

If we send a SYN_REPORT on every single
multitouch event, it breaks multitouch.

The multitouch becomes janky, clicks have to be
repeated 2-3 times to do anything, and notification
bars get activated at random when not clicking.

If we suppress these SYN_REPORTs,
multitouch works fine, plus the event stream
follows a protocol that looks nice.

In addition, we need to register Goldfish Events
as a multitouch device by issuing
input_mt_init_slots, otherwise
input_handle_abs_event in drivers/input/input.c
will silently drop all ABS_MT_SLOT events,
making it so that touches with more than 1 finger
do not work properly.

Signed-off-by: "Lingfeng Yang" <lfy@google.com>
Change-Id: Ib2350f7d1732449d246f6f0d9b7b08f02cc7c2dd
(cherry picked from commit 6cf40d0a16330e1ef42bdf07d9aba6c16ee11fbc)

8 years ago  ANDROID: goldfish_fb: Set pixclock = 0
Christoffer Dall [Thu, 19 Jun 2014 14:24:04 +0000 (16:24 +0200)]
ANDROID: goldfish_fb: Set pixclock = 0

User space Android code identifies pixclock == 0 as a sign of emulation
and will set the frame rate to 60 fps when reading this value, which is
the desired outcome.

Change-Id: I759bf518bf6683446bc786bf1be3cafa02dd8d42
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
8 years ago  ANDROID: goldfish: Enable ACPI-based enumeration for goldfish audio
Yu Ning [Tue, 31 Mar 2015 06:41:48 +0000 (14:41 +0800)]
ANDROID: goldfish: Enable ACPI-based enumeration for goldfish audio

Follow the same approach by which ACPI was enabled for the goldfish
battery driver. See commit d3be10e for details.

Change-Id: I6ffe38ebc80fb8af8322152370b9d1fd227eaf50
Signed-off-by: Yu Ning <yu.ning@intel.com>
8 years ago  ANDROID: goldfish: Enable ACPI-based enumeration for goldfish framebuffer
Yu Ning [Thu, 12 Feb 2015 03:44:40 +0000 (11:44 +0800)]
ANDROID: goldfish: Enable ACPI-based enumeration for goldfish framebuffer

Follow the same approach by which ACPI was enabled for the goldfish
battery driver. See commit d3be10e for details.

Note that this patch also depends on commit af33cac.

Change-Id: Ic63b6e7e0a4b9896ef9a9d0ed135a7796a4c1fdb
Signed-off-by: Yu Ning <yu.ning@intel.com>
8 years ago  ANDROID: video: goldfishfb: add devicetree bindings
Greg Hackmann [Mon, 28 Oct 2013 22:33:33 +0000 (15:33 -0700)]
ANDROID: video: goldfishfb: add devicetree bindings

Change-Id: I5f4ba861b981edf39af537001f8ac72202927031
Signed-off-by: Greg Hackmann <ghackmann@google.com>
8 years ago  BACKPORT: staging: goldfish: audio: fix compilation on arm
Greg Hackmann [Fri, 26 Feb 2016 19:00:18 +0000 (19:00 +0000)]
BACKPORT: staging: goldfish: audio: fix compilation on arm

We do actually need slab.h; by luck we get it on other platforms, but not
always on ARM. Include it properly.

Signed-off-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Alan <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 4532150762ceb0d6fd765ebcb3ba6966fbb8faab)

Change-Id: I93a0c35da40f26aaa7c253e3c0cefaa883ea3391

8 years ago  BACKPORT: Input: goldfish_events - enable ACPI-based enumeration for goldfish events
Jason Hu [Fri, 26 Feb 2016 20:06:47 +0000 (12:06 -0800)]
BACKPORT: Input: goldfish_events - enable ACPI-based enumeration for goldfish events

Add ACPI binding to the goldfish events driver.

Signed-off-by: Jason Hu <jia-cheng.hu@intel.com>
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Alan <alan@linux.intel.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
(cherry picked from commit 0581ce09fd2c976125a20791268d7206db156d2f)

Change-Id: Ic3e4f1cffb111ea6c69977e63dd598e3fcb55f19

8 years ago  BACKPORT: goldfish: Enable ACPI-based enumeration for goldfish battery
Yu Ning [Tue, 1 Mar 2016 23:46:10 +0000 (23:46 +0000)]
BACKPORT: goldfish: Enable ACPI-based enumeration for goldfish battery

Add the ACPI bindings to the goldfish battery driver.

Signed-off-by: Yu Ning <yu.ning@intel.com>
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Sebastian Reichel <sre@kernel.org>
(cherry picked from commit fdb2f37a54470473c6b7c9d680c4c114dd9bc434)

Change-Id: I3b53481b5868b0b26848397420c9ba16a747819f

8 years ago  BACKPORT: drivers: tty: goldfish: Add device tree bindings
Miodrag Dinic [Fri, 26 Feb 2016 19:00:44 +0000 (19:00 +0000)]
BACKPORT: drivers: tty: goldfish: Add device tree bindings

Enable support for registering this device using the device tree.
An example device tree node for registering the Goldfish TTY device:

goldfish_tty@1f004000 {
    interrupts = <0xc>;
    reg = <0x1f004000 0x1000>;
    compatible = "google,goldfish-tty";
};

Signed-off-by: Miodrag Dinic <miodrag.dinic@imgtec.com>
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Alan <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 9b883eea26ccf043b608e398cf6a26231d44f5fb)

Change-Id: Idbe1bbac4f371e2feb6730712b08b66be1188ea7

8 years ago  BACKPORT: tty: goldfish: support platform_device with id -1
Greg Hackmann [Fri, 26 Feb 2016 19:01:05 +0000 (19:01 +0000)]
BACKPORT: tty: goldfish: support platform_device with id -1

When the platform bus sets the platform_device id to -1 (PLATFORM_DEVID_NONE),
use an incrementing counter for the TTY index instead

Signed-off-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 465893e18878e119d8d0255439fad8debbd646fd)

Change-Id: Ifec5ee9d71c7c076e59bb7af77c0184d1b1383cb

8 years ago  BACKPORT: Input: goldfish_events - add devicetree bindings
Greg Hackmann [Fri, 26 Feb 2016 20:05:02 +0000 (12:05 -0800)]
BACKPORT: Input: goldfish_events - add devicetree bindings

Add device tree bindings to the Goldfish virtual platform event driver.

Signed-off-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Alan <alan@linux.intel.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
(cherry picked from commit 8c5dc5a1ada2b79259e55a4bd150135d23529c6a)

Change-Id: I677d8e0d92294f53f7cc5a79300b6462b65e8aad

8 years ago  BACKPORT: power: goldfish_battery: add devicetree bindings
Greg Hackmann [Fri, 26 Feb 2016 18:45:30 +0000 (18:45 +0000)]
BACKPORT: power: goldfish_battery: add devicetree bindings

Add device tree bindings to the Goldfish virtual platform battery drivers.

Signed-off-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Sebastian Reichel <sre@kernel.org>
(cherry picked from commit 65d687a7b7d6f27e4306fe8cc8a1ca66a1a760f6)

Change-Id: If947ea3341ff0cb713c56e14d18d51a3f5912b64

8 years ago  BACKPORT: staging: goldfish: audio: add devicetree bindings
Greg Hackmann [Fri, 26 Feb 2016 19:00:03 +0000 (19:00 +0000)]
BACKPORT: staging: goldfish: audio: add devicetree bindings

Introduce devicetree bindings to the Goldfish staging audio driver.

Signed-off-by: Greg Hackmann <ghackmann@google.com>
Signed-off-by: Jin Qian <jinqian@android.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 283ded10312a3b75e384313f6f529ec2c636cf2c)

Change-Id: Ib75d3a4cac7353084a8da18a96fb298a759bacc0

8 years ago  ANDROID: usb: gadget: function: cleanup: Add blank line after declaration
Anson Jacob [Fri, 11 Nov 2016 06:10:04 +0000 (01:10 -0500)]
ANDROID: usb: gadget: function: cleanup: Add blank line after declaration

Fix warning generated by checkpatch.pl:
Missing a blank line after declarations

Change-Id: Id129bb8cc8fa37c67a647e2e5996bb2817020e65
Signed-off-by: Anson Jacob <ansonjacob.aj@gmail.com>
8 years ago  cpufreq: sched: Fix kernel crash on accessing sysfs file
Viresh Kumar [Tue, 15 Nov 2016 06:28:52 +0000 (11:58 +0530)]
cpufreq: sched: Fix kernel crash on accessing sysfs file

If the cpufreq driver hasn't set the CPUFREQ_HAVE_GOVERNOR_PER_POLICY
flag, then the kernel will crash on accessing sysfs files for the sched
governor.

For CPUFreq governors, the governor-specific sysfs files can live in
two places:

A. /sys/devices/system/cpu/cpuX/cpufreq/<governor>
B. /sys/devices/system/cpu/cpufreq/<governor>

Case A is the governor-per-policy case, where the governor tunables
can be controlled separately for each policy. Case B is for
system-wide tunable values.

The schedfreq governor only implements case A, not case B. The sysfs
files for case B will still be present in
/sys/devices/system/cpu/cpufreq/<governor>, but accessing them will
crash the kernel, as the governor doesn't support them.

Moreover, the sched governor is pretty new and will only be used on
ARM platforms, so there is no need to support case B at all.

Hence use policy->kobj instead of get_governor_parent_kobj(), so that we
always create the sysfs files in path A.

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
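
A hedged sketch of the change described above (the attribute-group name
is illustrative, not the exact schedfreq diff): attach the governor's
tunables to the policy's own kobject so they always land in path A.

  /* Before: the parent kobject falls back to the global cpufreq kobject
   * when CPUFREQ_HAVE_GOVERNOR_PER_POLICY is not set, creating the
   * system-wide files that crash on access. */
  rc = sysfs_create_group(get_governor_parent_kobj(policy), &sched_attr_group);

  /* After: always create the files under this policy's kobject (path A). */
  rc = sysfs_create_group(&policy->kobj, &sched_attr_group);
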
8 years ago  UPSTREAM: ring-buffer: Prevent overflow of size in ring_buffer_resize()
Steven Rostedt (Red Hat) [Fri, 13 May 2016 13:34:12 +0000 (09:34 -0400)]
UPSTREAM: ring-buffer: Prevent overflow of size in ring_buffer_resize()

(Cherry picked from commit 59643d1535eb220668692a5359de22545af579f6)

If the size passed to ring_buffer_resize() is greater than MAX_LONG - BUF_PAGE_SIZE
then the DIV_ROUND_UP() will return zero.

Here's the details:

  # echo 18014398509481980 > /sys/kernel/debug/tracing/buffer_size_kb

tracing_entries_write() processes this and converts kb to bytes.

 18014398509481980 << 10 = 18446744073709547520

and this is passed to ring_buffer_resize() as unsigned long size.

 size = DIV_ROUND_UP(size, BUF_PAGE_SIZE);

Where DIV_ROUND_UP(a, b) is (a + b - 1)/b

BUF_PAGE_SIZE is 4080 and here

 18446744073709547520 + 4080 - 1 = 18446744073709551599

where 18446744073709551599 is still smaller than 2^64

 2^64 - 18446744073709551599 = 17

But now 18446744073709551599 / 4080 = 4521260802379792

and size = size * 4080 = 18446744073709551360

This is checked to make sure it's still greater than 2 * 4080,
which it is.

Then we convert to the number of buffer pages needed.

 nr_page = DIV_ROUND_UP(size, BUF_PAGE_SIZE)

but this time size is 18446744073709551360 and

 2^64 - (18446744073709551360 + 4080 - 1) = -3823

Thus it overflows and the resulting number is less than 4080, which makes

  3823 / 4080 = 0

and nr_pages is set to this. As we already checked against the minimum that
nr_pages may be, this causes the logic to fail as well, and we crash the
kernel.

There's no reason to have the two DIV_ROUND_UP() calls (that's just a
result of historical code changes); clean up the code and fix this bug.

Cc: stable@vger.kernel.org # 3.5+
Fixes: 83f40318dab00 ("ring-buffer: Make removal of ring buffer pages atomic")
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Change-Id: I1147672317a3ad0fc995b1f32baaa050a7976ac4
Bug: 32659848
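
The failure mode can be reproduced with plain userspace arithmetic; a
minimal standalone sketch that mirrors the sequence above (BUF_PAGE_SIZE
value taken from the text, run on an LP64 system):

  #include <stdio.h>

  #define BUF_PAGE_SIZE 4080UL
  #define DIV_ROUND_UP(a, b) (((a) + (b) - 1) / (b))

  int main(void)
  {
          unsigned long size = 18014398509481980UL << 10;  /* kb -> bytes */
          unsigned long nr_pages;

          size = DIV_ROUND_UP(size, BUF_PAGE_SIZE) * BUF_PAGE_SIZE;
          nr_pages = DIV_ROUND_UP(size, BUF_PAGE_SIZE);    /* size + 4079 wraps past 2^64 */

          printf("nr_pages = %lu\n", nr_pages);            /* prints 0 */
          return 0;
  }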

8 years ago  usb: gadget: f_mtp: simplify ptp NULL pointer check
Amit Pundir [Tue, 11 Aug 2015 07:04:45 +0000 (12:34 +0530)]
usb: gadget: f_mtp: simplify ptp NULL pointer check

Simplify MTP/PTP dev NULL pointer check introduced in
Change-Id: Ic44a699d96df2e13467fc081bff88b97dcc5afb2
and restrict it to MTP/PTP function level only.

Return ERR_PTR() instead of NULL from mtp_ptp function
to skip doing NULL pointer checks all the way up to
configfs.c

Fixes: Change-Id: Ic44a699d96df2e13467fc081bff88b97dcc5afb2
       ("usb: gadget: fix NULL ptr derefer while symlinking PTP func")
Change-Id: Iab7c55089c115550c3506f6cca960a07ae52713d
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
8 years ago  ANDROID: video: adf: Avoid directly referencing user pointers
Jonathan Hamilton [Wed, 21 Sep 2016 19:40:51 +0000 (12:40 -0700)]
ANDROID: video: adf: Avoid directly referencing user pointers

Enabling KASAN on a kernel using ADF catches a number of places where
user-supplied pointers passed to ioctls are directly dereferenced
without copy_from_user or access_ok.

Bug: 31806036

Signed-off-by: Jonathan Hamilton <jonathan.hamilton@imgtec.com>
Change-Id: I6e86237aaa6cec0f6e1c385336aefcc5332080ae

8 years ago  ANDROID: usb: gadget: audio_source: fix comparison of distinct pointer types
Amit Pundir [Thu, 15 Sep 2016 10:35:40 +0000 (16:05 +0530)]
ANDROID: usb: gadget: audio_source: fix comparison of distinct pointer types

Use div_s64() instead of do_div() to fix the following "comparison of
distinct pointer types lacks a cast" warnings from the do_div() calls in
audio_send() for ARCH=arm in Linux 4.8-rc6:

  CC      drivers/usb/gadget/function/f_audio_source.o
In file included from ./arch/arm/include/asm/div64.h:126:0,
                 from ./include/linux/kernel.h:142,
                 from ./include/linux/list.h:8,
                 from ./include/linux/kobject.h:20,
                 from ./include/linux/device.h:17,
                 from drivers/usb/gadget/function/f_audio_source.c:17:
drivers/usb/gadget/function/f_audio_source.c: In function ‘audio_send’:
./include/asm-generic/div64.h:207:28: warning: comparison of distinct pointer types lacks a cast
  (void)(((typeof((n)) *)0) == ((uint64_t *)0)); \
                            ^
drivers/usb/gadget/function/f_audio_source.c:381:2: note: in expansion of macro ‘do_div’
  do_div(msecs, 1000000);
  ^
./include/asm-generic/div64.h:207:28: warning: comparison of distinct pointer types lacks a cast
  (void)(((typeof((n)) *)0) == ((uint64_t *)0)); \
                            ^
drivers/usb/gadget/function/f_audio_source.c:383:2: note: in expansion of macro ‘do_div’
  do_div(frames, 1000);
  ^
  LD      drivers/usb/gadget/function/usb_f_audio_source.o

Change-Id: Ie1a920c8948f3fc3f1263add25a402ded132fd66
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
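
A minimal sketch of the kind of change involved (kernel-style, not the
actual f_audio_source.c diff): do_div() wants an unsigned 64-bit lvalue
that it modifies in place, while div_s64() takes a signed 64-bit
dividend and returns the quotient, so no cast is needed on 32-bit arm.

  #include <linux/math64.h>

  static void example(s64 msecs, s64 frames)
  {
          /* before: warns on ARCH=arm because msecs/frames are signed */
          /* do_div(msecs, 1000000); */
          /* do_div(frames, 1000);   */

          /* after: div_s64() handles signed 64-bit dividends directly */
          msecs  = div_s64(msecs, 1000000);
          frames = div_s64(frames, 1000);
  }
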
8 years ago  android: binder: support for file-descriptor arrays.
Martijn Coenen [Tue, 18 Oct 2016 11:58:55 +0000 (13:58 +0200)]
android: binder: support for file-descriptor arrays.

This patch introduces a new binder_fd_array object
that allows us to support one or more file descriptors
embedded in a buffer that is scatter-gathered.

Change-Id: I647a53cf0d905c7be0dfd9333806982def68dd74
Signed-off-by: Martijn Coenen <maco@google.com>
8 years ago  android: binder: support for scatter-gather.
Martijn Coenen [Fri, 30 Sep 2016 12:10:07 +0000 (14:10 +0200)]
android: binder: support for scatter-gather.

Previously all data passed over binder needed
to be serialized, with the exception of Binder
objects and file descriptors.

This patch adds support for scatter-gathering raw
memory buffers into a binder transaction, avoiding
the need to first serialize them into a Parcel.

To remain backwards compatible with existing
binder clients, it introduces two new command
ioctls for this purpose - BC_TRANSACTION_SG and
BC_REPLY_SG. These commands may only be used with
the new binder_transaction_data_sg structure,
which adds a field for the total size of the
buffers we are scatter-gathering.

Because memory buffers may contain pointers to
other buffers, we allow callers to specify
a parent buffer and an offset into it, to indicate
this is a location pointing to the buffer that
we are fixing up. The kernel will then take care
of fixing up the pointer to that buffer as well.

Change-Id: I02417f28cff14688f2e1d6fcb959438fd96566cc
Signed-off-by: Martijn Coenen <maco@google.com>
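
For reference, a sketch of the shape of the new structures described
above (field names follow the later upstream uapi header; treat as
illustrative rather than the verbatim patch):

  /* BC_TRANSACTION_SG / BC_REPLY_SG carry the usual transaction data
   * plus the total size of all scatter-gathered buffers. */
  struct binder_transaction_data_sg {
          struct binder_transaction_data transaction_data;
          binder_size_t                  buffers_size;
  };

  /* A scatter-gathered buffer; parent/parent_offset tell the kernel
   * which location inside an earlier buffer points at this one, so the
   * pointer can be fixed up after copying. */
  struct binder_buffer_object {
          struct binder_object_header hdr;            /* BINDER_TYPE_PTR */
          __u32                       flags;          /* e.g. BINDER_BUFFER_FLAG_HAS_PARENT */
          binder_uintptr_t            buffer;         /* user address of the data */
          binder_size_t               length;
          binder_size_t               parent;         /* index of the parent buffer object */
          binder_size_t               parent_offset;  /* offset of the pointer within it */
  };
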
8 years ago  android: binder: add extra size to allocator.
Martijn Coenen [Fri, 30 Sep 2016 12:05:40 +0000 (14:05 +0200)]
android: binder: add extra size to allocator.

The binder_buffer allocator currently only allocates
space for the data and offsets buffers of a Parcel.
This change allows for requesting an additional chunk
of data in the buffer, which can for example be used
to hold additional meta-data about the transaction
(e.g. a security context).

Change-Id: I58ab9c383a2e1a3057aae6adaa596ce867f1b157
Signed-off-by: Martijn Coenen <maco@google.com>
8 years ago  android: binder: refactor binder_transact()
Martijn Coenen [Thu, 29 Sep 2016 13:38:14 +0000 (15:38 +0200)]
android: binder: refactor binder_transact()

Move the handling of fixups for binder objects,
handles and file descriptors into separate
functions.

Change-Id: If6849f1caee3834aa87d0ab08950bb1e21ec6e38
Signed-off-by: Martijn Coenen <maco@google.com>
8 years ago  android: binder: support multiple /dev instances.
Martijn Coenen [Fri, 30 Sep 2016 14:08:09 +0000 (16:08 +0200)]
android: binder: support multiple /dev instances.

Add a new module parameter 'devices', that can be
used to specify the names of the binder device
nodes we want to populate in /dev.

Each device node has its own context manager, and
is therefore logically separated from all the other
device nodes.

The config option CONFIG_ANDROID_BINDER_DEVICES can
be used to set the default value of the parameter.

This approach was favored over using IPC namespaces,
mostly because we require a single process to be a
part of multiple binder contexts, which seemed harder
to achieve with namespaces.

Change-Id: I3df72b2a19b5ad5a0360e6322482db7b00a12b24
Signed-off-by: Martijn Coenen <maco@google.com>
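
A hedged sketch of how the parameter is wired up and used (values are
examples, not the verbatim diff):

  /* Comma-separated list of device node names; the build-time default
   * comes from CONFIG_ANDROID_BINDER_DEVICES, e.g. "binder,vndbinder". */
  static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES;
  module_param_named(devices, binder_devices_param, charp, 0444);

  /* With binder built in, the list can also be overridden on the kernel
   * command line, e.g.  binder.devices=binder,hwbinder,vndbinder */
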
8 years ago  android: binder: deal with contexts in debugfs.
Martijn Coenen [Mon, 17 Oct 2016 13:17:31 +0000 (15:17 +0200)]
android: binder: deal with contexts in debugfs.

Properly print the context in debugfs entries.

Change-Id: If10c2129536d9f39bae542afd7318ca79af60e3a
Signed-off-by: Martijn Coenen <maco@google.com>
8 years ago  android: binder: support multiple context managers.
Martijn Coenen [Fri, 30 Sep 2016 13:51:48 +0000 (15:51 +0200)]
android: binder: support multiple context managers.

Move the context manager state into a separate
struct context, and allow for each process to have
its own context associated with it.

Change-Id: Ifa934370241a2d447dd519eac3fd0682c6d00ab4
Signed-off-by: Martijn Coenen <maco@google.com>
8 years ago  android: binder: split flat_binder_object.
Martijn Coenen [Wed, 13 Jul 2016 10:06:49 +0000 (12:06 +0200)]
android: binder: split flat_binder_object.

flat_binder_object is used for both handling
binder objects and file descriptors, even though
the two are mostly independent. Since we'll
have more fixup objects in binder in the future,
instead of extending flat_binder_object again,
split out file descriptors to their own object
while retaining backwards compatibility with
existing user-space clients. All binder objects
just share a header.

Change-Id: If3c55f27a2aa8f21815383e0e807be47895e4786
Signed-off-by: Martijn Coenen <maco@google.com>
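
A sketch of the shared-header idea described above (again following the
later upstream uapi layout; illustrative):

  /* Every object in a transaction now starts with a common header whose
   * 'type' field says what follows. */
  struct binder_object_header {
          __u32 type;   /* BINDER_TYPE_BINDER, BINDER_TYPE_HANDLE, BINDER_TYPE_FD, ... */
  };

  /* File descriptors get their own object instead of overloading
   * flat_binder_object. */
  struct binder_fd_object {
          struct binder_object_header hdr;          /* BINDER_TYPE_FD */
          __u32                       pad_flags;
          union {
                  binder_uintptr_t    pad_binder;
                  __u32               fd;
          };
          binder_uintptr_t            cookie;
  };
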
8 years ago  disable aio support in recommended configuration
Daniel Micay [Thu, 20 Oct 2016 19:45:01 +0000 (15:45 -0400)]
disable aio support in recommended configuration

The aio interface adds substantial attack surface for a feature that's
not being exposed by Android at all. It's unlikely that anyone is using
the kernel feature directly either. This feature is rarely used even on
servers. The glibc POSIX aio calls really use thread pools. The lack of
widespread usage also means this is relatively poorly audited/tested.

The kernel's aio rarely provides performance benefits over using a
thread pool and is quite incomplete in terms of system call coverage
along with having edge cases where blocking can occur. Part of the
performance issue is the fact that it only supports direct io, not
buffered io. The existing API is considered fundamentally flawed
and it's unlikely it will be expanded, but rather replaced:

https://marc.info/?l=linux-aio&m=145255815216051&w=2

Since ext4 encryption means no direct io support, kernel aio isn't even
going to work properly on Android devices using file-based encryption.

Change-Id: Iccc7cab4437791240817e6275a23e1d3f4a47f2d
Signed-off-by: Daniel Micay <danielmicay@gmail.com>
8 years ago  [RFC]cgroup: Change from CAP_SYS_NICE to CAP_SYS_RESOURCE for cgroup migration permissions
John Stultz [Tue, 18 Oct 2016 23:20:23 +0000 (16:20 -0700)]
[RFC]cgroup: Change from CAP_SYS_NICE to CAP_SYS_RESOURCE for cgroup migration permissions

To better match what we're pushing upstream, use CAP_SYS_RESOURCE
instead of CAP_SYS_NICE; this shouldn't affect Android, as Zygote and
system_server already use CAP_SYS_RESOURCE.

Signed-off-by: John Stultz <john.stultz@linaro.org>
8 years ago  UPSTREAM: cpu/hotplug: Handle unbalanced hotplug enable/disable
Lianwei Wang [Fri, 10 Jun 2016 06:43:28 +0000 (23:43 -0700)]
UPSTREAM: cpu/hotplug: Handle unbalanced hotplug enable/disable

(cherry picked from commit 01b41159066531cc8d664362ff0cd89dd137bbfa)

When cpu_hotplug_enable() is called unbalanced, without a preceding
cpu_hotplug_disable(), the code emits a warning but happily decrements the
disabled counter. This causes the next operations to malfunction.

Prevent the decrement and just emit a warning.

Signed-off-by: Lianwei Wang <lianwei.wang@gmail.com>
Cc: peterz@infradead.org
Cc: linux-pm@vger.kernel.org
Cc: oleg@redhat.com
Link: http://lkml.kernel.org/r/1465541008-12476-1-git-send-email-lianwei.wang@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
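
A minimal sketch of the resulting guard (abbreviated, not the verbatim
upstream diff):

  static void __cpu_hotplug_enable(void)
  {
          /* Unbalanced call: warn but keep the counter at zero so later
           * disable/enable pairs still work. */
          if (WARN_ONCE(!cpu_hotplug_disabled, "Unbalanced cpu hotplug enable\n"))
                  return;
          cpu_hotplug_disabled--;
  }

  void cpu_hotplug_enable(void)
  {
          cpu_maps_update_begin();
          __cpu_hotplug_enable();
          cpu_maps_update_done();
  }
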
8 years ago  UPSTREAM: arm64: kaslr: fix breakage with CONFIG_MODVERSIONS=y
Ard Biesheuvel [Thu, 13 Oct 2016 16:42:09 +0000 (17:42 +0100)]
UPSTREAM: arm64: kaslr: fix breakage with CONFIG_MODVERSIONS=y

As it turns out, the KASLR code breaks CONFIG_MODVERSIONS, since the
kcrctab has an absolute address field that is relocated at runtime
when the kernel offset is randomized.

This has been fixed already for PowerPC in the past, so simply wire up
the existing code dealing with this issue.

Cc: <stable@vger.kernel.org>
Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Tested-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
(cherry picked from commit 8fe88a4145cdeee486af60e61f5d5a14f804fa45)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia40bb68eb5ba7df14214243657948d469f1d5717

8 years ago  UPSTREAM: arm64: kaslr: keep modules close to the kernel when DYNAMIC_FTRACE=y
Ard Biesheuvel [Mon, 17 Oct 2016 15:18:39 +0000 (16:18 +0100)]
UPSTREAM: arm64: kaslr: keep modules close to the kernel when DYNAMIC_FTRACE=y

The RANDOMIZE_MODULE_REGION_FULL Kconfig option allows KASLR to be
configured in such a way that kernel modules and the core kernel are
allocated completely independently, which implies that modules are likely
to require branches via PLT entries to reach the core kernel. The dynamic
ftrace code does not expect that, and assumes that it can patch module
code to perform a relative branch to anywhere in the core kernel. This
may result in errors such as

  branch_imm_common: offset out of range
  ------------[ cut here ]------------
  WARNING: CPU: 3 PID: 196 at kernel/trace/ftrace.c:1995 ftrace_bug+0x220/0x2e8
  Modules linked in:

  CPU: 3 PID: 196 Comm: systemd-udevd Not tainted 4.8.0-22-generic #24
  Hardware name: AMD Seattle/Seattle, BIOS 10:34:40 Oct  6 2016
  task: ffff8d1bef7dde80 task.stack: ffff8d1bef6b0000
  PC is at ftrace_bug+0x220/0x2e8
  LR is at ftrace_process_locs+0x330/0x430

So make RANDOMIZE_MODULE_REGION_FULL mutually exclusive with DYNAMIC_FTRACE
at the Kconfig level.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
(cherry picked from commit 8fe88a4145cdeee486af60e61f5d5a14f804fa45)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ifb2474dcbb7a3066fe5724ee53a2048d61e80ccc

8 years ago  cgroup: Remove leftover instances of allow_attach
Guenter Roeck [Tue, 18 Oct 2016 19:35:03 +0000 (12:35 -0700)]
cgroup: Remove leftover instances of allow_attach

Fix:

kernel/sched/tune.c:718:2: error:
unknown field ‘allow_attach’ specified in initializer
kernel/cpuset.c:2087:2: error:
unknown field 'allow_attach' specified in initializer

Change-Id: Ie524350ffc6158f3182d90095cca502e58b6f197
Fixes: e78f134a78a0 ("CHROMIUM: remove Android's cgroup generic permissions checks")
Signed-off-by: Guenter Roeck <groeck@chromium.org>
8 years ago  BACKPORT: lib: harden strncpy_from_user
Mark Rutland [Tue, 11 Oct 2016 20:51:27 +0000 (13:51 -0700)]
BACKPORT: lib: harden strncpy_from_user

The strncpy_from_user() accessor is effectively a copy_from_user()
specialised to copy strings, terminating early at a NUL byte if possible.
In other respects it is identical, and can be used to copy an arbitrarily
large buffer from userspace into the kernel.  Conceptually, it exposes a
similar attack surface.

As with copy_from_user(), we check the destination range when the kernel
is built with KASAN, but unlike copy_from_user() we do not check the
destination buffer when using HARDENED_USERCOPY.  As strncpy_from_user()
calls get_user() in a loop, we must call check_object_size() explicitly.

This patch adds this instrumentation to strncpy_from_user(), per the same
rationale as with the regular copy_from_user().  In the absence of
hardened usercopy this will have no impact as the instrumentation expands
to an empty static inline function.

Link: http://lkml.kernel.org/r/1472221903-31181-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bug: 31374226
Change-Id: I898e4e9f19307e37a9be497cb1a0d7f1e3911661
(cherry picked from commit bf90e56e467ed5766722972d483e6711889ed1b0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
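
A trimmed sketch of where the instrumentation lands in
lib/strncpy_from_user.c (abbreviated): the destination object is checked
once, before the copy loop, mirroring copy_from_user().

  long strncpy_from_user(char *dst, const char __user *src, long count)
  {
          unsigned long max_addr, src_addr;

          if (unlikely(count <= 0))
                  return 0;

          max_addr = user_addr_max();
          src_addr = (unsigned long)src;
          if (likely(src_addr < max_addr)) {
                  unsigned long max = max_addr - src_addr;

                  check_object_size(dst, count, false);   /* added by this patch */
                  return do_strncpy_from_user(dst, src, count, max);
          }
          return -EFAULT;
  }
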
8 years ago  CHROMIUM: cgroups: relax permissions on moving tasks between cgroups
Dmitry Torokhov [Thu, 6 Oct 2016 23:14:16 +0000 (16:14 -0700)]
CHROMIUM: cgroups: relax permissions on moving tasks between cgroups

Android expects system_server to be able to move tasks between different
cgroups/cpusets, but does not want it to run as root. Let's relax the
permission check so that processes can move other tasks if they have
CAP_SYS_NICE in the affected task's user namespace.

BUG=b:31790445,chromium:647994
TEST=Boot android container, examine logcat

Change-Id: Ia919c66ab6ed6a6daf7c4cf67feb38b13b1ad09b
Signed-off-by: Dmitry Torokhov <dtor@chromium.org>
Reviewed-on: https://chromium-review.googlesource.com/394927
Reviewed-by: Ricky Zhou <rickyz@chromium.org>
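
A hedged sketch of the relaxed check (condensed from the usual cgroup
attach-permission logic, not the verbatim diff): the uid-based rules
stay, but a caller holding CAP_SYS_NICE over the target task's user
namespace is now also allowed.

  static int cgroup_procs_write_permission(struct task_struct *task)
  {
          const struct cred *cred = current_cred();
          const struct cred *tcred = get_task_cred(task);
          int ret = 0;

          if (!uid_eq(cred->euid, GLOBAL_ROOT_UID) &&
              !uid_eq(cred->euid, tcred->uid) &&
              !uid_eq(cred->euid, tcred->suid) &&
              !ns_capable(tcred->user_ns, CAP_SYS_NICE))   /* new escape hatch */
                  ret = -EACCES;

          put_cred(tcred);
          return ret;
  }
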
8 years ago  CHROMIUM: remove Android's cgroup generic permissions checks
Dmitry Torokhov [Thu, 6 Oct 2016 22:53:38 +0000 (15:53 -0700)]
CHROMIUM: remove Android's cgroup generic permissions checks

The implementation is utterly broken, resulting in all processes being
allowed to move tasks between sets (as long as they have access to the
"tasks" attribute), and upstream is heading towards checking only
capabilities anyway, so let's get rid of this code.

BUG=b:31790445,chromium:647994
TEST=Boot android container, examine logcat

Change-Id: I2f780a5992c34e52a8f2d0b3557fc9d490da2779
Signed-off-by: Dmitry Torokhov <dtor@chromium.org>
Reviewed-on: https://chromium-review.googlesource.com/394967
Reviewed-by: Ricky Zhou <rickyz@chromium.org>
Reviewed-by: John Stultz <john.stultz@linaro.org>
8 years ago  UPSTREAM: arm64: relocatable: deal with physically misaligned kernel images
Ard Biesheuvel [Mon, 18 Apr 2016 15:09:47 +0000 (17:09 +0200)]
UPSTREAM: arm64: relocatable: deal with physically misaligned kernel images

When booting a relocatable kernel image, there is no practical reason
to refuse an image whose load address is not exactly TEXT_OFFSET bytes
above a 2 MB aligned base address, as long as the physical and virtual
misalignment with respect to the swapper block size are equal, and are
both aligned to THREAD_SIZE.

Since the virtual misalignment is under our control when we first enter
the kernel proper, we can simply choose its value to be equal to the
physical misalignment.

So treat the misalignment of the physical load address as the initial
KASLR offset, and fix up the remaining code to deal with that.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Bug: 32122850

(cherry picked from commit 08cdac619c81b3fa8cd73aeed2330ffe0a0b73ca)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I658cb3467ba9a4f5b1f5a1cbb972fdc5a3562bf0

8 years ago  UPSTREAM: arm64: account for sparsemem section alignment when choosing vmemmap offset
Ard Biesheuvel [Tue, 8 Mar 2016 14:09:29 +0000 (21:09 +0700)]
UPSTREAM: arm64: account for sparsemem section alignment when choosing vmemmap offset

Commit dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear
region") fixed an issue where the struct page array would overflow into the
adjacent virtual memory region if system RAM was placed so high up in
physical memory that its addresses were not representable in the build time
configured virtual address size.

However, the fix failed to take into account that the vmemmap region needs
to be relatively aligned with respect to the sparsemem section size, so that
a sequence of page structs corresponding with a sparsemem section in the
linear region appears naturally aligned in the vmemmap region.

So round up vmemmap to sparsemem section size. Since this essentially moves
the projection of the linear region up in memory, also revert the reduction
of the size of the vmemmap region.

Cc: <stable@vger.kernel.org>
Fixes: dfd55ad85e4a ("arm64: vmemmap: use virtual projection of linear region")
Tested-by: Mark Langsdorf <mlangsdo@redhat.com>
Tested-by: David Daney <david.daney@cavium.com>
Tested-by: Robert Richter <rrichter@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029

(cherry picked from commit 36e5cd6b897e17d03008f81e075625d8e43e52d0)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I77bad8c6a7c1a7c3dda92a37ceef5ddfb196ec70

8 years ago  UPSTREAM: percpu: fix synchronization between synchronous map extension and chunk destruction
Tejun Heo [Wed, 25 May 2016 15:48:25 +0000 (11:48 -0400)]
UPSTREAM: percpu: fix synchronization between synchronous map extension and chunk destruction

(cherry picked from commit 6710e594f71ccaad8101bc64321152af7cd9ea28)

For non-atomic allocations, pcpu_alloc() can try to extend the area
map synchronously after dropping pcpu_lock; however, the extension
wasn't synchronized against chunk destruction and the chunk might get
freed while extension is in progress.

This patch fixes the bug by putting most of non-atomic allocations
under pcpu_alloc_mutex to synchronize against pcpu_balance_work which
is responsible for async chunk management including destruction.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-and-tested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Cc: stable@vger.kernel.org # v3.18+
Fixes: 1a4d76076cda ("percpu: implement asynchronous chunk population")
Change-Id: I8800962e658e78eac866fff4a4e00294c58a3dec
Bug: 31596597

8 years ago  UPSTREAM: percpu: fix synchronization between chunk->map_extend_work and chunk destruction
Tejun Heo [Wed, 25 May 2016 15:48:25 +0000 (11:48 -0400)]
UPSTREAM: percpu: fix synchronization between chunk->map_extend_work and chunk destruction

(cherry picked from commit 4f996e234dad488e5d9ba0858bc1bae12eff82c3)

Atomic allocations can trigger async map extensions which is serviced
by chunk->map_extend_work.  pcpu_balance_work which is responsible for
destroying idle chunks wasn't synchronizing properly against
chunk->map_extend_work and may end up freeing the chunk while the work
item is still in flight.

This patch fixes the bug by rolling async map extension operations
into pcpu_balance_work.

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-and-tested-by: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Sasha Levin <sasha.levin@oracle.com>
Cc: stable@vger.kernel.org # v3.18+
Fixes: 9c824b6a172c ("percpu: make sure chunk->map array has available space")
Change-Id: I8f4aaf7fe0bc0e9f353d41e0a7840c40d6a32117
Bug: 31596597

8 years ago  ANDROID: binder: Clear binder and cookie when setting handle in flat binder struct
Arve Hjønnevåg [Fri, 12 Aug 2016 23:04:28 +0000 (16:04 -0700)]
ANDROID: binder: Clear binder and cookie when setting handle in flat binder struct

Prevents leaking pointers between processes

BUG: 30768347
Change-Id: Id898076926f658a1b8b27a3ccb848756b36de4ca
Signed-off-by: Arve Hjønnevåg <arve@android.com>
8 years ago  ANDROID: binder: Add strong ref checks
Arve Hjønnevåg [Tue, 2 Aug 2016 22:40:39 +0000 (15:40 -0700)]
ANDROID: binder: Add strong ref checks

Prevent using a binder_ref with only weak references where a strong
reference is required.

BUG: 30445380
Change-Id: I66c15b066808f28bd27bfe50fd0e03ff45a09fca
Signed-off-by: Arve Hjønnevåg <arve@android.com>
8 years ago  UPSTREAM: staging/android/ion : fix a race condition in the ion driver
EunTaik Lee [Wed, 24 Feb 2016 04:38:06 +0000 (04:38 +0000)]
UPSTREAM: staging/android/ion : fix a race condition in the ion driver

There is a use-after-free problem in the ion driver.
This is caused by a race condition in the ion_ioctl()
function.

A handle has a ref count of 1, and two tasks on different
cpus call ION_IOC_FREE simultaneously.

cpu 0                                   cpu 1
-------------------------------------------------------
ion_handle_get_by_id()
(ref == 2)
                            ion_handle_get_by_id()
                            (ref == 3)

ion_free()
(ref == 2)

ion_handle_put()
(ref == 1)

                            ion_free()
                            (ref == 0 so ion_handle_destroy() is
                            called
                            and the handle is freed.)

                            ion_handle_put() is called and it
                            decreases the slub's next free pointer

The problem is detected as an unaligned access in the
spin lock functions, since they use load-exclusive
instructions. In some cases it corrupts the slub's
free pointer, which causes a misaligned access to the
next free pointer (kmalloc returns a pointer like
ffffc0745b4580aa), and it causes lots of other
hard-to-debug problems.

This symptom occurs because the first member of the
ion_handle structure is the reference count, and the
ion driver decrements that reference after the handle
has been freed.

To fix this problem, the client->lock mutex is extended
to protect all the code that uses the handle.

Signed-off-by: Eun Taik Lee <eun.taik.lee@samsung.com>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 9590232bb4f4cc824f3425a6e1349afbe6d6d2b7)
bug: 31568617
Change-Id: I4ea2be0cad3305c4e196126a02e2ab7108ef0976
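
A condensed sketch of the shape of the fix (helper names illustrative):
the ION_IOC_FREE path keeps client->lock held across both the handle
lookup and the final put, instead of dropping it in between.

  case ION_IOC_FREE:
  {
          struct ion_handle *handle;

          mutex_lock(&client->lock);
          handle = ion_handle_get_by_id_nolock(client, data.handle.handle);
          if (IS_ERR(handle)) {
                  mutex_unlock(&client->lock);
                  return PTR_ERR(handle);
          }
          ion_free_nolock(client, handle);
          ion_handle_put_nolock(handle);
          mutex_unlock(&client->lock);
          break;
  }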

8 years ago  ANDROID: android-base: CONFIG_HARDENED_USERCOPY=y
Sami Tolvanen [Wed, 5 Oct 2016 16:52:07 +0000 (09:52 -0700)]
ANDROID: android-base: CONFIG_HARDENED_USERCOPY=y

Bug: 31374226
Change-Id: I977e76395017d8d718ea634421b3635023934ef9
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: fs/proc/kcore.c: Add bounce buffer for ktext data
Jiri Olsa [Thu, 8 Sep 2016 07:57:08 +0000 (09:57 +0200)]
UPSTREAM: fs/proc/kcore.c: Add bounce buffer for ktext data

We hit hardened usercopy feature check for kernel text access by reading
kcore file:

  usercopy: kernel memory exposure attempt detected from ffffffff8179a01f (<kernel text>) (4065 bytes)
  kernel BUG at mm/usercopy.c:75!

Bypass this check for kcore by adding a bounce buffer for ktext data.

Reported-by: Steve Best <sbest@redhat.com>
Fixes: f5509cc18daa ("mm: Hardened usercopy")
Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bug: 31374226
Change-Id: Ic93e6041b67d804a994518bf4950811f828b406e
(cherry picked from commit df04abfd181acc276ba6762c8206891ae10ae00d)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
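
A trimmed sketch of the resulting read path in fs/proc/kcore.c
(abbreviated): kernel text is staged in a kernel-side bounce buffer, and
only that buffer is handed to copy_to_user(), so hardened usercopy never
sees a kernel-text source pointer.

  if (kern_addr_valid(start)) {
          /* Stage the ktext bytes in the bounce buffer... */
          memcpy(buf, (char *)start, tsz);
          /* ...and copy to userspace from the buffer, not from ktext. */
          if (copy_to_user(buffer, buf, tsz))
                  return -EFAULT;
  } else {
          if (clear_user(buffer, tsz))
                  return -EFAULT;
  }
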
8 years ago  UPSTREAM: fs/proc/kcore.c: Make bounce buffer global for read
Jiri Olsa [Thu, 8 Sep 2016 07:57:07 +0000 (09:57 +0200)]
UPSTREAM: fs/proc/kcore.c: Make bounce buffer global for read

The next patch adds a bounce buffer for the ktext area, so it's
convenient to have a single bounce buffer for both the
vmalloc/module and ktext cases.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bug: 31374226
Change-Id: I8f517354e6d12aed75ed4ae6c0a6adef0a1e61da
(cherry picked from commit f5beeb1851ea6f8cfcf2657f26cb24c0582b4945)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  BACKPORT: arm64: Correctly bounds check virt_addr_valid
Laura Abbott [Wed, 21 Sep 2016 22:25:04 +0000 (15:25 -0700)]
BACKPORT: arm64: Correctly bounds check virt_addr_valid

virt_addr_valid is supposed to return true if and only if virt_to_page
returns a valid page structure. The current macro does math on whatever
address is given and passes that to pfn_valid to verify. vmalloc and
module addresses can happen to generate a pfn that 'happens' to be
valid. Fix this by only performing the pfn_valid check on addresses that
have the potential to be valid.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 31374226
Change-Id: I75cbeb3edb059f19af992b7f5d0baa283f95991b
(cherry picked from commit ca219452c6b8a6cd1369b6a78b1cf069d0386865)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
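
A hedged sketch of the idea (not the exact arm64 macro): only consult
pfn_valid() for addresses that can lie in the linear map, so vmalloc and
module addresses can never pass by accident.

  /* Addresses below PAGE_OFFSET (vmalloc, modules, fixmap) fail before
   * pfn_valid() is even consulted. */
  #define _virt_addr_is_linear(kaddr)  ((u64)(kaddr) >= PAGE_OFFSET)
  #define virt_addr_valid(kaddr) \
          (_virt_addr_is_linear(kaddr) && pfn_valid(__pa(kaddr) >> PAGE_SHIFT))
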
8 years ago  BACKPORT: arm64: mm: Mark .rodata as RO
Jeremy Linton [Fri, 19 Feb 2016 17:50:32 +0000 (11:50 -0600)]
BACKPORT: arm64: mm: Mark .rodata as RO

Currently the .rodata section is actually still executable when DEBUG_RODATA
is enabled. This changes that so the .rodata is actually read only, no execute.
It also adds the .rodata section to the mem_init banner.

Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
[catalin.marinas@arm.com: added vm_struct vmlinux_rodata in map_kernel()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 31374226
Change-Id: I6fd95beaf814fc91805da12c5329a57ce9008fd7
(cherry picked from commit 2f39b5f91eb4bccd786d194e70db1dccad784755)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  Fix a build breakage in IO latency hist code.
Mohan Srinivasan [Mon, 3 Oct 2016 23:17:34 +0000 (16:17 -0700)]
Fix a build breakage in IO latency hist code.

Fix a build breakage where MMC is enabled, but BLOCK is not.

Change-Id: I0eb422d12264f0371f3368ae7c37342ef9efabaa
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
8 years ago  UPSTREAM: efi: include asm/early_ioremap.h not asm/efi.h to get early_memremap
Ard Biesheuvel [Tue, 12 Jan 2016 13:22:46 +0000 (14:22 +0100)]
UPSTREAM: efi: include asm/early_ioremap.h not asm/efi.h to get early_memremap

The code in efi.c uses early_memremap(), but relies on a transitive
include rather than including asm/early_ioremap.h directly, since
this header did not exist on ia64.

Commit f7d924894265 ("arm64/efi: refactor EFI init and runtime code
for reuse by 32-bit ARM") attempted to work around this by including
asm/efi.h, which transitively includes asm/early_ioremap.h on most
architectures. However, since asm/efi.h does not exist on ia64 either,
this is not much of an improvement.

Now that we have created an asm/early_ioremap.h for ia64, we can just
include it directly.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Change-Id: Ifa3e69e0b4078bac1e1d29bfe56861eb394e865b
(cherry picked from commit 0f7f2f0c0fcbe5e2bcba707a628ebaedfe2be4b4)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: ia64: split off early_ioremap() declarations into asm/early_ioremap.h
Ard Biesheuvel [Tue, 12 Jan 2016 13:22:45 +0000 (14:22 +0100)]
UPSTREAM: ia64: split off early_ioremap() declarations into asm/early_ioremap.h

Unlike x86, arm64 and ARM, ia64 does not declare its implementations
of early_ioremap/early_iounmap/early_memremap/early_memunmap in a header
file called <asm/early_ioremap.h>

This complicates the use of these functions in generic code, since the
header cannot be included directly, and we have to rely on transitive
includes, which is fragile.

So create a <asm/early_ioremap.h> for ia64, and move the existing
definitions into it.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Change-Id: I31eb55a9d57596faa40aec64bd26ce3ec21b0b4d
(cherry picked from commit 809267708557ed5575831282f719ca644698084b)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN
Catalin Marinas [Fri, 1 Jul 2016 17:25:31 +0000 (18:25 +0100)]
FROMLIST: arm64: Enable CONFIG_ARM64_SW_TTBR0_PAN

This patch adds the Kconfig option to enable support for TTBR0 PAN
emulation. The option is default off because of a slight performance hit
when enabled, caused by the additional TTBR0_EL1 switching during user
access operations or exception entry/exit code.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Id00a8ad4169d6eb6176c468d953436eb4ae887ae
(cherry picked from commit 6a2d7bad43474c48b68394d455b84a16b7d7dc3f)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: xen: Enable user access before a privcmd hvc call
Catalin Marinas [Tue, 5 Jul 2016 11:25:15 +0000 (12:25 +0100)]
FROMLIST: arm64: xen: Enable user access before a privcmd hvc call

Privcmd calls are issued by the userspace. The kernel needs to enable
access to TTBR0_EL1 as the hypervisor would issue stage 1 translations
to user memory via AT instructions. Since AT instructions are not
affected by the PAN bit (ARMv8.1), we only need the explicit
uaccess_enable/disable if the TTBR0 PAN option is enabled.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I927f14076ba94c83e609b19f46dd373287e11fc4
(cherry picked from commit 8cc1f33d2c9f206b6505bedba41aed2b33c203c0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: Handle faults caused by inadvertent user access with PAN enabled
Catalin Marinas [Fri, 1 Jul 2016 17:22:39 +0000 (18:22 +0100)]
FROMLIST: arm64: Handle faults caused by inadvertent user access with PAN enabled

When TTBR0_EL1 is set to the reserved page, an erroneous kernel access
to user space would generate a translation fault. This patch adds the
checks for the software-set PSR_PAN_BIT to emulate a permission fault
and report it accordingly.

This patch also updates the description of the synchronous external
aborts on translation table walks.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I623113fc8bf6d5f023aeec7a0640b62a25ef8420
(cherry picked from commit be3db9340c8011d22f06715339b66bcbbd4893bd)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: Disable TTBR0_EL1 during normal kernel execution
Catalin Marinas [Fri, 2 Sep 2016 13:54:03 +0000 (14:54 +0100)]
FROMLIST: arm64: Disable TTBR0_EL1 during normal kernel execution

When the TTBR0 PAN feature is enabled, the kernel entry points need to
disable access to TTBR0_EL1. The PAN status of the interrupted context
is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
Restoring access to TTBR0_PAN is done on exception return if returning
to user or returning to a context where PAN was disabled.

Context switching via switch_mm() must defer the update of TTBR0_EL1
until a return to user or an explicit uaccess_enable() call.

Special care needs to be taken for two cases where TTBR0_EL1 is set
outside the normal kernel context switch operation: EFI run-time
services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
Code has been added to avoid deferred TTBR0_EL1 switching as in
switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
special TTBR0_EL1.

This patch also removes a stale comment on the switch_mm() function.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Id1198cf1cde022fad10a94f95d698fae91d742aa
(cherry picked from commit d26cfd64c973b31f73091c882e07350e14fdd6c9)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1
Catalin Marinas [Fri, 1 Jul 2016 15:53:00 +0000 (16:53 +0100)]
FROMLIST: arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1

This patch adds the uaccess macros/functions to disable access to user
space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
written to TTBR0_EL1 must be a physical address, for simplicity this
patch introduces a reserved_ttbr0 page at a constant offset from
swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
adjusted by the reserved_ttbr0 offset.

Enabling access to user is done by restoring TTBR0_EL1 with the value
from the struct thread_info ttbr0 variable. Interrupts must be disabled
during the uaccess_ttbr0_enable code to ensure the atomicity of the
thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
get_thread_info asm macro from entry.S to assembler.h for reuse in the
uaccess_ttbr0_* macros.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Idf09a870b8612dce23215bce90d88781f0c0c3aa
(cherry picked from commit 940d37234182d2675ab8ab46084840212d735018)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro
Catalin Marinas [Fri, 1 Jul 2016 14:48:55 +0000 (15:48 +0100)]
FROMLIST: arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro

This patch takes the errata workaround code out of cpu_do_switch_mm into
a dedicated post_ttbr0_update_workaround macro which will be reused in a
subsequent patch.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I69f94e4c41046bd52ca9340b72d97bfcf955b586
(cherry picked from commit 4398e6a1644373a4c2f535f4153c8378d0914630)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: Factor out PAN enabling/disabling into separate uaccess_* macros
Catalin Marinas [Fri, 1 Jul 2016 13:58:21 +0000 (14:58 +0100)]
FROMLIST: arm64: Factor out PAN enabling/disabling into separate uaccess_* macros

This patch moves the directly coded alternatives for turning PAN on/off
into separate uaccess_{enable,disable} macros or functions. The asm
macros take a few arguments which will be used in subsequent patches.

Note that any (unlikely) access that the compiler might generate between
uaccess_enable() and uaccess_disable(), other than those explicitly
specified by the user access code, will not be protected by PAN.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Ic3fddd706400c8798f57456c56361d84d234f6ef
(cherry picked from commit a4820644c627b82cbc865f2425bb788c94743b16)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: Handle el1 synchronous instruction aborts cleanly
Laura Abbott [Wed, 10 Aug 2016 01:25:26 +0000 (18:25 -0700)]
UPSTREAM: arm64: Handle el1 synchronous instruction aborts cleanly

Executing from a non-executable area gives an ugly message:

lkdtm: Performing direct entry EXEC_RODATA
lkdtm: attempting ok execution at ffff0000084c0e08
lkdtm: attempting bad execution at ffff000008880700
Bad mode in Synchronous Abort handler detected on CPU2, code 0x8400000e -- IABT (current EL)
CPU: 2 PID: 998 Comm: sh Not tainted 4.7.0-rc2+ #13
Hardware name: linux,dummy-virt (DT)
task: ffff800077e35780 ti: ffff800077970000 task.ti: ffff800077970000
PC is at lkdtm_rodata_do_nothing+0x0/0x8
LR is at execute_location+0x74/0x88

The 'IABT (current EL)' indicates the error but it's a bit cryptic
without knowledge of the ARM ARM. There is also no indication of the
specific address which triggered the fault. The increase in kernel
page permissions makes hitting this case more likely as well.
Handling the case in the vectors gives a much more familiar looking
error message:

lkdtm: Performing direct entry EXEC_RODATA
lkdtm: attempting ok execution at ffff0000084c0840
lkdtm: attempting bad execution at ffff000008880680
Unable to handle kernel paging request at virtual address ffff000008880680
pgd = ffff8000089b2000
[ffff000008880680] *pgd=00000000489b4003, *pud=0000000048904003, *pmd=0000000000000000
Internal error: Oops: 8400000e [#1] PREEMPT SMP
Modules linked in:
CPU: 1 PID: 997 Comm: sh Not tainted 4.7.0-rc1+ #24
Hardware name: linux,dummy-virt (DT)
task: ffff800077f9f080 ti: ffff800008a1c000 task.ti: ffff800008a1c000
PC is at lkdtm_rodata_do_nothing+0x0/0x8
LR is at execute_location+0x74/0x88

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Ifba74589ba2cf05b28335d4fd3e3140ef73668db
(cherry picked from commit 9adeb8e72dbfe976709df01e259ed556ee60e779)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  BACKPORT: arm64: kernel: Save and restore UAO and addr_limit on exception entry
James Morse [Mon, 20 Jun 2016 17:28:01 +0000 (18:28 +0100)]
BACKPORT: arm64: kernel: Save and restore UAO and addr_limit on exception entry

If we take an exception while at EL1, the exception handler inherits
the original context's addr_limit and PSTATE.UAO values. To be consistent
always reset addr_limit and PSTATE.UAO on (re-)entry to EL1. This
prevents accidental re-use of the original context's addr_limit.

Based on a similar patch for arm from Russell King.

Cc: <stable@vger.kernel.org> # 4.6-
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Iab453201c6e08bc6e22500b7c5570dd0fe2d1b74
(cherry picked from commit e19a6ee2460bdd0d0055a6029383422773f9999a)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: include alternative handling in dcache_by_line_op
Andre Przywara [Tue, 28 Jun 2016 17:07:29 +0000 (18:07 +0100)]
UPSTREAM: arm64: include alternative handling in dcache_by_line_op

The newly introduced dcache_by_line_op macro is used in at least
one place at the moment to issue a "dc cvau" instruction,
which is affected by ARM errata 819472, 826319, 827319 and 824069.
Change the macro to allow for alternative patching in there to
protect affected Cortex-A53 cores.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[catalin.marinas@arm.com: indentation fixups]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I450594dc311b09b6b832b707a9abb357608cc6e4
(cherry picked from commit 823066d9edcdfe4cedb06216c2b1f91efaf68a87)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: fix "dc cvau" cache operation on errata-affected core
Andre Przywara [Tue, 28 Jun 2016 17:07:28 +0000 (18:07 +0100)]
UPSTREAM: arm64: fix "dc cvau" cache operation on errata-affected core

The ARM errata 819472, 826319, 827319 and 824069 for affected
Cortex-A53 cores require promoting "dc cvau" instructions to
"dc civac" as well.
Attribute the usage of the instruction in __flush_cache_user_range
to also be covered by our alternative patching efforts.
For that we introduce an assembly macro which both handles the
alternatives and still tags the instructions as USER.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: If5e7933ba32331b2aa28fc5d9e019649452f0f6c
(cherry picked from commit 290622efc76ece22ef76a30bf117755891ab27f6)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: Revert "arm64: alternatives: add enable parameter to conditional asm macros"
Andre Przywara [Tue, 28 Jun 2016 17:07:27 +0000 (18:07 +0100)]
UPSTREAM: Revert "arm64: alternatives: add enable parameter to conditional asm macros"

Commit 77ee306c0aea9 ("arm64: alternatives: add enable parameter to
conditional asm macros") extended the alternative assembly macros.
Unfortunately this does not really work as one would expect, as the
enable parameter in fact correctly protects the alternative section
magic, but not the actual code sequences.
This results in having both the original instruction(s) _and_ the
alternative ones, if enable is false.
Since there is no user of these macros anyway, just revert it.

This reverts commit 77ee306c0aea9a219daec256ad25982944affef8.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I608104891335dfa2dacdb364754ae2658088ddf2
(cherry picked from commit b82bfa4793cd0f8fde49b85e0ad66906682e7447)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: Add new asm macro copy_page
Geoff Levand [Wed, 27 Apr 2016 16:47:10 +0000 (17:47 +0100)]
UPSTREAM: arm64: Add new asm macro copy_page

Kexec and hibernate need to copy pages of memory, but may not have all
of the kernel mapped, and are unable to call copy_page().

Add a simplistic copy_page() macro that can be inlined in these
situations. lib/copy_page.S provides a bigger, better version, but
uses more registers.

Signed-off-by: Geoff Levand <geoff@infradead.org>
[Changed asm label to 9998, added commit message]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: If23a454e211b1f57f8ba1a2a00b44dabf4b82932
(cherry picked from commit 5003dbde45961dd7ab3d8a09ab9ad8bcb604db40)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: kill ESR_LNX_EXEC
Mark Rutland [Tue, 31 May 2016 11:33:03 +0000 (12:33 +0100)]
UPSTREAM: arm64: kill ESR_LNX_EXEC

Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing
instruction aborts from data aborts, but this bit is architecturally
RES0 for instruction aborts, and could be allocated for an arbitrary
purpose in future. Additionally, we hard-code the value in entry.S
without the mnemonic, making the code difficult to understand.

Instead, remove ESR_LNX_EXEC, and distinguish aborts based on the ESR,
which we already pass to the sole user of ESR_LNX_EXEC. A new helper,
is_el0_instruction_abort(), is added to make the logic clear. Any
instruction aborts taken from EL1 will already have been handled by
bad_mode, so we need not handle that case in the helper.

For consistency, the existing permission_fault helper is renamed to
is_permission_fault, and the return type is changed to bool. There
should be no functional changes as the return value was a boolean
expression, and the result is only used in another boolean expression.
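
A minimal sketch of the new helper (assuming the ESR_ELx_EC() extractor and the ESR_ELx_EC_IABT_LOW class code from asm/esr.h):

    static inline bool is_el0_instruction_abort(unsigned int esr)
    {
            return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
    }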

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Iaf66fa5f3b13cf985b11a3b0a40c4333fe9ef833
(cherry picked from commit 541ec870ef31433018d245614254bd9d810a9ac3)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: add macro to extract ESR_ELx.EC
Mark Rutland [Tue, 31 May 2016 11:33:01 +0000 (12:33 +0100)]
UPSTREAM: arm64: add macro to extract ESR_ELx.EC

Several places open-code extraction of the EC field from an ESR_ELx
value, in subtly different ways. This is unfortunate duplication and
variation, and the precise logic used to extract the field is a
distraction.

This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from
an ESR_ELx value in a consistent fashion.

Existing open-coded extractions in core arm64 code are moved over to the
new helper. KVM code is left as-is for the moment.
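
A sketch of what the extractor boils down to; the EC field sits in ESR_ELx bits [31:26], so the helper is a mask and a shift:

    #define ESR_ELx_EC_SHIFT        (26)
    #define ESR_ELx_EC_MASK         (0x3FUL << ESR_ELx_EC_SHIFT)
    #define ESR_ELx_EC(esr)         (((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)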

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Huang Shijie <shijie.huang@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Ib634a4795277d243fce5dd30b139e2ec1465bee9
(cherry picked from commit 275f344bec51e9100bae81f3cc8c6940bbfb24c0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: mm: mark fault_info table const
Mark Rutland [Mon, 13 Jun 2016 16:57:02 +0000 (17:57 +0100)]
UPSTREAM: arm64: mm: mark fault_info table const

Unlike the debug_fault_info table, we never intentionally alter the
fault_info table at runtime, and all derived pointers are treated as
const currently.

Make the table const so that it can be placed in .rodata and protected
from unintentional writes, as we do for the syscall tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I3fb0bb55427835c165cc377d8dc2a3fa9e6e950d
(cherry picked from commit bbb1681ee3653bdcfc6a4ba31902738118311fd4)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: fix dump_instr when PAN and UAO are in use
Mark Rutland [Mon, 13 Jun 2016 10:15:14 +0000 (11:15 +0100)]
UPSTREAM: arm64: fix dump_instr when PAN and UAO are in use

If the kernel is set to show unhandled signals, and a user task does not
handle a SIGILL as a result of an instruction abort, we will attempt to
log the offending instruction with dump_instr before killing the task.

We use dump_instr to log the encoding of the offending userspace
instruction. However, dump_instr is also used to dump instructions from
kernel space, and internally always switches to KERNEL_DS before dumping
the instruction with get_user. When both PAN and UAO are in use, reading
a user instruction via get_user while in KERNEL_DS will result in a
permission fault, which leads to an Oops.

As we have regs corresponding to the context of the original instruction
abort, we can inspect this and only flip to KERNEL_DS if the original
abort was taken from the kernel, avoiding this issue. At the same time,
remove the redundant (and incorrect) comments regarding the order
dump_mem and dump_instr are called in.
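
A sketch of the resulting flow (the __dump_instr() helper name is illustrative): only switch to KERNEL_DS when the original fault came from the kernel.

    static void dump_instr(const char *lvl, struct pt_regs *regs)
    {
            if (!user_mode(regs)) {
                    mm_segment_t fs = get_fs();

                    set_fs(KERNEL_DS);
                    __dump_instr(lvl, regs);
                    set_fs(fs);
            } else {
                    __dump_instr(lvl, regs);
            }
    }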

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org> #4.6+
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Fixes: 57f4959bad0a154a ("arm64: kernel: Add support for User Access Override")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I54c00f3598d227a7e2767b357cb453075dcce7bd
(cherry picked from commit c5cea06be060f38e5400d796e61cfc8c36e52924)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoBACKPORT: arm64: Fold proc-macros.S into assembler.h
Geoff Levand [Wed, 27 Apr 2016 16:47:00 +0000 (17:47 +0100)]
BACKPORT: arm64: Fold proc-macros.S into assembler.h

To allow the assembler macros defined in arch/arm64/mm/proc-macros.S to
be used outside the mm code move the contents of proc-macros.S to
asm/assembler.h.  Also, delete proc-macros.S, and fix up all references
to proc-macros.S.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
[rebased, included dcache_by_line_op]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I09e694442ffd25dcac864216d0369c9727ad0090
(cherry picked from commit 7b7293ae3dbd0a1965bf310b77fed5f9bb37bb93)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: introduce mov_q macro to move a constant into a 64-bit register
Ard Biesheuvel [Mon, 18 Apr 2016 15:09:44 +0000 (17:09 +0200)]
UPSTREAM: arm64: introduce mov_q macro to move a constant into a 64-bit register

Implement a macro mov_q that can be used to move an immediate constant
into a 64-bit register, using between 2 and 4 movz/movk instructions
(depending on the operand).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I7e6b684e46cad5df79e6b8bc28d72b9e37daedd6
(cherry picked from commit 30b5ba5cf333cc650e474eaf2cc1ae91bc7cf89f)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: Implement ptep_set_access_flags() for hardware AF/DBM
Catalin Marinas [Wed, 13 Apr 2016 15:01:22 +0000 (16:01 +0100)]
UPSTREAM: arm64: Implement ptep_set_access_flags() for hardware AF/DBM

When hardware updates of the access and dirty states are enabled, the
default ptep_set_access_flags() implementation based on calling
set_pte_at() directly is potentially racy. This triggers the "racy dirty
state clearing" warning in set_pte_at() because an existing writable PTE
is overridden with a clean entry.

There are two main scenarios for this situation:

1. The CPU getting an access fault does not support hardware updates of
   the access/dirty flags. However, a different agent in the system
   (e.g. SMMU) can do this, therefore overriding a writable entry with a
   clean one could potentially lose the automatically updated dirty
   status

2. A more complex situation is possible when all CPUs support hardware
   AF/DBM:

   a) Initial state: shareable + writable vma and pte_none(pte)
   b) Read fault taken by two threads of the same process on different
      CPUs
   c) CPU0 takes the mmap_sem and proceeds to handling the fault. It
      eventually reaches do_set_pte() which sets a writable + clean pte.
      CPU0 releases the mmap_sem
   d) CPU1 acquires the mmap_sem and proceeds to handle_pte_fault(). The
      pte entry it reads is present, writable and clean and it continues
      to pte_mkyoung()
   e) CPU1 calls ptep_set_access_flags()

   If between (d) and (e) the hardware (another CPU) updates the dirty
   state (clears PTE_RDONLY), CPU1 will override the PTE_RDONLY bit
   marking the entry clean again.

This patch implements an arm64-specific ptep_set_access_flags() function
to perform an atomic update of the PTE flags.
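
A heavily simplified sketch of the idea, using the ptep/entry arguments of ptep_set_access_flags(); the real arm64 code uses an exclusive-load/store loop and masks the incoming entry first:

    pteval_t old, new;

    do {
            old = READ_ONCE(pte_val(*ptep));
            /* fold in the new access flags without discarding a dirty
             * state (PTE_RDONLY cleared) set by hardware after the read */
            new = old | pte_val(entry);
    } while (cmpxchg(&pte_val(*ptep), old, new) != old);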

Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Tested-by: Julien Grall <julien.grall@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 4.3+
[will: reworded comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Id2a0b0d8eb6e7df6325ecb48b88b8401a5dd09e5
(cherry picked from commit 66dbd6e61a526ae7d11a208238ae2c17e5cacb6b)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: choose memstart_addr based on minimum sparsemem section alignment
Ard Biesheuvel [Wed, 30 Mar 2016 12:25:47 +0000 (14:25 +0200)]
UPSTREAM: arm64: choose memstart_addr based on minimum sparsemem section alignment

This redefines ARM64_MEMSTART_ALIGN in terms of the minimal alignment
required by sparsemem vmemmap. This comes down to using 1 GB for all
translation granules if CONFIG_SPARSEMEM_VMEMMAP is enabled.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I05b8bc6ab24f677f263b09d7c31fcce4f21269b1
(cherry picked from commit 06e9bf2fd9b372bc1c757995b6ca1cfab0720acb)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64/mm: ensure memstart_addr remains sufficiently aligned
Ard Biesheuvel [Wed, 30 Mar 2016 12:25:46 +0000 (14:25 +0200)]
UPSTREAM: arm64/mm: ensure memstart_addr remains sufficiently aligned

After choosing memstart_addr to be the highest multiple of
ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
address, we clip the memblocks to the maximum size of the linear region.
Since the kernel may be high up in memory, we take care not to clip the
kernel itself, which means we have to clip some memory from the bottom if
this occurs, to ensure that the distance between the first and the last
usable physical memory address can be covered by the linear region.

However, we fail to update memstart_addr if this clipping from the bottom
occurs, which means that we may still end up with virtual addresses that
wrap into the userland range. So increment memstart_addr as appropriate to
prevent this from happening.
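
The existing selection referred to above looks roughly like the line below (from arch/arm64/mm/init.c); the fix then bumps memstart_addr upwards by however much memory is clipped from the bottom:

    /* highest ARM64_MEMSTART_ALIGN multiple not above the first usable
     * physical address */
    memstart_addr = round_down(memblock_start_of_DRAM(),
                               ARM64_MEMSTART_ALIGN);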

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I72306207cc46a30b780f5e00b9ef23aa8409867e
(cherry picked from commit 2958987f5da2ebcf6a237c5f154d7e3340e60945)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64/kernel: fix incorrect EL0 check in inv_entry macro
Ard Biesheuvel [Fri, 18 Mar 2016 09:58:09 +0000 (10:58 +0100)]
UPSTREAM: arm64/kernel: fix incorrect EL0 check in inv_entry macro

The implementation of macro inv_entry refers to its 'el' argument without
the required leading backslash, which results in an undefined symbol
'el' being passed into the kernel_entry macro rather than the index of
the exception level as intended.

This undefined symbol strangely enough does not result in build failures,
although it is visible in vmlinux:

     $ nm -n vmlinux |head
                      U el
     0000000000000000 A _kernel_flags_le_hi32
     0000000000000000 A _kernel_offset_le_hi32
     0000000000000000 A _kernel_size_le_hi32
     000000000000000a A _kernel_flags_le_lo32
     .....

However, it does result in incorrect code being generated for invalid
exceptions taken from EL0, since the argument check in kernel_entry
assumes EL1 if its argument does not equal '0'.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I406c1207682a4dff3054a019c26fdf1310b08ed1
(cherry picked from commit b660950c60a7278f9d8deb7c32a162031207c758)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: Add workaround for Cavium erratum 27456
Andrew Pinski [Thu, 25 Feb 2016 01:44:57 +0000 (17:44 -0800)]
UPSTREAM: arm64: Add workaround for Cavium erratum 27456

On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI
instructions may cause the icache to become corrupted if it contains
data for a non-current ASID.

This patch implements the workaround (which invalidates the local
icache when switching the mm) by using code patching.

Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I60e6d17926b067a4e022d7b159e239114303a547
(cherry picked from commit 104a0c02e8b1936c049e18a6d4e4ab040fb61213)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: Add macros to read/write system registers
Mark Rutland [Thu, 5 Nov 2015 15:09:17 +0000 (15:09 +0000)]
UPSTREAM: arm64: Add macros to read/write system registers

Rather than crafting custom macros for reading/writing each system
register, provide generic accessors, read_sysreg and write_sysreg, for
this purpose.
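
A sketch of the accessors (kernel-style C wrapping mrs/msr; __stringify() is from linux/stringify.h, and the real macros add a few refinements):

    #define read_sysreg(r) ({                                           \
            u64 __val;                                                  \
            asm volatile("mrs %0, " __stringify(r) : "=r" (__val));     \
            __val;                                                      \
    })

    #define write_sysreg(v, r) do {                                     \
            u64 __val = (u64)(v);                                       \
            asm volatile("msr " __stringify(r) ", %0" : : "r" (__val)); \
    } while (0)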

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Change-Id: I1d6cf948bc6660dfd096ff5a18eba682941098c1
(cherry picked from commit 3600c2fdc09a43a30909743569e35a29121602ed)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64/efi: refactor EFI init and runtime code for reuse by 32-bit ARM
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:19 +0000 (13:28 +0100)]
UPSTREAM: arm64/efi: refactor EFI init and runtime code for reuse by 32-bit ARM

This refactors the EFI init and runtime code that will be shared
between arm64 and ARM so that it can be built for both archs.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Ieee70bbe117170d2054a9c82c4f1a8143b7e302b
(cherry picked from commit f7d924894265794f447ea799dd853400749b5a22)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64/efi: split off EFI init and runtime code for reuse by 32-bit ARM
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:18 +0000 (13:28 +0100)]
UPSTREAM: arm64/efi: split off EFI init and runtime code for reuse by 32-bit ARM

This splits off the early EFI init and runtime code that
- discovers the EFI params and the memory map from the FDT, and installs
  the memblocks and config tables.
- prepares and installs the EFI page tables so that UEFI Runtime Services
  can be invoked at the virtual address installed by the stub.

This will allow it to be reused for 32-bit ARM.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I143e4b38a5426f70027eff6cc5f732ac370ae69d
(cherry picked from commit e5bc22a42e4d46cc203fdfb6d2c76202b08666a0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:17 +0000 (13:28 +0100)]
UPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP

Change the EFI memory reservation logic to use memblock_mark_nomap()
rather than memblock_reserve() to mark UEFI reserved regions as
occupied. In addition to reserving them against allocations done by
memblock, this will also prevent them from being covered by the linear
mapping.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Ia3ce78f40f8d41a9afdd42238fe9cbfd81bbff08
(cherry picked from commit 4dffbfc48d65e5d8157a634fd670065d237a9377)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoBACKPORT: arm64: only consider memblocks with NOMAP cleared for linear mapping
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:16 +0000 (13:28 +0100)]
BACKPORT: arm64: only consider memblocks with NOMAP cleared for linear mapping

Take the new memblock attribute MEMBLOCK_NOMAP into account when
deciding whether a certain region is or should be covered by the
kernel direct mapping.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Id7346a09bb3aee5e9a5ef8812251f80cf8265532
(cherry picked from commit 68709f45385aeddb0ca96a060c0c8259944f321b)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: mm/memblock: add MEMBLOCK_NOMAP attribute to memblock memory table
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:15 +0000 (13:28 +0100)]
UPSTREAM: mm/memblock: add MEMBLOCK_NOMAP attribute to memblock memory table

This introduces the MEMBLOCK_NOMAP attribute and the required plumbing
to make it usable as an indicator that some parts of normal memory
should not be covered by the kernel direct mapping. It is up to the
arch to actually honor the attribute when laying out this mapping,
but the memblock code itself is modified to disregard these regions
for allocations and other general use.
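
A sketch of how the attribute is meant to be used (the base/size/addr names are illustrative; memblock_mark_nomap() and memblock_is_map_memory() are the accessors this series relies on):

    /* keep a firmware-reserved range out of the linear mapping */
    memblock_mark_nomap(base, size);

    /* arch code then checks the flag before creating a mapping */
    if (!memblock_is_map_memory(addr))
            return;         /* region carries MEMBLOCK_NOMAP */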

Cc: linux-mm@kvack.org
Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I55cd3abdf514ac54c071fa0037d8dac73bda798d
(cherry picked from commit bf3d3cc580f9960883ebf9ea05868f336d9491c2)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoANDROID: dm: android-verity: Remove fec_header location constraint
Badhri Jagan Sridharan [Tue, 27 Sep 2016 20:48:29 +0000 (13:48 -0700)]
ANDROID: dm: android-verity: Remove fec_header location constraint

This CL removes the requirement that the fec_header be located right
after the ECC data.

(Cherry-picked from https://android-review.googlesource.com/#/c/280401)

Bug: 28865197
Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: Ie04c8cf2dd755f54d02dbdc4e734a13d6f6507b5

8 years agoBACKPORT: audit: consistently record PIDs with task_tgid_nr()
Paul Moore [Tue, 30 Aug 2016 21:19:13 +0000 (17:19 -0400)]
BACKPORT: audit: consistently record PIDs with task_tgid_nr()

Unfortunately we record PIDs in audit records using a variety of
methods despite the correct way being the use of task_tgid_nr().
This patch converts all of these callers, except for the case of
AUDIT_SET in audit_receive_msg() (see the comment in the code).
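
Illustrative only (call sites vary): after the conversion a record's PID field is filled with task_tgid_nr().

    audit_log_format(ab, " pid=%d", task_tgid_nr(current));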

Reported-by: Jeff Vander Stoep <jeffv@google.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Bug: 28952093

(cherry picked from commit fa2bea2f5cca5b8d4a3e5520d2e8c0ede67ac108)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: If6645f9de8bc58ed9755f28dc6af5fbf08d72a00

8 years agoandroid-base.cfg: Enable kernel ASLR
Jeff Vander Stoep [Fri, 23 Sep 2016 17:44:37 +0000 (10:44 -0700)]
android-base.cfg: Enable kernel ASLR

Bug: 30369029
Change-Id: I0c1c932255866f308d67de1df2ad52c9c19c4799

8 years agoUPSTREAM: vmlinux.lds.h: allow arch specific handling of ro_after_init data section
Heiko Carstens [Tue, 7 Jun 2016 10:20:51 +0000 (12:20 +0200)]
UPSTREAM: vmlinux.lds.h: allow arch specific handling of ro_after_init data section

commit c74ba8b3480d ("arch: Introduce post-init read-only memory")
introduced the __ro_after_init attribute, which allows variables to be
added to the ro_after_init data section.

This new section was added to rodata, even though it contains writable
data. This in turn causes problems on architectures which mark the
page table entries that point to rodata read-only very early.

This patch allows architectures to implement their own handling of the
.data..ro_after_init section.
Usually that would be:
- mark the rodata section read-only very early
- mark the ro_after_init section read-only within mark_rodata_ro
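
Roughly, the hook is a section macro that an architecture may pre-define before the generic linker-script header uses it:

    /* include/asm-generic/vmlinux.lds.h */
    #ifndef RO_AFTER_INIT_DATA
    #define RO_AFTER_INIT_DATA      *(.data..ro_after_init)
    #endif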

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Bug: 31660652
Change-Id: If68cb4d86f88678c9bac8c47072775ab85ef5770
(cherry picked from commit 32fb2fc5c357fb99616bbe100dbcb27bc7f5d045)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: ARM/vdso: Mark the vDSO code read-only after init
David Brown [Wed, 17 Feb 2016 22:41:18 +0000 (14:41 -0800)]
UPSTREAM: ARM/vdso: Mark the vDSO code read-only after init

Although the ARM vDSO is cleanly separated by code/data with the code
being read-only in userspace mappings, the code page is still writable
from the kernel.

There have been exploits (such as http://itszn.com/blog/?p=21) that
take advantage of this on x86 to go from a bad kernel write to full
root.

Prevent this specific exploit class on ARM as well by putting the vDSO
code page in post-init read-only memory as well.

Before:
vdso: 1 text pages at base 80927000
root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
---[ Modules ]---
---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80600000           5M     ro x  SHD
0x80600000-0x80800000           2M     ro NX SHD
0x80800000-0xbe000000         984M     RW NX SHD

After:
vdso: 1 text pages at base 8072b000
root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
---[ Modules ]---
---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80600000           5M     ro x  SHD
0x80600000-0x80800000           2M     ro NX SHD
0x80800000-0xbe000000         984M     RW NX SHD

Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.

Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1455748879-21872-8-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I8d3cb7707644343aa907b2d584312ccdad63e270
(cherry picked from commit 11bf9b865898961cee60a41c483c9f27ec76e12e)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: x86/vdso: Mark the vDSO code read-only after init
Kees Cook [Wed, 17 Feb 2016 22:41:17 +0000 (14:41 -0800)]
UPSTREAM: x86/vdso: Mark the vDSO code read-only after init

The vDSO does not need to be writable after __init, so mark it as
__ro_after_init. The result kills the exploit method of writing to the
vDSO from kernel space resulting in userspace executing the modified code,
as shown here to bypass SMEP restrictions: http://itszn.com/blog/?p=21

The memory map (with added vDSO address reporting) shows the vDSO moving
into read-only memory:

Before:
[    0.143067] vDSO @ ffffffff82004000
[    0.143551] vDSO @ ffffffff82006000
---[ High Kernel Mapping ]---
0xffffffff80000000-0xffffffff81000000      16M                         pmd
0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
0xffffffff81e00000-0xffffffff81e05000      20K   ro             GLB NX pte
0xffffffff81e05000-0xffffffff82000000    2028K   ro                 NX pte
0xffffffff82000000-0xffffffff8214f000    1340K   RW             GLB NX pte
0xffffffff8214f000-0xffffffff82281000    1224K   RW                 NX pte
0xffffffff82281000-0xffffffff82400000    1532K   RW             GLB NX pte
0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
0xffffffff83200000-0xffffffffc0000000     974M                         pmd

After:
[    0.145062] vDSO @ ffffffff81da1000
[    0.146057] vDSO @ ffffffff81da4000
---[ High Kernel Mapping ]---
0xffffffff80000000-0xffffffff81000000      16M                         pmd
0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
0xffffffff81e00000-0xffffffff81e0b000      44K   ro             GLB NX pte
0xffffffff81e0b000-0xffffffff82000000    2004K   ro                 NX pte
0xffffffff82000000-0xffffffff8214c000    1328K   RW             GLB NX pte
0xffffffff8214c000-0xffffffff8227e000    1224K   RW                 NX pte
0xffffffff8227e000-0xffffffff82400000    1544K   RW             GLB NX pte
0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
0xffffffff83200000-0xffffffffc0000000     974M                         pmd

Based on work by PaX Team and Brad Spengler.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Andy Lutomirski <luto@kernel.org>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-7-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: Iafbf7314a1106c297ea883031ee96c4f53c04a2b
(cherry picked from commit 018ef8dcf3de5f62e2cc1a9273cc27e1c6ba8de5)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: lkdtm: Verify that '__ro_after_init' works correctly
Kees Cook [Wed, 17 Feb 2016 22:41:16 +0000 (14:41 -0800)]
UPSTREAM: lkdtm: Verify that '__ro_after_init' works correctly

The new __ro_after_init section should be writable before init, but
not after. Validate that it gets updated at init and can't be written
to afterwards.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-6-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I75301d0497fde49a02f13b4e75300111ddadda9d
(cherry picked from commit 7cca071ccbd2a293ea69168ace6abbcdce53098e)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arch: Introduce post-init read-only memory
Kees Cook [Wed, 17 Feb 2016 22:41:15 +0000 (14:41 -0800)]
UPSTREAM: arch: Introduce post-init read-only memory

One of the easiest ways to protect the kernel from attack is to reduce
the internal attack surface exposed when a "write" flaw is available. By
making as much of the kernel read-only as possible, we reduce the
attack surface.

Many things are written to only during __init, and never changed
again. These cannot be made "const" since the compiler will do the wrong
thing (we do actually need to write to them). Instead, move these items
into a memory region that will be made read-only during mark_rodata_ro()
which happens after all kernel __init code has finished.

This introduces __ro_after_init as a way to mark such memory, and adds
some documentation about the existing __read_mostly marking.

This improves the security of the Linux kernel by marking formerly
read-write memory regions as read-only on a fully booted up system.

Based on work by PaX Team and Brad Spengler.
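
The marker itself is a section attribute; the variable below is a hypothetical usage example, not taken from the patch:

    /* include/linux/cache.h */
    #define __ro_after_init __attribute__((__section__(".data..ro_after_init")))

    /* written during __init, read-only once mark_rodata_ro() has run */
    static unsigned long bucket_count __ro_after_init;  /* hypothetical */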

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-5-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I640f6d858d9770a5e480d12a1c716adf8842feb0
(cherry picked from commit c74ba8b3480da6ddaea17df2263ec09b869ac496)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option
Kees Cook [Wed, 17 Feb 2016 22:41:14 +0000 (14:41 -0800)]
UPSTREAM: x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option

This removes the CONFIG_DEBUG_RODATA option and makes it always enabled.

This simplifies the code and also makes it clearer that read-only mapped
memory is just as fundamental a security feature in kernel-space as it is
in user-space.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-4-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I3e79c7c4ead79a81c1445f1b3dd28003517faf18
(cherry picked from commit 9ccaf77cf05915f51231d158abfd5448aedde758)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kerne...
Kees Cook [Wed, 17 Feb 2016 22:41:13 +0000 (14:41 -0800)]
UPSTREAM: mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings

It may be useful to debug writes to the readonly sections of memory,
so provide a cmdline "rodata=off" to allow for this. This can be
expanded in the future to support "log" and "write" modes, but that
will need to be architecture-specific.

This also makes KDB software breakpoints more usable, as read-only
mappings can now be disabled on any kernel.
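
A sketch of the plumbing this adds, close to (but not necessarily identical with) the init/main.c hunk:

    static bool rodata_enabled = true;

    static int __init set_debug_rodata(char *str)
    {
            return strtobool(str, &rodata_enabled);
    }
    __setup("rodata=", set_debug_rodata);

    static void mark_readonly(void)
    {
            if (rodata_enabled)
                    mark_rodata_ro();
            else
                    pr_info("Kernel memory protection disabled.\n");
    }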

Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-3-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I67b818ca390afdd42ab1c27cb4f8ac64bbdb3b65
(cherry picked from commit d2aa1acad22f1bdd0cfa67b3861800e392254454)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: asm-generic: Consolidate mark_rodata_ro()
Kees Cook [Wed, 17 Feb 2016 22:41:12 +0000 (14:41 -0800)]
UPSTREAM: asm-generic: Consolidate mark_rodata_ro()

Instead of defining mark_rodata_ro() in each architecture, consolidate it.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Gross <agross@codeaurora.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ashok Kumar <ashoks@broadcom.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Brown <david.brown@linaro.org>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Helge Deller <deller@gmx.de>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-parisc@vger.kernel.org
Link: http://lkml.kernel.org/r/1455748879-21872-2-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: Iec0c44b3f5d7948954da93fba6cb57888a2709de
(cherry picked from commit e267d97b83d9cecc16c54825f9f3ac7f72dc1e1e)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years agoUPSTREAM: arm64: spinlock: fix spin_unlock_wait for LSE atomics
Will Deacon [Wed, 8 Jun 2016 14:10:57 +0000 (15:10 +0100)]
UPSTREAM: arm64: spinlock: fix spin_unlock_wait for LSE atomics

Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under
the premise that the LSE atomics (in particular, the LDADDA instruction)
are indivisible.

Unfortunately, these instructions are only indivisible when used with the
-AL (full ordering) suffix and, consequently, the same issue can
theoretically be observed with LSE atomics, where a later (in program
order) load can be speculated before the write portion of the atomic
operation.

This patch fixes the issue by performing a CAS of the lock once we've
established that it's unlocked, in much the same way as the LL/SC code.

Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 3a5facd09da848193f5bcb0dea098a298bc1a29d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icedaa4c508784bf43d0b5787586480fd668ccc49

8 years agoUPSTREAM: arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE
Mark Rutland [Wed, 24 Aug 2016 17:02:08 +0000 (18:02 +0100)]
UPSTREAM: arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE

When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
mapping entries may still live in TLBs, and we risk violating
Break-Before-Make requirements, leading to TLB conflicts and/or other issues.

We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
this we are subject to issues relating to the Break-Before-Make violation.

Avoid these issues by invalidating the TLBs before the new mappings can be
used by the hardware.

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit fd363bd417ddb6103564c69cfcbd92d9a7877431)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I6c23ce55cdd8b66587b6787b8f28df8535e39f24