GitHub / exynos8895/android_kernel_samsung_universal8895.git
8 years ago  FROMLIST: arm64: xen: Enable user access before a privcmd hvc call
Catalin Marinas [Tue, 5 Jul 2016 11:25:15 +0000 (12:25 +0100)]
FROMLIST: arm64: xen: Enable user access before a privcmd hvc call

Privcmd calls are issued by userspace. The kernel needs to enable
access to TTBR0_EL1 as the hypervisor would issue stage 1 translations
to user memory via AT instructions. Since AT instructions are not
affected by the PAN bit (ARMv8.1), we only need the explicit
uaccess_enable/disable if the TTBR0 PAN option is enabled.

Reviewed-by: Julien Grall <julien.grall@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I927f14076ba94c83e609b19f46dd373287e11fc4
(cherry picked from commit 8cc1f33d2c9f206b6505bedba41aed2b33c203c0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
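
A minimal C sketch of the pattern described above, assuming illustrative
helper names (uaccess_ttbr0_enable/disable and arch_privcmd_hvc are not
necessarily the exact symbols in the patch):

/*
 * Sketch only: bracket the privcmd hypercall with explicit uaccess
 * enable/disable so the hypervisor's AT-based stage 1 walks of user
 * memory see a valid TTBR0_EL1.
 */
long privcmd_call(unsigned int call, unsigned long a1, unsigned long a2,
		  unsigned long a3, unsigned long a4, unsigned long a5)
{
	long ret;

	uaccess_ttbr0_enable();		/* no-op unless TTBR0 PAN is enabled */
	ret = arch_privcmd_hvc(call, a1, a2, a3, a4, a5);
	uaccess_ttbr0_disable();

	return ret;
}
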
8 years ago  FROMLIST: arm64: Handle faults caused by inadvertent user access with PAN enabled
Catalin Marinas [Fri, 1 Jul 2016 17:22:39 +0000 (18:22 +0100)]
FROMLIST: arm64: Handle faults caused by inadvertent user access with PAN enabled

When TTBR0_EL1 is set to the reserved page, an erroneous kernel access
to user space would generate a translation fault. This patch adds the
checks for the software-set PSR_PAN_BIT to emulate a permission fault
and report it accordingly.

This patch also updates the description of the synchronous external
aborts on translation table walks.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I623113fc8bf6d5f023aeec7a0640b62a25ef8420
(cherry picked from commit be3db9340c8011d22f06715339b66bcbbd4893bd)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
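
A hedged C sketch of the check described above; PSR_PAN_BIT, TASK_SIZE and
user_mode() are the mainline arm64 definitions, while the predicate name and
exact condition are illustrative:

/* Treat a kernel access to a user address with the software PAN bit set
 * in the saved pstate as a permission fault rather than a plain
 * translation fault. */
static bool is_sw_pan_fault(struct pt_regs *regs, unsigned long addr)
{
	return IS_ENABLED(CONFIG_ARM64_SW_TTBR0_PAN) &&
	       !user_mode(regs) &&			/* taken from the kernel */
	       addr < TASK_SIZE &&			/* on a user address */
	       (regs->pstate & PSR_PAN_BIT);		/* with uaccess disabled */
}
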
8 years ago  FROMLIST: arm64: Disable TTBR0_EL1 during normal kernel execution
Catalin Marinas [Fri, 2 Sep 2016 13:54:03 +0000 (14:54 +0100)]
FROMLIST: arm64: Disable TTBR0_EL1 during normal kernel execution

When the TTBR0 PAN feature is enabled, the kernel entry points need to
disable access to TTBR0_EL1. The PAN status of the interrupted context
is stored as part of the saved pstate, reusing the PSR_PAN_BIT (22).
Restoring access to TTBR0_EL1 is done on exception return if returning
to user or returning to a context where PAN was disabled.

Context switching via switch_mm() must defer the update of TTBR0_EL1
until a return to user or an explicit uaccess_enable() call.

Special care needs to be taken for two cases where TTBR0_EL1 is set
outside the normal kernel context switch operation: EFI run-time
services (via efi_set_pgd) and CPU suspend (via cpu_(un)install_idmap).
Code has been added to avoid deferred TTBR0_EL1 switching as in
switch_mm() and restore the reserved TTBR0_EL1 when uninstalling the
special TTBR0_EL1.

This patch also removes a stale comment on the switch_mm() function.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Id1198cf1cde022fad10a94f95d698fae91d742aa
(cherry picked from commit d26cfd64c973b31f73091c882e07350e14fdd6c9)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1
Catalin Marinas [Fri, 1 Jul 2016 15:53:00 +0000 (16:53 +0100)]
FROMLIST: arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1

This patch adds the uaccess macros/functions to disable access to user
space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
written to TTBR0_EL1 must be a physical address, for simplicity this
patch introduces a reserved_ttbr0 page at a constant offset from
swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
adjusted by the reserved_ttbr0 offset.

Enabling access to user is done by restoring TTBR0_EL1 with the value
from the struct thread_info ttbr0 variable. Interrupts must be disabled
during the uaccess_ttbr0_enable code to ensure the atomicity of the
thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
get_thread_info asm macro from entry.S to assembler.h for reuse in the
uaccess_ttbr0_* macros.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Idf09a870b8612dce23215bce90d88781f0c0c3aa
(cherry picked from commit 940d37234182d2675ab8ab46084840212d735018)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
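
A hedged C rendering of the mechanism; the real patch implements this with
assembly macros and alternatives, and RESERVED_TTBR0_OFFSET is an assumed
name for the constant offset mentioned above:

static inline void uaccess_ttbr0_disable(void)
{
	unsigned long ttbr;

	/* reserved_ttbr0 lives at a fixed offset from swapper_pg_dir (TTBR1) */
	ttbr = read_sysreg(ttbr1_el1) + RESERVED_TTBR0_OFFSET;
	write_sysreg(ttbr, ttbr0_el1);
	isb();
}

static inline void uaccess_ttbr0_enable(void)
{
	unsigned long flags;

	/* the thread_info.ttbr0 read and the TTBR0_EL1 write must be atomic */
	local_irq_save(flags);
	write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
	isb();
	local_irq_restore(flags);
}
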
8 years ago  FROMLIST: arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm...
Catalin Marinas [Fri, 1 Jul 2016 14:48:55 +0000 (15:48 +0100)]
FROMLIST: arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro

This patch takes the errata workaround code out of cpu_do_switch_mm into
a dedicated post_ttbr0_update_workaround macro which will be reused in a
subsequent patch.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I69f94e4c41046bd52ca9340b72d97bfcf955b586
(cherry picked from commit 4398e6a1644373a4c2f535f4153c8378d0914630)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  FROMLIST: arm64: Factor out PAN enabling/disabling into separate uaccess_* macros
Catalin Marinas [Fri, 1 Jul 2016 13:58:21 +0000 (14:58 +0100)]
FROMLIST: arm64: Factor out PAN enabling/disabling into separate uaccess_* macros

This patch moves the directly coded alternatives for turning PAN on/off
into separate uaccess_{enable,disable} macros or functions. The asm
macros take a few arguments which will be used in subsequent patches.

Note that any (unlikely) access that the compiler might generate between
uaccess_enable() and uaccess_disable(), other than those explicitly
specified by the user access code, will not be protected by PAN.

Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Ic3fddd706400c8798f57456c56361d84d234f6ef
(cherry picked from commit a4820644c627b82cbc865f2425bb788c94743b16)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
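
A hedged sketch of the hardware-PAN variant of these helpers; ALTERNATIVE,
SET_PSTATE_PAN and ARM64_HAS_PAN are real mainline arm64 symbols, but treat
the exact form as illustrative:

static inline void uaccess_enable(void)
{
	/* clear PSTATE.PAN so the kernel may touch user memory */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(0), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));
}

static inline void uaccess_disable(void)
{
	/* set PSTATE.PAN again once the user access is done */
	asm(ALTERNATIVE("nop", SET_PSTATE_PAN(1), ARM64_HAS_PAN,
			CONFIG_ARM64_PAN));
}
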
8 years ago  UPSTREAM: arm64: Handle el1 synchronous instruction aborts cleanly
Laura Abbott [Wed, 10 Aug 2016 01:25:26 +0000 (18:25 -0700)]
UPSTREAM: arm64: Handle el1 synchronous instruction aborts cleanly

Executing from a non-executable area gives an ugly message:

lkdtm: Performing direct entry EXEC_RODATA
lkdtm: attempting ok execution at ffff0000084c0e08
lkdtm: attempting bad execution at ffff000008880700
Bad mode in Synchronous Abort handler detected on CPU2, code 0x8400000e -- IABT (current EL)
CPU: 2 PID: 998 Comm: sh Not tainted 4.7.0-rc2+ #13
Hardware name: linux,dummy-virt (DT)
task: ffff800077e35780 ti: ffff800077970000 task.ti: ffff800077970000
PC is at lkdtm_rodata_do_nothing+0x0/0x8
LR is at execute_location+0x74/0x88

The 'IABT (current EL)' indicates the error but it's a bit cryptic
without knowledge of the ARM ARM. There is also no indication of the
specific address which triggered the fault. The increase in kernel
page permissions makes hitting this case more likely as well.
Handling the case in the vectors gives a much more familiar looking
error message:

lkdtm: Performing direct entry EXEC_RODATA
lkdtm: attempting ok execution at ffff0000084c0840
lkdtm: attempting bad execution at ffff000008880680
Unable to handle kernel paging request at virtual address ffff000008880680
pgd = ffff8000089b2000
[ffff000008880680] *pgd=00000000489b4003, *pud=0000000048904003, *pmd=0000000000000000
Internal error: Oops: 8400000e [#1] PREEMPT SMP
Modules linked in:
CPU: 1 PID: 997 Comm: sh Not tainted 4.7.0-rc1+ #24
Hardware name: linux,dummy-virt (DT)
task: ffff800077f9f080 ti: ffff800008a1c000 task.ti: ffff800008a1c000
PC is at lkdtm_rodata_do_nothing+0x0/0x8
LR is at execute_location+0x74/0x88

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Ifba74589ba2cf05b28335d4fd3e3140ef73668db
(cherry picked from commit 9adeb8e72dbfe976709df01e259ed556ee60e779)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  BACKPORT: arm64: kernel: Save and restore UAO and addr_limit on exception entry
James Morse [Mon, 20 Jun 2016 17:28:01 +0000 (18:28 +0100)]
BACKPORT: arm64: kernel: Save and restore UAO and addr_limit on exception entry

If we take an exception while at EL1, the exception handler inherits
the original context's addr_limit and PSTATE.UAO values. To be consistent,
always reset addr_limit and PSTATE.UAO on (re-)entry to EL1. This
prevents accidental re-use of the original context's addr_limit.

Based on a similar patch for arm from Russell King.

Cc: <stable@vger.kernel.org> # 4.6-
Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Iab453201c6e08bc6e22500b7c5570dd0fe2d1b74
(cherry picked from commit e19a6ee2460bdd0d0055a6029383422773f9999a)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: include alternative handling in dcache_by_line_op
Andre Przywara [Tue, 28 Jun 2016 17:07:29 +0000 (18:07 +0100)]
UPSTREAM: arm64: include alternative handling in dcache_by_line_op

The newly introduced dcache_by_line_op macro is used on at least
one occasion at the moment to issue a "dc cvau" instruction,
which is affected by ARM errata 819472, 826319, 827319 and 824069.
Change the macro to allow for alternative patching in there to
protect affected Cortex-A53 cores.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
[catalin.marinas@arm.com: indentation fixups]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I450594dc311b09b6b832b707a9abb357608cc6e4
(cherry picked from commit 823066d9edcdfe4cedb06216c2b1f91efaf68a87)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: fix "dc cvau" cache operation on errata-affected core
Andre Przywara [Tue, 28 Jun 2016 17:07:28 +0000 (18:07 +0100)]
UPSTREAM: arm64: fix "dc cvau" cache operation on errata-affected core

The ARM errata 819472, 826319, 827319 and 824069 for affected
Cortex-A53 cores demand that "dc cvau" instructions be promoted to
"dc civac" as well.
Attribute the usage of the instruction in __flush_cache_user_range
to also be covered by our alternative patching efforts.
For that we introduce an assembly macro which both deals with
alternatives while still tagging the instructions as USER.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: If5e7933ba32331b2aa28fc5d9e019649452f0f6c
(cherry picked from commit 290622efc76ece22ef76a30bf117755891ab27f6)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: Revert "arm64: alternatives: add enable parameter to conditional asm macros"
Andre Przywara [Tue, 28 Jun 2016 17:07:27 +0000 (18:07 +0100)]
UPSTREAM: Revert "arm64: alternatives: add enable parameter to conditional asm macros"

Commit 77ee306c0aea9 ("arm64: alternatives: add enable parameter to
conditional asm macros") extended the alternative assembly macros.
Unfortunately this does not really work as one would expect, as the
enable parameter in fact correctly protects the alternative section
magic, but not the actual code sequences.
This results in having both the original instruction(s) _and_ the
alternative ones if enable is false.
Since there is no user of these macros anyway, just revert it.

This reverts commit 77ee306c0aea9a219daec256ad25982944affef8.

Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I608104891335dfa2dacdb364754ae2658088ddf2
(cherry picked from commit b82bfa4793cd0f8fde49b85e0ad66906682e7447)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: Add new asm macro copy_page
Geoff Levand [Wed, 27 Apr 2016 16:47:10 +0000 (17:47 +0100)]
UPSTREAM: arm64: Add new asm macro copy_page

Kexec and hibernate need to copy pages of memory, but may not have all
of the kernel mapped, and are unable to call copy_page().

Add a simplistic copy_page() macro that can be inlined in these
situations. lib/copy_page.S provides a bigger, better version, but
uses more registers.

Signed-off-by: Geoff Levand <geoff@infradead.org>
[Changed asm label to 9998, added commit message]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: If23a454e211b1f57f8ba1a2a00b44dabf4b82932
(cherry picked from commit 5003dbde45961dd7ab3d8a09ab9ad8bcb604db40)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: kill ESR_LNX_EXEC
Mark Rutland [Tue, 31 May 2016 11:33:03 +0000 (12:33 +0100)]
UPSTREAM: arm64: kill ESR_LNX_EXEC

Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing
instruction aborts from data aborts, but this bit is architecturally
RES0 for instruction aborts, and could be allocated for an arbitrary
purpose in future. Additionally, we hard-code the value in entry.S
without the mnemonic, making the code difficult to understand.

Instead, remove ESR_LNX_EXEC, and distinguish aborts based on the esr,
which we already pass to the sole use of ESR_LNX_EXEC. A new helper,
is_el0_instruction_abort() is added to make the logic clear. Any
instruction aborts taken from EL1 will already have been handled by
bad_mode, so we need not handle that case in the helper.

For consistency, the existing permission_fault helper is renamed to
is_permission_fault, and the return type is changed to bool. There
should be no functional changes as the return value was a boolean
expression, and the result is only used in another boolean expression.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: Huang Shijie <shijie.huang@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Iaf66fa5f3b13cf985b11a3b0a40c4333fe9ef833
(cherry picked from commit 541ec870ef31433018d245614254bd9d810a9ac3)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
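
A hedged sketch of the helper named above, using the ESR_ELx_* constants from
mainline asm/esr.h:

static inline bool is_el0_instruction_abort(unsigned int esr)
{
	/* EC 0b100000: instruction abort taken from a lower exception level */
	return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
}
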
8 years ago  UPSTREAM: arm64: add macro to extract ESR_ELx.EC
Mark Rutland [Tue, 31 May 2016 11:33:01 +0000 (12:33 +0100)]
UPSTREAM: arm64: add macro to extract ESR_ELx.EC

Several places open-code extraction of the EC field from an ESR_ELx
value, in subtly different ways. This is unfortunate duplication and
variation, and the precise logic used to extract the field is a
distraction.

This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from
an ESR_ELx value in a consistent fashion.

Existing open-coded extractions in core arm64 code are moved over to the
new helper. KVM code is left as-is for the moment.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Huang Shijie <shijie.huang@arm.com>
Cc: Dave P Martin <dave.martin@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: Ib634a4795277d243fce5dd30b139e2ec1465bee9
(cherry picked from commit 275f344bec51e9100bae81f3cc8c6940bbfb24c0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
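
A hedged sketch of the macro; the shift and mask correspond to the
architectural ESR_ELx.EC field in bits [31:26]:

#define ESR_ELx_EC_SHIFT	(26)
#define ESR_ELx_EC_MASK		(UL(0x3F) << ESR_ELx_EC_SHIFT)
#define ESR_ELx_EC(esr)		(((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)
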
8 years ago  UPSTREAM: arm64: mm: mark fault_info table const
Mark Rutland [Mon, 13 Jun 2016 16:57:02 +0000 (17:57 +0100)]
UPSTREAM: arm64: mm: mark fault_info table const

Unlike the debug_fault_info table, we never intentionally alter the
fault_info table at runtime, and all derived pointers are treated as
const currently.

Make the table const so that it can be placed in .rodata and protected
from unintentional writes, as we do for the syscall tables.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I3fb0bb55427835c165cc377d8dc2a3fa9e6e950d
(cherry picked from commit bbb1681ee3653bdcfc6a4ba31902738118311fd4)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: fix dump_instr when PAN and UAO are in use
Mark Rutland [Mon, 13 Jun 2016 10:15:14 +0000 (11:15 +0100)]
UPSTREAM: arm64: fix dump_instr when PAN and UAO are in use

If the kernel is set to show unhandled signals, and a user task does not
handle a SIGILL as a result of an instruction abort, we will attempt to
log the offending instruction with dump_instr before killing the task.

We use dump_instr to log the encoding of the offending userspace
instruction. However, dump_instr is also used to dump instructions from
kernel space, and internally always switches to KERNEL_DS before dumping
the instruction with get_user. When both PAN and UAO are in use, reading
a user instruction via get_user while in KERNEL_DS will result in a
permission fault, which leads to an Oops.

As we have regs corresponding to the context of the original instruction
abort, we can inspect this and only flip to KERNEL_DS if the original
abort was taken from the kernel, avoiding this issue. At the same time,
remove the redundant (and incorrect) comments regarding the order
dump_mem and dump_instr are called in.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org> #4.6+
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
Fixes: 57f4959bad0a154a ("arm64: kernel: Add support for User Access Override")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I54c00f3598d227a7e2767b357cb453075dcce7bd
(cherry picked from commit c5cea06be060f38e5400d796e61cfc8c36e52924)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
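
A hedged sketch of the shape of the fix: only switch to KERNEL_DS when the
original abort was taken from the kernel. __dump_instr stands for the part
that actually reads and prints the opcodes:

static void dump_instr(const char *lvl, struct pt_regs *regs)
{
	if (!user_mode(regs)) {
		mm_segment_t fs = get_fs();

		set_fs(KERNEL_DS);		/* kernel context: read via KERNEL_DS */
		__dump_instr(lvl, regs);
		set_fs(fs);
	} else {
		__dump_instr(lvl, regs);	/* user context: keep USER_DS so PAN/UAO are honoured */
	}
}
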
8 years ago  BACKPORT: arm64: Fold proc-macros.S into assembler.h
Geoff Levand [Wed, 27 Apr 2016 16:47:00 +0000 (17:47 +0100)]
BACKPORT: arm64: Fold proc-macros.S into assembler.h

To allow the assembler macros defined in arch/arm64/mm/proc-macros.S to
be used outside the mm code move the contents of proc-macros.S to
asm/assembler.h.  Also, delete proc-macros.S, and fix up all references
to proc-macros.S.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Acked-by: Pavel Machek <pavel@ucw.cz>
[rebased, included dcache_by_line_op]
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I09e694442ffd25dcac864216d0369c9727ad0090
(cherry picked from commit 7b7293ae3dbd0a1965bf310b77fed5f9bb37bb93)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: introduce mov_q macro to move a constant into a 64-bit register
Ard Biesheuvel [Mon, 18 Apr 2016 15:09:44 +0000 (17:09 +0200)]
UPSTREAM: arm64: introduce mov_q macro to move a constant into a 64-bit register

Implement a macro mov_q that can be used to move an immediate constant
into a 64-bit register, using between 2 and 4 movz/movk instructions
(depending on the operand).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I7e6b684e46cad5df79e6b8bc28d72b9e37daedd6
(cherry picked from commit 30b5ba5cf333cc650e474eaf2cc1ae91bc7cf89f)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: Implement ptep_set_access_flags() for hardware AF/DBM
Catalin Marinas [Wed, 13 Apr 2016 15:01:22 +0000 (16:01 +0100)]
UPSTREAM: arm64: Implement ptep_set_access_flags() for hardware AF/DBM

When hardware updates of the access and dirty states are enabled, the
default ptep_set_access_flags() implementation based on calling
set_pte_at() directly is potentially racy. This triggers the "racy dirty
state clearing" warning in set_pte_at() because an existing writable PTE
is overridden with a clean entry.

There are two main scenarios for this situation:

1. The CPU getting an access fault does not support hardware updates of
   the access/dirty flags. However, a different agent in the system
   (e.g. SMMU) can do this, therefore overriding a writable entry with a
   clean one could potentially lose the automatically updated dirty
   status

2. A more complex situation is possible when all CPUs support hardware
   AF/DBM:

   a) Initial state: shareable + writable vma and pte_none(pte)
   b) Read fault taken by two threads of the same process on different
      CPUs
   c) CPU0 takes the mmap_sem and proceeds to handling the fault. It
      eventually reaches do_set_pte() which sets a writable + clean pte.
      CPU0 releases the mmap_sem
   d) CPU1 acquires the mmap_sem and proceeds to handle_pte_fault(). The
      pte entry it reads is present, writable and clean and it continues
      to pte_mkyoung()
   e) CPU1 calls ptep_set_access_flags()

   If between (d) and (e) the hardware (another CPU) updates the dirty
   state (clears PTE_RDONLY), CPU1 will override the PTE_RDONLY bit
   marking the entry clean again.

This patch implements an arm64-specific ptep_set_access_flags() function
to perform an atomic update of the PTE flags.

Fixes: 2f4b829c625e ("arm64: Add support for hardware updates of the access and dirty pte bits")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Tested-by: Julien Grall <julien.grall@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 4.3+
[will: reworded comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Id2a0b0d8eb6e7df6325ecb48b88b8401a5dd09e5
(cherry picked from commit 66dbd6e61a526ae7d11a208238ae2c17e5cacb6b)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
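
A hedged, simplified sketch of the idea; the mainline implementation uses an
ldxr/stxr loop in inline assembly and handles PTE_RDONLY explicitly, so take
the helper below as an illustration of an atomic read-modify-write of the PTE
rather than the actual code:

static void pte_update_flags_atomic(pte_t *ptep, pteval_t set_mask)
{
	pteval_t old = READ_ONCE(pte_val(*ptep));
	pteval_t new, prev;

	for (;;) {
		new = old | set_mask;		/* only ever add access flags */
		prev = cmpxchg(&pte_val(*ptep), old, new);
		if (prev == old)
			break;
		old = prev;	/* raced with a hardware AF/DBM update; retry */
	}
}
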
8 years ago  UPSTREAM: arm64: choose memstart_addr based on minimum sparsemem section alignment
Ard Biesheuvel [Wed, 30 Mar 2016 12:25:47 +0000 (14:25 +0200)]
UPSTREAM: arm64: choose memstart_addr based on minimum sparsemem section alignment

This redefines ARM64_MEMSTART_ALIGN in terms of the minimal alignment
required by sparsemem vmemmap. This comes down to using 1 GB for all
translation granules if CONFIG_SPARSEMEM_VMEMMAP is enabled.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I05b8bc6ab24f677f263b09d7c31fcce4f21269b1
(cherry picked from commit 06e9bf2fd9b372bc1c757995b6ca1cfab0720acb)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64/mm: ensure memstart_addr remains sufficiently aligned
Ard Biesheuvel [Wed, 30 Mar 2016 12:25:46 +0000 (14:25 +0200)]
UPSTREAM: arm64/mm: ensure memstart_addr remains sufficiently aligned

After choosing memstart_addr to be the highest multiple of
ARM64_MEMSTART_ALIGN less than or equal to the first usable physical memory
address, we clip the memblocks to the maximum size of the linear region.
Since the kernel may be high up in memory, we take care not to clip the
kernel itself, which means we have to clip some memory from the bottom if
this occurs, to ensure that the distance between the first and the last
usable physical memory address can be covered by the linear region.

However, we fail to update memstart_addr if this clipping from the bottom
occurs, which means that we may still end up with virtual addresses that
wrap into the userland range. So increment memstart_addr as appropriate to
prevent this from happening.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I72306207cc46a30b780f5e00b9ef23aa8409867e
(cherry picked from commit 2958987f5da2ebcf6a237c5f154d7e3340e60945)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64/kernel: fix incorrect EL0 check in inv_entry macro
Ard Biesheuvel [Fri, 18 Mar 2016 09:58:09 +0000 (10:58 +0100)]
UPSTREAM: arm64/kernel: fix incorrect EL0 check in inv_entry macro

The implementation of macro inv_entry refers to its 'el' argument without
the required leading backslash, which results in an undefined symbol
'el' to be passed into the kernel_entry macro rather than the index of
the exception level as intended.

This undefined symbol strangely enough does not result in build failures,
although it is visible in vmlinux:

     $ nm -n vmlinux |head
                      U el
     0000000000000000 A _kernel_flags_le_hi32
     0000000000000000 A _kernel_offset_le_hi32
     0000000000000000 A _kernel_size_le_hi32
     000000000000000a A _kernel_flags_le_lo32
     .....

However, it does result in incorrect code being generated for invalid
exceptions taken from EL0, since the argument check in kernel_entry
assumes EL1 if its argument does not equal '0'.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I406c1207682a4dff3054a019c26fdf1310b08ed1
(cherry picked from commit b660950c60a7278f9d8deb7c32a162031207c758)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: Add workaround for Cavium erratum 27456
Andrew Pinski [Thu, 25 Feb 2016 01:44:57 +0000 (17:44 -0800)]
UPSTREAM: arm64: Add workaround for Cavium erratum 27456

On ThunderX T88 pass 1.x through 2.1 parts, broadcast TLBI
instructions may cause the icache to become corrupted if it contains
data for a non-current ASID.

This patch implements the workaround (which invalidates the local
icache when switching the mm) by using code patching.

Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: David Daney <david.daney@cavium.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Change-Id: I60e6d17926b067a4e022d7b159e239114303a547
(cherry picked from commit 104a0c02e8b1936c049e18a6d4e4ab040fb61213)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: Add macros to read/write system registers
Mark Rutland [Thu, 5 Nov 2015 15:09:17 +0000 (15:09 +0000)]
UPSTREAM: arm64: Add macros to read/write system registers

Rather than crafting custom macros for reading/writing each system
register, provide generic accessors, read_sysreg and write_sysreg, for
this purpose.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Change-Id: I1d6cf948bc6660dfd096ff5a18eba682941098c1
(cherry picked from commit 3600c2fdc09a43a30909743569e35a29121602ed)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
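
A hedged sketch of such accessors, close to the form mainline asm/sysreg.h
eventually adopted:

#define read_sysreg(r) ({						\
	u64 __val;							\
	asm volatile("mrs %0, " __stringify(r) : "=r" (__val));	\
	__val;								\
})

#define write_sysreg(v, r) do {						\
	u64 __val = (u64)(v);						\
	asm volatile("msr " __stringify(r) ", %x0" : : "rZ" (__val));	\
} while (0)
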
8 years ago  UPSTREAM: arm64/efi: refactor EFI init and runtime code for reuse by 32-bit ARM
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:19 +0000 (13:28 +0100)]
UPSTREAM: arm64/efi: refactor EFI init and runtime code for reuse by 32-bit ARM

This refactors the EFI init and runtime code that will be shared
between arm64 and ARM so that it can be built for both archs.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Ieee70bbe117170d2054a9c82c4f1a8143b7e302b
(cherry picked from commit f7d924894265794f447ea799dd853400749b5a22)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64/efi: split off EFI init and runtime code for reuse by 32-bit ARM
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:18 +0000 (13:28 +0100)]
UPSTREAM: arm64/efi: split off EFI init and runtime code for reuse by 32-bit ARM

This splits off the early EFI init and runtime code that
- discovers the EFI params and the memory map from the FDT, and installs
  the memblocks and config tables.
- prepares and installs the EFI page tables so that UEFI Runtime Services
  can be invoked at the virtual address installed by the stub.

This will allow it to be reused for 32-bit ARM.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I143e4b38a5426f70027eff6cc5f732ac370ae69d
(cherry picked from commit e5bc22a42e4d46cc203fdfb6d2c76202b08666a0)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:17 +0000 (13:28 +0100)]
UPSTREAM: arm64/efi: mark UEFI reserved regions as MEMBLOCK_NOMAP

Change the EFI memory reservation logic to use memblock_mark_nomap()
rather than memblock_reserve() to mark UEFI reserved regions as
occupied. In addition to reserving them against allocations done by
memblock, this will also prevent them from being covered by the linear
mapping.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Ia3ce78f40f8d41a9afdd42238fe9cbfd81bbff08
(cherry picked from commit 4dffbfc48d65e5d8157a634fd670065d237a9377)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
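
A hedged sketch of the substance of the change; the efi_memory_desc_t fields
are real, while the surrounding iteration in the actual patch is omitted and
the function name is illustrative:

static void __init efi_reserve_region(efi_memory_desc_t *md)
{
	u64 paddr = md->phys_addr;
	u64 size  = md->num_pages << EFI_PAGE_SHIFT;

	/* was: memblock_reserve(paddr, size); */
	memblock_mark_nomap(paddr, size);	/* also keeps it out of the linear map */
}
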
8 years ago  BACKPORT: arm64: only consider memblocks with NOMAP cleared for linear mapping
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:16 +0000 (13:28 +0100)]
BACKPORT: arm64: only consider memblocks with NOMAP cleared for linear mapping

Take the new memblock attribute MEMBLOCK_NOMAP into account when
deciding whether a certain region is or should be covered by the
kernel direct mapping.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: Id7346a09bb3aee5e9a5ef8812251f80cf8265532
(cherry picked from commit 68709f45385aeddb0ca96a060c0c8259944f321b)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: mm/memblock: add MEMBLOCK_NOMAP attribute to memblock memory table
Ard Biesheuvel [Mon, 30 Nov 2015 12:28:15 +0000 (13:28 +0100)]
UPSTREAM: mm/memblock: add MEMBLOCK_NOMAP attribute to memblock memory table

This introduces the MEMBLOCK_NOMAP attribute and the required plumbing
to make it usable as an indicator that some parts of normal memory
should not be covered by the kernel direct mapping. It is up to the
arch to actually honor the attribute when laying out this mapping,
but the memblock code itself is modified to disregard these regions
for allocations and other general use.

Cc: linux-mm@kvack.org
Cc: Alexander Kuleshov <kuleshovmail@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Change-Id: I55cd3abdf514ac54c071fa0037d8dac73bda798d
(cherry picked from commit bf3d3cc580f9960883ebf9ea05868f336d9491c2)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
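
A hedged sketch of how an arch might honour the new attribute when building
its direct mapping; the memblock iteration helpers are real, while
create_direct_mapping() is an illustrative stand-in:

static void __init map_all_memory(void)
{
	struct memblock_region *reg;

	for_each_memblock(memory, reg) {
		if (memblock_is_nomap(reg))
			continue;	/* honour MEMBLOCK_NOMAP: no linear mapping */

		create_direct_mapping(reg->base, reg->size);
	}
}
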
8 years ago  ANDROID: dm: android-verity: Remove fec_header location constraint
Badhri Jagan Sridharan [Tue, 27 Sep 2016 20:48:29 +0000 (13:48 -0700)]
ANDROID: dm: android-verity: Remove fec_header location constraint

This CL removes the requirement that the fec_header be located right
after the ECC data.

(Cherry-picked from https://android-review.googlesource.com/#/c/280401)

Bug: 28865197
Signed-off-by: Badhri Jagan Sridharan <Badhri@google.com>
Change-Id: Ie04c8cf2dd755f54d02dbdc4e734a13d6f6507b5

8 years ago  BACKPORT: audit: consistently record PIDs with task_tgid_nr()
Paul Moore [Tue, 30 Aug 2016 21:19:13 +0000 (17:19 -0400)]
BACKPORT: audit: consistently record PIDs with task_tgid_nr()

Unfortunately we record PIDs in audit records using a variety of
methods despite the correct way being the use of task_tgid_nr().
This patch converts all of these callers, except for the case of
AUDIT_SET in audit_receive_msg() (see the comment in the code).

Reported-by: Jeff Vander Stoep <jeffv@google.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Bug: 28952093

(cherry picked from commit fa2bea2f5cca5b8d4a3e5520d2e8c0ede67ac108)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: If6645f9de8bc58ed9755f28dc6af5fbf08d72a00
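
A hedged sketch of the kind of call-site change this implies; audit_log_format()
and task_tgid_nr() are the real helpers, the wrapper is illustrative:

static void audit_log_task_pid(struct audit_buffer *ab)
{
	/* consistently record the thread group id, not a per-thread pid */
	audit_log_format(ab, " pid=%d", task_tgid_nr(current));
}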

8 years ago  android-base.cfg: Enable kernel ASLR
Jeff Vander Stoep [Fri, 23 Sep 2016 17:44:37 +0000 (10:44 -0700)]
android-base.cfg: Enable kernel ASLR

Bug: 30369029
Change-Id: I0c1c932255866f308d67de1df2ad52c9c19c4799

8 years ago  UPSTREAM: vmlinux.lds.h: allow arch specific handling of ro_after_init data section
Heiko Carstens [Tue, 7 Jun 2016 10:20:51 +0000 (12:20 +0200)]
UPSTREAM: vmlinux.lds.h: allow arch specific handling of ro_after_init data section

commit c74ba8b3480d ("arch: Introduce post-init read-only memory")
introduced the __ro_after_init attribute, which allows variables to be
added to the ro_after_init data section.

This new section was added to rodata, even though it contains writable
data. This in turn causes problems on architectures which mark the page
table entries that point to rodata read-only very early.

This patch allows architectures to implement their own handling of the
.data..ro_after_init section.
Usually that would be:
- mark the rodata section read-only very early
- mark the ro_after_init section read-only within mark_rodata_ro

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Bug: 31660652
Change-Id: If68cb4d86f88678c9bac8c47072775ab85ef5770
(cherry picked from commit 32fb2fc5c357fb99616bbe100dbcb27bc7f5d045)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
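
A hedged sketch of the hook this provides in the generic linker script header:
the default placement is only supplied if the architecture has not already
defined one (macro name as in include/asm-generic/vmlinux.lds.h):

#ifndef RO_AFTER_INIT_DATA
#define RO_AFTER_INIT_DATA						\
	*(.data..ro_after_init)
#endif
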
8 years ago  UPSTREAM: ARM/vdso: Mark the vDSO code read-only after init
David Brown [Wed, 17 Feb 2016 22:41:18 +0000 (14:41 -0800)]
UPSTREAM: ARM/vdso: Mark the vDSO code read-only after init

Although the ARM vDSO is cleanly separated by code/data with the code
being read-only in userspace mappings, the code page is still writable
from the kernel.

There have been exploits (such as http://itszn.com/blog/?p=21) that
take advantage of this on x86 to go from a bad kernel write to full
root.

Prevent this specific exploit class on ARM as well by putting the vDSO
code page in post-init read-only memory as well.

Before:
vdso: 1 text pages at base 80927000
root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
---[ Modules ]---
---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80600000           5M     ro x  SHD
0x80600000-0x80800000           2M     ro NX SHD
0x80800000-0xbe000000         984M     RW NX SHD

After:
vdso: 1 text pages at base 8072b000
root@Vexpress:/ cat /sys/kernel/debug/kernel_page_tables
---[ Modules ]---
---[ Kernel Mapping ]---
0x80000000-0x80100000           1M     RW NX SHD
0x80100000-0x80600000           5M     ro x  SHD
0x80600000-0x80800000           2M     ro NX SHD
0x80800000-0xbe000000         984M     RW NX SHD

Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.

Signed-off-by: David Brown <david.brown@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nathan Lynch <nathan_lynch@mentor.com>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1455748879-21872-8-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I8d3cb7707644343aa907b2d584312ccdad63e270
(cherry picked from commit 11bf9b865898961cee60a41c483c9f27ec76e12e)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: x86/vdso: Mark the vDSO code read-only after init
Kees Cook [Wed, 17 Feb 2016 22:41:17 +0000 (14:41 -0800)]
UPSTREAM: x86/vdso: Mark the vDSO code read-only after init

The vDSO does not need to be writable after __init, so mark it as
__ro_after_init. The result kills the exploit method of writing to the
vDSO from kernel space resulting in userspace executing the modified code,
as shown here to bypass SMEP restrictions: http://itszn.com/blog/?p=21

The memory map (with added vDSO address reporting) shows the vDSO moving
into read-only memory:

Before:
[    0.143067] vDSO @ ffffffff82004000
[    0.143551] vDSO @ ffffffff82006000
---[ High Kernel Mapping ]---
0xffffffff80000000-0xffffffff81000000      16M                         pmd
0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
0xffffffff81e00000-0xffffffff81e05000      20K   ro             GLB NX pte
0xffffffff81e05000-0xffffffff82000000    2028K   ro                 NX pte
0xffffffff82000000-0xffffffff8214f000    1340K   RW             GLB NX pte
0xffffffff8214f000-0xffffffff82281000    1224K   RW                 NX pte
0xffffffff82281000-0xffffffff82400000    1532K   RW             GLB NX pte
0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
0xffffffff83200000-0xffffffffc0000000     974M                         pmd

After:
[    0.145062] vDSO @ ffffffff81da1000
[    0.146057] vDSO @ ffffffff81da4000
---[ High Kernel Mapping ]---
0xffffffff80000000-0xffffffff81000000      16M                         pmd
0xffffffff81000000-0xffffffff81800000       8M   ro     PSE     GLB x  pmd
0xffffffff81800000-0xffffffff819f3000    1996K   ro             GLB x  pte
0xffffffff819f3000-0xffffffff81a00000      52K   ro                 NX pte
0xffffffff81a00000-0xffffffff81e00000       4M   ro     PSE     GLB NX pmd
0xffffffff81e00000-0xffffffff81e0b000      44K   ro             GLB NX pte
0xffffffff81e0b000-0xffffffff82000000    2004K   ro                 NX pte
0xffffffff82000000-0xffffffff8214c000    1328K   RW             GLB NX pte
0xffffffff8214c000-0xffffffff8227e000    1224K   RW                 NX pte
0xffffffff8227e000-0xffffffff82400000    1544K   RW             GLB NX pte
0xffffffff82400000-0xffffffff83200000      14M   RW     PSE     GLB NX pmd
0xffffffff83200000-0xffffffffc0000000     974M                         pmd

Based on work by PaX Team and Brad Spengler.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Andy Lutomirski <luto@kernel.org>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-7-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: Iafbf7314a1106c297ea883031ee96c4f53c04a2b
(cherry picked from commit 018ef8dcf3de5f62e2cc1a9273cc27e1c6ba8de5)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: lkdtm: Verify that '__ro_after_init' works correctly
Kees Cook [Wed, 17 Feb 2016 22:41:16 +0000 (14:41 -0800)]
UPSTREAM: lkdtm: Verify that '__ro_after_init' works correctly

The new __ro_after_init section should be writable before init, but
not after. Validate that it gets updated at init and can't be written
to afterwards.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-6-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I75301d0497fde49a02f13b4e75300111ddadda9d
(cherry picked from commit 7cca071ccbd2a293ea69168ace6abbcdce53098e)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
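
A hedged sketch of the kind of check being added: write to a __ro_after_init
variable after boot and expect the write to fault (names are illustrative,
not the exact lkdtm symbols):

static unsigned long ro_after_init_test __ro_after_init = 0x55AA5500;

static void lkdtm_write_ro_after_init_sketch(void)
{
	unsigned long *ptr = &ro_after_init_test;

	/* succeeds before mark_rodata_ro(), must oops afterwards */
	*ptr ^= 0xabcd1234;
	pr_info("write to __ro_after_init data unexpectedly survived\n");
}
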
8 years ago  UPSTREAM: arch: Introduce post-init read-only memory
Kees Cook [Wed, 17 Feb 2016 22:41:15 +0000 (14:41 -0800)]
UPSTREAM: arch: Introduce post-init read-only memory

One of the easiest ways to protect the kernel from attack is to reduce
the internal attack surface exposed when a "write" flaw is available. By
making as much of the kernel read-only as possible, we reduce the
attack surface.

Many things are written to only during __init, and never changed
again. These cannot be made "const" since the compiler will do the wrong
thing (we do actually need to write to them). Instead, move these items
into a memory region that will be made read-only during mark_rodata_ro()
which happens after all kernel __init code has finished.

This introduces __ro_after_init as a way to mark such memory, and adds
some documentation about the existing __read_mostly marking.

This improves the security of the Linux kernel by marking formerly
read-write memory regions as read-only on a fully booted up system.

Based on work by PaX Team and Brad Spengler.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-5-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I640f6d858d9770a5e480d12a1c716adf8842feb0
(cherry picked from commit c74ba8b3480da6ddaea17df2263ec09b869ac496)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
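
A hedged example of the intended usage: a variable written exactly once during
__init and read-only thereafter (the variable and parameter names are
illustrative):

#include <linux/cache.h>	/* __ro_after_init */
#include <linux/init.h>
#include <linux/kernel.h>

static unsigned long boot_chosen_limit __ro_after_init;

static int __init choose_limit(char *s)
{
	/* written only here, before mark_rodata_ro() runs */
	boot_chosen_limit = simple_strtoul(s, NULL, 0);
	return 1;
}
__setup("limit=", choose_limit);
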
8 years ago  UPSTREAM: x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option
Kees Cook [Wed, 17 Feb 2016 22:41:14 +0000 (14:41 -0800)]
UPSTREAM: x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option

This removes the CONFIG_DEBUG_RODATA option and makes it always enabled.

This simplifies the code and also makes it clearer that read-only mapped
memory is just as fundamental a security feature in kernel-space as it is
in user-space.

Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-4-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I3e79c7c4ead79a81c1445f1b3dd28003517faf18
(cherry picked from commit 9ccaf77cf05915f51231d158abfd5448aedde758)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kerne...
Kees Cook [Wed, 17 Feb 2016 22:41:13 +0000 (14:41 -0800)]
UPSTREAM: mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings

It may be useful to debug writes to the readonly sections of memory,
so provide a cmdline "rodata=off" to allow for this. This can be
expanded in the future to support "log" and "write" modes, but that
will need to be architecture-specific.

This also makes KDB software breakpoints more usable, as read-only
mappings can now be disabled on any kernel.

Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-3-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I67b818ca390afdd42ab1c27cb4f8ac64bbdb3b65
(cherry picked from commit d2aa1acad22f1bdd0cfa67b3861800e392254454)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
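
A hedged sketch of the plumbing described above, close in shape to what
init/main.c gained (treat the details as illustrative):

static bool rodata_enabled = true;

static int __init set_debug_rodata(char *str)
{
	return strtobool(str, &rodata_enabled);
}
__setup("rodata=", set_debug_rodata);

static void mark_readonly(void)
{
	if (rodata_enabled)
		mark_rodata_ro();	/* usual post-init write protection */
	else
		pr_info("Kernel memory protection disabled.\n");
}
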
8 years ago  UPSTREAM: asm-generic: Consolidate mark_rodata_ro()
Kees Cook [Wed, 17 Feb 2016 22:41:12 +0000 (14:41 -0800)]
UPSTREAM: asm-generic: Consolidate mark_rodata_ro()

Instead of defining mark_rodata_ro() in each architecture, consolidate it.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Gross <agross@codeaurora.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ashok Kumar <ashoks@broadcom.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Brown <david.brown@linaro.org>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Helge Deller <deller@gmx.de>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-parisc@vger.kernel.org
Link: http://lkml.kernel.org/r/1455748879-21872-2-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: Iec0c44b3f5d7948954da93fba6cb57888a2709de
(cherry picked from commit e267d97b83d9cecc16c54825f9f3ac7f72dc1e1e)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
8 years ago  UPSTREAM: arm64: spinlock: fix spin_unlock_wait for LSE atomics
Will Deacon [Wed, 8 Jun 2016 14:10:57 +0000 (15:10 +0100)]
UPSTREAM: arm64: spinlock: fix spin_unlock_wait for LSE atomics

Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under
the premise that the LSE atomics (in particular, the LDADDA instruction)
are indivisible.

Unfortunately, these instructions are only indivisible when used with the
-AL (full ordering) suffix and, consequently, the same issue can
theoretically be observed with LSE atomics, where a later (in program
order) load can be speculated before the write portion of the atomic
operation.

This patch fixes the issue by performing a CAS of the lock once we've
established that it's unlocked, in much the same way as the LL/SC code.

Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 3a5facd09da848193f5bcb0dea098a298bc1a29d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icedaa4c508784bf43d0b5787586480fd668ccc49

8 years ago  UPSTREAM: arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE
Mark Rutland [Wed, 24 Aug 2016 17:02:08 +0000 (18:02 +0100)]
UPSTREAM: arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE

When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
mapping entries may still live in TLBs, and we risk violating
Break-Before-Make requirements, leading to TLB conflicts and/or other issues.

We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
this we are subject to issues relating to the Break-Before-Make violation.

Avoid these issues by invalidating the TLBs before the new mappings can be
used by the hardware.

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit fd363bd417ddb6103564c69cfcbd92d9a7877431)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I6c23ce55cdd8b66587b6787b8f28df8535e39f24

8 years ago  UPSTREAM: arm64: Only select ARM64_MODULE_PLTS if MODULES=y
Catalin Marinas [Tue, 26 Jul 2016 17:16:55 +0000 (10:16 -0700)]
UPSTREAM: arm64: Only select ARM64_MODULE_PLTS if MODULES=y

Selecting CONFIG_RANDOMIZE_BASE=y and CONFIG_MODULES=n fails to build
the module PLTs support:

  CC      arch/arm64/kernel/module-plts.o
/work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c: In function ‘module_emit_plt_entry’:
/work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c:32:49: error: dereferencing pointer to incomplete type ‘struct module’

This patch selects ARM64_MODULE_PLTS conditionally only if MODULES is
enabled.

Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Reported-by: Jeff Vander Stoep <jeffv@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit b9c220b589daaf140f5b8ebe502c98745b94e65c)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I446cb3aa78f1c64b5aa1e2e90fda13f7d46cac33

8 years ago  UPSTREAM: arm64: kasan: Use actual memory node when populating the kernel image shadow
Catalin Marinas [Thu, 10 Mar 2016 18:30:56 +0000 (18:30 +0000)]
UPSTREAM: arm64: kasan: Use actual memory node when populating the kernel image shadow

With the 16KB or 64KB page configurations, the generic
vmemmap_populate() implementation warns on potential offnode
page_structs via vmemmap_verify() because the arm64 kasan_init() passes
NUMA_NO_NODE instead of the actual node for the kernel image memory.

Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: James Morse <james.morse@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 2f76969f2eef051bdd63d38b08d78e790440b0ad)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8985e5b4628a9c7076767d4565f7633635813b5c
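
A hedged sketch of the one-line change: pass the node that actually holds the
kernel image instead of NUMA_NO_NODE (the enclosing function is simplified,
helper names as in mainline arm64):

static void __init map_kernel_shadow(unsigned long shadow_start,
				     unsigned long shadow_end)
{
	/* was: vmemmap_populate(shadow_start, shadow_end, NUMA_NO_NODE); */
	vmemmap_populate(shadow_start, shadow_end,
			 pfn_to_nid(virt_to_pfn(_text)));
}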

8 years ago  UPSTREAM: arm64: lse: deal with clobbered IP registers after branch via PLT
Ard Biesheuvel [Thu, 25 Feb 2016 19:48:53 +0000 (20:48 +0100)]
UPSTREAM: arm64: lse: deal with clobbered IP registers after branch via PLT

The LSE atomics implementation uses runtime patching to patch in calls
to out of line non-LSE atomics implementations on cores that lack hardware
support for LSE. To avoid paying the overhead cost of a function call even
if no call ends up being made, the bl instruction is kept invisible to the
compiler, and the out of line implementations preserve all registers, not
just the ones that they are required to preserve as per the AAPCS64.

However, commit fd045f6cd98e ("arm64: add support for module PLTs") added
support for routing branch instructions via veneers if the branch target
offset exceeds the range of the ordinary relative branch instructions.
Since this deals with jump and call instructions that are exposed to ELF
relocations, the PLT code uses x16 to hold the address of the branch target
when it performs an indirect branch-to-register, something which is
explicitly allowed by the AAPCS64 (and ordinary compiler generated code
does not expect register x16 or x17 to retain their values across a bl
instruction).

Since the LSE runtime-patched bl instructions don't adhere to the AAPCS64,
they don't deal with this clobbering of registers x16 and x17. So add them
to the clobber list of the asm() statements that perform the call
instructions, and drop x16 and x17 from the list of registers that are
callee saved in the out of line non-LSE implementations.

In addition, since we have given these functions two scratch registers,
they no longer need to stack/unstack temp registers.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: factored clobber list into #define, updated Makefile comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 5be8b70af1ca78cefb8b756d157532360a5fd663)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia44a54eba315a47a6b8aaa2259b444e0139162c0

8 years ago  UPSTREAM: arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly
Ard Biesheuvel [Wed, 2 Mar 2016 08:47:13 +0000 (09:47 +0100)]
UPSTREAM: arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly

Commit 8439e62a1561 ("arm64: mm: use bit ops rather than arithmetic in
pa/va translations") changed the boundary check against PAGE_OFFSET from
an arithmetic comparison to a bit test. This means we now silently assume
that PAGE_OFFSET is a power of 2 that divides the kernel virtual address
space into two equal halves. So make that assumption explicit.
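
For illustration, the assumption can be captured in a standalone C11
snippet like the one below (the exact check added by the patch may
differ; the VA_BITS value is just an example):

  #include <assert.h>
  #include <stdint.h>

  #define VA_BITS      48
  #define PAGE_OFFSET  (UINT64_C(0xffffffffffffffff) << (VA_BITS - 1))

  /* PAGE_OFFSET must start exactly halfway up the 2^VA_BITS kernel VA
   * space, otherwise a single-bit test is not equivalent to the old
   * arithmetic comparison.
   */
  static_assert(-PAGE_OFFSET == (UINT64_C(1) << (VA_BITS - 1)),
                "PAGE_OFFSET must divide the VA space evenly");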

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 6d2aa549de1fc998581d216de3853aa131aa4446)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8c3bc8cdb7d7f7dea092fd1a208b04583a141054

8 years agoUPSTREAM: arm64: kasan: Fix zero shadow mapping overriding kernel image shadow
Catalin Marinas [Thu, 10 Mar 2016 18:41:16 +0000 (18:41 +0000)]
UPSTREAM: arm64: kasan: Fix zero shadow mapping overriding kernel image shadow

With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
kimg_shadow_end is not page aligned (_end shifted by
KASAN_SHADOW_SCALE_SHIFT), the edges of previously mapped kernel image
shadow via vmemmap_populate() may be overridden by subsequent calls to
kasan_populate_zero_shadow(), leading to kernel panics like below:

------------------------------------------------------------------------------
Unable to handle kernel paging request at virtual address fffffc100135068c
pgd = fffffc8009ac0000
[fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
Internal error: Oops: 9600004f [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
Hardware name: Juno (DT)
task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
PC is at __memset+0x4c/0x200
LR is at kasan_unpoison_shadow+0x34/0x50
pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
sp : fffffe0900203db0
x29: fffffe0900203db0 x28: 0000000000000000
x27: 0000000000000000 x26: 0000000000000000
x25: fffffc80099b69d0 x24: 0000000000000001
x23: 0000000000000000 x22: 0000000000002000
x21: dffffc8000000000 x20: 1fffff9001350a8c
x19: 0000000000002000 x18: 0000000000000008
x17: 0000000000000147 x16: ffffffffffffffff
x15: 79746972100e041d x14: ffffff0000000000
x13: ffff000000000000 x12: 0000000000000000
x11: 0101010101010101 x10: 1fffffc11c000000
x9 : 0000000000000000 x8 : fffffc100135068c
x7 : 0000000000000000 x6 : 000000000000003f
x5 : 0000000000000040 x4 : 0000000000000004
x3 : fffffc100134f651 x2 : 0000000000000400
x1 : 0000000000000000 x0 : fffffc100135068c

Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
Call trace:
[<fffffc800846f1cc>] __memset+0x4c/0x200
[<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
[<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
[<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
[<fffffc80089e1948>] kernel_init+0x10/0xf8
[<fffffc8008093a00>] ret_from_fork+0x10/0x50
------------------------------------------------------------------------------

This patch aligns kimg_shadow_start and kimg_shadow_end to
SWAPPER_BLOCK_SIZE in all configurations.
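
In code, the alignment amounts to roughly the following (a sketch; helper
names as in mainline, exact context elided):

  kimg_shadow_start = round_down((u64)kasan_mem_to_shadow(_text), SWAPPER_BLOCK_SIZE);
  kimg_shadow_end   = round_up((u64)kasan_mem_to_shadow(_end), SWAPPER_BLOCK_SIZE);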

Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 2776e0e8ef683a42fe3e9a5facf576b73579700e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I13a6b38aefbeddd20bc87cb1382f2787bbc5cf9c

8 years agoUPSTREAM: arm64: consistently use p?d_set_huge
Mark Rutland [Tue, 22 Mar 2016 10:11:45 +0000 (10:11 +0000)]
UPSTREAM: arm64: consistently use p?d_set_huge

Commit 324420bf91f60582 ("arm64: add support for ioremap() block
mappings") added new p?d_set_huge functions which do the hard work to
generate and set a correct block entry.

These differ from open-coded huge page creation in the early page table
code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
retained by mk_sect_prot() for any valid prot), but are otherwise
identical (and cannot fail on arm64).

For simplicity and consistency, make use of these in the initial page
table creation code.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit c661cb1c537e2364bfdabb298fb934fd77445e98)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I25e58a1626831c2c709abcded989d1770fea851c

8 years agoUPSTREAM: arm64: fix KASLR boot-time I-cache maintenance
Mark Rutland [Tue, 15 Mar 2016 11:22:57 +0000 (11:22 +0000)]
UPSTREAM: arm64: fix KASLR boot-time I-cache maintenance

Commit f80fb3a3d50843a4 ("arm64: add support for kernel ASLR") missed a
DSB necessary to complete I-cache maintenance in the primary boot path,
and hence stale instructions may still be present in the I-cache and may
be executed until the I-cache maintenance naturally completes.

Since commit 8ec41987436d566f ("arm64: mm: ensure patched kernel text is
fetched from PoU"), all CPUs invalidate their I-caches after their MMU
is enabled. Prior to a CPU's MMU having been enabled, arbitrary lines may
have been fetched from the PoC into I-caches. We never patch text
expected to be executed with the MMU off. Thus, it is unnecessary to
perform broadcast I-cache maintenance in the primary boot path.

This patch reduces the scope of the I-cache maintenance to the local
CPU, and adds the missing DSB with similar scope, matching prior
maintenance in the primary boot path.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit b90b4a608ea2401cc491828f7a385edd2e236e37)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ic66b5fec29867b86782ad6c3243642afc1f40080

8 years agoUPSTREAM: arm64: hugetlb: partial revert of 66b3923a1a0f
Will Deacon [Wed, 9 Mar 2016 15:22:55 +0000 (15:22 +0000)]
UPSTREAM: arm64: hugetlb: partial revert of 66b3923a1a0f

Commit 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
introduced support for huge pages using the contiguous bit in the PTE
as opposed to block mappings, which may be slightly unwieldy (512M) in
64k page configurations.

Unfortunately, this support has resulted in some late regressions when
running the libhugetlbfs test suite with 64k pages and CONFIG_DEBUG_VM
as a result of a BUG:

 | readback (2M: 64): ------------[ cut here ]------------
 | kernel BUG at fs/hugetlbfs/inode.c:446!
 | Internal error: Oops - BUG: 0 [#1] SMP
 | Modules linked in:
 | CPU: 7 PID: 1448 Comm: readback Not tainted 4.5.0-rc7 #148
 | Hardware name: linux,dummy-virt (DT)
 | task: fffffe0040964b00 ti: fffffe00c2668000 task.ti: fffffe00c2668000
 | PC is at remove_inode_hugepages+0x44c/0x480
 | LR is at remove_inode_hugepages+0x264/0x480

Rather than revert the entire patch, simply avoid advertising the
contiguous huge page sizes for now while people are actively working on
a fix. This patch can then be reverted once things have been sorted out.

Cc: David Woods <dwoods@ezchip.com>
Reported-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit ff7925848b50050732ac0401e0acf27e8b241d7b)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3a9751fa79b2d2871dbdc06ea1aa3d1336bb4f4f

8 years agoUPSTREAM: arm64: make irq_stack_ptr more robust
Yang Shi [Thu, 11 Feb 2016 21:53:10 +0000 (13:53 -0800)]
UPSTREAM: arm64: make irq_stack_ptr more robust

Switching between stacks is only valid if we are tracing ourselves while on the
irq_stack, so it is only valid when in the current and non-preemptible context;
otherwise it is just zeroed off.

Fixes: 132cd887b5c5 ("arm64: Modify stack trace and dump for use with irq_stack")
Acked-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit a80a0eb70c358f8c7dda4bb62b2278dc6285217b)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I431d3d5e8e1f556ddfef283af88dd2f63b825f7c

8 years agoUPSTREAM: arm64: efi: invoke EFI_RNG_PROTOCOL to supply KASLR randomness
Ard Biesheuvel [Tue, 26 Jan 2016 13:48:29 +0000 (14:48 +0100)]
UPSTREAM: arm64: efi: invoke EFI_RNG_PROTOCOL to supply KASLR randomness

Since arm64 does not use a decompressor that supplies an execution
environment where it is feasible to some extent to provide a source of
randomness, the arm64 KASLR kernel depends on the bootloader to supply
some random bits in the /chosen/kaslr-seed DT property upon kernel entry.

On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to obtain
some random bits. At the same time, use it to randomize the offset of the
kernel Image in physical memory.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 2b5fe07a78a09a32002642b8a823428ade611f16)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I9cb7ae5727dfdf3726b1c9544bce74722ec77bbd

8 years agoUPSTREAM: efi: stub: use high allocation for converted command line
Ard Biesheuvel [Mon, 11 Jan 2016 10:47:49 +0000 (11:47 +0100)]
UPSTREAM: efi: stub: use high allocation for converted command line

Before we can move the command line processing before the allocation
of the kernel, which is required for detecting the 'nokaslr' option
which controls that allocation, move the converted command line higher
up in memory, to prevent it from interfering with the kernel itself.

Since x86 needs the address to fit in 32 bits, use UINT_MAX as the upper
bound there. Otherwise, use ULONG_MAX (i.e., no limit).

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 48fcb2d0216103d15306caa4814e2381104df6d8)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ie959355658d3f2f1819bee842c77cc5eef54b8e7

8 years agoUPSTREAM: efi: stub: add implementation of efi_random_alloc()
Ard Biesheuvel [Mon, 11 Jan 2016 09:43:16 +0000 (10:43 +0100)]
UPSTREAM: efi: stub: add implementation of efi_random_alloc()

This implements efi_random_alloc(), which allocates a chunk of memory of
a certain size at a certain alignment, and uses the random_seed argument
it receives to randomize the address of the allocation.

This is implemented by iterating over the UEFI memory map, counting the
number of suitable slots (aligned offsets) within each region, and picking
a random number between 0 and 'number of slots - 1' to select the slot.
This should guarantee that each possible offset is equally likely to be
chosen.
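
A self-contained sketch of the slot-counting idea (the struct, names and
seed handling are illustrative only, not the EFI stub's actual types or
API; 'align' is assumed to be a power of two):

  #include <stddef.h>
  #include <stdint.h>

  struct region { uint64_t start, size; };

  /* number of 'align'-aligned offsets at which 'alloc_size' bytes fit in r */
  static uint64_t slots_in(const struct region *r, uint64_t alloc_size,
                           uint64_t align)
  {
          uint64_t first = (r->start + align - 1) & ~(align - 1);
          uint64_t end = r->start + r->size;

          if (end < alloc_size || first > end - alloc_size)
                  return 0;
          return (end - alloc_size - first) / align + 1;
  }

  /* pick one slot uniformly across all regions, given a random 'seed' */
  static uint64_t random_alloc(const struct region *map, size_t nr,
                               uint64_t alloc_size, uint64_t align,
                               uint64_t seed)
  {
          uint64_t total = 0, target;
          size_t i;

          for (i = 0; i < nr; i++)
                  total += slots_in(&map[i], alloc_size, align);
          if (!total)
                  return 0;               /* no region can hold the allocation */

          target = seed % total;
          for (i = 0; i < nr; i++) {
                  uint64_t n = slots_in(&map[i], alloc_size, align);

                  if (target < n) {
                          uint64_t first = (map[i].start + align - 1) & ~(align - 1);

                          return first + target * align;
                  }
                  target -= n;
          }
          return 0;                       /* not reached */
  }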

Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 2ddbfc81eac84a299cb4747a8764bc43f23e9008)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8f59e3e91a71c752d69fd08ca43a890977c82919

8 years agoBACKPORT: efi: stub: implement efi_get_random_bytes() based on EFI_RNG_PROTOCOL
Ard Biesheuvel [Sun, 10 Jan 2016 10:29:07 +0000 (11:29 +0100)]
BACKPORT: efi: stub: implement efi_get_random_bytes() based on EFI_RNG_PROTOCOL

This exposes the firmware's implementation of EFI_RNG_PROTOCOL via a new
function efi_get_random_bytes().

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit e4fbf4767440472f9d23b0f25a2b905e1c63b6a8)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Id46036b78c2efd223b6cd5488e512fd93e8f597d

8 years agoBACKPORT: arm64: kaslr: randomize the linear region
Ard Biesheuvel [Fri, 29 Jan 2016 10:59:03 +0000 (11:59 +0100)]
BACKPORT: arm64: kaslr: randomize the linear region

When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
provided by the bootloader, randomize the placement of RAM inside the
linear region if sufficient space is available. For instance, on a 4KB
granule/3 levels kernel, the linear region is 256 GB in size, and we can
choose any 1 GB aligned offset that is far enough from the top of the
address space to fit the distance between the start of the lowest memblock
and the top of the highest memblock.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit c031a4213c11a5db475f528c182f7b3858df11db)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I272de8ee358351d95eacc7dc5f47600adec3e813

8 years agoUPSTREAM: arm64: mm: treat memstart_addr as a signed quantity
Ard Biesheuvel [Fri, 26 Feb 2016 16:57:14 +0000 (17:57 +0100)]
UPSTREAM: arm64: mm: treat memstart_addr as a signed quantity

Commit c031a4213c11 ("arm64: kaslr: randomize the linear region")
implements randomization of the linear region, by subtracting a random
multiple of PUD_SIZE from memstart_addr. This causes the virtual mapping
of system RAM to move upwards in the linear region, and at the same time
causes memstart_addr to assume a value which may be negative if the offset
of system RAM in the physical space is smaller than its offset relative to
PAGE_OFFSET in the virtual space.

Since memstart_addr is effectively an offset now, redefine its type as s64
so that expressions involving shifting or division preserve its sign.
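
A standalone illustration of why the signedness matters for shifts
(values are made up; this is not kernel code):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          int64_t  memstart_signed   = -0x40000000;             /* offset may now be negative */
          uint64_t memstart_unsigned = 0xffffffffc0000000ULL;   /* same bit pattern */

          /* The arithmetic shift keeps the sign; the unsigned shift does not. */
          printf("%lld\n", (long long)(memstart_signed >> 12));            /* -262144 */
          printf("%llu\n", (unsigned long long)(memstart_unsigned >> 12)); /* ~4.5e15 */
          return 0;
  }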

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 020d044f66874eba058ce8264fc550f3eca67879)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I0482ebc13baaa9005cf372795e656c2417be9d1c

8 years agoUPSTREAM: arm64: vmemmap: use virtual projection of linear region
Ard Biesheuvel [Fri, 26 Feb 2016 16:57:13 +0000 (17:57 +0100)]
UPSTREAM: arm64: vmemmap: use virtual projection of linear region

Commit dd006da21646 ("arm64: mm: increase VA range of identity map") made
some changes to the memory mapping code to allow physical memory to reside
at an offset that exceeds the size of the virtual mapping.

However, since the size of the vmemmap area is proportional to the size of
the VA area, but it is populated relative to the physical space, we may
end up with the struct page array being mapped outside of the vmemmap
region. For instance, on my Seattle A0 box, I can see the following output
in the dmesg log.

   vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000   (     8 GB maximum)
             0xffffffbfc0000000 - 0xffffffbfd0000000   (   256 MB actual)

We can fix this by deciding that the vmemmap region is not a projection of
the physical space, but of the virtual space above PAGE_OFFSET, i.e., the
linear region. This way, we are guaranteed that the vmemmap region is of
sufficient size, and we can even reduce the size by half.

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit dfd55ad85e4a7fbaa82df12467515ac3c81e8a3e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8112d910f9659941dab6de5b3791f395150c77f1

8 years agoBACKPORT: arm64: add support for kernel ASLR
Ard Biesheuvel [Tue, 26 Jan 2016 13:12:01 +0000 (14:12 +0100)]
BACKPORT: arm64: add support for kernel ASLR

This adds support for KASLR, implemented based on entropy provided by
the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
of the address space (VA_BITS) and the page size, the entropy in the
virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
4 levels), with the sidenote that displacements that result in the kernel
image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
granule kernels, respectively) are not allowed, and will be rounded up to
an acceptable value.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
randomized independently from the core kernel. This makes it less likely
that the location of core kernel data structures can be determined by an
adversary, but causes all function calls from modules into the core kernel
to be resolved via entries in the module PLTs.

If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
randomized by choosing a page aligned 128 MB region inside the interval
[_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
entropy (depending on page size), independently of the kernel randomization,
but still guarantees that modules are within the range of relative branch
and jump instructions (with the caveat that, since the module region is
shared with other uses of the vmalloc area, modules may need to be loaded
further away if the module region is exhausted).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit f80fb3a3d50843a401dac4b566b3b131da8077a2)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3f5fafa4e92e5ff39259d57065541366237eb021

8 years agoUPSTREAM: arm64: add support for building vmlinux as a relocatable PIE binary
Ard Biesheuvel [Tue, 26 Jan 2016 08:13:44 +0000 (09:13 +0100)]
UPSTREAM: arm64: add support for building vmlinux as a relocatable PIE binary

This implements CONFIG_RELOCATABLE, which links the final vmlinux
image with a dynamic relocation section, allowing the early boot code
to perform a relocation to a different virtual address at runtime.

This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 1e48ef7fcc374051730381a2a05da77eb4eafdb0)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: If02e065722d438f85feb62240fc230e16f58e912

8 years agoUPSTREAM: arm64: switch to relative exception tables
Ard Biesheuvel [Fri, 1 Jan 2016 14:02:12 +0000 (15:02 +0100)]
UPSTREAM: arm64: switch to relative exception tables

Instead of using absolute addresses for both the exception location
and the fixup, use offsets relative to the exception table entry values.
Not only does this cut the size of the exception table in half, it is
also a prerequisite for KASLR, since absolute exception table entries
are subject to dynamic relocation, which is incompatible with the sorting
of the exception table that occurs at build time.

This patch also introduces the _ASM_EXTABLE preprocessor macro (which
exists on x86 as well) and its _asm_extable assembly counterpart, as
shorthands to emit exception table entries.
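
For illustration, the general shape of a relative exception table entry
and how the absolute addresses are recovered from it (a sketch of the
technique, not necessarily the exact upstream definitions):

  struct exception_table_entry {
          int insn;       /* signed offset from &insn to the faulting instruction */
          int fixup;      /* signed offset from &fixup to the fixup code */
  };

  static inline unsigned long ex_insn_addr(const struct exception_table_entry *e)
  {
          return (unsigned long)&e->insn + e->insn;
  }

  static inline unsigned long ex_fixup_addr(const struct exception_table_entry *e)
  {
          return (unsigned long)&e->fixup + e->fixup;
  }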

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 6c94f27ac847ff8ef15b3da5b200574923bd6287)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icedda8ee8c32843c439765783816d7d71ca0073a

8 years agoUPSTREAM: extable: add support for relative extables to search and sort routines
Ard Biesheuvel [Fri, 1 Jan 2016 11:39:09 +0000 (12:39 +0100)]
UPSTREAM: extable: add support for relative extables to search and sort routines

This adds support to the generic search_extable() and sort_extable()
implementations for dealing with exception table entries whose fields
contain relative offsets rather than absolute addresses.

Acked-by: Helge Deller <deller@gmx.de>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit a272858a3c1ecd4a935ba23c66668f81214bd110)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I9d144d351d547c49bf3203a69dfff3cb71a51177

8 years agoUPSTREAM: scripts/sortextable: add support for ET_DYN binaries
Ard Biesheuvel [Sun, 10 Jan 2016 10:42:28 +0000 (11:42 +0100)]
UPSTREAM: scripts/sortextable: add support for ET_DYN binaries

Add support to scripts/sortextable for handling relocatable (PIE)
executables, whose ELF type is ET_DYN, not ET_EXEC. Other than adding
support for the new type, no changes are needed.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 7b957b6e603623ef8b2e8222fa94b976df613fa2)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: If55296ef4934b99c38ceb5acbd7c4a7fb23f24c1

8 years agoUPSTREAM: arm64: futex.h: Add missing PAN toggling
James Morse [Tue, 2 Feb 2016 15:53:59 +0000 (15:53 +0000)]
UPSTREAM: arm64: futex.h: Add missing PAN toggling

futex.h's futex_atomic_cmpxchg_inatomic() does not use the
__futex_atomic_op() macro and needs its own PAN toggling. This was missed
when the feature was implemented.

Fixes: 338d4f49d6f ("arm64: kernel: Add support for Privileged Access Never")
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 811d61e384e24759372bb3f01772f3744b0a8327)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I6e7b338a1af17b784d4196101422c3acee3b88ed

8 years agoUPSTREAM: arm64: make asm/elf.h available to asm files
Ard Biesheuvel [Mon, 11 Jan 2016 16:08:26 +0000 (17:08 +0100)]
UPSTREAM: arm64: make asm/elf.h available to asm files

This reshuffles some code in asm/elf.h and puts a #ifndef __ASSEMBLY__
around its C definitions so that the CPP defines can be used in asm
source files as well.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 4a2e034e5cdadde4c712f79bdd57d1455c76a3db)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ic499e950d2ef297d10848862a6dfa07b90887f4c

8 years agoUPSTREAM: arm64: avoid dynamic relocations in early boot code
Ard Biesheuvel [Sat, 26 Dec 2015 11:46:40 +0000 (12:46 +0100)]
UPSTREAM: arm64: avoid dynamic relocations in early boot code

Before implementing KASLR for arm64 by building a self-relocating PIE
executable, we have to ensure that values we use before the relocation
routine is executed are not subject to dynamic relocation themselves.
This applies not only to virtual addresses, but also to values that are
supplied by the linker at build time and relocated using R_AARCH64_ABS64
relocations.

So instead, use assemble time constants, or force the use of static
relocations by folding the constants into the instructions.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 2bf31a4a05f5b00f37d65ba029d36a0230286cb7)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icce0176591e3c0ae444e1ea54258efe677933c5b

8 years agoUPSTREAM: arm64: avoid R_AARCH64_ABS64 relocations for Image header fields
Ard Biesheuvel [Sat, 26 Dec 2015 12:48:02 +0000 (13:48 +0100)]
UPSTREAM: arm64: avoid R_AARCH64_ABS64 relocations for Image header fields

Unfortunately, the current way of using the linker to emit build time
constants into the Image header will no longer work once we switch to
the use of PIE executables. The reason is that such constants are emitted
into the binary using R_AARCH64_ABS64 relocations, which are resolved at
runtime, not at build time, and the places targeted by those relocations
will contain zeroes before that.

So refactor the endian swapping linker script constant generation code so
that it emits the upper and lower 32-bit words separately.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 6ad1fe5d9077a1ab40bf74b61994d2e770b00b14)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Iaa809a0b5fcf628e1e49cd6aaa0f31f31ce95c23

8 years agoUPSTREAM: arm64: add support for module PLTs
Ard Biesheuvel [Tue, 24 Nov 2015 11:37:35 +0000 (12:37 +0100)]
UPSTREAM: arm64: add support for module PLTs

This adds support for emitting PLTs at module load time for relative
branches that are out of range. This is a prerequisite for KASLR, which
may place the kernel and the modules anywhere in the vmalloc area,
making it more likely that branch target offsets exceed the maximum
range of +/- 128 MB.

In this version, I removed the distinction between relocations against
.init executable sections and ordinary executable sections. The reason
is that it is hardly worth the trouble, given that .init.text usually
does not contain that many far branches, and this version now only
reserves PLT entry space for jump and call relocations against undefined
symbols (since symbols defined in the same module can be assumed to be
within +/- 128 MB).

For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
built with -mcmodel=large gives the following relocation counts:

                    relocs    branches   unique     !local
  .text              3925       3347       518        219
  .init.text           11          8         7          1
  .exit.text            4          4         4          1
  .text.unlikely       81         67        36         17

('unique' means branches to unique type/symbol/addend combos, of which
!local is the subset referring to undefined symbols)

IOW, we are only emitting a single PLT entry for the .init sections, and
we are better off just adding it to the core PLT section instead.
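
For reference, a sketch of what such a PLT (veneer) entry looks like
(field names are assumed, __le32 comes from <linux/types.h>): the target
address is built up in x16, a register the AAPCS64 allows a veneer to
clobber, and then branched to.

  struct plt_entry {
          __le32 mov0;    /* movn x16, #imm16            */
          __le32 mov1;    /* movk x16, #imm16, lsl #16   */
          __le32 mov2;    /* movk x16, #imm16, lsl #32   */
          __le32 br;      /* br   x16                    */
  };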

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit fd045f6cd98ec4953147b318418bd45e441e52a3)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1b46bb817e7d16a1b9a394b100c9e5de46c0837c

8 years agoUPSTREAM: arm64: move brk immediate argument definitions to separate header
Ard Biesheuvel [Tue, 23 Feb 2016 07:56:45 +0000 (08:56 +0100)]
UPSTREAM: arm64: move brk immediate argument definitions to separate header

Instead of reversing the header dependency between asm/bug.h and
asm/debug-monitors.h, split off the brk instruction immediate value
defines into a new header asm/brk-imm.h, and include it from both.

This solves the circular dependency issue that prevents BUG() from
being used in some header files, and keeps the definitions together.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit f98deee9a9f8c47d05a0f64d86440882dca772ff)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Id4827af98ab3d413828c589bc379acecabeff108

8 years agoUPSTREAM: arm64: mm: use bit ops rather than arithmetic in pa/va translations
Ard Biesheuvel [Mon, 22 Feb 2016 17:46:04 +0000 (18:46 +0100)]
UPSTREAM: arm64: mm: use bit ops rather than arithmetic in pa/va translations

Since PAGE_OFFSET is chosen such that it cuts the kernel VA space right
in half, and since the size of the kernel VA space itself is always a
power of 2, we can treat PAGE_OFFSET as a bitmask and replace the
additions/subtractions with 'or' and 'and-not' operations.

For the comparison against PAGE_OFFSET, a mov/cmp/branch sequence ends
up getting replaced with a single tbz instruction. For the additions and
subtractions, we save a mov instruction since the mask is folded into the
instruction's immediate field.
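
The resulting translations look roughly like this (a sketch; the real
macros carry additional casts and configuration handling):

  #define __virt_to_phys(x)  (((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)  /* 'and-not' */
  #define __phys_to_virt(x)  ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET) /* 'or'      */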

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 8439e62a15614e8fcd43835d57b7245cd9870dc5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1ea4ef654dd7b7693f8713dab28ca0739b8a2c62

8 years agoUPSTREAM: arm64: mm: only perform memstart_addr sanity check if DEBUG_VM
Ard Biesheuvel [Mon, 22 Feb 2016 17:46:03 +0000 (18:46 +0100)]
UPSTREAM: arm64: mm: only perform memstart_addr sanity check if DEBUG_VM

Checking whether memstart_addr has been assigned every time it is
referenced adds a branch instruction that may hurt performance if
the reference in question occurs on a hot path. So only perform the
check if CONFIG_DEBUG_VM=y.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: replaced #ifdef with VM_BUG_ON]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit a92405f082d43267575444a6927085e4c8a69e4e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia5f206d9a2dbbdbfc3f05fe985d4eca309f0d889

8 years agoUPSTREAM: arm64: Use die() instead of panic() in do_page_fault()
Catalin Marinas [Fri, 19 Feb 2016 14:28:58 +0000 (14:28 +0000)]
UPSTREAM: arm64: Use die() instead of panic() in do_page_fault()

The former gives better error reporting on unhandled permission faults
(introduced by the UAO patches).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 70c8abc28762d04e36c92e07eee2ce6ab41049cb)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia419eccf1554a32fa4131ac15b277d4d2d4eb508

8 years agoUPSTREAM: arm64: allow kernel Image to be loaded anywhere in physical memory
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:42 +0000 (13:52 +0100)]
UPSTREAM: arm64: allow kernel Image to be loaded anywhere in physical memory

This relaxes the kernel Image placement requirements, so that it
may be placed at any 2 MB aligned offset in physical memory.

This is accomplished by ignoring PHYS_OFFSET when installing
memblocks, and accounting for the apparent virtual offset of
the kernel Image. As a result, virtual address references
below PAGE_OFFSET are correctly mapped onto physical references
into the kernel Image regardless of where it sits in memory.

Special care needs to be taken for dealing with memory limits passed
via mem=, since the generic implementation clips memory top down, which
may clip the kernel image itself if it is loaded high up in memory. To
deal with this case, we simply add back the memory covering the kernel
image, which may result in more memory to be retained than was passed
as a mem= parameter.
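
The mem= handling described above boils down to roughly the following
(a sketch; helper names follow mainline memblock, error handling elided):

  if (memory_limit != (phys_addr_t)ULLONG_MAX) {
          memblock_enforce_memory_limit(memory_limit);     /* clips RAM from the top down */
          memblock_add(__pa(_text), (u64)(_end - _text));  /* put the kernel image back   */
  }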

Since mem= should not be considered a production feature, a panic notifier
handler is installed that dumps the memory limit at panic time if one was
set.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit a7f8de168ace487fa7b88cb154e413cf40e87fc6)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1d28cb66b658ef89f9648918565ddc07df4660f8

8 years agoUPSTREAM: arm64: defer __va translation of initrd_start and initrd_end
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:41 +0000 (13:52 +0100)]
UPSTREAM: arm64: defer __va translation of initrd_start and initrd_end

Before deferring the assignment of memstart_addr in a subsequent patch, to
the moment where all memory has been discovered and possibly clipped based
on the size of the linear region and the presence of a mem= command line
parameter, we need to ensure that memstart_addr is not used to perform __va
translations before it is assigned.

One such use is in the generic early DT discovery of the initrd location,
which is recorded as a virtual address in the globals initrd_start and
initrd_end. So wire up the generic support to declare the initrd addresses,
and implement it without __va() translations, and perform the translation
after memstart_addr has been assigned.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit a89dea585371a9d5d85499db47c93f129be8e0c4)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I7d0b3dd7adcf069d4e7c1f58fd12e59c4cb62017

8 years agoUPSTREAM: arm64: move kernel image to base of vmalloc area
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:40 +0000 (13:52 +0100)]
UPSTREAM: arm64: move kernel image to base of vmalloc area

This moves the module area to right before the vmalloc area, and moves
the kernel image to the base of the vmalloc area. This is an intermediate
step towards implementing KASLR, which allows the kernel image to be
located anywhere in the vmalloc area.

Since other subsystems such as hibernate may still need to refer to the
kernel text or data segments via their linear addresses, both are mapped
in the linear region as well. The linear alias of the text region is
mapped read-only/non-executable to prevent inadvertent modification or
execution.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit f9040773b7bbbd9e98eb6184a263512a7cfc133f)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I698faed47bb7cfc256a1b5b5407a7c586bdc63b3

8 years agoBACKPORT: arm64: kvm: deal with kernel symbols outside of linear mapping
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:39 +0000 (13:52 +0100)]
BACKPORT: arm64: kvm: deal with kernel symbols outside of linear mapping

KVM on arm64 uses a fixed offset between the linear mapping at EL1 and
the HYP mapping at EL2. Before we can move the kernel virtual mapping
out of the linear mapping, we have to make sure that references to kernel
symbols that are accessed via the HYP mapping are translated to their
linear equivalent.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit a0bf9776cd0be4490d4675d4108e13379849fc7f)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I316f029d22a16773c168a151dba59bed7921fa7e

8 years agoUPSTREAM: arm64: decouple early fixmap init from linear mapping
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:38 +0000 (13:52 +0100)]
UPSTREAM: arm64: decouple early fixmap init from linear mapping

Since the early fixmap page tables are populated using pages that are
part of the static footprint of the kernel, they are covered by the
initial kernel mapping, and we can refer to them without using __va/__pa
translations, which are tied to the linear mapping.

Since the fixmap page tables are disjoint from the kernel mapping up
to the top level pgd entry, we can refer to bm_pte[] directly, and there
is no need to walk the page tables and perform __pa()/__va() translations
at each step.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 157962f5a8f236cab898b68bdaa69ce68922f0bf)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I49221a199962aec6d4f3712bfb3dd041d64ba99b

8 years agoUPSTREAM: arm64: pgtable: implement static [pte|pmd|pud]_offset variants
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:37 +0000 (13:52 +0100)]
UPSTREAM: arm64: pgtable: implement static [pte|pmd|pud]_offset variants

The page table accessors pte_offset(), pud_offset() and pmd_offset()
rely on __va translations, so they can only be used after the linear
mapping has been installed. For the early fixmap and kasan init routines,
whose page tables are allocated statically in the kernel image, these
functions will return bogus values. So implement pte_offset_kimg(),
pmd_offset_kimg() and pud_offset_kimg(), which can be used instead
before any page tables have been allocated dynamically.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 6533945a32c762c5db70d7a3ec251a040b2d9661)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ibea400f0938db568524fb83eb2d22d8658bbb56b

8 years agoUPSTREAM: arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:36 +0000 (13:52 +0100)]
UPSTREAM: arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region

This introduces the preprocessor symbol KIMAGE_VADDR which will serve as
the symbolic virtual base of the kernel region, i.e., the kernel's virtual
offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being
equal to PAGE_OFFSET, but in the future, it will be moved below it once
we move the kernel virtual mapping out of the linear mapping.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit ab893fb9f1b17f02139bce547bb4b69e96b9ae16)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I31427bd2b948a22bb8ce1d22109682fc66efb98d

8 years agoBACKPORT: arm64: add support for ioremap() block mappings
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:35 +0000 (13:52 +0100)]
BACKPORT: arm64: add support for ioremap() block mappings

This wires up the existing generic huge-vmap feature, which allows
ioremap() to use PMD or PUD sized block mappings. It also adds support
to the unmap path for dealing with block mappings, which will allow us
to unmap the __init region using unmap_kernel_range() in a subsequent
patch.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 324420bf91f60582bb481133db9547111768ef17)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I4765ae77f7d67c3972b7e5b19d43db434e8b777c

8 years agoBACKPORT: arm64: prevent potential circular header dependencies in asm/bug.h
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:34 +0000 (13:52 +0100)]
BACKPORT: arm64: prevent potential circular header dependencies in asm/bug.h

Currently, using BUG_ON() in header files is cumbersome, due to the fact
that asm/bug.h transitively includes a lot of other header files, resulting
in the actual BUG_ON() invocation appearing before its definition in the
preprocessor input. So let's reverse the #include dependency between
asm/bug.h and asm/debug-monitors.h, by moving the definition of BUG_BRK_IMM
from the latter to the former. Also fix up one user of asm/debug-monitors.h
which relied on a transitive include.
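
The reshuffle looks roughly like this (the immediate value shown is an
assumption; the reversed direction of the #include is the point):

  /* asm/bug.h: now owns the brk immediate used by BUG() */
  #define BUG_BRK_IMM  0x800

  /* asm/debug-monitors.h: includes asm/bug.h, not the other way around */
  #include <asm/bug.h>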

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 03336b1df9929e5d9c28fd9768948b6151cb046c)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I71470b7db9ef858a5a8368a872f931936c723a25

8 years agoUPSTREAM: of/fdt: factor out assignment of initrd_start/initrd_end
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:33 +0000 (13:52 +0100)]
UPSTREAM: of/fdt: factor out assignment of initrd_start/initrd_end

Since architectures may not yet have their linear mapping up and running
when the initrd address is discovered from the DT, factor out the
assignment of initrd_start and initrd_end, so that an architecture can
override it and use the translation it needs.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 369bc9abf22bf026e8645a4dd746b90649a2f6ee)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8c258bdc4367955314e9a5223dc4c7751a06a98d

8 years agoUPSTREAM: of/fdt: make memblock minimum physical address arch configurable
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:32 +0000 (13:52 +0100)]
UPSTREAM: of/fdt: make memblock minimum physical address arch configurable

By default, early_init_dt_add_memory_arch() ignores memory below
the base of the kernel image since it won't be addressable via the
linear mapping. However, this is not appropriate anymore once we
decouple the kernel text mapping from the linear mapping, so archs
may want to drop the low limit entirely. So allow the minimum to be
overridden by setting MIN_MEMBLOCK_ADDR.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 270522a04f7a9911983878fa37da467f9ff1c938)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I4bb2626a87493262a64584b3d808de260129127e

8 years agoUPSTREAM: arm64: Remove the get_thread_info() function
Catalin Marinas [Thu, 18 Feb 2016 15:50:04 +0000 (15:50 +0000)]
UPSTREAM: arm64: Remove the get_thread_info() function

This function was introduced by previous commits implementing UAO.
However, it can be replaced with task_thread_info() in
uao_thread_switch() or get_fs() in do_page_fault() (the latter being
called only on the current context, so no need for using the saved
pt_regs).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit e950631e84e7e38892ffbeee5e1816b270026b0e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ic6e9b6af7314fa83d9b0773ae3fac5a2ff34e67a

8 years agoBACKPORT: arm64: kernel: Don't toggle PAN on systems with UAO
James Morse [Fri, 5 Feb 2016 14:58:50 +0000 (14:58 +0000)]
BACKPORT: arm64: kernel: Don't toggle PAN on systems with UAO

If a CPU supports both Privileged Access Never (PAN) and User Access
Override (UAO), we don't need to disable/re-enable PAN round all
copy_to_user() like calls.

UAO alternatives cause these calls to use the 'unprivileged' load/store
instructions, which are overridden to be the privileged kind when
fs==KERNEL_DS.

This patch changes the copy_to_user() calls to have their PAN toggling
depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.

If both features are detected, PAN will be enabled, but the copy_to_user()
alternatives will not be applied. This means PAN will be enabled all the
time for these functions. If only PAN is detected, the toggling will be
enabled as normal.

This will save the time taken to disable/re-enable PAN, and allow us to
catch copy_to_user() accesses that occur with fs==KERNEL_DS.

Futex and swp-emulation code continue to hang their PAN toggling code on
ARM64_HAS_PAN.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 705441960033e66b63524521f153fbb28c99ddbd)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3fa35ebacaf401e1344e76932a26fdd14a8a3cdb

8 years agoUPSTREAM: arm64: cpufeature: Test 'matches' pointer to find the end of the list
James Morse [Fri, 5 Feb 2016 14:58:49 +0000 (14:58 +0000)]
UPSTREAM: arm64: cpufeature: Test 'matches' pointer to find the end of the list

CPU feature code uses the desc field as a test to find the end of the list,
this means every entry must have a description. This generates noise for
entries in the list that aren't really features, but combinations of them.
e.g.
> CPU features: detected feature: Privileged Access Never
> CPU features: detected feature: PAN and not UAO

These combination features are needed for corner cases with alternatives,
where cpu features interact.

Change all walkers of the arm64_features[] and arm64_hwcaps[] lists to test
'matches' not 'desc', and only print 'desc' if it is non-NULL.
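
The reworked list walk looks roughly like this (a sketch using the arm64
cpufeature structure's field names; not the exact diff):

  for (caps = arm64_features; caps->matches; caps++) {    /* NULL 'matches' ends the list */
          if (!caps->matches(caps))
                  continue;
          if (caps->desc)                                  /* combination entries may have none */
                  pr_info("CPU features: detected feature: %s\n", caps->desc);
          cpus_set_cap(caps->capability);
  }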

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 644c2ae198412c956700e55a2acf80b2541f6aa5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I4500bb7c547e2e67ea56e242a8621df539f6fd67

8 years agoUPSTREAM: arm64: kernel: Add support for User Access Override
James Morse [Fri, 5 Feb 2016 14:58:48 +0000 (14:58 +0000)]
UPSTREAM: arm64: kernel: Add support for User Access Override

'User Access Override' is a new ARMv8.2 feature which allows the
unprivileged load and store instructions to be overridden to behave in
the normal way.

This patch converts {get,put}_user() and friends to use ldtr*/sttr*
instructions - so that they can only access EL0 memory, then enables
UAO when fs==KERNEL_DS so that these functions can access kernel memory.

This allows user space's read/write permissions to be checked against the
page tables, instead of testing addr<USER_DS, then using the kernel's
read/write permissions.

Signed-off-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 57f4959bad0a154aeca125b7d38d1d9471a12422)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1a6a74a1f33b92d54368bd99387b55cf62930903

8 years agoUPSTREAM: arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
James Morse [Fri, 5 Feb 2016 14:58:47 +0000 (14:58 +0000)]
UPSTREAM: arm64: add ARMv8.2 id_aa64mmfr2 boiler plate

ARMv8.2 adds a new feature register id_aa64mmfr2. This patch adds the
cpu feature boiler plate used by the actual features in later patches.

Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 406e308770a92bd33995b2e5b681e86358328bb0)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I51db326696ba1e18e9a4c667acbeb3e25e0b151e

8 years agoUPSTREAM: arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro
James Morse [Fri, 5 Feb 2016 14:58:46 +0000 (14:58 +0000)]
UPSTREAM: arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro

Older assemblers may not have support for newer feature registers. To get
round this, sysreg.h provides a 'mrs_s' macro that takes a register
encoding and generates the raw instruction.

Change read_cpuid() to use mrs_s in all cases so that new registers
don't have to be a special case. Including sysreg.h means we need to move
the include and definition of read_cpuid() after the #ifndef __ASSEMBLY__
to avoid syntax errors in vmlinux.lds.

Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 0f54b14e76f5302afe164dc911b049b5df836ff5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ic59c81a97b4585b3c0964f293ddee08f7cb594ac

8 years agoUPSTREAM: arm64: use local label prefixes for __reg_num symbols
Ard Biesheuvel [Mon, 15 Feb 2016 08:51:49 +0000 (09:51 +0100)]
UPSTREAM: arm64: use local label prefixes for __reg_num symbols

The __reg_num_xNN symbols that are used to implement the msr_s and
mrs_s macros are recorded in the ELF metadata of each object file.
This does not affect the size of the final binary, but it does clutter
the output of tools like readelf, i.e.,

  $ readelf -a vmlinux |grep -c __reg_num_x
  50976

So let's use symbols with the .L prefix, these are strictly local,
and don't end up in the object files.

  $ readelf -a vmlinux |grep -c __reg_num_x
  0

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 7abc7d833c9eb16efc8a59239d3771a6e30be367)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Idefe9841ef7d1ddcc5161fc8de14153cfadaf4f3

8 years agoUPSTREAM: arm64: vdso: Mark vDSO code as read-only
David Brown [Wed, 10 Feb 2016 21:52:22 +0000 (13:52 -0800)]
UPSTREAM: arm64: vdso: Mark vDSO code as read-only

Although the arm64 vDSO is cleanly separated by code/data with the
code being read-only in userspace mappings, the code page is still
writable from the kernel.  There have been exploits (such as
http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
from a bad kernel write to full root.

Prevent this specific exploit on arm64 by putting the vDSO code page
in read-only memory as well.

Before the change:
[    3.138366] vdso: 2 pages (1 code @ ffffffc000a71000, 1 data @ ffffffc000a70000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000         520K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000        1528K     ro x  SHD AF            UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000           6M     ro x  SHD AF        BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b6000        1752K     ro x  SHD AF            UXN MEM/NORMAL
0xffffffc0009b6000-0xffffffc000c00000        2344K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000         116M     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000        1840M     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000           1G     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000         942M     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000         448K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000          40K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000         468K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000          78M     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000        1344K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000          64K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000          64K     RW NX SHD AF            UXN MEM/NORMAL

After:
[    3.138368] vdso: 2 pages (1 code @ ffffffc0006de000, 1 data @ ffffffc000a74000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000         520K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000        1528K     ro x  SHD AF            UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000           6M     ro x  SHD AF        BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b8000        1760K     ro x  SHD AF            UXN MEM/NORMAL
0xffffffc0009b8000-0xffffffc000c00000        2336K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000         116M     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000        1840M     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000           1G     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000         942M     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000         448K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000          40K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000         468K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000          78M     RW NX SHD AF        BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000        1344K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000          64K     RW NX SHD AF            UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000          64K     RW NX SHD AF            UXN MEM/NORMAL

Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.

Signed-off-by: David Brown <david.brown@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: removed superfluous __PAGE_ALIGNED_DATA]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 88d8a7994e564d209d4b2583496631c2357d386b)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3fe4b48df8b27313ac61c947746805442757932c

8 years agoUPSTREAM: arm64: ubsan: select ARCH_HAS_UBSAN_SANITIZE_ALL
Yang Shi [Fri, 5 Feb 2016 23:50:18 +0000 (15:50 -0800)]
UPSTREAM: arm64: ubsan: select ARCH_HAS_UBSAN_SANITIZE_ALL

To enable UBSAN on arm64, ARCH_HAS_UBSAN_SANITIZE_ALL need to be selected.

Basic kernel bootup test is passed on arm64 with CONFIG_UBSAN_SANITIZE_ALL
enabled.

Signed-off-by: Yang Shi <yang.shi@linaro.org>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit f0b7f8a4b44657386273a67179dd901c81cd11a6)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I640f50e70e3562bf1caf2ce164d6ed5640e041f6

8 years agoUPSTREAM: arm64: ptdump: Indicate whether memory should be faulting
Laura Abbott [Sat, 6 Feb 2016 00:24:48 +0000 (16:24 -0800)]
UPSTREAM: arm64: ptdump: Indicate whether memory should be faulting

With CONFIG_DEBUG_PAGEALLOC, pages do not have the valid bit
set when free in the buddy allocator. Add an indication to
the page table dumping code that the valid bit is not set,
'F' for fault, to make this easier to understand.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit d7e9d59494a9a5d83274f5af2148b82ca22dff3f)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I7e8200549e22cb3a01b5e827bf12b872b9cefc58

8 years agoUPSTREAM: arm64: Add support for ARCH_SUPPORTS_DEBUG_PAGEALLOC
Laura Abbott [Sat, 6 Feb 2016 00:24:47 +0000 (16:24 -0800)]
UPSTREAM: arm64: Add support for ARCH_SUPPORTS_DEBUG_PAGEALLOC

ARCH_SUPPORTS_DEBUG_PAGEALLOC provides a hook to map and unmap
pages for debugging purposes. This requires memory be mapped
with PAGE_SIZE mappings since breaking down larger mappings
at runtime will lead to TLB conflicts. Check if debug_pagealloc
is enabled at runtime and, if so, map everything with PAGE_SIZE
pages. Implement the functions to actually map/unmap the
pages at runtime.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
[catalin.marinas@arm.com: static annotation block_mappings_allowed() and #ifdef]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 83863f25e4b8214e994ef8b5647aad614d74b45d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia9b9bc5fcc1938bafbc2930f08d27973cfc3d038
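
A rough sketch of the shape of that hook (the helper sketch_set_valid()
is hypothetical; the real patch routes this through the arm64 page
attribute code): freeing clears the valid bit on the linear-map PTEs
so stray accesses fault, allocating sets it again.

    #include <linux/mm.h>

    /* Hypothetical helper: set or clear the valid bit over [addr, addr + size). */
    static void sketch_set_valid(unsigned long addr, unsigned long size, int valid)
    {
        /* ... walk the PAGE_SIZE mappings and toggle the valid bit ... */
    }

    void __kernel_map_pages(struct page *page, int numpages, int enable)
    {
        unsigned long addr = (unsigned long)page_address(page);

        sketch_set_valid(addr, numpages * PAGE_SIZE, enable);
    }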

8 years agoUPSTREAM: arm64: mm: avoid calling apply_to_page_range on empty range
Mika Penttilä [Tue, 26 Jan 2016 15:47:25 +0000 (15:47 +0000)]
UPSTREAM: arm64: mm: avoid calling apply_to_page_range on empty range

Calling apply_to_page_range with an empty range results in a BUG_ON
from the core code. This can be triggered by trying to load the st_drv
module with CONFIG_DEBUG_SET_MODULE_RONX enabled:

  kernel BUG at mm/memory.c:1874!
  Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
  Modules linked in:
  CPU: 3 PID: 1764 Comm: insmod Not tainted 4.5.0-rc1+ #2
  Hardware name: ARM Juno development board (r0) (DT)
  task: ffffffc9763b8000 ti: ffffffc975af8000 task.ti: ffffffc975af8000
  PC is at apply_to_page_range+0x2cc/0x2d0
  LR is at change_memory_common+0x80/0x108

This patch fixes the issue by making change_memory_common (called by the
set_memory_* functions) a NOP when numpages == 0, thereby avoiding the
erroneous call to apply_to_page_range and bringing us into line with x86
and s390.

Cc: <stable@vger.kernel.org>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Mika Penttilä <mika.penttila@nextfour.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 57adec866c0440976c96a4b8f5b59fb411b1cacb)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia107d3b324cc8237f669778a7c9c3abae8637501
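
The essence of the fix, as a simplified sketch (signature approximated;
not the exact arch/arm64/mm/pageattr.c function): return early before
apply_to_page_range() ever sees an empty range.

    #include <linux/mm.h>

    static int change_memory_common(unsigned long addr, int numpages,
                                    pgprot_t set_mask, pgprot_t clear_mask)
    {
        if (!numpages)
            return 0;    /* empty range: nothing to do, avoid the BUG_ON */

        /* ... existing logic: apply_to_page_range() over
         *     [addr, addr + numpages * PAGE_SIZE) ... */
        return 0;
    }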

8 years agoUPSTREAM: arm64: Drop alloc function from create_mapping
Laura Abbott [Sat, 6 Feb 2016 00:24:46 +0000 (16:24 -0800)]
UPSTREAM: arm64: Drop alloc function from create_mapping

create_mapping is only used in fixmap_remap_fdt. All create_mapping
calls need to happen on existing translation table pages without
additional allocations. Rather than pass an alloc function that would
only ever be called to fail, set it to NULL and catch any attempt to
use it. Also rename the function to create_mapping_noalloc to better
capture what is going on.

Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 132233a759580f5ce9b1bfaac9073e47d03c460d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I48db9adae540e79d47328b4045f26f22ecebc482
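
A hedged sketch of the resulting helper (the underlying walker's name
and argument list are illustrative): no allocator is passed at all, so
any attempt to allocate a new table page during the walk is caught
loudly rather than failing silently.

    #include <linux/mm.h>

    /* Hypothetical table walker; the real one lives in arch/arm64/mm/mmu.c. */
    static void __create_mapping_sketch(struct mm_struct *mm, phys_addr_t phys,
                                        unsigned long virt, phys_addr_t size,
                                        pgprot_t prot, void *(*alloc)(void));

    static void __init create_mapping_noalloc(phys_addr_t phys,
                                              unsigned long virt,
                                              phys_addr_t size, pgprot_t prot)
    {
        /* NULL allocator: the mapping must fit in existing table pages. */
        __create_mapping_sketch(&init_mm, phys, virt, size, prot, NULL);
    }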

8 years agoUPSTREAM: arm64: prefetch: add missing #include for spin_lock_prefetch
Will Deacon [Wed, 10 Feb 2016 10:07:30 +0000 (10:07 +0000)]
UPSTREAM: arm64: prefetch: add missing #include for spin_lock_prefetch

As of 52e662326e1e ("arm64: prefetch: don't provide spin_lock_prefetch
with LSE"), spin_lock_prefetch is patched at runtime when the LSE atomics
are in use. This relies on the ARM64_LSE_ATOMIC_INSN macro to drive
the alternatives framework, but that macro is only available via
asm/lse.h, which isn't explicitly included in processor.h. Consequently,
drivers can run into build failures such as:

   In file included from include/linux/prefetch.h:14:0,
                    from drivers/net/ethernet/intel/i40e/i40e_txrx.c:27:
   arch/arm64/include/asm/processor.h: In function 'spin_lock_prefetch':
   arch/arm64/include/asm/processor.h:183:15: error: expected string literal before 'ARM64_LSE_ATOMIC_INSN'
     asm volatile(ARM64_LSE_ATOMIC_INSN(

This patch adds the missing include and gets things building again.

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit afb83cc3f0e4f86ea0e1cc3db7a90f58f1abd4d5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I9a9e6b920f96b060781a01bff50a0ca8db78cf7f
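
Roughly the shape of the fix (the function body is paraphrased from
the error output above; the substantive change is the new #include in
asm/processor.h so that ARM64_LSE_ATOMIC_INSN is always visible):

    #include <asm/lse.h>    /* provides ARM64_LSE_ATOMIC_INSN */

    static inline void spin_lock_prefetch(const void *ptr)
    {
        /* prfm when LSE atomics are not in use, patched to a nop otherwise. */
        asm volatile(ARM64_LSE_ATOMIC_INSN("prfm pstl1strm, %a0",
                                           "nop") : : "p" (ptr));
    }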

8 years agoUPSTREAM: arm64: lib: patch in prfm for copy_page if requested
Andrew Pinski [Tue, 2 Feb 2016 12:46:26 +0000 (12:46 +0000)]
UPSTREAM: arm64: lib: patch in prfm for copy_page if requested

On ThunderX T88 pass 1 and pass 2, there is no hardware prefetching, so
we need to patch in explicit software prefetch instructions.

On the affected hardware, prefetching improves this code by 60% over the
original and by 2x over the code without prefetching, as measured with
the benchmark at https://github.com/apinski-cavium/copy_page_benchmark.

Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 60e0a09db24adc8809696307e5d97cc4ba7cb3e0)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3821a4d3a7b6fd68b4b0aca31478ec960e4e5172

8 years agoUPSTREAM: arm64: lib: improve copy_page to deal with 128 bytes at a time
Will Deacon [Tue, 2 Feb 2016 12:46:25 +0000 (12:46 +0000)]
UPSTREAM: arm64: lib: improve copy_page to deal with 128 bytes at a time

We want to avoid lots of different copy_page implementations, settling
for something that is "good enough" everywhere and hopefully easy to
understand and maintain whilst we're at it.

This patch reworks our copy_page implementation based on discussions
with Cavium on the list and benchmarking on Cortex-A processors so that:

  - The loop is unrolled to copy 128 bytes per iteration

  - The reads are offset so that we read from the next 128-byte block
    in the same iteration that we store the previous block

  - Explicit prefetch instructions are removed for now, since they hurt
    performance on CPUs with hardware prefetching

  - The loop exit condition is calculated at the start of the loop

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit 223e23e8aa26b0bb62c597637e77295e14f6a62c)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icabd86bbecc60ad0d730ab796e33b8762cecb1fb
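
A C analogue of the structure described above (the real implementation
is assembly in arch/arm64/lib/copy_page.S; this sketch only mirrors
the 128-bytes-per-iteration unrolling, with the next block read in the
same iteration that the previous block is stored):

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE_SKETCH  4096
    #define BLOCK             128

    void copy_page_sketch(void *dst, const void *src)
    {
        uint8_t prev[BLOCK], next[BLOCK];
        size_t off;

        /* Prime the pipeline with the first block. */
        memcpy(prev, src, BLOCK);

        for (off = BLOCK; off < PAGE_SIZE_SKETCH; off += BLOCK) {
            /* Read the next 128-byte block... */
            memcpy(next, (const uint8_t *)src + off, BLOCK);
            /* ...while storing the previous one. */
            memcpy((uint8_t *)dst + off - BLOCK, prev, BLOCK);
            memcpy(prev, next, BLOCK);
        }

        /* Store the final block. */
        memcpy((uint8_t *)dst + PAGE_SIZE_SKETCH - BLOCK, prev, BLOCK);
    }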

8 years agoUPSTREAM: arm64: prefetch: add alternative pattern for CPUs without a prefetcher
Will Deacon [Tue, 2 Feb 2016 12:46:24 +0000 (12:46 +0000)]
UPSTREAM: arm64: prefetch: add alternative pattern for CPUs without a prefetcher

Most CPUs have a hardware prefetcher which generally performs better
without explicit prefetch instructions issued by software; however,
some CPUs (e.g. Cavium ThunderX) rely solely on explicit prefetch
instructions.

This patch adds an alternative pattern (ARM64_HAS_NO_HW_PREFETCH) to
allow our library code to make use of explicit prefetch instructions
during things like copy routines only when the CPU does not have the
capability to perform the prefetching itself.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4

(cherry picked from commit d5370f754875460662abe8561388e019d90dd0c4)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ie33097c9b7786922ff8c457d16515c3188b8e94b
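
A simplified analogue of the policy (the real mechanism patches prfm
instructions in at boot via the alternatives framework, keyed off the
ARM64_HAS_NO_HW_PREFETCH capability; have_hw_prefetcher() below is a
hypothetical stand-in for that capability check):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical stand-in for the CPU capability; e.g. false on
     * ThunderX T88 pass 1/2, true on CPUs with a hardware prefetcher. */
    static bool have_hw_prefetcher(void)
    {
        return false;
    }

    void copy_words(unsigned long *dst, const unsigned long *src, size_t n)
    {
        size_t i;

        for (i = 0; i < n; i++) {
            /* Only issue software prefetches when the CPU will not
             * prefetch for us. */
            if (!have_hw_prefetcher())
                __builtin_prefetch(&src[i + 8]);
            dst[i] = src[i];
        }
    }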