Kees Cook [Wed, 17 Feb 2016 22:41:15 +0000 (14:41 -0800)]
UPSTREAM: arch: Introduce post-init read-only memory
One of the easiest ways to protect the kernel from attack is to reduce
the internal attack surface exposed when a "write" flaw is available. By
making as much of the kernel read-only as possible, we reduce the
attack surface.
Many things are written to only during __init, and never changed
again. These cannot be made "const" since the compiler will do the wrong
thing (we do actually need to write to them). Instead, move these items
into a memory region that will be made read-only during mark_rodata_ro()
which happens after all kernel __init code has finished.
This introduces __ro_after_init as a way to mark such memory, and adds
some documentation about the existing __read_mostly marking.
This improves the security of the Linux kernel by marking formerly
read-write memory regions as read-only on a fully booted up system.
Based on work by PaX Team and Brad Spengler.
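For illustration only (not part of the patch text), a minimal sketch of how a variable
might use the new annotation; the variable and initcall names are hypothetical:

  #include <linux/cache.h>   /* __ro_after_init (added by this patch series) */
  #include <linux/init.h>

  /* Written exactly once during __init, then treated as read-only after
   * mark_rodata_ro() has run. */
  static unsigned long boot_selected_mode __ro_after_init;

  static int __init pick_mode(void)
  {
          boot_selected_mode = 1;        /* last legitimate write */
          return 0;
  }
  early_initcall(pick_mode);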
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brad Spengler <spender@grsecurity.net>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-5-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I640f6d858d9770a5e480d12a1c716adf8842feb0
(cherry picked from commit c74ba8b3480da6ddaea17df2263ec09b869ac496)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Kees Cook [Wed, 17 Feb 2016 22:41:14 +0000 (14:41 -0800)]
UPSTREAM: x86/mm: Always enable CONFIG_DEBUG_RODATA and remove the Kconfig option
This removes the CONFIG_DEBUG_RODATA option and makes it always enabled.
This simplifies the code and also makes it clearer that read-only mapped
memory is just as fundamental a security feature in kernel-space as it is
in user-space.
Suggested-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-4-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I3e79c7c4ead79a81c1445f1b3dd28003517faf18
(cherry picked from commit 9ccaf77cf05915f51231d158abfd5448aedde758)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Kees Cook [Wed, 17 Feb 2016 22:41:13 +0000 (14:41 -0800)]
UPSTREAM: mm/init: Add 'rodata=off' boot cmdline parameter to disable read-only kernel mappings
It may be useful to debug writes to the readonly sections of memory,
so provide a cmdline "rodata=off" to allow for this. This can be
expanded in the future to support "log" and "write" modes, but that
will need to be architecture-specific.
This also makes KDB software breakpoints more usable, as read-only
mappings can now be disabled on any kernel.
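For example, the protection could be disabled for a debug session by appending the
parameter to the kernel command line (the other parameters shown are placeholders;
bootloader syntax varies):

  console=ttyAMA0 root=/dev/vda1 rodata=off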
Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Brown <david.brown@linaro.org>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Link: http://lkml.kernel.org/r/1455748879-21872-3-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: I67b818ca390afdd42ab1c27cb4f8ac64bbdb3b65
(cherry picked from commit d2aa1acad22f1bdd0cfa67b3861800e392254454)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Kees Cook [Wed, 17 Feb 2016 22:41:12 +0000 (14:41 -0800)]
UPSTREAM: asm-generic: Consolidate mark_rodata_ro()
Instead of defining mark_rodata_ro() in each architecture, consolidate it.
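A hedged sketch of what the consolidation amounts to (header location and config guard
are assumptions, not quoted from the patch): the prototype moves to a common header and
each architecture keeps only its implementation:

  /* include/linux/init.h (sketch) */
  #ifdef CONFIG_DEBUG_RODATA
  void mark_rodata_ro(void);
  #endif

  /* Each arch keeps only the implementation, e.g. on arm64: */
  void mark_rodata_ro(void)
  {
          /* remap .rodata (and, later, .data..ro_after_init) read-only
           * once all __init code has finished */
  }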
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Gross <agross@codeaurora.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ashok Kumar <ashoks@broadcom.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Borislav Petkov <bp@suse.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Brown <david.brown@linaro.org>
Cc: David Hildenbrand <dahi@linux.vnet.ibm.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Emese Revfy <re.emese@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Helge Deller <deller@gmx.de>
Cc: James E.J. Bottomley <jejb@parisc-linux.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathias Krause <minipli@googlemail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: PaX Team <pageexec@freemail.hu>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Stephen Boyd <sboyd@codeaurora.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: kernel-hardening@lists.openwall.com
Cc: linux-arch <linux-arch@vger.kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-parisc@vger.kernel.org
Link: http://lkml.kernel.org/r/1455748879-21872-2-git-send-email-keescook@chromium.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Bug: 31660652
Change-Id: Iec0c44b3f5d7948954da93fba6cb57888a2709de
(cherry picked from commit e267d97b83d9cecc16c54825f9f3ac7f72dc1e1e)
Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
Will Deacon [Wed, 8 Jun 2016 14:10:57 +0000 (15:10 +0100)]
UPSTREAM: arm64: spinlock: fix spin_unlock_wait for LSE atomics
Commit d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against
concurrent lockers") fixed spin_unlock_wait for LL/SC-based atomics under
the premise that the LSE atomics (in particular, the LDADDA instruction)
are indivisible.
Unfortunately, these instructions are only indivisible when used with the
-AL (full ordering) suffix and, consequently, the same issue can
theoretically be observed with LSE atomics, where a later (in program
order) load can be speculated before the write portion of the atomic
operation.
This patch fixes the issue by performing a CAS of the lock once we've
established that it's unlocked, in much the same way as the LL/SC code.
Fixes: d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 3a5facd09da848193f5bcb0dea098a298bc1a29d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icedaa4c508784bf43d0b5787586480fd668ccc49
Mark Rutland [Wed, 24 Aug 2016 17:02:08 +0000 (18:02 +0100)]
UPSTREAM: arm64: avoid TLB conflict with CONFIG_RANDOMIZE_BASE
When CONFIG_RANDOMIZE_BASE is selected, we modify the page tables to remap the
kernel at a newly-chosen VA range. We do this with the MMU disabled, but do not
invalidate TLBs prior to re-enabling the MMU with the new tables. Thus the old
mapping entries may still live in the TLBs, and we risk violating
Break-Before-Make requirements, leading to TLB conflicts and/or other issues.
We invalidate TLBs when we uninstall the idmap in early setup code, but prior to
this we are subject to issues relating to the Break-Before-Make violation.
Avoid these issues by invalidating the TLBs before the new mappings can be
used by the hardware.
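Illustratively, the kind of maintenance sequence involved looks like the following (shown
as C inline assembly purely as a sketch; the function name is hypothetical and the real
fix lives in the early assembly boot path, before the MMU is re-enabled):

  static inline void flush_stale_tlbs(void)
  {
          /* Sketch only: drop all stale EL1 TLB entries on the local CPU and
           * wait for completion before the new tables are used. */
          asm volatile(
                  "tlbi   vmalle1\n"      /* local TLB invalidate, all EL1 entries */
                  "dsb    nsh\n"          /* wait for the invalidation to finish   */
                  "isb"                   /* resynchronise the instruction stream  */
                  : : : "memory");
  }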
Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit fd363bd417ddb6103564c69cfcbd92d9a7877431)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I6c23ce55cdd8b66587b6787b8f28df8535e39f24
Catalin Marinas [Tue, 26 Jul 2016 17:16:55 +0000 (10:16 -0700)]
UPSTREAM: arm64: Only select ARM64_MODULE_PLTS if MODULES=y
Selecting CONFIG_RANDOMIZE_BASE=y and CONFIG_MODULES=n fails to build
the module PLTs support:
CC arch/arm64/kernel/module-plts.o
/work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c: In function ‘module_emit_plt_entry’:
/work/Linux/linux-2.6-aarch64/arch/arm64/kernel/module-plts.c:32:49: error: dereferencing pointer to incomplete type ‘struct module’
This patch selects ARM64_MODULE_PLTS conditionally only if MODULES is
enabled.
Fixes: f80fb3a3d508 ("arm64: add support for kernel ASLR")
Cc: <stable@vger.kernel.org> # 4.6+
Reported-by: Jeff Vander Stoep <jeffv@google.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit b9c220b589daaf140f5b8ebe502c98745b94e65c)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I446cb3aa78f1c64b5aa1e2e90fda13f7d46cac33
Catalin Marinas [Thu, 10 Mar 2016 18:30:56 +0000 (18:30 +0000)]
UPSTREAM: arm64: kasan: Use actual memory node when populating the kernel image shadow
With the 16KB or 64KB page configurations, the generic
vmemmap_populate() implementation warns on potential offnode
page_structs via vmemmap_verify() because the arm64 kasan_init() passes
NUMA_NO_NODE instead of the actual node for the kernel image memory.
Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: James Morse <james.morse@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 2f76969f2eef051bdd63d38b08d78e790440b0ad)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8985e5b4628a9c7076767d4565f7633635813b5c
Ard Biesheuvel [Thu, 25 Feb 2016 19:48:53 +0000 (20:48 +0100)]
UPSTREAM: arm64: lse: deal with clobbered IP registers after branch via PLT
The LSE atomics implementation uses runtime patching to patch in calls
to out of line non-LSE atomics implementations on cores that lack hardware
support for LSE. To avoid paying the overhead cost of a function call even
if no call ends up being made, the bl instruction is kept invisible to the
compiler, and the out of line implementations preserve all registers, not
just the ones that they are required to preserve as per the AAPCS64.
However, commit fd045f6cd98e ("arm64: add support for module PLTs") added
support for routing branch instructions via veneers if the branch target
offset exceeds the range of the ordinary relative branch instructions.
Since this deals with jump and call instructions that are exposed to ELF
relocations, the PLT code uses x16 to hold the address of the branch target
when it performs an indirect branch-to-register, something which is
explicitly allowed by the AAPCS64 (and ordinary compiler generated code
does not expect register x16 or x17 to retain their values across a bl
instruction).
Since the lse runtime patched bl instructions don't adhere to the AAPCS64,
they don't deal with this clobbering of registers x16 and x17. So add them
to the clobber list of the asm() statements that perform the call
instructions, and drop x16 and x17 from the list of registers that are
callee saved in the out of line non-LSE implementations.
In addition, since we have given these functions two scratch registers,
they no longer need to stack/unstack temp registers.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: factored clobber list into #define, updated Makefile comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 5be8b70af1ca78cefb8b756d157532360a5fd663)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia44a54eba315a47a6b8aaa2259b444e0139162c0
Ard Biesheuvel [Wed, 2 Mar 2016 08:47:13 +0000 (09:47 +0100)]
UPSTREAM: arm64: mm: check at build time that PAGE_OFFSET divides the VA space evenly
Commit 8439e62a1561 ("arm64: mm: use bit ops rather than arithmetic in
pa/va translations") changed the boundary check against PAGE_OFFSET from
an arithmetic comparison to a bit test. This means we now silently assume
that PAGE_OFFSET is a power of 2 that divides the kernel virtual address
space into two equal halves. So make that assumption explicit.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 6d2aa549de1fc998581d216de3853aa131aa4446)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8c3bc8cdb7d7f7dea092fd1a208b04583a141054
Catalin Marinas [Thu, 10 Mar 2016 18:41:16 +0000 (18:41 +0000)]
UPSTREAM: arm64: kasan: Fix zero shadow mapping overriding kernel image shadow
With the 16KB and 64KB page size configurations, SWAPPER_BLOCK_SIZE is
PAGE_SIZE and ARM64_SWAPPER_USES_SECTION_MAPS is 0. Since
kimg_shadow_end is not page aligned (_end shifted by
KASAN_SHADOW_SCALE_SHIFT), the edges of previously mapped kernel image
shadow via vmemmap_populate() may be overridden by subsequent calls to
kasan_populate_zero_shadow(), leading to kernel panics like below:
------------------------------------------------------------------------------
Unable to handle kernel paging request at virtual address fffffc100135068c
pgd = fffffc8009ac0000
[fffffc100135068c] *pgd=00000009ffee0003, *pud=00000009ffee0003, *pmd=00000009ffee0003, *pte=00e0000081a00793
Internal error: Oops: 9600004f [#1] PREEMPT SMP
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc4+ #1984
Hardware name: Juno (DT)
task: fffffe09001a0000 ti: fffffe0900200000 task.ti: fffffe0900200000
PC is at __memset+0x4c/0x200
LR is at kasan_unpoison_shadow+0x34/0x50
pc : [<fffffc800846f1cc>] lr : [<fffffc800821ff54>] pstate: 00000245
sp : fffffe0900203db0
x29: fffffe0900203db0 x28: 0000000000000000
x27: 0000000000000000 x26: 0000000000000000
x25: fffffc80099b69d0 x24: 0000000000000001
x23: 0000000000000000 x22: 0000000000002000
x21: dffffc8000000000 x20: 1fffff9001350a8c
x19: 0000000000002000 x18: 0000000000000008
x17: 0000000000000147 x16: ffffffffffffffff
x15: 79746972100e041d x14: ffffff0000000000
x13: ffff000000000000 x12: 0000000000000000
x11: 0101010101010101 x10: 1fffffc11c000000
x9 : 0000000000000000 x8 : fffffc100135068c
x7 : 0000000000000000 x6 : 000000000000003f
x5 : 0000000000000040 x4 : 0000000000000004
x3 : fffffc100134f651 x2 : 0000000000000400
x1 : 0000000000000000 x0 : fffffc100135068c
Process swapper/0 (pid: 1, stack limit = 0xfffffe0900200020)
Call trace:
[<fffffc800846f1cc>] __memset+0x4c/0x200
[<fffffc8008220044>] __asan_register_globals+0x5c/0xb0
[<fffffc8008a09d34>] _GLOBAL__sub_I_65535_1_sunrpc_cache_lookup+0x1c/0x28
[<fffffc8008f20d28>] kernel_init_freeable+0x104/0x274
[<fffffc80089e1948>] kernel_init+0x10/0xf8
[<fffffc8008093a00>] ret_from_fork+0x10/0x50
------------------------------------------------------------------------------
This patch aligns kimg_shadow_start and kimg_shadow_end to
SWAPPER_BLOCK_SIZE in all configurations.
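A hedged two-line sketch of that alignment (variable names taken from the description
above; round_down()/round_up() assumed from include/linux/kernel.h):

  /* Round the kernel-image shadow region out to SWAPPER_BLOCK_SIZE so that
   * kasan_populate_zero_shadow() cannot override its edge mappings. */
  kimg_shadow_start = round_down(kimg_shadow_start, SWAPPER_BLOCK_SIZE);
  kimg_shadow_end   = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);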
Fixes: f9040773b7bb ("arm64: move kernel image to base of vmalloc area")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 2776e0e8ef683a42fe3e9a5facf576b73579700e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I13a6b38aefbeddd20bc87cb1382f2787bbc5cf9c
Mark Rutland [Tue, 22 Mar 2016 10:11:45 +0000 (10:11 +0000)]
UPSTREAM: arm64: consistently use p?d_set_huge
Commit 324420bf91f60582 ("arm64: add support for ioremap() block
mappings") added new p?d_set_huge functions which do the hard work to
generate and set a correct block entry.
These differ from open-coded huge page creation in the early page table
code by explicitly setting the P?D_TYPE_SECT bits (which are implicitly
retained by mk_sect_prot() for any valid prot), but are otherwise
identical (and cannot fail on arm64).
For simplicity and consistency, make use of these in the initial page
table creation code.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit c661cb1c537e2364bfdabb298fb934fd77445e98)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I25e58a1626831c2c709abcded989d1770fea851c
Mark Rutland [Tue, 15 Mar 2016 11:22:57 +0000 (11:22 +0000)]
UPSTREAM: arm64: fix KASLR boot-time I-cache maintenance
Commit f80fb3a3d50843a4 ("arm64: add support for kernel ASLR") missed a
DSB necessary to complete I-cache maintenance in the primary boot path,
and hence stale instructions may still be present in the I-cache and may
be executed until the I-cache maintenance naturally completes.
Since commit 8ec41987436d566f ("arm64: mm: ensure patched kernel text is
fetched from PoU"), all CPUs invalidate their I-caches after their MMU
is enabled. Prior to a CPU's MMU having been enabled, arbitrary lines may
have been fetched from the PoC into I-caches. We never patch text
expected to be executed with the MMU off. Thus, it is unnecessary to
perform broadcast I-cache maintenance in the primary boot path.
This patch reduces the scope of the I-cache maintenance to the local
CPU, and adds the missing DSB with similar scope, matching prior
maintenance in the primary boot path.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit b90b4a608ea2401cc491828f7a385edd2e236e37)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ic66b5fec29867b86782ad6c3243642afc1f40080
Will Deacon [Wed, 9 Mar 2016 15:22:55 +0000 (15:22 +0000)]
UPSTREAM: arm64: hugetlb: partial revert of 66b3923a1a0f
Commit 66b3923a1a0f ("arm64: hugetlb: add support for PTE contiguous bit")
introduced support for huge pages using the contiguous bit in the PTE
as opposed to block mappings, which may be slightly unwieldy (512M) in
64k page configurations.
Unfortunately, this support has resulted in some late regressions when
running the libhugetlbfs test suite with 64k pages and CONFIG_DEBUG_VM
as a result of a BUG:
| readback (2M: 64): ------------[ cut here ]------------
| kernel BUG at fs/hugetlbfs/inode.c:446!
| Internal error: Oops - BUG: 0 [#1] SMP
| Modules linked in:
| CPU: 7 PID: 1448 Comm: readback Not tainted 4.5.0-rc7 #148
| Hardware name: linux,dummy-virt (DT)
| task: fffffe0040964b00 ti: fffffe00c2668000 task.ti: fffffe00c2668000
| PC is at remove_inode_hugepages+0x44c/0x480
| LR is at remove_inode_hugepages+0x264/0x480
Rather than revert the entire patch, simply avoid advertising the
contiguous huge page sizes for now while people are actively working on
a fix. This patch can then be reverted once things have been sorted out.
Cc: David Woods <dwoods@ezchip.com>
Reported-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit ff7925848b50050732ac0401e0acf27e8b241d7b)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3a9751fa79b2d2871dbdc06ea1aa3d1336bb4f4f
Yang Shi [Thu, 11 Feb 2016 21:53:10 +0000 (13:53 -0800)]
UPSTREAM: arm64: make irq_stack_ptr more robust
Switching between stacks is only valid if we are tracing ourselves while on the
irq_stack, so it is only valid when in current and non-preemptible context;
otherwise it is just zeroed off.
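A hedged fragment showing the resulting check in the unwind/backtrace path (names such
as tsk and IRQ_STACK_PTR are assumptions based on the description, not a quote of the
patch):

  /* Only trust the per-cpu IRQ stack pointer when unwinding the current,
   * non-preemptible context; otherwise treat it as unavailable. */
  unsigned long irq_stack_ptr;

  if (tsk == current && !preemptible())
          irq_stack_ptr = IRQ_STACK_PTR(smp_processor_id());
  else
          irq_stack_ptr = 0;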
Fixes: 132cd887b5c5 ("arm64: Modify stack trace and dump for use with irq_stack")
Acked-by: James Morse <james.morse@arm.com>
Tested-by: James Morse <james.morse@arm.com>
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit a80a0eb70c358f8c7dda4bb62b2278dc6285217b)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I431d3d5e8e1f556ddfef283af88dd2f63b825f7c
Ard Biesheuvel [Tue, 26 Jan 2016 13:48:29 +0000 (14:48 +0100)]
UPSTREAM: arm64: efi: invoke EFI_RNG_PROTOCOL to supply KASLR randomness
Since arm64 does not use a decompressor that supplies an execution
environment where it is feasible to some extent to provide a source of
randomness, the arm64 KASLR kernel depends on the bootloader to supply
some random bits in the /chosen/kaslr-seed DT property upon kernel entry.
On UEFI systems, we can use the EFI_RNG_PROTOCOL, if supplied, to obtain
some random bits. At the same time, use it to randomize the offset of the
kernel Image in physical memory.
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 2b5fe07a78a09a32002642b8a823428ade611f16)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I9cb7ae5727dfdf3726b1c9544bce74722ec77bbd
Ard Biesheuvel [Mon, 11 Jan 2016 10:47:49 +0000 (11:47 +0100)]
UPSTREAM: efi: stub: use high allocation for converted command line
Before we can move the command line processing before the allocation
of the kernel, which is required for detecting the 'nokaslr' option
which controls that allocation, move the converted command line higher
up in memory, to prevent it from interfering with the kernel itself.
Since x86 needs the address to fit in 32 bits, use UINT_MAX as the upper
bound there. Otherwise, use ULONG_MAX (i.e., no limit)
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 48fcb2d0216103d15306caa4814e2381104df6d8)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ie959355658d3f2f1819bee842c77cc5eef54b8e7
Ard Biesheuvel [Mon, 11 Jan 2016 09:43:16 +0000 (10:43 +0100)]
UPSTREAM: efi: stub: add implementation of efi_random_alloc()
This implements efi_random_alloc(), which allocates a chunk of memory of
a certain size at a certain alignment, and uses the random_seed argument
it receives to randomize the address of the allocation.
This is implemented by iterating over the UEFI memory map, counting the
number of suitable slots (aligned offsets) within each region, and picking
a random number between 0 and 'number of slots - 1' to select the slot. This
should guarantee that each possible offset is equally likely to be chosen.
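A hedged C sketch of the two-pass slot selection described above; struct mem_desc,
slots_in_region() and allocate_at() are illustrative placeholders, not the stub's
actual API:

  static unsigned long pick_random_slot(struct mem_desc *map,
                                        unsigned long map_entries,
                                        unsigned long size, unsigned long align,
                                        unsigned long random_seed)
  {
          unsigned long total_slots = 0, target, i;

          for (i = 0; i < map_entries; i++)          /* pass 1: count slots */
                  total_slots += slots_in_region(&map[i], size, align);

          if (!total_slots)
                  return 0;

          target = random_seed % total_slots;        /* uniform slot choice */

          for (i = 0; i < map_entries; i++) {        /* pass 2: place it */
                  unsigned long n = slots_in_region(&map[i], size, align);

                  if (target < n)
                          return allocate_at(&map[i], target, size, align);
                  target -= n;
          }
          return 0;
  }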
Suggested-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 2ddbfc81eac84a299cb4747a8764bc43f23e9008)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8f59e3e91a71c752d69fd08ca43a890977c82919
Ard Biesheuvel [Sun, 10 Jan 2016 10:29:07 +0000 (11:29 +0100)]
BACKPORT: efi: stub: implement efi_get_random_bytes() based on EFI_RNG_PROTOCOL
This exposes the firmware's implementation of EFI_RNG_PROTOCOL via a new
function efi_get_random_bytes().
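A hedged sketch of what such a helper can look like; the protocol structure, GUID macro
and efi_call_early() wrapper are assumptions about the stub environment, not a verbatim
quote of the patch:

  efi_status_t efi_get_random_bytes(efi_system_table_t *sys_table_arg,
                                    unsigned long size, u8 *out)
  {
          efi_guid_t rng_proto = EFI_RNG_PROTOCOL_GUID;
          struct efi_rng_protocol *rng;
          efi_status_t status;

          /* Ask the firmware for an EFI_RNG_PROTOCOL instance. */
          status = efi_call_early(locate_protocol, &rng_proto, NULL,
                                  (void **)&rng);
          if (status != EFI_SUCCESS)
                  return status;

          /* Fill 'out' with 'size' bytes from the default RNG algorithm. */
          return rng->get_rng(rng, NULL, size, out);
  }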
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit e4fbf4767440472f9d23b0f25a2b905e1c63b6a8)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Id46036b78c2efd223b6cd5488e512fd93e8f597d
Ard Biesheuvel [Fri, 29 Jan 2016 10:59:03 +0000 (11:59 +0100)]
BACKPORT: arm64: kaslr: randomize the linear region
When KASLR is enabled (CONFIG_RANDOMIZE_BASE=y), and entropy has been
provided by the bootloader, randomize the placement of RAM inside the
linear region if sufficient space is available. For instance, on a 4KB
granule/3 levels kernel, the linear region is 256 GB in size, and we can
choose any 1 GB aligned offset that is far enough from the top of the
address space to fit the distance between the start of the lowest memblock
and the top of the highest memblock.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit c031a4213c11a5db475f528c182f7b3858df11db)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I272de8ee358351d95eacc7dc5f47600adec3e813
Ard Biesheuvel [Fri, 26 Feb 2016 16:57:14 +0000 (17:57 +0100)]
UPSTREAM: arm64: mm: treat memstart_addr as a signed quantity
Commit c031a4213c11 ("arm64: kaslr: randomize the linear region")
implements randomization of the linear region, by subtracting a random
multiple of PUD_SIZE from memstart_addr. This causes the virtual mapping
of system RAM to move upwards in the linear region, and at the same time
causes memstart_addr to assume a value which may be negative if the offset
of system RAM in the physical space is smaller than its offset relative to
PAGE_OFFSET in the virtual space.
Since memstart_addr is effectively an offset now, redefine its type as s64
so that expressions involving shifting or division preserve its sign.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 020d044f66874eba058ce8264fc550f3eca67879)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I0482ebc13baaa9005cf372795e656c2417be9d1c
Ard Biesheuvel [Fri, 26 Feb 2016 16:57:13 +0000 (17:57 +0100)]
UPSTREAM: arm64: vmemmap: use virtual projection of linear region
Commit dd006da21646 ("arm64: mm: increase VA range of identity map") made
some changes to the memory mapping code to allow physical memory to reside
at an offset that exceeds the size of the virtual mapping.
However, since the size of the vmemmap area is proportional to the size of
the VA area, but it is populated relative to the physical space, we may
end up with the struct page array being mapped outside of the vmemmap
region. For instance, on my Seattle A0 box, I can see the following output
in the dmesg log.
vmemmap : 0xffffffbdc0000000 - 0xffffffbfc0000000 ( 8 GB maximum)
0xffffffbfc0000000 - 0xffffffbfd0000000 ( 256 MB actual)
We can fix this by deciding that the vmemmap region is not a projection of
the physical space, but of the virtual space above PAGE_OFFSET, i.e., the
linear region. This way, we are guaranteed that the vmemmap region is of
sufficient size, and we can even reduce the size by half.
Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit dfd55ad85e4a7fbaa82df12467515ac3c81e8a3e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8112d910f9659941dab6de5b3791f395150c77f1
Ard Biesheuvel [Tue, 26 Jan 2016 13:12:01 +0000 (14:12 +0100)]
BACKPORT: arm64: add support for kernel ASLR
This adds support for KASLR, based on entropy provided by
the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
of the address space (VA_BITS) and the page size, the entropy in the
virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
4 levels), with the sidenote that displacements that result in the kernel
image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
granule kernels, respectively) are not allowed, and will be rounded up to
an acceptable value.
If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
randomized independently from the core kernel. This makes it less likely
that the location of core kernel data structures can be determined by an
adversary, but causes all function calls from modules into the core kernel
to be resolved via entries in the module PLTs.
If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
randomized by choosing a page aligned 128 MB region inside the interval
[_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
entropy (depending on page size), independently of the kernel randomization,
but still guarantees that modules are within the range of relative branch
and jump instructions (with the caveat that, since the module region is
shared with other uses of the vmalloc area, modules may need to be loaded
further away if the module region is exhausted)
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit f80fb3a3d50843a401dac4b566b3b131da8077a2)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3f5fafa4e92e5ff39259d57065541366237eb021
Ard Biesheuvel [Tue, 26 Jan 2016 08:13:44 +0000 (09:13 +0100)]
UPSTREAM: arm64: add support for building vmlinux as a relocatable PIE binary
This implements CONFIG_RELOCATABLE, which links the final vmlinux
image with a dynamic relocation section, allowing the early boot code
to perform a relocation to a different virtual address at runtime.
This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 1e48ef7fcc374051730381a2a05da77eb4eafdb0)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: If02e065722d438f85feb62240fc230e16f58e912
Ard Biesheuvel [Fri, 1 Jan 2016 14:02:12 +0000 (15:02 +0100)]
UPSTREAM: arm64: switch to relative exception tables
Instead of using absolute addresses for both the exception location
and the fixup, use offsets relative to the exception table entry values.
Not only does this cut the size of the exception table in half, it is
also a prerequisite for KASLR, since absolute exception table entries
are subject to dynamic relocation, which is incompatible with the sorting
of the exception table that occurs at build time.
This patch also introduces the _ASM_EXTABLE preprocessor macro (which
exists on x86 as well) and its _asm_extable assembly counterpart, as
shorthands to emit exception table entries.
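A hedged sketch of what the C-side shorthand can look like (both fields emitted as
offsets relative to the entry itself; the exact upstream spelling may differ slightly):

  /* Emit one relative exception-table entry: 'from' is the faulting
   * instruction, 'to' is the fixup, both stored as offsets from the
   * entry's own location rather than as absolute addresses. */
  #define _ASM_EXTABLE(from, to)                                          \
          "       .pushsection    __ex_table, \"a\"\n"                    \
          "       .align          3\n"                                    \
          "       .long           (" #from " - .), (" #to " - .)\n"       \
          "       .popsection\n"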
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 6c94f27ac847ff8ef15b3da5b200574923bd6287)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icedda8ee8c32843c439765783816d7d71ca0073a
Ard Biesheuvel [Fri, 1 Jan 2016 11:39:09 +0000 (12:39 +0100)]
UPSTREAM: extable: add support for relative extables to search and sort routines
This adds support to the generic search_extable() and sort_extable()
implementations for dealing with exception table entries whose fields
contain relative offsets rather than absolute addresses.
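A hedged sketch of how the search side can decode such entries (the helper name and
config symbol are assumptions based on the description):

  /* With relative extables each field stores a signed offset from its own
   * address; converting back to an absolute address is a single addition. */
  static inline unsigned long ex_to_insn(const struct exception_table_entry *x)
  {
  #ifdef ARCH_HAS_RELATIVE_EXTABLE
          return (unsigned long)&x->insn + x->insn;
  #else
          return x->insn;
  #endif
  }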
Acked-by: Helge Deller <deller@gmx.de>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit a272858a3c1ecd4a935ba23c66668f81214bd110)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I9d144d351d547c49bf3203a69dfff3cb71a51177
Ard Biesheuvel [Sun, 10 Jan 2016 10:42:28 +0000 (11:42 +0100)]
UPSTREAM: scripts/sortextable: add support for ET_DYN binaries
Add support to scripts/sortextable for handling relocatable (PIE)
executables, whose ELF type is ET_DYN, not ET_EXEC. Other than adding
support for the new type, no changes are needed.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 7b957b6e603623ef8b2e8222fa94b976df613fa2)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: If55296ef4934b99c38ceb5acbd7c4a7fb23f24c1
James Morse [Tue, 2 Feb 2016 15:53:59 +0000 (15:53 +0000)]
UPSTREAM: arm64: futex.h: Add missing PAN toggling
futex.h's futex_atomic_cmpxchg_inatomic() does not use the
__futex_atomic_op() macro and needs its own PAN toggling. This was missed
when the feature was implemented.
Fixes: 338d4f49d6f ("arm64: kernel: Add support for Privileged Access Never")
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 811d61e384e24759372bb3f01772f3744b0a8327)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I6e7b338a1af17b784d4196101422c3acee3b88ed
Ard Biesheuvel [Mon, 11 Jan 2016 16:08:26 +0000 (17:08 +0100)]
UPSTREAM: arm64: make asm/elf.h available to asm files
This reshuffles some code in asm/elf.h and puts a #ifndef __ASSEMBLY__
around its C definitions so that the CPP defines can be used in asm
source files as well.
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 4a2e034e5cdadde4c712f79bdd57d1455c76a3db)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ic499e950d2ef297d10848862a6dfa07b90887f4c
Ard Biesheuvel [Sat, 26 Dec 2015 11:46:40 +0000 (12:46 +0100)]
UPSTREAM: arm64: avoid dynamic relocations in early boot code
Before implementing KASLR for arm64 by building a self-relocating PIE
executable, we have to ensure that values we use before the relocation
routine is executed are not subject to dynamic relocation themselves.
This applies not only to virtual addresses, but also to values that are
supplied by the linker at build time and relocated using R_AARCH64_ABS64
relocations.
So instead, use assemble time constants, or force the use of static
relocations by folding the constants into the instructions.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 2bf31a4a05f5b00f37d65ba029d36a0230286cb7)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icce0176591e3c0ae444e1ea54258efe677933c5b
Ard Biesheuvel [Sat, 26 Dec 2015 12:48:02 +0000 (13:48 +0100)]
UPSTREAM: arm64: avoid R_AARCH64_ABS64 relocations for Image header fields
Unfortunately, the current way of using the linker to emit build time
constants into the Image header will no longer work once we switch to
the use of PIE executables. The reason is that such constants are emitted
into the binary using R_AARCH64_ABS64 relocations, which are resolved at
runtime, not at build time, and the places targeted by those relocations
will contain zeroes before that.
So refactor the endian swapping linker script constant generation code so
that it emits the upper and lower 32-bit words separately.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 6ad1fe5d9077a1ab40bf74b61994d2e770b00b14)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Iaa809a0b5fcf628e1e49cd6aaa0f31f31ce95c23
Ard Biesheuvel [Tue, 24 Nov 2015 11:37:35 +0000 (12:37 +0100)]
UPSTREAM: arm64: add support for module PLTs
This adds support for emitting PLTs at module load time for relative
branches that are out of range. This is a prerequisite for KASLR, which
may place the kernel and the modules anywhere in the vmalloc area,
making it more likely that branch target offsets exceed the maximum
range of +/- 128 MB.
In this version, I removed the distinction between relocations against
.init executable sections and ordinary executable sections. The reason
is that it is hardly worth the trouble, given that .init.text usually
does not contain that many far branches, and this version now only
reserves PLT entry space for jump and call relocations against undefined
symbols (since symbols defined in the same module can be assumed to be
within +/- 128 MB)
For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
built with -mcmodel=large gives the following relocation counts:
                   relocs    branches   unique     !local
  .text              3925      3347       518        219
  .init.text           11         8         7          1
  .exit.text            4         4         4          1
  .text.unlikely       81        67        36         17
('unique' means branches to unique type/symbol/addend combos, of which
!local is the subset referring to undefined symbols)
IOW, we are only emitting a single PLT entry for the .init sections, and
we are better off just adding it to the core PLT section instead.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit fd045f6cd98ec4953147b318418bd45e441e52a3)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1b46bb817e7d16a1b9a394b100c9e5de46c0837c
Ard Biesheuvel [Tue, 23 Feb 2016 07:56:45 +0000 (08:56 +0100)]
UPSTREAM: arm64: move brk immediate argument definitions to separate header
Instead of reversing the header dependency between asm/bug.h and
asm/debug-monitors.h, split off the brk instruction immediate value
defines into a new header asm/brk-imm.h, and include it from both.
This solves the circular dependency issue that prevents BUG() from
being used in some header files, and keeps the definitions together.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit f98deee9a9f8c47d05a0f64d86440882dca772ff)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Id4827af98ab3d413828c589bc379acecabeff108
Ard Biesheuvel [Mon, 22 Feb 2016 17:46:04 +0000 (18:46 +0100)]
UPSTREAM: arm64: mm: use bit ops rather than arithmetic in pa/va translations
Since PAGE_OFFSET is chosen such that it cuts the kernel VA space right
in half, and since the size of the kernel VA space itself is always a
power of 2, we can treat PAGE_OFFSET as a bitmask and replace the
additions/subtractions with 'or' and 'and-not' operations.
For the comparison against PAGE_OFFSET, a mov/cmp/branch sequence ends
up getting replaced with a single tbz instruction. For the additions and
subtractions, we save a mov instruction since the mask is folded into the
instruction's immediate field.
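Illustratively, under the assumption described above (PAGE_OFFSET being a single high
bit of the VA space), the linear-map translations become mask operations; this is a
sketch, not a verbatim quote of the patch:

  /* Translate between linear-map virtual addresses and physical addresses
   * with 'and-not'/'or' instead of subtraction/addition. */
  #define __virt_to_phys(x) (((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
  #define __phys_to_virt(x) ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)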
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 8439e62a15614e8fcd43835d57b7245cd9870dc5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1ea4ef654dd7b7693f8713dab28ca0739b8a2c62
Ard Biesheuvel [Mon, 22 Feb 2016 17:46:03 +0000 (18:46 +0100)]
UPSTREAM: arm64: mm: only perform memstart_addr sanity check if DEBUG_VM
Checking whether memstart_addr has been assigned every time it is
referenced adds a branch instruction that may hurt performance if
the reference in question occurs on a hot path. So only perform the
check if CONFIG_DEBUG_VM=y.
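A hedged sketch of the resulting definition (the precise "not yet assigned" sentinel
check is an assumption; VM_BUG_ON() compiles away when CONFIG_DEBUG_VM is disabled):

  extern s64 memstart_addr;
  /* Only pay for the sanity check on CONFIG_DEBUG_VM kernels. */
  #define PHYS_OFFSET ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })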
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: replaced #ifdef with VM_BUG_ON]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit a92405f082d43267575444a6927085e4c8a69e4e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia5f206d9a2dbbdbfc3f05fe985d4eca309f0d889
Catalin Marinas [Fri, 19 Feb 2016 14:28:58 +0000 (14:28 +0000)]
UPSTREAM: arm64: User die() instead of panic() in do_page_fault()
The former gives better error reporting on unhandled permission faults
(introduced by the UAO patches).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 70c8abc28762d04e36c92e07eee2ce6ab41049cb)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia419eccf1554a32fa4131ac15b277d4d2d4eb508
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:42 +0000 (13:52 +0100)]
UPSTREAM: arm64: allow kernel Image to be loaded anywhere in physical memory
This relaxes the kernel Image placement requirements, so that it
may be placed at any 2 MB aligned offset in physical memory.
This is accomplished by ignoring PHYS_OFFSET when installing
memblocks, and accounting for the apparent virtual offset of
the kernel Image. As a result, virtual address references
below PAGE_OFFSET are correctly mapped onto physical references
into the kernel Image regardless of where it sits in memory.
Special care needs to be taken for dealing with memory limits passed
via mem=, since the generic implementation clips memory top down, which
may clip the kernel image itself if it is loaded high up in memory. To
deal with this case, we simply add back the memory covering the kernel
image, which may result in more memory to be retained than was passed
as a mem= parameter.
Since mem= should not be considered a production feature, a panic notifier
handler is installed that dumps the memory limit at panic time if one was
set.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit a7f8de168ace487fa7b88cb154e413cf40e87fc6)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1d28cb66b658ef89f9648918565ddc07df4660f8
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:41 +0000 (13:52 +0100)]
UPSTREAM: arm64: defer __va translation of initrd_start and initrd_end
Before deferring the assignment of memstart_addr in a subsequent patch, to
the moment where all memory has been discovered and possibly clipped based
on the size of the linear region and the presence of a mem= command line
parameter, we need to ensure that memstart_addr is not used to perform __va
translations before it is assigned.
One such use is in the generic early DT discovery of the initrd location,
which is recorded as a virtual address in the globals initrd_start and
initrd_end. So wire up the generic support to declare the initrd addresses,
and implement it without __va() translations, and perform the translation
after memstart_addr has been assigned.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit a89dea585371a9d5d85499db47c93f129be8e0c4)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I7d0b3dd7adcf069d4e7c1f58fd12e59c4cb62017
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:40 +0000 (13:52 +0100)]
UPSTREAM: arm64: move kernel image to base of vmalloc area
This moves the module area to right before the vmalloc area, and moves
the kernel image to the base of the vmalloc area. This is an intermediate
step towards implementing KASLR, which allows the kernel image to be
located anywhere in the vmalloc area.
Since other subsystems such as hibernate may still need to refer to the
kernel text or data segments via their linear addresses, both are mapped
in the linear region as well. The linear alias of the text region is
mapped read-only/non-executable to prevent inadvertent modification or
execution.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit f9040773b7bbbd9e98eb6184a263512a7cfc133f)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I698faed47bb7cfc256a1b5b5407a7c586bdc63b3
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:39 +0000 (13:52 +0100)]
BACKPORT: arm64: kvm: deal with kernel symbols outside of linear mapping
KVM on arm64 uses a fixed offset between the linear mapping at EL1 and
the HYP mapping at EL2. Before we can move the kernel virtual mapping
out of the linear mapping, we have to make sure that references to kernel
symbols that are accessed via the HYP mapping are translated to their
linear equivalent.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit a0bf9776cd0be4490d4675d4108e13379849fc7f)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I316f029d22a16773c168a151dba59bed7921fa7e
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:38 +0000 (13:52 +0100)]
UPSTREAM: arm64: decouple early fixmap init from linear mapping
Since the early fixmap page tables are populated using pages that are
part of the static footprint of the kernel, they are covered by the
initial kernel mapping, and we can refer to them without using __va/__pa
translations, which are tied to the linear mapping.
Since the fixmap page tables are disjoint from the kernel mapping up
to the top level pgd entry, we can refer to bm_pte[] directly, and there
is no need to walk the page tables and perform __pa()/__va() translations
at each step.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 157962f5a8f236cab898b68bdaa69ce68922f0bf)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I49221a199962aec6d4f3712bfb3dd041d64ba99b
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:37 +0000 (13:52 +0100)]
UPSTREAM: arm64: pgtable: implement static [pte|pmd|pud]_offset variants
The page table accessors pte_offset(), pud_offset() and pmd_offset()
rely on __va translations, so they can only be used after the linear
mapping has been installed. For the early fixmap and kasan init routines,
whose page tables are allocated statically in the kernel image, these
functions will return bogus values. So implement pte_offset_kimg(),
pmd_offset_kimg() and pud_offset_kimg(), which can be used instead
before any page tables have been allocated dynamically.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 6533945a32c762c5db70d7a3ec251a040b2d9661)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ibea400f0938db568524fb83eb2d22d8658bbb56b
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:36 +0000 (13:52 +0100)]
UPSTREAM: arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
This introduces the preprocessor symbol KIMAGE_VADDR which will serve as
the symbolic virtual base of the kernel region, i.e., the kernel's virtual
offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being
equal to PAGE_OFFSET, but in the future, it will be moved below it once
we move the kernel virtual mapping out of the linear mapping.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit ab893fb9f1b17f02139bce547bb4b69e96b9ae16)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I31427bd2b948a22bb8ce1d22109682fc66efb98d
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:35 +0000 (13:52 +0100)]
BACKPORT: arm64: add support for ioremap() block mappings
This wires up the existing generic huge-vmap feature, which allows
ioremap() to use PMD or PUD sized block mappings. It also adds support
to the unmap path for dealing with block mappings, which will allow us
to unmap the __init region using unmap_kernel_range() in a subsequent
patch.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 324420bf91f60582bb481133db9547111768ef17)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I4765ae77f7d67c3972b7e5b19d43db434e8b777c
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:34 +0000 (13:52 +0100)]
BACKPORT: arm64: prevent potential circular header dependencies in asm/bug.h
Currently, using BUG_ON() in header files is cumbersome, due to the fact
that asm/bug.h transitively includes a lot of other header files, resulting
in the actual BUG_ON() invocation appearing before its definition in the
preprocessor input. So let's reverse the #include dependency between
asm/bug.h and asm/debug-monitors.h, by moving the definition of BUG_BRK_IMM
from the latter to the former. Also fix up one user of asm/debug-monitors.h
which relied on a transitive include.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 03336b1df9929e5d9c28fd9768948b6151cb046c)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I71470b7db9ef858a5a8368a872f931936c723a25
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:33 +0000 (13:52 +0100)]
UPSTREAM: of/fdt: factor out assignment of initrd_start/initrd_end
Since architectures may not yet have their linear mapping up and running
when the initrd address is discovered from the DT, factor out the
assignment of initrd_start and initrd_end, so that an architecture can
override it and use the translation it needs.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 369bc9abf22bf026e8645a4dd746b90649a2f6ee)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I8c258bdc4367955314e9a5223dc4c7751a06a98d
Ard Biesheuvel [Tue, 16 Feb 2016 12:52:32 +0000 (13:52 +0100)]
UPSTREAM: of/fdt: make memblock minimum physical address arch configurable
By default, early_init_dt_add_memory_arch() ignores memory below
the base of the kernel image since it won't be addressable via the
linear mapping. However, this is not appropriate anymore once we
decouple the kernel text mapping from the linear mapping, so archs
may want to drop the low limit entirely. So allow the minimum to be
overridden by setting MIN_MEMBLOCK_ADDR.
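A hedged sketch of the override mechanism (the default shown preserves the old
behaviour; an architecture that decouples the kernel text from the linear map could
define MIN_MEMBLOCK_ADDR to 0):

  /* drivers/of/fdt.c (sketch): use the arch-provided lower bound if any. */
  #ifndef MIN_MEMBLOCK_ADDR
  #define MIN_MEMBLOCK_ADDR       __pa(PAGE_OFFSET)
  #endif

  void __init early_init_dt_add_memory_arch(u64 base, u64 size)
  {
          const u64 phys_offset = MIN_MEMBLOCK_ADDR;
          /* ... clamp [base, base + size) against phys_offset as before ... */
  }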
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 270522a04f7a9911983878fa37da467f9ff1c938)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I4bb2626a87493262a64584b3d808de260129127e
Catalin Marinas [Thu, 18 Feb 2016 15:50:04 +0000 (15:50 +0000)]
UPSTREAM: arm64: Remove the get_thread_info() function
This function was introduced by previous commits implementing UAO.
However, it can be replaced with task_thread_info() in
uao_thread_switch() or get_fs() in do_page_fault() (the latter being
called only on the current context, so no need for using the saved
pt_regs).
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit e950631e84e7e38892ffbeee5e1816b270026b0e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ic6e9b6af7314fa83d9b0773ae3fac5a2ff34e67a
James Morse [Fri, 5 Feb 2016 14:58:50 +0000 (14:58 +0000)]
BACKPORT: arm64: kernel: Don't toggle PAN on systems with UAO
If a CPU supports both Privileged Access Never (PAN) and User Access
Override (UAO), we don't need to disable/re-enable PAN around all
copy_to_user()-like calls.
UAO alternatives cause these calls to use the 'unprivileged' load/store
instructions, which are overridden to be the privileged kind when
fs==KERNEL_DS.
This patch changes the copy_to_user() calls to have their PAN toggling
depend on a new composite 'feature' ARM64_ALT_PAN_NOT_UAO.
If both features are detected, PAN will be enabled, but the copy_to_user()
alternatives will not be applied. This means PAN will be enabled all the
time for these functions. If only PAN is detected, the toggling will be
enabled as normal.
This will save the time taken to disable/re-enable PAN, and allow us to
catch copy_to_user() accesses that occur with fs==KERNEL_DS.
Futex and swp-emulation code continue to hang their PAN toggling code on
ARM64_HAS_PAN.
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 705441960033e66b63524521f153fbb28c99ddbd)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3fa35ebacaf401e1344e76932a26fdd14a8a3cdb
James Morse [Fri, 5 Feb 2016 14:58:49 +0000 (14:58 +0000)]
UPSTREAM: arm64: cpufeature: Test 'matches' pointer to find the end of the list
CPU feature code uses the desc field as a test to find the end of the list,
which means every entry must have a description. This generates noise for
entries in the list that aren't really features, but combinations of them.
e.g.
> CPU features: detected feature: Privileged Access Never
> CPU features: detected feature: PAN and not UAO
These combination features are needed for corner cases with alternatives,
where cpu features interact.
Change all walkers of the arm64_features[] and arm64_hwcaps[] lists to test
'matches' not 'desc', and only print 'desc' if it is non-NULL.
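A hedged sketch of a walker changed this way (structure and helper names are
assumptions based on the description, not quoted from the patch):

  static void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
                                      const char *info)
  {
          /* Iterate until the 'matches' callback is NULL; 'desc' is now
           * optional and only used for printing. */
          for (; caps->matches; caps++) {
                  if (!caps->matches(caps))
                          continue;

                  if (caps->desc && !cpus_have_cap(caps->capability))
                          pr_info("%s %s\n", info, caps->desc);

                  cpus_set_cap(caps->capability);
          }
  }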
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug: 30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit 644c2ae198412c956700e55a2acf80b2541f6aa5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I4500bb7c547e2e67ea56e242a8621df539f6fd67
James Morse [Fri, 5 Feb 2016 14:58:48 +0000 (14:58 +0000)]
UPSTREAM: arm64: kernel: Add support for User Access Override
'User Access Override' is a new ARMv8.2 feature which allows the
unprivileged load and store instructions to be overridden to behave in
the normal way.
This patch converts {get,put}_user() and friends to use ldtr*/sttr*
instructions - so that they can only access EL0 memory, then enables
UAO when fs==KERNEL_DS so that these functions can access kernel memory.
This allows user space's read/write permissions to be checked against the
page tables, instead of testing addr<USER_DS, then using the kernel's
read/write permissions.
Signed-off-by: James Morse <james.morse@arm.com>
[catalin.marinas@arm.com: move uao_thread_switch() above dsb()]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
57f4959bad0a154aeca125b7d38d1d9471a12422)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1a6a74a1f33b92d54368bd99387b55cf62930903
James Morse [Fri, 5 Feb 2016 14:58:47 +0000 (14:58 +0000)]
UPSTREAM: arm64: add ARMv8.2 id_aa64mmfr2 boiler plate
ARMv8.2 adds a new feature register id_aa64mmfr2. This patch adds the
cpu feature boiler plate used by the actual features in later patches.
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
406e308770a92bd33995b2e5b681e86358328bb0)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I51db326696ba1e18e9a4c667acbeb3e25e0b151e
James Morse [Fri, 5 Feb 2016 14:58:46 +0000 (14:58 +0000)]
UPSTREAM: arm64: cpufeature: Change read_cpuid() to use sysreg's mrs_s macro
Older assemblers may not have support for newer feature registers. To get
round this, sysreg.h provides a 'mrs_s' macro that takes a register
encoding and generates the raw instruction.
Change read_cpuid() to use mrs_s in all cases so that new registers
don't have to be a special case. Including sysreg.h means we need to move
the include and definition of read_cpuid() after the #ifndef __ASSEMBLY__
to avoid syntax errors in vmlinux.lds.
Signed-off-by: James Morse <james.morse@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
0f54b14e76f5302afe164dc911b049b5df836ff5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ic59c81a97b4585b3c0964f293ddee08f7cb594ac
Ard Biesheuvel [Mon, 15 Feb 2016 08:51:49 +0000 (09:51 +0100)]
UPSTREAM: arm64: use local label prefixes for __reg_num symbols
The __reg_num_xNN symbols that are used to implement the msr_s and
mrs_s macros are recorded in the ELF metadata of each object file.
This does not affect the size of the final binary, but it does clutter
the output of tools like readelf, i.e.,
$ readelf -a vmlinux |grep -c __reg_num_x
50976
So let's use symbols with the .L prefix, these are strictly local,
and don't end up in the object files.
$ readelf -a vmlinux |grep -c __reg_num_x
0
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
7abc7d833c9eb16efc8a59239d3771a6e30be367)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Idefe9841ef7d1ddcc5161fc8de14153cfadaf4f3
David Brown [Wed, 10 Feb 2016 21:52:22 +0000 (13:52 -0800)]
UPSTREAM: arm64: vdso: Mark vDSO code as read-only
Although the arm64 vDSO is cleanly separated by code/data with the
code being read-only in userspace mappings, the code page is still
writable from the kernel. There have been exploits (such as
http://itszn.com/blog/?p=21) that take advantage of this on x86 to go
from a bad kernel write to full root.
Prevent this specific exploit on arm64 by putting the vDSO code page
in read-only memory as well.
Before the change:
[ 3.138366] vdso: 2 pages (1 code @ ffffffc000a71000, 1 data @ ffffffc000a70000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000 520K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000 1528K ro x SHD AF UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000 6M ro x SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b6000 1752K ro x SHD AF UXN MEM/NORMAL
0xffffffc0009b6000-0xffffffc000c00000 2344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000 116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000 1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000 1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000 942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000 448K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000 40K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000 468K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000 78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000 1344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000 64K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000 64K RW NX SHD AF UXN MEM/NORMAL
After:
[ 3.138368] vdso: 2 pages (1 code @ ffffffc0006de000, 1 data @ ffffffc000a74000)
---[ Kernel Mapping ]---
0xffffffc000000000-0xffffffc000082000 520K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000082000-0xffffffc000200000 1528K ro x SHD AF UXN MEM/NORMAL
0xffffffc000200000-0xffffffc000800000 6M ro x SHD AF BLK UXN MEM/NORMAL
0xffffffc000800000-0xffffffc0009b8000 1760K ro x SHD AF UXN MEM/NORMAL
0xffffffc0009b8000-0xffffffc000c00000 2336K RW NX SHD AF UXN MEM/NORMAL
0xffffffc000c00000-0xffffffc008000000 116M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc00c000000-0xffffffc07f000000 1840M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc800000000-0xffffffc840000000 1G RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc840000000-0xffffffc87ae00000 942M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87ae00000-0xffffffc87ae70000 448K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af80000-0xffffffc87af8a000 40K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87af8b000-0xffffffc87b000000 468K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87b000000-0xffffffc87fe00000 78M RW NX SHD AF BLK UXN MEM/NORMAL
0xffffffc87fe00000-0xffffffc87ff50000 1344K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87ff90000-0xffffffc87ffa0000 64K RW NX SHD AF UXN MEM/NORMAL
0xffffffc87fff0000-0xffffffc880000000 64K RW NX SHD AF UXN MEM/NORMAL
Inspired by https://lkml.org/lkml/2016/1/19/494 based on work by the
PaX Team, Brad Spengler, and Kees Cook.
Signed-off-by: David Brown <david.brown@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: removed superfluous __PAGE_ALIGNED_DATA]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
88d8a7994e564d209d4b2583496631c2357d386b)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3fe4b48df8b27313ac61c947746805442757932c
Yang Shi [Fri, 5 Feb 2016 23:50:18 +0000 (15:50 -0800)]
UPSTREAM: arm64: ubsan: select ARCH_HAS_UBSAN_SANITIZE_ALL
To enable UBSAN on arm64, ARCH_HAS_UBSAN_SANITIZE_ALL needs to be selected.
Basic kernel bootup test is passed on arm64 with CONFIG_UBSAN_SANITIZE_ALL
enabled.
Signed-off-by: Yang Shi <yang.shi@linaro.org>
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
f0b7f8a4b44657386273a67179dd901c81cd11a6)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I640f50e70e3562bf1caf2ce164d6ed5640e041f6
Laura Abbott [Sat, 6 Feb 2016 00:24:48 +0000 (16:24 -0800)]
UPSTREAM: arm64: ptdump: Indicate whether memory should be faulting
With CONFIG_DEBUG_PAGEALLOC, pages do not have the valid bit
set when free in the buddy allocator. Add an indication to
the page table dumping code that the valid bit is not set,
'F' for fault, to make this easier to understand.
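A sketch of the extra descriptor (struct layout assumed from the existing ptdump bit descriptors):
    /* Print 'F' (fault) when the valid bit is clear, so pages unmapped
     * by DEBUG_PAGEALLOC stand out in the page table dump. */
    {
            .mask   = PTE_VALID,
            .val    = PTE_VALID,
            .set    = " ",
            .clear  = "F",
    },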
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
d7e9d59494a9a5d83274f5af2148b82ca22dff3f)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I7e8200549e22cb3a01b5e827bf12b872b9cefc58
Laura Abbott [Sat, 6 Feb 2016 00:24:47 +0000 (16:24 -0800)]
UPSTREAM: arm64: Add support for ARCH_SUPPORTS_DEBUG_PAGEALLOC
ARCH_SUPPORTS_DEBUG_PAGEALLOC provides a hook to map and unmap
pages for debugging purposes. This requires memory be mapped
with PAGE_SIZE mappings since breaking down larger mappings
at runtime will lead to TLB conflicts. Check if debug_pagealloc
is enabled at runtime and if so, map everything with PAGE_SIZE
pages. Implement the functions to actually map/unmap the
pages at runtime.
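A sketch of the runtime decision described above (helper name taken from the maintainer note below; call sites omitted):
    /* Block (section) mappings are only safe when debug_pagealloc is off,
     * or when no allocator is provided (the noalloc paths, which must keep
     * using sections). */
    static bool block_mappings_allowed(phys_addr_t (*pgtable_alloc)(void))
    {
            return !pgtable_alloc || !debug_pagealloc_enabled();
    }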
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
[catalin.marinas@arm.com: static annotation block_mappings_allowed() and #ifdef]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
83863f25e4b8214e994ef8b5647aad614d74b45d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia9b9bc5fcc1938bafbc2930f08d27973cfc3d038
Mika Penttilä [Tue, 26 Jan 2016 15:47:25 +0000 (15:47 +0000)]
UPSTREAM: arm64: mm: avoid calling apply_to_page_range on empty range
Calling apply_to_page_range with an empty range results in a BUG_ON
from the core code. This can be triggered by trying to load the st_drv
module with CONFIG_DEBUG_SET_MODULE_RONX enabled:
kernel BUG at mm/memory.c:1874!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
Modules linked in:
CPU: 3 PID: 1764 Comm: insmod Not tainted 4.5.0-rc1+ #2
Hardware name: ARM Juno development board (r0) (DT)
task: ffffffc9763b8000 ti: ffffffc975af8000 task.ti: ffffffc975af8000
PC is at apply_to_page_range+0x2cc/0x2d0
LR is at change_memory_common+0x80/0x108
This patch fixes the issue by making change_memory_common (called by the
set_memory_* functions) a NOP when numpages == 0, therefore avoiding the
erroneous call to apply_to_page_range and bringing us into line with x86
and s390.
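A sketch of the fix (the function signature is assumed from the surrounding arm64 code):
    static int change_memory_common(unsigned long addr, int numpages,
                                    pgprot_t set_mask, pgprot_t clear_mask)
    {
            unsigned long size = PAGE_SIZE * numpages;

            /* An empty range is a NOP; never hand it to apply_to_page_range() */
            if (!numpages)
                    return 0;

            /* ... existing range checks and apply_to_page_range() call ... */
    }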
Cc: <stable@vger.kernel.org>
Reviewed-by: Laura Abbott <labbott@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Mika Penttilä <mika.penttila@nextfour.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
57adec866c0440976c96a4b8f5b59fb411b1cacb)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ia107d3b324cc8237f669778a7c9c3abae8637501
Laura Abbott [Sat, 6 Feb 2016 00:24:46 +0000 (16:24 -0800)]
UPSTREAM: arm64: Drop alloc function from create_mapping
create_mapping is only used in fixmap_remap_fdt. All the create_mapping
calls need to happen on existing translation table pages without
additional allocations. Rather than have an alloc function be called
and fail, just set it to NULL and catch its use. Also change
the name to create_mapping_noalloc to better capture what exactly is
going on.
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
132233a759580f5ce9b1bfaac9073e47d03c460d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I48db9adae540e79d47328b4045f26f22ecebc482
Will Deacon [Wed, 10 Feb 2016 10:07:30 +0000 (10:07 +0000)]
UPSTREAM: arm64: prefetch: add missing #include for spin_lock_prefetch
As of
52e662326e1e ("arm64: prefetch: don't provide spin_lock_prefetch
with LSE"), spin_lock_prefetch is patched at runtime when the LSE atomics
are in use. This relies on the ARM64_LSE_ATOMIC_INSN macro to drive
the alternatives framework, but that macro is only available via
asm/lse.h, which isn't explicitly included in processor.h. Consequently,
drivers can run into build failures such as:
In file included from include/linux/prefetch.h:14:0,
from drivers/net/ethernet/intel/i40e/i40e_txrx.c:27:
arch/arm64/include/asm/processor.h: In function 'spin_lock_prefetch':
arch/arm64/include/asm/processor.h:183:15: error: expected string literal before 'ARM64_LSE_ATOMIC_INSN'
asm volatile(ARM64_LSE_ATOMIC_INSN(
This patch adds the missing include and gets things building again.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
afb83cc3f0e4f86ea0e1cc3db7a90f58f1abd4d5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I9a9e6b920f96b060781a01bff50a0ca8db78cf7f
Andrew Pinski [Tue, 2 Feb 2016 12:46:26 +0000 (12:46 +0000)]
UPSTREAM: arm64: lib: patch in prfm for copy_page if requested
On ThunderX T88 pass 1 and pass 2, there is no hardware prefetching, so
we need to patch in explicit software prefetching instructions.
Prefetching improves this code by 60% over the original code and 2x
over the code without prefetching for the affected hardware using the
benchmark code at https://github.com/apinski-cavium/copy_page_benchmark
Signed-off-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
60e0a09db24adc8809696307e5d97cc4ba7cb3e0)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3821a4d3a7b6fd68b4b0aca31478ec960e4e5172
Will Deacon [Tue, 2 Feb 2016 12:46:25 +0000 (12:46 +0000)]
UPSTREAM: arm64: lib: improve copy_page to deal with 128 bytes at a time
We want to avoid lots of different copy_page implementations, settling
for something that is "good enough" everywhere and hopefully easy to
understand and maintain whilst we're at it.
This patch reworks our copy_page implementation based on discussions
with Cavium on the list and benchmarking on Cortex-A processors so that:
- The loop is unrolled to copy 128 bytes per iteration
- The reads are offset so that we read from the next 128-byte block
in the same iteration that we store the previous block
- Explicit prefetch instructions are removed for now, since they hurt
performance on CPUs with hardware prefetching
- The loop exit condition is calculated at the start of the loop
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
223e23e8aa26b0bb62c597637e77295e14f6a62c)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Icabd86bbecc60ad0d730ab796e33b8762cecb1fb
Will Deacon [Tue, 2 Feb 2016 12:46:24 +0000 (12:46 +0000)]
UPSTREAM: arm64: prefetch: add alternative pattern for CPUs without a prefetcher
Most CPUs have a hardware prefetcher which generally performs better
without explicit prefetch instructions issued by software, however
some CPUs (e.g. Cavium ThunderX) rely solely on explicit prefetch
instructions.
This patch adds an alternative pattern (ARM64_HAS_NO_HW_PREFETCH) to
allow our library code to make use of explicit prefetch instructions
during things like copy routines only when the CPU does not have the
capability to perform the prefetching itself.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
d5370f754875460662abe8561388e019d90dd0c4)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ie33097c9b7786922ff8c457d16515c3188b8e94b
Will Deacon [Tue, 2 Feb 2016 12:46:23 +0000 (12:46 +0000)]
UPSTREAM: arm64: prefetch: don't provide spin_lock_prefetch with LSE
The LSE atomics rely on us not dirtying data at L1 if we can avoid it,
otherwise many of the potential scalability benefits are lost.
This patch replaces spin_lock_prefetch with a nop when the LSE atomics
are in use, so that users don't shoot themselves in the foot by causing
needless coherence traffic at L1.
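A sketch of the resulting helper (assuming the ARM64_LSE_ATOMIC_INSN alternative macro referenced elsewhere in this log):
    static inline void spin_lock_prefetch(const void *ptr)
    {
            asm volatile(ARM64_LSE_ATOMIC_INSN(
                         "prfm pstl1strm, %a0",   /* LL/SC: prefetch for store */
                         "nop")                   /* LSE: avoid dirtying the line */
                         : : "p" (ptr));
    }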
Signed-off-by: Will Deacon <will.deacon@arm.com>
Tested-by: Andrew Pinski <apinski@cavium.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
cd5e10bdf3795d22f10787bb1991c43798c885d5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ib8bc3f38d9306c13e017139ae4f2a7c8d6b61e18
Ard Biesheuvel [Wed, 27 Jan 2016 09:50:19 +0000 (10:50 +0100)]
UPSTREAM: arm64: allow vmalloc regions to be set with set_memory_*
The range of set_memory_* is currently restricted to the module address
range because of difficulties in breaking down larger block sizes.
vmalloc maps PAGE_SIZE pages so it is safe to use as well. Update the
function ranges and add a comment explaining why the range is restricted
the way it is.
Suggested-by: Laura Abbott <labbott@fedoraproject.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
95f5c80050ad723163aa80dc8bffd48ef4afc6d5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I80117bfd93ffe90a8edc89cbf6f456d9423bcf73
Lorenzo Pieralisi [Tue, 26 Jan 2016 11:10:38 +0000 (11:10 +0000)]
BACKPORT: arm64: kernel: implement ACPI parking protocol
The SBBR and ACPI specifications allow ACPI based systems that do not
implement PSCI (eg systems with no EL3) to boot through the ACPI parking
protocol specification[1].
This patch implements the ACPI parking protocol CPU operations, and adds
code that eases parsing the parking protocol data structures to the
ARM64 SMP initialization carried out at the same time as CPU enumeration.
To wake up the CPUs from the parked state, this patch implements a
wakeup IPI for ARM64 (ie arch_send_wakeup_ipi_mask()) that mirrors the
ARM one, so that a specific IPI is sent for wake-up purposes in order
to distinguish it from other IPI sources.
Given the current ACPI MADT parsing API, the patch implements a glue
layer that helps passing MADT GICC data structure from SMP initialization
code to the parking protocol implementation somewhat overriding the CPU
operations interfaces. This is to avoid creating a completely transparent
DT/ACPI CPU operations layer that would require creating opaque
structure handling for CPUs data (DT represents CPU through DT nodes, ACPI
through static MADT table entries), which seems overkill given that ACPI
on ARM64 mandates only two booting protocols (PSCI and parking protocol),
so there is no need for further protocol additions.
Based on the original work by Mark Salter <msalter@redhat.com>
[1] https://acpica.org/sites/acpica/files/MP%20Startup%20for%20ARM%20platforms.docx
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Tested-by: Loc Ho <lho@apm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Hanjun Guo <hanjun.guo@linaro.org>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Al Stone <ahs3@redhat.com>
[catalin.marinas@arm.com: Added WARN_ONCE(!acpi_parking_protocol_valid() on the IPI]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: kaslr-arm64-4.4
(cherry picked from commit
5e89c55e4ed81d7abb1ce8828db35fa389dc0e90)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ie9a872bff124ca6b3551c812df768cc378658bcc
John Stultz [Wed, 21 Sep 2016 01:42:22 +0000 (18:42 -0700)]
sched: Add Kconfig option DEFAULT_USE_ENERGY_AWARE to set ENERGY_AWARE feature flag
The ENERGY_AWARE sched feature flag cannot be set unless
CONFIG_SCHED_DEBUG is enabled.
So this patch adds a Kconfig option that, when set, makes the flag
default to true at build time.
Change-Id: I8835a571fdb7a8f8ee6a54af1e11a69f3b5ce8e6
Signed-off-by: John Stultz <john.stultz@linaro.org>
Caesar Wang [Tue, 23 Aug 2016 10:47:02 +0000 (11:47 +0100)]
sched/fair: remove printk while schedule is in progress
Calling printk while a schedule is in progress can deadlock and spin in
while(1). The blocked state looks like this:
cpu0 (holds the console sem):
printk->console_unlock->up_sem->spin_lock(&sem->lock)->wake_up_process(cpu1)
->try_to_wake_up(cpu1)->while(p->on_cpu).
cpu1 (requests the console sem):
console_lock->down_sem->schedule->idle_balance->update_cpu_capacity->
printk->console_trylock->spin_lock(&sem->lock).
p->on_cpu stays 1 forever because the task is still running on cpu1, so
cpu0 is stuck in while(p->on_cpu); meanwhile cpu1 cannot take
spin_lock(&sem->lock), so it is blocked too, and the task keeps running
on cpu1 forever.
Signed-off-by: Caesar Wang <wxt@rock-chips.com>
Mohan Srinivasan [Tue, 20 Sep 2016 00:33:50 +0000 (17:33 -0700)]
ANDROID: fs: FS tracepoints to track IO.
Adds tracepoints in ext4/f2fs/mpage to track readpages/buffered
write()s. This allows us to attribute the files being read/written
to the PIDs doing the IO.
Change-Id: I26bd36f933108927d6903da04d8cb42fd9c3ef3d
Signed-off-by: Mohan Srinivasan <srmohan@google.com>
Chris Redpath [Tue, 20 Sep 2016 16:00:47 +0000 (17:00 +0100)]
sched/walt: Drop arch-specific timer access
On at least one platform, the timer providing the wallclock could
occasionally be reset or go backwards for some time after wakeup.
Accept that this might happen and warn the first time, but otherwise just
carry on.
Change-Id: Id3164477ba79049561af7f0889cbeebc199ead4e
Signed-off-by: Chris Redpath <chris.redpath@arm.com>
Jeff Vander Stoep [Mon, 19 Sep 2016 04:39:28 +0000 (21:39 -0700)]
ANDROID: fiq_debugger: Pass task parameter to unwind_frame()
Fixes:
fe13f95b7200 ("arm64: pass a task parameter to unwind_frame()")
Bug:
30369029
Patchset: rework-pagetable
Change-Id: I9a4ab50ef61532d27282f189f063c938c196ec08
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Srinath Sridharan [Mon, 19 Sep 2016 21:37:34 +0000 (14:37 -0700)]
eas/sched/fair: Fixing comments in find_best_target.
Change-Id: I83f5b9887e98f9fdb81318cde45408e7ebfc4b13
Signed-off-by: Srinath Sridharan <srinathsr@google.com>
Eric Ernst [Fri, 2 Sep 2016 23:12:06 +0000 (16:12 -0700)]
input: keyreset: switch to orderly_reboot
The prior restart function would call sys_sync and then execute a
kernel reset. Rather than calling the sync directly, which required
this driver to be built in, call orderly_reboot, which takes care of
the file system sync.
Note: since the CONFIG_INPUT Kconfig symbol is tristate, this driver
can be built as a module, despite being marked bool.
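A sketch of the switch (the surrounding driver plumbing and the handler name are assumed; orderly_reboot() is the core helper this entry describes as handling the sync):
    static void do_restart(struct work_struct *unused)
    {
            /* No direct sys_sync() + kernel restart any more; orderly_reboot()
             * takes care of syncing before the reboot. */
            orderly_reboot();
    }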
Signed-off-by: Eric Ernst <eric.ernst@linux.intel.com>
Soheil Hassas Yeganeh [Tue, 23 Aug 2016 22:22:33 +0000 (18:22 -0400)]
UPSTREAM: tun: fix transmit timestamp support
Instead of using sock_tx_timestamp, use skb_tx_timestamp to record
software transmit timestamp of a packet.
sock_tx_timestamp resets and overrides the tx_flags of the skb.
The function is intended to be called from within the protocol
layer when creating the skb, not from a device driver. This is
inconsistent with other drivers and will cause issues for TCP.
In TCP, we intend to sample the timestamps for the last byte
for each sendmsg/sendpage. For that reason, tcp_sendmsg calls
tcp_tx_timestamp only with the last skb that it generates.
For example, if a 128KB message is split into two 64KB packets
we want to sample the SND timestamp of the last packet. The current
code in the tun driver, however, will result in sampling the SND
timestamp for both packets.
Also, when the last packet is split into smaller packets for
retransmission (see tcp_fragment), the tun driver will record
timestamps for all of the retransmitted packets and not only the
last packet.
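A sketch of the change inside the tun transmit path (surrounding code omitted; skb_tx_timestamp() records the software timestamp without touching tx_flags):
    /* was: sock_tx_timestamp(...), which resets/overrides the skb tx_flags */
    skb_tx_timestamp(skb);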
Change-Id: If7458ab31de52aa15a12364b6c1ac2a8f93f17a7
Fixes:
eda297729171 (tun: Support software transmit time stamping.)
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Francis Yan <francisyyan@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Mark Rutland [Mon, 25 Jan 2016 11:45:12 +0000 (11:45 +0000)]
BACKPORT: arm64: mm: create new fine-grained mappings at boot
At boot we may change the granularity of the tables mapping the kernel
(by splitting or making sections). This may happen when we create the
linear mapping (in __map_memblock), or at any point we try to apply
fine-grained permissions to the kernel (e.g. fixup_executable,
mark_rodata_ro, fixup_init).
Changing the active page tables in this manner may result in multiple
entries for the same address being allocated into TLBs, risking problems
such as TLB conflict aborts or issues derived from the amalgamation of
TLB entries. Generally, a break-before-make (BBM) approach is necessary
to avoid conflicts, but we cannot do this for the kernel tables as it
risks unmapping text or data being used to do so.
Instead, we can create a new set of tables from scratch in the safety of
the existing mappings, and subsequently migrate over to these using the
new cpu_replace_ttbr1 helper, which avoids the two sets of tables being
active simultaneously.
To avoid issues when we later modify permissions of the page tables
(e.g. in fixup_init), we must create the page tables at a granularity
such that later modification does not result in splitting of tables.
This patch applies this strategy, creating a new set of fine-grained
page tables from scratch, and safely migrating to them. The existing
fixmap and kasan shadow page tables are reused in the new fine-grained
tables.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
068a17a5805dfbca4bbf03e664ca6b19709cc7a8)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I7ddaa296b94106846ebf8392d91186df8673df64
Mark Rutland [Mon, 25 Jan 2016 11:45:11 +0000 (11:45 +0000)]
BACKPORT: arm64: ensure _stext and _etext are page-aligned
Currently we have separate ALIGN_DEBUG_RO{,_MIN} directives to align
_etext and __init_begin. While we ensure that __init_begin is
page-aligned, we do not provide the same guarantee for _etext. This is
not problematic currently as the alignment of __init_begin is sufficient
to prevent issues when we modify permissions.
Subsequent patches will assume page alignment of segments of the kernel
we wish to map with different permissions. To ensure this, move _etext
after the ALIGN_DEBUG_RO_MIN for the init section. This renders the
prior ALIGN_DEBUG_RO irrelevant, and hence it is removed. Likewise,
upgrade to ALIGN_DEBUG_RO_MIN(PAGE_SIZE) for _stext.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
fca082bfb543ccaaff864fc0892379ccaa1711cd)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: If1a829aa5c363f2c76eb1a18209c3c00ee3ffd0a
Mark Rutland [Mon, 25 Jan 2016 11:45:10 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: allow passing a pgdir to alloc_init_*
To allow us to initialise pgdirs which are fixmapped, allow explicitly
passing a pgdir rather than an mm. A new __create_pgd_mapping function
is added for this, with existing __create_mapping callers migrated to
this.
The mm argument was previously only used at the top level. Now that it
is redundant at all levels, it is removed. To indicate its new found
similarity to alloc_init_{pud,pmd,pte}, __create_mapping is renamed to
init_pgd.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
11509a306bb6ea595878b2d246d2d56b1783e040)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I554f90b986a0fce6c96c0a0a9da1d6e61602d0a9
Mark Rutland [Mon, 25 Jan 2016 11:45:09 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: allocate pagetables anywhere
Now that create_mapping uses fixmap slots to modify pte, pmd, and pud
entries, we can access page tables anywhere in physical memory,
regardless of the extent of the linear mapping.
Given that, we no longer need to limit memblock allocations during page
table creation, and can leave the limit as its default
MEMBLOCK_ALLOC_ANYWHERE.
We never add memory which will fall outside of the linear map range
given phys_offset and MAX_MEMBLOCK_ADDR are configured appropriately, so
any tables we create will fall in the linear map of the final tables.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
cdef5f6e9e0e5ee397759b664a9f875ff59ccf01)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I9b6487491f47351303a147147c3d274da20def59
Mark Rutland [Mon, 25 Jan 2016 11:45:08 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: use fixmap when creating page tables
As a preparatory step to allow us to allocate early page tables from
unmapped memory using memblock_alloc, modify the __create_mapping
callees to map and unmap the tables they modify using fixmap entries.
All but the top-level pgd initialisation is performed via the fixmap.
Subsequent patches will inject the pgd physical address, and migrate to
using the FIX_PGD slot.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
f4710445458c0a1bd1c3c014ada2e7d7dc7b882f)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I71d2fb02d009915b5aab7f3467c4a3c658e898de
Mark Rutland [Mon, 25 Jan 2016 11:45:07 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: add functions to walk tables in fixmap
As a preparatory step to allow us to allocate early page tables from
unmapped memory using memblock_alloc, add new p??_{set,clear}_fixmap*
functions which can be used to walk page tables outside of the linear
mapping by using fixmap slots.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
961faac114819a01e627fe9c9c82b830bb3849d4)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I58a0f54d2c637ef0fda8e0b9187ade969aecd347
Mark Rutland [Mon, 25 Jan 2016 11:45:06 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: add __{pud,pgd}_populate
We currently have __pmd_populate for creating a pmd table entry given
the physical address of a pte, but don't have equivalents for the pud or
pgd levels of table.
To enable us to manipulate tables which are mapped outside of the linear
mapping (where we have a PA, but not a linear map VA), it is useful to
have these functions.
This patch adds __{pud,pgd}_populate. As these should not be called when
the kernel uses folded {pmd,pud}s, in these cases they expand to
BUILD_BUG(). So long as the appropriate checks are made on the {pud,pgd}
entry prior to attempting population, these should be optimized out at
compile time.
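A sketch of the new helpers (mirroring the existing __pmd_populate, as described above):
    static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
    {
            set_pud(pudp, __pud(pmdp | prot));     /* PA of the pmd table + type bits */
    }

    static inline void __pgd_populate(pgd_t *pgdp, phys_addr_t pudp, pgdval_t prot)
    {
            set_pgd(pgdp, __pgd(pudp | prot));     /* PA of the pud table + type bits */
    }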
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
1e531cce68c92b46c7d29f36a72f9a3e5886678f)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I4bb3b67d0e8b63dfbd1292a93bdde42ead23a5c2
Mark Rutland [Mon, 25 Jan 2016 11:45:05 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: avoid redundant __pa(__va(x))
When we "upgrade" to a section mapping, we free any table we made
redundant by giving it back to memblock. To get the PA, we acquire the
physical address and convert this to a VA, then subsequently convert
this back to a PA.
This works currently, but will not work if the tables are not accessed
via linear map VAs (e.g. if we use fixmap slots).
This patch uses {pmd,pud}_page_paddr to acquire the PA. This avoids the
__pa(__va()) round trip, saving some work and avoiding reliance on the
linear mapping.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
316b39db06718d59d82736df9fc65cf05b467cc7)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I26ed77790b5b00a15ea09a5a23a176de5b73a002
Mark Rutland [Mon, 25 Jan 2016 11:45:04 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: add functions to walk page tables by PA
To allow us to walk tables allocated into the fixmap, we need to acquire
the physical address of a page, rather than the virtual address in the
linear map.
This patch adds new p??_page_paddr and p??_offset_phys functions to
acquire the physical address of a next-level table, and changes
p??_offset* into macros which simply convert this to a linear map VA.
This renders p??_page_vaddr unused, and hence they are removed.
At the pgd level, a new pgd_offset_raw function is added to find the
relevant PGD entry given the base of a PGD and a virtual address.
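A sketch at the pte level (the pattern repeats for pmd/pud/pgd; names are taken from the text above):
    /* Physical address of the pte table that this pmd entry points at. */
    static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
    {
            return pmd_val(pmd) & PHYS_MASK & (s32)PAGE_MASK;
    }

    /* Offsets now yield a PA; the linear-map VA is derived only on demand. */
    #define pte_offset_phys(dir, addr)   (pmd_page_paddr(*(dir)) + pte_index(addr) * sizeof(pte_t))
    #define pte_offset_kernel(dir, addr) ((pte_t *)__va(pte_offset_phys((dir), (addr))))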
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
dca56dca7124709f3dfca81afe61b4d98eb9cacf)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ie43d00014322e29aec70617db3af242ed8545738
Mark Rutland [Mon, 25 Jan 2016 11:45:03 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: move pte_* macros
For pmd, pud, and pgd levels of table, functions including p?d_index and
p?d_offset are defined after the p?d_page_vaddr function for the
immediately higher level of table.
The pte functions however are defined much earlier, even though several
rely on the later definition of pmd_page_vaddr. While this isn't
currently a problem as these are macros, it prevents the logical
grouping of later C functions (which cannot rely on prototypes for
functions not yet defined).
Move these definitions after pmd_page_vaddr, for consistency with the
placement of these functions for other levels of table.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
053520f7d3923cc6d37afb28f9887cb1e7d77454)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I44c7327dc0f257188c0c2204ec7da0e7fdca7f64
Mark Rutland [Mon, 25 Jan 2016 11:45:02 +0000 (11:45 +0000)]
UPSTREAM: arm64: kasan: avoid TLB conflicts
The page table modification performed during the KASAN init risks the
allocation of conflicting TLB entries, as it swaps a set of valid global
entries for another without suitable TLB maintenance.
The presence of conflicting TLB entries can result in the delivery of
synchronous TLB conflict aborts, or may result in the use of erroneous
data being returned in response to a TLB lookup. This can affect
explicit data accesses from software as well as translations performed
asynchronously (e.g. as part of page table walks or speculative I-cache
fetches), and can therefore result in a wide variety of problems.
To avoid this, use cpu_replace_ttbr1 to swap the page tables. This
ensures that when the new tables are installed there are no stale
entries from the old tables which may conflict. As all updates are made
to the tables while they are not active, the updates themselves are
safe.
At the same time, add the missing barrier to ensure that the tmp_pg_dir
entries updated via memcpy are visible to the page table walkers at the
point the tmp_pg_dir is installed. All other page table updates made as
part of KASAN initialisation have the requisite barriers due to the use
of the standard page table accessors.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
c1a88e9124a499939ebd8069d5e4d3937f019157)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I0934b236e25e015d6946dcedcca14ba9512418f7
Mark Rutland [Mon, 25 Jan 2016 11:45:01 +0000 (11:45 +0000)]
UPSTREAM: arm64: mm: add code to safely replace TTBR1_EL1
If page tables are modified without suitable TLB maintenance, the ARM
architecture permits multiple TLB entries to be allocated for the same
VA. When this occurs, it is permitted that TLB conflict aborts are
raised in response to synchronous data/instruction accesses, and/or an
amalgamation of the TLB entries may be used as a result of a TLB lookup.
The presence of conflicting TLB entries may result in a variety of
behaviours detrimental to the system (e.g. erroneous physical addresses
may be used by I-cache fetches and/or page table walks). Some of these
cases may result in unexpected changes of hardware state, and/or result
in the (asynchronous) delivery of SError.
To avoid these issues, we must avoid situations where conflicting
entries may be allocated into TLBs. For user and module mappings we can
follow a strict break-before-make approach, but this cannot work for
modifications to the swapper page tables that cover the kernel text and
data.
Instead, this patch adds code which is intended to be executed from the
idmap, which can safely unmap the swapper page tables as it only
requires the idmap to be active. This enables us to uninstall the active
TTBR1_EL1 entry, invalidate TLBs, then install a new TTBR1_EL1 entry
without potentially unmapping code or data required for the sequence.
This avoids the risk of conflict, but requires that updates are staged
in a copy of the swapper page tables prior to being installed.
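A sketch of the calling side (helper and symbol names assumed from this and the neighbouring idmap entries):
    static inline void cpu_replace_ttbr1(pgd_t *pgd)
    {
            typedef void (ttbr_replace_func)(phys_addr_t);
            extern ttbr_replace_func idmap_cpu_replace_ttbr1;
            ttbr_replace_func *replace_phys;

            phys_addr_t pgd_phys = virt_to_phys(pgd);

            /* The replacement routine must run from the idmap, via its PA. */
            replace_phys = (void *)virt_to_phys(idmap_cpu_replace_ttbr1);

            cpu_install_idmap();
            replace_phys(pgd_phys);
            cpu_uninstall_idmap();
    }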
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
50e1881ddde2a986c7d0d2150985239e5e3d7d96)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ibd0a7c802a1abe0ba08a99819fd62ac646186005
Mark Rutland [Mon, 25 Jan 2016 11:45:00 +0000 (11:45 +0000)]
UPSTREAM: arm64: add function to install the idmap
In some cases (e.g. when making invasive changes to the kernel page
tables) we will need to execute code from the idmap.
Add a new helper which may be used to install the idmap, complementing
the existing cpu_uninstall_idmap.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
609116d202a8c5fd3fe393eb85373cbee906df68)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I1aac9e85e3d49add72c8aab7d80777a39f5fdd8e
Mark Rutland [Mon, 25 Jan 2016 11:44:59 +0000 (11:44 +0000)]
UPSTREAM: arm64: unmap idmap earlier
During boot we leave the idmap in place until paging_init, as we
previously had to wait for the zero page to become allocated and
accessible.
Now that we have a statically-allocated zero page, we can uninstall the
idmap much earlier in the boot process, making it far easier to spot
accidental use of physical addresses. This also brings the cold boot
path in line with the secondary boot path.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
86ccce896cb0aa800a7a6dcd29b41ffc4eeb1a75)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I6375bd9855e45727790697875b7cd19f84a4dd7f
Mark Rutland [Mon, 25 Jan 2016 11:44:58 +0000 (11:44 +0000)]
UPSTREAM: arm64: unify idmap removal
We currently open-code the removal of the idmap and restoration of the
current task's MMU state in a few places.
Before introducing yet more copies of this sequence, unify these to call
a new helper, cpu_uninstall_idmap.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
9e8e865bbe294a69666a1996bda3e87825b258c0)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I6e9cb0253a1d2d63232f8fa0b3f39f8f6987b239
Mark Rutland [Mon, 25 Jan 2016 11:44:57 +0000 (11:44 +0000)]
UPSTREAM: arm64: mm: place empty_zero_page in bss
Currently the zero page is set up in paging_init, and thus we cannot use
the zero page earlier. We use the zero page as a reserved TTBR value
from which no TLB entries may be allocated (e.g. when uninstalling the
idmap). To enable such usage earlier (as may be required for invasive
changes to the kernel page tables), and to minimise the time that the
idmap is active, we need to be able to use the zero page before
paging_init.
This patch follows the example set by x86, by allocating the zero page
at compile time, in .bss. This means that the zero page itself is
available immediately upon entry to start_kernel (as we zero .bss before
this), and also means that the zero page takes up no space in the raw
Image binary. The associated struct page is allocated in bootmem_init,
and remains unavailable until this time.
Outside of arch code, the only users of empty_zero_page assume that the
empty_zero_page symbol refers to the zeroed memory itself, and that
ZERO_PAGE(x) must be used to acquire the associated struct page,
following the example of x86. This patch also brings arm64 inline with
these assumptions.
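A sketch of the static allocation (macro shape assumed; following the x86 example as described):
    /* The zero page itself lives in .bss and is usable as soon as .bss is
     * cleared; its struct page only becomes valid after bootmem_init. */
    unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;

    #define ZERO_PAGE(vaddr)    virt_to_page(empty_zero_page)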
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
5227cfa71f9e8574373f4d0e9e754942d76cdf67)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I43005dd8533d9968d2cef48bd2821d4507cae5ad
Mark Rutland [Mon, 25 Jan 2016 11:44:56 +0000 (11:44 +0000)]
UPSTREAM: arm64: mm: specialise pagetable allocators
We pass a size parameter to early_alloc and late_alloc, but these are
only ever used to allocate single pages. In late_alloc we always
allocate a single page.
Both allocators provide us with zeroed pages (such that all entries are
invalid), but we have no barriers between allocating a page and adding
that page to existing (live) tables. A concurrent page table walk may
see stale data, leading to a number of issues.
This patch specialises the two allocators for page tables. The size
parameter is removed and the necessary dsb(ishst) is folded into each.
To make it clear that the functions are intended for use for page table
allocation, they are renamed to {early,late}_pgtable_alloc, with the
related function pointer renamed to pgtable_alloc.
As the dsb(ishst) is now in the allocator, the existing barrier for the
zero page is redundant and thus is removed. The previously missing
include of barrier.h is added.
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Jeremy Linton <jeremy.linton@arm.com>
Cc: Laura Abbott <labbott@fedoraproject.org>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
21ab99c289d350f4ae454bc069870009db6df20e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ib88fafc146506943122aa1ca6ee5a6c331ddb26c
Mark Rutland [Mon, 25 Jan 2016 11:44:55 +0000 (11:44 +0000)]
UPSTREAM: asm-generic: Fix local variable shadow in __set_fixmap_offset
Currently __set_fixmap_offset is a macro function which has a local
variable called 'addr'. If a caller passes a 'phys' parameter which is
derived from a variable also called 'addr', the local variable will
shadow this, and the compiler will complain about the use of an
uninitialized variable. To avoid the issue with namespace clashes,
'addr' is prefixed with a liberal sprinkling of underscores.
Turning __set_fixmap_offset into a static inline breaks the build for
several architectures. Fixing this properly requires updates to a number
of architectures to make them agree on the prototype of __set_fixmap (it
could be done as a subsequent patch series).
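A sketch of the macro with the underscore-prefixed local (generic fixmap shape assumed):
    #define __set_fixmap_offset(idx, phys, flags)                             \
    ({                                                                        \
            unsigned long ________addr;                                       \
            __set_fixmap(idx, phys, flags);                                   \
            ________addr = fix_to_virt(idx) + ((phys) & (PAGE_SIZE - 1));     \
            ________addr;                                                     \
    })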
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
[catalin.marinas@arm.com: squashed the original function patch and macro fixup]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
3694bd76781b76c4f8d2ecd85018feeb1609f0e5)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Iec27cb36dfca39e333de9f7318e76da0670d0156
William Cohen [Fri, 22 Jan 2016 03:56:26 +0000 (22:56 -0500)]
BACKPORT: Eliminate the .eh_frame sections from the aarch64 vmlinux and kernel modules
By default the aarch64 gcc generates .eh_frame sections. Unlike
.debug_frame sections, the .eh_frame sections are loaded into memory
when the associated code is loaded. On an example kernel being built
with this default the .eh_frame section in vmlinux used an extra 1.7MB
of memory. x86 disables the creation of the .eh_frame section;
aarch64 should probably do the same to save some memory.
Signed-off-by: William Cohen <wcohen@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
728dabd6d1751cf5e0f8e0535891393da62396e9)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I697baae2209d6d11f2cc447459d935f7200eb7b1
Masanari Iida [Sun, 24 Jan 2016 06:24:12 +0000 (15:24 +0900)]
UPSTREAM: arm64: Fix an enum typo in mm/dump.c
This patch fixes a typo in mm/dump.c:
"MODUELS_END_NR" should be "MODULES_END_NR".
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
b3122023df935cf14bf951da98ca598d71b9f826)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I5e73d18fd70f20f200a974f5f2ba22cb7ae64952
Ard Biesheuvel [Mon, 11 Jan 2016 13:50:21 +0000 (14:50 +0100)]
UPSTREAM: arm64: kasan: ensure that the KASAN zero page is mapped read-only
When switching from the early KASAN shadow region, which maps the
entire shadow space read-write, to the permanent KASAN shadow region,
which uses a zero page to shadow regions that are not subject to
instrumentation, the lowest level table kasan_zero_pte[] may be
reused unmodified, which means that the mappings of the zero page
that it contains will still be read-write.
So update it explicitly to map the zero page read only when we
activate the permanent mapping.
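A sketch of the remapping step (symbol names from the text above; kasan_zero_page is the shared zero shadow page):
    /* Re-point every kasan_zero_pte entry at the zero page, read-only. */
    for (i = 0; i < PTRS_PER_PTE; i++)
            set_pte(&kasan_zero_pte[i],
                    pfn_pte(virt_to_pfn(kasan_zero_page), PAGE_KERNEL_RO));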
Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
7b1af9795773d745c2a8c7d4ca5f2936e8b6adfb)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I4745dc91236c2612ad3e13be1cc176ffd923f7da
Minchan Kim [Sat, 16 Jan 2016 00:55:33 +0000 (16:55 -0800)]
UPSTREAM: arch/arm/include/asm/pgtable-3level.h: add pmd_mkclean for THP
MADV_FREE needs pmd_dirty and pmd_mkclean for detecting recent overwrite
of the contents since MADV_FREE syscall is called for THP page.
This patch adds pmd_mkclean for THP page MADV_FREE support.
Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Shaohua Li <shli@kernel.org>
Cc: <yalin.wang2010@gmail.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Chen Gang <gang.chen.5i5j@gmail.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Daniel Micay <danielmicay@gmail.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Helge Deller <deller@gmx.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Jason Evans <je@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mika Penttilä <mika.penttila@nextfour.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Rik van Riel <riel@redhat.com>
Cc: Roland Dreier <roland@kernel.org>
Cc: Shaohua Li <shli@kernel.org>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
44842045e4baaf406db2954dd2e07152fa61528d)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I59d53667aa8c40dea4f18fc58acc7d27f4a85a04
Ard Biesheuvel [Fri, 15 Jan 2016 12:28:57 +0000 (13:28 +0100)]
UPSTREAM: arm64: hide __efistub_ aliases from kallsyms
Commit
e8f3010f7326 ("arm64/efi: isolate EFI stub from the kernel
proper") isolated the EFI stub code from the kernel proper by prefixing
all of its symbols with __efistub_, and selectively allowing access to
core kernel symbols from the stub by emitting __efistub_ aliases for
functions and variables that the stub can access legally.
As an unintended side effect, these aliases are emitted into the
kallsyms symbol table, which means they may turn up in backtraces,
e.g.,
...
PC is at __efistub_memset+0x108/0x200
LR is at fixup_init+0x3c/0x48
...
[<ffffff8008328608>] __efistub_memset+0x108/0x200
[<ffffff8008094dcc>] free_initmem+0x2c/0x40
[<ffffff8008645198>] kernel_init+0x20/0xe0
[<ffffff8008085cd0>] ret_from_fork+0x10/0x40
The backtrace in question has nothing to do with the EFI stub, but
simply returns one of the several aliases of memset() that have been
recorded in the kallsyms table. This is undesirable, since it may
suggest to people who are not aware of this that the issue they are
seeing is somehow EFI related.
So hide the __efistub_ aliases from kallsyms, by emitting them as
absolute linker symbols explicitly. The distinction between those
and section relative symbols is completely irrelevant to these
definitions, and to the final link we are performing when these
definitions are being taken into account (the distinction is only
relevant to symbols defined inside a section definition when performing
a partial link), and so the resulting values are identical to the
original ones. Since absolute symbols are ignored by kallsyms, this
will result in these values to be omitted from its symbol table.
After this patch, the backtrace generated from the same address looks
like this:
...
PC is at __memset+0x108/0x200
LR is at fixup_init+0x3c/0x48
...
[<ffffff8008328608>] __memset+0x108/0x200
[<ffffff8008094dcc>] free_initmem+0x2c/0x40
[<ffffff8008645198>] kernel_init+0x20/0xe0
[<ffffff8008085cd0>] ret_from_fork+0x10/0x40
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
75feee3d9d51775072d3a04f47d4a439a4c4590e)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Ieeb8c576e31f0dd0b7f82982ffa6362864eed311
Mark Rutland [Wed, 6 Jan 2016 11:05:27 +0000 (11:05 +0000)]
UPSTREAM: arm64: head.S: use memset to clear BSS
Currently we use an open-coded memzero to clear the BSS. As it is a
trivial implementation, it is sub-optimal.
Our optimised memset doesn't use the stack, is position-independent, and
for the memzero case can make use of DC ZVA to clear large blocks
efficiently. In __mmap_switched the MMU is on and there are no live
caller-saved registers, so we can safely call an uninstrumented memset.
This patch changes __mmap_switched to use memset when clearing the BSS.
We use the __pi_memset alias so as to avoid any instrumentation in all
kernel configurations.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
2a803c4db615d85126c5c7afd5849a3cfde71422)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: I3dc7050fe5566f2126cbea9abfa6063c8e6b029a
Ard Biesheuvel [Wed, 23 Dec 2015 09:29:28 +0000 (10:29 +0100)]
UPSTREAM: efi: stub: define DISABLE_BRANCH_PROFILING for all architectures
This moves the DISABLE_BRANCH_PROFILING define from the x86 specific
to the general CFLAGS definition for the stub. This fixes build errors
when building for arm64 with CONFIG_PROFILE_ALL_BRANCHES_ENABLED.
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Reported-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Bug:
30369029
Patchset: rework-pagetable
(cherry picked from commit
b523e185bba36164ca48a190f5468c140d815414)
Signed-off-by: Jeff Vander Stoep <jeffv@google.com>
Change-Id: Id5a2a3986adc60ddf3d247c038667ce9719f3ee0