Michael Ellerman [Wed, 12 Apr 2017 12:25:02 +0000 (22:25 +1000)]
Merge branch 'topic/xive' (early part) into next
This merges the arch part of the XIVE support, leaving the final commit
with the KVM specific pieces dangling on the branch for Paul to merge
via the kvm-ppc tree.
Gautham R. Shenoy [Wed, 22 Mar 2017 15:04:17 +0000 (20:34 +0530)]
powerpc/powernv: Recover correct PACA on wakeup from a stop on P9 DD1
POWER9 DD1.0 hardware has a bug where the SPRs of a thread waking up
from stop 0,1,2 with ESL=1 can end up being misplaced in the core. Thus
the HSPRG0 of a thread waking up from stop can contain the paca pointer
of its sibling.
This patch implements a context recovery framework within the threads of
a core, by provisioning space in paca_struct for saving every sibling
thread's paca pointer. Basically, we should be able to arrive at the
right paca pointer from any of the threads' existing paca pointers.
At bootup, during powernv idle-init, we save the paca address of every
CPU in each of its siblings' paca_struct, in the slot corresponding to
this CPU's index in the core.
On wakeup from a stop, the thread will determine its index in the core
from the TIR register and recover its PACA pointer by indexing into
the correct slot in the provisioned space in the current PACA.
Furthermore, ensure that the NVGPRs are restored from the stack on the
way out by setting NAPSTATELOST in the paca.
[Changelog written with inputs from svaidy@linux.vnet.ibm.com]
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Call it a bug]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gautham R. Shenoy [Wed, 22 Mar 2017 15:04:16 +0000 (20:34 +0530)]
powerpc/powernv/idle: Don't override default/deepest directly in kernel
Currently during idle-init on power9, if we don't find suitable stop
states in the device tree that can be used as the
default_stop/deepest_stop, we set stop0 (ESL=1,EC=1) as both the default
stop state psscr to be used by power9_idle and the deepest stop state
used by CPU hotplug.
However, if the platform firmware has not configured or enabled a stop
state, the kernel should not make any assumptions and fallback to a
default choice.
If the kernel uses a stop state that is not configured by the platform
firmware, it may lead to further failures which should be avoided.
In this patch, we modify the init code to ensure that the kernel uses
only the stop states exposed by the firmware through the device
tree. When a suitable default stop state isn't found, we disable
ppc_md.power_save for power9. Similarly, when a suitable
deepest_stop_state is not found in the device tree exported by the
firmware, fall back to the default busy-wait loop in the CPU-Hotplug
code.
[Changelog written with inputs from svaidy@linux.vnet.ibm.com]
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gautham R. Shenoy [Wed, 22 Mar 2017 15:04:15 +0000 (20:34 +0530)]
powerpc/powernv/smp: Add busy-wait loop as fall back for CPU-Hotplug
Currently, the powernv cpu-offline function assumes that platform idle
states such as stop on POWER9, winkle/sleep/nap on POWER8 are always
available. On POWER8, it picks nap as the default state if other deep
idle states like sleep/winkle are not available and enabled in the
platform.
On POWER9, nap is not available and all idle states are managed by the
STOP instruction. The parameters of the idle state are passed through
the processor stop status control register (PSSCR), so executing STOP
takes its parameters from the current PSSCR. We do not want to make any
assumptions in the kernel about which STOP states and PSSCR features
are configured by the platform.
Ideally the platform will configure a good set of stop states that can
be used by the kernel. But we want to start with a clean slate if the
platform chooses not to configure any state, or if there is an error in
the platform firmware that leads to no stop states being configured or
allowed to be requested.
This patch adds a fallback method for CPU hotplug that is similar to
the snooze loop used at idle, where the threads are left to spin at low
priority, reducing the cycles consumed.
This is a safe fallback for the case where the platform firmware did
not configure any stop states, most likely due to an error condition,
and so no stop state can be requested.
Requesting a stop state that the platform has not configured or enabled
would lead to further error conditions which could be difficult to
debug.
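A minimal sketch of such a low-priority spin, assuming the existing
HMT_low()/HMT_very_low() priority hints and generic_check_cpu_restart();
the function name here is made up and this is illustrative only, not the
literal patch:

#include <asm/processor.h>	/* HMT_low(), HMT_very_low(), HMT_medium() */
#include <asm/smp.h>		/* generic_check_cpu_restart() */

/* Sketch: spin at low SMT priority until the CPU is asked to come back online. */
static void pnv_offline_spin_fallback(unsigned int cpu)
{
	while (!generic_check_cpu_restart(cpu)) {
		HMT_low();
		HMT_very_low();
	}
	HMT_medium();
}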
[Changelog written with inputs from svaidy@linux.vnet.ibm.com]
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Gautham R. Shenoy [Wed, 22 Mar 2017 15:04:14 +0000 (20:34 +0530)]
powerpc/powernv: Move CPU-Offline idle state invocation from smp.c to idle.c
Move the piece of code in powernv/smp.c::pnv_smp_cpu_kill_self() which
transitions the CPU to the deepest available platform idle state to a
new function named pnv_cpu_offline() in powernv/idle.c. The rationale
behind this code movement is that the data required to determine the
deepest available platform state resides in powernv/idle.c.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 7 Apr 2017 03:55:39 +0000 (09:25 +0530)]
powerpc/hugetlb: Add ABI defines for supported HugeTLB page sizes
Add user space exported API definitions for the 512KB, 1MB, 2MB, 8MB,
16MB, 1GB and 16GB non-default huge page sizes, to be used with the
mmap() system call.
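For illustration, user space might request one of the non-default sizes
roughly as in the sketch below; MAP_HUGE_16MB comes from the new uapi
header, and the fallback definition is shown only for clarity (it follows
the generic log2(size) << MAP_HUGE_SHIFT convention):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdio.h>

#ifndef MAP_HUGE_16MB
#define MAP_HUGE_16MB	(24 << 26)	/* log2(16MB) << MAP_HUGE_SHIFT */
#endif

int main(void)
{
	size_t len = 16UL * 1024 * 1024;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_HUGE_16MB,
		       -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");	/* fails unless 16MB huge pages are configured */
		return 1;
	}
	return munmap(p, len);
}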
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
[mpe: Reword the comment to emphasise that these are only needed to use
the non-default huge page size, and updated the change log.]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anshuman Khandual [Fri, 7 Apr 2017 06:53:11 +0000 (12:23 +0530)]
powerpc/mm: Remove redundant initmem information from log
The generic core VM already prints this information in the log
buffer, hence there is no need for a second print. This just
removes the second print from the arch powerpc NUMA init path.
Before the patch:
$ dmesg | grep "Initmem"
numa: Initmem setup node 0 [mem 0x00000000-0xffffffff]
numa: Initmem setup node 1 [mem 0x100000000-0x1ffffffff]
numa: Initmem setup node 2 [mem 0x200000000-0x2ffffffff]
numa: Initmem setup node 3 [mem 0x300000000-0x3ffffffff]
numa: Initmem setup node 4 [mem 0x400000000-0x4ffffffff]
numa: Initmem setup node 5 [mem 0x500000000-0x5ffffffff]
numa: Initmem setup node 6 [mem 0x600000000-0x6ffffffff]
numa: Initmem setup node 7 [mem 0x700000000-0x7ffffffff]
Initmem setup node 0 [mem 0x0000000000000000-0x00000000ffffffff]
Initmem setup node 1 [mem 0x0000000100000000-0x00000001ffffffff]
Initmem setup node 2 [mem 0x0000000200000000-0x00000002ffffffff]
Initmem setup node 3 [mem 0x0000000300000000-0x00000003ffffffff]
Initmem setup node 4 [mem 0x0000000400000000-0x00000004ffffffff]
Initmem setup node 5 [mem 0x0000000500000000-0x00000005ffffffff]
Initmem setup node 6 [mem 0x0000000600000000-0x00000006ffffffff]
Initmem setup node 7 [mem 0x0000000700000000-0x00000007ffffffff]
After the patch just the latter set is printed.
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 5 Apr 2017 06:10:48 +0000 (16:10 +1000)]
powerpc: Make sparsemem the default on 64-bit Book3S
Make sparsemem the default on all 64-bit Book3S platforms. It already is
for pseries and ps3, and we need to enable it for powernv because on
POWER9 memory between chips is discontiguous.
For the other platforms sparsemem should work fine, though it might add
a small amount of overhead. We can always force FLATMEM in the
defconfigs if necessary.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 3 Apr 2017 02:05:55 +0000 (12:05 +1000)]
powerpc/nohash: Fix use of mmu_has_feature() in setup_initial_memory_limit()
setup_initial_memory_limit() is called from early_init_devtree(), which
runs prior to feature patching. If the kernel is built with CONFIG_JUMP_LABEL=y
and CONFIG_JUMP_LABEL_FEATURE_CHECKS=y then we will potentially get the
wrong value.
If we also have CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG=y we get a warning
and backtrace:
Warning! mmu_has_feature() used prior to jump label init!
CPU: 0 PID: 0 Comm: swapper Not tainted 4.11.0-rc4-gccN-next-20170331-g6af2434 #1
Call Trace:
[c000000000fc3d50] [c000000000a26c30] .dump_stack+0xa8/0xe8 (unreliable)
[c000000000fc3de0] [c00000000002e6b8] .setup_initial_memory_limit+0xa4/0x104
[c000000000fc3e60] [c000000000d5c23c] .early_init_devtree+0xd0/0x2f8
[c000000000fc3f00] [c000000000d5d3b0] .early_setup+0x90/0x11c
[c000000000fc3f90] [c000000000000520] start_here_multiplatform+0x68/0x80
Fix it by using early_mmu_has_feature().
Fixes: c12e6f24d413 ("powerpc: Add option to use jump label for mmu_has_feature()")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 10 Feb 2017 01:12:44 +0000 (12:12 +1100)]
powerpc: Remove unnecessary includes of asm/debug.h
These files don't seem to have any need for asm/debug.h, now that all it
includes are the debugger hooks and breakpoint definitions.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Fri, 10 Feb 2017 01:04:56 +0000 (12:04 +1100)]
powerpc: Create asm/debugfs.h and move powerpc_debugfs_root there
powerpc_debugfs_root is the dentry representing the root of the
"powerpc" directory tree in debugfs.
Currently it sits in asm/debug.h, along with some other things that
have "debug" in the name but are otherwise unrelated.
Pull it out into a separate header, which also includes linux/debugfs.h,
and convert all the users to include debugfs.h instead of debug.h.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alistair Popple [Mon, 10 Apr 2017 05:24:35 +0000 (15:24 +1000)]
powerpc/powernv: Require MMU_NOTIFIER to fix NPU build
In the recent commit 1ab66d1fbada ("powerpc/powernv: Introduce address
translation services for Nvlink2") the NPU code gained a dependency on
MMU notifiers.
All our defconfigs have KVM enabled, which selects MMU_NOTIFIER, but if KVM is
not enabled then the build breaks.
Fix it by always selecting MMU_NOTIFIER when we're building powernv.
Fixes: 1ab66d1fbada ("powerpc/powernv: Introduce address translation services for Nvlink2")
Signed-off-by: Alistair Popple <alistair@popple.id.au>
[mpe: Reword change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Sat, 1 Apr 2017 14:41:48 +0000 (20:11 +0530)]
powerpc/mm/radix: Remove unnecessary ptesync
For a tlbiel with pid, we need to issue tlbiel with set number encoded. We
don't need to do ptesync for each of those. Instead we need one for the entire
tlbiel pid operation.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Sat, 1 Apr 2017 14:41:47 +0000 (20:11 +0530)]
powerpc/mm/radix: Don't do page walk cache flush when doing full mm flush
For fullmm tlb flush, we do a flush with RIC_FLUSH_ALL which will invalidate all
related caches (radix__tlb_flush()). Hence the pwc flush is not needed.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:55 +0000 (17:54 +1000)]
powerpc: Fixup LPCR:PECE and HEIC setting on POWER9
We need to set LPES in order for normal external interrupts (0x500)
to be directed to the guest while running in guest state.
We also need HEIC set to prevent them from being sent to the host while
in host state.
With XIVE the host never gets one of these and wouldn't know how to
handle it. All host external interrupts come in via the new
hypervisor virtualization interrupts vector.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:54 +0000 (17:54 +1000)]
powerpc: Consolidate variants of real-mode MMIOs
We have all sorts of variants of MMIO accessors for the real-mode
instructions. This creates a clean set of accessors based on
Linux's normal naming conventions, replacing all occurrences of
the old ones in the tree.
I have purposefully removed the "out/in" variants in favor of
only including __raw variants. Any code using these is already
pretty much hand tuned to operate in a very specific environment.
I've fixed up the 2 users (only one of them actually needed
a barrier in the first place).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:53 +0000 (17:54 +1000)]
powerpc/kvm: Remove obsolete kvm_vm_ioctl_xics_irq declaration
The function doesn't exist anymore
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:52 +0000 (17:54 +1000)]
powerpc/kvm: Make kvmppc_xics_create_icp static
It's only used within the same file it's defined in.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:51 +0000 (17:54 +1000)]
powerpc/kvm: Massage order of #include
We traditionally have linux/ before asm/
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:50 +0000 (17:54 +1000)]
powerpc/xive: Native exploitation of the XIVE interrupt controller
The XIVE interrupt controller is the new interrupt controller
found in POWER9. It supports advanced virtualization capabilities
among other things.
Currently we use a set of firmware calls that simulate the old
"XICS" interrupt controller, but this is fairly inefficient.
This adds the framework for using XIVE along with a native
backend which uses OPAL for configuration. Later, a backend allowing
use in a KVM or PowerVM guest will also be provided.
This disables some fast paths for interrupts in KVM when XIVE is
enabled, as these rely on the firmware emulation code which is no
longer available when the XIVE is used natively by Linux.
A later patch will make KVM also directly exploit the XIVE, thus
recovering the lost performance (and more).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[mpe: Fixup pr_xxx("XIVE:"...), don't split pr_xxx() strings,
tweak Kconfig so XIVE_NATIVE selects XIVE and depends on POWERNV,
fix build errors when SMP=n, fold in fixes from Ben:
Don't call cpu_online() on an invalid CPU number
Fix irq target selection returning out of bounds cpu#
Extra sanity checks on cpu numbers
]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:49 +0000 (17:54 +1000)]
powerpc/smp: Remove migrate_irq() custom implementation
Some powerpc platforms use this to move IRQs away from a CPU being
unplugged. This function has several bugs such as not taking the right
locks or failing to NULL check pointers.
There's a new generic function doing exactly the same thing without all
the bugs, so let's use it instead.
mpe: The obvious place for the select of GENERIC_IRQ_MIGRATION is on
HOTPLUG_CPU, but that doesn't work. On some configs PM_SLEEP_SMP will
select HOTPLUG_CPU even though its dependencies are not met, which means
the select of GENERIC_IRQ_MIGRATION doesn't happen. That leads to the
build breaking. Fix it by moving the select of GENERIC_IRQ_MIGRATION to
SMP.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:48 +0000 (17:54 +1000)]
powerpc: Add optional smp_ops->prepare_cpu SMP callback
Some platforms (will) need to perform allocations before bringing
a new CPU online. Doing it from smp_ops->setup_cpu is the wrong
thing to do:
- It has no useful failure path (too late)
- Calling any allocator will enable interrupts prematurely
causing problems with large decrementer among others
Instead, add a new callback that is called from __cpu_up (so from
the context trying to online the new CPU) at a point where we
can safely allocate and handle failures.
This will be used by XIVE support.
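The shape of the hook is roughly as sketched below, assuming an
int (*prepare_cpu)(int cpu) callback; this is a simplified illustration,
not the literal __cpu_up() change:

/* In struct smp_ops_t (sketch): */
int (*prepare_cpu)(int cpu);	/* optional; may allocate memory and may fail */

/* In __cpu_up(), in the context trying to online the new CPU (sketch): */
if (smp_ops->prepare_cpu) {
	rc = smp_ops->prepare_cpu(cpu);
	if (rc)
		return rc;	/* abort the online attempt cleanly */
}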
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 07:54:47 +0000 (17:54 +1000)]
powerpc: Add more PPC bit conversion macros
Add 32-bit and 8-bit variants.
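These follow the IBM (MSB 0) bit numbering convention, where bit 0 is
the most significant bit; a sketch of what the variants express, not
necessarily the exact kernel definitions:

/* IBM bit numbering: bit 0 is the most significant bit. Sketch only. */
#define PPC_BIT(bit)	(1UL << (63 - (bit)))	/* existing 64-bit form */
#define PPC_BIT32(bit)	(1U  << (31 - (bit)))	/* new 32-bit variant */
#define PPC_BIT8(bit)	(1U  << (7  - (bit)))	/* new 8-bit variant */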
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Wed, 5 Apr 2017 23:01:33 +0000 (09:01 +1000)]
powerpc/powernv: Add XIVE related definitions to opal-api.h
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Matt Brown [Wed, 29 Mar 2017 23:28:01 +0000 (10:28 +1100)]
powerpc/powernv: Add OPAL exports attributes to sysfs
New versions of OPAL have a device node /ibm,opal/firmware/exports, each
property of which describes a range of memory in OPAL that Linux might
want to export to userspace for debugging.
This patch adds a sysfs file under 'opal/exports' for each property
found there, and makes it readable only by root.
Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
[mpe: Drop counting of props, rename to attr, free on sysfs error, c'log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Sukadev Bhattiprolu [Mon, 27 Mar 2017 23:43:14 +0000 (19:43 -0400)]
powerpc/prom: Increase minimum RMA size to 512MB
When booting very large systems with a large initrd, we run out of
space early in boot for either RTAS or the flattened device tree (FDT).
Boot fails with messages like:
Could not allocate memory for RTAS
or
No memory for flatten_device_tree (no room)
Increasing the minimum RMA size to 512MB fixes the problem. This
should not have an impact on smaller LPARs (with 256MB memory),
as the firmware will cap the RMA to the memory assigned to the LPAR.
Fix is based on input/discussions with Michael Ellerman. Thanks to
Praveen K. Pandey for testing on a large system.
Reported-by: Praveen K. Pandey <preveen.pandey@in.ibm.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alistair Popple [Mon, 3 Apr 2017 09:51:44 +0000 (19:51 +1000)]
powerpc/powernv: Introduce address translation services for Nvlink2
Nvlink2 supports address translation services (ATS) allowing devices
to request address translations from an mmu known as the nest MMU
which is setup to walk the CPU page tables.
To access this functionality certain firmware calls are required to
setup and manage hardware context tables in the nvlink processing unit
(NPU). The NPU also manages forwarding of TLB invalidates (known as
address translation shootdowns/ATSDs) to attached devices.
This patch exports several methods to allow device drivers to register
a process id (PASID/PID) in the hardware tables and to receive
notification of when a device should stop issuing address translation
requests (ATRs). It also adds a fault handler to allow device drivers
to demand fault pages in.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
[mpe: Fix up comment formatting, use flush_tlb_mm()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alistair Popple [Mon, 3 Apr 2017 09:51:43 +0000 (19:51 +1000)]
powerpc/powernv: Add sanity checks to pnv_pci_get_{gpu|npu}_dev
The pnv_pci_get_{gpu|npu}_dev functions are used to find associations
between nvlink PCIe devices and standard PCIe devices. However they
lacked basic sanity checking, which results in NULL pointer
dereferences if they are incorrectly called, which can be harder to
spot than an explicit WARN_ON.
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alistair Popple [Mon, 3 Apr 2017 09:51:42 +0000 (19:51 +1000)]
drivers/of/base.c: Add of_property_read_u64_index
There is of_property_read_u32_index but no u64 variant. This patch
adds one similar to the u32 version for u64.
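Usage mirrors the u32 helper; a sketch with a made-up property name:

#include <linux/of.h>
#include <linux/printk.h>

/* Sketch: read the u64 at index 1 of a (hypothetical) "ibm,example-ranges" property. */
static int example_read_u64(struct device_node *np)
{
	u64 val;
	int rc;

	rc = of_property_read_u64_index(np, "ibm,example-ranges", 1, &val);
	if (rc)
		return rc;	/* error codes follow the u32 variant */

	pr_info("ibm,example-ranges[1] = 0x%llx\n", val);
	return 0;
}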
Signed-off-by: Alistair Popple <alistair@popple.id.au>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Oliver O'Halloran [Mon, 3 Apr 2017 08:09:06 +0000 (18:09 +1000)]
powerpc/mm: Remove stale comment about the DART hole
The code to fix the problem it describes was removed in commit
c40785ad305b ("powerpc/dart: Use a cachable DART"), and it uses the
stupid comment style. Away it goooooooooooooes!
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anton Blanchard [Mon, 3 Apr 2017 06:41:02 +0000 (16:41 +1000)]
powerpc: Avoid taking a data miss on every userspace instruction miss
Early on in do_page_fault() we call store_updates_sp(), regardless of
the type of exception. For an instruction miss this doesn't make
sense, because we only use this information to detect if a data miss
is the result of a stack expansion instruction or not.
Worse still, it results in a data miss within every userspace
instruction miss handler, because we try and load the very instruction
we are about to install a pte for!
A simple exec microbenchmark runs 6% faster on POWER8 with this fix:
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
int main(int argc, char *argv[])
{
	unsigned long left = atol(argv[1]);
	char leftstr[16];

	if (left-- == 0)
		return 0;

	sprintf(leftstr, "%ld", left);
	execlp(argv[0], argv[0], leftstr, NULL);
	perror("exec failed\n");

	return 0;
}
Pass the number of iterations on the command line (eg 10000) and time
how long it takes to execute.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 3 Apr 2017 05:29:34 +0000 (15:29 +1000)]
powerpc/book3s: Print task info if we take a machine check in user mode
For an MCE (Machine Check Exception) that hits while in user mode
MSR(PR=1), print the task info to the console MCE error log. This may
help to identify an application that triggered the MCE.
After this patch the MCE console looks like:
Severe Machine check interrupt [Recovered]
NIP: [0000000010039778] PID: 762 Comm: ebizzy
Initiator: CPU
Error type: SLB [Multihit]
Effective address: 0000000010039778
Severe Machine check interrupt [Not recovered]
NIP: [0000000010039778] PID: 763 Comm: ebizzy
Initiator: CPU
Error type: UE [Page table walk ifetch]
Effective address: 0000000010039778
ebizzy[763]: unhandled signal 7 at 0000000010039778 nip 0000000010039778 lr 0000000010001b44 code 30004
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Mahesh Salgaonkar [Tue, 28 Mar 2017 13:45:04 +0000 (19:15 +0530)]
powerpc/book3s: Print the kernel function name in machine check
For D-side errors we print the load/store address that caused the
machine check as 'Effective address'. But the instruction that may have
caused the machine check can also be helpful, so in addition to printing
the NIP, also print the kernel function name as well.
After this patch the MCE console log would look like:
Severe Machine check interrupt [Recovered]
NIP [d00000001bc70194]: init_module+0x194/0x2b0 [bork_kernel]
Initiator: CPU
Error type: SLB [Parity]
Effective address: d000000026de0000
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Thu, 30 Mar 2017 11:05:21 +0000 (16:35 +0530)]
powerpc/mm: Enable mappings above 128TB
Not all user space applications are ready to handle wide addresses. It's
known that at least some JIT compilers use the higher bits in pointers
to encode their information. This collides with valid pointers to 512TB
addresses and leads to crashes.
To mitigate this, we are not going to allocate virtual address space
above 128TB by default.
But userspace can ask for allocation from full address space by
specifying hint address (with or without MAP_FIXED) above 128TB.
If the hint address is set above 128TB, but MAP_FIXED is not specified,
we try to look for an unmapped area at the specified address. If it's
already occupied, we look for an unmapped area in the *full* address
space, rather than just within the 128TB window.
This approach makes it easy for an application's memory allocator to
become aware of the large address space without manually tracking the
allocated virtual address space.
This is a per-mmap decision, i.e. we can have some mmaps with larger
addresses and others that do not.
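For illustration, a user space sketch of opting a single mapping in to
the full address space via a high hint address (the exact hint value is
only an example):

#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	/* A hint above 128TB (1UL << 47), without MAP_FIXED, lets this one
	 * mapping land anywhere in the full 512TB space; a NULL hint keeps
	 * the traditional below-128TB behaviour. */
	void *hint = (void *)(1UL << 48);	/* 256TB, example only */
	void *p = mmap(hint, 65536, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("mapped at %p\n", p);
	return 0;
}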
A sample memory layout looks like:
10000000-10010000 r-xp 00000000 fc:00 9057045    /home/max_addr_512TB
10010000-10020000 r--p 00000000 fc:00 9057045    /home/max_addr_512TB
10020000-10030000 rw-p 00010000 fc:00 9057045    /home/max_addr_512TB
10029630000-10029660000 rw-p 00000000 00:00 0    [heap]
7fff834a0000-7fff834b0000 rw-p 00000000 00:00 0
7fff834b0000-7fff83670000 r-xp 00000000 fc:00 9177190    /lib/powerpc64le-linux-gnu/libc-2.23.so
7fff83670000-7fff83680000 r--p 001b0000 fc:00 9177190    /lib/powerpc64le-linux-gnu/libc-2.23.so
7fff83680000-7fff83690000 rw-p 001c0000 fc:00 9177190    /lib/powerpc64le-linux-gnu/libc-2.23.so
7fff83690000-7fff836a0000 rw-p 00000000 00:00 0
7fff836a0000-7fff836c0000 r-xp 00000000 00:00 0    [vdso]
7fff836c0000-7fff83700000 r-xp 00000000 fc:00 9177193    /lib/powerpc64le-linux-gnu/ld-2.23.so
7fff83700000-7fff83710000 r--p 00030000 fc:00 9177193    /lib/powerpc64le-linux-gnu/ld-2.23.so
7fff83710000-7fff83720000 rw-p 00040000 fc:00 9177193    /lib/powerpc64le-linux-gnu/ld-2.23.so
7fffdccf0000-7fffdcd20000 rw-p 00000000 00:00 0    [stack]
1000000000000-1000000010000 rw-p 00000000 00:00 0
1ffff83710000-1ffff83720000 rw-p 00000000 00:00 0
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:37:01 +0000 (09:07 +0530)]
powerpc/mm: Switch some TASK_SIZE checks to use mm_context addr_limit
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:37:00 +0000 (09:07 +0530)]
powerpc/pseries: Skip using reserved virtual address range
Now that we use all of the available virtual address range, we need to
make sure we don't generate a VSID such that it overlaps with the
reserved VSID range. The reserved VSID range includes the virtual
address range used by the adjunct partition and also the VRMA virtual
segment. We find the context value that can result in generating such a
VSID and reserve it early in boot.
We don't look at the adjunct range, because for now we disable adjunct
usage in a Linux LPAR via the CAS interface.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Rewrite hash__reserve_context_id(), move the rest into pseries]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:59 +0000 (09:06 +0530)]
powerpc/mm/hash: Store addr_limit in PACA
We optimize the slice page size array copy to the paca by copying only
the range based on addr_limit. This requires us to not look at the page
size array beyond addr_limit in the PACA on an SLB fault. To enable
that, copy the task's address limit to the paca, where it will be used
during SLB fault handling.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Rename from task_size to addr_limit, consolidate #ifdefs]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:58 +0000 (09:06 +0530)]
powerpc/mm: Add addr_limit to mm_context and use it to derive max slice index
In the followup patch, we will increase the slice array size to handle
the 512TB range, but will limit the max addr to 128TB. Avoid doing
unnecessary computation and slice mask related operations above the
address limit.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:57 +0000 (09:06 +0530)]
powerpc/mm/hash: Increase VA range to 128TB
We update the hash Linux page table layout such that we can support
512TB. But we limit TASK_SIZE to 128TB. We can switch to 128TB by
default unconditionally because that is the max virtual address
supported by other architectures. We will later add a mechanism to
on-demand increase the application's effective address range to 512TB.
Having the page table layout changed to accommodate 512TB makes testing
large memory configurations easier, with fewer code changes to the
kernel.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:56 +0000 (09:06 +0530)]
powerpc/mm/hash: Convert mask to unsigned long
This doesn't have any functional change, but it helps to avoid mistakes
in case the shift changes.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 29 Mar 2017 06:21:53 +0000 (17:21 +1100)]
powerpc/mm/hash: Support 68 bit VA
In order to support a large effective address range (512TB), we want to
increase the virtual address bits to 68. But we do have platforms like
POWER4 and POWER5 that can only do a 65-bit VA. We support those
platforms by limiting the context bits on them to 16.
The protovsid -> vsid conversion is verified to work with both 65 and 68
bit VA values. The restrictions are also documented in a table as part
of the code comments.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 29 Mar 2017 12:10:34 +0000 (23:10 +1100)]
powerpc/mm/hash: Check for non-kernel address in get_kernel_vsid()
get_kernel_vsid() has a very stern comment saying that it's only valid
for kernel addresses, but there's nothing in the code to enforce that.
Rather than hoping our callers are well behaved, add a check and return
a VSID of 0 (invalid).
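The guard is essentially of the following shape (a sketch only; the real
check sits inside get_kernel_vsid() itself, and the wrapper name here is
made up):

/* Sketch: refuse to build a VSID for an EA outside the kernel region. */
static unsigned long kernel_vsid_checked(unsigned long ea, int ssize)
{
	if (!is_kernel_addr(ea))
		return 0;	/* VSID 0 is treated as invalid */
	return get_kernel_vsid(ea, ssize);
}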
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 29 Mar 2017 12:10:22 +0000 (23:10 +1100)]
powerpc/mm/hash: Use context ids 1-4 for the kernel
Currently we use the top 4 context ids (0x7fffc-0x7ffff) for the kernel.
Kernel VSIDs are built using these top context values and the effective
segment ID. In subsequent patches we want to increase the max effective
address to 512TB. We will achieve that by increasing the effective
segment IDs, thereby increasing the virtual address range.
We will be switching to a 68-bit virtual address in the following patch.
But platforms like Power4 and Power5 only support a 65 bit virtual
address. We will handle that by limiting the context bits to 16 instead
of 19 on those platforms. That means the max context id will have a
different value on different platforms.
So that we don't have to deal with the kernel context ids changing
between different platforms, move the kernel context ids down to use
context ids 1-4.
We can't use segment 0 of context-id 0, because that maps to VSID 0,
which we want to keep as invalid, so we avoid context-id 0 entirely.
Similarly we can't use the last segment of the maximum context, so we
avoid it too.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Switch from 0-3 to 1-4 so VSID=0 remains invalid]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 29 Mar 2017 11:36:56 +0000 (22:36 +1100)]
powerpc/mm: Split radix vs hash mm context initialisation
Complete the split of the radix vs hash mm context initialisation.
This is mostly code movement, with the exception that we now limit the
context allocation to PRTB_ENTRIES - 1 on radix.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 29 Mar 2017 11:10:45 +0000 (22:10 +1100)]
powerpc/mm/hash: Pull hash constants into hash__alloc_context_id()
The min and max context id values used in alloc_context_id() are
currently the right values for use on hash, and happen to also be safe
for use on radix.
But we need to change that in a subsequent patch, so make the min/max
ids parameters, and pull the hash values into hash__alloc_context_id().
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Wed, 29 Mar 2017 11:00:46 +0000 (22:00 +1100)]
powerpc/mm/hash: Abstract context id allocation for KVM
KVM wants to be able to allocate an MMU context id, which it does
currently by calling __init_new_context().
We're about to rework that code, so provide a wrapper for KVM so it
doesn't have to worry about the details.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:52 +0000 (09:06 +0530)]
powerpc/mm/slice: Update slice mask printing to use bitmap printing.
We now get output like below which is much better.
[ 0.935306] good_mask low_slice: 0-15
[ 0.935360] good_mask high_slice: 0-511
Compared to
[ 0.953414] good_mask:1111111111111111 - 1111111111111.........
I also fixed an error with slice_dbg printing.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:51 +0000 (09:06 +0530)]
powerpc/mm/slice: Move slice_mask struct definition to slice.c
This structure definition need not be in a header, since it is only used
by slice.c, so move it there. This also allows us to use SLICE_NUM_HIGH
instead of 64.
Also switch the low_slices type from u16 to u64. This doesn't have an
impact on the size of the struct, due to the padding added with the u16
type, and it helps in using the bitmap printing functions for printing
the slice mask.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:50 +0000 (09:06 +0530)]
powerpc/mm: Remove checks that TASK_SIZE_USER64 is too small
Remove the checks that TASK_SIZE_USER64 is smaller than H_PGTABLE_RANGE
and USER_VSID_RANGE.
In a following patch we will deliberately add support for a TASK_SIZE
smaller than both ranges, so this will no longer be an error condition.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Keep the check in pgtable_64.c that we don't exceed USER_VSID_RANGE]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:49 +0000 (09:06 +0530)]
powerpc/mm: Move copy_mm_to_paca to paca.c
We also update the function argument to be a struct mm_struct. Move this
so that the function can find the definition of struct mm_struct. No
functional change in this patch.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:48 +0000 (09:06 +0530)]
powerpc/mm/slice: Update the function prototype
This avoids copying the slice_mask struct as a function return value.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Wed, 22 Mar 2017 03:36:47 +0000 (09:06 +0530)]
powerpc/mm/slice: Convert slice_mask high slice to a bitmap
In a followup patch we want to increase the VA range, which will require
high_slices to have more than 64 bits. To enable this, convert
high_slices to a bitmap. We keep the number of bits the same in this
patch and later change it to a higher value.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Fold in fix to use bitmap_empty()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 28 Mar 2017 04:21:12 +0000 (15:21 +1100)]
powerpc/mm: Move hash specific pte bits to be top bits of RPN
We don't support the full 57 bits of physical address and hence can
overload the top bits of RPN as hash specific pte bits.
Add a BUILD_BUG_ON() to enforce the relationship between H_PAGE_F_SECOND
and H_PAGE_F_GIX.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
[mpe: Move the BUILD_BUG_ON() into hash_utils_64.c and comment it]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:59 +0000 (22:59 +0530)]
powerpc/mm: Lower the max real address to 53 bits
The max value supported by hardware is a 51-bit address. The radix page
table defines a slot of 57 bits for future expansion. We restrict the
value supported by the Linux kernel to 53 bits, so that we can use the
bits between 53 and 57 for storing hash Linux page table bits. This is
done in the next patch.
This will free up the software page table bits to be used for features
that are needed for both hash and radix. The current hash Linux page
table format doesn't have any free software bits. Moving the hash
specific bits to the top of the RPN field frees up the software bits
for other purposes.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:58 +0000 (22:59 +0530)]
powerpc/mm: Define all PTE bits based on radix definitions.
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:57 +0000 (22:59 +0530)]
powerpc/mm: Define _PAGE_SOFT_DIRTY unconditionally
Conditional PTE bit definitions are confusing and result in coding errors.
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:56 +0000 (22:59 +0530)]
powerpc/mm/hugetlb: Filter out hugepage size not supported by page table layout
Without this, if firmware reports 1MB page size support we will crash
trying to use 1MB as the hugetlb page size.
echo 300 > /sys/kernel/mm/hugepages/hugepages-1024kB/nr_hugepages
kernel BUG at ./arch/powerpc/include/asm/hugetlb.h:19!
.....
[c0000000e2c27b30] c00000000029dae8 .hugetlb_fault+0x638/0xda0
[c0000000e2c27c30] c00000000026fb64 .handle_mm_fault+0x844/0x1d70
[c0000000e2c27d70] c00000000004805c .do_page_fault+0x3dc/0x7c0
[c0000000e2c27e30] c00000000000ac98 handle_page_fault+0x10/0x30
With the fix, we don't enable 1MB as a hugepage size.
bash-4.2# cd /sys/kernel/mm/hugepages/
bash-4.2# ls
hugepages-16384kB hugepages-16777216kB
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:55 +0000 (22:59 +0530)]
powerpc/mm: Add translation mode information in /proc/cpuinfo
With this, /proc/cpuinfo on powernv and pseries reports:
timebase    : 512000000
platform    : PowerNV
model       : 8247-22L
machine     : PowerNV 8247-22L
firmware    : OPAL
MMU         : Hash
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:54 +0000 (22:59 +0530)]
powerpc/mm/radix: rename _PAGE_LARGE to R_PAGE_LARGE
This bit is only used by radix, and it is nice to follow the naming style
of having bit names start with H_/R_ depending on which translation mode
they are used in.
No functional change in this patch.
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:53 +0000 (22:59 +0530)]
powerpc/mm: Cleanup bits definition between hash and radix.
Define everything based on bits present in pgtable.h. This will help in easily
identifying overlapping bits between hash/radix.
No functional change with this patch.
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:52 +0000 (22:59 +0530)]
powerpc/mm/slice: Fix off-by-1 error when computing slice mask
For low slice, max addr should be less than 4G. Without limiting this correctly
we will end up with a low slice mask which has 17th bit set. This is not
a problem with the current code because our low slice mask is of type u16. But
in later patch I am switching low slice mask to u64 type and having the 17bit
set result in wrong slice mask which in turn results in mmap failures.
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Aneesh Kumar K.V [Tue, 21 Mar 2017 17:29:51 +0000 (22:59 +0530)]
powerpc/mm/nohash: MM_SLICE is only used by book3s 64
The BOOKE code is dead code as per the Kconfig details, so make this
simpler by enabling MM_SLICE only for book3s_64. The changes w.r.t.
nohash just remove dead code. W.r.t. ppc64, 4K without hugetlb will now
enable MM_SLICE, but that is good, because we remove one extra variant
which probably is not getting tested much.
Reviewed-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Yang Shi [Tue, 26 Apr 2016 16:49:38 +0000 (09:49 -0700)]
powerpc/4xx: Make sam440ep_setup_rtc() init
sam440ep_setup_rtc() is just called by machine_device_initcall() so make
it __init.
Signed-off-by: Yang Shi <yang.shi@windriver.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Hari Bathini [Thu, 16 Mar 2017 21:05:42 +0000 (02:35 +0530)]
powerpc/fadump: Update fadump documentation
With the unnecessary restriction to reserve memory for fadump at the
top of RAM forgone, update the documentation accordingly.
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Hari Bathini [Thu, 16 Mar 2017 21:05:26 +0000 (02:35 +0530)]
powerpc/fadump: Reserve memory at an offset closer to bottom of RAM
Currently, the area to preserve boot memory is reserved at the top of
RAM. This leaves fadump vulnerable to memory hot-remove operations. As
memory for fadump has to be reserved early in the boot process, fadump
can't be registered after a memory hot-remove operation. Though this
problem can't be eliminated completely, the impact can be minimized by
reserving memory at an offset closer to the bottom of RAM. The offset
for the fadump memory reservation can be any value greater than the
fadump boot memory size.
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Vipin K Parashar [Fri, 10 Mar 2017 11:57:32 +0000 (17:27 +0530)]
powerpc/powernv: Handle OPAL_WRONG_STATE in opal_get_sensor_data()
OPAL returns OPAL_WRONG_STATE upon failing to provide sensor data due to
core sleeping/offline. Add a check in opal_get_sensor_data() for sensor
read failure with OPAL_WRONG_STATE return code and return -EIO.
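The handling amounts to roughly the following inside
opal_get_sensor_data() (a sketch of the error mapping only, not the full
function):

/* Sketch: translate the OPAL return code from the sensor read. */
ret = opal_sensor_read(sensor_hndl, token, &data);
switch (ret) {
case OPAL_WRONG_STATE:
	/* Sensor owner (e.g. the core) is sleeping/offline: report -EIO. */
	ret = -EIO;
	break;
/* ... other OPAL return codes handled as before ... */
}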
Signed-off-by: Vipin K Parashar <vipin@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Thadeu Lima de Souza Cascardo [Mon, 27 Mar 2017 19:32:33 +0000 (16:32 -0300)]
powerpc: Make /proc/self/stack always print the current stack
For the current task, the kernel stack would only tell the last time the
process was rescheduled, if ever. Use the current stack pointer for the
current task.
Otherwise, every once in a while, the stacktrace printed when reading
/proc/self/stack would look like the process is running in userspace,
while it's not, which some may consider as a bug.
This is also consistent with some other architectures, like x86 and arm,
at least.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 21 Mar 2017 05:24:38 +0000 (16:24 +1100)]
powerpc/64: Don't use early_cpu_has_feature() in cpu_ready_for_interrupts()
cpu_ready_for_interrupts() is called after feature patching, so there's
no need to use early_cpu_has_feature().
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anton Blanchard [Wed, 22 Mar 2017 21:22:01 +0000 (08:22 +1100)]
powerpc/configs: Re-enable POWER8 crc32c
The config option for the POWER8 crc32c recently changed from
CONFIG_CRYPT_CRC32C_VPMSUM to CONFIG_CRYPTO_CRC32C_VPMSUM. Update
the configs.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anton Blanchard [Wed, 22 Mar 2017 21:22:00 +0000 (08:22 +1100)]
powerpc/configs: Make oprofile a module
Most people use perf these days, so save about 31kB by making oprofile
a module.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Anton Blanchard [Wed, 22 Mar 2017 21:21:59 +0000 (08:21 +1100)]
powerpc/configs: Re-enable ISO9660_FS as a built-in in 64 bit configs
It turns out cloud-config uses ISO9660 filesystems to inject
configuration data into cloud images. The cloud-config failures when
ISO9660_FS is not enabled are cryptic, and building it in makes
mainline testing easier, so re-enable it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Fri, 24 Mar 2017 10:20:56 +0000 (21:20 +1100)]
powerpc/powernv: Fix XSCOM address mangling for form 1 indirect
POWER9 adds form 1 scoms. The form of the indirection is specified in
the top nibble of the scom address.
Currently we do some (ugly) bit mangling so that we can fit a 64 bit
scom address into the debugfs interface. The current code only shifts
the top bit (indirect bit).
This patch changes it to shift the whole top nibble so that the form
of the indirection is also shifted.
This patch is backwards compatible with older scoms.
(This change isn't required in the arch/powerpc/platforms/powernv/opal-prd.c
scom interface as it passes the whole 64bit scom address without any bit
mangling)
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Oliver O'Halloran [Thu, 23 Mar 2017 07:54:01 +0000 (18:54 +1100)]
powerpc/powernv: de-duplicate OPAL call wrappers
Currently the code to perform an OPAL call is duplicated between the
normal path and path taken when tracepoints are enabled. There's no
real need for this and combining them makes opal_tracepoint_entry
considerably easier to understand.
Signed-off-by: Oliver O'Halloran <oohall@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Guilherme G. Piccoli [Wed, 22 Mar 2017 19:27:51 +0000 (16:27 -0300)]
powerpc/xmon: add debugfs entry for xmon
Currently the xmon debugger is set only via kernel boot command-line.
It's disabled by default, and can be enabled with "xmon=on" on the
command-line. Also, xmon may be accessed via sysrq mechanism.
But we cannot enable/disable xmon at runtime; that needs a kernel reload.
This patch introduces a debugfs entry for xmon, allowing the user to
query its current state and change it if desired. Basically, the "xmon"
file to read from/write to is under the debugfs mount point, in the
powerpc directory. It's a simple attribute, with value 0 meaning xmon is
disabled and value 1 the opposite. Writing these states to the file will
take immediate effect in the debugger.
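A sketch of how such a debugfs attribute can be wired up (simplified and
with made-up names; the real patch also re-registers the debugger hooks
when the state changes):

#include <linux/debugfs.h>
#include <linux/fs.h>
#include <linux/init.h>
#include <asm/debugfs.h>	/* powerpc_debugfs_root */

static int xmon_on;	/* stands in for xmon's existing enable flag */

static int xmon_dbgfs_set(void *data, u64 val)
{
	xmon_on = !!val;	/* real code also calls xmon_init() to (un)hook the debugger */
	return 0;
}

static int xmon_dbgfs_get(void *data, u64 *val)
{
	*val = xmon_on;
	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(xmon_dbgfs_ops, xmon_dbgfs_get, xmon_dbgfs_set, "%llu\n");

static int __init setup_xmon_dbgfs(void)
{
	debugfs_create_file("xmon", 0600, powerpc_debugfs_root, NULL,
			    &xmon_dbgfs_ops);
	return 0;
}
device_initcall(setup_xmon_dbgfs);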
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Guilherme G. Piccoli [Wed, 22 Mar 2017 19:27:50 +0000 (16:27 -0300)]
powerpc/xmon: drop the nobt option from xmon plus minor fixes
The xmon parameter nobt was added a long time ago, by commit
26c8af5f01df ("[POWERPC] print backtrace when entering xmon"). The
problem at that time
was that during a crash in a machine with USB keyboard, xmon wouldn't
respond to commands from the keyboard, so printing the backtrace wouldn't
be possible.
The idea then was to automatically show the backtrace on an xmon crash
the first time it's invoked (if it recovers, next time xmon won't show
the backtrace automatically). The nobt parameter was added _only_ to
prevent this automatic trace display. It seems that long ago USB
keyboards didn't work that well!
We don't need this parameter anymore, the feature of auto showing the
backtrace is interesting (imagine a case of auto-reboot script),
so this patch extends the functionality, by always showing the backtrace
automatically when xmon is invoked; it removes the nobt parameter too.
Also, this patch fixes __initdata placement on xmon_early and replaces
__initcall() with modern device_initcall() on sysrq handler.
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Pan Xinhui [Wed, 22 Mar 2017 19:27:49 +0000 (16:27 -0300)]
powerpc/xmon: Fix an unexpected xmon on/off state change
Once xmon is triggered by sysrq-x, it stays enabled afterwards even
if it was disabled during boot. This will cause a system reset interrupt
to fail to dump. So keep xmon in its original state after exit.
We have several ways to set xmon on or off.
1) by a build config CONFIG_XMON_DEFAULT.
2) by a boot cmdline with xmon or xmon=early or xmon=on to enable xmon
and xmon=off to disable xmon. This value will override that in step 1.
3) by a debugfs interface, as proposed in this patchset.
And this value can override those in step 1 and 2.
Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 14 Mar 2017 12:36:48 +0000 (22:36 +1000)]
powerpc/64s: POWER8 add missing machine check definitions
POWER8 uses bit 36 in SRR1 like POWER9 for i-side machine checks, and
contains several conditions for link timeouts that are not currently
handled.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 14 Mar 2017 12:36:47 +0000 (22:36 +1000)]
powerpc/64s: Data driven machine check handling
Move the handling (corrective action) of machine checks to the table
based evaluation.
This changes P7 and P8 ERAT flushing from using SLB flush to using ERAT
flush.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 14 Mar 2017 12:36:46 +0000 (22:36 +1000)]
powerpc/64s: Data driven machine check evaluation
Have machine types define i-side and d-side tables to describe their
machine check encodings, and match entries to evaluate (for reporting)
machine checks.
Functionality is mostly unchanged (tested with a userspace harness), but
it does make a change in that it no longer records DAR as the effective
address for those errors where it is specified to be invalid (which is a
reporting change only).
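Conceptually, the per-machine tables pair an SRR1 bit pattern with a
decoded error, roughly as sketched below (struct and field names are
illustrative, not the actual mce_power.c definitions):

#include <linux/types.h>

/* Illustrative only: match i-side machine check SRR1 bits against a table. */
struct mce_ierror_entry {
	unsigned long srr1_mask;
	unsigned long srr1_value;
	int error_type;		/* e.g. SLB / ERAT / TLB / UE */
	bool nip_valid;		/* whether NIP is a usable effective address */
};

static const struct mce_ierror_entry *
mce_find_ierror(const struct mce_ierror_entry *table, unsigned long srr1)
{
	for (; table->srr1_mask; table++)
		if ((srr1 & table->srr1_mask) == table->srr1_value)
			return table;
	return NULL;		/* unrecognised: reported as an unknown error */
}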
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 14 Mar 2017 12:36:45 +0000 (22:36 +1000)]
powerpc/64s: Move POWER machine check defines into mce_power.c
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 14 Mar 2017 12:36:44 +0000 (22:36 +1000)]
powerpc/64s: Clean up machine check recovery flushing
Use the flush function introduced with the POWER9 machine check handler
for POWER7 and 8, rather than open coding it multiple times in callers.
There is a specific ERAT flush type introduced for POWER9, but the
POWER7-8 ERAT errors continue to do SLB flushing (which also flushes
ERAT), so as not to introduce functional changes with this cleanup
patch.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 14 Mar 2017 12:36:43 +0000 (22:36 +1000)]
powerpc/64s: Machine check print NIP
Print the faulting address of the machine check, which may help with
debugging. The effective address reported can be a target memory address
rather than the faulting instruction address.
Fix up a dangling bracket while here.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Sat, 10 Sep 2016 10:01:30 +0000 (20:01 +1000)]
drivers/pcmcia: NO_IRQ removal for electra_cf.c
We'd like to eventually remove NO_IRQ on powerpc, so remove usages of it
from electra_cf.c which is a powerpc-only driver.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Geert Uytterhoeven [Sun, 12 Mar 2017 13:17:00 +0000 (14:17 +0100)]
MAINTAINERS: Add file patterns for powerpc device tree bindings
Submitters of device tree binding documentation may forget to CC
the subsystem maintainer if this is missing.
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Ben Hutchings [Fri, 2 Dec 2016 02:38:38 +0000 (02:38 +0000)]
powerpc: Fix missing CRCs, add more asm-prototypes.h declarations
Add declarations for:
- __mfdcr, __mtdcr (if CONFIG_PPC_DCR_NATIVE=y; through <asm/dcr.h>)
- switch_mmu_context (if CONFIG_PPC_BOOK3S_64=n; through <asm/mmu_context.h>)
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Ben Hutchings [Fri, 2 Dec 2016 02:35:52 +0000 (02:35 +0000)]
powerpc/32: Remove Mac-on-Linux/rtlinux hooks
The symbols exported for use by MOL/rtlinux aren't getting CRCs and I
was about to fix that. But MOL is dead upstream, and the latest work on
it was to make it use KVM instead of its own kernel module. So remove
them instead.
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Laurent Dufour [Tue, 14 Feb 2017 16:45:12 +0000 (17:45 +0100)]
powerpc/mm: Move mmap_sem unlocking in do_page_fault()
Since the fault retry is now handled earlier, we can release the
mmap_sem lock earlier too and remove later unlocking previously done in
mm_fault_error().
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Laurent Dufour [Tue, 14 Feb 2017 16:45:11 +0000 (17:45 +0100)]
powerpc/mm: Handle VM_FAULT_RETRY earlier
In do_page_fault() if handle_mm_fault() returns VM_FAULT_RETRY, retry
the page fault handling before anything else.
This would simplify the handling of the mmap_sem lock in this part of
the code.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Laurent Dufour [Tue, 14 Feb 2017 16:45:10 +0000 (17:45 +0100)]
powerpc/mm: Move mmap_sem unlock up from do_sigbus
Move the mmap_sem release into do_sigbus()'s unique caller: mm_fault_error().
No functional changes.
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Tue, 21 Feb 2017 02:40:20 +0000 (13:40 +1100)]
powerpc/powernv/npu: Remove dead iommu code
PNV_IODA_PE_DEV is only used for NPU devices (emulated PCI bridges
representing NVLink). These are added to IOMMU groups with corresponding
NVIDIA devices after all non-NPU PEs are setup; a special helper -
pnv_pci_ioda_setup_iommu_api() - handles this in pnv_pci_ioda_fixup().
The pnv_pci_ioda2_setup_dma_pe() helper sets up DMA for a PE. It is called
for VFs (so it does not handle NPU case) and PCI bridges but only
IODA1 and IODA2 types. An NPU bridge has its own type id (PNV_PHB_NPU)
so pnv_pci_ioda2_setup_dma_pe() cannot be called on NPU and therefore
(pe->flags & PNV_IODA_PE_DEV) is always "false".
This removes the unused iommu_add_device() call. This should not cause
any behavioural change.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Alexey Kardashevskiy [Tue, 21 Feb 2017 02:38:54 +0000 (13:38 +1100)]
powerpc/powernv: Fix it_ops::get() callback to return in cpu endian
The iommu_table_ops callbacks are declared CPU endian as they take and
return "unsigned long"; underlying hardware tables are big-endian.
However get() was missing be64_to_cpu(), this adds the missing conversion.
The only caller of this is crash dump at arch/powerpc/kernel/iommu.c,
iommu_table_clear() which only compares TCE to zero so this change
should not cause behavioral change.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tobin C. Harding [Mon, 6 Mar 2017 08:49:46 +0000 (19:49 +1100)]
powerpc/ftrace: Add prototype for prepare_ftrace_return()
Sparse emits a warning: symbol 'prepare_ftrace_return' was not
declared. Should it be static? prepare_ftrace_return() is called from
assembler and should not be static.
Add a prototype for it to asm-prototypes.h and include that in ftrace.c.
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tobin C. Harding [Mon, 6 Mar 2017 08:25:31 +0000 (19:25 +1100)]
powerpc/swsusp: Include suspend.h to silence sparse warnings
Sparse emits two symbol not declared warnings for swsusp.c. The two
functions, save_processor_state() and restore_processor_state() are
declared already in suspend.h, so include it.
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Reviewed-by: Daniel Axtens <dja@axtens.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Tobin C. Harding [Tue, 7 Mar 2017 09:32:42 +0000 (20:32 +1100)]
powerpc/pseries: Move struct hcall_stats to hvCall_inst.c
struct hcall_stats is only used in hvCall_inst.c, so move it there.
Signed-off-by: Tobin C. Harding <me@tobin.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 6 Feb 2017 10:13:28 +0000 (21:13 +1100)]
selftests/powerpc: Add cache_shape sniff test
This is a very basic test of the new cache shape AUXV entries. All it
does at the moment is look for the entries and error out if we don't
find all the ones we expect. Primarily intended for folks bringing up a
new chip to check that the cache info is making it all the way to
userspace correctly.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Mon, 6 Feb 2017 10:13:27 +0000 (21:13 +1100)]
selftests/powerpc: Refactor the AUXV routines
Refactor the AUXV routines so they are more composable. In a future test
we want to look for many AUXV entries and we don't want to have to read
/proc/self/auxv each time.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Hamish Martin [Fri, 24 Feb 2017 00:52:10 +0000 (13:52 +1300)]
powerpc/64: Allow for THREAD_SIZE > 16k
Fix an assembler error when the THREAD_SIZE is greater than 16k.
Signed-off-by: Hamish Martin <hamish.martin@alliedtelesis.co.nz>
Reviewed-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Hamish Martin [Fri, 24 Feb 2017 00:52:09 +0000 (13:52 +1300)]
powerpc: Move THREAD_SHIFT config to Kconfig
Move the logic for defining THREAD_SHIFT to Kconfig, in order to allow
it to be overridden by users.
Signed-off-by: Hamish Martin <hamish.martin@alliedtelesis.co.nz>
Reviewed-by: Chris Packham <chris.packham@alliedtelesis.co.nz>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Linus Torvalds [Mon, 20 Mar 2017 02:09:39 +0000 (19:09 -0700)]
Linux 4.11-rc3
Linus Torvalds [Mon, 20 Mar 2017 02:00:47 +0000 (19:00 -0700)]
mm/swap: don't BUG_ON() due to uninitialized swap slot cache
This BUG_ON() triggered for me once at shutdown, and I don't see a
reason for the check. The code correctly checks whether the swap slot
cache is usable or not, so an uninitialized swap slot cache is not
actually problematic afaik.
I've temporarily just switched the BUG_ON() to a WARN_ON_ONCE(), since
I'm not sure why that seemingly pointless check was there. I suspect
the real fix is to just remove it entirely, but for now we'll warn about
it but not bring the machine down.
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>