7 years ago powerpc/mm: Implement STRICT_KERNEL_RWX on PPC32
Christophe Leroy [Wed, 2 Aug 2017 13:51:05 +0000 (15:51 +0200)]
powerpc/mm: Implement STRICT_KERNEL_RWX on PPC32

This patch implements STRICT_KERNEL_RWX on PPC32.

As for CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
in order to allow page protection setup at the level of each page.

As BAT/LTLB mappings are deactivated, there might be a performance
impact.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32
Christophe Leroy [Wed, 2 Aug 2017 13:51:03 +0000 (15:51 +0200)]
powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32

As seen below, although the init sections have been freed, the
associated memory area is still marked as executable in the
page tables.

~ dmesg
[    5.860093] Freeing unused kernel memory: 592K (c0570000 - c0604000)

~ cat /sys/kernel/debug/kernel_page_tables
---[ Start of kernel VM ]---
0xc0000000-0xc0497fff        4704K  rw  X  present dirty accessed shared
0xc0498000-0xc056ffff         864K  rw     present dirty accessed shared
0xc0570000-0xc059ffff         192K  rw  X  present dirty accessed shared
0xc05a0000-0xc7ffffff      125312K  rw     present dirty accessed shared
---[ vmalloc() Area ]---

This patch fixes that.

The implementation is done by reusing the change_page_attr()
function implemented for CONFIG_DEBUG_PAGEALLOC.

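As a rough sketch of the idea (assuming the change_page_attr(page,
numpages, prot) helper used by CONFIG_DEBUG_PAGEALLOC, and a
hypothetical wrapper name):

  static void mark_init_ram_nx(unsigned long start, unsigned long end)
  {
          /* clear the execute permission on the freed init range */
          change_page_attr(virt_to_page(start),
                           (end - start) >> PAGE_SHIFT, PAGE_KERNEL);
  }
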
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs
Christophe Leroy [Wed, 2 Aug 2017 13:51:01 +0000 (15:51 +0200)]
powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs

__change_page_attr() uses flush_tlb_page().
flush_tlb_page() uses the tlbie instruction, which also invalidates
pinned TLBs, and that is not what we expect.

This patch modifies the implementation to use flush_tlb_kernel_range()
instead. This makes use of tlbia, which preserves pinned TLBs.

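In outline (a sketch only; the real change lives in __change_page_attr()):

  /* before: tlbie-based flush, also drops pinned TLB entries */
  flush_tlb_page(NULL, address);

  /* after: tlbia-based flush, pinned TLB entries survive */
  flush_tlb_kernel_range(address, address + PAGE_SIZE);
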
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Reduce DTLB miss handler by one insn
Christophe Leroy [Wed, 12 Jul 2017 10:08:57 +0000 (12:08 +0200)]
powerpc/8xx: Reduce DTLB miss handler by one insn

This reduces the DTLB miss handler hot path (user address path)
by one instruction by preserving r10.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: mark init functions with __init
Christophe Leroy [Wed, 12 Jul 2017 10:08:55 +0000 (12:08 +0200)]
powerpc/8xx: mark init functions with __init

setup_initial_memory_limit() is only called during init.
mmu_patch_cmp_limit() is only called from 8xx_mmu.c

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Do not allow Pinned TLBs with STRICT_KERNEL_RWX or DEBUG_PAGEALLOC
Christophe Leroy [Wed, 12 Jul 2017 10:08:53 +0000 (12:08 +0200)]
powerpc/8xx: Do not allow Pinned TLBs with STRICT_KERNEL_RWX or DEBUG_PAGEALLOC

Pinning TLBs bypasses STRICT_KERNEL_RWX or DEBUG_PAGEALLOC protections
so it should only be allowed when those are not selected

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Make pinning of ITLBs optional
Christophe Leroy [Wed, 12 Jul 2017 10:08:51 +0000 (12:08 +0200)]
powerpc/8xx: Make pinning of ITLBs optional

As stated in a comment in head_8xx.S, today we "Always pin the first
8 MB ITLB to prevent ITLB misses while mucking around with SRR0/SRR1
in asm".

This issue has just been cleared by the preceding patch, therefore
we can make this pinning optional (on by default) and independent
of DATA pinning.

This patch also makes pinning of IMMR independent of pinning of DATA.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/32: Avoid risk of unrecoverable TLBmiss inside entry_32.S
Christophe Leroy [Wed, 12 Jul 2017 10:08:49 +0000 (12:08 +0200)]
powerpc/32: Avoid risk of unrecoverable TLBmiss inside entry_32.S

By default, the 8xx pins an ITLB on the first 8M of memory in order
to avoid any ITLB miss on kernel code.
However, with some debug functions like DEBUG_PAGEALLOC and
DEBUG_RODATA, pinning TLBs is contradictory.

In order to avoid any ITLB miss in a critical section without pinning
TLBs, we have to ensure that there is no page boundary crossed between
the setup of a new value in SRR0/SRR1 and the associated RFI.

The functions modifying SRR0/SRR1 are all located in entry_32.S.
They are spread over almost 4 kbytes.

The patch forces a 12-bit (4 kbyte) alignment for those
functions. This guarantees that the functions remain in a
single 4k page.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Remove macro that checks kernel address
Christophe Leroy [Wed, 12 Jul 2017 10:08:47 +0000 (12:08 +0200)]
powerpc/8xx: Remove macro that checks kernel address

The macro to check whether an address is a kernel address is no
longer used in the DTLB miss handler. It is still used in the ITLB
miss handler and in the DTLB error handler. The DTLB error handler
is not a hot path, so it doesn't need such optimisation.

In order to simplify a following patch which will rework ITLB miss
handler, we remove the macros and reintroduce them inside the handler.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Ensure RAM mapped with LTLB is seen as block mapped on 8xx.
Christophe Leroy [Wed, 12 Jul 2017 10:08:45 +0000 (12:08 +0200)]
powerpc/8xx: Ensure RAM mapped with LTLB is seen as block mapped on 8xx.

On the 8xx, the RAM mapped with LTLBs must be seen as block mapped,
just like areas mapped with BATs on standard PPC32.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/chrp: Store the intended structure
Julia Lawall [Sun, 13 Aug 2017 13:24:23 +0000 (15:24 +0200)]
powerpc/chrp: Store the intended structure

Normally the value in the resource field and the argument to ARRAY_SIZE
in the num_resources field refer to the same array. In this case, the
value in the resource field is the same as the one in the previous
platform_device structure, and appears to be a copy-paste error. Replace
the value in the resource field with the array whose size is computed by
the local call to ARRAY_SIZE.

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/l2cr_6xx: Fix invalid use of register expressions
Andreas Schwab [Mon, 14 Aug 2017 18:42:43 +0000 (20:42 +0200)]
powerpc/l2cr_6xx: Fix invalid use of register expressions

This fixes another invalid use of register expressions.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/iommu: Avoid undefined right shift in iommu_range_alloc()
Michael Ellerman [Tue, 8 Aug 2017 07:06:32 +0000 (17:06 +1000)]
powerpc/iommu: Avoid undefined right shift in iommu_range_alloc()

In iommu_range_alloc() we generate a mask by right shifting ~0.
However, if the specified alignment is 0 then we right shift by 64,
which is undefined. UBSAN tells us so:

  UBSAN: Undefined behaviour in ../arch/powerpc/kernel/iommu.c:193:35
  shift exponent 64 is too large for 64-bit type 'long unsigned int'

We can avoid it by instead generating the mask with:

  align_mask = (1ull << align_order) - 1;

That will also generate an undefined shift if align_order is 64 or
greater, but that shouldn't be a problem for a while.

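For illustration, the two forms side by side (variable names assumed):

  unsigned long align_mask;

  /* old form: undefined when align_order == 0 (shift by 64) */
  align_mask = ~0ul >> (64 - align_order);

  /* new form: well defined for align_order 0 through 63 */
  align_mask = (1ull << align_order) - 1;
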
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/perf/imc: Fix nest events on multi-socket system
Anju T [Mon, 14 Aug 2017 11:42:23 +0000 (17:12 +0530)]
powerpc/perf/imc: Fix nest events on multi-socket system

In a multi-node system with discontiguous node ids, nest event values
are not showing up properly, e.g. lscpu output:

  NUMA node0 CPU(s): 0-15
  NUMA node8 CPU(s): 16-31

Nest event values on such systems can be counted on CPUs <= 15:

  $./perf stat -e 'nest_powerbus0_imc/PM_PB_CYC/' -C 0-14 -I 1000 sleep 1000
  #           time             counts unit events
       1.000294577    30,17,24,42,880 nest_powerbus0_imc/PM_PB_CYC/

But not on CPUs >= 16:

  $./perf stat -e 'nest_powerbus0_imc/PM_PB_CYC/' -C 16-28 -I 1000 sleep 1000
  #           time             counts unit events
       1.000049902    <not supported> nest_powerbus0_imc/PM_PB_CYC/

This is because, when fetching the reference count, the node id (which
may be sparse) is used as the array index, not the node number (which
is 0 based and contiguous).

Fix it by using the node number as the array index.

  $./perf stat -e 'nest_powerbus0_imc/PM_PB_CYC/' -C 16-28 -I 1000 sleep 1000
  #           time             counts unit events
       1.000241961    26,12,35,28,704 nest_powerbus0_imc/PM_PB_CYC/

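One way to picture the fix (a sketch with hypothetical names): derive a
dense, 0-based index from the sparse node id instead of indexing the
reference count array with the id itself:

  int nid, idx = 0;

  /* nid values may be sparse (0, 8, ...); idx stays dense (0, 1, ...) */
  for_each_online_node(nid) {
          if (nid == target_nid)
                  break;
          idx++;
  }
  ref = &nest_imc_refc[idx];      /* not nest_imc_refc[target_nid] */
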
Signed-off-by: Anju T Sudhakar <anju@linux.vnet.ibm.com>
[mpe: Change log tweaks for clarity and brevity]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/nohash: Move definition of PGALLOC_GFP to fix build errors
Michael Ellerman [Tue, 15 Aug 2017 10:02:56 +0000 (20:02 +1000)]
powerpc/mm/nohash: Move definition of PGALLOC_GFP to fix build errors

In some obscure Book3E configs (randconfig) we can end up missing a
definition for PGALLOC_GFP in pgtable_64.c.

Fix it by moving the definition to asm/pgalloc.h.

Fixes: de3b87611dd1 ("powerpc/mm/book(e)(3s)/64: Add page table accounting")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/xmon: Exclude all of xmon from ftrace
Naveen N. Rao [Wed, 2 Aug 2017 18:25:38 +0000 (23:55 +0530)]
powerpc/xmon: Exclude all of xmon from ftrace

Exclude core xmon files from ftrace (along with an xmon xive helper
outside of xmon/) to minimize impact of ftrace while within xmon.

Before:
  /sys/kernel/debug/tracing# grep -ci xmon available_filter_functions
  26

After:
  /sys/kernel/debug/tracing# grep -ci xmon available_filter_functions
  0

Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
[mpe: Use $(subst ..) on KBUILD_CFLAGS rather than CFLAGS_REMOVE_xxx]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/xmon: Disable tracing when entering xmon
Breno Leitao [Wed, 2 Aug 2017 20:14:06 +0000 (17:14 -0300)]
powerpc/xmon: Disable tracing when entering xmon

If tracing is enabled and you get into xmon, the tracing buffer
continues to be updated, causing possible loss of data and unnecessary
tracing information coming from xmon functions.

This patch simply disables tracing when entering xmon, and re-enables it
if the kernel is resumed (with 'x').

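The pattern can be sketched with the generic tracing API; treat this as
an outline rather than the exact diff:

  tracing_enabled = tracing_is_on();
  tracing_off();                  /* stop filling the buffer in xmon */

  /* ... debugger command loop runs here ... */

  if (tracing_enabled)
          tracing_on();           /* only when resuming with 'x' */
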
Signed-off-by: Breno Leitao <leitao@debian.org>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/xmon: Dump ftrace buffers for the current CPU only
Breno Leitao [Wed, 2 Aug 2017 20:14:05 +0000 (17:14 -0300)]
powerpc/xmon: Dump ftrace buffers for the current CPU only

Currently the xmon 'dt' command dumps the tracing buffer for all CPUs,
which makes it very hard to read, given that most powerpc machines now
have many CPUs. On top of that, the CPU lines are interleaved in the
ftrace log.

This new option just dumps the ftrace buffer for the current CPU.

Signed-off-by: Breno Leitao <leitao@debian.org>
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago drivers/macintosh: Make wf_control_ops and wf_pid_param const
Bhumika Goyal [Fri, 11 Aug 2017 17:38:45 +0000 (23:08 +0530)]
drivers/macintosh: Make wf_control_ops and wf_pid_param const

Make wf_control_ops const as they are only stored in the ops field of a
wf_control structure, which is const.
Make wf_pid_param const as they are only used during a copy operation.
Done using Coccinelle.

Signed-off-by: Bhumika Goyal <bhumirks@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/perf: Fix double unlock in imc_common_cpuhp_mem_free()
Dan Carpenter [Fri, 11 Aug 2017 20:05:41 +0000 (23:05 +0300)]
powerpc/perf: Fix double unlock in imc_common_cpuhp_mem_free()

This function is not called with the nest_init_lock held, and it also
unlocks the nest_init_lock immediately below, so it's fairly clear
that this is a typo and should be locking the lock.

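In outline (a sketch of the pattern, not the full function):

  /* before: releases a lock that was never taken */
  mutex_unlock(&nest_init_lock);
  /* ... teardown that must run under the lock ... */
  mutex_unlock(&nest_init_lock);

  /* after: the first call takes the lock instead */
  mutex_lock(&nest_init_lock);
  /* ... teardown ... */
  mutex_unlock(&nest_init_lock);
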
Fixes: 885dcd709ba9 ("powerpc/perf: Add nest IMC PMU support")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Fix two CONFIG_8xx left behind
Christophe Leroy [Mon, 14 Aug 2017 07:14:19 +0000 (09:14 +0200)]
powerpc/8xx: Fix two CONFIG_8xx left behind

Commit 968159c0031ac ("powerpc/8xx: Getting rid of remaining use of
CONFIG_8xx") removed all but 2 references to 8xx in Kconfigs.

This patch removes the two remaining ones.

Fixes: 968159c0031a ("powerpc/8xx: Getting rid of remaining use of CONFIG_8xx")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/xive: Fix section mismatch warnings
Michael Ellerman [Tue, 8 Aug 2017 11:44:14 +0000 (21:44 +1000)]
powerpc/xive: Fix section mismatch warnings

Both xive_core_init() and xive_native_init() are called from and call
__init routines, so they should also be __init.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Fix section mismatch warning in early_check_vec5()
Michael Ellerman [Tue, 8 Aug 2017 11:44:08 +0000 (21:44 +1000)]
powerpc/mm: Fix section mismatch warning in early_check_vec5()

early_check_vec5() is called from and calls __init routines, so should
also be __init.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Remove cpu dependent macro instructions from head_8xx
Christophe Leroy [Tue, 8 Aug 2017 11:59:02 +0000 (13:59 +0200)]
powerpc/8xx: Remove cpu dependent macro instructions from head_8xx

head_8xx is dedicated to the 8xx, so there is no need to use macros
that depend on the CPU.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Use symbolic names for DSISR bits in DSI
Christophe Leroy [Tue, 8 Aug 2017 11:59:00 +0000 (13:59 +0200)]
powerpc/8xx: Use symbolic names for DSISR bits in DSI

Use symbolic names for DSISR bits in DSI

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Use symbolic PVR value
Christophe Leroy [Tue, 8 Aug 2017 11:58:58 +0000 (13:58 +0200)]
powerpc/8xx: Use symbolic PVR value

For the 8xx, the PVR values defined in arch/powerpc/include/asm/reg.h
are not used anywhere.

Remove all defines and add PVR_8xx

Use it in arch/powerpc/kernel/cputable.c

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: remove CONFIG_8xx
Christophe Leroy [Tue, 8 Aug 2017 11:58:56 +0000 (13:58 +0200)]
powerpc/8xx: remove CONFIG_8xx

Two config options exist to define powerpc MPC8xx:
* CONFIG_PPC_8xx
* CONFIG_8xx

arch/powerpc/platforms/Kconfig.cputype has contained the following
comment about CONFIG_8xx item for some years:
"# this is temp to handle compat with arch=ppc"

There are no more users of CONFIG_8xx, so remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Getting rid of remaining use of CONFIG_8xx
Christophe Leroy [Tue, 8 Aug 2017 11:58:54 +0000 (13:58 +0200)]
powerpc/8xx: Getting rid of remaining use of CONFIG_8xx

Two config options exist to define powerpc MPC8xx:
* CONFIG_PPC_8xx
* CONFIG_8xx

arch/powerpc/platforms/Kconfig.cputype has contained the following
comment about CONFIG_8xx item for some years:
"# this is temp to handle compat with arch=ppc"

arch/powerpc is now the only place with remaining uses of
CONFIG_8xx: get rid of them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kconfig: Simplify PCI_QSPAN selection
Christophe Leroy [Tue, 8 Aug 2017 11:58:52 +0000 (13:58 +0200)]
powerpc/kconfig: Simplify PCI_QSPAN selection

4xx, CPM2 and 8xx cannot be selected at the same time, so
no need to test 8xx && !4xx && !CPM2. Testing 8xx is enough.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/time: refactor MFTB() to limit number of ifdefs
Christophe Leroy [Tue, 8 Aug 2017 11:58:50 +0000 (13:58 +0200)]
powerpc/time: refactor MFTB() to limit number of ifdefs

The 8xx cannot access the TBL and TBU registers using mfspr/mtspr;
they must be accessed using mftb/mftbu.

Due to this, there are a number of places with #ifdef CONFIG_8xx.

This patch defines new macros MFTBL(x) and MFTBU(x) on the same model
as MFTB(x) and tries to make use of them as much as possible.

In arch/powerpc/include/asm/timex.h, we also remove the ifdef
for the asm() operands as the compiler doesn't mind unused operands

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Move mpc8xx_pic.c from sysdev to platform/8xx
Christophe Leroy [Tue, 8 Aug 2017 11:58:48 +0000 (13:58 +0200)]
powerpc/8xx: Move mpc8xx_pic.c from sysdev to platform/8xx

mpc8xx_pic.c is dedicated to the 8xx, so move it to platform/8xx

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/cpm1: link to CONFIG_CPM1 instead of CONFIG_8xx
Christophe Leroy [Tue, 8 Aug 2017 11:58:46 +0000 (13:58 +0200)]
powerpc/cpm1: link to CONFIG_CPM1 instead of CONFIG_8xx

To remain consistent with what is done with CPM2, let's link
CPM1 related parts to CONFIG_CPM1 instead of CONFIG_8xx

When something depends on both CPM1 and CPM2 we associate it
with CONFIG_CPM

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Remove SoftwareEmulation()
Christophe Leroy [Tue, 8 Aug 2017 11:58:44 +0000 (13:58 +0200)]
powerpc/8xx: Remove SoftwareEmulation()

Since commit aa42c69c67f82 ("[POWERPC] Add support for FP emulation
for the e300c2 core"), program_check_exception() can be called for
math emulation. In that case, 'reason' is 0.

On the 8xx, there is a Software Emulation interrupt which is
called for all unimplemented and illegal instructions. This
interrupt calls SoftwareEmulation() which does almost the
same as program_check_exception() called with reason = 0.

The Software Emulation interrupt sets all reason bits to 0,
it is therefore possible to call program_check_exception()
directly from the interrupt handler.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Move 8xx machine check handlers into platforms/8xx
Christophe Leroy [Tue, 8 Aug 2017 11:58:42 +0000 (13:58 +0200)]
powerpc/8xx: Move 8xx machine check handlers into platforms/8xx

In the same spirit as what was done for 4xx and 44x, move
the 8xx machine check into platforms/8xx

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/8xx: Simplify CONFIG_8xx checks in Makefile
Christophe Leroy [Tue, 8 Aug 2017 11:58:40 +0000 (13:58 +0200)]
powerpc/8xx: Simplify CONFIG_8xx checks in Makefile

The entire 8xx directory is omitted if CONFIG_8xx is not enabled, so
within the 8xx/Makefile CONFIG_8xx is always y. So convert
obj-$(CONFIG_8xx) to the more obvious obj-y.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/traps: Use SRR1 defines for program check reasons
Michael Ellerman [Tue, 8 Aug 2017 06:39:25 +0000 (16:39 +1000)]
powerpc/traps: Use SRR1 defines for program check reasons

Currently we open code the reason codes for program checks. Instead use
the existing SRR1 defines.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mce: Move 64-bit machine check code into mce.c
Michael Ellerman [Tue, 8 Aug 2017 06:39:24 +0000 (16:39 +1000)]
powerpc/mce: Move 64-bit machine check code into mce.c

We already have mce.c which is built for 64bit and contains other parts
of the machine check code, so move these bits in there too.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/traps: machine_check_generic() is only used on 32-bit
Michael Ellerman [Tue, 8 Aug 2017 06:39:23 +0000 (16:39 +1000)]
powerpc/traps: machine_check_generic() is only used on 32-bit

Make it clear that the fallback version of machine_check_generic() is
only used on 32-bit configs.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/traps: Inline get_mc_reason()
Michael Ellerman [Tue, 8 Aug 2017 06:39:22 +0000 (16:39 +1000)]
powerpc/traps: Inline get_mc_reason()

get_mc_reason() no longer provides (if it ever really did) any
meaningful abstraction, so remove it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/4xx: Move machine_check_4xx() into platforms/4xx
Michael Ellerman [Tue, 8 Aug 2017 06:39:21 +0000 (16:39 +1000)]
powerpc/4xx: Move machine_check_4xx() into platforms/4xx

Now that we have 4xx platform directory we can move the 4xx machine
check handler in there. Again we drop get_mc_reason() and replace it
with regs->dsisr directly (which is actually SPRN_ESR).

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/4xx: Create 4xx pseudo-platform in platforms/4xx
Michael Ellerman [Tue, 8 Aug 2017 06:39:20 +0000 (16:39 +1000)]
powerpc/4xx: Create 4xx pseudo-platform in platforms/4xx

We have a lot of code in sysdev for supporting 4xx, ie. either 40x or
44x. Instead it would be cleaner if it was all in platforms/4xx.

This is slightly odd in that we don't actually define any machines in
the 4xx platform, as is usual for a platform directory. But still it
seems like a better result to have all this related code in a directory
by itself.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/44x: Move 44x machine check handlers into platforms/44x
Michael Ellerman [Tue, 8 Aug 2017 06:39:19 +0000 (16:39 +1000)]
powerpc/44x: Move 44x machine check handlers into platforms/44x

We have several 44x machine check handlers defined in traps.c. It would
be preferable if they were split out with the platforms that use them.
Do that.

In the process, drop get_mc_reason() and instead just open code the
lookup of reason from regs->dsisr. This avoids a pointless layer of
abstraction.

We know to use regs->dsisr because 44x enables BOOKE which enables
PPC_ADV_DEBUG_REGS, and FSL_BOOKE is not enabled on 44x builds.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/44x: Simplify CONFIG_44x checks in Makefile
Michael Ellerman [Tue, 8 Aug 2017 06:39:18 +0000 (16:39 +1000)]
powerpc/44x: Simplify CONFIG_44x checks in Makefile

The entire 44x directory is omitted if CONFIG_44x is not enabled, so
within the 44x/Makefile CONFIG_44x is always y. So convert
obj-$(CONFIG_44x) to the more obvious obj-y.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/47x: Guard 47x cputable entries with CONFIG_PPC_47x
Michael Ellerman [Tue, 8 Aug 2017 06:39:17 +0000 (16:39 +1000)]
powerpc/47x: Guard 47x cputable entries with CONFIG_PPC_47x

Currently we build the 47x cputable entries even when CONFIG_PPC_47x is
disabled. That means a kernel built without CONFIG_PPC_47x will claim to
support a 47x CPU and start booting, only to break somewhere later
because it doesn't have 47x support compiled in.

So guard the 47x cputable entries with CONFIG_PPC_47x. Note that this is
inside the #ifdef CONFIG_44x section, because 47x depends on 44x.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Add support to clear sensor groups data
Shilpasri G Bhat [Thu, 10 Aug 2017 03:31:20 +0000 (09:01 +0530)]
powerpc/powernv: Add support to clear sensor groups data

Adds support for clearing different sensor groups. OCC inband sensor
groups like CSM, Profiler, Job Scheduler can be cleared using this
driver. The min/max of all sensors belonging to these sensor groups
will be cleared.

Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Add support to set power-shifting-ratio
Shilpasri G Bhat [Thu, 10 Aug 2017 03:31:19 +0000 (09:01 +0530)]
powerpc/powernv: Add support to set power-shifting-ratio

This patch adds support to set the power-shifting-ratio, which hints the
firmware how to distribute/throttle power between different entities in
a system (e.g. CPU vs. GPU). This ratio is used by the OCC for its power
capping algorithm.

Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Add support for powercap framework
Shilpasri G Bhat [Thu, 10 Aug 2017 03:31:18 +0000 (09:01 +0530)]
powerpc/powernv: Add support for powercap framework

Adds a generic powercap framework to change the system powercap
inband through OPAL-OCC command/response interface.

Signed-off-by: Shilpasri G Bhat <shilpa.bhat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/pseries: Don't print failure message in energy driver
Nicholas Piggin [Fri, 21 Jul 2017 01:16:44 +0000 (11:16 +1000)]
powerpc/pseries: Don't print failure message in energy driver

This driver currently reports the H_BEST_ENERGY hypervisor call is
unsupported (even when booting in a non-virtualised environment). This
is not something the administrator can do much with, and not
significant for debugging.

Remove it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/lib/sstep: Add isel instruction emulation
Matt Brown [Mon, 31 Jul 2017 00:58:26 +0000 (10:58 +1000)]
powerpc/lib/sstep: Add isel instruction emulation

This adds emulation for the isel instruction.
Tested for correctness against the isel instruction and its extended
mnemonics (lt, gt, eq) on ppc64le.

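The semantics are small enough to sketch in C (IBM bit numbering, where
CR bit 0 is the most significant bit of the 32-bit CR; names assumed):

  /* isel RT,RA,RB,BC: RT = CR[BC] ? (RA) : (RB); RA == r0 reads as 0 */
  static unsigned long emulate_isel(unsigned int cr, int bc,
                                    unsigned long ra_val, unsigned long rb_val)
  {
          return (cr & (0x80000000u >> bc)) ? ra_val : rb_val;
  }
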
Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/lib/sstep: Add prty instruction emulation
Matt Brown [Mon, 31 Jul 2017 00:58:25 +0000 (10:58 +1000)]
powerpc/lib/sstep: Add prty instruction emulation

This adds emulation for the prtyw and prtyd instructions.
Tested for logical correctness against the prtyw and prtyd instructions
on ppc64le.

Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/lib/sstep: Add bpermd instruction emulation
Matt Brown [Mon, 31 Jul 2017 00:58:24 +0000 (10:58 +1000)]
powerpc/lib/sstep: Add bpermd instruction emulation

This adds emulation for the bpermd instruction.
Tested for correctness against the bpermd instruction on ppc64le.

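bpermd gathers eight selected bits of RB, indexed by the byte values of
RS; a sketch of the semantics (IBM bit 0 is the MSB; names assumed):

  static unsigned long emulate_bpermd(unsigned long rs, unsigned long rb)
  {
          unsigned long result = 0;
          int i;

          for (i = 0; i < 8; i++) {
                  int idx = (rs >> (i * 8)) & 0xff;

                  /* an out-of-range index selects a zero bit */
                  if (idx < 64 && (rb & (1ul << (63 - idx))))
                          result |= 1ul << i;
          }
          return result;
  }
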
Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/lib/sstep: Add popcnt instruction emulation
Matt Brown [Mon, 31 Jul 2017 00:58:23 +0000 (10:58 +1000)]
powerpc/lib/sstep: Add popcnt instruction emulation

This adds emulations for the popcntb, popcntw, and popcntd instructions.
Tested for correctness against the popcnt{b,w,d} instructions on ppc64le.

Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/lib/sstep: Add cmpb instruction emulation
Matt Brown [Mon, 31 Jul 2017 00:58:22 +0000 (10:58 +1000)]
powerpc/lib/sstep: Add cmpb instruction emulation

This patch adds emulation of the cmpb instruction, enabling xmon to
emulate this instruction.
Tested for correctness against the cmpb asm instruction on ppc64le.

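cmpb compares RS and RB byte by byte and produces 0xff for each
matching byte; a sketch of the semantics (names assumed):

  static unsigned long emulate_cmpb(unsigned long rs, unsigned long rb)
  {
          unsigned long mask = 0;
          int i;

          for (i = 0; i < 8; i++) {
                  unsigned long sel = 0xfful << (i * 8);

                  if ((rs & sel) == (rb & sel))
                          mask |= sel;    /* matching byte -> all ones */
          }
          return mask;
  }
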
Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/perf: Cleanup of PM_BR_CMPL vs. PM_BRU_CMPL in Power9 event list
Madhavan Srinivasan [Wed, 9 Aug 2017 12:48:24 +0000 (22:48 +1000)]
powerpc/perf: Cleanup of PM_BR_CMPL vs. PM_BRU_CMPL in Power9 event list

Fixes: 34922527a2bc ("powerpc/perf: Add power9 event list macros for generic and cache events")
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/perf: Add PM_LD_MISS_L1 and PM_BR_2PATH to power9 event list
Madhavan Srinivasan [Mon, 31 Jul 2017 09:33:21 +0000 (15:03 +0530)]
powerpc/perf: Add PM_LD_MISS_L1 and PM_BR_2PATH to power9 event list

Add a couple more events (PM_LD_MISS_L1 and PM_BR_2PATH) to the
power9 event list and power9_event_alternatives array (these
events can be counted in more than one PMC).

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/perf: Factor out PPMU_ONLY_COUNT_RUN check code from power8
Madhavan Srinivasan [Mon, 31 Jul 2017 08:02:41 +0000 (13:32 +0530)]
powerpc/perf: Factor out PPMU_ONLY_COUNT_RUN check code from power8

There are some hardware events on Power systems which only count when
the processor is not idle, and there are some fixed-function counters
which count such events. For example, the "run cycles" event counts
cycles when the processor is not idle. If the user asks to count
cycles, we can use "run cycles" if this is a per-task event, since the
processor is running when the task is running, by definition. We can't
use "run cycles" if the user asks for "cycles" on a system-wide
counter.

Currently in power8 this check is done using PPMU_ONLY_COUNT_RUN flag
in power8_get_alternatives() function. Based on the flag, events are
switched if needed. This function should also be enabled in power9, so
factor out the code to isa207_get_alternatives().

Fixes: efe881afdd999 ('powerpc/perf: Factor out event_alternative function')
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/perf: Update default sdar_mode value for power9
Madhavan Srinivasan [Tue, 25 Jul 2017 05:35:51 +0000 (11:05 +0530)]
powerpc/perf: Update default sdar_mode value for power9

Commit 20dd4c624d251 ('powerpc/perf: Fix SDAR_MODE value for continous
sampling on Power9') set the default sdar_mode value in MMCRA[SDAR_MODE]
to be used as 0b01 (Update on TLB miss). And this value is set if the
sdar_mode from the event is zero, or we are in continuous sampling mode
on power9 DD1.

But it is preferred to have the sdar_mode value for power9 as
0b10 (Update on dcache miss) for better sampling updates instead
of 0b01 (Update on TLB miss).

From Anton:

Using a bandwidth test case with a 1MB footprint, I profiled cycles and
chose TLB updates of the SDAR:

  $ perf record -d -e r000400000000001E:u ./bw2001 1M
                        ^
                        SDAR TLB

  $ perf report -D | grep PERF_RECORD_SAMPLE | sed 's/.*addr: //' | sort -u | wc -l
  4

  I get 4 unique addresses. If I ran with dcache misses:

  $ perf record -d -e r000800000000001E:u ./bw2001 1M
                        ^
                        SDAR dcache miss

  $ perf report -D|grep PERF_RECORD_SAMPLE| sed 's/.*addr: //'|sort -u | wc -l
  5217

I get 5217 unique addresses. No surprises here, but it does show why
TLB misses are the wrong event to default to - we get very little useful
information out of them.

Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Acked-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/44x/fsp2: Enable eMMC arasan for fsp2 platform
Ivan Mikhaylov [Tue, 25 Jul 2017 11:40:04 +0000 (14:40 +0300)]
powerpc/44x/fsp2: Enable eMMC arasan for fsp2 platform

Add mmc0 changes for enabling arasan emmc and change
defconfig appropriately.

Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Properly invalidate when setting process table base
Suraj Jitindar Singh [Thu, 3 Aug 2017 04:15:51 +0000 (14:15 +1000)]
powerpc/mm: Properly invalidate when setting process table base

The host process table base is stored in the partition table by calling
the function native_register_process_table(). Currently this just sets
the entry in memory and is missing a subsequent cache invalidation
instruction. Any update to the partition table should be followed by a
cache invalidation instruction specifying invalidation of the caching of
any partition table entries (RIC = 2, PRS = 0).

We already have a function to update the partition table with the
required cache invalidation instructions - mmu_partition_table_set_entry().
Update the native_register_process_table() function to call
mmu_partition_table_set_entry(), this ensures all appropriate
invalidation will be performed.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[mpe: Use a local for patb0 to clean it up slightly]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/xive: Ensure active irqd when setting affinity
Benjamin Herrenschmidt [Wed, 2 Aug 2017 01:54:41 +0000 (20:54 -0500)]
powerpc/xive: Ensure active irqd when setting affinity

Ensure irqd is active before attempting to set affinity. This should
make the set affinity code more robust. For instance, this prevents
these messages seen on a 4.12 based kernel when taking cpus offline:

   [  123.053037264,3] XIVE[ IC 00  ] ISN 2 lead to invalid IVE !
   [   77.885859] xive: Error -6 reconfiguring irq 17
   [   77.885862] IRQ17: set affinity failed(-6).

That particular case has been fixed in 4.13-rc1 by commit
91f26cb4cd3c ("genirq/cpuhotplug: Do not migrated shutdown irqs").

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Add irq accounting for watchdog interrupts
Nicholas Piggin [Tue, 1 Aug 2017 12:00:54 +0000 (22:00 +1000)]
powerpc: Add irq accounting for watchdog interrupts

This adds an irq counter for the watchdog soft-NMI. This interrupt
only fires when interrupts are soft-disabled, so it will not
increment much even when the watchdog is running. However it's
useful for debugging and sanity checking.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Add irq accounting for system reset interrupts
Nicholas Piggin [Tue, 1 Aug 2017 12:00:53 +0000 (22:00 +1000)]
powerpc: Add irq accounting for system reset interrupts

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Fix powerpc-specific watchdog build configuration
Nicholas Piggin [Tue, 1 Aug 2017 12:00:52 +0000 (22:00 +1000)]
powerpc: Fix powerpc-specific watchdog build configuration

The powerpc kernel/watchdog.o should be built when HARDLOCKUP_DETECTOR
and HAVE_HARDLOCKUP_DETECTOR_ARCH are both selected. If only the former
is selected, then the generic perf watchdog has been selected.

To simplify this check, introduce a new Kconfig symbol PPC_WATCHDOG that
depends on both. This Kconfig option means the powerpc specific
watchdog is enabled.

Without this patch, Book3E will attempt to build the powerpc watchdog.

Fixes: 2104180a53 ("powerpc/64s: implement arch-specific hardlockup watchdog")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/64s: Fix mce accounting for powernv
Nicholas Piggin [Tue, 1 Aug 2017 12:00:51 +0000 (22:00 +1000)]
powerpc/64s: Fix mce accounting for powernv

On 64-bit Book3s, when we're in HV mode, we have already counted the
machine check exception in machine_check_early().

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Use IS_ENABLED() rather than an #ifdef]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/pseries: Check memory device state before onlining/offlining
Nathan Fontenot [Wed, 2 Aug 2017 18:03:22 +0000 (14:03 -0400)]
powerpc/pseries: Check memory device state before onlining/offlining

When DLPAR adds or removes memory, we need to check the device
offline status before trying to online/offline the memory. This is
needed because calls to device_online() and device_offline() will
return non-zero for memory that is already online or offline,
respectively.

This update resolves two scenarios. First, for a kernel built with
auto-online memory enabled (CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE=y),
memory will be onlined as part of calls to add_memory(). After adding
the memory the pseries DLPAR code tries to online it and fails since
the memory is already online. The DLPAR code then tries to remove the
memory which produces the oops message below because the memory is not
offline.

The second scenario occurs when removing memory that is already
offline, i.e. marking memory offline (via sysfs) and then trying to
remove that memory. This doesn't work because offlining the already
offline memory does not succeed and the DLPAR code then fails the
DLPAR remove operation.

The fix for both scenarios is to check the device.offline status
before making the calls to device_online() or device_offline().

  kernel BUG at mm/memory_hotplug.c:1936!
  ...
  NIP [c0000000002ca428] .remove_memory+0xb8/0xc0
  LR [c0000000002ca3cc] .remove_memory+0x5c/0xc0
  Call Trace:
    .remove_memory+0x5c/0xc0 (unreliable)
    .dlpar_add_lmb+0x384/0x400
    .dlpar_memory+0x5dc/0xca0
    .handle_dlpar_errorlog+0x74/0xe0
    .pseries_hp_work_fn+0x2c/0x90
    .process_one_work+0x17c/0x460
    .worker_thread+0x88/0x500
    .kthread+0x15c/0x1a0
    .ret_from_kernel_thread+0x58/0xc0

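The shape of the fix, as a sketch (hypothetical helper name; the real
code operates on the LMB's memory_block device):

  static int dlpar_online_lmb_dev(struct memory_block *mem_block)
  {
          /* device_online() returns non-zero for already-online memory */
          if (!mem_block->dev.offline)
                  return 0;

          return device_online(&mem_block->dev);
  }
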
Fixes: 943db62c316c ("powerpc/pseries: Revert 'Auto-online hotplugged memory'")
Signed-off-by: Nathan Fontenot <nfont@linux.vnet.ibm.com>
[mpe: Use bool, add explicit rc=0 case, change log typos & formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Fix invalid use of register expressions
Andreas Schwab [Sat, 5 Aug 2017 17:55:11 +0000 (19:55 +0200)]
powerpc: Fix invalid use of register expressions

binutils >= 2.26 now warns about misuse of register expressions in
assembler operands that are actually literals, for example:

  arch/powerpc/kernel/entry_64.S:535: Warning: invalid register expression

In practice these are almost all uses of r0 that should just be a
literal 0.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
[mpe: Mention r0 is almost always the culprit, fold in purgatory change]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/hash64: Make vmalloc 56T on hash
Michael Ellerman [Tue, 1 Aug 2017 10:29:24 +0000 (20:29 +1000)]
powerpc/mm/hash64: Make vmalloc 56T on hash

On 64-bit book3s, with the hash MMU, we currently define the kernel
virtual space (vmalloc, ioremap etc.), to be 16T in size. This is a
leftover from pre v3.7 when our user VM was also 16T.

Of that 16T we split it 50/50, with half used for PCI IO and ioremap
and the other 8T for vmalloc.

We never bothered to make it any bigger because 8T of vmalloc ought to
be enough for anybody. But it turns out that's not true: the per-cpu
allocator wants large amounts of vmalloc space, not to make large
allocations, but to allow a large stride between allocations, because
we use pcpu_embed_first_chunk().

With a bit of juggling we can increase the entire kernel virtual space
to 64T. The only real complication is the check of the address in the
SLB miss handler, see the comment in the code.

Although we could continue to split virtual space 50/50 as we do now,
no one seems to be running out of PCI IO or ioremap space. So instead
keep that as 8T, and use the remaining 56T for vmalloc.

In future we should be able to increase the kernel virtual space to
512T, the code already supports that, it just needs testing on older
hardware.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
7 years ago powerpc/mm/slb: Move comment next to the code it's referring to
Michael Ellerman [Tue, 1 Aug 2017 10:29:23 +0000 (20:29 +1000)]
powerpc/mm/slb: Move comment next to the code it's referring to

There is a comment in slb_allocate() referring to the load of
paca->vmalloc_sllp, but it's several lines prior in the assembly.
We're about to change this code, and we want to add another comment,
so move the comment immediately prior to the instruction it's talking
about.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/book3s64: Make KERN_IO_START a variable
Michael Ellerman [Tue, 1 Aug 2017 10:29:22 +0000 (20:29 +1000)]
powerpc/mm/book3s64: Make KERN_IO_START a variable

Currently KERN_IO_START is defined as:

 #define KERN_IO_START  (KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))

Although it looks like a constant, both the components are actually
variables, to allow us to have a different value between Radix and
Hash with a single kernel.

However that still requires both Radix and Hash to place the kernel IO
region at the same location relative to the start and end of the
kernel virtual region (namely 1/2 way through it), and we'd like to
change that.

So split KERN_IO_START out into its own variable, and initialise it
for Radix and Hash. In the medium term we should be able to
reconsolidate this, by doing a more involved rearrangement of the
location of the regions.

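A sketch of the change (per the commit description; details approximate):

  /* before: fixed at the halfway point of the kernel virtual region */
  #define KERN_IO_START (KERN_VIRT_START + (KERN_VIRT_SIZE >> 1))

  /* after: a real variable, initialised separately for Hash and Radix */
  extern unsigned long __kernel_io_start;
  #define KERN_IO_START __kernel_io_start
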
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Use darn instruction for get_random_seed() on Power9
Matt Brown [Fri, 4 Aug 2017 01:12:18 +0000 (11:12 +1000)]
powerpc/powernv: Use darn instruction for get_random_seed() on Power9

This adds powernv_get_random_darn() which utilises the darn instruction,
introduced in ISA v3.0/POWER9.

The darn instruction can potentially return an error, which is supported
by the get_random_seed() API, in normal usage if we see an error we just
return that to the caller.

However when detecting whether darn is functional at boot we try up to
10 times, before deciding that darn doesn't work and failing the
registration of get_random_seed(). That way an intermittent failure
at boot doesn't deprive the system of randomness until the next reboot.

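In outline (a sketch; darn() stands in for the inline asm issuing the
instruction, and DARN_ERR for its all-ones error value):

  static int powernv_get_random_darn(unsigned long *v)
  {
          unsigned long val = darn();     /* hypothetical wrapper */

          if (val == DARN_ERR)            /* error: report it to caller */
                  return 0;

          *v = val;
          return 1;
  }

  /* at boot: tolerate intermittent failures before registering */
  for (i = 0; i < 10; i++) {
          if (powernv_get_random_darn(&val)) {
                  ppc_md.get_random_seed = powernv_get_random_darn;
                  break;
          }
  }
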
Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
[mpe: Move init into a function, tweak change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/32: Fix boot failure on non 6xx platforms
Christophe Leroy [Tue, 8 Aug 2017 06:37:24 +0000 (08:37 +0200)]
powerpc/32: Fix boot failure on non 6xx platforms

Commit d300627c6a536 ("powerpc/6xx: Handle DABR match before calling
do_page_fault") breaks non 6xx platforms.

  Failed to execute /init (error -14)
  Starting init: /bin/sh exists but couldn't execute it (error -14)
  Kernel panic - not syncing: No working init found.  Try passing init= ...
  CPU: 0 PID: 1 Comm: init Not tainted 4.13.0-rc3-s3k-dev-00143-g7aa62e972a56 #56
  Call Trace:
    panic+0x108/0x250 (unreliable)
    rootfs_mount+0x0/0x58
    ret_from_kernel_thread+0x5c/0x64
  Rebooting in 180 seconds..

This is because in handle_page_fault(), the call to do_page_fault() has been
mistakenly enclosed inside an #ifdef CONFIG_6xx

Fixes: d300627c6a536 ("powerpc/6xx: Handle DABR match before calling do_page_fault")
Brown-paper-bag-to-be-worn-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/powernv: Enable PCI peer-to-peer
Frederic Barrat [Fri, 4 Aug 2017 09:55:14 +0000 (11:55 +0200)]
powerpc/powernv: Enable PCI peer-to-peer

P9 has support for PCI peer-to-peer, enabling a device to write in the
MMIO space of another device directly, without interrupting the CPU.

This patch adds support for it on powernv, by adding a new API to be
called by drivers. The pnv_pci_set_p2p(...) call configures an
'initiator', i.e the device which will issue the MMIO operation, and a
'target', i.e. the device on the receiving side.

P9 really only supports MMIO stores for the time being, but that's
expected to change in the future, so the API allows defining both
load and store operations.

  /* PCI p2p descriptor */
  #define OPAL_PCI_P2P_ENABLE           0x1
  #define OPAL_PCI_P2P_LOAD             0x2
  #define OPAL_PCI_P2P_STORE            0x4

  int pnv_pci_set_p2p(struct pci_dev *initiator, struct pci_dev *target,
                      u64 desc)

It uses a new OPAL call, as the configuration magic is done on the
PHBs by skiboot.

Signed-off-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Reviewed-by: Russell Currey <ruscur@russell.cc>
[mpe: Drop unrelated OPAL calls, s/uint64_t/u64/, minor formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc: Remove old unused icswx based coprocessor support
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:46 +0000 (14:49 +1000)]
powerpc: Remove old unused icswx based coprocessor support

We have a whole pile of unused code to maintain the ACOP register,
allocate coprocessor PIDs and handle ACOP faults. This mechanism
was used for the HFI adapter on POWER7 which is dead and gone and
whose driver never went upstream. It was used on some A2 core based
stuff that also never saw the light of day.

Take out all that code.

There is still some POWER8 coprocessor code that uses icswx but it's
kernel only and thus doesn't use any of that infrastructure.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Cleanup check for stack expansion
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:45 +0000 (14:49 +1000)]
powerpc/mm: Cleanup check for stack expansion

When hitting below a VM_GROWSDOWN vma (typically growing the stack),
we check whether it's a valid stack-growing instruction and we
check the distance to GPR1. This is largely open coded with lots
of comments, so move it out to a helper.

While at it, make store_update_sp a boolean.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Don't lose "major" fault indication on retry
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:44 +0000 (14:49 +1000)]
powerpc/mm: Don't lose "major" fault indication on retry

If the first iteration returns VM_FAULT_MAJOR but the second
one doesn't, we fail to account the fault as a major fault.

This fixes it and brings the code in line with x86.

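A minimal sketch of the idea (variable names assumed):

  int major = 0;

  fault = handle_mm_fault(vma, address, flags);
  major |= fault & VM_FAULT_MAJOR;      /* sticky across retries */

  if (unlikely(fault & VM_FAULT_RETRY)) {
          flags |= FAULT_FLAG_TRIED;
          goto retry;
  }

  /* later, when accounting: */
  if (major)
          current->maj_flt++;
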
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Move page fault VMA access checks to a helper
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:43 +0000 (14:49 +1000)]
powerpc/mm: Move page fault VMA access checks to a helper

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Set fault flags earlier
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:42 +0000 (14:49 +1000)]
powerpc/mm: Set fault flags earlier

Move out the code that sets FAULT_FLAG_WRITE so the block that checks
access permissions can be extracted. While at it also set
FAULT_FLAG_INSTRUCTION which will be used for protection keys.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Add a bunch of (un)likely annotations to do_page_fault
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:41 +0000 (14:49 +1000)]
powerpc/mm: Add a bunch of (un)likely annotations to do_page_fault

Mostly for the failure cases

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Move/simplify faulthandler_disabled() and !mm check
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:40 +0000 (14:49 +1000)]
powerpc/mm: Move/simplify faulthandler_disabled() and !mm check

Do the check before we re-enable interrupts and clean the code
up a bit.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Move the DSISR_PROTFAULT sanity check
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:39 +0000 (14:49 +1000)]
powerpc/mm: Move the DSISR_PROTFAULT sanity check

This has a page of comment explaining what's going on right in
the middle of do_page_fault() which makes things a bit hard to
follow. Move it to a helper instead. Also do the test earlier
as there's no point waiting until after we found the VMA.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Cosmetic fix to page fault accounting
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:38 +0000 (14:49 +1000)]
powerpc/mm: Cosmetic fix to page fault accounting

No need to break those lines, they aren't that long

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Move CMO accounting out of do_page_fault into a helper
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:37 +0000 (14:49 +1000)]
powerpc/mm: Move CMO accounting out of do_page_fault into a helper

It makes do_page_fault() more readable. No functional change.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Rework mm_fault_error()
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:36 +0000 (14:49 +1000)]
powerpc/mm: Rework mm_fault_error()

First, handle the normal retry failure in do_page_fault itself,
since it's a simple return statement. That allows us to remove
the "continue" special return code from mm_fault_error().

Once that's done, we can have an implementation much closer to
x86 where we only call mm_fault_error() if VM_FAULT_ERROR is set
and directly return.

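The resulting shape, roughly (a sketch, closer to the x86 layout):

  fault = handle_mm_fault(vma, address, flags);

  if (unlikely(fault & VM_FAULT_RETRY)) {
          /* the simple retry case is handled right here */
          flags |= FAULT_FLAG_TRIED;
          goto retry;
  }

  if (unlikely(fault & VM_FAULT_ERROR))
          return mm_fault_error(regs, address, fault);
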
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Make bad_area* helper functions
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:35 +0000 (14:49 +1000)]
powerpc/mm: Make bad_area* helper functions

Instead of using goto labels, call those functions and return.

This gets us closer to x86 and allows us to shrink do_page_fault()
even more.

The main difference with x86 is that those functions return a value
which we then return from do_page_fault(). That value is our
return value from do_page_fault() which we use to generate
kernel faults.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Fix reporting of kernel execute faults
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:34 +0000 (14:49 +1000)]
powerpc/mm: Fix reporting of kernel execute faults

We currently test for is_exec and DSISR_PROTFAULT but that doesn't
make sense as this is the wrong error bit to test for an execute
permission failure.

In fact, we had code that would return early if we had an exec
fault in kernel mode so I think that was just dead code anyway.

Finally the location of that test is awkward and prevents further
simplifications.

So instead move that test into a helper along with the existing
early test for kernel exec faults and out of range accesses,
and put it all in a "bad_kernel_fault()" helper. While at it
test the correct error bits.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Simplify returns from __do_page_fault
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:33 +0000 (14:49 +1000)]
powerpc/mm: Simplify returns from __do_page_fault

Now that we moved the exception state handling to a wrapper, we can
just directly return rather than "goto bail"

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Move debugger check to notify_page_fault()
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:32 +0000 (14:49 +1000)]
powerpc/mm: Move debugger check to notify_page_fault()

This unclutters the main path.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Overhaul handling of bad page faults
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:31 +0000 (14:49 +1000)]
powerpc/mm: Overhaul handling of bad page faults

A bad page fault is when the HW signals an error such as a bad
copy/paste, an AMO error, or some other type of error that will
not be fixed by updating the PTE.

Use a helper page_fault_is_bad() to check for bad page faults thus
removing the per-processor family open-coding in __do_page_fault()
and trigger a SIGBUS rather than a SIGSEGV which is more appropriate.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Move error_code checks for bad faults earlier
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:30 +0000 (14:49 +1000)]
powerpc/mm: Move error_code checks for bad faults earlier

There's no point looking for the VMA etc. when we already know
we are going to fail.

This adds some code to set "code" for the si_code but that will
be gone in subsequent patches.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Move out definition of CPU specific is_write bits
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:29 +0000 (14:49 +1000)]
powerpc/mm: Move out definition of CPU specific is_write bits

Define a common page_fault_is_write() helper and use it.

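A sketch of such a helper (config guards and bit names as assumed here):

  #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
  #define page_fault_is_write(__err)    ((__err) & ESR_DST)
  #else
  #define page_fault_is_write(__err)    ((__err) & DSISR_ISSTORE)
  #endif
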
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Use symbolic constants for filtering SRR1 bits on ISIs
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:28 +0000 (14:49 +1000)]
powerpc/mm: Use symbolic constants for filtering SRR1 bits on ISIs

This uses the newly defined constants for this rather than open-coded
numbers. There is a side effect on 64-bit which is to pass through
some of the new P9 bits which we didn't before.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Update bits used to skip hash_page
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:27 +0000 (14:49 +1000)]
powerpc/mm: Update bits used to skip hash_page

We test a number of bits from DSISR/SRR1 before deciding
to call hash_page(). If any of these is set, we go directly
to do_page_fault() as the bit indicates a fault that needs
to be handled there (no hashing needed).

This updates the current open-coded masks to use the new
DSISR definitions.

This *does* change the masks actually used in two ways:

 - We used to test various bits that were defined as "always 0"
in the architecture and could be repurposed for something
else. From now on, we just ignore such bits.

 - We were missing some new bits defined on P9

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Update definitions of DSISR bits
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:26 +0000 (14:49 +1000)]
powerpc/mm: Update definitions of DSISR bits

This updates the definitions for the various DSISR bits to
match both some historical stuff and to match new bits on
POWER9.

In addition, we define some masks corresponding to the "bad"
faults on Book3S, and some masks corresponding to the bits
that match between DSISR and SRR1 for a DSI and an ISI.

This comes with a small code update to change the definition
of DSISR_PGDIRFAULT which becomes DSISR_PRTABLE_FAULT to
match architecture 3.0B

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/6xx: Handle DABR match before calling do_page_fault
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:25 +0000 (14:49 +1000)]
powerpc/6xx: Handle DABR match before calling do_page_fault

On legacy 6xx 32-bit processors, we checked for the DABR match bit
in DSISR from do_page_fault(), in the middle of a pile of ifdef's
because all other CPU types do it in assembly prior to calling
do_page_fault. Fix that.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[mpe: Add #ifdef CONFIG_6xx]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Pre-filter SRR1 bits before do_page_fault()
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:24 +0000 (14:49 +1000)]
powerpc/mm: Pre-filter SRR1 bits before do_page_fault()

By filtering the relevant SRR1 bits in the assembly rather than
in do_page_fault() itself, we avoid a conditional branch (since we
already come from different path for data and instruction faults).

This will allow more simplifications later

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm: Move exception_enter/exit to a do_page_fault wrapper
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:23 +0000 (14:49 +1000)]
powerpc/mm: Move exception_enter/exit to a do_page_fault wrapper

This will allow simplifying the returns from do_page_fault

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/radix: Avoid flushing the PWC on every flush_tlb_range
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:06 +0000 (14:49 +1000)]
powerpc/mm/radix: Avoid flushing the PWC on every flush_tlb_range

flush_tlb_range() currently flushes the PWC because it is used by THP
pmd collapsing; use a dedicated flush function for that instead.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/radix: Improve TLB/PWC flushes
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:05 +0000 (14:49 +1000)]
powerpc/mm/radix: Improve TLB/PWC flushes

At the moment we have two rather sub-optimal flushing behaviours:

 - flush_tlb_mm() will flush the PWC which is unnecessary (for example
   when doing a fork)

 - A large unmap will call flush_tlb_pwc() multiple times causing us
   to perform that fairly expensive operation repeatedly. This happens
   often in batches of 3 on every new process.

So we change flush_tlb_mm() to only flush the TLB, and we use the
existing "need_flush_all" flag in struct mmu_gather to indicate
that the PWC needs flushing.

Unfortunately, flush_tlb_range() still needs to do a full flush
for now as it's used by the THP collapsing. We will fix that later.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/mm/radix: Improve _tlbiel_pid to be usable for PWC flushes
Benjamin Herrenschmidt [Wed, 19 Jul 2017 04:49:04 +0000 (14:49 +1000)]
powerpc/mm/radix: Improve _tlbiel_pid to be usable for PWC flushes

The PWC flush only needs a single set call, just like the
full (RIC=2) flush.

This will allow us to get rid of the dedicated _tlbiel_pwc()

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
7 years ago powerpc/kernel: Avoid preemption check in iommu_range_alloc()
Victor Aoqui [Thu, 20 Jul 2017 17:26:06 +0000 (14:26 -0300)]
powerpc/kernel: Avoid preemption check in iommu_range_alloc()

Replace the __this_cpu_read() with raw_cpu_read() in
iommu_range_alloc(). Otherwise we get a warning about using
__this_cpu_read() in preemptible code:

  BUG: using __this_cpu_read() in preemptible
  caller is iommu_range_alloc+0xa8/0x3d0

Preemption doesn't need to be disabled since according to the comment
any CPU can safely use any IOMMU pool.

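The change amounts to one line (sketched; surrounding code approximate):

  /* before: trips CONFIG_DEBUG_PREEMPT when called preemptible */
  pool_nr = __this_cpu_read(iommu_pool_hash) & (tbl->nr_pools - 1);

  /* after: no preemption check; any CPU may use any pool */
  pool_nr = raw_cpu_read(iommu_pool_hash) & (tbl->nr_pools - 1);
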
Signed-off-by: Victor Aoqui <victora@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>