8 years ago  powerpc/8xx: move setup_initial_memory_limit() into 8xx_mmu.c
Christophe Leroy [Tue, 9 Feb 2016 16:07:54 +0000 (17:07 +0100)]
powerpc/8xx: move setup_initial_memory_limit() into 8xx_mmu.c

Now that we have an 8xx-specific .c file for this, put it in there,
as the other powerpc variants do.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc: Update documentation for noltlbs kernel parameter
Christophe Leroy [Tue, 9 Feb 2016 16:07:52 +0000 (17:07 +0100)]
powerpc: Update documentation for noltlbs kernel parameter

Now the noltlbs kernel parameter is also applicable to PPC8xx

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/8xx: Map linear kernel RAM with 8M pages
Christophe Leroy [Tue, 9 Feb 2016 16:07:50 +0000 (17:07 +0100)]
powerpc/8xx: Map linear kernel RAM with 8M pages

On a live running system (a VoIP gateway for Air Traffic Control),
over a 10-minute period (with 277s idle), we get 87 million DTLB
misses and approximately 35 seconds are spent in the DTLB handler.
This represents 5.8% of the overall time and even 10.8% of the
non-idle time.
Among those 87 million DTLB misses, 15% are on user addresses and
85% are on kernel addresses. And within the kernel addresses, 93%
are on addresses from the linear address space and only 7% are on
addresses from the virtual address space.

MPC8xx has no BATs, but it has an 8MB page size. This patch implements
mapping of kernel RAM using 8MB pages, on the same model as what is
done on the 40x.

In 4k pages mode, each PGD entry maps a 4MB area: we map every two
entries to the same 8MB physical page. In each second entry, we add
4MB to the page physical address to ease the life of the FixupDAR
routine. This is simply ignored by the hardware.
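
As an illustration, a minimal hypothetical sketch of that pairing
(the helper name is illustrative, not the actual patch code):

    /*
     * Sketch: map one 8MB page with two consecutive 4MB PGD slots.
     * The second slot carries pa + 4MB purely to help FixupDAR;
     * the hardware ignores it.
     */
    static void __init map_8m(pgd_t *pgd, unsigned long va, phys_addr_t pa)
    {
            pgd[pgd_index(va)]     = __pgd(pa | _PMD_PAGE_8M);
            pgd[pgd_index(va) + 1] = __pgd((pa + SZ_4M) | _PMD_PAGE_8M);
    }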

In 16k pages mode, each PGD entry maps a 64MB area: each PGD entry
will point to the first page of the area. The DTLB handler adds
the 3 bits from the EPN to map the correct page.

With this patch applied, we now get only 13 million TLB misses
during the same 10-minute period. The idle time has increased to 313s
and the overall time spent in the DTLB miss handler is 6.3s, which
represents 1% of the overall time and 2.2% of non-idle time.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/8xx: Save r3 all the time in DTLB miss handler
Christophe Leroy [Tue, 9 Feb 2016 16:07:48 +0000 (17:07 +0100)]
powerpc/8xx: Save r3 all the time in DTLB miss handler

We spend between 40 and 160 cycles, with a mean of 65 cycles, in the
DTLB handling routine (measured with mftbl), so simplify it even
though that adds one instruction.
With this modification, we get three registers available at all times,
which will help with the following patch.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/p5040: Add device node for RAID Engine
Xuelin Shi [Thu, 25 Feb 2016 01:14:05 +0000 (09:14 +0800)]
powerpc/p5040: Add device node for RAID Engine

Add the missing RAID Engine device node for p5040;
otherwise, the device cannot be detected.

Signed-off-by: Xuelin Shi <xuelin.shi@nxp.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc: optimise csum_partial() call when len is constant
Christophe Leroy [Mon, 7 Mar 2016 17:44:37 +0000 (18:44 +0100)]
powerpc: optimise csum_partial() call when len is constant

csum_partial is often called for small fixed length packets
for which it is suboptimal to use the generic csum_partial()
function.

For instance, in my configuration, I got:
* One place calling it with constant len 4
* Seven places calling it with constant len 8
* Three places calling it with constant len 14
* One place calling it with constant len 20
* One place calling it with constant len 24
* One place calling it with constant len 32

This patch renames csum_partial() to __csum_partial() and
implements csum_partial() as an inline wrapper function which:
* uses csum_add() for small constant lengths that are 16-bit multiples
* uses ip_fast_csum() for other constant lengths that are 32-bit
  multiples
* falls back to __csum_partial() in all other cases
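
A minimal sketch of such a wrapper (simplified to two constant sizes;
the actual patch handles more cases):

    static inline __wsum csum_partial(const void *buff, int len, __wsum sum)
    {
            if (__builtin_constant_p(len) && len == 4)
                    return csum_add(sum, *(const __wsum *)buff);
            if (__builtin_constant_p(len) && len == 8) {
                    sum = csum_add(sum, *(const __wsum *)buff);
                    return csum_add(sum, *((const __wsum *)buff + 1));
            }
            return __csum_partial(buff, len, sum);
    }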

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/fsl-lbc: Modify suspend/resume entry sequence
Raghav Dogra [Tue, 9 Feb 2016 09:39:08 +0000 (15:09 +0530)]
powerpc/fsl-lbc: Modify suspend/resume entry sequence

Convert the platform driver suspend/resume callbacks to syscore
suspend/resume. This is needed because the p1022ds must use the
localbus during PCIe resume.

Signed-off-by: Raghav Dogra <raghav.dogra@nxp.com>
[scottwood: dropped makefile churn]
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/8xx: CONFIG_DEBUG_PAGEALLOC requires ITLBmiss for kernel addresses
Christophe Leroy [Wed, 3 Feb 2016 22:34:21 +0000 (23:34 +0100)]
powerpc/8xx: CONFIG_DEBUG_PAGEALLOC requires ITLBmiss for kernel addresses

When CONFIG_DEBUG_PAGEALLOC is activated, the initial TLB mapping gets
flushed to track accesses to wrong areas. Therefore, kernel addresses
will also generate ITLB misses.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/885: set SDCR to 0x40
Christophe Leroy [Thu, 4 Feb 2016 10:07:48 +0000 (11:07 +0100)]
powerpc/885: set SDCR to 0x40

The MPC885 reference manual says that SDCR shall have the value 0x40,
but most examples set SDCR to 0x1.
With 0x1 in SDCR, we observe TX underruns on the SCC when using it in
QMC mode.
According to NXP technical support, this is a copy/paste error from
the MPC860 reference manual, 0x40 being the only value supported
by the MPC885 hardware.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/86xx: disable IDE subsystem in mpc8610_hpcd_defconfig
Bartlomiej Zolnierkiewicz [Wed, 3 Feb 2016 15:50:17 +0000 (16:50 +0100)]
powerpc/86xx: disable IDE subsystem in mpc8610_hpcd_defconfig

This patch disables the deprecated IDE subsystem in
mpc8610_hpcd_defconfig (no IDE host drivers are selected in this
config, so there is no valid reason to enable the IDE subsystem
itself).

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/85xx: disable IDE subsystem in stx_gp3_defconfig
Bartlomiej Zolnierkiewicz [Wed, 3 Feb 2016 15:50:10 +0000 (16:50 +0100)]
powerpc/85xx: disable IDE subsystem in stx_gp3_defconfig

This patch disables the deprecated IDE subsystem in stx_gp3_defconfig
(no IDE host drivers are selected in this config, so there is no valid
reason to enable the IDE subsystem itself).

Cc: Scott Wood <oss@buserror.net>
Cc: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/85xx: disable IDE subsystem in ksi8560_defconfig
Bartlomiej Zolnierkiewicz [Wed, 3 Feb 2016 15:50:08 +0000 (16:50 +0100)]
powerpc/85xx: disable IDE subsystem in ksi8560_defconfig

This patch disables the deprecated IDE subsystem in ksi8560_defconfig
(no IDE host drivers are selected in this config, so there is no valid
reason to enable the IDE subsystem itself).

Cc: Scott Wood <oss@buserror.net>
Cc: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/83xx: disable IDE subsystem in mpc834x_itx_defconfig
Bartlomiej Zolnierkiewicz [Wed, 3 Feb 2016 15:50:07 +0000 (16:50 +0100)]
powerpc/83xx: disable IDE subsystem in mpc834x_itx_defconfig

This patch disables the deprecated IDE subsystem in
mpc834x_itx_defconfig (no IDE host drivers are selected in this
config, so there is no valid reason to enable the IDE subsystem
itself).

Cc: Scott Wood <oss@buserror.net>
Cc: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  qe: Use GFP_ATOMIC while spin_lock_irqsave is held
Saurabh Sengar [Sun, 24 Jan 2016 06:54:06 +0000 (12:24 +0530)]
qe: Use GFP_ATOMIC while spin_lock_irqsave is held

cpm_muram_alloc_common() is called twice, and both times
spin_lock_irqsave() is held.
A GFP_KERNEL allocation can sleep, which in spin_lock_irqsave()
context can cause a deadlock, so use GFP_ATOMIC.
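
The shape of the problem, as a sketch (lock and entry names
approximate):

    spin_lock_irqsave(&cpm_muram_lock, flags);
    /* GFP_KERNEL could sleep here with IRQs off and deadlock,
     * so the allocation must be GFP_ATOMIC */
    entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
    /* ... use entry ... */
    spin_unlock_irqrestore(&cpm_muram_lock, flags);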

Signed-off-by: Saurabh Sengar <saurabh.truth@gmail.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  qe: Make cpm_muram_alloc_common static
Saurabh Sengar [Tue, 26 Jan 2016 08:52:01 +0000 (14:22 +0530)]
qe: Make cpm_muram_alloc_common static

As cpm_muram_alloc_common() is used only in this file,
make it static.

Signed-off-by: Saurabh Sengar <saurabh.truth@gmail.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  qe/ic: fix a buffer overflow error and add check elsewhere
Zhao Qiang [Thu, 21 Jan 2016 01:06:04 +0000 (09:06 +0800)]
qe/ic: fix a buffer overflow error and add check elsewhere

127 is the theoretical upper bound of the QEIC interrupt number, but
in fact there are only 44 qe_ic_info entries at present.
Add an overflow check on accesses to qe_ic_info.
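
A sketch of the kind of check added (variable names hypothetical):

    /* hw is the hardware interrupt number decoded from the virq */
    if (hw >= ARRAY_SIZE(qe_ic_info)) {
            pr_err("%s: invalid QEIC interrupt number %lu\n", __func__, hw);
            return -EINVAL;
    }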

Signed-off-by: Zhao Qiang <qiang.zhao@nxp.com>
Acked-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/fsl: Update fman dt binding with pcs-phy and tbi-phy
Igal Liberman [Thu, 24 Dec 2015 01:42:11 +0000 (03:42 +0200)]
powerpc/fsl: Update fman dt binding with pcs-phy and tbi-phy

The FMan contains internal PHY devices used for SGMII connections
to external PHYs. When these PHYs are in use a reference is needed
for both the external PHY and the internal one. For the external
PHY phy-handle provides the reference. For the internal PHY a new
handle is required.
In dTSEC, the internal PHY is a TBI (Ten Bit Interface) PHY;
the handle used will be tbi-handle.
In mEMAC, the internal PHY is a PCS (Physical Coding Sublayer) PHY;
the handle used will be pcsphy-handle.

Signed-off-by: Igal Liberman <igal.liberman@freescale.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/mpc85xx: Add CPU hotplug support for E6500
chenhui zhao [Fri, 20 Nov 2015 09:14:02 +0000 (17:14 +0800)]
powerpc/mpc85xx: Add CPU hotplug support for E6500

Support Freescale E6500 core-based platforms, like the T4240.
Support disabling/enabling individual CPU threads dynamically.

Signed-off-by: Chenhui Zhao <chenhui.zhao@freescale.com>
8 years ago  powerpc/mpc85xx: Add hotplug support on E5500 and E500MC cores
chenhui zhao [Fri, 20 Nov 2015 09:14:01 +0000 (17:14 +0800)]
powerpc/mpc85xx: Add hotplug support on E5500 and E500MC cores

Freescale E500MC and E5500 core-based platforms, like the P4080 and
T1040, support disabling/enabling CPUs dynamically.
This patch adds that feature on those platforms.

Signed-off-by: Chenhui Zhao <chenhui.zhao@freescale.com>
Signed-off-by: Tang Yuantian <Yuantian.Tang@feescale.com>
[scottwood: removed unused pr_fmt]
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/mpc85xx: refactor the PM operations
chenhui zhao [Fri, 20 Nov 2015 09:14:00 +0000 (17:14 +0800)]
powerpc/mpc85xx: refactor the PM operations

Freescale CoreNet-based and non-CoreNet-based platforms require
different PM operations. This patch extracts the existing PM
operations for non-CoreNet-based platforms into a new file which can
accommodate both platform types. In this way, the PM operation code
is structurally clearer.

Signed-off-by: Chenhui Zhao <chenhui.zhao@freescale.com>
Signed-off-by: Tang Yuantian <Yuantian.Tang@feescale.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/rcpm: add RCPM driver
chenhui zhao [Fri, 20 Nov 2015 09:13:59 +0000 (17:13 +0800)]
powerpc/rcpm: add RCPM driver

There is an RCPM (Run Control/Power Management) unit in Freescale
QorIQ series processors. It performs tasks associated with device
run control and power management.

The driver implements some features: mask/unmask irq, enter/exit low
power states, freeze time base, etc.

Signed-off-by: Chenhui Zhao <chenhui.zhao@freescale.com>
Signed-off-by: Tang Yuantian <Yuantian.Tang@freescale.com>
[scottwood: remove __KERNEL__ ifdef]
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/cache: add cache flush operation for various e500
chenhui zhao [Fri, 20 Nov 2015 09:13:58 +0000 (17:13 +0800)]
powerpc/cache: add cache flush operation for various e500

The various e500 cores have different cache architectures, so they
need different cache flush operations. Therefore, add a callback
function, cpu_flush_caches, to struct cpu_spec. The cache flush
operation for the specific kind of e500 is selected at init time.
The callback flushes all caches inside the current CPU.
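
In outline (a sketch; the surrounding cpu_spec fields are elided):

    struct cpu_spec {
            /* ... existing fields ... */
            void (*cpu_flush_caches)(void); /* flush this CPU's caches */
    };

    /* callers then go through the hook picked at init time */
    if (cur_cpu_spec->cpu_flush_caches)
            cur_cpu_spec->cpu_flush_caches();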

Signed-off-by: Chenhui Zhao <chenhui.zhao@freescale.com>
Signed-off-by: Tang Yuantian <Yuantian.Tang@feescale.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/mm: any thread in one core can be the first to setup TLB1
chenhui zhao [Thu, 24 Dec 2015 00:39:57 +0000 (08:39 +0800)]
powerpc/mm: any thread in one core can be the first to setup TLB1

On e6500, in the case of cpu hotplug, either thread in one core
may be the first thread initializing the TLB1. The subsequent threads
must not set it up again.

The code is derived from the comment of Scott Wood.

Signed-off-by: Chenhui Zhao <chenhui.zhao@freescale.com>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  Documentation: dt: binding: fsl: add devicetree binding for describing RCPM
Wang Dongsheng [Mon, 26 Oct 2015 06:44:12 +0000 (14:44 +0800)]
Documentation: dt: binding: fsl: add devicetree binding for describing RCPM

RCPM is the Run Control and Power Management module; it performs all
device-level tasks associated with device run control and power
management.

Add this binding for the Freescale PowerPC and Layerscape platforms.

Signed-off-by: Chenhui Zhao <chenhui.zhao@freescale.com>
Signed-off-by: Tang Yuantian <Yuantian.Tang@freescale.com>
Signed-off-by: Wang Dongsheng <dongsheng.wang@freescale.com>
[scottwood: s/pointer/phandle and "disabled" status from example]
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc: simplify csum_add(a, b) in case a or b is constant 0
Christophe Leroy [Tue, 22 Sep 2015 14:34:34 +0000 (16:34 +0200)]
powerpc: simplify csum_add(a, b) in case a or b is constant 0

Simplify csum_add(a, b) in case a or b is constant 0
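
The change amounts to constant-folding shortcuts at the top of
csum_add(), roughly:

    static inline __wsum csum_add(__wsum csum, __wsum addend)
    {
            if (__builtin_constant_p(csum) && csum == 0)
                    return addend;
            if (__builtin_constant_p(addend) && addend == 0)
                    return csum;
            /* ... otherwise fall through to the addc/addze asm ... */
    }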

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc32: optimise csum_partial() loop
Christophe Leroy [Tue, 22 Sep 2015 14:34:32 +0000 (16:34 +0200)]
powerpc32: optimise csum_partial() loop

On the 8xx, load latency is 2 cycles and taking branches also takes
2 cycles. So let's unroll the loop.

This patch improves csum_partial() speed by around 10% on both:
* 8xx (single issue processor with parallel execution)
* 83xx (superscalar 6xx processor with dual instruction fetch
and parallel execution)

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc32: optimise a few instructions in csum_partial()
Christophe Leroy [Tue, 22 Sep 2015 14:34:29 +0000 (16:34 +0200)]
powerpc32: optimise a few instructions in csum_partial()

r5 contains the value to be updated, so let's use r5 all the way
through for that. It makes the code more readable.

To avoid confusion, it is better to use adde instead of addc.

The first addition is useless: its only purpose is to clear the carry.
As r4 is a signed int that is always positive, this can be done by
using srawi instead of srwi.

Let's also remove the comment about bdnz having no overhead, as it
is not correct on all powerpc variants, at least not on MPC8xx.

In the last part, the remaining number of bytes to be processed is
between 0 and 3. Therefore, we can base that part on the values of
bits 31 and 30 of r4 instead of ANDing r4 with 3 and then proceeding
with comparisons and subtractions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc32: rewrite csum_partial_copy_generic() based on copy_tofrom_user()
Christophe Leroy [Tue, 22 Sep 2015 14:34:27 +0000 (16:34 +0200)]
powerpc32: rewrite csum_partial_copy_generic() based on copy_tofrom_user()

csum_partial_copy_generic() does the same as copy_tofrom_user() and
also calculates the checksum during the copy. Unlike
copy_tofrom_user(), the existing version of
csum_partial_copy_generic() doesn't take advantage of the cache.

This patch is a rewrite of csum_partial_copy_generic() based on
copy_tofrom_user().
The previous version of csum_partial_copy_generic() handled errors
itself. Now we have the checksum wrapper functions to handle the error
case, as on powerpc64, so we can make the error case simple:
just return -EFAULT.
copy_tofrom_user() only has r12 available, so we use it for the
checksum. r7 and r8, which contain the pointers for error feedback,
are already in use, so we save them on the stack.

On a TCP benchmark using socklib on the loopback interface, with
checksum offload and scatter/gather deactivated, we get about a 20%
performance increase.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc: inline ip_fast_csum()
Christophe Leroy [Tue, 22 Sep 2015 14:34:25 +0000 (16:34 +0200)]
powerpc: inline ip_fast_csum()

On several architectures, ip_fast_csum() is inlined.
There are functions like ip_send_check() which do little
more than call ip_fast_csum().
Inlining ip_fast_csum() allows the compiler to optimise better.
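
For example, ip_send_check() is essentially just this call, so
inlining lets the compiler flatten the whole function:

    void ip_send_check(struct iphdr *iph)
    {
            iph->check = 0;
            iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
    }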

Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[scottwood: whitespace and cast fixes]
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc32: checksum_wrappers_64 becomes checksum_wrappers
Christophe Leroy [Tue, 22 Sep 2015 14:34:23 +0000 (16:34 +0200)]
powerpc32: checksum_wrappers_64 becomes checksum_wrappers

The powerpc64 checksum wrapper functions add csum_and_copy_to_user(),
which is otherwise implemented in include/net/checksum.h using
csum_partial() followed by copy_to_user().

Those two wrapper functions are also applicable to powerpc32, as they
are based on csum_partial_copy_generic(), which also
exists on powerpc32.

This patch renames arch/powerpc/lib/checksum_wrappers_64.c to
arch/powerpc/lib/checksum_wrappers.c and
makes it independent of CONFIG_WORD_SIZE.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc: mark xer clobbered in csum_add()
Christophe Leroy [Tue, 22 Sep 2015 14:34:21 +0000 (16:34 +0200)]
powerpc: mark xer clobbered in csum_add()

addc sets the carry bit, so XER is clobbered in csum_add().
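
The fix is to list "xer" in the clobbers of the inline asm, along
these lines:

    static inline __wsum csum_add(__wsum csum, __wsum addend)
    {
            __asm__("addc %0,%0,%1;"
                    "addze %0,%0;"
                    : "+r" (csum) : "r" (addend) : "xer");
            return csum;
    }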

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc: unexport csum_tcpudp_magic
Christophe Leroy [Tue, 22 Sep 2015 14:34:19 +0000 (16:34 +0200)]
powerpc: unexport csum_tcpudp_magic

csum_tcpudp_magic is now an inline function, so there is
nothing to export
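
The inline form is the usual one-liner over csum_tcpudp_nofold()
(a sketch; argument types as they were at the time):

    static inline __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
                                            unsigned short len,
                                            unsigned short proto,
                                            __wsum sum)
    {
            return csum_fold(csum_tcpudp_nofold(saddr, daddr, len,
                                                proto, sum));
    }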

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <oss@buserror.net>
8 years ago  powerpc/mm: Move hash64 tlbflush code into a new header
Aneesh Kumar K.V [Tue, 1 Mar 2016 07:29:21 +0000 (12:59 +0530)]
powerpc/mm: Move hash64 tlbflush code into a new header

No code changes.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Move hash related mmu-*.h headers to book3s/
Aneesh Kumar K.V [Tue, 1 Mar 2016 07:29:20 +0000 (12:59 +0530)]
powerpc/mm: Move hash related mmu-*.h headers to book3s/

No code changes.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: add _PAGE_HASHPTE similar to 4K hash
Aneesh Kumar K.V [Tue, 1 Mar 2016 07:29:18 +0000 (12:59 +0530)]
powerpc/mm: add _PAGE_HASHPTE similar to 4K hash

We don't need to update the Linux page table entry with _PAGE_HASHPTE
early in the hash pte fault path. A parallel pte update will loop via
_PAGE_BUSY and look at _PAGE_HASHPTE for a required hpte flush only if
_PAGE_BUSY is cleared. That ensures a pte update will wait for a
parallel hpte insert to finish before looking at the _PAGE_HASHPTE bit.

To avoid further confusion, drop setting _PAGE_HASHPTE in the cmpxchg
in __hash_page_4K.

Commit 41743a4e34f0 ("powerpc: Free a PTE bit on ppc64 with 64K pages")
did a similar change for the 64K config.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Update code comments
Aneesh Kumar K.V [Tue, 1 Mar 2016 07:29:17 +0000 (12:59 +0530)]
powerpc/mm: Update code comments

We are updating the PTE in those functions.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  mm: Some arch may want to use HPAGE_PMD related values as variables
Kirill A. Shutemov [Tue, 1 Mar 2016 04:15:14 +0000 (09:45 +0530)]
mm: Some arch may want to use HPAGE_PMD related values as variables

With the next generation POWER processor, we have a new MMU model
[1] that requires us to maintain a different Linux page table format.

In order to support both current and future ppc64 systems with a
single kernel, we need to make sure the kernel can select between the
different page table formats at runtime. With the new MMU (radix MMU)
added, we will have two different pmd hugepage sizes: 16MB for the
hash model and 2MB for the radix model. Hence, make the HPAGE_PMD
related values variables.

Actual conversion of HPAGE_PMD to a variable for ppc64 happens in a
followup patch.

[1] http://ibm.biz/power-isa3 (Needs registration).

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Switch book3s 64 with 64K page size to 4 level page table
Aneesh Kumar K.V [Tue, 1 Mar 2016 04:15:13 +0000 (09:45 +0530)]
powerpc/mm: Switch book3s 64 with 64K page size to 4 level page table

This is needed so that we can support both hash and radix page tables
using a single kernel. A radix kernel uses a 4-level table.

We now use physical addresses in the upper page table tree levels.
Even though they are aligned to their size, for the masked bits we use
the bit positions as per PowerISA 3.0.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Don't have conditional defines for real_pte_t
Aneesh Kumar K.V [Tue, 1 Mar 2016 04:15:12 +0000 (09:45 +0530)]
powerpc/mm: Don't have conditional defines for real_pte_t

We take real_pte_t out of STRICT_MM_TYPECHECKS.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Split pgtable types to separate header
Aneesh Kumar K.V [Tue, 1 Mar 2016 04:15:11 +0000 (09:45 +0530)]
powerpc/mm: Split pgtable types to separate header

We move the page table accessors into a separate header. We will
later add a big endian variant of the table, which is needed for
radix. No functionality change, only code movement.

Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Add the ability to save VSX without giving it up
Cyril Bur [Mon, 29 Feb 2016 06:53:51 +0000 (17:53 +1100)]
powerpc: Add the ability to save VSX without giving it up

This patch adds the ability to save the VSX registers to the thread
struct without giving them up (disabling the facility) the next time
the process returns to userspace.

This patch builds on a previous optimisation for the FPU and VEC registers
in the thread copy path to avoid a possibly pointless reload of VSX state.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Add the ability to save Altivec without giving it up
Cyril Bur [Mon, 29 Feb 2016 06:53:50 +0000 (17:53 +1100)]
powerpc: Add the ability to save Altivec without giving it up

This patch adds the ability to save the VEC registers to the thread
struct without giving them up (disabling the facility) the next time
the process returns to userspace.

This patch builds on a previous optimisation for the FPU registers in the
thread copy path to avoid a possibly pointless reload of VEC state.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Add the ability to save FPU without giving it up
Cyril Bur [Mon, 29 Feb 2016 06:53:49 +0000 (17:53 +1100)]
powerpc: Add the ability to save FPU without giving it up

This patch adds the ability to save the FPU registers to the thread
struct without giving them up (disabling the facility) the next time
the process returns to userspace.

This patch optimises the thread copy path (as a result of a fork() or
clone()) so that the parent thread can return to userspace with hot
registers avoiding a possibly pointless reload of FPU register state.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Prepare for splitting giveup_{fpu, altivec, vsx} in two
Cyril Bur [Mon, 29 Feb 2016 06:53:48 +0000 (17:53 +1100)]
powerpc: Prepare for splitting giveup_{fpu, altivec, vsx} in two

This prepares for the decoupling of saving {fpu,altivec,vsx} registers and
marking {fpu,altivec,vsx} as being unused by a thread.

Currently giveup_{fpu,altivec,vsx}() does both; however, optimisations
to task switching can be made if these two operations are decoupled.
save_all() will permit saving the registers to the thread struct while
leaving the thread's MSR bits enabled.

This patch introduces no functional change.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Restore FPU/VEC/VSX if previously used
Cyril Bur [Mon, 29 Feb 2016 06:53:47 +0000 (17:53 +1100)]
powerpc: Restore FPU/VEC/VSX if previously used

Currently the FPU, VEC and VSX facilities are lazily loaded. This is not
a problem unless a process is using these facilities.

Modern versions of GCC are very good at automatically vectorising code,
new and modernised workloads make use of floating point and vector
facilities, even the kernel makes use of vectorised memcpy.

All this combined greatly increases the cost of a syscall since the
kernel uses the facilities sometimes even in syscall fast-path making it
increasingly common for a thread to take an *_unavailable exception soon
after a syscall, not to mention potentially taking all three.

The obvious overcompensation to this problem is to simply always load
all the facilities on every exit to userspace. Loading up all FPU, VEC
and VSX registers every time can be expensive and if a workload does
avoid using them, it should not be forced to incur this penalty.

An 8-bit counter is used to detect whether the registers have been
used in the past, and the registers are always loaded until the value
wraps back to zero.
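
A sketch of the counter idea for the FP case (names follow the
description above, not necessarily the final code):

    if (current->thread.load_fp) {
            load_fp_state(&current->thread.fp_state);
            current->thread.load_fp++; /* u8 counter: wraps back to 0 */
            regs->msr |= MSR_FP;       /* keep the facility enabled */
    }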

Several versions of the assembly in entry_64.S were tested:

  1. Always calling C.
  2. Performing a common case check and then calling C.
  3. A complex check in asm.

After some benchmarking it was determined that avoiding C in the common
case is a performance benefit (option 2). The full check in asm (option
3) greatly complicated that codepath for a negligible performance gain
and the trade-off was deemed not worth it.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
[mpe: Move load_vec in the struct to fill an existing hole, reword change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

8 years ago  powerpc: Explicitly disable math features when copying thread
Cyril Bur [Mon, 29 Feb 2016 06:53:46 +0000 (17:53 +1100)]
powerpc: Explicitly disable math features when copying thread

Currently, when threads get scheduled off, they always give up the
FPU, Altivec (VMX) and Vector (VSX) units if they were using them.
When they are scheduled back on, a fault is taken to enable each
facility and load the registers. As a result, explicitly disabling
FPU/VMX/VSX has not been necessary.

Future changes and optimisations remove this mandatory giveup and fault
which could cause calls such as clone() and fork() to copy threads and run
them later with FPU/VMX/VSX enabled but no registers loaded.

This patch starts the process of having MSR_{FP,VEC,VSX} mean that a
thread's registers are hot, while a cleared MSR_{FP,VEC,VSX} means
that the registers must be loaded. This allows for a smarter return
to userspace.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  selftests/powerpc: Test FPU and VMX regs in signal ucontext
Cyril Bur [Mon, 29 Feb 2016 06:53:45 +0000 (17:53 +1100)]
selftests/powerpc: Test FPU and VMX regs in signal ucontext

Load up the non-volatile FPU and VMX regs and ensure that they have
the expected values in a signal handler.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  selftests/powerpc: Test preservation of FPU and VMX regs across preemption
Cyril Bur [Mon, 29 Feb 2016 06:53:44 +0000 (17:53 +1100)]
selftests/powerpc: Test preservation of FPU and VMX regs across preemption

Loop in assembly checking the registers with many threads.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  selftests/powerpc: Test the preservation of FPU and VMX regs across syscall
Cyril Bur [Mon, 29 Feb 2016 06:53:43 +0000 (17:53 +1100)]
selftests/powerpc: Test the preservation of FPU and VMX regs across syscall

Test that the non-volatile floating point and Altivec registers get
correctly preserved across the fork() syscall.

fork() works nicely for this purpose; the registers should be the same
for both parent and child.

Signed-off-by: Cyril Bur <cyrilbur@gmail.com>
[mpe: Add include guards to basic_asm.h, minor formatting]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  selftests/powerpc: Remove -flto from common CFLAGS
Suraj Jitindar Singh [Mon, 29 Feb 2016 06:29:55 +0000 (17:29 +1100)]
selftests/powerpc: Remove -flto from common CFLAGS

LTO can cause GCC to inline some functions which have attributes set.
The act of inlining the functions can lead to GCC forgetting about the
attributes which leads to incorrect tests.

Notable example being: __attribute__((__target__("no-vsx")))

LTO can also interact strangely with custom assembly functions and cause
tests to intermittently fail.

Both these cases are hard to detect and require manual inspection of
binaries, which is unlikely to happen for all tests. Furthermore, LTO
optimisations are not necessary for selftests, where correctness is
paramount, so it is best to disable LTO.

LTO can still be enabled on a per-test basis.

A pseries_le_defconfig kernel on a POWER8 was used to determine that the
same subset of selftests pass and fail with and without -flto in the
common Makefile.

Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  selftests/powerpc: Fix out of bounds access in TM signal test
Michael Ellerman [Wed, 2 Mar 2016 12:28:54 +0000 (23:28 +1100)]
selftests/powerpc: Fix out of bounds access in TM signal test

Gcc helpfully points out that we're accessing past the end of the gprs
array:

  tm-signal-msr-resv.c: In function 'signal_usr1':
  tm-signal-msr-resv.c:43:37: error: array subscript is above array bounds [-Werror=array-bounds]
    ucp->uc_mcontext.regs->gpr[PT_MSR] |= (7ULL);

We hadn't noticed previously because -flto was hiding it somehow.

The code is confused: PT_MSR isn't a gpr; instead it lives in
uc_regs->gregs, so fix it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Split hash page table sizing heuristic into a helper
David Gibson [Tue, 9 Feb 2016 03:32:43 +0000 (13:32 +1000)]
powerpc/mm: Split hash page table sizing heuristic into a helper

htab_get_table_size() either retrieves the size of the hash page table
(HPT) from the device tree - if the HPT size is determined by
firmware - or uses a heuristic to determine a good size based on RAM
size if the kernel is responsible for allocating the HPT.

To support a PAPR extension allowing resizing of the HPT, we're going to
want the memory size -> HPT size logic elsewhere, so split it out into a
helper function.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Clean up memory hotplug failure paths
David Gibson [Tue, 9 Feb 2016 03:32:42 +0000 (13:32 +1000)]
powerpc/mm: Clean up memory hotplug failure paths

This makes a number of cleanups to handling of mapping failures during
memory hotplug on Power:

For errors creating the linear mapping for the hot-added region:
  * This is now reported with EFAULT which is more appropriate than the
    previous EINVAL (the failure is unlikely to be related to the
    function's parameters)
  * An error in this path now prints a warning message, rather than just
    silently failing to add the extra memory.
  * Previously a failure here could result in the region being partially
    mapped.  We now clean up any partial mapping before failing.

For errors creating the vmemmap for the hot-added region:
   * This is now reported with EFAULT instead of causing a BUG() - this
     could happen for external reasons (e.g. a full hash table) so it's
     better to handle this non-fatally
   * An error message is also printed, so the failure won't be silent
   * As above a failure could cause a partially mapped region, we now
     clean this up. [mpe: move htab_remove_mapping() out of #ifdef
     CONFIG_MEMORY_HOTPLUG to enable this]

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Handle removing maybe-present bolted HPTEs
David Gibson [Tue, 9 Feb 2016 03:32:41 +0000 (13:32 +1000)]
powerpc/mm: Handle removing maybe-present bolted HPTEs

At the moment the hpte_removebolted callback in ppc_md returns void and
will BUG_ON() if the hpte it's asked to remove doesn't exist in the first
place.  This is awkward for the case of cleaning up a mapping which was
partially made before failing.

So, we add a return value to hpte_removebolted, and have it return ENOENT
in the case that the HPTE to remove didn't exist in the first place.

In the (sole) caller, we propagate errors in hpte_removebolted to its
caller to handle.  However, we handle ENOENT specially, continuing to
complete the unmapping over the specified range before returning the error
to the caller.

This means that htab_remove_mapping() will work sanely on a partially
present mapping, removing any HPTEs which are present, while also returning
ENOENT to its caller in case it's important there.
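
A sketch of the caller-side handling described above (loop bounds
elided):

    int ret = 0, rc;

    for (vaddr = vstart; vaddr < vend; vaddr += step) {
            rc = ppc_md.hpte_removebolted(vaddr, psize, ssize);
            if (rc == -ENOENT) {
                    ret = -ENOENT;  /* remember it, keep unmapping */
                    continue;
            }
            if (rc < 0)
                    return rc;
    }
    return ret;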

There are two callers of htab_remove_mapping():
   - In remove_section_mapping() we already WARN_ON() any error return,
     which is reasonable - in this case the mapping should be fully
     present
   - In vmemmap_remove_mapping() we BUG_ON() any error.  We change that to
     just a WARN_ON() in the case of ENOENT, since failing to remove a
     mapping that wasn't there in the first place probably shouldn't be
     fatal.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm: Clean up error handling for htab_remove_mapping
David Gibson [Tue, 9 Feb 2016 03:32:40 +0000 (13:32 +1000)]
powerpc/mm: Clean up error handling for htab_remove_mapping

Currently, the only error that htab_remove_mapping() can report is
-EINVAL, if removal of bolted HPTEs isn't implemented for this
platform.  We make a few clean-ups to the handling of this:

 * EINVAL isn't really the right code - there's nothing wrong with the
   function's arguments - use ENODEV instead
 * We were also printing a warning message, but that's a decision better
   left up to the callers, so remove it
 * One caller is vmemmap_remove_mapping(), which will just BUG_ON() on
   error, making the warning message redundant, so no change is needed
   there.
 * The other caller is remove_section_mapping().  This is called in the
   memory hot remove path at a point after vmemmap_remove_mapping() so
   if hpte_removebolted isn't implemented, we'd expect to have already
   BUG()ed anyway.  Put a WARN_ON() here, in lieu of a printk() since this
   really shouldn't be happening.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Fix misspellings in comments.
Adam Buchbinder [Wed, 24 Feb 2016 18:51:11 +0000 (10:51 -0800)]
powerpc: Fix misspellings in comments.

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/ps3: gelic_udbg: use struct udphdr from <linux/udp.h>
Luis Henriques [Mon, 8 Feb 2016 22:27:07 +0000 (22:27 +0000)]
powerpc/ps3: gelic_udbg: use struct udphdr from <linux/udp.h>

Instead of defining a local version of struct udphdr use the standard
definition from <linux/udp.h>.

The 'src' field is named 'source' in the <linux/udp.h> definition.

Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/ps3: gelic_udbg: use struct iphdr from <linux/ip.h>
Luis Henriques [Mon, 8 Feb 2016 22:27:06 +0000 (22:27 +0000)]
powerpc/ps3: gelic_udbg: use struct iphdr from <linux/ip.h>

Instead of defining a local version of struct iphdr use the standard
definition from <linux/ip.h>.

Several fields in the <linux/ip.h> definition have different names:
 - proto -> protocol
 - src -> saddr
 - dest -> daddr
 - total_length -> tot_len
 - checksum -> check

Also, 'ver_len' is composed by 'version' and 'ihl' in <linux/ip.h>.

Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/ps3: gelic_udbg: use struct vlan_hdr from <linux/if_vlan.h>
Luis Henriques [Mon, 8 Feb 2016 22:27:05 +0000 (22:27 +0000)]
powerpc/ps3: gelic_udbg: use struct vlan_hdr from <linux/if_vlan.h>

Instead of defining the local struct vlantag use the standard definition
of vlan_hdr from <linux/if_vlan.h>.

The fields in the <linux/if_vlan.h> definition have different names:
 - vlan -> h_vlan_TCI
 - subtype -> h_vlan_encapsulated_proto

While there, also use the ETH_P_IP macro instead of a hard-coded
0x0800 value.

Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/ps3: gelic_udbg: use struct ethhdr from <linux/if_ether.h>
Luis Henriques [Mon, 8 Feb 2016 22:27:04 +0000 (22:27 +0000)]
powerpc/ps3: gelic_udbg: use struct ethhdr from <linux/if_ether.h>

Instead of defining a local version of struct ethhdr use the standard
definition from <linux/if_ether.h>.

The fields in the <linux/if_ether.h> definition have different names:
 - dest -> h_dest
 - src -> h_source
 - type -> h_proto

While there, use a few other standard functions/macros:
 - eth_broadcast_addr (instead of a memset)
 - ETH_ALEN
 - ETH_P_8021Q

Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Expand the real page number field of the Linux PTE
Paul Mackerras [Mon, 22 Feb 2016 02:41:20 +0000 (13:41 +1100)]
powerpc/mm/book3s-64: Expand the real page number field of the Linux PTE

Now that other PTE fields have been moved out of the way, we can
expand the RPN field of the PTE on 64-bit Book 3S systems and align
it with the RPN field in the radix PTE format used by PowerISA v3.0
CPUs in radix mode.  For 64k page size, this means we need to move
the _PAGE_COMBO and _PAGE_4K_PFN bits.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Move software-used bits in PTE
Paul Mackerras [Mon, 22 Feb 2016 02:41:19 +0000 (13:41 +1100)]
powerpc/mm/book3s-64: Move software-used bits in PTE

This moves the _PAGE_SPECIAL and _PAGE_SOFT_DIRTY bits in the Linux
PTE on 64-bit Book 3S systems to bit positions which are designated
for software use in the radix PTE format used by PowerISA v3.0 CPUs
in radix mode.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Shuffle read, write, execute and user bits in PTE
Paul Mackerras [Mon, 22 Feb 2016 02:41:18 +0000 (13:41 +1100)]
powerpc/mm/book3s-64: Shuffle read, write, execute and user bits in PTE

This moves the _PAGE_EXEC, _PAGE_RW and _PAGE_USER bits around in
the Linux PTE on 64-bit Book 3S systems to correspond with the bit
positions used in radix mode by PowerISA v3.0 CPUs.  This also adds
a _PAGE_READ bit corresponding to the read permission bit in the
radix PTE.  _PAGE_READ is currently unused but could possibly be used
in future to improve pte_protnone().

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Move HPTE-related bits in PTE to upper end
Paul Mackerras [Mon, 22 Feb 2016 02:41:17 +0000 (13:41 +1100)]
powerpc/mm/book3s-64: Move HPTE-related bits in PTE to upper end

This moves the _PAGE_HASHPTE, _PAGE_F_GIX and _PAGE_F_SECOND fields in
the Linux PTE on 64-bit Book 3S systems to the most significant byte.
Of the 5 bits, one is a software-use bit and the other four are
reserved bit positions in the PowerISA v3.0 radix PTE format.
Using these bits is OK because these bits are all to do with tracking
the HPTE(s) associated with the Linux PTE, and therefore won't be
needed in radix mode.  This frees up bit positions in the lower two
bytes.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Move _PAGE_PTE to 2nd most significant bit
Paul Mackerras [Mon, 22 Feb 2016 02:41:16 +0000 (13:41 +1100)]
powerpc/mm/book3s-64: Move _PAGE_PTE to 2nd most significant bit

This changes _PAGE_PTE for 64-bit Book 3S processors from 0x1 to
0x4000_0000_0000_0000, because that bit is used as the L (leaf)
bit by PowerISA v3.0 CPUs in radix mode.  The "leaf" bit indicates
that the PTE points to a page directly rather than another radix
level, which is what the _PAGE_PTE bit means.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Move _PAGE_PRESENT to the most significant bit
Paul Mackerras [Mon, 22 Feb 2016 02:41:15 +0000 (13:41 +1100)]
powerpc/mm/book3s-64: Move _PAGE_PRESENT to the most significant bit

This changes _PAGE_PRESENT for 64-bit Book 3S processors from 0x2 to
0x8000_0000_0000_0000, because that is where PowerISA v3.0 CPUs in
radix mode will expect to find it.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Use physical addresses in upper page table tree levels
Paul Mackerras [Tue, 23 Feb 2016 02:36:17 +0000 (13:36 +1100)]
powerpc/mm/book3s-64: Use physical addresses in upper page table tree levels

This changes the Linux page tables to store physical addresses
rather than kernel virtual addresses in the upper levels of the
tree (pgd, pud and pmd) for 64-bit Book 3S machines.

This also changes the hugepd pointers used to implement hugepages
when the base page size is 4k to store physical addresses rather than
virtual addresses (again just for 64-bit Book3S machines).

This frees up some high order bits, and will be needed with
PowerISA v3.0 machines which read the page table tree in hardware
in radix mode.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Free up 7 high-order bits in the Linux PTE
Paul Mackerras [Mon, 22 Feb 2016 02:41:13 +0000 (13:41 +1100)]
powerpc/mm/book3s-64: Free up 7 high-order bits in the Linux PTE

This frees up bits 57-63 in the Linux PTE on 64-bit Book 3S machines.
In the 4k page case, this is done just by reducing the size of the
RPN field to 39 bits, giving 51-bit real addresses.  In the 64k page
case, we had 10 unused bits in the middle of the PTE, so this moves
the RPN field down 10 bits to make use of those unused bits.  This
means the RPN field is now 3 bits larger at 37 bits, giving 53-bit
real addresses in the normal case, or 49-bit real addresses for the
special 4k PFN case.

We are doing this in order to be able to move some other PTE bits
into the positions where PowerISA V3.0 processors will expect to
find them in radix-tree mode.  Ultimately we will be able to move
the RPN field to lower bit positions and make it larger.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/book3s-64: Clean up some obsolete or misleading comments
Paul Mackerras [Mon, 22 Feb 2016 02:41:12 +0000 (13:41 +1100)]
powerpc/mm/book3s-64: Clean up some obsolete or misleading comments

No code changes.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  Merge tag 'powerpc-4.5-4' into next
Michael Ellerman [Thu, 25 Feb 2016 10:52:58 +0000 (21:52 +1100)]
Merge tag 'powerpc-4.5-4' into next

Pull in our current fixes from 4.5; in particular, the "Fix Multi hit
ERAT" bug is causing folks some grief when testing next.

8 years ago  powerpc: Fix BUG_ON() reporting in real mode
Balbir Singh [Thu, 18 Feb 2016 02:48:01 +0000 (13:48 +1100)]
powerpc: Fix BUG_ON() reporting in real mode

I ran into this issue while debugging an early boot problem. The
system hit a BUG_ON(), but report_bug() failed to print the line
number and file name. The reason is that the system was running in
real mode, and report_bug() searches for addresses in the
PAGE_OFFSET+ region.

Suggested-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Use BUILD_BUG_ON_MSG() for unsupported {cmp}xchg sizes
pan xinhui [Tue, 23 Feb 2016 11:05:01 +0000 (19:05 +0800)]
powerpc: Use BUILD_BUG_ON_MSG() for unsupported {cmp}xchg sizes

__xchg_called_with_bad_pointer() can't tell us which code uses {cmp}xchg
with an unsupported size, and no error is reported until the link stage.

To make such problems easier to debug, use BUILD_BUG_ON_MSG() instead.
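
The pattern looks roughly like this in the __xchg() switch (a sketch):

    switch (size) {
    case 4:
            return __xchg_u32(ptr, x);
    #ifdef CONFIG_PPC64
    case 8:
            return __xchg_u64(ptr, x);
    #endif
    }
    /* survives dead-code elimination only for unsupported sizes,
     * producing a compile-time error with a readable message */
    BUILD_BUG_ON_MSG(1, "Unsupported size for __xchg");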

Signed-off-by: pan xinhui <xinhui.pan@linux.vnet.ibm.com>
[mpe: Tweak change log wording & add relaxed/acquire]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

8 years ago  powerpc/powernv: Add AST graphics driver to powernv_defconfig
Jeremy Kerr [Wed, 24 Feb 2016 01:55:05 +0000 (09:55 +0800)]
powerpc/powernv: Add AST graphics driver to powernv_defconfig

Most current OpenPOWER platforms have an AST BMC, so add graphics
support via the AST DRM driver.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/powernv: Add powernv firmware interface drivers to powernv_defconfig
Jeremy Kerr [Wed, 24 Feb 2016 01:55:04 +0000 (09:55 +0800)]
powerpc/powernv: Add powernv firmware interface drivers to powernv_defconfig

There are a few firmware-provided interfaces for OpenPOWER platforms:
the PRD infrastructure, IPMI support, and MTD access to the PNOR flash.

This change adds these to powernv_defconfig

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/powernv: Add powernv_defconfig
Jeremy Kerr [Wed, 24 Feb 2016 01:55:03 +0000 (09:55 +0800)]
powerpc/powernv: Add powernv_defconfig

This change adds a defconfig for the non-virtualised POWER platforms,
based on pseries_defconfig, but without pseries support, built
little-endian, and with no OF trampoline.

Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
Acked-by: Joel Stanley <joel@jms.id.au>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Add POWER9 cputable entry
Michael Neuling [Fri, 19 Feb 2016 00:16:24 +0000 (11:16 +1100)]
powerpc: Add POWER9 cputable entry

Add a cputable entry for POWER9.  More code is required to actually
boot and run on a POWER9, but this gets in the base piece that we can
start building on.

Copies over from POWER8 except for:
- Adds a new CPU_FTR_ARCH_300 bit to start hanging new architecture
   features from (in subsequent patches).
- Advertises new user features bits PPC_FEATURE2_ARCH_3_00 &
  HAS_IEEE128 when on POWER9.
- Drops CPU_FTR_SUBCORE.
- Drops PMU code and machine check.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: Use defines for __init_tlb_power[78]
Michael Neuling [Fri, 19 Feb 2016 00:16:23 +0000 (11:16 +1100)]
powerpc: Use defines for __init_tlb_power[78]

Use defines for the __init_tlb_power[78] literals rather than
hand-coding them.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/powernv: Create separate subcores CPU feature bit
Michael Neuling [Fri, 19 Feb 2016 00:16:22 +0000 (11:16 +1100)]
powerpc/powernv: Create separate subcores CPU feature bit

Subcores aren't really part of the 2.07 architecture, but currently we
turn them on using the 2.07 feature bit.  Subcores are really a
POWER8-specific feature.

This adds a new CPU_FTR bit just for subcores and moves the subcore
init code over to use it.

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/powernv: don't create OPAL msglog sysfs entry if memcons init fails
Andrew Donnellan [Thu, 18 Feb 2016 01:12:54 +0000 (12:12 +1100)]
powerpc/powernv: don't create OPAL msglog sysfs entry if memcons init fails

When initialising the OPAL interfaces, there is a possibility that
opal_msglog_init() may fail to initialise the msglog/memory console.

Fix opal_msglog_sysfs_init() so it doesn't try to create a sysfs entry
for the msglog if this occurs.

Suggested-by: Joel Stanley <joel@jms.id.au>
Fixes: 9b4fffa14906 ("powerpc/powernv: new function to access OPAL msglog")
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/mm/hash: Clear the invalid slot information correctly
Aneesh Kumar K.V [Sat, 20 Feb 2016 15:11:54 +0000 (20:41 +0530)]
powerpc/mm/hash: Clear the invalid slot information correctly

We can get a hash pte fault with 4K base page size and find the pte
already inserted with 64K base page size. In that case we need to
clear the existing slot information from the old pte. Fix this
correctly.

With THP, we also clear the slot information for all the 64K hash
ptes mapping that 16MB page; they are all invalid now. This makes sure
we don't find the slot valid when we fault with 4K base page size.
Finding the slot valid should not result in any wrong behaviour,
because we check the validity again in the hash page table, but this
way we can avoid that check completely.

Fixes: a43c0eb8364c022 ("powerpc/mm: Convert 4k hash insert to C")
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc/eeh: Fix partial hotplug criterion
Gavin Shan [Fri, 12 Feb 2016 05:03:05 +0000 (16:03 +1100)]
powerpc/eeh: Fix partial hotplug criterion

During error recovery, the device could be removed as part of a
partial hotplug. The criterion used to decide on partial hotplug
was: if the device driver provides error_detected(), slot_reset()
and resume() callbacks, it's immune from hotplug; otherwise,
it's going to experience partial hotplug during EEH recovery. But
that criterion isn't correct enough: the mlx4_core driver for Mellanox
adapters provides the error_detected() and slot_reset() callbacks,
but resume() isn't there, yet those Mellanox adapters shouldn't have
to go through partial hotplug.

This fixes the criterion to a practical one: an adapter whose driver
provides error_detected() and slot_reset() will be immune from
partial hotplug; resume() isn't mandatory.

Fixes: f2da4ccf ("powerpc/eeh: More relaxed hotplug criterion")
Cc: stable@vger.kernel.org #v4.4+
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: atomic: Implement acquire/release/relaxed variants for cmpxchg
Boqun Feng [Tue, 15 Dec 2015 14:24:17 +0000 (22:24 +0800)]
powerpc: atomic: Implement acquire/release/relaxed variants for cmpxchg

Implement cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed, based
on which the _release variants can be built.
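
The atomic wrappers then reduce to defines over the plain cmpxchg
family, e.g.:

    #define atomic_cmpxchg_relaxed(v, o, n) \
            cmpxchg_relaxed(&((v)->counter), (o), (n))
    #define atomic_cmpxchg_acquire(v, o, n) \
            cmpxchg_acquire(&((v)->counter), (o), (n))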

To avoid superfluous barriers in the _acquire variants, we implement
these operations with assembly code rather than using
__atomic_op_acquire() to build them automatically.

For the same reason, we keep the assembly implementation of the fully
ordered cmpxchg operations.

However, we don't do the same for _release, because that would require
putting barriers in the middle of ll/sc loops, which is probably a bad
idea.

Note cmpxchg{,64}_relaxed and atomic{,64}_cmpxchg_relaxed are not
compiler barriers.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: atomic: Implement acquire/release/relaxed variants for xchg
Boqun Feng [Tue, 15 Dec 2015 14:24:16 +0000 (22:24 +0800)]
powerpc: atomic: Implement acquire/release/relaxed variants for xchg

Implement xchg{,64}_relaxed and atomic{,64}_xchg_relaxed; based on
these _relaxed variants, the release/acquire variants and fully
ordered versions can be built.

Note that xchg{,64}_relaxed and atomic_{,64}_xchg_relaxed are not
compiler barriers.

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  powerpc: atomic: Implement atomic{, 64}_*_return_* variants
Boqun Feng [Wed, 6 Jan 2016 02:08:25 +0000 (10:08 +0800)]
powerpc: atomic: Implement atomic{, 64}_*_return_* variants

On powerpc, acquire and release semantics can be achieved with
lightweight barriers ("lwsync" and "ctrl+isync"), which can be used to
implement __atomic_op_{acquire,release}.

For release semantics, since we only need to ensure that all memory
accesses issued before take effect before the -store- part of the
atomic, "lwsync" is all we need. On platforms without "lwsync",
"sync" should be used. Therefore in __atomic_op_release() we use
PPC_RELEASE_BARRIER.

For acquire semantics, "lwsync" is likewise all we need, for the
similar reason. However, on platforms without "lwsync", we can use
"isync" rather than "sync" as an acquire barrier. Therefore in
__atomic_op_acquire() we use PPC_ACQUIRE_BARRIER, which is barrier()
on UP, "lwsync" if available and "isync" otherwise.

Implement atomic{,64}_{add,sub,inc,dec}_return_relaxed, and build other
variants with these helpers.
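
The resulting powerpc helpers look roughly like:

    #define __atomic_op_acquire(op, args...)                          \
    ({                                                                \
            typeof(op##_relaxed(args)) __ret = op##_relaxed(args);    \
            __asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory"); \
            __ret;                                                    \
    })

    #define __atomic_op_release(op, args...)                          \
    ({                                                                \
            __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory"); \
            op##_relaxed(args);                                       \
    })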

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  atomics: Allow architectures to define their own __atomic_op_* helpers
Boqun Feng [Tue, 15 Dec 2015 14:24:14 +0000 (22:24 +0800)]
atomics: Allow architectures to define their own __atomic_op_* helpers

Some architectures have special barriers for acquire, release and
fence semantics, so the general memory barriers (smp_mb__*_atomic())
in the default __atomic_op_*() helpers may be too strong. Allow
architectures to define their own helpers, which override the
defaults.
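
The hook is the usual #ifndef dance in the generic header, sketched:

    /* used only if the architecture didn't define its own */
    #ifndef __atomic_op_acquire
    #define __atomic_op_acquire(op, args...)                        \
    ({                                                              \
            typeof(op##_relaxed(args)) __ret = op##_relaxed(args);  \
            smp_mb__after_atomic();                                 \
            __ret;                                                  \
    })
    #endif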

Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years ago  MAINTAINERS: Update EEH details and maintainership
Russell Currey [Wed, 17 Feb 2016 06:06:04 +0000 (17:06 +1100)]
MAINTAINERS: Update EEH details and maintainership

Enhanced Error Handling could mean anything in the context of the entire
kernel, so change the name to reference that it is both for PCI and
powerpc.

EEH covers a bit more than the previously listed files, so add the headers
and platform-specific code to the EEH maintained section.

In addition, I am taking over the maintainership.

Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc: Fix kgdb on little endian ppc64le
Balbir Singh [Mon, 1 Feb 2016 06:03:25 +0000 (17:03 +1100)]
powerpc: Fix kgdb on little endian ppc64le

I spent some time trying to use kgdb and debugged my inability to
resume from kgdb_handle_breakpoint(). The NIP is not incremented,
which leads to a loop in the debugger.

I've tested this lightly on a virtual instance with KDB enabled.
After the patch, I am able to get the "go" command to work as
expected.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
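
A minimal sketch of the idea (simplified; the helper name here is
illustrative, not the exact patch): compare the instruction word at
NIP against its numeric encoding, which is endian-safe, and step past
it so "go" can resume.

  #define BREAK_INSTR_SIZE        4
  #define BREAK_INSTR             0x7d821008      /* twge r2, r2 */

  static void kgdb_skip_breakpoint(struct pt_regs *regs)
  {
          /* Comparing against the numeric encoding works on both
           * endiannesses, unlike reinterpreting a byte array as a
           * u32, so NIP is reliably advanced past the trap. */
          if (*(u32 *)regs->nip == BREAK_INSTR)
                  regs->nip += BREAK_INSTR_SIZE;
  }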
8 years agopowerpc/ioda: Set "read" permission when "write" is set
Alexey Kardashevskiy [Wed, 17 Feb 2016 07:26:31 +0000 (18:26 +1100)]
powerpc/ioda: Set "read" permission when "write" is set

Quite often drivers set only the "write" permission, assuming that it
includes the "read" permission as well, and this works on plenty of
platforms. However, IODA2 is strict about this and produces an EEH
error when the "read" permission is not set and a read happens.

This adds a workaround in the IODA code to always add the "read" bit
when the "write" bit is set.

Fixes: 10b35b2b7485 ("powerpc/powernv: Do not set "read" flag if direction==DMA_NONE")
Cc: stable@vger.kernel.org # 4.2+
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Tested-by: Douglas Miller <dougmill@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
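
A minimal sketch of the workaround, using the existing TCE permission
flags (the exact placement in the IODA TCE-build path is simplified
here):

          /* IODA2 raises an EEH error on reads through a write-only
           * TCE, so make "write" imply "read" when building the
           * TCE entry. */
          if (tce & TCE_PCI_WRITE)
                  tce |= TCE_PCI_READ;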
8 years agopowerpc/mm: Fix Multi hit ERAT caused by recent THP update
Aneesh Kumar K.V [Tue, 9 Feb 2016 01:20:31 +0000 (06:50 +0530)]
powerpc/mm: Fix Multi hit ERAT caused by recent THP update

With ppc64 we use the deposited pgtable_t to store the hash pte slot
information. We should not withdraw the deposited pgtable_t without
marking the pmd none. This ensures that the low level hash fault
handling will skip this huge pte and we will handle it at the upper
levels.

A recent change to pmd splitting changed the above in order to handle
the race between pmd split and exit_mmap, explained below.

Consider the following race:

  CPU0                                     CPU1
  shrink_page_list()
    add_to_swap()
      split_huge_page_to_list()
        __split_huge_pmd_locked()
          pmdp_huge_clear_flush_notify()
          // pmd_none() == true
                                           exit_mmap()
                                             unmap_vmas()
                                               zap_pmd_range()
                                                 // no action on pmd
                                                 // since pmd_none()
    pmd_populate()

As a result the THP will not be freed. The leak is detected by
check_mm():

BUG: Bad rss-counter state mm:ffff880058d2e580 idx:1 val:512

The above required us to not mark the pmd none during a pmd split.

The fix for ppc is to clear _PAGE_USER in the huge pte, so that the
low level fault handling code skips this pte. At the higher level we
do take the ptl lock; that should serialize us against the pmd split.
Once the lock is acquired we check the pmd again using pmd_same, which
should always return false for us, and hence we retry the access. We
do the pmd_same check after taking the ptl in all the THP cases
(do_huge_pmd_wp_page, do_huge_pmd_numa_page and
huge_pmd_set_accessed).

Also make sure we wait for the irq-disabled sections on other cpus to
finish before replacing a huge pte entry with a regular pmd entry.
Code paths like find_linux_pte_or_hugepte depend on irq disabling to
get a stable pte_t pointer. A parallel THP split needs to make sure we
don't convert a huge pmd entry to a regular pmd entry without waiting
for the irq-disabled sections to finish.

Fixes: eef1b3ba053a ("thp: implement split_huge_pmd()")
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
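
The locked re-check pattern relied on above looks roughly like this (a
simplified sketch of what do_huge_pmd_wp_page() and friends do):

          spin_lock(ptl);
          if (unlikely(!pmd_same(*pmd, orig_pmd))) {
                  /* The pmd changed under us (e.g. a parallel split):
                   * drop the lock and retry the access. */
                  spin_unlock(ptl);
                  return 0;
          }
          /* pmd is stable here; safe to operate on it under ptl. */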
8 years agopowerpc/powernv: Fix stale PE primary bus
Gavin Shan [Tue, 9 Feb 2016 04:50:22 +0000 (15:50 +1100)]
powerpc/powernv: Fix stale PE primary bus

When a PCI bus is unplugged during the full hotplug performed for EEH
recovery, the platform PE instance (struct pnv_ioda_pe) isn't released
and keeps dereferencing the stale PCI bus that has been released. This
leads to a kernel crash when the stale PCI bus is referenced.

This fixes the issue by correcting the PE's primary bus when it comes
online at plugging time, in pnv_pci_dma_bus_setup(), which is called
by pcibios_fixup_bus().

Cc: stable@vger.kernel.org # v4.1+
Reported-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reported-by: Pradipta Ghosh <pradghos@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc/eeh: Fix stale cached primary bus
Gavin Shan [Tue, 9 Feb 2016 04:50:21 +0000 (15:50 +1100)]
powerpc/eeh: Fix stale cached primary bus

When a PE is created, its primary bus is cached in pe->bus. At a
later point, the cached primary bus is returned from eeh_pe_bus_get().
However, the cached primary bus can become stale and lead to a kernel
crash in one case: the full hotplug performed as part of fenced PHB
error recovery releases all PCI buses under the PHB at unplugging time
and recreates them at plugging time, leaving pe->bus dereferencing the
PCI bus that was released.

This adds another PE flag (EEH_PE_PRI_BUS) to represent the validity
of pe->bus. pe->bus is updated when its first child EEH device is
online and the flag is set. Before unplugging in full hotplug for
error recovery, the flag is cleared.

Fixes: 8cdb2833 ("powerpc/eeh: Trace PCI bus from PE")
Cc: stable@vger.kernel.org #v3.11+
Reported-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reported-by: Pradipta Ghosh <pradghos@in.ibm.com>
Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Tested-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
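
A sketch of the guarded re-caching this results in (simplified, not
the exact patch):

          /* When the first child EEH device comes online, (re)cache
           * the primary bus and mark the cache valid.  The flag is
           * cleared before the unplug in full hotplug, so a recreated
           * bus is re-cached here instead of being used stale. */
          if (!(pe->state & EEH_PE_PRI_BUS)) {
                  pe->bus = eeh_dev_to_pci_dev(edev)->bus;
                  pe->state |= EEH_PE_PRI_BUS;
          }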
8 years agopowerpc/pseries: Don't trace hcalls on offline CPUs
Denis Kirjanov [Mon, 14 Dec 2015 20:18:06 +0000 (23:18 +0300)]
powerpc/pseries: Don't trace hcalls on offline CPUs

If a cpu is hotplugged while the hcall trace points are active, it's
possible to hit a warning from RCU due to the trace points calling into
RCU from an offline cpu, eg:

  RCU used illegally from offline CPU!
  rcu_scheduler_active = 1, debug_locks = 1

Make the hypervisor tracepoints conditional by using
TRACE_EVENT_FN_COND.

Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Denis Kirjanov <kda@linux-powerpc.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
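
With TRACE_EVENT_FN_COND the condition is checked before the
tracepoint fires, so offline cpus never call into RCU. A simplified
sketch of the shape of the hcall tracepoint:

  TRACE_EVENT_FN_COND(hcall_entry,

          TP_PROTO(unsigned long opcode, unsigned long *args),

          TP_ARGS(opcode, args),

          /* suppress the event entirely on offline cpus */
          TP_CONDITION(cpu_online(raw_smp_processor_id())),

          TP_STRUCT__entry(
                  __field(unsigned long, opcode)
          ),

          TP_fast_assign(
                  __entry->opcode = opcode;
          ),

          TP_printk("opcode=%lu", __entry->opcode),

          hcall_tracepoint_regfunc, hcall_tracepoint_unregfunc
  );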
8 years agopowerpc/perf/hv-gpci: Increase request buffer size
Sukadev Bhattiprolu [Tue, 9 Feb 2016 07:47:45 +0000 (02:47 -0500)]
powerpc/perf/hv-gpci: Increase request buffer size

The GPCI hcall allows for a 4K buffer but we limit the buffer to 1K.
The problem with a 1K buffer is that if a request returns more values
than it can accommodate, the request fails.

The buffer we are using is currently allocated on the stack and hence
limited in size. Instead use a per-CPU 4K buffer like we do with 24x7
counters (hv-24x7.c).

While here, rename the macro GPCI_MAX_DATA_BYTES to HGPCI_MAX_DATA_BYTES
for consistency with 24x7 counters.

Signed-off-by: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
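
A sketch of the per-CPU buffer approach (simplified from the hv-24x7.c
pattern; names are illustrative):

  #define HGPCI_REQ_BUFFER_SIZE   4096
  #define HGPCI_MAX_DATA_BYTES \
          (HGPCI_REQ_BUFFER_SIZE - \
           sizeof(struct hv_get_perf_counter_info_params))

  /* One 4K request buffer per CPU instead of 1K on the stack. */
  static DEFINE_PER_CPU(char, hv_gpci_reqb[HGPCI_REQ_BUFFER_SIZE])
          __aligned(sizeof(uint64_t));

  /* In the request path: */
          void *arg = (void *)get_cpu_var(hv_gpci_reqb);

          memset(arg, 0, HGPCI_REQ_BUFFER_SIZE);
          /* ... fill in the request ... */
          ret = plpar_hcall_norets(H_GET_PERF_COUNTER_INFO,
                                   virt_to_phys(arg),
                                   HGPCI_REQ_BUFFER_SIZE);
          put_cpu_var(hv_gpci_reqb);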
8 years agopowerpc/powernv: allocate sparse PE# when using M64 BAR in Single PE mode
Wei Yang [Thu, 22 Oct 2015 01:22:19 +0000 (09:22 +0800)]
powerpc/powernv: allocate sparse PE# when using M64 BAR in Single PE mode

When the M64 BAR is set to Single PE mode, the PE# assigned to a VF
can be sparse.

This patch restructures the code to allocate sparse PE#s for VFs when
the M64 BAR is set to Single PE mode. It also renames "offset" to
"pe_num_map" to reflect that its content is the PE number.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc/powernv: boundary the total VF BAR size instead of the individual one
Wei Yang [Thu, 22 Oct 2015 01:22:18 +0000 (09:22 +0800)]
powerpc/powernv: boundary the total VF BAR size instead of the individual one

Each VF can have at most 6 BARs. When the total BAR size exceeds the
gate, it will also exhaust the M64 window after expansion.

This patch enforces the boundary by checking the total VF BAR size
instead of each individual BAR.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc/powernv: replace the hard coded boundary with gate
Wei Yang [Thu, 22 Oct 2015 01:22:17 +0000 (09:22 +0800)]
powerpc/powernv: replace the hard coded boundary with gate

At the moment the 64bit-prefetchable window can be at most 64GB, a
value currently obtained from the device tree. This means that in
shared mode the maximum supported VF BAR size is 64GB/256 = 256MB, yet
VF BARs of that size would exhaust the whole 64bit-prefetchable
window. As a design decision, the VF BAR size is therefore bounded at
64MB: VF BARs of 64MB occupy only a quarter of the 64bit-prefetchable
window, which is affordable.

This patch replaces the magic limit of 64MB with a "gate", which is
1/4 of the M64 segment size (m64_segsize >> 2), and adds a comment to
explain the reason for it.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc/powernv: use one M64 BAR in Single PE mode for one VF BAR
Wei Yang [Thu, 22 Oct 2015 01:22:16 +0000 (09:22 +0800)]
powerpc/powernv: use one M64 BAR in Single PE mode for one VF BAR

In the current implementation, when a VF BAR is bigger than 64MB, 4
M64 BARs in Single PE mode are used to cover the number of VFs that
need to be enabled. By doing so, several VFs end up in one VF group,
which leads to interference between VFs in the same group.

This patch changes the design to use one M64 BAR in Single PE mode for
one VF BAR. This gives absolute isolation for VFs.

In addition, based on Gavin's comments, m64_wins is renamed to
m64_map, meaning the index of the M64 BAR used to map the VF BAR, and
the patch makes sure the VF BAR size is bigger than 32MB when an M64
BAR is used in Single PE mode.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc/powernv: simplify the calculation of iov resource alignment
Wei Yang [Thu, 22 Oct 2015 01:22:15 +0000 (09:22 +0800)]
powerpc/powernv: simplify the calculation of iov resource alignment

The alignment of an IOV BAR on the PowerNV platform is the total size
of the IOV BAR. No matter whether the IOV BAR is expanded to
roundup_pow_of_two(total_vfs) entries or to the maximum PE number
(256), the total size can be calculated as (vfs_expanded *
VF_BAR_size).

This patch simplifies pnv_pci_iov_resource_alignment() by removing
the first case.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc/powernv: don't enable SRIOV when VF BAR has non 64bit-prefetchable BAR
Wei Yang [Thu, 22 Oct 2015 01:22:14 +0000 (09:22 +0800)]
powerpc/powernv: don't enable SRIOV when VF BAR has non 64bit-prefetchable BAR

On PHB3, we enable SRIOV devices by mapping the IOV BAR with M64
BARs. If an SRIOV device's IOV BAR is not 64bit-prefetchable, it is
not assigned from the 64bit-prefetchable window, which means an M64
BAR can't map it.

The reason is that PCI bridges support only 2 memory windows, and the
kernel programs bridges so that one window is 32bit-nonprefetchable
and the other is 64bit-prefetchable. So if a device's IOV BAR is 64bit
but non-prefetchable, it will be mapped into the 32bit space and
therefore M64 cannot be used for it.

This patch makes this explicit and truncates the IOV resource in this
case to save MMIO space.

Signed-off-by: Wei Yang <weiyang@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Acked-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
8 years agopowerpc/powernv: Simplify definitions of EEH debugfs handlers
Gavin Shan [Tue, 9 Feb 2016 04:50:24 +0000 (15:50 +1100)]
powerpc/powernv: Simplify definitions of EEH debugfs handlers

The EEH debugfs handlers share the same prototype. This introduces a
macro to define them, simplifying the code. No logical changes.

Signed-off-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
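
The macro looks roughly like this (a sketch of its shape; details may
differ from the actual patch):

  #define PNV_EEH_DBGFS_ENTRY(name, reg)                          \
  static int pnv_eeh_dbgfs_set_##name(void *data, u64 val)        \
  {                                                               \
          return pnv_eeh_dbgfs_set(data, reg, val);               \
  }                                                               \
                                                                  \
  static int pnv_eeh_dbgfs_get_##name(void *data, u64 *val)       \
  {                                                               \
          return pnv_eeh_dbgfs_get(data, reg, val);               \
  }                                                               \
                                                                  \
  DEFINE_SIMPLE_ATTRIBUTE(pnv_eeh_dbgfs_ops_##name,               \
                          pnv_eeh_dbgfs_get_##name,               \
                          pnv_eeh_dbgfs_set_##name, "0x%llx\n")

  /* One line per register instead of three hand-written handlers,
   * with an illustrative register offset: */
  PNV_EEH_DBGFS_ENTRY(outb, 0xD10);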