GitHub/LineageOS/android_kernel_motorola_exynos9610.git
12 years ago  crypto: caam - increase TRNG clocks per sample
Kim Phillips [Mon, 17 Sep 2012 23:21:55 +0000 (18:21 -0500)]
crypto: caam - increase TRNG clocks per sample

We need to configure the TRNG to use more clocks per sample
to handle the two back-to-back 64KiB random descriptor requests
on higher-frequency P5040s.

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto, tcrypt: remove local_bh_disable/enable() around local_irq_disable/enable()
Suresh Siddha [Mon, 17 Sep 2012 17:37:26 +0000 (10:37 -0700)]
crypto, tcrypt: remove local_bh_disable/enable() around local_irq_disable/enable()

Ran into this while looking at some new crypto code that uses the FPU and
hits a WARN_ON_ONCE(!irq_fpu_usable()) in kernel_fpu_begin() on an x86
kernel that uses the new eagerfpu model. In short, the current eagerfpu
changes make interrupted_kernel_fpu_idle() return 0, and in_interrupt()
thinks it is in interrupt context because of the local_bh_disable(),
thus resulting in the WARN_ON().

Remove the local_bh_disable/enable() calls around the existing
local_irq_disable/enable() calls. local_irq_disable/enable() already
disables the BH.

 [ If there are any other legitimate users calling kernel_fpu_begin() from
   the process context but with BH disabled, then we can look into fixing the
   irq_fpu_usable() in future. ]
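
For illustration, a minimal sketch of the resulting pattern (the helper name
and shape are placeholders, not the exact tcrypt hunk):

#include <linux/interrupt.h>	/* local_bh_disable()/local_bh_enable() */
#include <linux/irqflags.h>	/* local_irq_disable()/local_irq_enable() */

/* Timing helper in the spirit of tcrypt's cycle counting: disabling IRQs
 * alone is enough; wrapping it in local_bh_disable() made in_interrupt()
 * true and tripped WARN_ON_ONCE(!irq_fpu_usable()) under eagerfpu.
 */
static void timed_op(void (*op)(void *data), void *data)
{
	local_irq_disable();	/* was: local_bh_disable(); local_irq_disable(); */
	op(data);
	local_irq_enable();	/* was: local_irq_enable(); local_bh_enable(); */
}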

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: tegra-aes - fix error return code
Peter Senna Tschudin [Mon, 17 Sep 2012 17:28:28 +0000 (19:28 +0200)]
crypto: tegra-aes - fix error return code

Convert a nonnegative error return code to a negative one, as returned
elsewhere in the function.

A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)

// <smpl>
(
if@p1 (\(ret < 0\|ret != 0\))
 { ... return ret; }
|
ret@p1 = 0
)
... when != ret = e1
    when != &ret
*if(...)
{
  ... when != ret = e2
      when forall
 return ret;
}
// </smpl>
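
For illustration, a hypothetical driver fragment showing the kind of code the
rule above flags (function and variable names are made up):

#include <linux/platform_device.h>
#include <linux/io.h>

static int example_probe(struct platform_device *pdev)
{
	int ret = 0;
	void __iomem *base;

	base = devm_ioremap(&pdev->dev, 0x10000000, 0x1000);
	if (!base)
		return ret;	/* bug: ret is still 0, the caller sees success;
				 * the fix returns a negative errno, e.g. -ENOMEM */

	return ret;
}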

Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: crypto4xx - fix error return code
Peter Senna Tschudin [Mon, 17 Sep 2012 17:28:27 +0000 (19:28 +0200)]
crypto: crypto4xx - fix error return code

Convert a nonnegative error return code to a negative one, as returned
elsewhere in the function.

A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)

// <smpl>
(
if@p1 (\(ret < 0\|ret != 0\))
 { ... return ret; }
|
ret@p1 = 0
)
... when != ret = e1
    when != &ret
*if(...)
{
  ... when != ret = e2
      when forall
 return ret;
}
// </smpl>

Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: hifn_795x - fix error return code
Peter Senna Tschudin [Mon, 17 Sep 2012 17:28:26 +0000 (19:28 +0200)]
crypto: hifn_795x - fix error return code

Convert a nonnegative error return code to a negative one, as returned
elsewhere in the function.

A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)

// <smpl>
(
if@p1 (\(ret < 0\|ret != 0\))
 { ... return ret; }
|
ret@p1 = 0
)
... when != ret = e1
    when != &ret
*if(...)
{
  ... when != ret = e2
      when forall
 return ret;
}
// </smpl>

Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: ux500 - fix error return code
Peter Senna Tschudin [Mon, 17 Sep 2012 17:28:25 +0000 (19:28 +0200)]
crypto: ux500 - fix error return code

Convert a nonnegative error return code to a negative one, as returned
elsewhere in the function.

A simplified version of the semantic match that finds this problem is as
follows: (http://coccinelle.lip6.fr/)

// <smpl>
(
if@p1 (\(ret < 0\|ret != 0\))
 { ... return ret; }
|
ret@p1 = 0
)
... when != ret = e1
    when != &ret
*if(...)
{
  ... when != ret = e2
      when forall
 return ret;
}
// </smpl>

Signed-off-by: Peter Senna Tschudin <peter.senna@gmail.com>
Reviewed-by: Arun Murthy <arunrmurthy83@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: caam - fix error IDs for SEC v5.x RNG4
Horia Geanta [Sat, 15 Sep 2012 00:33:54 +0000 (03:33 +0300)]
crypto: caam - fix error IDs for SEC v5.x RNG4

According to SEC v5.0-v5.3 reference manuals.

Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Acked-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  hwrng: mxc-rnga - Access data via structure
Fabio Estevam [Tue, 4 Sep 2012 21:35:22 +0000 (18:35 -0300)]
hwrng: mxc-rnga - Access data via structure

In the current driver, every time we need to access the RNG clock,
i.e. to enable or disable it, a call to clk_get() is made.

This is not correct; the preferred way is to provide an RNG data structure
that can be used for accessing the RNG resources.
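
A minimal sketch of such a structure (field names here are assumptions, not
the driver's actual layout):

#include <linux/device.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/hw_random.h>

/* Private driver data: the clock is looked up once in probe() with clk_get()
 * and then enabled/disabled through this structure instead of calling
 * clk_get() on every access.
 */
struct mxc_rng {
	struct device *dev;
	struct clk *clk;
	void __iomem *mem;
	struct hwrng rng;
};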

Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  hwrng: mxc-rnga - Adapt clocks to new i.mx clock framework
Fabio Estevam [Tue, 4 Sep 2012 21:35:21 +0000 (18:35 -0300)]
hwrng: mxc-rnga - Adapt clocks to new i.mx clock framework

Adapt clocks to the new i.mx clock framework and fix the following warning:

------------[ cut here ]------------
WARNING: at drivers/clk/clk.c:511 __clk_enable+0x9c/0xac()
Modules linked in:
Backtrace:
[<800124c8>] (dump_backtrace+0x0/0x10c) from [<804172dc>] (dump_stack+0x18/0x1c)
 r7:00000009 r6:000001ff r5:8032cb50 r4:00000000
[<804172c4>] (dump_stack+0x0/0x1c) from [<80021834>] (warn_slowpath_common+0x54)
[<800217e0>] (warn_slowpath_common+0x0/0x6c) from [<80021870>] (warn_slowpath_n)
 r9:80581cac r8:8700a9c0 r7:805ab070 r6:80000013 r5:806133d4
r4:8700a9c0
[<8002184c>] (warn_slowpath_null+0x0/0x2c) from [<8032cb50>] (__clk_enable+0x9c)
[<8032cab4>] (__clk_enable+0x0/0xac) from [<8032cb88>] (clk_enable+0x28/0x44)
 r5:806133d4 r4:8700a9c0
[<8032cb60>] (clk_enable+0x0/0x44) from [<80560f14>] (mxc_rnga_probe+0x68/0x164)
 r7:805ab070 r6:8706ec00 r5:80611314 r4:00000000
[<80560eac>] (mxc_rnga_probe+0x0/0x164) from [<8025914c>] (platform_drv_probe+0)
[<8025912c>] (platform_drv_probe+0x0/0x24) from [<80257c7c>] (driver_probe_devi)
[<80257bfc>] (driver_probe_device+0x0/0x204) from [<80257e94>] (__driver_attach)
 r9:80581cac r8:0000008e r7:00000000 r6:8706ec3c r5:805ab070
r4:8706ec08
[<80257e00>] (__driver_attach+0x0/0x98) from [<8025642c>] (bus_for_each_dev+0x6)
 r7:00000000 r6:80257e00 r5:87035e98 r4:805ab070
[<802563c4>] (bus_for_each_dev+0x0/0x94) from [<80257adc>] (driver_attach+0x20/)
 r7:00000000 r6:873f2380 r5:805ab338 r4:805ab070
[<80257abc>] (driver_attach+0x0/0x28) from [<80256d50>] (bus_add_driver+0x18c/0)
[<80256bc4>] (bus_add_driver+0x0/0x268) from [<802584c4>] (driver_register+0x80)
[<80258444>] (driver_register+0x0/0x134) from [<802594f4>] (platform_driver_reg)
 r7:00000000 r6:805c2e00 r5:00000007 r4:805ab05c
[<802594a8>] (platform_driver_register+0x0/0x60) from [<80259528>] (platform_dr)
[<80259508>] (platform_driver_probe+0x0/0xa4) from [<80560ea0>] (mod_init+0x18/)
 r7:00000000 r6:805c2e00 r5:00000007 r4:87034000
[<80560e88>] (mod_init+0x0/0x24) from [<800086b4>] (do_one_initcall+0x40/0x194)
[<80008674>] (do_one_initcall+0x0/0x194) from [<8053d3f4>] (kernel_init+0xfc/0x)
[<8053d2f8>] (kernel_init+0x0/0x1cc) from [<80027190>] (do_exit+0x0/0x7ec)
---[ end trace 4198eed02050f461 ]---

Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: caam - add IPsec ESN support
Horia Geanta [Tue, 4 Sep 2012 16:18:00 +0000 (19:18 +0300)]
crypto: caam - add IPsec ESN support

Support for ESNs (extended sequence numbers).
Tested with strongswan by connecting back-to-back P1010RDB with P2020RDB.

Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: 842 - remove .cra_list initialization
Jussi Kivilinna [Wed, 29 Aug 2012 20:37:25 +0000 (23:37 +0300)]
crypto: 842 - remove .cra_list initialization

.cra_list initialization is unneeded and has been removed from all other
crypto modules except 842.

Cc: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  Revert "[CRYPTO] cast6: inline bloat--"
Jussi Kivilinna [Tue, 28 Aug 2012 13:49:28 +0000 (16:49 +0300)]
Revert "[CRYPTO] cast6: inline bloat--"

This reverts commit e6ccc727f30a02670f6a00df6d548942bc988f43.

The above commit caused a performance regression for CAST6. Reverting it
gives the following increase in tcrypt speed tests (revert-vs-old ratios).

AMD Phenom II X6 1055T, x86-64:

size    ecb             cbc             ctr             lrw             xts
        enc     dec     enc     dec     enc     dec     enc     dec     enc     dec
16b     1.15x   1.17x   1.16x   1.17x   1.16x   1.16x   1.14x   1.19x   1.05x   1.07x
64b     1.19x   1.23x   1.20x   1.22x   1.19x   1.19x   1.16x   1.24x   1.12x   1.12x
256b    1.21x   1.24x   1.22x   1.24x   1.20x   1.20x   1.17x   1.21x   1.16x   1.14x
1kb     1.21x   1.25x   1.22x   1.24x   1.21x   1.21x   1.18x   1.22x   1.17x   1.15x
8kb     1.21x   1.25x   1.22x   1.24x   1.21x   1.21x   1.18x   1.22x   1.18x   1.15x

Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cast6 - fix sparse warnings (symbol was not declared, should be static?)
Jussi Kivilinna [Tue, 28 Aug 2012 13:47:09 +0000 (16:47 +0300)]
crypto: cast6 - fix sparse warnings (symbol was not declared, should be static?)

Fix "symbol 'x' was not declared. Should it be static?" sparse warnings.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cast5 - fix sparse warnings (symbol was not declared, should be static?)
Jussi Kivilinna [Tue, 28 Aug 2012 13:47:04 +0000 (16:47 +0300)]
crypto: cast5 - fix sparse warnings (symbol was not declared, should be static?)

Fix "symbol 'x' was not declared. Should it be static?" sparse warnings.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: camellia-x86_64 - fix sparse warnings (constant is so big)
Jussi Kivilinna [Tue, 28 Aug 2012 13:46:59 +0000 (16:46 +0300)]
crypto: camellia-x86_64 - fix sparse warnings (constant is so big)

Fix "constant 0xXXXXXXXXXXXXXXXX is so big it's unsigned long" sparse warnings.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: crypto_user - fix sparse warnings (symbol was not declared, should be static?)
Jussi Kivilinna [Tue, 28 Aug 2012 13:46:54 +0000 (16:46 +0300)]
crypto: crypto_user - fix sparse warnings (symbol was not declared, should be static?)

Fix "symbol 'x' was not declared. Should it be static?" sparse warnings.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cast6-avx - tune assembler code for more performance
Jussi Kivilinna [Tue, 28 Aug 2012 11:24:54 +0000 (14:24 +0300)]
crypto: cast6-avx - tune assembler code for more performance

Patch replaces 'movb' instructions with 'movzbl' to break false register
dependencies, interleaves instructions better for out-of-order scheduling
and merges constant 16-bit rotation with round-key variable rotation.

tcrypt ECB results:

Intel Core i5-2450M:

size    old-vs-new      new-vs-generic  old-vs-generic
        enc     dec     enc     dec     enc     dec
256     1.13x   1.19x   2.05x   2.17x   1.82x   1.82x
1k      1.18x   1.21x   2.26x   2.33x   1.93x   1.93x
8k      1.19x   1.19x   2.32x   2.33x   1.95x   1.95x

[v2]
 - Do instruction interleaving another way to avoid adding new FPU<=>CPU
   register moves as these cause performance drop on Bulldozer.
 - Improvements to round-key variable rotation handling.
 - Further interleaving improvements for better out-of-order scheduling.

Cc: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cast5-avx - tune assembler code for more performance
Jussi Kivilinna [Tue, 28 Aug 2012 11:24:49 +0000 (14:24 +0300)]
crypto: cast5-avx - tune assembler code for more performance

Patch replaces 'movb' instructions with 'movzbl' to break false register
dependencies, interleaves instructions better for out-of-order scheduling
and merges constant 16-bit rotation with round-key variable rotation.

tcrypt ECB results (128bit key):

Intel Core i5-2450M:

size    old-vs-new      new-vs-generic  old-vs-generic
        enc     dec     enc     dec     enc     dec
256     1.18x   1.18x   2.45x   2.47x   2.08x   2.10x
1k      1.20x   1.20x   2.73x   2.73x   2.28x   2.28x
8k      1.20x   1.19x   2.73x   2.73x   2.28x   2.29x

[v2]
 - Do instruction interleaving another way to avoid adding new FPU<=>CPU
   register moves as these cause performance drop on Bulldozer.
 - Improvements to round-key variable rotation handling.
 - Further interleaving improvements for better out-of-order scheduling.

Cc: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: twofish-avx - tune assembler code for more performance
Jussi Kivilinna [Tue, 28 Aug 2012 11:24:43 +0000 (14:24 +0300)]
crypto: twofish-avx - tune assembler code for more performance

Patch replaces 'movb' instructions with 'movzbl' to break false register
dependencies and interleaves instructions better for out-of-order scheduling.

Tested on Intel Core i5-2450M and AMD FX-8100.

tcrypt ECB results:

Intel Core i5-2450M:

size    old-vs-new      new-vs-3way     old-vs-3way
        enc     dec     enc     dec     enc     dec
256     1.12x   1.13x   1.36x   1.37x   1.21x   1.22x
1k      1.14x   1.14x   1.48x   1.49x   1.29x   1.31x
8k      1.14x   1.14x   1.50x   1.52x   1.32x   1.33x

AMD FX-8100:

size    old-vs-new      new-vs-3way     old-vs-3way
        enc     dec     enc     dec     enc     dec
256     1.10x   1.11x   1.01x   1.01x   0.92x   0.91x
1k      1.11x   1.12x   1.08x   1.07x   0.97x   0.96x
8k      1.11x   1.13x   1.10x   1.08x   0.99x   0.97x

[v2]
 - Do instruction interleaving another way to avoid adding new FPU<=>CPU
   register moves as these cause performance drop on Bulldozer.
 - Further interleaving improvements for better out-of-order scheduling.

Tested-by: Borislav Petkov <bp@alien8.de>
Cc: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: geode-aes - Use module_pci_driver
Sachin Kamat [Mon, 27 Aug 2012 10:15:31 +0000 (15:45 +0530)]
crypto: geode-aes - Use module_pci_driver

module_pci_driver makes the code simpler by eliminating
module_init and module_exit calls.
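
As a hedged sketch of the pattern (placeholder names, not geode-aes's actual
symbols), module_pci_driver() replaces the hand-written module_init()/
module_exit() pair that only registered and unregistered the PCI driver:

#include <linux/module.h>
#include <linux/pci.h>

static const struct pci_device_id example_ids[] = {
	{ PCI_DEVICE(0x1022, 0x2082) },	/* example vendor/device ID */
	{ 0, }
};
MODULE_DEVICE_TABLE(pci, example_ids);

static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	return 0;
}

static void example_remove(struct pci_dev *pdev)
{
}

static struct pci_driver example_driver = {
	.name     = "example",
	.id_table = example_ids,
	.probe    = example_probe,
	.remove   = example_remove,
};

module_pci_driver(example_driver);
MODULE_LICENSE("GPL");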

Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: remove duplicated include
Wei Yongjun [Sun, 26 Aug 2012 01:34:06 +0000 (09:34 +0800)]
crypto: remove duplicated include

From: Wei Yongjun <yongjun_wei@trendmicro.com.cn>

Remove duplicated include.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: caam - coccicheck fixes
Kim Phillips [Thu, 6 Sep 2012 20:17:03 +0000 (04:17 +0800)]
crypto: caam - coccicheck fixes

use true/false for bool, fix code alignment, and fix two allocs with
no test.

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: ux500/hash - remove unneeded return at ux500_hash_mod_fini
Devendra Naga [Fri, 24 Aug 2012 17:33:57 +0000 (23:03 +0530)]
crypto: ux500/hash - remove unneeded return at ux500_hash_mod_fini

Signed-off-by: Devendra Naga <develkernel412222@gmail.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  arm/crypto: Add optimized AES and SHA1 routines
David McCullough [Thu, 6 Sep 2012 20:17:02 +0000 (04:17 +0800)]
arm/crypto: Add optimized AES and SHA1 routines

Add assembler versions of AES and SHA1 for ARM platforms.  This has provided
up to a 50% improvement in IPsec/TCP throughput for tunnels using AES128/SHA1.

Platform   CPU Speed    Endian   Before (bps)   After (bps)   Improvement

IXP425      533 MHz      big     11217042        15566294        ~38%
KS8695      166 MHz     little    3828549         5795373        ~51%

Signed-off-by: David McCullough <ucdevel@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: Add a MAINTAINERS entry for P7+ in-Nest crypto driver
Kent Yoder [Thu, 6 Sep 2012 20:17:01 +0000 (04:17 +0800)]
crypto: Add a MAINTAINERS entry for P7+ in-Nest crypto driver

Add a MAINTAINERS entry for the IBM Power in-Nest Crypto Accelerators
driver.

Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: talitos - add IPsec ESN support
Horia Geanta [Wed, 8 Aug 2012 15:46:45 +0000 (18:46 +0300)]
crypto: talitos - add IPsec ESN support

Support for ESNs (extended sequence numbers).
Tested with strongswan on a P2020RDB back-to-back setup.
Extracted from /etc/ipsec.conf:
esp=aes-sha1-esn-modp4096!

Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: talitos - support for assoc data provided as scatterlist
Horia Geanta [Thu, 2 Aug 2012 14:16:40 +0000 (17:16 +0300)]
crypto: talitos - support for assoc data provided as scatterlist

Generate a link table in case assoc data is a scatterlist.
While at it, add support for handling non-contiguous assoc data and iv.

Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: talitos - change type and name for [src|dst]_is_chained
Horia Geanta [Thu, 2 Aug 2012 14:16:39 +0000 (17:16 +0300)]
crypto: talitos - change type and name for [src|dst]_is_chained

It's more natural to think of these vars as bool rather than int.

Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: talitos - prune unneeded descriptor allocation param
Horia Geanta [Thu, 2 Aug 2012 14:16:38 +0000 (17:16 +0300)]
crypto: talitos - prune unneeded descriptor allocation param

talitos_edesc_alloc does not need hash_result param.
Checking whether dst scatterlist is NULL or not is all that is required.

Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: talitos - fix icv management on outbound direction
Horia Geanta [Thu, 2 Aug 2012 14:16:37 +0000 (17:16 +0300)]
crypto: talitos - fix icv management on outbound direction

For IPsec encryption, in the case when:
-the input buffer is fragmented (edesc->src_nents > 0)
-the output buffer is not fragmented (edesc->dst_nents = 0)
the ICV is not output in the link table, but after the encrypted payload.

Copying the ICV must be avoided in this case; consequently the condition
edesc->dma_len > 0 must be more specific, i.e. must depend on the type
of the output buffer - fragmented or not.

Testing was performed by modifying testmgr to support src != dst,
since currently native kernel IPsec does in-place encryption
(src == dst).

Signed-off-by: Horia Geanta <horia.geanta@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: talitos - consolidate common cra_* assignments
Kim Phillips [Thu, 9 Aug 2012 01:33:34 +0000 (20:33 -0500)]
crypto: talitos - consolidate common cra_* assignments

the entry points and geniv definitions for all aead,
ablkcipher, and hash algorithms are all common; move them to a
single assignment in talitos_alg_alloc().

This assumes it's ok to assign a setkey() on non-hmac algs.

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: talitos - consolidate cra_type assignments
Kim Phillips [Thu, 9 Aug 2012 01:32:00 +0000 (20:32 -0500)]
crypto: talitos - consolidate cra_type assignments

lighten driver_algs[] by moving them to talitos_alg_alloc().

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: atmel - Remove possible typo error
Tushar Behera [Tue, 7 Aug 2012 12:02:14 +0000 (17:32 +0530)]
crypto: atmel - Remove possible typo error

Commit bd3c7b5c2aba ("crypto: atmel - add Atmel AES driver") possibly
contains a typo that adds an extra CONFIG_ prefix.

CC: Nicolas Royer <nicolas@eukrea.com>
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  drivers/char/hw_random/octeon-rng.c: drop frees of devm allocated data
Julia Lawall [Sat, 4 Aug 2012 16:50:46 +0000 (18:50 +0200)]
drivers/char/hw_random/octeon-rng.c: drop frees of devm allocated data

devm_kfree and devm_iounmap should not have to be explicitly used.

The semantic patch that fixes this problem is as follows:
(http://coccinelle.lip6.fr/)

// <smpl>
@@
expression x,d;
@@

x = devm_kzalloc(...)
...
?-devm_kfree(d,x);
// </smpl>
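
For illustration, a minimal sketch of why the free is unneeded (names are
hypothetical): devm_kzalloc() memory is released automatically when the
device is unbound, so no devm_kfree() is required on error or remove paths.

#include <linux/device.h>
#include <linux/slab.h>

struct example_state {
	int id;
};

static int example_probe(struct device *dev)
{
	struct example_state *st;

	st = devm_kzalloc(dev, sizeof(*st), GFP_KERNEL);
	if (!st)
		return -ENOMEM;

	st->id = 1;
	dev_set_drvdata(dev, st);
	return 0;	/* no devm_kfree(dev, st) anywhere, on purpose */
}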

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: nx - Remove virt_to_abs() usage in nx-842.c
Michael Ellerman [Fri, 3 Aug 2012 02:23:15 +0000 (12:23 +1000)]
crypto: nx - Remove virt_to_abs() usage in nx-842.c

virt_to_abs() is just a wrapper around __pa(), use __pa() directly.

We should be including <asm/page.h> to get __pa(). abs_addr.h will be
removed shortly so drop that.

We were getting of.h via abs_addr.h so we need to include that directly.

Having done all that, clean up the ordering of the includes.
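
A one-line sketch of the substitution described above:

#include <asm/page.h>

static unsigned long example_phys(void *vaddr)
{
	return __pa(vaddr);	/* previously: virt_to_abs(vaddr) */
}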

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: aesni_intel - improve lrw and xts performance by utilizing parallel AES-NI...
Jussi Kivilinna [Sun, 22 Jul 2012 15:18:37 +0000 (18:18 +0300)]
crypto: aesni_intel - improve lrw and xts performance by utilizing parallel AES-NI hardware pipelines

Use parallel LRW and XTS encryption facilities to better utilize AES-NI
hardware pipelines and gain extra performance.

Tcrypt benchmark results (async), old vs new ratios:

Intel Core i5-2450M CPU (fam: 6, model: 42, step: 7)

aes:128bit
        lrw:256bit      xts:256bit
size    lrw-enc lrw-dec xts-enc xts-dec
16B     0.99x   1.00x   1.22x   1.19x
64B     1.38x   1.50x   1.58x   1.61x
256B    2.04x   2.02x   2.27x   2.29x
1024B   2.56x   2.54x   2.89x   2.92x
8192B   2.85x   2.99x   3.40x   3.23x

aes:192bit
        lrw:320bit      xts:384bit
size    lrw-enc lrw-dec xts-enc xts-dec
16B     1.08x   1.08x   1.16x   1.17x
64B     1.48x   1.54x   1.59x   1.65x
256B    2.18x   2.17x   2.29x   2.28x
1024B   2.67x   2.67x   2.87x   3.05x
8192B   2.93x   2.84x   3.28x   3.33x

aes:256bit
        lrw:384bit      xts:512bit
size    lrw-enc lrw-dec xts-enc xts-dec
16B     1.07x   1.07x   1.18x   1.19x
64B     1.56x   1.56x   1.70x   1.71x
256B    2.22x   2.24x   2.46x   2.46x
1024B   2.76x   2.77x   3.13x   3.05x
8192B   2.99x   3.05x   3.40x   3.30x

Cc: Huang Ying <ying.huang@intel.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Reviewed-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  powerpc/crypto: add 842 crypto driver
Seth Jennings [Thu, 19 Jul 2012 14:42:41 +0000 (09:42 -0500)]
powerpc/crypto: add 842 crypto driver

This patch adds the 842 cryptographic API driver that
submits compression requests to the 842 hardware compression
accelerator driver (nx-compress).

If the hardware accelerator goes offline for any reason
(dynamic disable, migration, etc...), this driver will use LZO
as a software failover for all future compression requests.
For decompression requests, the 842 hardware driver contains
a software implementation of the 842 decompressor to support
the decompression of data that was compressed before the accelerator
went offline.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  powerpc/crypto: add 842 hardware compression driver
Seth Jennings [Thu, 19 Jul 2012 14:42:40 +0000 (09:42 -0500)]
powerpc/crypto: add 842 hardware compression driver

This patch adds the driver for interacting with the 842
compression accelerator on IBM Power7+ systems.

The device is a child of the Platform Facilities Option (PFO)
and shows up as a child of the IBM VIO bus.

The compression/decompression API takes the same arguments
as existing compression methods like lzo and deflate.  The 842
hardware operates on 4K hardware pages and the driver breaks up
input on 4K boundaries to submit it to the hardware accelerator.

Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  powerpc/crypto: add compression support to arch vec
Seth Jennings [Thu, 19 Jul 2012 14:42:39 +0000 (09:42 -0500)]
powerpc/crypto: add compression support to arch vec

This patch enables compression engine support in the
architecture vector.  This causes the Power hypervisor
to allow access to the nx compression accelerator.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  powerpc/crypto: rework Kconfig
Seth Jennings [Thu, 19 Jul 2012 14:42:38 +0000 (09:42 -0500)]
powerpc/crypto: rework Kconfig

This patch creates a new submenu for the NX cryptographic
hardware accelerator and breaks the NX options into their own
Kconfig file under drivers/crypto/nx/Kconfig.

This will permit additional NX functionality to be easily
and more cleanly added in the future without touching
drivers/crypto/Makefile|Kconfig.

Signed-off-by: Seth Jennings <sjenning@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: caam - set descriptor sharing type to SERIAL
Kim Phillips [Fri, 13 Jul 2012 22:49:28 +0000 (17:49 -0500)]
crypto: caam - set descriptor sharing type to SERIAL

SHARE_WAIT, whilst more optimal for association-less crypto,
has the ability to start thrashing the CCB descriptor/key
caches, given high levels of traffic across multiple security
associations (and thus keys).

Switch to using the SERIAL sharing type, which prefers
the last used CCB for the SA.  On a 2-DECO platform
such as the P3041, this can improve performance by
about 3.7%.

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: caam - add backward compatible string sec4.0
Shengzhou Liu [Fri, 13 Jul 2012 22:49:21 +0000 (17:49 -0500)]
crypto: caam - add backward compatible string sec4.0

In some older device trees there was the string "fsl,sec4.0".
To be backward compatible with those device trees, we first check "fsl,sec-v4.0";
if that fails, we then check for "fsl,sec4.0".
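
A hedged sketch of that lookup order (not the driver's exact code):

#include <linux/of.h>

static struct device_node *find_sec_node(void)
{
	struct device_node *np;

	np = of_find_compatible_node(NULL, NULL, "fsl,sec-v4.0");
	if (!np)
		np = of_find_compatible_node(NULL, NULL, "fsl,sec4.0");

	return np;
}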

Signed-off-by: Shengzhou Liu <Shengzhou.Liu@freescale.com>
extended to include new hash and rng code, which was omitted from
the previous version of this patch during a rebase of the SDK
version.

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: caam - fix possible deadlock condition
Kim Phillips [Fri, 13 Jul 2012 23:04:23 +0000 (18:04 -0500)]
crypto: caam - fix possible deadlock condition

commit "crypto: caam - use non-irq versions of spinlocks for job rings"
made two bad assumptions:

(a) The caam_jr_enqueue lock isn't used in softirq context.
Not true: jr_enqueue can be interrupted by an incoming net
interrupt and the received packet may be sent for encryption,
via caam_jr_enqueue in softirq context, thereby inducing a
deadlock.

This is evidenced when running netperf over an IPSec tunnel
between two P4080's, with spinlock debugging turned on:

[  892.092569] BUG: spinlock lockup on CPU#7, netperf/10634, e8bf5f70
[  892.098747] Call Trace:
[  892.101197] [eff9fc10] [c00084c0] show_stack+0x48/0x15c (unreliable)
[  892.107563] [eff9fc50] [c0239c2c] do_raw_spin_lock+0x16c/0x174
[  892.113399] [eff9fc80] [c0596494] _raw_spin_lock+0x3c/0x50
[  892.118889] [eff9fc90] [c0445e74] caam_jr_enqueue+0xf8/0x250
[  892.124550] [eff9fcd0] [c044a644] aead_decrypt+0x6c/0xc8
[  892.129625] BUG: spinlock lockup on CPU#5, swapper/5/0, e8bf5f70
[  892.129629] Call Trace:
[  892.129637] [effa7c10] [c00084c0] show_stack+0x48/0x15c (unreliable)
[  892.129645] [effa7c50] [c0239c2c] do_raw_spin_lock+0x16c/0x174
[  892.129652] [effa7c80] [c0596494] _raw_spin_lock+0x3c/0x50
[  892.129660] [effa7c90] [c0445e74] caam_jr_enqueue+0xf8/0x250
[  892.129666] [effa7cd0] [c044a644] aead_decrypt+0x6c/0xc8
[  892.129674] [effa7d00] [c0509724] esp_input+0x178/0x334
[  892.129681] [effa7d50] [c0519778] xfrm_input+0x77c/0x818
[  892.129688] [effa7da0] [c050e344] xfrm4_rcv_encap+0x20/0x30
[  892.129697] [effa7db0] [c04b90c8] ip_local_deliver+0x190/0x408
[  892.129703] [effa7de0] [c04b966c] ip_rcv+0x32c/0x898
[  892.129709] [effa7e10] [c048b998] __netif_receive_skb+0x27c/0x4e8
[  892.129715] [effa7e80] [c048d744] netif_receive_skb+0x4c/0x13c
[  892.129726] [effa7eb0] [c03c28ac] _dpa_rx+0x1a8/0x354
[  892.129732] [effa7ef0] [c03c2ac4] ingress_rx_default_dqrr+0x6c/0x108
[  892.129742] [effa7f10] [c0467ae0] qman_poll_dqrr+0x170/0x1d4
[  892.129748] [effa7f40] [c03c153c] dpaa_eth_poll+0x20/0x94
[  892.129754] [effa7f60] [c048dbd0] net_rx_action+0x13c/0x1f4
[  892.129763] [effa7fa0] [c003d1b8] __do_softirq+0x108/0x1b0
[  892.129769] [effa7ff0] [c000df58] call_do_softirq+0x14/0x24
[  892.129775] [ebacfe70] [c0004868] do_softirq+0xd8/0x104
[  892.129780] [ebacfe90] [c003d5a4] irq_exit+0xb8/0xd8
[  892.129786] [ebacfea0] [c0004498] do_IRQ+0xa4/0x1b0
[  892.129792] [ebacfed0] [c000fad8] ret_from_except+0x0/0x18
[  892.129798] [ebacff90] [c0009010] cpu_idle+0x94/0xf0
[  892.129804] [ebacffb0] [c059ff88] start_secondary+0x42c/0x430
[  892.129809] [ebacfff0] [c0001e28] __secondary_start+0x30/0x84
[  892.281474]
[  892.282959] [eff9fd00] [c0509724] esp_input+0x178/0x334
[  892.288186] [eff9fd50] [c0519778] xfrm_input+0x77c/0x818
[  892.293499] [eff9fda0] [c050e344] xfrm4_rcv_encap+0x20/0x30
[  892.299074] [eff9fdb0] [c04b90c8] ip_local_deliver+0x190/0x408
[  892.304907] [eff9fde0] [c04b966c] ip_rcv+0x32c/0x898
[  892.309872] [eff9fe10] [c048b998] __netif_receive_skb+0x27c/0x4e8
[  892.315966] [eff9fe80] [c048d744] netif_receive_skb+0x4c/0x13c
[  892.321803] [eff9feb0] [c03c28ac] _dpa_rx+0x1a8/0x354
[  892.326855] [eff9fef0] [c03c2ac4] ingress_rx_default_dqrr+0x6c/0x108
[  892.333212] [eff9ff10] [c0467ae0] qman_poll_dqrr+0x170/0x1d4
[  892.338872] [eff9ff40] [c03c153c] dpaa_eth_poll+0x20/0x94
[  892.344271] [eff9ff60] [c048dbd0] net_rx_action+0x13c/0x1f4
[  892.349846] [eff9ffa0] [c003d1b8] __do_softirq+0x108/0x1b0
[  892.355338] [eff9fff0] [c000df58] call_do_softirq+0x14/0x24
[  892.360910] [e7169950] [c0004868] do_softirq+0xd8/0x104
[  892.366135] [e7169970] [c003d5a4] irq_exit+0xb8/0xd8
[  892.371101] [e7169980] [c0004498] do_IRQ+0xa4/0x1b0
[  892.375979] [e71699b0] [c000fad8] ret_from_except+0x0/0x18
[  892.381466] [e7169a70] [c0445e74] caam_jr_enqueue+0xf8/0x250
[  892.387127] [e7169ab0] [c044ad4c] aead_givencrypt+0x6ac/0xa70
[  892.392873] [e7169b20] [c050a0b8] esp_output+0x2b4/0x570
[  892.398186] [e7169b80] [c0519b9c] xfrm_output_resume+0x248/0x7c0
[  892.404194] [e7169bb0] [c050e89c] xfrm4_output_finish+0x18/0x28
[  892.410113] [e7169bc0] [c050e8f4] xfrm4_output+0x48/0x98
[  892.415427] [e7169bd0] [c04beac0] ip_local_out+0x48/0x98
[  892.420740] [e7169be0] [c04bec7c] ip_queue_xmit+0x16c/0x490
[  892.426314] [e7169c10] [c04d6128] tcp_transmit_skb+0x35c/0x9a4
[  892.432147] [e7169c70] [c04d6f98] tcp_write_xmit+0x200/0xa04
[  892.437808] [e7169cc0] [c04c8ccc] tcp_sendmsg+0x994/0xcec
[  892.443213] [e7169d40] [c04eebfc] inet_sendmsg+0xd0/0x164
[  892.448617] [e7169d70] [c04792f8] sock_sendmsg+0x8c/0xbc
[  892.453931] [e7169e40] [c047aecc] sys_sendto+0xc0/0xfc
[  892.459069] [e7169f10] [c047b934] sys_socketcall+0x110/0x25c
[  892.464729] [e7169f40] [c000f480] ret_from_syscall+0x0/0x3c

(b) since the caam_jr_dequeue lock is only used in bh context,
then semantically it should use _bh spin_lock types.  spin_lock_bh
semantics are to disable back-halves, and used when a lock is shared
between softirq (bh) context and process and/or h/w IRQ context.
Since the lock is only used within softirq context, and this tasklet
is atomic, there is no need to do the additional work to disable
back halves.

This patch adds back-half disabling protection to caam_jr_enqueue
spin_locks to fix (a), and drops it from caam_jr_dequeue to fix (b).
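
An illustrative sketch of the locking rules described above (structure and
names are placeholders, not the driver's actual code):

#include <linux/spinlock.h>

struct jr_example {
	spinlock_t inplock;	/* taken from process and softirq context */
	spinlock_t outlock;	/* only taken from the dequeue tasklet */
};

static void example_enqueue(struct jr_example *jr)
{
	spin_lock_bh(&jr->inplock);	/* BH off: a softirq enqueue cannot deadlock us */
	/* ... write the job ring entry ... */
	spin_unlock_bh(&jr->inplock);
}

static void example_dequeue(struct jr_example *jr)	/* tasklet (softirq) context */
{
	spin_lock(&jr->outlock);	/* plain lock suffices inside softirq */
	/* ... reap completed jobs ... */
	spin_unlock(&jr->outlock);
}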

Signed-off-by: Kim Phillips <kim.phillips@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cast6 - add x86_64/avx assembler implementation
Johannes Goetzfried [Wed, 11 Jul 2012 17:38:57 +0000 (19:38 +0200)]
crypto: cast6 - add x86_64/avx assembler implementation

This patch adds an x86_64/avx assembler implementation of the Cast6 block
cipher. The implementation processes eight blocks in parallel (two 4-block
chunk AVX operations). The table lookups are done in general-purpose registers.
For small block sizes the functions from the generic module are called. A good
performance increase is provided for block sizes greater than or equal to 128B.

Patch has been tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results:

Intel Core i5-2500 CPU (fam:6, model:42, step:7)

cast6-avx-x86_64 vs. cast6-generic
128bit key:                                             (lrw:256bit)    (xts:256bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     0.97x   1.00x   1.01x   1.01x   0.99x   0.97x   0.98x   1.01x   0.96x   0.98x
64B     0.98x   0.99x   1.02x   1.01x   0.99x   1.00x   1.01x   0.99x   1.00x   0.99x
256B    1.77x   1.84x   0.99x   1.85x   1.77x   1.77x   1.70x   1.74x   1.69x   1.72x
1024B   1.93x   1.95x   0.99x   1.96x   1.93x   1.93x   1.84x   1.85x   1.89x   1.87x
8192B   1.91x   1.95x   0.99x   1.97x   1.95x   1.91x   1.86x   1.87x   1.93x   1.90x

256bit key:                                             (lrw:384bit)    (xts:512bit)
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
16B     0.97x   0.99x   1.02x   1.01x   0.98x   0.99x   1.00x   1.00x   0.98x   0.98x
64B     0.98x   0.99x   1.01x   1.00x   1.00x   1.00x   1.01x   1.01x   0.97x   1.00x
256B    1.77x   1.83x   1.00x   1.86x   1.79x   1.78x   1.70x   1.76x   1.71x   1.69x
1024B   1.92x   1.95x   0.99x   1.96x   1.93x   1.93x   1.83x   1.86x   1.89x   1.87x
8192B   1.94x   1.95x   0.99x   1.97x   1.95x   1.95x   1.87x   1.87x   1.93x   1.91x

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: testmgr - add larger cast6 testvectors
Johannes Goetzfried [Wed, 11 Jul 2012 17:38:29 +0000 (19:38 +0200)]
crypto: testmgr - add larger cast6 testvectors

New ECB, CBC, CTR, LRW and XTS testvectors for cast6. We need larger
testvectors to check parallel code paths in the optimized implementation. Tests
have also been added to the tcrypt module.

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cast6 - prepare generic module for optimized implementations
Johannes Goetzfried [Wed, 11 Jul 2012 17:38:12 +0000 (19:38 +0200)]
crypto: cast6 - prepare generic module for optimized implementations

Rename cast6 module to cast6_generic to allow autoloading of optimized
implementations. Generic functions and s-boxes are exported to be able to use
them within optimized implementations.
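
A hedged sketch of what the rename typically entails (symbol names here are
illustrative, not the file's exact exports):

#include <linux/module.h>

int cast6_example_setkey(void *ctx, const u8 *key, unsigned int keylen)
{
	/* ... generic key schedule ... */
	return 0;
}
EXPORT_SYMBOL_GPL(cast6_example_setkey);	/* reusable from the AVX module */

MODULE_ALIAS("cast6");	/* keeps "cast6" resolvable after the rename to cast6_generic */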

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cast5 - add x86_64/avx assembler implementation
Johannes Goetzfried [Wed, 11 Jul 2012 17:37:37 +0000 (19:37 +0200)]
crypto: cast5 - add x86_64/avx assembler implementation

This patch adds an x86_64/avx assembler implementation of the Cast5 block
cipher. The implementation processes sixteen blocks in parallel (four 4-block
chunk AVX operations). The table lookups are done in general-purpose registers.
For small block sizes the functions from the generic module are called. A good
performance increase is provided for block sizes greater than or equal to 128B.

Patch has been tested with tcrypt and automated filesystem tests.

Tcrypt benchmark results:

Intel Core i5-2500 CPU (fam:6, model:42, step:7)

cast5-avx-x86_64 vs. cast5-generic
64bit key:
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec
16B     0.99x   0.99x   1.00x   1.00x   1.02x   1.01x
64B     1.00x   1.00x   0.98x   1.00x   1.01x   1.02x
256B    2.03x   2.01x   0.95x   2.11x   2.12x   2.13x
1024B   2.30x   2.24x   0.95x   2.29x   2.35x   2.35x
8192B   2.31x   2.27x   0.95x   2.31x   2.39x   2.39x

128bit key:
size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec
16B     0.99x   0.99x   1.00x   1.00x   1.01x   1.01x
64B     1.00x   1.00x   0.98x   1.01x   1.02x   1.01x
256B    2.17x   2.13x   0.96x   2.19x   2.19x   2.19x
1024B   2.29x   2.32x   0.95x   2.34x   2.37x   2.38x
8192B   2.35x   2.32x   0.95x   2.35x   2.39x   2.39x

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: testmgr - add larger cast5 testvectors
Johannes Goetzfried [Wed, 11 Jul 2012 17:37:21 +0000 (19:37 +0200)]
crypto: testmgr - add larger cast5 testvectors

New ECB, CBC and CTR testvectors for cast5. We need larger testvectors to check
parallel code paths in the optimized implementation. Tests have also been added
to the tcrypt module.

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cast5 - prepare generic module for optimized implementations
Johannes Goetzfried [Wed, 11 Jul 2012 17:37:04 +0000 (19:37 +0200)]
crypto: cast5 - prepare generic module for optimized implementations

Rename cast5 module to cast5_generic to allow autoloading of optimized
implementations. Generic functions and s-boxes are exported to be able to use
them within optimized implementations.

Signed-off-by: Johannes Goetzfried <Johannes.Goetzfried@informatik.stud.uni-erlangen.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: arch/s390 - cleanup - remove unneeded cra_list initialization
Jussi Kivilinna [Wed, 11 Jul 2012 11:21:01 +0000 (14:21 +0300)]
crypto: arch/s390 - cleanup - remove unneeded cra_list initialization

Initialization of cra_list is currently mixed, most ciphers initialize this
field and most shashes do not. Initialization however is not needed at all
since cra_list is initialized/overwritten in __crypto_register_alg() with
list_add(). Therefore perform cleanup to remove all unneeded initializations
of this field in 'arch/s390/crypto/'.
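
For illustration, the kind of line being removed (a hypothetical alg
definition; __crypto_register_alg() list_add()s the entry itself, so the
field can simply be left zeroed):

#include <linux/crypto.h>

static struct crypto_alg example_alg = {
	.cra_name	= "example",
	.cra_driver_name = "example-generic",
	.cra_blocksize	= 16,
	/* .cra_list	= LIST_HEAD_INIT(example_alg.cra_list),   <-- dropped */
};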

Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: drivers - remove cra_list initialization
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:56 +0000 (14:20 +0300)]
crypto: drivers - remove cra_list initialization

Initialization of cra_list is currently mixed, most ciphers initialize this
field and most shashes do not. Initialization however is not needed at all
since cra_list is initialized/overwritten in __crypto_register_alg() with
list_add(). Therefore perform cleanup to remove all unneeded initializations
of this field in 'drivers/crypto/'.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: linux-geode@lists.infradead.org
Cc: Michal Ludvig <michal@logix.cz>
Cc: Dmitry Kasatkin <dmitry.kasatkin@nokia.com>
Cc: Varun Wadekar <vwadekar@nvidia.com>
Cc: Eric Bénard <eric@eukrea.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Acked-by: Kent Yoder <key@linux.vnet.ibm.com>
Acked-by: Vladimir Zapolskiy <vladimir_zapolskiy@mentor.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: arch/x86 - cleanup - remove unneeded crypto_alg.cra_list initializations
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:51 +0000 (14:20 +0300)]
crypto: arch/x86 - cleanup - remove unneeded crypto_alg.cra_list initializations

Initialization of cra_list is currently mixed, most ciphers initialize this
field and most shashes do not. Initialization however is not needed at all
since cra_list is initialized/overwritten in __crypto_register_alg() with
list_add(). Therefore perform cleanup to remove all unneeded initializations
of this field in 'arch/x86/crypto/'.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: cleanup - remove unneeded crypto_alg.cra_list initializations
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:46 +0000 (14:20 +0300)]
crypto: cleanup - remove unneeded crypto_alg.cra_list initializations

Initialization of cra_list is currently mixed, most ciphers initialize this
field and most shashes do not. Initialization however is not needed at all
since cra_list is initialized/overwritten in __crypto_register_alg() with
list_add(). Therefore perform cleanup to remove all unneeded initializations
of this field in 'crypto/'.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: whirlpool - use crypto_[un]register_shashes
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:41 +0000 (14:20 +0300)]
crypto: whirlpool - use crypto_[un]register_shashes

Combine all shash algs to be registered and use new crypto_[un]register_shashes
functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: sha512 - use crypto_[un]register_shashes
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:36 +0000 (14:20 +0300)]
crypto: sha512 - use crypto_[un]register_shashes

Combine all shash algs to be registered and use new crypto_[un]register_shashes
functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: sha256 - use crypto_[un]register_shashes
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:30 +0000 (14:20 +0300)]
crypto: sha256 - use crypto_[un]register_shashes

Combine all shash algs to be registered and use new crypto_[un]register_shashes
functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: tiger - use crypto_[un]register_shashes
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:25 +0000 (14:20 +0300)]
crypto: tiger - use crypto_[un]register_shashes

Combine all shash algs to be registered and use new crypto_[un]register_shashes
functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: add crypto_[un]register_shashes for [un]registering multiple shash entries...
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:20 +0000 (14:20 +0300)]
crypto: add crypto_[un]register_shashes for [un]registering multiple shash entries at once

Add crypto_[un]register_shashes() to allow simplifying init/exit code of shash
crypto modules that register multiple algorithms.
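
A minimal usage sketch under the assumption of a module that provides several
shash algorithms (array contents omitted):

#include <linux/module.h>
#include <crypto/internal/hash.h>

static struct shash_alg example_algs[2];	/* filled in elsewhere */

static int __init example_mod_init(void)
{
	return crypto_register_shashes(example_algs, ARRAY_SIZE(example_algs));
}

static void __exit example_mod_fini(void)
{
	crypto_unregister_shashes(example_algs, ARRAY_SIZE(example_algs));
}

module_init(example_mod_init);
module_exit(example_mod_fini);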

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: ansi_cprng - use crypto_[un]register_algs
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:15 +0000 (14:20 +0300)]
crypto: ansi_cprng - use crypto_[un]register_algs

Combine all crypto_alg to be registered and use new crypto_[un]register_algs
functions. This simplifies init/exit code.

Cc: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: serpent - use crypto_[un]register_algs
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:10 +0000 (14:20 +0300)]
crypto: serpent - use crypto_[un]register_algs

Combine all crypto_alg to be registered and use new crypto_[un]register_algs
functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: des - use crypto_[un]register_algs
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:05 +0000 (14:20 +0300)]
crypto: des - use crypto_[un]register_algs

Combine all crypto_alg to be registered and use new crypto_[un]register_algs
functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: crypto_null - use crypto_[un]register_algs
Jussi Kivilinna [Wed, 11 Jul 2012 11:20:00 +0000 (14:20 +0300)]
crypto: crypto_null - use crypto_[un]register_algs

Combine all crypto_alg to be registered and use new crypto_[un]register_algs
functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  crypto: tea - use crypto_[un]register_algs
Jussi Kivilinna [Wed, 11 Jul 2012 11:19:55 +0000 (14:19 +0300)]
crypto: tea - use crypto_[un]register_algs

Combine all crypto_alg to be registered and use new crypto_[un]register_algs
functions. This simplifies init/exit code.

Signed-off-by: Jussi Kivilinna <jussi.kivilinna@mbnet.fi>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
12 years ago  Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6
Linus Torvalds [Wed, 1 Aug 2012 03:44:03 +0000 (20:44 -0700)]
Merge tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6

Pull irqdomain changes from Grant Likely:
 "Round of refactoring and enhancements to irq_domain infrastructure.
  This series starts the process of simplifying irqdomain.  The ultimate
  goal is to merge LEGACY, LINEAR and TREE mappings into a single
  system, but had to back off from that after some last minute bugs.
  Instead it mainly reorganizes the code and ensures that the reverse
  map gets populated when the irq is mapped instead of the first time it
  is looked up.

  Merging of the irq_domain types is deferred to v3.7

  In other news, this series adds helpers for creating static mappings
  on a linear or tree mapping."

* tag 'irqdomain-for-linus' of git://git.secretlab.ca/git/linux-2.6:
  irqdomain: Improve diagnostics when a domain mapping fails
  irqdomain: eliminate slow-path revmap lookups
  irqdomain: Fix irq_create_direct_mapping() to test irq_domain type.
  irqdomain: Eliminate dedicated radix lookup functions
  irqdomain: Support for static IRQ mapping and association.
  irqdomain: Always update revmap when setting up a virq
  irqdomain: Split disassociating code into separate function
  irq_domain: correct a minor wrong comment for linear revmap
  irq_domain: Standardise legacy/linear domain selection
  irqdomain: Make ops->map hook optional
  irqdomain: Remove unnecessary test for IRQ_DOMAIN_MAP_LEGACY
  irqdomain: Simple NUMA awareness.
  devicetree: add helper inline for retrieving a node's full name

12 years ago  Merge branch 'akpm' (Andrew's patch-bomb)
Linus Torvalds [Wed, 1 Aug 2012 02:25:39 +0000 (19:25 -0700)]
Merge branch 'akpm' (Andrew's patch-bomb)

Merge Andrew's second set of patches:
 - MM
 - a few random fixes
 - a couple of RTC leftovers

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (120 commits)
  rtc/rtc-88pm80x: remove unneed devm_kfree
  rtc/rtc-88pm80x: assign ret only when rtc_register_driver fails
  mm: hugetlbfs: close race during teardown of hugetlbfs shared page tables
  tmpfs: distribute interleave better across nodes
  mm: remove redundant initialization
  mm: warn if pg_data_t isn't initialized with zero
  mips: zero out pg_data_t when it's allocated
  memcg: gix memory accounting scalability in shrink_page_list
  mm/sparse: remove index_init_lock
  mm/sparse: more checks on mem_section number
  mm/sparse: optimize sparse_index_alloc
  memcg: add mem_cgroup_from_css() helper
  memcg: further prevent OOM with too many dirty pages
  memcg: prevent OOM with too many dirty pages
  mm: mmu_notifier: fix freed page still mapped in secondary MMU
  mm: memcg: only check anon swapin page charges for swap cache
  mm: memcg: only check swap cache pages for repeated charging
  mm: memcg: split swapin charge function into private and public part
  mm: memcg: remove needless !mm fixup to init_mm when charging
  mm: memcg: remove unneeded shmem charge type
  ...

12 years ago  Merge tag 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio
Linus Torvalds [Wed, 1 Aug 2012 02:17:27 +0000 (19:17 -0700)]
Merge tag 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio

Pull VFIO core from Alex Williamson:
 "This series includes the VFIO userspace driver interface for the 3.6
  kernel merge window.  This driver is intended to provide a secure
  interface for device access using IOMMU protection for applications
  like assignment of physical devices to virtual machines.

  Qemu will be the first user of this interface, enabling assignment of
  PCI devices to Qemu guests.  This interface is intended to eventually
  replace the x86-specific assignment mechanism currently available in
  KVM.

  This interface has the advantage of being more secure, by working with
  IOMMU groups to ensure device isolation and providing it's own
  filtered resource access mechanism, and also more flexible, in not
  being x86 or KVM specific (extensions to enable POWER are already
  working).

  This driver is originally the work of Tom Lyon, but has since been
  handed over to me and gone through a complete overhaul thanks to the
  input from David Gibson, Ben Herrenschmidt, Chris Wright, Joerg
  Roedel, and others.  This driver has been available in linux-next for
  the last month."

Paul Mackerras says:
 "I would be glad to see it go in since we want to use it with KVM on
  PowerPC.  If possible we'd like the PowerPC bits for it to go in as
  well."

* tag 'vfio-for-v3.6' of git://github.com/awilliam/linux-vfio:
  vfio: Add PCI device driver
  vfio: Type1 IOMMU implementation
  vfio: Add documentation
  vfio: VFIO core

12 years ago  Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso...
Linus Torvalds [Wed, 1 Aug 2012 02:07:42 +0000 (19:07 -0700)]
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random

Pull random subsystem patches from Ted Ts'o:
 "This patch series contains a major revamp of how we collect entropy
  from interrupts for /dev/random and /dev/urandom.

  The goal is to addresses weaknesses discussed in the paper "Mining
  your Ps and Qs: Detection of Widespread Weak Keys in Network Devices",
  by Nadia Heninger, Zakir Durumeric, Eric Wustrow, J.  Alex Halderman,
  which will be published in the Proceedings of the 21st Usenix Security
  Symposium, August 2012.  (See https://factorable.net for more
  information and an extended version of the paper.)"

Fix up trivial conflicts due to nearby changes in
drivers/{mfd/ab3100-core.c, usb/gadget/omap_udc.c}

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random: (33 commits)
  random: mix in architectural randomness in extract_buf()
  dmi: Feed DMI table to /dev/random driver
  random: Add comment to random_initialize()
  random: final removal of IRQF_SAMPLE_RANDOM
  um: remove IRQF_SAMPLE_RANDOM which is now a no-op
  sparc/ldc: remove IRQF_SAMPLE_RANDOM which is now a no-op
  [ARM] pxa: remove IRQF_SAMPLE_RANDOM which is now a no-op
  board-palmz71: remove IRQF_SAMPLE_RANDOM which is now a no-op
  isp1301_omap: remove IRQF_SAMPLE_RANDOM which is now a no-op
  pxa25x_udc: remove IRQF_SAMPLE_RANDOM which is now a no-op
  omap_udc: remove IRQF_SAMPLE_RANDOM which is now a no-op
  goku_udc: remove IRQF_SAMPLE_RANDOM which was commented out
  uartlite: remove IRQF_SAMPLE_RANDOM which is now a no-op
  drivers: hv: remove IRQF_SAMPLE_RANDOM which is now a no-op
  xen-blkfront: remove IRQF_SAMPLE_RANDOM which is now a no-op
  n2_crypto: remove IRQF_SAMPLE_RANDOM which is now a no-op
  pda_power: remove IRQF_SAMPLE_RANDOM which is now a no-op
  i2c-pmcmsp: remove IRQF_SAMPLE_RANDOM which is now a no-op
  input/serio/hp_sdc.c: remove IRQF_SAMPLE_RANDOM which is now a no-op
  mfd: remove IRQF_SAMPLE_RANDOM which is now a no-op
  ...

12 years ago  Merge tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland...
Linus Torvalds [Wed, 1 Aug 2012 01:59:55 +0000 (18:59 -0700)]
Merge tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband

Pull final RDMA changes from Roland Dreier:
 - Fix IPoIB to stop using unsafe linkage between networking neighbour
   layer and private path database.
 - Small fixes for bugs found by Fengguang Wu's automated builds.

* tag 'rdma-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband:
  IPoIB: Use a private hash table for path lookup in xmit path
  IB/qib: Fix size of cc_supported_table_entries
  RDMA/ucma: Convert open-coded equivalent to memdup_user()
  RDMA/ocrdma: Fix check of GSI CQs
  RDMA/cma: Use PTR_RET rather than if (IS_ERR(...)) + PTR_ERR

12 years ago  Merge branch 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab...
Linus Torvalds [Wed, 1 Aug 2012 01:47:44 +0000 (18:47 -0700)]
Merge branch 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media

Pull second set of media updates from Mauro Carvalho Chehab:

 - radio API: add support to work with radio frequency bands

 - new AM/FM radio drivers: radio-shark, radio-shark2

 - new Remote Controller USB driver: iguanair

 - conversion of several drivers to the v4l2 core control framework

 - new board additions at existing drivers

 - the remaining (and vast majority of the patches) are due to
   drivers/DocBook fixes/cleanups.

* 'v4l_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media: (154 commits)
  [media] radio-tea5777: use library for 64bits div
  [media] tlg2300: Declare MODULE_FIRMWARE usage
  [media] lgs8gxx: Declare MODULE_FIRMWARE usage
  [media] xc5000: Add MODULE_FIRMWARE statements
  [media] s2255drv: Add MODULE_FIRMWARE statement
  [media] dib8000: move dereference after check for NULL
  [media] Documentation: Update cardlists
  [media] bttv: add support for Aposonic W-DVR
  [media] cx25821: Remove bad strcpy to read-only char*
  [media] pms.c: remove duplicated include
  [media] smiapp-core.c: remove duplicated include
  [media] via-camera: pass correct format settings to sensor
  [media] rtl2832.c: minor cleanup
  [media] Add support for the IguanaWorks USB IR Transceiver
  [media] Minor cleanups for MCE USB
  [media] drivers/media/dvb/siano/smscoreapi.c: use list_for_each_entry
  [media] Use a named union in struct v4l2_ioctl_info
  [media] mceusb: Add Twisted Melon USB IDs
  [media] staging/media/solo6x10: use module_pci_driver macro
  [media] staging/media/dt3155v4l: use module_pci_driver macro
  ...

Conflicts:
Documentation/feature-removal-schedule.txt

12 years agoMerge tag 'nfs-for-3.6-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Linus Torvalds [Wed, 1 Aug 2012 01:45:44 +0000 (18:45 -0700)]
Merge tag 'nfs-for-3.6-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs

Pull second wave of NFS client updates from Trond Myklebust:

 - Patches from Bryan to allow splitting of the NFSv2/v3/v4 code into
   separate modules.

 - Fix Oopses in the NFSv4 idmapper

 - Fix a deadlock whereby rpciod tries to allocate a new socket and ends
   up recursing into the NFS code due to memory reclaim.

 - Increase the number of permitted callback connections.

* tag 'nfs-for-3.6-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
  nfs: explicitly reject LOCK_MAND flock() requests
  nfs: increase number of permitted callback connections.
  SUNRPC: return negative value in case rpcbind client creation error
  NFS: Convert v4 into a module
  NFS: Convert v3 into a module
  NFS: Convert v2 into a module
  NFS: Keep module parameters in the generic NFS client
  NFS: Split out remaining NFS v4 inode functions
  NFS: Pass super operations and xattr handlers in the nfs_subversion
  NFS: Only initialize the ACL client in the v3 case
  NFS: Create a try_mount rpc op
  NFS: Remove the NFS v4 xdev mount function
  NFS: Add version registering framework
  NFS: Fix a number of bugs in the idmapper
  nfs: skip commit in releasepage if we're freeing memory for fs-related reasons
  sunrpc: clarify comments on rpc_make_runnable
  pnfsblock: bail out partial page IO

12 years agoMerge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Linus Torvalds [Wed, 1 Aug 2012 01:43:13 +0000 (18:43 -0700)]
Merge git://git./linux/kernel/git/davem/net

Pull networking update from David S. Miller:
 "I think Eric Dumazet and I have dealt with all of the known routing
  cache removal fallout.  Some other minor fixes all around.

  1) Fix RCU of cached routes, particular of output routes which require
     liberation via call_rcu() instead of call_rcu_bh().  From Eric
     Dumazet.

  2) Make sure we purge net device references in cached routes properly.

  3) TG3 driver bug fixes from Michael Chan.

  4) Fix reported 'expires' value in ipv6 routes, from Li Wei.

  5) TUN driver ioctl leaks kernel bytes to userspace, from Mathias
     Krause."

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (22 commits)
  ipv4: Properly purge netdev references on uncached routes.
  ipv4: Cache routes in nexthop exception entries.
  ipv4: percpu nh_rth_output cache
  ipv4: Restore old dst_free() behavior.
  bridge: make port attributes const
  ipv4: remove rt_cache_rebuild_count
  net: ipv4: fix RCU races on dst refcounts
  net: TCP early demux cleanup
  tun: Fix formatting.
  net/tun: fix ioctl() based info leaks
  tg3: Update version to 3.124
  tg3: Fix race condition in tg3_get_stats64()
  tg3: Add New 5719 Read DMA workaround
  tg3: Fix Read DMA workaround for 5719 A0.
  tg3: Request APE_LOCK_PHY before PHY access
  ipv6: fix incorrect route 'expires' value passed to userspace
  mISDN: Bugfix only few bytes are transfered on a connection
  seeq: use PTR_RET at init_module of driver
  bnx2x: remove cast around the kmalloc in bnx2x_prev_mark_path
  ipv4: clean up put_child
  ...

12 years agortc/rtc-88pm80x: remove unneeded devm_kfree
Devendra Naga [Tue, 31 Jul 2012 23:46:24 +0000 (16:46 -0700)]
rtc/rtc-88pm80x: remove unneeded devm_kfree

devm_kzalloc() doesn't need a matching devm_kfree(); the managed allocation
is freed automatically by the devres core when the driver unloads.
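
A minimal, hypothetical sketch of the pattern (struct example_priv,
example_probe() and example_remove() are made-up names, not from this driver):

#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct example_priv {
	int state;
};

static int example_probe(struct platform_device *pdev)
{
	struct example_priv *priv;

	/* managed allocation: released by the devres core on unbind */
	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	platform_set_drvdata(pdev, priv);
	return 0;
}

static int example_remove(struct platform_device *pdev)
{
	/* note: no devm_kfree() needed here */
	return 0;
}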

Signed-off-by: Devendra Naga <devendra.aaru@gmail.com>
Cc: Alessandro Zummo <a.zummo@towertech.it>
Cc: Ashish Jangam <ashish.jangam@kpitcummins.com>
Cc: David Dajun Chen <dchen@diasemi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agortc/rtc-88pm80x: assign ret only when rtc_register_driver fails
Devendra Naga [Tue, 31 Jul 2012 23:46:22 +0000 (16:46 -0700)]
rtc/rtc-88pm80x: assign ret only when rtc_register_driver fails

In probe, ret was unconditionally assigned the return value of PTR_ERR()
right after rtc_register_driver().  Move that assignment into the
if (IS_ERR(...)) branch, where it belongs, since PTR_ERR() is only
meaningful when the registration has actually failed.
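
A hedged sketch of the resulting error path (the probe fragment and the ops
name pm80x_rtc_ops are illustrative, not copied from the driver):

static int pm80x_rtc_probe_sketch(struct platform_device *pdev)
{
	struct rtc_device *rtc;
	int ret = 0;

	/* pm80x_rtc_ops is assumed to be defined elsewhere in the driver */
	rtc = rtc_device_register("88pm80x-rtc", &pdev->dev,
				  &pm80x_rtc_ops, THIS_MODULE);
	if (IS_ERR(rtc)) {
		/* assign ret only on failure; PTR_ERR() is meaningless otherwise */
		ret = PTR_ERR(rtc);
		dev_err(&pdev->dev, "failed to register RTC device: %d\n", ret);
	}
	return ret;
}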

Signed-off-by: Devendra Naga <devendra.aaru@gmail.com>
Cc: Alessandro Zummo <a.zummo@towertech.it>
Cc: Ashish Jangam <ashish.jangam@kpitcummins.com>
Cc: David Dajun Chen <dchen@diasemi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: hugetlbfs: close race during teardown of hugetlbfs shared page tables
Mel Gorman [Tue, 31 Jul 2012 23:46:20 +0000 (16:46 -0700)]
mm: hugetlbfs: close race during teardown of hugetlbfs shared page tables

If a process creates a large hugetlbfs mapping that is eligible for page
table sharing and forks heavily with children some of whom fault and
others which destroy the mapping then it is possible for page tables to
get corrupted.  Some teardowns of the mapping encounter a "bad pmd" and
output a message to the kernel log.  The final teardown will trigger a
BUG_ON in mm/filemap.c.

This was reproduced in 3.4 but is known to have existed for a long time
and goes back at least as far as 2.6.37.  It was probably introduced
in 2.6.20 by [39dde65c: shared page table for hugetlb page].  The messages
look like this:

[  ..........] Lots of bad pmd messages followed by this
[  127.164256] mm/memory.c:391: bad pmd ffff880412e04fe8(80000003de4000e7).
[  127.164257] mm/memory.c:391: bad pmd ffff880412e04ff0(80000003de6000e7).
[  127.164258] mm/memory.c:391: bad pmd ffff880412e04ff8(80000003de0000e7).
[  127.186778] ------------[ cut here ]------------
[  127.186781] kernel BUG at mm/filemap.c:134!
[  127.186782] invalid opcode: 0000 [#1] SMP
[  127.186783] CPU 7
[  127.186784] Modules linked in: af_packet cpufreq_conservative cpufreq_userspace cpufreq_powersave acpi_cpufreq mperf ext3 jbd dm_mod coretemp crc32c_intel usb_storage ghash_clmulni_intel aesni_intel i2c_i801 r8169 mii uas sr_mod cdrom sg iTCO_wdt iTCO_vendor_support shpchp serio_raw cryptd aes_x86_64 e1000e pci_hotplug dcdbas aes_generic container microcode ext4 mbcache jbd2 crc16 sd_mod crc_t10dif i915 drm_kms_helper drm i2c_algo_bit ehci_hcd ahci libahci usbcore rtc_cmos usb_common button i2c_core intel_agp video intel_gtt fan processor thermal thermal_sys hwmon ata_generic pata_atiixp libata scsi_mod
[  127.186801]
[  127.186802] Pid: 9017, comm: hugetlbfs-test Not tainted 3.4.0-autobuild #53 Dell Inc. OptiPlex 990/06D7TR
[  127.186804] RIP: 0010:[<ffffffff810ed6ce>]  [<ffffffff810ed6ce>] __delete_from_page_cache+0x15e/0x160
[  127.186809] RSP: 0000:ffff8804144b5c08  EFLAGS: 00010002
[  127.186810] RAX: 0000000000000001 RBX: ffffea000a5c9000 RCX: 00000000ffffffc0
[  127.186811] RDX: 0000000000000000 RSI: 0000000000000009 RDI: ffff88042dfdad00
[  127.186812] RBP: ffff8804144b5c18 R08: 0000000000000009 R09: 0000000000000003
[  127.186813] R10: 0000000000000000 R11: 000000000000002d R12: ffff880412ff83d8
[  127.186814] R13: ffff880412ff83d8 R14: 0000000000000000 R15: ffff880412ff83d8
[  127.186815] FS:  00007fe18ed2c700(0000) GS:ffff88042dce0000(0000) knlGS:0000000000000000
[  127.186816] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[  127.186817] CR2: 00007fe340000503 CR3: 0000000417a14000 CR4: 00000000000407e0
[  127.186818] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  127.186819] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  127.186820] Process hugetlbfs-test (pid: 9017, threadinfo ffff8804144b4000, task ffff880417f803c0)
[  127.186821] Stack:
[  127.186822]  ffffea000a5c9000 0000000000000000 ffff8804144b5c48 ffffffff810ed83b
[  127.186824]  ffff8804144b5c48 000000000000138a 0000000000001387 ffff8804144b5c98
[  127.186825]  ffff8804144b5d48 ffffffff811bc925 ffff8804144b5cb8 0000000000000000
[  127.186827] Call Trace:
[  127.186829]  [<ffffffff810ed83b>] delete_from_page_cache+0x3b/0x80
[  127.186832]  [<ffffffff811bc925>] truncate_hugepages+0x115/0x220
[  127.186834]  [<ffffffff811bca43>] hugetlbfs_evict_inode+0x13/0x30
[  127.186837]  [<ffffffff811655c7>] evict+0xa7/0x1b0
[  127.186839]  [<ffffffff811657a3>] iput_final+0xd3/0x1f0
[  127.186840]  [<ffffffff811658f9>] iput+0x39/0x50
[  127.186842]  [<ffffffff81162708>] d_kill+0xf8/0x130
[  127.186843]  [<ffffffff81162812>] dput+0xd2/0x1a0
[  127.186845]  [<ffffffff8114e2d0>] __fput+0x170/0x230
[  127.186848]  [<ffffffff81236e0e>] ? rb_erase+0xce/0x150
[  127.186849]  [<ffffffff8114e3ad>] fput+0x1d/0x30
[  127.186851]  [<ffffffff81117db7>] remove_vma+0x37/0x80
[  127.186853]  [<ffffffff81119182>] do_munmap+0x2d2/0x360
[  127.186855]  [<ffffffff811cc639>] sys_shmdt+0xc9/0x170
[  127.186857]  [<ffffffff81410a39>] system_call_fastpath+0x16/0x1b
[  127.186858] Code: 0f 1f 44 00 00 48 8b 43 08 48 8b 00 48 8b 40 28 8b b0 40 03 00 00 85 f6 0f 88 df fe ff ff 48 89 df e8 e7 cb 05 00 e9 d2 fe ff ff <0f> 0b 55 83 e2 fd 48 89 e5 48 83 ec 30 48 89 5d d8 4c 89 65 e0
[  127.186868] RIP  [<ffffffff810ed6ce>] __delete_from_page_cache+0x15e/0x160
[  127.186870]  RSP <ffff8804144b5c08>
[  127.186871] ---[ end trace 7cbac5d1db69f426 ]---

The bug is a race and not always easy to reproduce.  To reproduce it I was
doing the following on a single socket I7-based machine with 16G of RAM.

$ hugeadm --pool-pages-max DEFAULT:13G
$ echo $((18*1048576*1024)) > /proc/sys/kernel/shmmax
$ echo $((18*1048576*1024)) > /proc/sys/kernel/shmall
$ for i in `seq 1 9000`; do ./hugetlbfs-test; done

On my particular machine, it usually triggers within 10 minutes but
enabling debug options can change the timing such that it never hits.
Once the bug is triggered, the machine is in trouble and needs to be
rebooted.  The machine will respond but processes accessing proc like "ps
aux" will hang due to the BUG_ON.  shutdown will also hang and needs a
hard reset or a sysrq-b.

The basic problem is a race between page table sharing and teardown.  For
the most part page table sharing depends on i_mmap_mutex.  In some cases,
it is also taking the mm->page_table_lock for the PTE updates but with
shared page tables, it is the i_mmap_mutex that is more important.

Unfortunately it also appears to be insufficient.  Consider the following
situation:

Process A                                     Process B
---------                                     ---------
hugetlb_fault                                 shmdt
                                              LockWrite(mmap_sem)
                                                do_munmap
                                                  unmap_region
                                                    unmap_vmas
                                                      unmap_single_vma
                                                        unmap_hugepage_range
                                                          Lock(i_mmap_mutex)
                                                          Lock(mm->page_table_lock)
                                                          huge_pmd_unshare/unmap tables <--- (1)
                                                          Unlock(mm->page_table_lock)
                                                          Unlock(i_mmap_mutex)
  huge_pte_alloc                                      ...
    Lock(i_mmap_mutex)                                ...
    vma_prio_walk, find svma, spte                    ...
    Lock(mm->page_table_lock)                         ...
    share spte                                        ...
    Unlock(mm->page_table_lock)                       ...
    Unlock(i_mmap_mutex)                              ...
  hugetlb_no_page                               <--- (2)
                                              free_pgtables
                                                unlink_file_vma
                                                hugetlb_free_pgd_range
                                              remove_vma_list

In this scenario, it is possible for Process A to share page tables with
Process B that is trying to tear them down.  The i_mmap_mutex on its own
does not prevent Process A walking Process B's page tables.  At (1) above,
the page tables are not shared yet so it unmaps the PMDs.  Process A sets
up page table sharing and at (2) faults a new entry.  Process B then trips
up on it in free_pgtables.

This patch fixes the problem by adding a new function
__unmap_hugepage_range_final that is only called when the VMA is about to
be destroyed.  This function clears VM_MAYSHARE during
unmap_hugepage_range() under the i_mmap_mutex.  This makes the VMA
ineligible for sharing and avoids the race.  Superficially this looks like
it would then be vulnerable to truncate and madvise issues but hugetlbfs
has its own truncate handlers so does not use unmap_mapping_range() and
does not support madvise(DONTNEED).
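
A simplified sketch of the new helper, following the description above (not
the exact mm/hugetlb.c hunk; the i_mmap_mutex is held by the caller as
described):

void __unmap_hugepage_range_final(struct mmu_gather *tlb,
				  struct vm_area_struct *vma,
				  unsigned long start, unsigned long end,
				  struct page *ref_page)
{
	__unmap_hugepage_range(tlb, vma, start, end, ref_page);

	/*
	 * The VMA is about to be destroyed; clearing VM_MAYSHARE makes it
	 * ineligible for pmd sharing, so a concurrent fault can no longer
	 * repopulate the page tables that are being torn down.
	 */
	vma->vm_flags &= ~VM_MAYSHARE;
}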

This should be treated as a -stable candidate if it is merged.

Test program is as follows. The test case was mostly written by Michal
Hocko with a few minor changes to reproduce this bug.

==== CUT HERE ====

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

static size_t huge_page_size = (2UL << 20);
static size_t nr_huge_page_A = 512;
static size_t nr_huge_page_B = 5632;

unsigned int get_random(unsigned int max)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	srandom(tv.tv_usec);
	return random() % max;
}

static void play(void *addr, size_t size)
{
	unsigned char *start = addr,
		      *end = start + size,
		      *a;
	start += get_random(size/2);

	/* we could iterate on huge pages but let's give it more time. */
	for (a = start; a < end; a += 4096)
		*a = 0;
}

int main(int argc, char **argv)
{
	key_t key = IPC_PRIVATE;
	size_t sizeA = nr_huge_page_A * huge_page_size;
	size_t sizeB = nr_huge_page_B * huge_page_size;
	int shmidA, shmidB;
	void *addrA = NULL, *addrB = NULL;
	int nr_children = 300, n = 0;

	if ((shmidA = shmget(key, sizeA, IPC_CREAT|SHM_HUGETLB|0660)) == -1) {
		perror("shmget:");
		return 1;
	}

	if ((addrA = shmat(shmidA, addrA, SHM_R|SHM_W)) == (void *)-1UL) {
		perror("shmat");
		return 1;
	}
	if ((shmidB = shmget(key, sizeB, IPC_CREAT|SHM_HUGETLB|0660)) == -1) {
		perror("shmget:");
		return 1;
	}

	if ((addrB = shmat(shmidB, addrB, SHM_R|SHM_W)) == (void *)-1UL) {
		perror("shmat");
		return 1;
	}

fork_child:
	switch (fork()) {
	case 0:
		switch (n%3) {
		case 0:
			play(addrA, sizeA);
			break;
		case 1:
			play(addrB, sizeB);
			break;
		case 2:
			break;
		}
		break;
	case -1:
		perror("fork:");
		break;
	default:
		if (++n < nr_children)
			goto fork_child;
		play(addrA, sizeA);
		break;
	}
	shmdt(addrA);
	shmdt(addrB);
	do {
		wait(NULL);
	} while (--n > 0);
	shmctl(shmidA, IPC_RMID, NULL);
	shmctl(shmidB, IPC_RMID, NULL);
	return 0;
}

[akpm@linux-foundation.org: name the declaration's args, fix CONFIG_HUGETLBFS=n build]
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agotmpfs: distribute interleave better across nodes
Nathan Zimmer [Tue, 31 Jul 2012 23:46:17 +0000 (16:46 -0700)]
tmpfs: distribute interleave better across nodes

When tmpfs has the interleave memory policy, it always starts allocating
for each file from node 0 at offset 0.  When there are many small files,
the lower nodes fill up disproportionately.

This patch spreads out node usage by using the inode number to bias each
file's starting node for interleave, so files no longer all begin at node 0.
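
A conceptual sketch of the biasing idea only (function and parameter names
are invented; the real change lives in the shmem mempolicy code):

/* bias the interleave offset by the inode number so that many small
 * files do not all start allocating on node 0 */
static unsigned long shmem_interleave_start(unsigned long ino, pgoff_t index,
					    unsigned int nr_nodes)
{
	return (ino + index) % nr_nodes;
}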

Signed-off-by: Nathan Zimmer <nzimmer@sgi.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Nick Piggin <npiggin@gmail.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: remove redundant initialization
Minchan Kim [Tue, 31 Jul 2012 23:46:16 +0000 (16:46 -0700)]
mm: remove redundant initialization

pg_data_t is zeroed before reaching free_area_init_core(), so remove the
now unnecessary initializations.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: warn if pg_data_t isn't initialized with zero
Minchan Kim [Tue, 31 Jul 2012 23:46:14 +0000 (16:46 -0700)]
mm: warn if pg_data_t isn't initialized with zero

Warn if memory-hotplug/boot code doesn't initialize pg_data_t with zero
when it is allocated.  Arch code and memory hotplug already initialize
pg_data_t.  So this warning should never happen.  I select fields randomly
near the beginning, middle and end of pg_data_t for checking.

This patch isn't about performance; it removes initialization code that
would otherwise have to be added whenever a new field is added to
pg_data_t or zone.

Andrew originally suggested clearing pg_data_t in the MM core, but Tejun
objected: in the future some architectures may initialize certain fields
in arch code and pass the structure into the generic MM code, so blindly
clearing it there would be very annoying.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomips: zero out pg_data_t when it's allocated
Minchan Kim [Tue, 31 Jul 2012 23:46:11 +0000 (16:46 -0700)]
mips: zero out pg_data_t when it's allocated

This patch is preparation for the next patch which removes the zeroing of
the pg_data_t in core MM.  All archs except MIPS already do this.

Signed-off-by: Minchan Kim <minchan@kernel.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomemcg: fix memory accounting scalability in shrink_page_list
Tim Chen [Tue, 31 Jul 2012 23:46:08 +0000 (16:46 -0700)]
memcg: fix memory accounting scalability in shrink_page_list

I noticed in a multi-process parallel files reading benchmark I ran on a 8
socket machine, throughput slowed down by a factor of 8 when I ran the
benchmark within a cgroup container.  I traced the problem to the
following code path (see below) when we are trying to reclaim memory from
file cache.  The res_counter_uncharge function is called on every
reclaimed page and creates heavy lock contention.  The patch below
allows the reclaimed pages to be uncharged from the resource counter in
batch, which recovers the regression.

Tim

     40.67%           usemem  [kernel.kallsyms]                   [k] _raw_spin_lock
                      |
                      --- _raw_spin_lock
                         |
                         |--92.61%-- res_counter_uncharge
                         |          |
                         |          |--100.00%-- __mem_cgroup_uncharge_common
                         |          |          |
                         |          |          |--100.00%-- mem_cgroup_uncharge_cache_page
                         |          |          |          __remove_mapping
                         |          |          |          shrink_page_list
                         |          |          |          shrink_inactive_list
                         |          |          |          shrink_mem_cgroup_zone
                         |          |          |          shrink_zone
                         |          |          |          do_try_to_free_pages
                         |          |          |          try_to_free_pages
                         |          |          |          __alloc_pages_nodemask
                         |          |          |          alloc_pages_current
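
A hedged, simplified sketch of the batching described above (not the exact
diff; the snippet stands in for the page-freeing step at the end of
shrink_page_list()):

	/* coalesce per-page res_counter updates for the whole batch */
	mem_cgroup_uncharge_start();
	free_hot_cold_page_list(&free_pages, 1);
	mem_cgroup_uncharge_end();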

Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm/sparse: remove index_init_lock
Gavin Shan [Tue, 31 Jul 2012 23:46:06 +0000 (16:46 -0700)]
mm/sparse: remove index_init_lock

sparse_index_init() uses the index_init_lock spinlock to protect root
mem_section assignment.  The lock is not necessary anymore because the
function is called only during boot (during paging init which is executed
only from a single CPU) and from the hotplug code (by add_memory() via
arch_add_memory()) which uses mem_hotplug_mutex.

The lock was introduced by 28ae55c9 ("sparsemem extreme: hotplug
preparation") and sparse_index_init() was used only during boot at that
time.

Later when the hotplug code (and add_memory()) was introduced there was no
synchronization so it was possible to online more sections from the same
root probably (though I am not 100% sure about that).  The first
synchronization has been added by 6ad696d2 ("mm: allow memory hotplug and
hibernation in the same kernel") which was later replaced by the
mem_hotplug_mutex - 20d6c96b ("mem-hotplug: introduce
{un}lock_memory_hotplug()").

Let's remove the lock as it is not needed and it makes the code more
confusing.

[mhocko@suse.cz: changelog]
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm/sparse: more checks on mem_section number
Gavin Shan [Tue, 31 Jul 2012 23:46:04 +0000 (16:46 -0700)]
mm/sparse: more checks on mem_section number

__section_nr() was implemented to retrieve the corresponding memory
section number according to its descriptor.  It's possible that the
specified memory section descriptor doesn't exist in the global array, so
add a check for that case and report an error when it is hit.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm/sparse: optimize sparse_index_alloc
Gavin Shan [Tue, 31 Jul 2012 23:46:02 +0000 (16:46 -0700)]
mm/sparse: optimize sparse_index_alloc

With CONFIG_SPARSEMEM_EXTREME, the two levels of memory section
descriptors are allocated from slab or bootmem.  When allocating from
slab, let the slab/bootmem allocator clear the memory chunk; we needn't
clear it explicitly.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomemcg: add mem_cgroup_from_css() helper
Wanpeng Li [Tue, 31 Jul 2012 23:46:01 +0000 (16:46 -0700)]
memcg: add mem_cgroup_from_css() helper

Add a mem_cgroup_from_css() helper to replace open-coded invocations of
container_of().  This clarifies the code and adds a little more type safety.
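
A minimal sketch of the helper's likely shape (assumed from this
description; the actual definition lives in mm/memcontrol.c):

static inline struct mem_cgroup *
mem_cgroup_from_css(struct cgroup_subsys_state *s)
{
	return container_of(s, struct mem_cgroup, css);
}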

[akpm@linux-foundation.org: fix extensive breakage]
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Gavin Shan <shangw@linux.vnet.ibm.com>
Cc: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Cc: Michal Hocko <mhocko@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomemcg: further prevent OOM with too many dirty pages
Hugh Dickins [Tue, 31 Jul 2012 23:45:59 +0000 (16:45 -0700)]
memcg: further prevent OOM with too many dirty pages

The may_enter_fs test turns out to be too restrictive: though I saw no
problem with it when testing on 3.5-rc6, it very soon OOMed when I tested
on 3.5-rc6-mm1.  I don't know what the difference there is, perhaps I just
slightly changed the way I started off the testing: dd if=/dev/zero
of=/mnt/temp bs=1M count=1024; rm -f /mnt/temp; sync repeatedly, in 20M
memory.limit_in_bytes cgroup to ext4 on USB stick.

ext4 (and gfs2 and xfs) turn out to allocate new pages for writing with
AOP_FLAG_NOFS: that seems a little worrying, and it's unclear to me why
the transaction needs to be started even before allocating pagecache
memory.  But it may not be worth worrying about these days: if direct
reclaim avoids FS writeback, does __GFP_FS now mean anything?

Anyway, we insisted on the may_enter_fs test to avoid hangs with the loop
device; but since that also masks off __GFP_IO, we can test for __GFP_IO
directly, ignoring may_enter_fs and __GFP_FS.

But even so, the test still OOMs sometimes: when originally testing on
3.5-rc6, it OOMed about one time in five or ten; when testing just now on
3.5-rc6-mm1, it OOMed on the first iteration.

This residual problem comes from an accumulation of pages under ordinary
writeback, not marked PageReclaim, so rightly not causing the memcg check
to wait on their writeback: these too can prevent shrink_page_list() from
freeing any pages, so many times that memcg reclaim fails and OOMs.

Deal with these in the same way as direct reclaim now deals with dirty FS
pages: mark them PageReclaim.  It is appropriate to rotate these to tail
of list when writepage completes, but more importantly, the PageReclaim
flag makes memcg reclaim wait on them if encountered again.  Increment
NR_VMSCAN_IMMEDIATE?  That's arguable: I chose not.

Setting PageReclaim here may occasionally race with end_page_writeback()
clearing it: lru_deactivate_fn() already faced the same race, and
correctly concluded that the window is small and the issue non-critical.

With these changes, the test runs indefinitely without OOMing on ext4,
ext3 and ext2: I'll move on to test with other filesystems later.

Trivia: invert conditions for a clearer block without an else, and goto
keep_locked to do the unlock_page.

Signed-off-by: Hugh Dickins <hughd@google.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujtisu.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ying Han <yinghan@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Fengguang Wu <fengguang.wu@intel.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomemcg: prevent OOM with too many dirty pages
Michal Hocko [Tue, 31 Jul 2012 23:45:55 +0000 (16:45 -0700)]
memcg: prevent OOM with too many dirty pages

The current implementation of dirty pages throttling is not memcg aware
which makes it easy to have memcg LRUs full of dirty pages.  Without
throttling, these LRUs can be scanned faster than the rate of writeback,
leading to memcg OOM conditions when the hard limit is small.

This patch fixes the problem by throttling the allocating process
(possibly a writer) during the hard limit reclaim by waiting on
PageReclaim pages.  We are waiting only for PageReclaim pages because
those are the pages that made one full round over LRU and that means that
the writeback is much slower than scanning.
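
A heavily simplified sketch of where the throttling takes effect (not the
exact hunk; surrounding shrink_page_list() bookkeeping and additional
conditions are elided):

	if (PageWriteback(page)) {
		if (!global_reclaim(sc) && PageReclaim(page))
			/* memcg reclaim: writeback cannot keep up, wait */
			wait_on_page_writeback(page);
		else
			goto keep_locked;
	}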

The solution is far from being ideal - long term solution is memcg aware
dirty throttling - but it is meant to be a band aid until we have a real
fix.  We are seeing this happen during nightly backups which are placed
into containers to prevent eviction of the real working set.

The change affects only memcg reclaim and only when we encounter
PageReclaim pages, which is a signal that reclaim cannot keep up with the
writers, so somebody should be throttled.  This could potentially be
unfair, because somebody else in the group may be throttled on behalf of
the writer; but since writers also need to allocate, and do so at a
higher rate, the probability that only innocent processes are penalized
is not that high.

I have tested this change by a simple dd copying /dev/zero to tmpfs or
ext3 running under small memcg (1G copy under 5M, 60M, 300M and 2G
containers) and dd got killed by OOM killer every time.  With the patch I
could run the dd with the same size under 5M controller without any OOM.
The issue is more visible with slower devices for output.

* With the patch
================
* tmpfs size=2G
---------------
$ vim cgroup_cache_oom_test.sh
$ ./cgroup_cache_oom_test.sh 5M
using Limit 5M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 30.4049 s, 34.5 MB/s
$ ./cgroup_cache_oom_test.sh 60M
using Limit 60M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 31.4561 s, 33.3 MB/s
$ ./cgroup_cache_oom_test.sh 300M
using Limit 300M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 20.4618 s, 51.2 MB/s
$ ./cgroup_cache_oom_test.sh 2G
using Limit 2G for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.42172 s, 738 MB/s

* ext3
------
$ ./cgroup_cache_oom_test.sh 5M
using Limit 5M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 27.9547 s, 37.5 MB/s
$ ./cgroup_cache_oom_test.sh 60M
using Limit 60M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 30.3221 s, 34.6 MB/s
$ ./cgroup_cache_oom_test.sh 300M
using Limit 300M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.5764 s, 42.7 MB/s
$ ./cgroup_cache_oom_test.sh 2G
using Limit 2G for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.35828 s, 312 MB/s

* Without the patch
===================
* tmpfs size=2G
---------------
$ ./cgroup_cache_oom_test.sh 5M
using Limit 5M for group
./cgroup_cache_oom_test.sh: line 46:  4668 Killed                  dd if=/dev/zero of=$OUT/zero bs=1M count=$count
$ ./cgroup_cache_oom_test.sh 60M
using Limit 60M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 25.4989 s, 41.1 MB/s
$ ./cgroup_cache_oom_test.sh 300M
using Limit 300M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 24.3928 s, 43.0 MB/s
$ ./cgroup_cache_oom_test.sh 2G
using Limit 2G for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.49797 s, 700 MB/s

* ext3
------
$ ./cgroup_cache_oom_test.sh 5M
using Limit 5M for group
./cgroup_cache_oom_test.sh: line 46:  4689 Killed                  dd if=/dev/zero of=$OUT/zero bs=1M count=$count
$ ./cgroup_cache_oom_test.sh 60M
using Limit 60M for group
./cgroup_cache_oom_test.sh: line 46:  4692 Killed                  dd if=/dev/zero of=$OUT/zero bs=1M count=$count
$ ./cgroup_cache_oom_test.sh 300M
using Limit 300M for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 20.248 s, 51.8 MB/s
$ ./cgroup_cache_oom_test.sh 2G
using Limit 2G for group
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 2.85201 s, 368 MB/s

[akpm@linux-foundation.org: tweak changelog, reordered the test to optimize for CONFIG_CGROUP_MEM_RES_CTLR=n]
[hughd@google.com: fix deadlock with loop driver]
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujtisu.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ying Han <yinghan@google.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Hugh Dickins <hughd@google.com>
Reviewed-by: Mel Gorman <mgorman@suse.de>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Michal Hocko <mhocko@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: mmu_notifier: fix freed page still mapped in secondary MMU
Xiao Guangrong [Tue, 31 Jul 2012 23:45:52 +0000 (16:45 -0700)]
mm: mmu_notifier: fix freed page still mapped in secondary MMU

mmu_notifier_release() is called when the process is exiting.  It will
delete all the mmu notifiers.  But at this time the page belonging to the
process is still present in page tables and is present on the LRU list, so
this race will happen:

      CPU 0                 CPU 1
mmu_notifier_release:    try_to_unmap:
   hlist_del_init_rcu(&mn->hlist);
                            ptep_clear_flush_notify:
                                  mmu notifier not found
                            free page  !!!!!!
                            /*
                             * At the point, the page has been
                             * freed, but it is still mapped in
                             * the secondary MMU.
                             */

  mn->ops->release(mn, mm);

Then the box is not stable and sometimes we can get this bug:

[  738.075923] BUG: Bad page state in process migrate-perf  pfn:03bec
[  738.075931] page:ffffea00000efb00 count:0 mapcount:0 mapping:          (null) index:0x8076
[  738.075936] page flags: 0x20000000000014(referenced|dirty)

The same issue is present in mmu_notifier_unregister().

We can call ->release before deleting the notifier to ensure the page has
been unmapped from the secondary MMU before it is freed.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
Cc: Avi Kivity <avi@redhat.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: only check anon swapin page charges for swap cache
Johannes Weiner [Tue, 31 Jul 2012 23:45:50 +0000 (16:45 -0700)]
mm: memcg: only check anon swapin page charges for swap cache

shmem knows for sure that the page is in swap cache when attempting to
charge a page, because the cache charge entry function has a check for it.
Only anon pages may be removed from swap cache already when trying to
charge their swapin.

Adjust the comment, though: '4969c11 mm: fix swapin race condition' added
a stable PageSwapCache check under the page lock in the do_swap_page()
before calling the memory controller, so it's unuse_pte()'s pte_same()
that may fail.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: only check swap cache pages for repeated charging
Johannes Weiner [Tue, 31 Jul 2012 23:45:47 +0000 (16:45 -0700)]
mm: memcg: only check swap cache pages for repeated charging

Only anon and shmem pages in the swap cache are attempted to be charged
multiple times, from every swap pte fault or from shmem_unuse().  No other
pages require checking PageCgroupUsed().

Charging pages in the swap cache is also serialized by the page lock, and
since both the try_charge and commit_charge are called under the same page
lock section, the PageCgroupUsed() check might as well happen before the
counter charging, let alone reclaim.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: split swapin charge function into private and public part
Johannes Weiner [Tue, 31 Jul 2012 23:45:43 +0000 (16:45 -0700)]
mm: memcg: split swapin charge function into private and public part

When shmem is charged upon swapin, it does not need to check twice whether
the memory controller is enabled.

Also, shmem pages do not have to be checked for everything that regular
anon pages have to be checked for, so let shmem use the internal version
directly and allow future patches to move around checks that are only
required when swapping in anon pages.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: remove needless !mm fixup to init_mm when charging
Johannes Weiner [Tue, 31 Jul 2012 23:45:40 +0000 (16:45 -0700)]
mm: memcg: remove needless !mm fixup to init_mm when charging

It does not matter to __mem_cgroup_try_charge() if the passed mm is NULL
or init_mm, it will charge the root memcg in either case.

Also fix up the comment in __mem_cgroup_try_charge() that claimed the
init_mm would be charged when no mm was passed.  It's not really
incorrect, but confusing.  Clarify that the root memcg is charged in this
case.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: remove unneeded shmem charge type
Johannes Weiner [Tue, 31 Jul 2012 23:45:39 +0000 (16:45 -0700)]
mm: memcg: remove unneeded shmem charge type

shmem page charges have not needed a separate charge type to tell them
from regular file pages since 08e552c ("memcg: synchronized LRU").

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: move swapin charge functions above callsites
Johannes Weiner [Tue, 31 Jul 2012 23:45:36 +0000 (16:45 -0700)]
mm: memcg: move swapin charge functions above callsites

Charging cache pages may require swapin in the shmem case.  Save the
forward declaration and just move the swapin functions above the cache
charging functions.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: only check for PageSwapCache when uncharging anon
Johannes Weiner [Tue, 31 Jul 2012 23:45:34 +0000 (16:45 -0700)]
mm: memcg: only check for PageSwapCache when uncharging anon

Only anon pages that are uncharged at the time of the last page table
mapping vanishing may be in swapcache.

When shmem pages, file pages, swap-freed anon pages, or just migrated
pages are uncharged, they are known for sure to be not in swapcache.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: push down PageSwapCache check into uncharge entry functions
Johannes Weiner [Tue, 31 Jul 2012 23:45:31 +0000 (16:45 -0700)]
mm: memcg: push down PageSwapCache check into uncharge entry functions

Not all uncharge paths need to check if the page is swapcache, some of
them can know for sure.

Push down the check into all callsites of uncharge_common() so that the
patch that removes some of them is more obvious.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: swapfile: clean up unuse_pte race handling
Johannes Weiner [Tue, 31 Jul 2012 23:45:28 +0000 (16:45 -0700)]
mm: swapfile: clean up unuse_pte race handling

The conditional mem_cgroup_cancel_charge_swapin() is a leftover from when
the function would continue to reestablish the page even after
mem_cgroup_try_charge_swapin() failed.  After 85d9fc8 "memcg: fix refcnt
handling at swapoff", the condition is always true when this code is
reached.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agomm: memcg: fix compaction/migration failing due to memcg limits
Johannes Weiner [Tue, 31 Jul 2012 23:45:25 +0000 (16:45 -0700)]
mm: memcg: fix compaction/migration failing due to memcg limits

Compaction (and page migration in general) can currently be hindered
through pages being owned by memory cgroups that are at their limits and
unreclaimable.

The reason is that the replacement page is being charged against the limit
while the page being replaced is also still charged.  But this seems
unnecessary, given that only one of the two pages will still be in use
after migration finishes.

This patch changes the memcg migration sequence so that the replacement
page is not charged.  Whatever page is still in use after successful or
failed migration gets to keep the charge of the page that was going to be
replaced.

The replacement page will still show up temporarily in the rss/cache
statistics, this can be fixed in a later patch as it's less urgent.

Reported-by: David Rientjes <rientjes@google.com>
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Wanpeng Li <liwp.linux@gmail.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agoswapfile: avoid dereferencing bd_disk during swap_entry_free for network storage
Mel Gorman [Tue, 31 Jul 2012 23:45:20 +0000 (16:45 -0700)]
swapfile: avoid dereferencing bd_disk during swap_entry_free for network storage

Commit b3a27d ("swap: Add swap slot free callback to
block_device_operations") dereferences p->bdev->bd_disk but this is a NULL
dereference if using swap-over-NFS.  This patch checks SWP_BLKDEV on the
swap_info_struct before dereferencing.

With reference to this callback, Christoph Hellwig stated "Please just
remove the callback entirely.  It has no user outside the staging tree and
was added clearly against the rules for that staging tree".  This would
also be my preference but there was not an obvious way of keeping zram in
staging/ happy.
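
A hedged sketch of the resulting guard (condensed from the description
above; the snippet stands in for the relevant part of swap_entry_free()):

	/* swap-over-NFS areas have no backing block device */
	if (p->flags & SWP_BLKDEV) {
		struct gendisk *disk = p->bdev->bd_disk;

		if (disk->fops->swap_slot_free_notify)
			disk->fops->swap_slot_free_notify(p->bdev, offset);
	}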

Signed-off-by: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Eric Paris <eparis@redhat.com>
Cc: James Morris <jmorris@namei.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: Neil Brown <neilb@suse.de>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agonfs: prevent page allocator recursions with swap over NFS.
Mel Gorman [Tue, 31 Jul 2012 23:45:16 +0000 (16:45 -0700)]
nfs: prevent page allocator recursions with swap over NFS.

GFP_NOFS is _more_ permissive than GFP_NOIO in that it will initiate IO,
just not of any filesystem data.

The problem is that previously NOFS was correct because that avoids
recursion into the NFS code.  With swap-over-NFS, it is no longer correct
as swap IO can lead to this recursion.
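
An illustrative fragment only (not a verbatim hunk; the inode check shown is
one plausible way to express the condition) of the kind of change this
implies: allocations reachable from swap-out over NFS drop to GFP_NOIO,
because GFP_NOFS still permits block I/O to be initiated:

	/* swap I/O must not recurse into the filesystem or block layer */
	gfp_t gfp_flags = IS_SWAPFILE(inode) ? GFP_NOIO : GFP_NOFS;

	req = kmalloc(sizeof(*req), gfp_flags);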

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Eric Paris <eparis@redhat.com>
Cc: James Morris <jmorris@namei.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: Neil Brown <neilb@suse.de>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agonfs: enable swap on NFS
Mel Gorman [Tue, 31 Jul 2012 23:45:12 +0000 (16:45 -0700)]
nfs: enable swap on NFS

Implement the new swapfile a_ops for NFS and hook up ->direct_IO.  This
will set the NFS socket to SOCK_MEMALLOC and run socket reconnect under
PF_MEMALLOC as well as reset SOCK_MEMALLOC before engaging the protocol
->connect() method.

PF_MEMALLOC should allow the allocation of struct socket and related
objects and the early (re)setting of SOCK_MEMALLOC should allow us to
receive the packets required for the TCP connection buildup.
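
A hedged sketch of that sequence (simplified; the real code sits in the
sunrpc transport layer and restores task flags more carefully):

	unsigned long pflags = current->flags;
	int ret;

	sk_set_memalloc(sock->sk);		/* SOCK_MEMALLOC before connect */
	current->flags |= PF_MEMALLOC;		/* allow dipping into reserves */
	ret = kernel_connect(sock, addr, addrlen, 0);
	current->flags = pflags;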

[jlayton@redhat.com: Restore PF_MEMALLOC task flags in all cases]
[dfeng@redhat.com: Fix handling of multiple swap files]
[a.p.zijlstra@chello.nl: Original patch]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Eric Paris <eparis@redhat.com>
Cc: James Morris <jmorris@namei.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: Neil Brown <neilb@suse.de>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
12 years agonfs: disable data cache revalidation for swapfiles
Mel Gorman [Tue, 31 Jul 2012 23:45:10 +0000 (16:45 -0700)]
nfs: disable data cache revalidation for swapfiles

The VM does not like PG_private set on PG_swapcache pages.  As suggested
by Trond in http://lkml.org/lkml/2006/8/25/348, this patch disables NFS
data cache revalidation on swap files, as it does not make sense to have
other clients change the file while it is being used as swap.  This avoids
setting PG_private on swap pages, since there ought to be no further races
with invalidate_inode_pages2() to deal with.

Since we cannot set PG_private we cannot use page->private which is
already used by PG_swapcache pages to store the nfs_page.  Thus augment
the new nfs_page_find_request logic.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Rik van Riel <riel@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Eric B Munson <emunson@mgebm.net>
Cc: Eric Paris <eparis@redhat.com>
Cc: James Morris <jmorris@namei.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Christie <michaelc@cs.wisc.edu>
Cc: Neil Brown <neilb@suse.de>
Cc: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Xiaotian Feng <dfeng@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>