FROMGIT: crypto: arm/chacha20 - limit the preemption-disabled section
author    Eric Biggers <ebiggers@google.com>
          Sat, 17 Nov 2018 01:26:23 +0000 (17:26 -0800)
committer Eric Biggers <ebiggers@google.com>
          Wed, 5 Dec 2018 20:30:45 +0000 (12:30 -0800)
To improve responsiveness, disable preemption for each step of the walk
(which is at most PAGE_SIZE) rather than for the entire
encryption/decryption operation.

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit be2830b15b60011845ad701076511e8b93b2fd76
 https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master)
Bug: 112008522
Test: As series, see Ic61c13b53facfd2173065be715a7ee5f3af8760b
Change-Id: I21bfa9c14635e695b128c87df53fea505c3cdd4e
Signed-off-by: Eric Biggers <ebiggers@google.com>
diff --git a/arch/arm/crypto/chacha20-neon-glue.c b/arch/arm/crypto/chacha20-neon-glue.c
index 7386eb1c1889d3776675f210e37e31d06e4ab982..2bc035cb8f23a1ee9d5f82332c3a73fda6fa856a 100644
--- a/arch/arm/crypto/chacha20-neon-glue.c
+++ b/arch/arm/crypto/chacha20-neon-glue.c
@@ -68,22 +68,22 @@ static int chacha20_neon(struct skcipher_request *req)
        if (req->cryptlen <= CHACHA_BLOCK_SIZE || !may_use_simd())
                return crypto_chacha_crypt(req);
 
-       err = skcipher_walk_virt(&walk, req, true);
+       err = skcipher_walk_virt(&walk, req, false);
 
        crypto_chacha_init(state, ctx, walk.iv);
 
-       kernel_neon_begin();
        while (walk.nbytes > 0) {
                unsigned int nbytes = walk.nbytes;
 
                if (nbytes < walk.total)
                        nbytes = round_down(nbytes, walk.stride);
 
+               kernel_neon_begin();
                chacha20_doneon(state, walk.dst.virt.addr, walk.src.virt.addr,
                                nbytes);
+               kernel_neon_end();
                err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
        }
-       kernel_neon_end();
 
        return err;
 }
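
For reference, below is a minimal sketch of how chacha20_neon() reads after this
patch, assembled from the context lines of the hunk above. The local variable
declarations at the top of the function are not part of the hunk and are assumed
from the surrounding file, so treat them as illustrative rather than exact.

static int chacha20_neon(struct skcipher_request *req)
{
	/* Assumed declarations; not shown in the hunk above. */
	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
	struct skcipher_walk walk;
	u32 state[16];
	int err;

	/* Fall back to the generic implementation for short requests or
	 * when NEON cannot be used in the current context. */
	if (req->cryptlen <= CHACHA_BLOCK_SIZE || !may_use_simd())
		return crypto_chacha_crypt(req);

	/* atomic=false: the walk may now sleep between steps, which is
	 * safe because preemption is only disabled inside the loop. */
	err = skcipher_walk_virt(&walk, req, false);

	crypto_chacha_init(state, ctx, walk.iv);

	while (walk.nbytes > 0) {
		unsigned int nbytes = walk.nbytes;

		if (nbytes < walk.total)
			nbytes = round_down(nbytes, walk.stride);

		/* Preemption is disabled only around one walk step,
		 * i.e. at most PAGE_SIZE of data. */
		kernel_neon_begin();
		chacha20_doneon(state, walk.dst.virt.addr, walk.src.virt.addr,
				nbytes);
		kernel_neon_end();

		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
	}

	return err;
}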