[03/13] crypto: x86/sha - yield FPU context during long loops

Message ID 20221219220223.3982176-4-elliott@hpe.com
State New
Series crypto: x86 - yield FPU context during long loops

Commit Message

Elliott, Robert (Servers) Dec. 19, 2022, 10:02 p.m. UTC
  The x86 assembly language implementations using SIMD process data
between kernel_fpu_begin() and kernel_fpu_end() calls. Those calls
disable scheduler preemption, which prevents the CPU core from being
used by other threads.

The update() and finup() functions might be called to process
large quantities of data, which can result in RCU stalls and
soft lockups.

Periodically check if the kernel scheduler wants to run something
else on the CPU. If so, yield the kernel FPU context and let the
scheduler intervene.
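
This relies on the kernel_fpu_yield() helper added earlier in this
series. As a rough sketch of the assumed semantics (paraphrased, not
quoted from that patch), the helper breaks the FPU section only when
the scheduler has work pending:

void kernel_fpu_yield(void)
{
	/* Sketch: if a reschedule is pending, close the FPU region,
	 * let the scheduler run another task, then reopen the region.
	 */
	if (need_resched()) {
		kernel_fpu_end();
		cond_resched();
		kernel_fpu_begin();
	}
}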

Fixes: 66be89515888 ("crypto: sha1 - SSSE3 based SHA1 implementation for x86-64")
Fixes: 8275d1aa6422 ("crypto: sha256 - Create module providing optimized SHA256 routines using SSSE3, AVX or AVX2 instructions.")
Fixes: 87de4579f92d ("crypto: sha512 - Create module providing optimized SHA512 routines using SSSE3, AVX or AVX2 instructions.")
Fixes: aa031b8f702e ("crypto: x86/sha512 - load based on CPU features")
Suggested-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Robert Elliott <elliott@hpe.com>
---
 arch/x86/crypto/sha1_avx2_x86_64_asm.S |   6 +-
 arch/x86/crypto/sha1_ni_asm.S          |   8 +-
 arch/x86/crypto/sha1_ssse3_glue.c      | 120 ++++++++++++++++++++-----
 arch/x86/crypto/sha256_ni_asm.S        |   8 +-
 arch/x86/crypto/sha256_ssse3_glue.c    | 115 +++++++++++++++++++-----
 arch/x86/crypto/sha512_ssse3_glue.c    |  89 ++++++++++++++----
 6 files changed, 277 insertions(+), 69 deletions(-)
  

Comments

Herbert Xu Dec. 20, 2022, 3:57 a.m. UTC | #1
On Mon, Dec 19, 2022 at 04:02:13PM -0600, Robert Elliott wrote:
>
> +void __sha1_transform_ssse3(struct sha1_state *state, const u8 *data, int blocks)
> +{
> +	if (blocks <= 0)
> +		return;
> +
> +	kernel_fpu_begin();
> +	for (;;) {
> +		const int chunks = min(blocks, 4096 / SHA1_BLOCK_SIZE);
> +
> +		sha1_transform_ssse3(state->state, data, chunks);
> +		data += chunks * SHA1_BLOCK_SIZE;
> +		blocks -= chunks;
> +
> +		if (blocks <= 0)
> +			break;
> +
> +		kernel_fpu_yield();

Shouldn't this check the MAY_SLEEP flag?

Cheers,
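
For illustration, the guard being asked about might look like the
sketch below, assuming a per-request sleep indication were plumbed
down to these loops (comment #3 notes that no such flag exists yet):

		if (blocks <= 0)
			break;

		if (may_sleep)		/* hypothetical plumbing */
			kernel_fpu_yield();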
  
Herbert Xu Dec. 30, 2022, 9:08 a.m. UTC | #2
On Mon, Dec 19, 2022 at 04:02:13PM -0600, Robert Elliott wrote:
>
> diff --git a/arch/x86/crypto/sha1_avx2_x86_64_asm.S b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
> index c3ee9334cb0f..df03fbb2c42c 100644
> --- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S
> +++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
> @@ -58,9 +58,9 @@
>  /*
>   * SHA-1 implementation with Intel(R) AVX2 instruction set extensions.
>   *
> - *This implementation is based on the previous SSSE3 release:
> - *Visit http://software.intel.com/en-us/articles/
> - *and refer to improving-the-performance-of-the-secure-hash-algorithm-1/
> + * This implementation is based on the previous SSSE3 release:
> + * Visit http://software.intel.com/en-us/articles/
> + * and refer to improving-the-performance-of-the-secure-hash-algorithm-1/

Could you please leave out changes which are not related to the
main purpose of this patch?

Put them into a separate patch if necessary.

> diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
> index 44340a1139e0..b269b455fbbe 100644
> --- a/arch/x86/crypto/sha1_ssse3_glue.c
> +++ b/arch/x86/crypto/sha1_ssse3_glue.c
> @@ -41,9 +41,7 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,
>  	 */
>  	BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0);
>  
> -	kernel_fpu_begin();
>  	sha1_base_do_update(desc, data, len, sha1_xform);
> -	kernel_fpu_end();

Moving kernel_fpu_begin/kernel_fpu_end down seems to be entirely
unnecessary as you could already call kernel_fpu_yield deep down
the stack with the current code.

Thanks,
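
For illustration, that would keep the chunked loop in the block
function while the callers retain kernel_fpu_begin()/kernel_fpu_end(),
roughly like this sketch (not part of the patch):

static void sha1_xform_chunked(struct sha1_state *state,
			       const u8 *data, int blocks)
{
	/* caller already holds the FPU context via kernel_fpu_begin() */
	while (blocks > 0) {
		int chunks = min(blocks, 4096 / SHA1_BLOCK_SIZE);

		sha1_transform_ssse3(state->state, data, chunks);
		data += chunks * SHA1_BLOCK_SIZE;
		blocks -= chunks;

		if (blocks > 0)
			kernel_fpu_yield();
	}
}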
  
Herbert Xu Jan. 12, 2023, 8:05 a.m. UTC | #3
On Mon, Dec 19, 2022 at 04:02:13PM -0600, Robert Elliott wrote:
>
> @@ -41,9 +41,7 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,

I just realised a show-stopper with this patch-set.  We don't
have a desc->flags field that tells us whether we can sleep or
not.

I'm currently doing a patch-set for per-request keys and I will
add a flags field to shash_desc so we could use that for your
patch-set too.

Thanks,
  
Eric Biggers Jan. 12, 2023, 6:46 p.m. UTC | #4
On Thu, Jan 12, 2023 at 04:05:55PM +0800, Herbert Xu wrote:
> On Mon, Dec 19, 2022 at 04:02:13PM -0600, Robert Elliott wrote:
> >
> > @@ -41,9 +41,7 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,
> 
> I just realised a show-stopper with this patch-set.  We don't
> have a desc->flags field that tells us whether we can sleep or
> not.
> 
> I'm currently doing a patch-set for per-request keys and I will
> add a flags field to shash_desc so we could use that for your
> patch-set too.
> 

Right, this used to exist, but it didn't actually do anything, and it had
suffered heavily from bitrot.  For example, some callers specified MAY_SLEEP
when actually they couldn't sleep.  IIRC, some callers also didn't even bother
initializing the flags, so they were passing uninitialized memory.  So I removed
it in commit 877b5691f27a ("crypto: shash - remove shash_desc::flags").

Has there been any consideration of just adding the crypto_shash_update_large()
helper function that I had mentioned in the commit message of 877b5691f27a?

- Eric
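
For reference, crypto_shash_update_large() exists only as an idea in
that commit message; a minimal sketch, with the chunk size an
assumption, might be:

int crypto_shash_update_large(struct shash_desc *desc,
			      const u8 *data, unsigned int len)
{
	/* bound each update so non-preemptible SIMD sections stay short */
	do {
		unsigned int chunk = min(len, 4096U);
		int err = crypto_shash_update(desc, data, chunk);

		if (err)
			return err;
		data += chunk;
		len -= chunk;
		cond_resched();
	} while (len);

	return 0;
}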
  
Herbert Xu Jan. 13, 2023, 2:36 a.m. UTC | #5
On Thu, Jan 12, 2023 at 10:46:42AM -0800, Eric Biggers wrote:
>
> Right, this used to exist, but it didn't actually do anything, and it had
> suffered heavily from bitrot.  For example, some callers specified MAY_SLEEP
> when actually they couldn't sleep.  IIRC, some callers also didn't even bother
> initializing the flags, so they were passing uninitialized memory.  So I removed
> it in commit 877b5691f27a ("crypto: shash - remove shash_desc::flags").
> 
> Has there been any consideration of just adding the crypto_shash_update_large()
> helper function that I had mentioned in the commit message of 877b5691f27a?

I had forgotten about this :)

Perhaps we should just convert any users that trigger these warnings
over to ahash? The shash interface was never meant to process large
amounts of data anyway.

Cheers,
  
Herbert Xu Jan. 13, 2023, 2:37 a.m. UTC | #6
On Fri, Jan 13, 2023 at 10:36:08AM +0800, Herbert Xu wrote:
>
> Perhaps we should just convert any users that trigger these warnings
> over to ahash? The shash interface was never meant to process large
> amounts of data anyway.

We could even add some length checks in shash to ensure that
all large updates fail with a big bright warning once the existing
users have been converted.

Cheers,
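
Such a guard might be as simple as the following sketch at the top of
crypto_shash_update(), where the threshold name and value are
assumptions:

	if (WARN_ONCE(len > SHASH_MAX_UPDATE_LEN,	/* hypothetical limit */
		      "shash: oversized update of %u bytes\n", len))
		return -EMSGSIZE;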
  
Elliott, Robert (Servers) Jan. 13, 2023, 7:35 p.m. UTC | #7
> -----Original Message-----
> From: Herbert Xu <herbert@gondor.apana.org.au>
> Sent: Thursday, January 12, 2023 8:38 PM
> To: Eric Biggers <ebiggers@kernel.org>
> Cc: Elliott, Robert (Servers) <elliott@hpe.com>; davem@davemloft.net;
> Jason@zx2c4.com; ardb@kernel.org; ap420073@gmail.com;
> David.Laight@aculab.com; tim.c.chen@linux.intel.com; peter@n8pjl.ca;
> tglx@linutronix.de; mingo@redhat.com; bp@alien8.de;
> dave.hansen@linux.intel.com; linux-crypto@vger.kernel.org; x86@kernel.org;
> linux-kernel@vger.kernel.org
> Subject: Re: [PATCH 03/13] crypto: x86/sha - yield FPU context during long
> loops
> 
> On Fri, Jan 13, 2023 at 10:36:08AM +0800, Herbert Xu wrote:
> >
> > Perhaps we should just convert any users that trigger these warnings
> > over to ahash? The shash interface was never meant to process large
> > amounts of data anyway.
> 
> We could even add some length checks in shash to ensure that
> all large updates fail with a big bright warning once the existing
> users have been converted.

The call trace that triggered this whole topic was checking module
signatures during boot (thousands of files totaling 2.4 GB):
[   29.729849]  ? sha512_finup.part.0+0x1de/0x230 [sha512_ssse3]
[   29.729851]  ? pkcs7_digest+0xaf/0x1f0
[   29.729854]  ? pkcs7_verify+0x61/0x540
[   29.729856]  ? verify_pkcs7_message_sig+0x4a/0xe0
[   29.729859]  ? pkcs7_parse_message+0x174/0x1b0
[   29.729861]  ? verify_pkcs7_signature+0x4c/0x80
[   29.729862]  ? mod_verify_sig+0x74/0x90
[   29.729867]  ? module_sig_check+0x87/0xd0
[   29.729868]  ? load_module+0x4e/0x1fc0
[   29.729871]  ? xfs_file_read_iter+0x70/0xe0 [xfs]
[   29.729955]  ? __kernel_read+0x118/0x290
[   29.729959]  ? ima_post_read_file+0xac/0xc0
[   29.729962]  ? kernel_read_file+0x211/0x2a0
[   29.729965]  ? __do_sys_finit_module+0x93/0xf0

pkcs7_digest() uses shash like this:
        /* Allocate the hashing algorithm we're going to need and find out how
         * big the hash operational data will be.
         */
        tfm = crypto_alloc_shash(sinfo->sig->hash_algo, 0, 0);
        if (IS_ERR(tfm))
                return (PTR_ERR(tfm) == -ENOENT) ? -ENOPKG : PTR_ERR(tfm);

        desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
        sig->digest_size = crypto_shash_digestsize(tfm);

        ret = -ENOMEM;
        sig->digest = kmalloc(sig->digest_size, GFP_KERNEL);
        if (!sig->digest)
                goto error_no_desc;

        desc = kzalloc(desc_size, GFP_KERNEL);
        if (!desc)
                goto error_no_desc;

        desc->tfm   = tfm;

        /* Digest the message [RFC2315 9.3] */
        ret = crypto_shash_digest(desc, pkcs7->data, pkcs7->data_len,
                                  sig->digest);
        if (ret < 0)
                goto error;
        pr_devel("MsgDigest = [%*ph]\n", 8, sig->digest);

There is a crypto_ahash_digest() available. Interestingly, the number of
users of each one happens to be identical:
    $ grep -Er --include '*.[chS]' "crypto_shash_digest\(" | wc -l
    37
    $ grep -Er --include '*.[chS]' "crypto_ahash_digest\(" | wc -l
    37
  
Herbert Xu Jan. 16, 2023, 3:33 a.m. UTC | #8
On Fri, Jan 13, 2023 at 07:35:07PM +0000, Elliott, Robert (Servers) wrote:
>
> pkcs7_digest() uses shash like this:
>         /* Allocate the hashing algorithm we're going to need and find out how
>          * big the hash operational data will be.
>          */
>         tfm = crypto_alloc_shash(sinfo->sig->hash_algo, 0, 0);
>         if (IS_ERR(tfm))
>                 return (PTR_ERR(tfm) == -ENOENT) ? -ENOPKG : PTR_ERR(tfm);
> 
>         desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
>         sig->digest_size = crypto_shash_digestsize(tfm);
> 
>         ret = -ENOMEM;
>         sig->digest = kmalloc(sig->digest_size, GFP_KERNEL);
>         if (!sig->digest)
>                 goto error_no_desc;
> 
>         desc = kzalloc(desc_size, GFP_KERNEL);
>         if (!desc)
>                 goto error_no_desc;
> 
>         desc->tfm   = tfm;
> 
>         /* Digest the message [RFC2315 9.3] */
>         ret = crypto_shash_digest(desc, pkcs7->data, pkcs7->data_len,
>                                   sig->digest);
>         if (ret < 0)
>                 goto error;
>         pr_devel("MsgDigest = [%*ph]\n", 8, sig->digest);

As this path is sleepable, the conversion should be fairly trivial
with crypto_wait_req.

Cheers,
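
A sketch of that conversion for pkcs7_digest(), with error handling
trimmed and assuming pkcs7->data is addressable through a scatterlist:

	DECLARE_CRYPTO_WAIT(wait);
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	int ret;

	tfm = crypto_alloc_ahash(sinfo->sig->hash_algo, 0, 0);
	if (IS_ERR(tfm))
		return (PTR_ERR(tfm) == -ENOENT) ? -ENOPKG : PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	sg_init_one(&sg, pkcs7->data, pkcs7->data_len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
					CRYPTO_TFM_REQ_MAY_SLEEP,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, sig->digest, pkcs7->data_len);

	/* sleeps until a possibly asynchronous digest completes */
	ret = crypto_wait_req(crypto_ahash_digest(req), &wait);

	ahash_request_free(req);
	crypto_free_ahash(tfm);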
  

Patch

diff --git a/arch/x86/crypto/sha1_avx2_x86_64_asm.S b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
index c3ee9334cb0f..df03fbb2c42c 100644
--- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S
+++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
@@ -58,9 +58,9 @@ 
 /*
  * SHA-1 implementation with Intel(R) AVX2 instruction set extensions.
  *
- *This implementation is based on the previous SSSE3 release:
- *Visit http://software.intel.com/en-us/articles/
- *and refer to improving-the-performance-of-the-secure-hash-algorithm-1/
+ * This implementation is based on the previous SSSE3 release:
+ * Visit http://software.intel.com/en-us/articles/
+ * and refer to improving-the-performance-of-the-secure-hash-algorithm-1/
  *
  */
 
diff --git a/arch/x86/crypto/sha1_ni_asm.S b/arch/x86/crypto/sha1_ni_asm.S
index a69595b033c8..d513b85e242c 100644
--- a/arch/x86/crypto/sha1_ni_asm.S
+++ b/arch/x86/crypto/sha1_ni_asm.S
@@ -75,7 +75,7 @@ 
 .text
 
 /**
- * sha1_ni_transform - Calculate SHA1 hash using the x86 SHA-NI feature set
+ * sha1_transform_ni - Calculate SHA1 hash using the x86 SHA-NI feature set
  * @digest:	address of current 20-byte hash value (%rdi, DIGEST_PTR macro)
  * @data:	address of data (%rsi, DATA_PTR macro);
  *		data size must be a multiple of 64 bytes
@@ -94,9 +94,9 @@ 
  * The non-indented lines are instructions related to the message schedule.
  *
  * Return:    none
- * Prototype: asmlinkage void sha1_ni_transform(u32 *digest, const u8 *data, int blocks)
+ * Prototype: asmlinkage void sha1_transform_ni(u32 *digest, const u8 *data, int blocks)
  */
-SYM_TYPED_FUNC_START(sha1_ni_transform)
+SYM_TYPED_FUNC_START(sha1_transform_ni)
 	push		%rbp
 	mov		%rsp, %rbp
 	sub		$FRAME_SIZE, %rsp
@@ -294,7 +294,7 @@  SYM_TYPED_FUNC_START(sha1_ni_transform)
 	pop		%rbp
 
 	RET
-SYM_FUNC_END(sha1_ni_transform)
+SYM_FUNC_END(sha1_transform_ni)
 
 .section	.rodata.cst16.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 16
 .align 16
diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index 44340a1139e0..b269b455fbbe 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -41,9 +41,7 @@  static int sha1_update(struct shash_desc *desc, const u8 *data,
 	 */
 	BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0);
 
-	kernel_fpu_begin();
 	sha1_base_do_update(desc, data, len, sha1_xform);
-	kernel_fpu_end();
 
 	return 0;
 }
@@ -54,28 +52,46 @@  static int sha1_finup(struct shash_desc *desc, const u8 *data,
 	if (!crypto_simd_usable())
 		return crypto_sha1_finup(desc, data, len, out);
 
-	kernel_fpu_begin();
 	if (len)
 		sha1_base_do_update(desc, data, len, sha1_xform);
 	sha1_base_do_finalize(desc, sha1_xform);
-	kernel_fpu_end();
 
 	return sha1_base_finish(desc, out);
 }
 
-asmlinkage void sha1_transform_ssse3(struct sha1_state *state,
-				     const u8 *data, int blocks);
+asmlinkage void sha1_transform_ssse3(u32 *digest, const u8 *data, int blocks);
+
+void __sha1_transform_ssse3(struct sha1_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA1_BLOCK_SIZE);
+
+		sha1_transform_ssse3(state->state, data, chunks);
+		data += chunks * SHA1_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data,
 			     unsigned int len)
 {
-	return sha1_update(desc, data, len, sha1_transform_ssse3);
+	return sha1_update(desc, data, len, __sha1_transform_ssse3);
 }
 
 static int sha1_ssse3_finup(struct shash_desc *desc, const u8 *data,
 			      unsigned int len, u8 *out)
 {
-	return sha1_finup(desc, data, len, out, sha1_transform_ssse3);
+	return sha1_finup(desc, data, len, out, __sha1_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -113,19 +129,39 @@  static void unregister_sha1_ssse3(void)
 		crypto_unregister_shash(&sha1_ssse3_alg);
 }
 
-asmlinkage void sha1_transform_avx(struct sha1_state *state,
-				   const u8 *data, int blocks);
+asmlinkage void sha1_transform_avx(u32 *digest, const u8 *data, int blocks);
+
+void __sha1_transform_avx(struct sha1_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA1_BLOCK_SIZE);
+
+		sha1_transform_avx(state->state, data, chunks);
+		data += chunks * SHA1_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static int sha1_avx_update(struct shash_desc *desc, const u8 *data,
 			     unsigned int len)
 {
-	return sha1_update(desc, data, len, sha1_transform_avx);
+	return sha1_update(desc, data, len, __sha1_transform_avx);
 }
 
 static int sha1_avx_finup(struct shash_desc *desc, const u8 *data,
 			      unsigned int len, u8 *out)
 {
-	return sha1_finup(desc, data, len, out, sha1_transform_avx);
+	return sha1_finup(desc, data, len, out, __sha1_transform_avx);
 }
 
 static int sha1_avx_final(struct shash_desc *desc, u8 *out)
@@ -175,8 +211,28 @@  static void unregister_sha1_avx(void)
 
 #define SHA1_AVX2_BLOCK_OPTSIZE	4	/* optimal 4*64 bytes of SHA1 blocks */
 
-asmlinkage void sha1_transform_avx2(struct sha1_state *state,
-				    const u8 *data, int blocks);
+asmlinkage void sha1_transform_avx2(u32 *digest, const u8 *data, int blocks);
+
+void __sha1_transform_avx2(struct sha1_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA1_BLOCK_SIZE);
+
+		sha1_transform_avx2(state->state, data, chunks);
+		data += chunks * SHA1_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static bool avx2_usable(void)
 {
@@ -193,9 +249,9 @@  static void sha1_apply_transform_avx2(struct sha1_state *state,
 {
 	/* Select the optimal transform based on data block size */
 	if (blocks >= SHA1_AVX2_BLOCK_OPTSIZE)
-		sha1_transform_avx2(state, data, blocks);
+		__sha1_transform_avx2(state, data, blocks);
 	else
-		sha1_transform_avx(state, data, blocks);
+		__sha1_transform_avx(state, data, blocks);
 }
 
 static int sha1_avx2_update(struct shash_desc *desc, const u8 *data,
@@ -245,19 +301,39 @@  static void unregister_sha1_avx2(void)
 }
 
 #ifdef CONFIG_AS_SHA1_NI
-asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data,
-				  int rounds);
+asmlinkage void sha1_transform_ni(u32 *digest, const u8 *data, int rounds);
+
+void __sha1_transform_ni(struct sha1_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA1_BLOCK_SIZE);
+
+		sha1_transform_ni(state->state, data, chunks);
+		data += chunks * SHA1_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static int sha1_ni_update(struct shash_desc *desc, const u8 *data,
-			     unsigned int len)
+			  unsigned int len)
 {
-	return sha1_update(desc, data, len, sha1_ni_transform);
+	return sha1_update(desc, data, len, __sha1_transform_ni);
 }
 
 static int sha1_ni_finup(struct shash_desc *desc, const u8 *data,
-			      unsigned int len, u8 *out)
+			 unsigned int len, u8 *out)
 {
-	return sha1_finup(desc, data, len, out, sha1_ni_transform);
+	return sha1_finup(desc, data, len, out, __sha1_transform_ni);
 }
 
 static int sha1_ni_final(struct shash_desc *desc, u8 *out)
diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S
index e7a3b9939327..29458ec970a9 100644
--- a/arch/x86/crypto/sha256_ni_asm.S
+++ b/arch/x86/crypto/sha256_ni_asm.S
@@ -79,7 +79,7 @@ 
 .text
 
 /**
- * sha256_ni_transform - Calculate SHA256 hash using the x86 SHA-NI feature set
+ * sha256_transform_ni - Calculate SHA256 hash using the x86 SHA-NI feature set
  * @digest:	address of current 32-byte hash value (%rdi, DIGEST_PTR macro)
  * @data:	address of data (%rsi, DATA_PTR macro);
  *		data size must be a multiple of 64 bytes
@@ -98,9 +98,9 @@ 
  * The non-indented lines are instructions related to the message schedule.
  *
  * Return:	none
- * Prototype:	asmlinkage void sha256_ni_transform(u32 *digest, const u8 *data, int blocks)
+ * Prototype:	asmlinkage void sha256_transform_ni(u32 *digest, const u8 *data, int blocks)
  */
-SYM_TYPED_FUNC_START(sha256_ni_transform)
+SYM_TYPED_FUNC_START(sha256_transform_ni)
 	shl		$6, NUM_BLKS		/*  convert to bytes */
 	jz		.Ldone_hash
 	add		DATA_PTR, NUM_BLKS	/* pointer to end of data */
@@ -329,7 +329,7 @@  SYM_TYPED_FUNC_START(sha256_ni_transform)
 .Ldone_hash:
 
 	RET
-SYM_FUNC_END(sha256_ni_transform)
+SYM_FUNC_END(sha256_transform_ni)
 
 .section	.rodata.cst256.K256, "aM", @progbits, 256
 .align 64
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index 3a5f6be7dbba..43927cf3d06e 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -40,8 +40,28 @@ 
 #include <linux/string.h>
 #include <asm/simd.h>
 
-asmlinkage void sha256_transform_ssse3(struct sha256_state *state,
-				       const u8 *data, int blocks);
+asmlinkage void sha256_transform_ssse3(u32 *digest, const u8 *data, int blocks);
+
+void __sha256_transform_ssse3(struct sha256_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA256_BLOCK_SIZE);
+
+		sha256_transform_ssse3(state->state, data, chunks);
+		data += chunks * SHA256_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static int _sha256_update(struct shash_desc *desc, const u8 *data,
 			  unsigned int len, sha256_block_fn *sha256_xform)
@@ -58,9 +78,7 @@  static int _sha256_update(struct shash_desc *desc, const u8 *data,
 	 */
 	BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
 
-	kernel_fpu_begin();
 	sha256_base_do_update(desc, data, len, sha256_xform);
-	kernel_fpu_end();
 
 	return 0;
 }
@@ -71,11 +89,9 @@  static int sha256_finup(struct shash_desc *desc, const u8 *data,
 	if (!crypto_simd_usable())
 		return crypto_sha256_finup(desc, data, len, out);
 
-	kernel_fpu_begin();
 	if (len)
 		sha256_base_do_update(desc, data, len, sha256_xform);
 	sha256_base_do_finalize(desc, sha256_xform);
-	kernel_fpu_end();
 
 	return sha256_base_finish(desc, out);
 }
@@ -83,13 +99,13 @@  static int sha256_finup(struct shash_desc *desc, const u8 *data,
 static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
 			 unsigned int len)
 {
-	return _sha256_update(desc, data, len, sha256_transform_ssse3);
+	return _sha256_update(desc, data, len, __sha256_transform_ssse3);
 }
 
 static int sha256_ssse3_finup(struct shash_desc *desc, const u8 *data,
 	      unsigned int len, u8 *out)
 {
-	return sha256_finup(desc, data, len, out, sha256_transform_ssse3);
+	return sha256_finup(desc, data, len, out, __sha256_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -143,19 +159,39 @@  static void unregister_sha256_ssse3(void)
 				ARRAY_SIZE(sha256_ssse3_algs));
 }
 
-asmlinkage void sha256_transform_avx(struct sha256_state *state,
-				     const u8 *data, int blocks);
+asmlinkage void sha256_transform_avx(u32 *digest, const u8 *data, int blocks);
+
+void __sha256_transform_avx(struct sha256_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA256_BLOCK_SIZE);
+
+		sha256_transform_avx(state->state, data, chunks);
+		data += chunks * SHA256_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static int sha256_avx_update(struct shash_desc *desc, const u8 *data,
 			 unsigned int len)
 {
-	return _sha256_update(desc, data, len, sha256_transform_avx);
+	return _sha256_update(desc, data, len, __sha256_transform_avx);
 }
 
 static int sha256_avx_finup(struct shash_desc *desc, const u8 *data,
 		      unsigned int len, u8 *out)
 {
-	return sha256_finup(desc, data, len, out, sha256_transform_avx);
+	return sha256_finup(desc, data, len, out, __sha256_transform_avx);
 }
 
 static int sha256_avx_final(struct shash_desc *desc, u8 *out)
@@ -219,19 +255,39 @@  static void unregister_sha256_avx(void)
 				ARRAY_SIZE(sha256_avx_algs));
 }
 
-asmlinkage void sha256_transform_rorx(struct sha256_state *state,
-				      const u8 *data, int blocks);
+asmlinkage void sha256_transform_rorx(u32 *state, const u8 *data, int blocks);
+
+void __sha256_transform_avx2(struct sha256_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA256_BLOCK_SIZE);
+
+		sha256_transform_rorx(state->state, data, chunks);
+		data += chunks * SHA256_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static int sha256_avx2_update(struct shash_desc *desc, const u8 *data,
 			 unsigned int len)
 {
-	return _sha256_update(desc, data, len, sha256_transform_rorx);
+	return _sha256_update(desc, data, len, __sha256_transform_avx2);
 }
 
 static int sha256_avx2_finup(struct shash_desc *desc, const u8 *data,
 		      unsigned int len, u8 *out)
 {
-	return sha256_finup(desc, data, len, out, sha256_transform_rorx);
+	return sha256_finup(desc, data, len, out, __sha256_transform_avx2);
 }
 
 static int sha256_avx2_final(struct shash_desc *desc, u8 *out)
@@ -294,19 +350,38 @@  static void unregister_sha256_avx2(void)
 }
 
 #ifdef CONFIG_AS_SHA256_NI
-asmlinkage void sha256_ni_transform(struct sha256_state *digest,
-				    const u8 *data, int rounds);
+asmlinkage void sha256_transform_ni(u32 *digest, const u8 *data, int rounds);
+
+void __sha256_transform_ni(struct sha256_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA256_BLOCK_SIZE);
 
+		sha256_transform_ni(state->state, data, chunks);
+		data += chunks * SHA256_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 static int sha256_ni_update(struct shash_desc *desc, const u8 *data,
 			 unsigned int len)
 {
-	return _sha256_update(desc, data, len, sha256_ni_transform);
+	return _sha256_update(desc, data, len, __sha256_transform_ni);
 }
 
 static int sha256_ni_finup(struct shash_desc *desc, const u8 *data,
 		      unsigned int len, u8 *out)
 {
-	return sha256_finup(desc, data, len, out, sha256_ni_transform);
+	return sha256_finup(desc, data, len, out, __sha256_transform_ni);
 }
 
 static int sha256_ni_final(struct shash_desc *desc, u8 *out)
diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index 6d3b85e53d0e..cb6aad9d5052 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -39,8 +39,28 @@ 
 #include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
-asmlinkage void sha512_transform_ssse3(struct sha512_state *state,
-				       const u8 *data, int blocks);
+asmlinkage void sha512_transform_ssse3(u64 *digest, const u8 *data, int blocks);
+
+void __sha512_transform_ssse3(struct sha512_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA512_BLOCK_SIZE);
+
+		sha512_transform_ssse3(&state->state[0], data, chunks);
+		data += chunks * SHA512_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static int sha512_update(struct shash_desc *desc, const u8 *data,
 		       unsigned int len, sha512_block_fn *sha512_xform)
@@ -57,9 +77,7 @@  static int sha512_update(struct shash_desc *desc, const u8 *data,
 	 */
 	BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0);
 
-	kernel_fpu_begin();
 	sha512_base_do_update(desc, data, len, sha512_xform);
-	kernel_fpu_end();
 
 	return 0;
 }
@@ -70,11 +88,9 @@  static int sha512_finup(struct shash_desc *desc, const u8 *data,
 	if (!crypto_simd_usable())
 		return crypto_sha512_finup(desc, data, len, out);
 
-	kernel_fpu_begin();
 	if (len)
 		sha512_base_do_update(desc, data, len, sha512_xform);
 	sha512_base_do_finalize(desc, sha512_xform);
-	kernel_fpu_end();
 
 	return sha512_base_finish(desc, out);
 }
@@ -82,13 +98,13 @@  static int sha512_finup(struct shash_desc *desc, const u8 *data,
 static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
 		       unsigned int len)
 {
-	return sha512_update(desc, data, len, sha512_transform_ssse3);
+	return sha512_update(desc, data, len, __sha512_transform_ssse3);
 }
 
 static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data,
 	      unsigned int len, u8 *out)
 {
-	return sha512_finup(desc, data, len, out, sha512_transform_ssse3);
+	return sha512_finup(desc, data, len, out, __sha512_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -142,8 +158,29 @@  static void unregister_sha512_ssse3(void)
 			ARRAY_SIZE(sha512_ssse3_algs));
 }
 
-asmlinkage void sha512_transform_avx(struct sha512_state *state,
-				     const u8 *data, int blocks);
+asmlinkage void sha512_transform_avx(u64 *digest, const u8 *data, int blocks);
+
+void __sha512_transform_avx(struct sha512_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA512_BLOCK_SIZE);
+
+		sha512_transform_avx(state->state, data, chunks);
+		data += chunks * SHA512_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
+
 static bool avx_usable(void)
 {
 	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
@@ -158,13 +195,13 @@  static bool avx_usable(void)
 static int sha512_avx_update(struct shash_desc *desc, const u8 *data,
 		       unsigned int len)
 {
-	return sha512_update(desc, data, len, sha512_transform_avx);
+	return sha512_update(desc, data, len, __sha512_transform_avx);
 }
 
 static int sha512_avx_finup(struct shash_desc *desc, const u8 *data,
 	      unsigned int len, u8 *out)
 {
-	return sha512_finup(desc, data, len, out, sha512_transform_avx);
+	return sha512_finup(desc, data, len, out, __sha512_transform_avx);
 }
 
 /* Add padding and return the message digest. */
@@ -218,19 +255,39 @@  static void unregister_sha512_avx(void)
 			ARRAY_SIZE(sha512_avx_algs));
 }
 
-asmlinkage void sha512_transform_rorx(struct sha512_state *state,
-				      const u8 *data, int blocks);
+asmlinkage void sha512_transform_rorx(u64 *digest, const u8 *data, int blocks);
+
+void __sha512_transform_avx2(struct sha512_state *state, const u8 *data, int blocks)
+{
+	if (blocks <= 0)
+		return;
+
+	kernel_fpu_begin();
+	for (;;) {
+		const int chunks = min(blocks, 4096 / SHA512_BLOCK_SIZE);
+
+		sha512_transform_rorx(state->state, data, chunks);
+		data += chunks * SHA512_BLOCK_SIZE;
+		blocks -= chunks;
+
+		if (blocks <= 0)
+			break;
+
+		kernel_fpu_yield();
+	}
+	kernel_fpu_end();
+}
 
 static int sha512_avx2_update(struct shash_desc *desc, const u8 *data,
 		       unsigned int len)
 {
-	return sha512_update(desc, data, len, sha512_transform_rorx);
+	return sha512_update(desc, data, len, __sha512_transform_avx2);
 }
 
 static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data,
 	      unsigned int len, u8 *out)
 {
-	return sha512_finup(desc, data, len, out, sha512_transform_rorx);
+	return sha512_finup(desc, data, len, out, __sha512_transform_avx2);
 }
 
 /* Add padding and return the message digest. */