From patchwork Wed Nov 16 04:13:19 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20694
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com,
    ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM,
    ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 01/24] crypto: tcrypt - test crc32
Date: Tue, 15 Nov 2022 22:13:19 -0600
Message-Id: <20221116041342.3841-2-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com>
 <20221116041342.3841-1-elliott@hpe.com>

Add self-test and speed tests for crc32, paralleling those offered
for crc32c and crct10dif.

Signed-off-by: Robert Elliott
---
 crypto/tcrypt.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index a82679b576bb..4426386dfb42 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1711,6 +1711,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
                 ret += tcrypt_test("gcm(aria)");
                 break;
 
+        case 59:
+                ret += tcrypt_test("crc32");
+                break;
+
         case 100:
                 ret += tcrypt_test("hmac(md5)");
                 break;
 
@@ -2317,6 +2321,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
                                 generic_hash_speed_template);
                 if (mode > 300 && mode < 400) break;
                 fallthrough;
+        case 329:
+                test_hash_speed("crc32", sec, generic_hash_speed_template);
+                if (mode > 300 && mode < 400) break;
+                fallthrough;
         case 399:
                 break;
 
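Note (illustrative, not part of the patch): as with tcrypt's other entries, the
new numbers are selected through the module parameters when tcrypt is loaded,
mode=59 for the crc32 self-test and mode=329 for the speed sweep (with sec=
selecting seconds-based timing, as the diff above shows). The same "crc32"
shash can also be driven directly through the kernel crypto API; the sketch
below uses a hypothetical helper name, crc32_via_shash():

#include <crypto/hash.h>
#include <linux/err.h>
#include <linux/slab.h>

/* Hypothetical helper, for illustration only: drive the "crc32" shash that
 * tcrypt_test("crc32") and test_hash_speed("crc32") exercise. */
static int crc32_via_shash(const u8 *data, unsigned int len, u8 *out)
{
        struct crypto_shash *tfm;
        struct shash_desc *desc;
        int ret;

        tfm = crypto_alloc_shash("crc32", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
        if (!desc) {
                crypto_free_shash(tfm);
                return -ENOMEM;
        }

        desc->tfm = tfm;
        ret = crypto_shash_digest(desc, data, len, out);  /* 4-byte CRC out */

        kfree(desc);
        crypto_free_shash(tfm);
        return ret;
}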

From patchwork Wed Nov 16 04:13:20 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20693
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com,
    ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM,
    ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 02/24] crypto: tcrypt - test nhpoly1305
Date: Tue, 15 Nov 2022 22:13:20 -0600
Message-Id: <20221116041342.3841-3-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com>
 <20221116041342.3841-1-elliott@hpe.com>

Add self-test mode for nhpoly1305.

Signed-off-by: Robert Elliott
---
 crypto/tcrypt.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 4426386dfb42..7a6a56751043 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1715,6 +1715,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
                 ret += tcrypt_test("crc32");
                 break;
 
+        case 60:
+                ret += tcrypt_test("nhpoly1305");
+                break;
+
         case 100:
                 ret += tcrypt_test("hmac(md5)");
                 break;
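Note (illustrative, not part of the patch): unlike crc32, nhpoly1305 is a keyed
shash (the NH/Poly1305 hash used by Adiantum), so a key of the size the
algorithm expects (assumed here to be NHPOLY1305_KEY_SIZE from
<crypto/nhpoly1305.h>) must be set before any digest call. A minimal sketch of
keyed-shash usage, with the key supplied by the caller:

#include <crypto/hash.h>

/* Illustration only: set a key, then compute one digest. */
static int keyed_shash_demo(struct crypto_shash *tfm,
                            const u8 *key, unsigned int keylen,
                            const u8 *data, unsigned int len, u8 *out)
{
        int ret;

        ret = crypto_shash_setkey(tfm, key, keylen);
        if (ret)
                return ret;

        {
                SHASH_DESC_ON_STACK(desc, tfm);

                desc->tfm = tfm;
                ret = crypto_shash_digest(desc, data, len, out);
        }
        return ret;
}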

From patchwork Wed Nov 16 04:13:21 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20697
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com,
    ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM,
    ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 03/24] crypto: tcrypt - reschedule
 during cycles speed tests
Date: Tue, 15 Nov 2022 22:13:21 -0600
Message-Id: <20221116041342.3841-4-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com>
 <20221116041342.3841-1-elliott@hpe.com>

commit 2af632996b89 ("crypto: tcrypt - reschedule during speed tests")
added cond_resched() calls to "Avoid RCU stalls in the case of
non-preemptible kernel and lengthy speed tests by rescheduling when
advancing from one block size to another."

It only makes those calls if the sec module parameter is used (run the
speed test for a certain number of seconds), not the default "cycles"
mode. Expand those to also run in "cycles" mode to reduce the rate of
rcu stall warnings:
  rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks:

Suggested-by: Herbert Xu
Tested-by: Taehee Yoo
Signed-off-by: Robert Elliott
---
 crypto/tcrypt.c | 44 ++++++++++++++++++--------------------------
 1 file changed, 18 insertions(+), 26 deletions(-)

diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 7a6a56751043..c025ba26b663 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -408,14 +408,13 @@ static void test_mb_aead_speed(const char *algo, int enc, int secs,
                         }
 
-                        if (secs) {
+                        if (secs)
                                 ret = test_mb_aead_jiffies(data, enc, bs,
                                                            secs, num_mb);
-                                cond_resched();
-                        } else {
+                        else
                                 ret = test_mb_aead_cycles(data, enc, bs,
                                                           num_mb);
-                        }
+                        cond_resched();
 
                         if (ret) {
                                 pr_err("%s() failed return code=%d\n", e, ret);
@@ -661,13 +660,11 @@ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
                                                        bs + (enc ? 0 : authsize),
                                                        iv);
 
-                        if (secs) {
-                                ret = test_aead_jiffies(req, enc, bs,
-                                                        secs);
-                                cond_resched();
-                        } else {
+                        if (secs)
+                                ret = test_aead_jiffies(req, enc, bs, secs);
+                        else
                                 ret = test_aead_cycles(req, enc, bs);
-                        }
+                        cond_resched();
 
                         if (ret) {
                                 pr_err("%s() failed return code=%d\n", e, ret);
@@ -917,14 +914,13 @@ static void test_ahash_speed_common(const char *algo, unsigned int secs,
 
                 ahash_request_set_crypt(req, sg, output, speed[i].plen);
 
-                if (secs) {
+                if (secs)
                         ret = test_ahash_jiffies(req, speed[i].blen,
                                                  speed[i].plen, output, secs);
-                        cond_resched();
-                } else {
+                else
                         ret = test_ahash_cycles(req, speed[i].blen,
                                                 speed[i].plen, output);
-                }
+                cond_resched();
 
                 if (ret) {
                         pr_err("hashing failed ret=%d\n", ret);
@@ -1184,15 +1180,14 @@ static void test_mb_skcipher_speed(const char *algo, int enc, int secs,
                                                            cur->sg, bs, iv);
                         }
 
-                        if (secs) {
+                        if (secs)
                                 ret = test_mb_acipher_jiffies(data, enc, bs,
                                                               secs, num_mb);
-                                cond_resched();
-                        } else {
+                        else
                                 ret = test_mb_acipher_cycles(data, enc, bs,
                                                              num_mb);
-                        }
+                        cond_resched();
 
                         if (ret) {
                                 pr_err("%s() failed flags=%x\n", e,
@@ -1401,14 +1396,11 @@ static void test_skcipher_speed(const char *algo, int enc, unsigned int secs,
 
                         skcipher_request_set_crypt(req, sg, sg, bs, iv);
 
-                        if (secs) {
-                                ret = test_acipher_jiffies(req, enc,
-                                                           bs, secs);
-                                cond_resched();
-                        } else {
-                                ret = test_acipher_cycles(req, enc,
-                                                          bs);
-                        }
+                        if (secs)
+                                ret = test_acipher_jiffies(req, enc, bs, secs);
+                        else
+                                ret = test_acipher_cycles(req, enc, bs);
+                        cond_resched();
 
                         if (ret) {
                                 pr_err("%s() failed flags=%x\n", e,
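Note (illustrative, not part of the patch): the change applies one shape to all
five speed helpers, so it can be summarized stand-alone. In the sketch below
the two measurement helpers are stand-ins, not tcrypt function names; the point
is that cond_resched() now runs after whichever timing path was taken, so the
default cycle-count mode also yields between block sizes on non-preemptible
kernels:

#include <linux/kernel.h>
#include <linux/sched.h>

/* Stand-ins for tcrypt's per-mode measurement helpers (hypothetical names). */
static int measure_one_jiffies(unsigned int bs) { return 0; }
static int measure_one_cycles(unsigned int bs) { return 0; }

/* Run one block size, then always give the scheduler a chance to run. */
static int speed_one_blocksize(unsigned int secs, unsigned int bs)
{
        int ret;

        if (secs)
                ret = measure_one_jiffies(bs);  /* wall-clock measurement */
        else
                ret = measure_one_cycles(bs);   /* cycle-count measurement */

        cond_resched();                         /* yield between block sizes */
        return ret;
}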

From patchwork Wed Nov 16 04:13:22 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20702
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com,
    ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM,
    ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 04/24] crypto: x86/sha - limit FPU preemption
Date: Tue, 15 Nov 2022 22:13:22 -0600
Message-Id: <20221116041342.3841-5-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com>
 <20221116041342.3841-1-elliott@hpe.com>

Limit the number of bytes processed between kernel_fpu_begin() and
kernel_fpu_end() calls.

Those functions call preempt_disable() and preempt_enable(), so the
CPU core is unavailable for scheduling while running. This leads to
"rcu_preempt detected expedited stalls" with stack dumps pointing to
the optimized hash function if the module is loaded and used a lot:
  rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: ...

For example, that can occur during boot with the stack trace pointing
to the sha512-x86 function if the system is set to use SHA-512 for
module signing. The call trace includes:
  module_sig_check
  mod_verify_sig
  pkcs7_verify
  pkcs7_digest
  sha512_finup
  sha512_base_do_update

Fixes: 66be89515888 ("crypto: sha1 - SSSE3 based SHA1 implementation for x86-64")
Fixes: 8275d1aa6422 ("crypto: sha256 - Create module providing optimized SHA256 routines using SSSE3, AVX or AVX2 instructions.")
Fixes: 87de4579f92d ("crypto: sha512 - Create module providing optimized SHA512 routines using SSSE3, AVX or AVX2 instructions.")
Fixes: aa031b8f702e ("crypto: x86/sha512 - load based on CPU features")
Suggested-by: Herbert Xu
Reviewed-by: Tim Chen
Signed-off-by: Robert Elliott
---
v3 simplify to while loops rather than do..while loops, avoid
   redundant checks for zero length, rename the limit macro and
   change into a const, vary the limit for each algo
---
 arch/x86/crypto/sha1_ssse3_glue.c   | 64 ++++++++++++++++++++++-------
 arch/x86/crypto/sha256_ssse3_glue.c | 64 ++++++++++++++++++++++-------
 arch/x86/crypto/sha512_ssse3_glue.c | 55 +++++++++++++++++++------
 3 files changed, 140 insertions(+), 43 deletions(-)

diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index 44340a1139e0..4bc77c84b0fb 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -26,8 +26,17 @@
 #include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+#ifdef CONFIG_AS_SHA1_NI
+static const unsigned int bytes_per_fpu_shani = 34 * 1024;
+#endif
+static const unsigned int bytes_per_fpu_avx2 = 34 * 1024;
+static const unsigned int bytes_per_fpu_avx = 30 * 1024;
+static const unsigned int bytes_per_fpu_ssse3 = 26 * 1024;
+
 static int sha1_update(struct shash_desc *desc, const u8 *data,
-                       unsigned int len, sha1_block_fn *sha1_xform)
+                       unsigned int len, unsigned int bytes_per_fpu,
+                       sha1_block_fn *sha1_xform)
 {
         struct sha1_state *sctx = shash_desc_ctx(desc);
 
@@ -41,22 +50,39 @@ static int sha1_update(struct shash_desc *desc, const u8 *data,
          */
         BUILD_BUG_ON(offsetof(struct sha1_state, state) != 0);
 
-        kernel_fpu_begin();
-        sha1_base_do_update(desc, data, len, sha1_xform);
-        kernel_fpu_end();
+        while (len) {
+                unsigned int chunk = min(len, bytes_per_fpu);
+
+                kernel_fpu_begin();
+                sha1_base_do_update(desc, data, chunk, sha1_xform);
+                kernel_fpu_end();
+
+                len -= chunk;
+                data += chunk;
+        }
 
         return 0;
 }
 
 static int sha1_finup(struct shash_desc *desc, const u8 *data,
-                      unsigned int len, u8 *out, sha1_block_fn *sha1_xform)
+                      unsigned int len, unsigned int bytes_per_fpu,
+                      u8 *out, sha1_block_fn *sha1_xform)
 {
         if (!crypto_simd_usable())
                 return crypto_sha1_finup(desc, data, len, out);
 
+        while (len) {
+                unsigned int chunk = min(len, bytes_per_fpu);
+
+                kernel_fpu_begin();
+                sha1_base_do_update(desc, data, chunk, sha1_xform);
+                kernel_fpu_end();
+
+                len -= chunk;
+                data += chunk;
+        }
+
         kernel_fpu_begin();
-        if (len)
-                sha1_base_do_update(desc, data, len, sha1_xform);
         sha1_base_do_finalize(desc, sha1_xform);
         kernel_fpu_end();
 
@@ -69,13 +95,15 @@ asmlinkage void sha1_transform_ssse3(struct sha1_state *state,
 static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
 {
-        return sha1_update(desc, data, len, sha1_transform_ssse3);
+        return sha1_update(desc, data, len, bytes_per_fpu_ssse3,
+                           sha1_transform_ssse3);
 }
 
 static int sha1_ssse3_finup(struct shash_desc *desc, const u8 *data,
                             unsigned int len, u8 *out)
 {
-        return sha1_finup(desc, data, len, out, sha1_transform_ssse3);
+        return sha1_finup(desc, data, len, bytes_per_fpu_ssse3, out,
+                          sha1_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -119,13 +147,15 @@ asmlinkage void sha1_transform_avx(struct sha1_state *state,
 static int sha1_avx_update(struct shash_desc *desc, const u8 *data,
                            unsigned int len)
 {
-        return sha1_update(desc, data, len, sha1_transform_avx);
+        return sha1_update(desc, data, len, bytes_per_fpu_avx,
+                           sha1_transform_avx);
 }
 
 static int sha1_avx_finup(struct shash_desc *desc, const u8 *data,
                           unsigned int len, u8 *out)
 {
-        return sha1_finup(desc, data, len, out, sha1_transform_avx);
+        return sha1_finup(desc, data, len, bytes_per_fpu_avx, out,
+                          sha1_transform_avx);
 }
 
 static int sha1_avx_final(struct shash_desc *desc, u8 *out)
@@ -201,13 +231,15 @@ static void sha1_apply_transform_avx2(struct sha1_state *state,
 static int sha1_avx2_update(struct shash_desc *desc, const u8 *data,
                             unsigned int len)
 {
-        return sha1_update(desc, data, len, sha1_apply_transform_avx2);
+        return sha1_update(desc, data, len, bytes_per_fpu_avx2,
+                           sha1_apply_transform_avx2);
 }
 
 static int sha1_avx2_finup(struct shash_desc *desc, const u8 *data,
                            unsigned int len, u8 *out)
 {
-        return sha1_finup(desc, data, len, out, sha1_apply_transform_avx2);
+        return sha1_finup(desc, data, len, bytes_per_fpu_avx2, out,
+                          sha1_apply_transform_avx2);
 }
 
 static int sha1_avx2_final(struct shash_desc *desc, u8 *out)
@@ -251,13 +283,15 @@ asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data,
 static int sha1_ni_update(struct shash_desc *desc, const u8 *data,
                           unsigned int len)
 {
-        return sha1_update(desc, data, len, sha1_ni_transform);
+        return sha1_update(desc, data, len, bytes_per_fpu_shani,
+                           sha1_ni_transform);
 }
 
 static int sha1_ni_finup(struct shash_desc *desc, const u8 *data,
                          unsigned int len, u8 *out)
 {
-        return sha1_finup(desc, data, len, out, sha1_ni_transform);
+        return sha1_finup(desc, data, len, bytes_per_fpu_shani, out,
+                          sha1_ni_transform);
 }
 
 static int sha1_ni_final(struct shash_desc *desc, u8 *out)
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index 3a5f6be7dbba..cdcdf5a80ffe 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -40,11 +40,20 @@
 #include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+#ifdef CONFIG_AS_SHA256_NI
+static const unsigned int bytes_per_fpu_shani = 13 * 1024;
+#endif
+static const unsigned int bytes_per_fpu_avx2 = 13 * 1024;
+static const unsigned int bytes_per_fpu_avx = 11 * 1024;
+static const unsigned int bytes_per_fpu_ssse3 = 11 * 1024;
+
 asmlinkage void sha256_transform_ssse3(struct sha256_state *state,
                                        const u8 *data, int blocks);
 
 static int _sha256_update(struct shash_desc *desc, const u8 *data,
-                          unsigned int len, sha256_block_fn *sha256_xform)
+                          unsigned int len, unsigned int bytes_per_fpu,
+                          sha256_block_fn *sha256_xform)
 {
         struct sha256_state *sctx = shash_desc_ctx(desc);
 
@@ -58,22 +67,39 @@ static int _sha256_update(struct shash_desc *desc, const u8 *data,
          */
         BUILD_BUG_ON(offsetof(struct sha256_state, state) != 0);
 
-        kernel_fpu_begin();
-        sha256_base_do_update(desc, data, len, sha256_xform);
-        kernel_fpu_end();
+        while (len) {
+                unsigned int chunk = min(len, bytes_per_fpu);
+
+                kernel_fpu_begin();
+                sha256_base_do_update(desc, data, chunk, sha256_xform);
+                kernel_fpu_end();
+
+                len -= chunk;
+                data += chunk;
+        }
 
         return 0;
 }
 
 static int sha256_finup(struct shash_desc *desc, const u8 *data,
-        unsigned int len, u8 *out, sha256_block_fn *sha256_xform)
+        unsigned int len, unsigned int bytes_per_fpu,
+        u8 *out, sha256_block_fn *sha256_xform)
 {
         if (!crypto_simd_usable())
                 return crypto_sha256_finup(desc, data, len, out);
 
+        while (len) {
+                unsigned int chunk = min(len, bytes_per_fpu);
+
+                kernel_fpu_begin();
+                sha256_base_do_update(desc, data, chunk, sha256_xform);
+                kernel_fpu_end();
+
+                len -= chunk;
+                data += chunk;
+        }
+
         kernel_fpu_begin();
-        if (len)
-                sha256_base_do_update(desc, data, len, sha256_xform);
         sha256_base_do_finalize(desc, sha256_xform);
         kernel_fpu_end();
 
@@ -83,13 +109,15 @@ static int sha256_finup(struct shash_desc *desc, const u8 *data,
 static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data,
                                unsigned int len)
 {
-        return _sha256_update(desc, data, len, sha256_transform_ssse3);
+        return _sha256_update(desc, data, len, bytes_per_fpu_ssse3,
+                              sha256_transform_ssse3);
 }
 
 static int sha256_ssse3_finup(struct shash_desc *desc, const u8 *data,
                               unsigned int len, u8 *out)
 {
-        return sha256_finup(desc, data, len, out, sha256_transform_ssse3);
+        return sha256_finup(desc, data, len, bytes_per_fpu_ssse3,
+                            out, sha256_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -149,13 +177,15 @@ asmlinkage void sha256_transform_avx(struct sha256_state *state,
 static int sha256_avx_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
 {
-        return _sha256_update(desc, data, len, sha256_transform_avx);
+        return _sha256_update(desc, data, len, bytes_per_fpu_avx,
+                              sha256_transform_avx);
 }
 
 static int sha256_avx_finup(struct shash_desc *desc, const u8 *data,
                             unsigned int len, u8 *out)
 {
-        return sha256_finup(desc, data, len, out, sha256_transform_avx);
+        return sha256_finup(desc, data, len, bytes_per_fpu_avx,
+                            out, sha256_transform_avx);
 }
 
 static int sha256_avx_final(struct shash_desc *desc, u8 *out)
@@ -225,13 +255,15 @@ asmlinkage void sha256_transform_rorx(struct sha256_state *state,
 static int sha256_avx2_update(struct shash_desc *desc, const u8 *data,
                               unsigned int len)
 {
-        return _sha256_update(desc, data, len, sha256_transform_rorx);
+        return _sha256_update(desc, data, len, bytes_per_fpu_avx2,
+                              sha256_transform_rorx);
 }
 
 static int sha256_avx2_finup(struct shash_desc *desc, const u8 *data,
                              unsigned int len, u8 *out)
 {
-        return sha256_finup(desc, data, len, out, sha256_transform_rorx);
+        return sha256_finup(desc, data, len, bytes_per_fpu_avx2,
+                            out, sha256_transform_rorx);
 }
 
 static int sha256_avx2_final(struct shash_desc *desc, u8 *out)
@@ -300,13 +332,15 @@ asmlinkage void sha256_ni_transform(struct sha256_state *digest,
 static int sha256_ni_update(struct shash_desc *desc, const u8 *data,
                             unsigned int len)
 {
-        return _sha256_update(desc, data, len, sha256_ni_transform);
+        return _sha256_update(desc, data, len, bytes_per_fpu_shani,
+                              sha256_ni_transform);
 }
 
 static int sha256_ni_finup(struct shash_desc *desc, const u8 *data,
                            unsigned int len, u8 *out)
 {
-        return sha256_finup(desc, data, len, out, sha256_ni_transform);
+        return sha256_finup(desc, data, len, bytes_per_fpu_shani,
+                            out, sha256_ni_transform);
 }
 
 static int sha256_ni_final(struct shash_desc *desc, u8 *out)
diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index 6d3b85e53d0e..c7036cfe2a7e 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -39,11 +39,17 @@
 #include <asm/cpu_device_id.h>
 #include <asm/simd.h>
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu_avx2 = 20 * 1024;
+static const unsigned int bytes_per_fpu_avx = 17 * 1024;
+static const unsigned int bytes_per_fpu_ssse3 = 17 * 1024;
+
 asmlinkage void sha512_transform_ssse3(struct sha512_state *state,
                                        const u8 *data, int blocks);
 
 static int sha512_update(struct shash_desc *desc, const u8 *data,
-                         unsigned int len, sha512_block_fn *sha512_xform)
+                         unsigned int len, unsigned int bytes_per_fpu,
+                         sha512_block_fn *sha512_xform)
 {
         struct sha512_state *sctx = shash_desc_ctx(desc);
 
@@ -57,22 +63,39 @@ static int sha512_update(struct shash_desc *desc, const u8 *data,
          */
         BUILD_BUG_ON(offsetof(struct sha512_state, state) != 0);
 
-        kernel_fpu_begin();
-        sha512_base_do_update(desc, data, len, sha512_xform);
-        kernel_fpu_end();
+        while (len) {
+                unsigned int chunk = min(len, bytes_per_fpu);
+
+                kernel_fpu_begin();
+                sha512_base_do_update(desc, data, chunk, sha512_xform);
+                kernel_fpu_end();
+
+                len -= chunk;
+                data += chunk;
+        }
 
         return 0;
 }
 
 static int sha512_finup(struct shash_desc *desc, const u8 *data,
-                        unsigned int len, u8 *out, sha512_block_fn *sha512_xform)
+                        unsigned int len, unsigned int bytes_per_fpu,
+                        u8 *out, sha512_block_fn *sha512_xform)
 {
         if (!crypto_simd_usable())
                 return crypto_sha512_finup(desc, data, len, out);
 
+        while (len) {
+                unsigned int chunk = min(len, bytes_per_fpu);
+
+                kernel_fpu_begin();
+                sha512_base_do_update(desc, data, chunk, sha512_xform);
+                kernel_fpu_end();
+
+                len -= chunk;
+                data += chunk;
+        }
+
         kernel_fpu_begin();
-        if (len)
-                sha512_base_do_update(desc, data, len, sha512_xform);
         sha512_base_do_finalize(desc, sha512_xform);
         kernel_fpu_end();
 
@@ -82,13 +105,15 @@ static int sha512_finup(struct shash_desc *desc, const u8 *data,
 static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data,
                                unsigned int len)
 {
-        return sha512_update(desc, data, len, sha512_transform_ssse3);
+        return sha512_update(desc, data, len, bytes_per_fpu_ssse3,
+                             sha512_transform_ssse3);
 }
 
 static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data,
                               unsigned int len, u8 *out)
 {
-        return sha512_finup(desc, data, len, out, sha512_transform_ssse3);
+        return sha512_finup(desc, data, len, bytes_per_fpu_ssse3,
+                            out, sha512_transform_ssse3);
 }
 
 /* Add padding and return the message digest. */
@@ -158,13 +183,15 @@ static bool avx_usable(void)
 static int sha512_avx_update(struct shash_desc *desc, const u8 *data,
                              unsigned int len)
 {
-        return sha512_update(desc, data, len, sha512_transform_avx);
+        return sha512_update(desc, data, len, bytes_per_fpu_avx,
+                             sha512_transform_avx);
 }
 
 static int sha512_avx_finup(struct shash_desc *desc, const u8 *data,
                             unsigned int len, u8 *out)
 {
-        return sha512_finup(desc, data, len, out, sha512_transform_avx);
+        return sha512_finup(desc, data, len, bytes_per_fpu_avx,
+                            out, sha512_transform_avx);
 }
 
 /* Add padding and return the message digest. */
@@ -224,13 +251,15 @@ asmlinkage void sha512_transform_rorx(struct sha512_state *state,
 static int sha512_avx2_update(struct shash_desc *desc, const u8 *data,
                               unsigned int len)
 {
-        return sha512_update(desc, data, len, sha512_transform_rorx);
+        return sha512_update(desc, data, len, bytes_per_fpu_avx2,
+                             sha512_transform_rorx);
 }
 
 static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data,
                              unsigned int len, u8 *out)
 {
-        return sha512_finup(desc, data, len, out, sha512_transform_rorx);
+        return sha512_finup(desc, data, len, bytes_per_fpu_avx2,
+                            out, sha512_transform_rorx);
 }
 
 /* Add padding and return the message digest. */
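Note (illustrative, not part of the patch): all three glue files above now
share one structure - the input is walked in bytes_per_fpu-sized chunks, with
kernel_fpu_begin()/kernel_fpu_end() bracketing each chunk so preemption is
re-enabled between chunks. A minimal sketch of that pattern, separated from the
SHA-specific glue; process_blocks() is a stand-in for the sha*_base_do_update()
calls:

#include <linux/types.h>
#include <linux/minmax.h>
#include <asm/fpu/api.h>

/* Stand-in for the per-algorithm block-processing call. */
static void process_blocks(const u8 *data, unsigned int len) { }

/* Never hold kernel_fpu_begin()/kernel_fpu_end() (and therefore a
 * preempt-disabled section) for more than bytes_per_fpu bytes at a time. */
static void update_in_fpu_chunks(const u8 *data, unsigned int len,
                                 unsigned int bytes_per_fpu)
{
        while (len) {
                unsigned int chunk = min(len, bytes_per_fpu);

                kernel_fpu_begin();
                process_blocks(data, chunk);
                kernel_fpu_end();       /* preemption point between chunks */

                data += chunk;
                len -= chunk;
        }
}

Per the changelog, the per-ISA limits (for example 34 KiB for SHA-1 with SHA-NI
versus 26 KiB for SSSE3) were chosen separately for each implementation rather
than using one shared macro.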

From patchwork Wed Nov 16 04:13:23 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20703
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com,
    ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM,
    ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 05/24] crypto: x86/crc - limit FPU preemption
Date: Tue, 15 Nov 2022 22:13:23 -0600
Message-Id: <20221116041342.3841-6-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com>
 <20221116041342.3841-1-elliott@hpe.com>

Limit the number of bytes processed between kernel_fpu_begin() and
kernel_fpu_end() calls.

Those functions call preempt_disable() and preempt_enable(), so the
CPU core is unavailable for scheduling while running, leading to:
  rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: ...

Fixes: 78c37d191dd6 ("crypto: crc32 - add crc32 pclmulqdq implementation and wrappers for table implementation")
Fixes: 6a8ce1ef3940 ("crypto: crc32c - Optimize CRC32C calculation with PCLMULQDQ instruction")
Fixes: 0b95a7f85718 ("crypto: crct10dif - Glue code to cast accelerated CRCT10DIF assembly as a crypto transform")
Suggested-by: Herbert Xu
Signed-off-by: Robert Elliott
---
v3 use while loops and static int, simplify one of the loop
   structures, add algorithm-specific limits, use local stack
   variable in crc32 finup rather than the context pointer like
   update uses
---
 arch/x86/crypto/crc32-pclmul_asm.S      |  6 +--
 arch/x86/crypto/crc32-pclmul_glue.c     | 27 +++++++++----
 arch/x86/crypto/crc32c-intel_glue.c     | 52 ++++++++++++++++++-------
 arch/x86/crypto/crct10dif-pclmul_glue.c | 48 +++++++++++++++++------
 4 files changed, 99 insertions(+), 34 deletions(-)

diff --git a/arch/x86/crypto/crc32-pclmul_asm.S b/arch/x86/crypto/crc32-pclmul_asm.S
index ca53e96996ac..9abd861636c3 100644
--- a/arch/x86/crypto/crc32-pclmul_asm.S
+++ b/arch/x86/crypto/crc32-pclmul_asm.S
@@ -72,15 +72,15 @@
 .text
 /**
  *      Calculate crc32
- *      BUF - buffer (16 bytes aligned)
- *      LEN - sizeof buffer (16 bytes aligned), LEN should be grater than 63
+ *      BUF - buffer - must be 16 bytes aligned
+ *      LEN - sizeof buffer - must be multiple of 16 bytes and greater than 63
  *      CRC - initial crc32
  *      return %eax crc32
  *      uint crc32_pclmul_le_16(unsigned char const *buffer,
  *                              size_t len, uint crc32)
  */
-SYM_FUNC_START(crc32_pclmul_le_16) /* buffer and buffer size are 16 bytes aligned */
+SYM_FUNC_START(crc32_pclmul_le_16)
         movdqa  (BUF), %xmm1
         movdqa  0x10(BUF), %xmm2
         movdqa  0x20(BUF), %xmm3
diff --git a/arch/x86/crypto/crc32-pclmul_glue.c b/arch/x86/crypto/crc32-pclmul_glue.c
index 98cf3b4e4c9f..df3dbc754818 100644
--- a/arch/x86/crypto/crc32-pclmul_glue.c
+++ b/arch/x86/crypto/crc32-pclmul_glue.c
@@ -46,6 +46,9 @@
 #define SCALE_F                 16L     /* size of xmm register */
 #define SCALE_F_MASK            (SCALE_F - 1)
 
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 655 * 1024;
+
 u32 crc32_pclmul_le_16(unsigned char const *buffer, size_t len, u32 crc32);
 
 static u32 __attribute__((pure))
@@ -55,6 +58,9 @@ static u32 __attribute__((pure))
         unsigned int iremainder;
         unsigned int prealign;
 
+        BUILD_BUG_ON(bytes_per_fpu < PCLMUL_MIN_LEN);
+        BUILD_BUG_ON(bytes_per_fpu & SCALE_F_MASK);
+
         if (len < PCLMUL_MIN_LEN + SCALE_F_MASK || !crypto_simd_usable())
                 return crc32_le(crc, p, len);
 
@@ -70,12 +76,19 @@ static u32 __attribute__((pure))
         iquotient = len & (~SCALE_F_MASK);
         iremainder = len & SCALE_F_MASK;
 
-        kernel_fpu_begin();
-        crc = crc32_pclmul_le_16(p, iquotient, crc);
-        kernel_fpu_end();
+        while (iquotient >= PCLMUL_MIN_LEN) {
+                unsigned int chunk = min(iquotient, bytes_per_fpu);
+
+                kernel_fpu_begin();
+                crc = crc32_pclmul_le_16(p, chunk, crc);
+                kernel_fpu_end();
+
+                iquotient -= chunk;
+                p += chunk;
+        }
 
-        if (iremainder)
-                crc = crc32_le(crc, p + iquotient, iremainder);
+        if (iquotient || iremainder)
+                crc = crc32_le(crc, p, iquotient + iremainder);
 
         return crc;
 }
@@ -120,8 +133,8 @@ static int crc32_pclmul_update(struct shash_desc *desc, const u8 *data,
 }
 
 /* No final XOR 0xFFFFFFFF, like crc32_le */
-static int __crc32_pclmul_finup(u32 *crcp, const u8 *data, unsigned int len,
-                                u8 *out)
+static int __crc32_pclmul_finup(const u32 *crcp, const u8 *data,
+                                unsigned int len, u8 *out)
 {
         *(__le32 *)out = cpu_to_le32(crc32_pclmul_le(*crcp, data, len));
         return 0;
diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c
index feccb5254c7e..f08ed68ec93d 100644
--- a/arch/x86/crypto/crc32c-intel_glue.c
+++ b/arch/x86/crypto/crc32c-intel_glue.c
@@ -45,7 +45,10 @@ asmlinkage unsigned int crc_pcl(const u8 *buffer, int len,
                                 unsigned int crc_init);
 #endif /* CONFIG_X86_64 */
 
-static u32 crc32c_intel_le_hw_byte(u32 crc, unsigned char const *data, size_t length)
+/* avoid kernel_fpu_begin/end scheduler/rcu stalls */
+static const unsigned int bytes_per_fpu = 868 * 1024;
+
+static u32 crc32c_intel_le_hw_byte(u32 crc, const unsigned char *data, size_t length)
 {
         while (length--) {
                 asm("crc32b %1, %0"
@@ -56,7 +59,7 @@ static u32 crc32c_intel_le_hw_byte(u32 crc, unsigned char const *data, size_t le
         return crc;
 }
 
-static u32 __pure crc32c_intel_le_hw(u32 crc, unsigned char const *p, size_t len)
+static u32 __pure crc32c_intel_le_hw(u32 crc, const unsigned char *p, size_t len)
 {
         unsigned int iquotient = len / SCALE_F;
         unsigned int iremainder = len % SCALE_F;
@@ -110,8 +113,8 @@ static int crc32c_intel_update(struct shash_desc *desc, const u8 *data,
         return 0;
 }
 
-static int __crc32c_intel_finup(u32 *crcp, const u8 *data, unsigned int len,
-                                u8 *out)
+static int __crc32c_intel_finup(const u32 *crcp, const u8 *data,
+                                unsigned int len, u8 *out)
 {
         *(__le32 *)out = ~cpu_to_le32(crc32c_intel_le_hw(*crcp, data, len));
         return 0;
+ + BUILD_BUG_ON(bytes_per_fpu < CRC32C_PCL_BREAKEVEN); + BUILD_BUG_ON(bytes_per_fpu % SCALE_F); + if (len >= CRC32C_PCL_BREAKEVEN && crypto_simd_usable()) { - kernel_fpu_begin(); - *(__le32 *)out = ~cpu_to_le32(crc_pcl(data, len, *crcp)); - kernel_fpu_end(); + while (len) { + unsigned int chunk = min(len, bytes_per_fpu); + + kernel_fpu_begin(); + crc = crc_pcl(data, chunk, crc); + kernel_fpu_end(); + + len -= chunk; + data += chunk; + } + *(__le32 *)out = ~cpu_to_le32(crc); } else *(__le32 *)out = - ~cpu_to_le32(crc32c_intel_le_hw(*crcp, data, len)); + ~cpu_to_le32(crc32c_intel_le_hw(crc, data, len)); return 0; } diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c index 71291d5af9f4..4f6b8c727d88 100644 --- a/arch/x86/crypto/crct10dif-pclmul_glue.c +++ b/arch/x86/crypto/crct10dif-pclmul_glue.c @@ -34,6 +34,11 @@ #include #include +#define PCLMUL_MIN_LEN 16U /* minimum size of buffer for crc_t10dif_pcl */ + +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */ +static const unsigned int bytes_per_fpu = 614 * 1024; + asmlinkage u16 crc_t10dif_pcl(u16 init_crc, const u8 *buf, size_t len); struct chksum_desc_ctx { @@ -54,11 +59,21 @@ static int chksum_update(struct shash_desc *desc, const u8 *data, { struct chksum_desc_ctx *ctx = shash_desc_ctx(desc); - if (length >= 16 && crypto_simd_usable()) { - kernel_fpu_begin(); - ctx->crc = crc_t10dif_pcl(ctx->crc, data, length); - kernel_fpu_end(); - } else + BUILD_BUG_ON(bytes_per_fpu < PCLMUL_MIN_LEN); + + if (length >= PCLMUL_MIN_LEN && crypto_simd_usable()) { + while (length >= PCLMUL_MIN_LEN) { + unsigned int chunk = min(length, bytes_per_fpu); + + kernel_fpu_begin(); + ctx->crc = crc_t10dif_pcl(ctx->crc, data, chunk); + kernel_fpu_end(); + + length -= chunk; + data += chunk; + } + } + if (length) ctx->crc = crc_t10dif_generic(ctx->crc, data, length); return 0; } @@ -73,12 +88,23 @@ static int chksum_final(struct shash_desc *desc, u8 *out) static int __chksum_finup(__u16 crc, const u8 *data, unsigned int len, u8 *out) { - if (len >= 16 && crypto_simd_usable()) { - kernel_fpu_begin(); - *(__u16 *)out = crc_t10dif_pcl(crc, data, len); - kernel_fpu_end(); - } else - *(__u16 *)out = crc_t10dif_generic(crc, data, len); + BUILD_BUG_ON(bytes_per_fpu < PCLMUL_MIN_LEN); + + if (len >= PCLMUL_MIN_LEN && crypto_simd_usable()) { + while (len >= PCLMUL_MIN_LEN) { + unsigned int chunk = min(len, bytes_per_fpu); + + kernel_fpu_begin(); + crc = crc_t10dif_pcl(crc, data, chunk); + kernel_fpu_end(); + + len -= chunk; + data += chunk; + } + } + if (len) + crc = crc_t10dif_generic(crc, data, len); + *(__u16 *)out = crc; return 0; } From patchwork Wed Nov 16 04:13:24 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20708 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3084451wru; Tue, 15 Nov 2022 20:17:39 -0800 (PST) X-Google-Smtp-Source: AA0mqf6B+6/KA2A2COqD0+PyoQPqz1k0Ea8mffJ5wBL9tpOdFvQxBzzyo6dno4CXTbdHS84iHmAj X-Received: by 2002:a63:1549:0:b0:46f:d2d4:bae2 with SMTP id 9-20020a631549000000b0046fd2d4bae2mr18342823pgv.506.1668572258660; Tue, 15 Nov 2022 20:17:38 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572258; cv=none; d=google.com; s=arc-20160816; b=D97JpyMmDeEqb2/3DBiWSPMkPk4S5FungyAQY2XHmrUCXzFkWWmOtkmwg7puxYsD0N JM+1to7t230Q1ro+yi9pVoq/jU+Grp3lJft5i2br0WG6RAyNl4iTbQKUvBBy9WgF5NDt 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 06/24] crypto: x86/sm3 - limit FPU preemption
Date: Tue, 15 Nov 2022 22:13:24 -0600
Message-Id: <20221116041342.3841-7-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>

Limit the number of bytes processed between kernel_fpu_begin() and kernel_fpu_end() calls. Those functions call preempt_disable() and preempt_enable(), so the CPU core is unavailable for scheduling while running, causing: rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: ...
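A minimal stand-alone C sketch of the chunking pattern applied below, assuming stub fpu_begin(), fpu_end(), and transform() functions in place of kernel_fpu_begin(), kernel_fpu_end(), and sm3_transform_avx(), and reusing the 11 KiB bytes_per_fpu value from the diff; it models only the control flow that re-enables preemption between chunks, not the real kernel code:

#include <stdio.h>
#include <string.h>

/* Stand-ins for kernel_fpu_begin()/kernel_fpu_end(), which disable and
 * re-enable preemption around SIMD register use in the kernel. */
static void fpu_begin(void) { }
static void fpu_end(void) { }

/* Stand-in for the SIMD transform (sm3_transform_avx in the patch). */
static void transform(unsigned char *digest, const unsigned char *data,
		      unsigned int len)
{
	unsigned int i;

	for (i = 0; i < len; i++)
		digest[i % 32] ^= data[i];
}

/* Upper bound on bytes handled inside one FPU section (11 KiB here). */
static const unsigned int bytes_per_fpu = 11 * 1024;

static void update(unsigned char *digest, const unsigned char *data,
		   unsigned int len)
{
	while (len) {
		unsigned int chunk = len < bytes_per_fpu ? len : bytes_per_fpu;

		fpu_begin();		/* preemption off for one chunk only */
		transform(digest, data, chunk);
		fpu_end();		/* scheduler and RCU can run here */

		data += chunk;
		len -= chunk;
	}
}

int main(void)
{
	static unsigned char buf[1024 * 1024];	/* 1 MiB input */
	unsigned char digest[32] = { 0 };

	memset(buf, 0xa5, sizeof(buf));
	update(digest, buf, sizeof(buf));
	printf("first digest byte: %02x\n", (unsigned int)digest[0]);
	return 0;
}

This builds and runs in user space with a plain cc invocation; the kernel patches differ only in using the real FPU begin/end primitives and per-algorithm block constraints.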
Fixes: 930ab34d906d ("crypto: x86/sm3 - add AVX assembly implementation") Suggested-by: Herbert Xu Signed-off-by: Robert Elliott --- v3 use while loop, static int --- arch/x86/crypto/sm3_avx_glue.c | 35 ++++++++++++++++++++++++++++------ 1 file changed, 29 insertions(+), 6 deletions(-) diff --git a/arch/x86/crypto/sm3_avx_glue.c b/arch/x86/crypto/sm3_avx_glue.c index 661b6f22ffcd..483aaed996ba 100644 --- a/arch/x86/crypto/sm3_avx_glue.c +++ b/arch/x86/crypto/sm3_avx_glue.c @@ -17,6 +17,9 @@ #include #include +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */ +static const unsigned int bytes_per_fpu = 11 * 1024; + asmlinkage void sm3_transform_avx(struct sm3_state *state, const u8 *data, int nblocks); @@ -25,8 +28,10 @@ static int sm3_avx_update(struct shash_desc *desc, const u8 *data, { struct sm3_state *sctx = shash_desc_ctx(desc); + BUILD_BUG_ON(bytes_per_fpu == 0); + if (!crypto_simd_usable() || - (sctx->count % SM3_BLOCK_SIZE) + len < SM3_BLOCK_SIZE) { + (sctx->count % SM3_BLOCK_SIZE) + len < SM3_BLOCK_SIZE) { sm3_update(sctx, data, len); return 0; } @@ -37,9 +42,16 @@ static int sm3_avx_update(struct shash_desc *desc, const u8 *data, */ BUILD_BUG_ON(offsetof(struct sm3_state, state) != 0); - kernel_fpu_begin(); - sm3_base_do_update(desc, data, len, sm3_transform_avx); - kernel_fpu_end(); + while (len) { + unsigned int chunk = min(len, bytes_per_fpu); + + kernel_fpu_begin(); + sm3_base_do_update(desc, data, chunk, sm3_transform_avx); + kernel_fpu_end(); + + len -= chunk; + data += chunk; + } return 0; } @@ -47,6 +59,8 @@ static int sm3_avx_update(struct shash_desc *desc, const u8 *data, static int sm3_avx_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { + BUILD_BUG_ON(bytes_per_fpu == 0); + if (!crypto_simd_usable()) { struct sm3_state *sctx = shash_desc_ctx(desc); @@ -57,9 +71,18 @@ static int sm3_avx_finup(struct shash_desc *desc, const u8 *data, return 0; } + while (len) { + unsigned int chunk = min(len, bytes_per_fpu); + + kernel_fpu_begin(); + sm3_base_do_update(desc, data, chunk, sm3_transform_avx); + kernel_fpu_end(); + + len -= chunk; + data += chunk; + } + kernel_fpu_begin(); - if (len) - sm3_base_do_update(desc, data, len, sm3_transform_avx); sm3_base_do_finalize(desc, sm3_transform_avx); kernel_fpu_end(); From patchwork Wed Nov 16 04:13:25 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20696 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3083992wru; Tue, 15 Nov 2022 20:15:52 -0800 (PST) X-Google-Smtp-Source: AA0mqf4XO0BmqWbnILVIKcR5cn8R+Yr+tGEO9LNOhoFhvVRB6AEJlMbuAhXYxFvBjQeI1mZ03R2P X-Received: by 2002:aa7:8546:0:b0:56c:dba2:30b with SMTP id y6-20020aa78546000000b0056cdba2030bmr21438450pfn.72.1668572151732; Tue, 15 Nov 2022 20:15:51 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572151; cv=none; d=google.com; s=arc-20160816; b=EywOc46gKl2HS5ig1kUlPk2S/f2/j9gYtW1TfTO7hhRVAkvBBeHRTzBoyjMdVoN46A 2Ic+acbvnyv3lz1+uxrdiySO8v8GQDglAQ4GxXL29Qbw9CUGPGf+xitkrLklOwKJyUPA QvwxLUcZ/mb5lBI8Xkj8p4eVtk3eKvEmSiJ6I207VPOTu2ZL/MxEReUU2XMMzX6Rimin YAtj6bVO4Q5MZB/7PFCRFTILqr5u226T+vi5iNIAYTexIkDZSUtmKVzQYh4qSZjdKLCf A4JgZwwoBgZ7wJs3bBrZ6yq0G6IDldGoh7L26QyljdqYsS/RzrjEEsVmphK2KjNE8lKp wgKg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 07/24] crypto: x86/ghash - use u8 rather than char
Date: Tue, 15 Nov 2022 22:13:25 -0600
Message-Id: <20221116041342.3841-8-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>

Use more consistent, unambiguous types for the source and destination buffer pointer arguments.
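For background on why u8 is the less ambiguous choice (an illustration only, not from the patch): plain char may be signed or unsigned depending on the ABI, so bytes with the top bit set can sign-extend when promoted to int, whereas u8 (unsigned char) never does. A small user-space example, whose exact output for the char case is implementation-defined:

#include <stdio.h>

typedef unsigned char u8;	/* kernel-style alias, local to this example */

int main(void)
{
	char c = (char)0x80;	/* -128 on signed-char ABIs such as x86 */
	u8 b = 0x80;		/* always 128 */

	/* Integer promotion carries the signedness of char with it. */
	printf("char 0x80 promoted to int: %d\n", (int)c);
	printf("u8   0x80 promoted to int: %d\n", (int)b);

	/* Shifts and comparisons on the promoted value then differ. */
	printf("char 0x80 >> 4: %d\n", c >> 4);	/* typically -8 */
	printf("u8   0x80 >> 4: %d\n", b >> 4);	/* always 8 */
	return 0;
}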
Signed-off-by: Robert Elliott --- arch/x86/crypto/ghash-clmulni-intel_asm.S | 4 ++-- arch/x86/crypto/ghash-clmulni-intel_glue.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S index 2bf871899920..c7b8542facee 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_asm.S +++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S @@ -88,7 +88,7 @@ SYM_FUNC_START_LOCAL(__clmul_gf128mul_ble) RET SYM_FUNC_END(__clmul_gf128mul_ble) -/* void clmul_ghash_mul(char *dst, const u128 *shash) */ +/* void clmul_ghash_mul(u8 *dst, const u128 *shash) */ SYM_FUNC_START(clmul_ghash_mul) FRAME_BEGIN movups (%rdi), DATA @@ -103,7 +103,7 @@ SYM_FUNC_START(clmul_ghash_mul) SYM_FUNC_END(clmul_ghash_mul) /* - * void clmul_ghash_update(char *dst, const char *src, unsigned int srclen, + * void clmul_ghash_update(u8 *dst, const u8 *src, unsigned int srclen, * const u128 *shash); */ SYM_FUNC_START(clmul_ghash_update) diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c index 1f1a95f3dd0c..e996627c6583 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_glue.c +++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c @@ -23,9 +23,9 @@ #define GHASH_BLOCK_SIZE 16 #define GHASH_DIGEST_SIZE 16 -void clmul_ghash_mul(char *dst, const u128 *shash); +void clmul_ghash_mul(u8 *dst, const u128 *shash); -void clmul_ghash_update(char *dst, const char *src, unsigned int srclen, +void clmul_ghash_update(u8 *dst, const u8 *src, unsigned int srclen, const u128 *shash); struct ghash_async_ctx { From patchwork Wed Nov 16 04:13:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20698 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3084023wru; Tue, 15 Nov 2022 20:15:58 -0800 (PST) X-Google-Smtp-Source: AA0mqf4PA4D/dEtcx5zLxeqkbSeSIYY2fco2zMe7wy2n+/govpfdAFNYJlTZu+Xsl3PZoyZSJ1Lc X-Received: by 2002:a17:903:120a:b0:186:9849:5c1a with SMTP id l10-20020a170903120a00b0018698495c1amr7168570plh.110.1668572158685; Tue, 15 Nov 2022 20:15:58 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572158; cv=none; d=google.com; s=arc-20160816; b=olP4MtvZIxE6xlQHZEcRgEn9D3ZTZS8XSF7jPG4/i0rtIXQIDrsLQpJzEu5KtTRgo/ a+XYDesEVqknk4YYpOWM2YGYxC78OEqaBYGGsT8zinJHaiYxuC0vDUUh+rSI1Y5o3Mmq O6Sa8+q8Flk31XWMah/yV0OQAtocrXA9qmRcwYKZfsMYs5vl/IimMfSVKD4XbWwVhGbE y1hpUA4mqZvG5BkLMx2pcDn9kF0v18S/SEZ4jOGf+cM72Nk/fjsARvaek1oz9Xq/bzmJ fTTWYHipMxt6R0tqWIDYYOfHzFyvJJA2uDpawJsVXFn9bz5SikHM+0/ebRbcBhN33wV2 vIYw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=yGY/aFR7vSy6xzempavBAqGCddr6XsRxi33pB4WDD80=; b=gi2mCcDIPntXOaDmu3cNtEDyaAB+AGJAIJ5wPrnQ6FPSBw6KZh1KYbmVyFUYda121D vu2tNOmOgOmXztBN3Yo11eGZtwgjw7KgvCKdWYqjsZ/k/Wn8MS5D8da9dIYvaF6vYBUR mZiigFTyU4EvDv++00nnfalUF6T5VAG+8VYb97/3EdXjMMxhg6oQQDtUaMmf7HCIUU1h aE/yz0zOU8i6C49M1LBx588baNhP/jSSStd8EfcGCbKAtqhNPd2q13XTR7Se5P+7zwvt bZZ0id0js5xvklkmjrJLnHZwYqlaWOaIZc4XXnuMjrEeBrZIsT+QJrDruAZnVciXH7P2 kJ9A== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=CQEq8NJs; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 08/24] crypto: x86/ghash - restructure FPU context saving
Date: Tue, 15 Nov 2022 22:13:26 -0600
Message-Id: <20221116041342.3841-9-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>
Wrap each of the calls to clmul_ghash_update and clmul_ghash_mul in its own set of kernel_fpu_begin and kernel_fpu_end calls, preparing to limit the amount of data processed by each _update call to avoid RCU stalls. This is more like how polyval-clmulni_glue is structured.

Fixes: 0e1227d356e9 ("crypto: ghash - Add PCLMULQDQ accelerated implementation")
Suggested-by: Herbert Xu
Signed-off-by: Robert Elliott
---
 arch/x86/crypto/ghash-clmulni-intel_glue.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c index e996627c6583..22367e363d72 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_glue.c +++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c @@ -80,7 +80,6 @@ static int ghash_update(struct shash_desc *desc, struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm); u8 *dst = dctx->buffer; - kernel_fpu_begin(); if (dctx->bytes) { int n = min(srclen, dctx->bytes); u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes); @@ -91,10 +90,14 @@ static int ghash_update(struct shash_desc *desc, while (n--) *pos++ ^= *src++; - if (!dctx->bytes) + if (!dctx->bytes) { + kernel_fpu_begin(); clmul_ghash_mul(dst, &ctx->shash); + kernel_fpu_end(); + } } + kernel_fpu_begin(); clmul_ghash_update(dst, src, srclen, &ctx->shash); kernel_fpu_end();

From patchwork Wed Nov 16 04:13:27 2022
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 09/24] crypto: x86/ghash - limit FPU preemption
Date: Tue, 15 Nov 2022 22:13:27 -0600
Message-Id: <20221116041342.3841-10-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>

Limit the number of bytes processed between kernel_fpu_begin() and kernel_fpu_end() calls. Those functions call preempt_disable() and preempt_enable(), so the CPU core is unavailable for scheduling while running, leading to: rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: ...
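A stand-alone C sketch of the bookkeeping in the diff below, with fpu_begin()/fpu_end() and block_mix() as stand-ins for kernel_fpu_begin(), kernel_fpu_end(), and clmul_ghash_update(), and the 16-byte block size and 50 KiB bytes_per_fpu value borrowed from the patch; it models only how each FPU section covers whole blocks and how a partial tail is deferred to the next update call:

#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16U

/* Upper bound on bytes handled inside one FPU section (50 KiB here). */
static const unsigned int bytes_per_fpu = 50 * 1024;

struct dctx {
	unsigned char buffer[BLOCK_SIZE];
	unsigned int bytes;	/* bytes still needed to fill buffer */
};

static void fpu_begin(void) { }
static void fpu_end(void) { }

/* Stand-in for clmul_ghash_update(): consumes whole blocks only. */
static void block_mix(unsigned char *dst, const unsigned char *src,
		      unsigned int srclen)
{
	unsigned int i;

	for (i = 0; i < srclen; i++)
		dst[i % BLOCK_SIZE] ^= src[i];
}

static void update(struct dctx *dctx, const unsigned char *src,
		   unsigned int srclen)
{
	unsigned char *dst = dctx->buffer;

	while (srclen >= BLOCK_SIZE) {
		unsigned int chunk = srclen < bytes_per_fpu ?
				     srclen : bytes_per_fpu;

		chunk &= ~(BLOCK_SIZE - 1);	/* whole blocks per section */

		fpu_begin();
		block_mix(dst, src, chunk);
		fpu_end();			/* preemption point */

		src += chunk;
		srclen -= chunk;
	}

	if (srclen) {			/* partial block: defer to next call */
		dctx->bytes = BLOCK_SIZE - srclen;
		while (srclen--)
			*dst++ ^= *src++;
	}
}

int main(void)
{
	static unsigned char msg[100003];	/* not a multiple of 16 */
	struct dctx d = { { 0 }, 0 };

	memset(msg, 0x3c, sizeof(msg));
	update(&d, msg, sizeof(msg));
	printf("bytes needed to finish the buffered block: %u\n", d.bytes);
	return 0;
}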
Fixes: 0e1227d356e9 ("crypto: ghash - Add PCLMULQDQ accelerated implementation") Suggested-by: Herbert Xu Signed-off-by: Robert Elliott --- v3 change to static int, simplify while loop --- arch/x86/crypto/ghash-clmulni-intel_glue.c | 28 +++++++++++++++------- 1 file changed, 19 insertions(+), 9 deletions(-) diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c index 22367e363d72..0f24c3b23fd2 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_glue.c +++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c @@ -20,8 +20,11 @@ #include #include -#define GHASH_BLOCK_SIZE 16 -#define GHASH_DIGEST_SIZE 16 +#define GHASH_BLOCK_SIZE 16U +#define GHASH_DIGEST_SIZE 16U + +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */ +static const unsigned int bytes_per_fpu = 50 * 1024; void clmul_ghash_mul(u8 *dst, const u128 *shash); @@ -80,9 +83,11 @@ static int ghash_update(struct shash_desc *desc, struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm); u8 *dst = dctx->buffer; + BUILD_BUG_ON(bytes_per_fpu < GHASH_BLOCK_SIZE); + if (dctx->bytes) { int n = min(srclen, dctx->bytes); - u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes); + u8 *pos = dst + GHASH_BLOCK_SIZE - dctx->bytes; dctx->bytes -= n; srclen -= n; @@ -97,13 +102,18 @@ static int ghash_update(struct shash_desc *desc, } } - kernel_fpu_begin(); - clmul_ghash_update(dst, src, srclen, &ctx->shash); - kernel_fpu_end(); + while (srclen >= GHASH_BLOCK_SIZE) { + unsigned int chunk = min(srclen, bytes_per_fpu); + + kernel_fpu_begin(); + clmul_ghash_update(dst, src, chunk, &ctx->shash); + kernel_fpu_end(); + + src += chunk & ~(GHASH_BLOCK_SIZE - 1); + srclen -= chunk & ~(GHASH_BLOCK_SIZE - 1); + } - if (srclen & 0xf) { - src += srclen - (srclen & 0xf); - srclen &= 0xf; + if (srclen) { dctx->bytes = GHASH_BLOCK_SIZE - srclen; while (srclen--) *dst++ ^= *src++; From patchwork Wed Nov 16 04:13:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20700 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3084059wru; Tue, 15 Nov 2022 20:16:06 -0800 (PST) X-Google-Smtp-Source: AA0mqf4aZDdI0UwkT47x1wrc1waopKtWnOW0pUJBPQr2zbSQI+ig/HSDp886GlzhZMviAz35oFv6 X-Received: by 2002:a62:e412:0:b0:56d:a1fc:7000 with SMTP id r18-20020a62e412000000b0056da1fc7000mr21083445pfh.35.1668572166290; Tue, 15 Nov 2022 20:16:06 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572166; cv=none; d=google.com; s=arc-20160816; b=Ffd5vo8PEPhTzmhONmuCaXKuIRbfUUeAcnK2KnqgV8gylXU1lo2DjdxKDQefpD3z9e 0a6DL/j4WNHd4mAwP7aTKEfsxIMHF2vw8nSDW9TRpXcG6fY/veK0QodQJnoUCKp2xPVU euaztjKL2I7lfFXWmxtZH2NSdOvNxV3oFMkTK5fEx0qjzS5LXRNb4rGEiku/wB0scFLh AYUavyYgs+Tel9qhMSuI0ScnbmmJ/TA6ChU/Z+WL9GO7PtoK714IOKMS1xIC27/0iO0C zutTQUo+Ranc86rtT/WZnrjcn66Fd/bYgBQWD4PMGilAat7esvSKdcKzUdjf9utpYGuQ fuUw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=DBZSqkANZQQ0OruNgKmmCEGIHHi7mZUgVGLxPPk3P8k=; b=M2qsmqMW8KEl175xyIouwCFBnWIbgWXQ6gh2wOuN5JM6DceKtxSspe56yIbGi3D00J ImKoi8UfGQQZSm/FyPYFY8dIsXSFdiLI00c6bpzpZovJlc6dS2O3i/r4hD6YRo0bH+QV d7DLD+R5BV6ccFcfVA/IiRii8uakEtMAJz0UaNscWhwIWrcWSfkzd9lOSh9MQeNhzmGx 1oLz+aKoaBgjKhs16vIaCEq4njP2AftF7vDCxRv8isbWkO+8pbP0YkGhtdQE+23xeDE+ CnhvN0klzdwcSFR8CnhBjErwsIN4IyVi6YRvJSXV1l6RWoIM4cNSRFGSeXUPWaotPFhe 527A== 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 10/24] crypto: x86/poly - limit FPU preemption
Date: Tue, 15 Nov 2022 22:13:28 -0600
Message-Id:
<20221116041342.3841-11-elliott@hpe.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221116041342.3841-1-elliott@hpe.com> References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: 56s8Ohy8TZ1WboXjT_ok7Gn6ijdkpJuE X-Proofpoint-GUID: 56s8Ohy8TZ1WboXjT_ok7Gn6ijdkpJuE X-HPE-SCL: -1 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-15_08,2022-11-15_03,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 mlxscore=0 impostorscore=0 adultscore=0 lowpriorityscore=0 clxscore=1015 priorityscore=1501 malwarescore=0 phishscore=0 suspectscore=0 spamscore=0 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211160029 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749624727795367060?= X-GMAIL-MSGID: =?utf-8?q?1749624727795367060?= Use a static const unsigned int for the limit of the number of bytes processed between kernel_fpu_begin() and kernel_fpu_end() rather than using the SZ_4K macro (which is a signed value), or a magic value of 4096U embedded in the C code. Use unsigned int rather than size_t for some of the arguments to avoid typecasting for the min() macro. Signed-off-by: Robert Elliott --- v3 use static int rather than macro, change to while loops rather than do/while loops --- arch/x86/crypto/nhpoly1305-avx2-glue.c | 11 +++++--- arch/x86/crypto/nhpoly1305-sse2-glue.c | 11 +++++--- arch/x86/crypto/poly1305_glue.c | 37 +++++++++++++++++--------- arch/x86/crypto/polyval-clmulni_glue.c | 8 ++++-- 4 files changed, 46 insertions(+), 21 deletions(-) diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c index 8ea5ab0f1ca7..f7dc9c563bb5 100644 --- a/arch/x86/crypto/nhpoly1305-avx2-glue.c +++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c @@ -13,6 +13,9 @@ #include #include +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */ +static const unsigned int bytes_per_fpu = 337 * 1024; + asmlinkage void nh_avx2(const u32 *key, const u8 *message, size_t message_len, u8 hash[NH_HASH_BYTES]); @@ -26,18 +29,20 @@ static void _nh_avx2(const u32 *key, const u8 *message, size_t message_len, static int nhpoly1305_avx2_update(struct shash_desc *desc, const u8 *src, unsigned int srclen) { + BUILD_BUG_ON(bytes_per_fpu == 0); + if (srclen < 64 || !crypto_simd_usable()) return crypto_nhpoly1305_update(desc, src, srclen); - do { - unsigned int n = min_t(unsigned int, srclen, SZ_4K); + while (srclen) { + unsigned int n = min(srclen, bytes_per_fpu); kernel_fpu_begin(); crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2); kernel_fpu_end(); src += n; srclen -= n; - } while (srclen); + } return 0; } diff --git a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c index 2b353d42ed13..daffcc7019ad 100644 --- a/arch/x86/crypto/nhpoly1305-sse2-glue.c +++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c @@ -13,6 +13,9 @@ #include #include +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */ +static const unsigned int 
bytes_per_fpu = 199 * 1024; + asmlinkage void nh_sse2(const u32 *key, const u8 *message, size_t message_len, u8 hash[NH_HASH_BYTES]); @@ -26,18 +29,20 @@ static void _nh_sse2(const u32 *key, const u8 *message, size_t message_len, static int nhpoly1305_sse2_update(struct shash_desc *desc, const u8 *src, unsigned int srclen) { + BUILD_BUG_ON(bytes_per_fpu == 0); + if (srclen < 64 || !crypto_simd_usable()) return crypto_nhpoly1305_update(desc, src, srclen); - do { - unsigned int n = min_t(unsigned int, srclen, SZ_4K); + while (srclen) { + unsigned int n = min(srclen, bytes_per_fpu); kernel_fpu_begin(); crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2); kernel_fpu_end(); src += n; srclen -= n; - } while (srclen); + } return 0; } diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c index 1dfb8af48a3c..16831c036d71 100644 --- a/arch/x86/crypto/poly1305_glue.c +++ b/arch/x86/crypto/poly1305_glue.c @@ -15,20 +15,27 @@ #include #include +#define POLY1305_BLOCK_SIZE_MASK (~(POLY1305_BLOCK_SIZE - 1)) + +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */ +static const unsigned int bytes_per_fpu = 217 * 1024; + asmlinkage void poly1305_init_x86_64(void *ctx, const u8 key[POLY1305_BLOCK_SIZE]); asmlinkage void poly1305_blocks_x86_64(void *ctx, const u8 *inp, - const size_t len, const u32 padbit); + const unsigned int len, + const u32 padbit); asmlinkage void poly1305_emit_x86_64(void *ctx, u8 mac[POLY1305_DIGEST_SIZE], const u32 nonce[4]); asmlinkage void poly1305_emit_avx(void *ctx, u8 mac[POLY1305_DIGEST_SIZE], const u32 nonce[4]); -asmlinkage void poly1305_blocks_avx(void *ctx, const u8 *inp, const size_t len, - const u32 padbit); -asmlinkage void poly1305_blocks_avx2(void *ctx, const u8 *inp, const size_t len, - const u32 padbit); +asmlinkage void poly1305_blocks_avx(void *ctx, const u8 *inp, + const unsigned int len, const u32 padbit); +asmlinkage void poly1305_blocks_avx2(void *ctx, const u8 *inp, + const unsigned int len, const u32 padbit); asmlinkage void poly1305_blocks_avx512(void *ctx, const u8 *inp, - const size_t len, const u32 padbit); + const unsigned int len, + const u32 padbit); static __ro_after_init DEFINE_STATIC_KEY_FALSE(poly1305_use_avx); static __ro_after_init DEFINE_STATIC_KEY_FALSE(poly1305_use_avx2); @@ -86,14 +93,12 @@ static void poly1305_simd_init(void *ctx, const u8 key[POLY1305_BLOCK_SIZE]) poly1305_init_x86_64(ctx, key); } -static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len, +static void poly1305_simd_blocks(void *ctx, const u8 *inp, unsigned int len, const u32 padbit) { struct poly1305_arch_internal *state = ctx; - /* SIMD disables preemption, so relax after processing each page. 
*/ - BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE || - SZ_4K % POLY1305_BLOCK_SIZE); + BUILD_BUG_ON(bytes_per_fpu < POLY1305_BLOCK_SIZE); if (!static_branch_likely(&poly1305_use_avx) || (len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) || @@ -103,8 +108,14 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len, return; } - do { - const size_t bytes = min_t(size_t, len, SZ_4K); + while (len) { + unsigned int bytes; + + if (len < POLY1305_BLOCK_SIZE) + bytes = len; + else + bytes = min(len, + bytes_per_fpu & POLY1305_BLOCK_SIZE_MASK); kernel_fpu_begin(); if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512)) @@ -117,7 +128,7 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len, len -= bytes; inp += bytes; - } while (len); + } } static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE], diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c index b7664d018851..de1c908f7412 100644 --- a/arch/x86/crypto/polyval-clmulni_glue.c +++ b/arch/x86/crypto/polyval-clmulni_glue.c @@ -29,6 +29,9 @@ #define NUM_KEY_POWERS 8 +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */ +static const unsigned int bytes_per_fpu = 393 * 1024; + struct polyval_tfm_ctx { /* * These powers must be in the order h^8, ..., h^1. @@ -107,6 +110,8 @@ static int polyval_x86_update(struct shash_desc *desc, unsigned int nblocks; unsigned int n; + BUILD_BUG_ON(bytes_per_fpu < POLYVAL_BLOCK_SIZE); + if (dctx->bytes) { n = min(srclen, dctx->bytes); pos = dctx->buffer + POLYVAL_BLOCK_SIZE - dctx->bytes; @@ -123,8 +128,7 @@ static int polyval_x86_update(struct shash_desc *desc, } while (srclen >= POLYVAL_BLOCK_SIZE) { - /* Allow rescheduling every 4K bytes. */ - nblocks = min(srclen, 4096U) / POLYVAL_BLOCK_SIZE; + nblocks = min(srclen, bytes_per_fpu) / POLYVAL_BLOCK_SIZE; internal_polyval_update(tctx, src, nblocks, dctx->buffer); srclen -= nblocks * POLYVAL_BLOCK_SIZE; src += nblocks * POLYVAL_BLOCK_SIZE; From patchwork Wed Nov 16 04:13:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20695 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3083958wru; Tue, 15 Nov 2022 20:15:41 -0800 (PST) X-Google-Smtp-Source: AA0mqf6dcgM+eRmj0qDv9rBF+GUj+fdehUFQzwxoaKbpAHj8BJdOsHSW4MR0z6Pf6QUGiVTbnQYp X-Received: by 2002:a05:6a00:bc6:b0:56d:8e07:4626 with SMTP id x6-20020a056a000bc600b0056d8e074626mr21281187pfu.70.1668572141290; Tue, 15 Nov 2022 20:15:41 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572141; cv=none; d=google.com; s=arc-20160816; b=lMi5dKiGR1r5BqRx1GtKbIAkD6l/l9H+I5AWAFrRXlN1UBXSM94TjXeurBX0F5Bed2 +XWjUFZcp/GFKB5xFTb8v3IBpGjNStKSOI/mJDwYmbZFkCs+58NiBQEbaosI2sVM/TAS AbfUDBAi+nado2kEUpcz282Zwpkt63s0+xFd5J+ONZUWmpmwEWyJKPPl0fvcsTQiOlXB GIYzfpHFSA1HMDx+puenVcWN/zdmzhzrTrAiFE8PzI3014vMMpwZyX0wuIep9nm+NBOI 7a4bMpyEnUevoudG08LpIQstpuJTAJpdvkDe6Z4FSD7FIrqgeUzEM42OKTVvAI9vHoOF HGeQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=cTTNWZf0gsaOChSSwVwJsJGsUpzhNtRL8Yvh4fEQViw=; b=H5AUjScwwr8RNWRBG0/d+/mhNYz4P2LcmwNCkNLYURFhA4Bfc64UCn35XfHgDI2ggl TmyMTjH61nAzdq16/s5gpyb/Rj8oh3eiazUbdO+2eDlvWlF8FzmSMN7nfc9pVabXsmTK tHYCPe82U5wayZgtk8gjsP9jUlz/clGeaWYxfltnY+2QVmFdnKdbZVufopJjg4Jr2Ya5 
G9FwztuiUEDEXWxwqXJyuM8F3EMX0xJg26OpVjpvZySDKd4UODBgY6+255+QhYiI5HKX mh6v5L1cT4U42MUsPlON0gb55P35dJeA+QLf2cPe1QI7ylvQ9v1FBSEbiM2T0K1F756u ZMrQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=oCuOJ54e; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: from out1.vger.email (out1.vger.email. [2620:137:e000::1:20]) by mx.google.com with ESMTP id u2-20020a63ef02000000b0046ebaf1821bsi13390196pgh.113.2022.11.15.20.15.28; Tue, 15 Nov 2022 20:15:41 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=oCuOJ54e; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232027AbiKPEOk (ORCPT + 99 others); Tue, 15 Nov 2022 23:14:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54182 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231812AbiKPEOT (ORCPT ); Tue, 15 Nov 2022 23:14:19 -0500 Received: from mx0b-002e3701.pphosted.com (mx0b-002e3701.pphosted.com [148.163.143.35]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4C49A2CDCC; Tue, 15 Nov 2022 20:14:17 -0800 (PST) Received: from pps.filterd (m0134423.ppops.net [127.0.0.1]) by mx0b-002e3701.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AG3Ncm8026856; Wed, 16 Nov 2022 04:14:07 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=hpe.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pps0720; bh=cTTNWZf0gsaOChSSwVwJsJGsUpzhNtRL8Yvh4fEQViw=; b=oCuOJ54erZ91/ohmsoPTEJnfhsBH1vjmh1haz6V7DtH9adN81WnwEZ9Bp+lnpyziuKTr 76WR0CsARrKK6EjpLsNa+IilhpmXSvAzfDdL9puh45aSwhSpogkAs7M/MB70t5wYX1wj VscpfEGI5LwWWXoW9bgT7wEzFA7VwV74/9w7Wbhb/dlbO3KM11ltNMQvu55xKrpilzuE OEWTH1C3FMwrFKK7EQuiiFWxXfZhYx8YKlCviIwyQont3q9oo+H/O73LAUgtgkxZfBC0 DHWX+sHlU2OVRdACoTBKKRufgjwfv0ueaVkb2Wt3H69ilTNiOle2yhb0KVcV3Afkb/k0 5g== Received: from p1lg14879.it.hpe.com (p1lg14879.it.hpe.com [16.230.97.200]) by mx0b-002e3701.pphosted.com (PPS) with ESMTPS id 3kvqwqgabd-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 16 Nov 2022 04:14:07 +0000 Received: from p1lg14885.dc01.its.hpecorp.net (unknown [10.119.18.236]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by p1lg14879.it.hpe.com (Postfix) with ESMTPS id 1BFFF4B5DC; Wed, 16 Nov 2022 04:14:06 +0000 (UTC) Received: from adevxp033-sys.us.rdlabs.hpecorp.net (unknown [16.231.227.36]) by p1lg14885.dc01.its.hpecorp.net (Postfix) with ESMTP id A22BE808BA7; Wed, 16 Nov 2022 04:14:05 +0000 (UTC) From: Robert Elliott To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Robert 
Subject: [PATCH v4 11/24] crypto: x86/aegis - limit FPU preemption
Date: Tue, 15 Nov 2022 22:13:29 -0600
Message-Id: <20221116041342.3841-12-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>

Make kernel_fpu_begin() and kernel_fpu_end() calls around each assembly language function that uses FPU context, rather than around the entire set (init, ad, crypt, final). Limit the processing of bulk data based on a fixed byte limit (bytes_per_fpu), so multiple blocks are processed within one FPU context (associated data is not limited). Allow the skcipher_walk functions to sleep again, since they are no longer called inside FPU context. Motivation: calling crypto_aead_encrypt() with a single scatter-gather list entry pointing to a 1 MiB plaintext buffer caused the aesni_encrypt function to receive a length of 1048576 bytes and consume 306348 cycles within FPU context to process that data.
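To put the motivating numbers in perspective (assuming a 3 GHz core for the sake of arithmetic; the clock rate is not stated in the patch): 306348 cycles / 3.0 GHz is roughly 102 microseconds of preemption-disabled time for a single 1 MiB call. With the 4 KiB bytes_per_fpu limit in the diff below, each FPU section covers 4096 / 1048576 = 1/256 of the data, so on the order of 102 us / 256, or about 0.4 us, before preemption is possible again.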
Fixes: 1d373d4e8e15 ("crypto: x86 - Add optimized AEGIS implementations") Fixes: ba6771c0a0bc ("crypto: x86/aegis - fix handling chunked inputs and MAY_SLEEP") Signed-off-by: Robert Elliott --- arch/x86/crypto/aegis128-aesni-glue.c | 39 ++++++++++++++++++++------- 1 file changed, 29 insertions(+), 10 deletions(-) diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c index 4623189000d8..6e96bdda2811 100644 --- a/arch/x86/crypto/aegis128-aesni-glue.c +++ b/arch/x86/crypto/aegis128-aesni-glue.c @@ -23,6 +23,9 @@ #define AEGIS128_MIN_AUTH_SIZE 8 #define AEGIS128_MAX_AUTH_SIZE 16 +/* avoid kernel_fpu_begin/end scheduler/rcu stalls */ +static const unsigned int bytes_per_fpu = 4 * 1024; + asmlinkage void crypto_aegis128_aesni_init(void *state, void *key, void *iv); asmlinkage void crypto_aegis128_aesni_ad( @@ -85,15 +88,19 @@ static void crypto_aegis128_aesni_process_ad( if (pos > 0) { unsigned int fill = AEGIS128_BLOCK_SIZE - pos; memcpy(buf.bytes + pos, src, fill); - crypto_aegis128_aesni_ad(state, + kernel_fpu_begin(); + crypto_aegis128_aesni_ad(state->blocks, AEGIS128_BLOCK_SIZE, buf.bytes); + kernel_fpu_end(); pos = 0; left -= fill; src += fill; } - crypto_aegis128_aesni_ad(state, left, src); + kernel_fpu_begin(); + crypto_aegis128_aesni_ad(state->blocks, left, src); + kernel_fpu_end(); src += left & ~(AEGIS128_BLOCK_SIZE - 1); left &= AEGIS128_BLOCK_SIZE - 1; @@ -110,7 +117,9 @@ static void crypto_aegis128_aesni_process_ad( if (pos > 0) { memset(buf.bytes + pos, 0, AEGIS128_BLOCK_SIZE - pos); - crypto_aegis128_aesni_ad(state, AEGIS128_BLOCK_SIZE, buf.bytes); + kernel_fpu_begin(); + crypto_aegis128_aesni_ad(state->blocks, AEGIS128_BLOCK_SIZE, buf.bytes); + kernel_fpu_end(); } } @@ -119,15 +128,23 @@ static void crypto_aegis128_aesni_process_crypt( const struct aegis_crypt_ops *ops) { while (walk->nbytes >= AEGIS128_BLOCK_SIZE) { - ops->crypt_blocks(state, - round_down(walk->nbytes, AEGIS128_BLOCK_SIZE), + unsigned int chunk = min(walk->nbytes, bytes_per_fpu); + + chunk = round_down(chunk, AEGIS128_BLOCK_SIZE); + + kernel_fpu_begin(); + ops->crypt_blocks(state->blocks, chunk, walk->src.virt.addr, walk->dst.virt.addr); - skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE); + kernel_fpu_end(); + + skcipher_walk_done(walk, walk->nbytes - chunk); } if (walk->nbytes) { - ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr, + kernel_fpu_begin(); + ops->crypt_tail(state->blocks, walk->nbytes, walk->src.virt.addr, walk->dst.virt.addr); + kernel_fpu_end(); skcipher_walk_done(walk, 0); } } @@ -172,15 +189,17 @@ static void crypto_aegis128_aesni_crypt(struct aead_request *req, struct skcipher_walk walk; struct aegis_state state; - ops->skcipher_walk_init(&walk, req, true); + ops->skcipher_walk_init(&walk, req, false); kernel_fpu_begin(); + crypto_aegis128_aesni_init(&state.blocks, ctx->key.bytes, req->iv); + kernel_fpu_end(); - crypto_aegis128_aesni_init(&state, ctx->key.bytes, req->iv); crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen); crypto_aegis128_aesni_process_crypt(&state, &walk, ops); - crypto_aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen); + kernel_fpu_begin(); + crypto_aegis128_aesni_final(&state.blocks, tag_xor, req->assoclen, cryptlen); kernel_fpu_end(); } From patchwork Wed Nov 16 04:13:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20711 Return-Path: Delivered-To: ouuuleilei@gmail.com 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott, kernel test robot
Subject: [PATCH v4 12/24] crypto: x86/sha - register all variations
Date: Tue, 15 Nov 2022 22:13:30 -0600
Message-Id: <20221116041342.3841-13-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>
X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-15_08,2022-11-15_03,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 priorityscore=1501 suspectscore=0 bulkscore=0 clxscore=1015 malwarescore=0 lowpriorityscore=0 mlxscore=0 mlxlogscore=999 phishscore=0 impostorscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211160029 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1748447916958020834?= X-GMAIL-MSGID: =?utf-8?q?1749624836916151136?= Don't register and unregister each of the functions from least- to most-optimized (e.g., SSSE3 then AVX then AVX2); register all variations. This enables selecting those other algorithms if needed, such as for testing with: modprobe tcrypt mode=300 alg=sha512-avx modprobe tcrypt mode=400 alg=sha512-avx Suggested-by: Tim Chen Suggested-by: Herbert Xu Signed-off-by: Robert Elliott --- v3 register all the variations, not just the best one, per Herbert's feedback. return -ENODEV if none are successful, 0 if any are successful v4 remove driver_name strings that are only used by later patches no longer included in this series that enhance the prints. A future patch series might remove existing prints rather than add and enhance them. Reported-by: kernel test robot --- arch/x86/crypto/sha1_ssse3_glue.c | 132 +++++++++++++-------------- arch/x86/crypto/sha256_ssse3_glue.c | 136 +++++++++++++--------------- arch/x86/crypto/sha512_ssse3_glue.c | 99 +++++++++----------- 3 files changed, 168 insertions(+), 199 deletions(-) diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c index 4bc77c84b0fb..e75a1060bb5f 100644 --- a/arch/x86/crypto/sha1_ssse3_glue.c +++ b/arch/x86/crypto/sha1_ssse3_glue.c @@ -34,6 +34,13 @@ static const unsigned int bytes_per_fpu_avx2 = 34 * 1024; static const unsigned int bytes_per_fpu_avx = 30 * 1024; static const unsigned int bytes_per_fpu_ssse3 = 26 * 1024; +static int using_x86_ssse3; +static int using_x86_avx; +static int using_x86_avx2; +#ifdef CONFIG_AS_SHA1_NI +static int using_x86_shani; +#endif + static int sha1_update(struct shash_desc *desc, const u8 *data, unsigned int len, unsigned int bytes_per_fpu, sha1_block_fn *sha1_xform) @@ -128,17 +135,12 @@ static struct shash_alg sha1_ssse3_alg = { } }; -static int register_sha1_ssse3(void) -{ - if (boot_cpu_has(X86_FEATURE_SSSE3)) - return crypto_register_shash(&sha1_ssse3_alg); - return 0; -} - static void unregister_sha1_ssse3(void) { - if (boot_cpu_has(X86_FEATURE_SSSE3)) + if (using_x86_ssse3) { crypto_unregister_shash(&sha1_ssse3_alg); + using_x86_ssse3 = 0; + } } asmlinkage void sha1_transform_avx(struct sha1_state *state, @@ -179,28 +181,12 @@ static struct shash_alg sha1_avx_alg = { } }; -static bool avx_usable(void) -{ - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { - if (boot_cpu_has(X86_FEATURE_AVX)) - pr_info("AVX detected but unusable.\n"); - return false; - } - - return true; -} - -static int register_sha1_avx(void) -{ - if (avx_usable()) - 
return crypto_register_shash(&sha1_avx_alg); - return 0; -} - static void unregister_sha1_avx(void) { - if (avx_usable()) + if (using_x86_avx) { crypto_unregister_shash(&sha1_avx_alg); + using_x86_avx = 0; + } } #define SHA1_AVX2_BLOCK_OPTSIZE 4 /* optimal 4*64 bytes of SHA1 blocks */ @@ -208,16 +194,6 @@ static void unregister_sha1_avx(void) asmlinkage void sha1_transform_avx2(struct sha1_state *state, const u8 *data, int blocks); -static bool avx2_usable(void) -{ - if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) - && boot_cpu_has(X86_FEATURE_BMI1) - && boot_cpu_has(X86_FEATURE_BMI2)) - return true; - - return false; -} - static void sha1_apply_transform_avx2(struct sha1_state *state, const u8 *data, int blocks) { @@ -263,17 +239,12 @@ static struct shash_alg sha1_avx2_alg = { } }; -static int register_sha1_avx2(void) -{ - if (avx2_usable()) - return crypto_register_shash(&sha1_avx2_alg); - return 0; -} - static void unregister_sha1_avx2(void) { - if (avx2_usable()) + if (using_x86_avx2) { crypto_unregister_shash(&sha1_avx2_alg); + using_x86_avx2 = 0; + } } #ifdef CONFIG_AS_SHA1_NI @@ -315,49 +286,70 @@ static struct shash_alg sha1_ni_alg = { } }; -static int register_sha1_ni(void) -{ - if (boot_cpu_has(X86_FEATURE_SHA_NI)) - return crypto_register_shash(&sha1_ni_alg); - return 0; -} - static void unregister_sha1_ni(void) { - if (boot_cpu_has(X86_FEATURE_SHA_NI)) + if (using_x86_shani) { crypto_unregister_shash(&sha1_ni_alg); + using_x86_shani = 0; + } } #else -static inline int register_sha1_ni(void) { return 0; } static inline void unregister_sha1_ni(void) { } #endif static int __init sha1_ssse3_mod_init(void) { - if (register_sha1_ssse3()) - goto fail; + const char *feature_name; + int ret; + +#ifdef CONFIG_AS_SHA1_NI + /* SHA-NI */ + if (boot_cpu_has(X86_FEATURE_SHA_NI)) { - if (register_sha1_avx()) { - unregister_sha1_ssse3(); - goto fail; + ret = crypto_register_shash(&sha1_ni_alg); + if (!ret) + using_x86_shani = 1; } +#endif + + /* AVX2 */ + if (boot_cpu_has(X86_FEATURE_AVX2)) { - if (register_sha1_avx2()) { - unregister_sha1_avx(); - unregister_sha1_ssse3(); - goto fail; + if (boot_cpu_has(X86_FEATURE_BMI1) && + boot_cpu_has(X86_FEATURE_BMI2)) { + + ret = crypto_register_shash(&sha1_avx2_alg); + if (!ret) + using_x86_avx2 = 1; + } } - if (register_sha1_ni()) { - unregister_sha1_avx2(); - unregister_sha1_avx(); - unregister_sha1_ssse3(); - goto fail; + /* AVX */ + if (boot_cpu_has(X86_FEATURE_AVX)) { + + if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, + &feature_name)) { + + ret = crypto_register_shash(&sha1_avx_alg); + if (!ret) + using_x86_avx = 1; + } } - return 0; -fail: + /* SSE3 */ + if (boot_cpu_has(X86_FEATURE_SSSE3)) { + ret = crypto_register_shash(&sha1_ssse3_alg); + if (!ret) + using_x86_ssse3 = 1; + } + +#ifdef CONFIG_AS_SHA1_NI + if (using_x86_shani) + return 0; +#endif + if (using_x86_avx2 || using_x86_avx || using_x86_ssse3) + return 0; return -ENODEV; } diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index cdcdf5a80ffe..c6261ede4bae 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -51,6 +51,13 @@ static const unsigned int bytes_per_fpu_ssse3 = 11 * 1024; asmlinkage void sha256_transform_ssse3(struct sha256_state *state, const u8 *data, int blocks); +static int using_x86_ssse3; +static int using_x86_avx; +static int using_x86_avx2; +#ifdef CONFIG_AS_SHA256_NI +static int using_x86_shani; +#endif + static int _sha256_update(struct shash_desc *desc, const u8 *data, 
unsigned int len, unsigned int bytes_per_fpu, sha256_block_fn *sha256_xform) @@ -156,19 +163,13 @@ static struct shash_alg sha256_ssse3_algs[] = { { } } }; -static int register_sha256_ssse3(void) -{ - if (boot_cpu_has(X86_FEATURE_SSSE3)) - return crypto_register_shashes(sha256_ssse3_algs, - ARRAY_SIZE(sha256_ssse3_algs)); - return 0; -} - static void unregister_sha256_ssse3(void) { - if (boot_cpu_has(X86_FEATURE_SSSE3)) + if (using_x86_ssse3) { crypto_unregister_shashes(sha256_ssse3_algs, ARRAY_SIZE(sha256_ssse3_algs)); + using_x86_ssse3 = 0; + } } asmlinkage void sha256_transform_avx(struct sha256_state *state, @@ -223,30 +224,13 @@ static struct shash_alg sha256_avx_algs[] = { { } } }; -static bool avx_usable(void) -{ - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { - if (boot_cpu_has(X86_FEATURE_AVX)) - pr_info("AVX detected but unusable.\n"); - return false; - } - - return true; -} - -static int register_sha256_avx(void) -{ - if (avx_usable()) - return crypto_register_shashes(sha256_avx_algs, - ARRAY_SIZE(sha256_avx_algs)); - return 0; -} - static void unregister_sha256_avx(void) { - if (avx_usable()) + if (using_x86_avx) { crypto_unregister_shashes(sha256_avx_algs, ARRAY_SIZE(sha256_avx_algs)); + using_x86_avx = 0; + } } asmlinkage void sha256_transform_rorx(struct sha256_state *state, @@ -301,28 +285,13 @@ static struct shash_alg sha256_avx2_algs[] = { { } } }; -static bool avx2_usable(void) -{ - if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) && - boot_cpu_has(X86_FEATURE_BMI2)) - return true; - - return false; -} - -static int register_sha256_avx2(void) -{ - if (avx2_usable()) - return crypto_register_shashes(sha256_avx2_algs, - ARRAY_SIZE(sha256_avx2_algs)); - return 0; -} - static void unregister_sha256_avx2(void) { - if (avx2_usable()) + if (using_x86_avx2) { crypto_unregister_shashes(sha256_avx2_algs, ARRAY_SIZE(sha256_avx2_algs)); + using_x86_avx2 = 0; + } } #ifdef CONFIG_AS_SHA256_NI @@ -378,51 +347,72 @@ static struct shash_alg sha256_ni_algs[] = { { } } }; -static int register_sha256_ni(void) -{ - if (boot_cpu_has(X86_FEATURE_SHA_NI)) - return crypto_register_shashes(sha256_ni_algs, - ARRAY_SIZE(sha256_ni_algs)); - return 0; -} - static void unregister_sha256_ni(void) { - if (boot_cpu_has(X86_FEATURE_SHA_NI)) + if (using_x86_shani) { crypto_unregister_shashes(sha256_ni_algs, ARRAY_SIZE(sha256_ni_algs)); + using_x86_shani = 0; + } } #else -static inline int register_sha256_ni(void) { return 0; } static inline void unregister_sha256_ni(void) { } #endif static int __init sha256_ssse3_mod_init(void) { - if (register_sha256_ssse3()) - goto fail; + const char *feature_name; + int ret; + +#ifdef CONFIG_AS_SHA256_NI + /* SHA-NI */ + if (boot_cpu_has(X86_FEATURE_SHA_NI)) { - if (register_sha256_avx()) { - unregister_sha256_ssse3(); - goto fail; + ret = crypto_register_shashes(sha256_ni_algs, + ARRAY_SIZE(sha256_ni_algs)); + if (!ret) + using_x86_shani = 1; } +#endif + + /* AVX2 */ + if (boot_cpu_has(X86_FEATURE_AVX2)) { - if (register_sha256_avx2()) { - unregister_sha256_avx(); - unregister_sha256_ssse3(); - goto fail; + if (boot_cpu_has(X86_FEATURE_BMI2)) { + ret = crypto_register_shashes(sha256_avx2_algs, + ARRAY_SIZE(sha256_avx2_algs)); + if (!ret) + using_x86_avx2 = 1; + } } - if (register_sha256_ni()) { - unregister_sha256_avx2(); - unregister_sha256_avx(); - unregister_sha256_ssse3(); - goto fail; + /* AVX */ + if (boot_cpu_has(X86_FEATURE_AVX)) { + + if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, + &feature_name)) { + ret = 
crypto_register_shashes(sha256_avx_algs, + ARRAY_SIZE(sha256_avx_algs)); + if (!ret) + using_x86_avx = 1; + } } - return 0; -fail: + /* SSE3 */ + if (boot_cpu_has(X86_FEATURE_SSSE3)) { + ret = crypto_register_shashes(sha256_ssse3_algs, + ARRAY_SIZE(sha256_ssse3_algs)); + if (!ret) + using_x86_ssse3 = 1; + } + +#ifdef CONFIG_AS_SHA256_NI + if (using_x86_shani) + return 0; +#endif + if (using_x86_avx2 || using_x86_avx || using_x86_ssse3) + return 0; return -ENODEV; } diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index c7036cfe2a7e..feae85933270 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -47,6 +47,10 @@ static const unsigned int bytes_per_fpu_ssse3 = 17 * 1024; asmlinkage void sha512_transform_ssse3(struct sha512_state *state, const u8 *data, int blocks); +static int using_x86_ssse3; +static int using_x86_avx; +static int using_x86_avx2; + static int sha512_update(struct shash_desc *desc, const u8 *data, unsigned int len, unsigned int bytes_per_fpu, sha512_block_fn *sha512_xform) @@ -152,33 +156,17 @@ static struct shash_alg sha512_ssse3_algs[] = { { } } }; -static int register_sha512_ssse3(void) -{ - if (boot_cpu_has(X86_FEATURE_SSSE3)) - return crypto_register_shashes(sha512_ssse3_algs, - ARRAY_SIZE(sha512_ssse3_algs)); - return 0; -} - static void unregister_sha512_ssse3(void) { - if (boot_cpu_has(X86_FEATURE_SSSE3)) + if (using_x86_ssse3) { crypto_unregister_shashes(sha512_ssse3_algs, ARRAY_SIZE(sha512_ssse3_algs)); + using_x86_ssse3 = 0; + } } asmlinkage void sha512_transform_avx(struct sha512_state *state, const u8 *data, int blocks); -static bool avx_usable(void) -{ - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { - if (boot_cpu_has(X86_FEATURE_AVX)) - pr_info("AVX detected but unusable.\n"); - return false; - } - - return true; -} static int sha512_avx_update(struct shash_desc *desc, const u8 *data, unsigned int len) @@ -230,19 +218,13 @@ static struct shash_alg sha512_avx_algs[] = { { } } }; -static int register_sha512_avx(void) -{ - if (avx_usable()) - return crypto_register_shashes(sha512_avx_algs, - ARRAY_SIZE(sha512_avx_algs)); - return 0; -} - static void unregister_sha512_avx(void) { - if (avx_usable()) + if (using_x86_avx) { crypto_unregister_shashes(sha512_avx_algs, ARRAY_SIZE(sha512_avx_algs)); + using_x86_avx = 0; + } } asmlinkage void sha512_transform_rorx(struct sha512_state *state, @@ -298,22 +280,6 @@ static struct shash_alg sha512_avx2_algs[] = { { } } }; -static bool avx2_usable(void) -{ - if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) && - boot_cpu_has(X86_FEATURE_BMI2)) - return true; - - return false; -} - -static int register_sha512_avx2(void) -{ - if (avx2_usable()) - return crypto_register_shashes(sha512_avx2_algs, - ARRAY_SIZE(sha512_avx2_algs)); - return 0; -} static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), @@ -324,32 +290,53 @@ MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); static void unregister_sha512_avx2(void) { - if (avx2_usable()) + if (using_x86_avx2) { crypto_unregister_shashes(sha512_avx2_algs, ARRAY_SIZE(sha512_avx2_algs)); + using_x86_avx2 = 0; + } } static int __init sha512_ssse3_mod_init(void) { + const char *feature_name; + int ret; + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (register_sha512_ssse3()) - goto fail; + /* AVX2 */ + if (boot_cpu_has(X86_FEATURE_AVX2)) { + if (boot_cpu_has(X86_FEATURE_BMI2)) { + ret = 
crypto_register_shashes(sha512_avx2_algs, + ARRAY_SIZE(sha512_avx2_algs)); + if (!ret) + using_x86_avx2 = 1; + } + } + + /* AVX */ + if (boot_cpu_has(X86_FEATURE_AVX)) { - if (register_sha512_avx()) { - unregister_sha512_ssse3(); - goto fail; + if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, + &feature_name)) { + ret = crypto_register_shashes(sha512_avx_algs, + ARRAY_SIZE(sha512_avx_algs)); + if (!ret) + using_x86_avx = 1; + } } - if (register_sha512_avx2()) { - unregister_sha512_avx(); - unregister_sha512_ssse3(); - goto fail; + /* SSE3 */ + if (boot_cpu_has(X86_FEATURE_SSSE3)) { + ret = crypto_register_shashes(sha512_ssse3_algs, + ARRAY_SIZE(sha512_ssse3_algs)); + if (!ret) + using_x86_ssse3 = 1; } - return 0; -fail: + if (using_x86_avx2 || using_x86_avx || using_x86_ssse3) + return 0; return -ENODEV; } From patchwork Wed Nov 16 04:13:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20699 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3084043wru; Tue, 15 Nov 2022 20:16:03 -0800 (PST) X-Google-Smtp-Source: AA0mqf4eM4x3xBM5G7uFApoi/fG2SRQVOs9t3LsOfXkO+lcurhEdWXt1WJyaAghB132Id4GgnaPt X-Received: by 2002:a17:902:be13:b0:186:748f:e8c5 with SMTP id r19-20020a170902be1300b00186748fe8c5mr7330052pls.73.1668572163294; Tue, 15 Nov 2022 20:16:03 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572163; cv=none; d=google.com; s=arc-20160816; b=cS+j6Kn/pzxCVQnLTeK30n35P2/VQ3eGJjzAGDJsYq+fkiiYcqSwUWofw/y3KEJ63u jM0ouTZZrIS+j2ZsUfj7JlUkGBRALh4yKNDPiQO8JYN8InaLYSpqTNsjcgaNIV8kSxYs 698z9245pXY5i/JUhBz4qciB28qv7zUqt/V2M4na2HBWCM4LMoJkd7Hb2tLQzjWGS/im O7QYzqDFORyIMz4aESbjyYM/TrmUTFp82yR341LdDfeLXQSYaBFPMuwkZgKLefGa7kSG DQhhv8nr9dQZ993Q7nMuradzVUP4Go/XY1VpbbZm2t2nxW9BcM7fwmL5ieWFXtNMBcAt yZOA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=78cE9OO8uzMM2db32hW7tetx3xycoj1j3qAj8neR0YQ=; b=OY+kCnqSKejhI+cUas6Xt0WkSoB6YXzYVxh2nXGJxJNm1Cq0oBkZYKyksQfLlsOs2s 2AhIO8XZdFeE78mSS/34SpsIa5o7hyrz8cLSKhhSch5RkPNlCITaMoCk9F94ghISHG0L Up21/gm1uzR8/mTJgM/nvqKCepg3z+b8T5xccPWnCCfqBYOsFE4bI5I+2Vrttoux0khe LIuCTT13nZ7LeJvzIwZBSfMRmMutGGot0IEBg1BFEnwa63u89iUsJpbqqrJGXtmqBs0R ACSoi6uJhhExhWQwSE74GjCvSdDnk6gVrHIZvpdSRp2RLU1pxjdw6ZCh+EL2CFGqBySD 6lkA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=pNrzYLZ3; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: from out1.vger.email (out1.vger.email. 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 13/24] crypto: x86/sha - minimize time in FPU context
Date: Tue, 15 Nov 2022 22:13:31 -0600
Message-Id: <20221116041342.3841-14-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>
Narrow the kernel_fpu_begin()/kernel_fpu_end() to just wrap the
assembly functions, not any extra C code around them (which includes
several memcpy() calls).

This reduces unnecessary time in FPU context, in which the scheduler
is prevented from preempting and the RCU subsystem is kept from doing
its work.

Example results measuring a boot, in which SHA-512 is used to check
all module signatures using finup() calls:

Before:
   calls  maxcycles      bpf   update    finup  algorithm    module
======== ==========  =======  =======  =======  ===========  ==============
  168390    1233188    19456        0    19456  sha512-avx2  sha512_ssse3

After:
  182694    1007224    19456        0    19456  sha512-avx2  sha512_ssse3

That means it stayed in FPU context for 226k fewer clock cycles (which
is 102 microseconds on this system, 18% less).
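As a sketch of the wrapper pattern this patch applies (placeholder
names; the real patch adds one such wrapper per SSSE3/AVX/AVX2/SHA-NI
transform), the FPU context is confined to the assembly call while the
generic C buffering helpers run outside it:

/*
 * Sketch only: a thin wrapper confining FPU context to the assembly
 * routine. sha256_transform_xyz() is a placeholder for one of the
 * asmlinkage transforms; the shash helpers that do buffering and
 * memcpy() run outside kernel_fpu_begin()/kernel_fpu_end().
 */
#include <linux/linkage.h>
#include <crypto/sha2.h>
#include <asm/fpu/api.h>

asmlinkage void sha256_transform_xyz(struct sha256_state *state,
				     const u8 *data, int blocks);	/* placeholder */

static void fpu_sha256_transform_xyz(struct sha256_state *state,
				     const u8 *data, int blocks)
{
	kernel_fpu_begin();
	sha256_transform_xyz(state, data, blocks);	/* SIMD work only */
	kernel_fpu_end();
}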
Signed-off-by: Robert Elliott --- arch/x86/crypto/sha1_ssse3_glue.c | 82 ++++++++++++++++++++--------- arch/x86/crypto/sha256_ssse3_glue.c | 67 ++++++++++++++++++----- arch/x86/crypto/sha512_ssse3_glue.c | 48 ++++++++++++----- 3 files changed, 145 insertions(+), 52 deletions(-) diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c index e75a1060bb5f..32f3310e19e2 100644 --- a/arch/x86/crypto/sha1_ssse3_glue.c +++ b/arch/x86/crypto/sha1_ssse3_glue.c @@ -34,6 +34,54 @@ static const unsigned int bytes_per_fpu_avx2 = 34 * 1024; static const unsigned int bytes_per_fpu_avx = 30 * 1024; static const unsigned int bytes_per_fpu_ssse3 = 26 * 1024; +asmlinkage void sha1_transform_ssse3(struct sha1_state *state, + const u8 *data, int blocks); + +asmlinkage void sha1_transform_avx(struct sha1_state *state, + const u8 *data, int blocks); + +asmlinkage void sha1_transform_avx2(struct sha1_state *state, + const u8 *data, int blocks); + +#ifdef CONFIG_AS_SHA1_NI +asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data, + int rounds); +#endif + +static void fpu_sha1_transform_ssse3(struct sha1_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha1_transform_ssse3(state, data, blocks); + kernel_fpu_end(); +} + +static void fpu_sha1_transform_avx(struct sha1_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha1_transform_avx(state, data, blocks); + kernel_fpu_end(); +} + +static void fpu_sha1_transform_avx2(struct sha1_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha1_transform_avx2(state, data, blocks); + kernel_fpu_end(); +} + +#ifdef CONFIG_AS_SHA1_NI +static void fpu_sha1_transform_shani(struct sha1_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha1_ni_transform(state, data, blocks); + kernel_fpu_end(); +} +#endif + static int using_x86_ssse3; static int using_x86_avx; static int using_x86_avx2; @@ -60,9 +108,7 @@ static int sha1_update(struct shash_desc *desc, const u8 *data, while (len) { unsigned int chunk = min(len, bytes_per_fpu); - kernel_fpu_begin(); sha1_base_do_update(desc, data, chunk, sha1_xform); - kernel_fpu_end(); len -= chunk; data += chunk; @@ -81,36 +127,29 @@ static int sha1_finup(struct shash_desc *desc, const u8 *data, while (len) { unsigned int chunk = min(len, bytes_per_fpu); - kernel_fpu_begin(); sha1_base_do_update(desc, data, chunk, sha1_xform); - kernel_fpu_end(); len -= chunk; data += chunk; } - kernel_fpu_begin(); sha1_base_do_finalize(desc, sha1_xform); - kernel_fpu_end(); return sha1_base_finish(desc, out); } -asmlinkage void sha1_transform_ssse3(struct sha1_state *state, - const u8 *data, int blocks); - static int sha1_ssse3_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return sha1_update(desc, data, len, bytes_per_fpu_ssse3, - sha1_transform_ssse3); + fpu_sha1_transform_ssse3); } static int sha1_ssse3_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha1_finup(desc, data, len, bytes_per_fpu_ssse3, out, - sha1_transform_ssse3); + fpu_sha1_transform_ssse3); } /* Add padding and return the message digest. 
*/ @@ -143,21 +182,18 @@ static void unregister_sha1_ssse3(void) } } -asmlinkage void sha1_transform_avx(struct sha1_state *state, - const u8 *data, int blocks); - static int sha1_avx_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return sha1_update(desc, data, len, bytes_per_fpu_avx, - sha1_transform_avx); + fpu_sha1_transform_avx); } static int sha1_avx_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha1_finup(desc, data, len, bytes_per_fpu_avx, out, - sha1_transform_avx); + fpu_sha1_transform_avx); } static int sha1_avx_final(struct shash_desc *desc, u8 *out) @@ -191,17 +227,14 @@ static void unregister_sha1_avx(void) #define SHA1_AVX2_BLOCK_OPTSIZE 4 /* optimal 4*64 bytes of SHA1 blocks */ -asmlinkage void sha1_transform_avx2(struct sha1_state *state, - const u8 *data, int blocks); - static void sha1_apply_transform_avx2(struct sha1_state *state, const u8 *data, int blocks) { /* Select the optimal transform based on data block size */ if (blocks >= SHA1_AVX2_BLOCK_OPTSIZE) - sha1_transform_avx2(state, data, blocks); + fpu_sha1_transform_avx2(state, data, blocks); else - sha1_transform_avx(state, data, blocks); + fpu_sha1_transform_avx(state, data, blocks); } static int sha1_avx2_update(struct shash_desc *desc, const u8 *data, @@ -248,21 +281,18 @@ static void unregister_sha1_avx2(void) } #ifdef CONFIG_AS_SHA1_NI -asmlinkage void sha1_ni_transform(struct sha1_state *digest, const u8 *data, - int rounds); - static int sha1_ni_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return sha1_update(desc, data, len, bytes_per_fpu_shani, - sha1_ni_transform); + fpu_sha1_transform_shani); } static int sha1_ni_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha1_finup(desc, data, len, bytes_per_fpu_shani, out, - sha1_ni_transform); + fpu_sha1_transform_shani); } static int sha1_ni_final(struct shash_desc *desc, u8 *out) diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index c6261ede4bae..839da1b36273 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -51,6 +51,51 @@ static const unsigned int bytes_per_fpu_ssse3 = 11 * 1024; asmlinkage void sha256_transform_ssse3(struct sha256_state *state, const u8 *data, int blocks); +asmlinkage void sha256_transform_avx(struct sha256_state *state, + const u8 *data, int blocks); + +asmlinkage void sha256_transform_rorx(struct sha256_state *state, + const u8 *data, int blocks); + +#ifdef CONFIG_AS_SHA256_NI +asmlinkage void sha256_ni_transform(struct sha256_state *digest, + const u8 *data, int rounds); +#endif + +static void fpu_sha256_transform_ssse3(struct sha256_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha256_transform_ssse3(state, data, blocks); + kernel_fpu_end(); +} + +static void fpu_sha256_transform_avx(struct sha256_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha256_transform_avx(state, data, blocks); + kernel_fpu_end(); +} + +static void fpu_sha256_transform_avx2(struct sha256_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha256_transform_rorx(state, data, blocks); + kernel_fpu_end(); +} + +#ifdef CONFIG_AS_SHA1_NI +static void fpu_sha256_transform_shani(struct sha256_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha256_ni_transform(state, data, blocks); + kernel_fpu_end(); +} +#endif + static int using_x86_ssse3; static int using_x86_avx; static int 
using_x86_avx2; @@ -77,9 +122,7 @@ static int _sha256_update(struct shash_desc *desc, const u8 *data, while (len) { unsigned int chunk = min(len, bytes_per_fpu); - kernel_fpu_begin(); sha256_base_do_update(desc, data, chunk, sha256_xform); - kernel_fpu_end(); len -= chunk; data += chunk; @@ -98,17 +141,13 @@ static int sha256_finup(struct shash_desc *desc, const u8 *data, while (len) { unsigned int chunk = min(len, bytes_per_fpu); - kernel_fpu_begin(); sha256_base_do_update(desc, data, chunk, sha256_xform); - kernel_fpu_end(); len -= chunk; data += chunk; } - kernel_fpu_begin(); sha256_base_do_finalize(desc, sha256_xform); - kernel_fpu_end(); return sha256_base_finish(desc, out); } @@ -117,14 +156,14 @@ static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return _sha256_update(desc, data, len, bytes_per_fpu_ssse3, - sha256_transform_ssse3); + fpu_sha256_transform_ssse3); } static int sha256_ssse3_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha256_finup(desc, data, len, bytes_per_fpu_ssse3, - out, sha256_transform_ssse3); + out, fpu_sha256_transform_ssse3); } /* Add padding and return the message digest. */ @@ -179,14 +218,14 @@ static int sha256_avx_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return _sha256_update(desc, data, len, bytes_per_fpu_avx, - sha256_transform_avx); + fpu_sha256_transform_avx); } static int sha256_avx_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha256_finup(desc, data, len, bytes_per_fpu_avx, - out, sha256_transform_avx); + out, fpu_sha256_transform_avx); } static int sha256_avx_final(struct shash_desc *desc, u8 *out) @@ -240,14 +279,14 @@ static int sha256_avx2_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return _sha256_update(desc, data, len, bytes_per_fpu_avx2, - sha256_transform_rorx); + fpu_sha256_transform_avx2); } static int sha256_avx2_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha256_finup(desc, data, len, bytes_per_fpu_avx2, - out, sha256_transform_rorx); + out, fpu_sha256_transform_avx2); } static int sha256_avx2_final(struct shash_desc *desc, u8 *out) @@ -302,14 +341,14 @@ static int sha256_ni_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return _sha256_update(desc, data, len, bytes_per_fpu_shani, - sha256_ni_transform); + fpu_sha256_transform_shani); } static int sha256_ni_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha256_finup(desc, data, len, bytes_per_fpu_shani, - out, sha256_ni_transform); + out, fpu_sha256_transform_shani); } static int sha256_ni_final(struct shash_desc *desc, u8 *out) diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index feae85933270..48586ab40d55 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -47,6 +47,36 @@ static const unsigned int bytes_per_fpu_ssse3 = 17 * 1024; asmlinkage void sha512_transform_ssse3(struct sha512_state *state, const u8 *data, int blocks); +asmlinkage void sha512_transform_avx(struct sha512_state *state, + const u8 *data, int blocks); + +asmlinkage void sha512_transform_rorx(struct sha512_state *state, + const u8 *data, int blocks); + +static void fpu_sha512_transform_ssse3(struct sha512_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha512_transform_ssse3(state, data, blocks); + kernel_fpu_end(); +} + +static void 
fpu_sha512_transform_avx(struct sha512_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha512_transform_avx(state, data, blocks); + kernel_fpu_end(); +} + +static void fpu_sha512_transform_avx2(struct sha512_state *state, + const u8 *data, int blocks) +{ + kernel_fpu_begin(); + sha512_transform_rorx(state, data, blocks); + kernel_fpu_end(); +} + static int using_x86_ssse3; static int using_x86_avx; static int using_x86_avx2; @@ -70,9 +100,7 @@ static int sha512_update(struct shash_desc *desc, const u8 *data, while (len) { unsigned int chunk = min(len, bytes_per_fpu); - kernel_fpu_begin(); sha512_base_do_update(desc, data, chunk, sha512_xform); - kernel_fpu_end(); len -= chunk; data += chunk; @@ -91,17 +119,13 @@ static int sha512_finup(struct shash_desc *desc, const u8 *data, while (len) { unsigned int chunk = min(len, bytes_per_fpu); - kernel_fpu_begin(); sha512_base_do_update(desc, data, chunk, sha512_xform); - kernel_fpu_end(); len -= chunk; data += chunk; } - kernel_fpu_begin(); sha512_base_do_finalize(desc, sha512_xform); - kernel_fpu_end(); return sha512_base_finish(desc, out); } @@ -110,14 +134,14 @@ static int sha512_ssse3_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return sha512_update(desc, data, len, bytes_per_fpu_ssse3, - sha512_transform_ssse3); + fpu_sha512_transform_ssse3); } static int sha512_ssse3_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha512_finup(desc, data, len, bytes_per_fpu_ssse3, - out, sha512_transform_ssse3); + out, fpu_sha512_transform_ssse3); } /* Add padding and return the message digest. */ @@ -172,14 +196,14 @@ static int sha512_avx_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return sha512_update(desc, data, len, bytes_per_fpu_avx, - sha512_transform_avx); + fpu_sha512_transform_avx); } static int sha512_avx_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha512_finup(desc, data, len, bytes_per_fpu_avx, - out, sha512_transform_avx); + out, fpu_sha512_transform_avx); } /* Add padding and return the message digest. */ @@ -234,14 +258,14 @@ static int sha512_avx2_update(struct shash_desc *desc, const u8 *data, unsigned int len) { return sha512_update(desc, data, len, bytes_per_fpu_avx2, - sha512_transform_rorx); + fpu_sha512_transform_avx2); } static int sha512_avx2_finup(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { return sha512_finup(desc, data, len, bytes_per_fpu_avx2, - out, sha512_transform_rorx); + out, fpu_sha512_transform_avx2); } /* Add padding and return the message digest. 
*/

From patchwork Wed Nov 16 04:13:32 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20701
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 14/24] crypto: x86/sha - load based on CPU features
Date: Tue, 15 Nov 2022 22:13:32 -0600
Message-Id: <20221116041342.3841-15-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>
X-Proofpoint-Virus-Version:
vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-15_08,2022-11-15_03,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 adultscore=0 phishscore=0 suspectscore=0 clxscore=1015 priorityscore=1501 bulkscore=0 mlxlogscore=999 spamscore=0 lowpriorityscore=0 impostorscore=0 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211160029 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749624735924854727?= X-GMAIL-MSGID: =?utf-8?q?1749624735924854727?= Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU features"), add module aliases for x86-optimized crypto modules: sha1, sha256 based on CPU feature bits so udev gets a chance to load them later in the boot process when the filesystems are all running. Signed-off-by: Robert Elliott --- v3 put device table SHA_NI entries inside CONFIG_SHAn_NI ifdefs, ensure builds properly with arch/x86/Kconfig.assembler changed to not set CONFIG_AS_SHA*_NI --- arch/x86/crypto/sha1_ssse3_glue.c | 15 +++++++++++++++ arch/x86/crypto/sha256_ssse3_glue.c | 15 +++++++++++++++ 2 files changed, 30 insertions(+) diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c index 32f3310e19e2..806463f57b6d 100644 --- a/arch/x86/crypto/sha1_ssse3_glue.c +++ b/arch/x86/crypto/sha1_ssse3_glue.c @@ -24,6 +24,7 @@ #include #include #include +#include #include /* avoid kernel_fpu_begin/end scheduler/rcu stalls */ @@ -328,11 +329,25 @@ static void unregister_sha1_ni(void) static inline void unregister_sha1_ni(void) { } #endif +static const struct x86_cpu_id module_cpu_ids[] = { +#ifdef CONFIG_AS_SHA1_NI + X86_MATCH_FEATURE(X86_FEATURE_SHA_NI, NULL), +#endif + X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init sha1_ssse3_mod_init(void) { const char *feature_name; int ret; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + #ifdef CONFIG_AS_SHA1_NI /* SHA-NI */ if (boot_cpu_has(X86_FEATURE_SHA_NI)) { diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index 839da1b36273..30c8c50c1123 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -38,6 +38,7 @@ #include #include #include +#include #include /* avoid kernel_fpu_begin/end scheduler/rcu stalls */ @@ -399,11 +400,25 @@ static void unregister_sha256_ni(void) static inline void unregister_sha256_ni(void) { } #endif +static const struct x86_cpu_id module_cpu_ids[] = { +#ifdef CONFIG_AS_SHA256_NI + X86_MATCH_FEATURE(X86_FEATURE_SHA_NI, NULL), +#endif + X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init sha256_ssse3_mod_init(void) { const char *feature_name; int ret; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + #ifdef CONFIG_AS_SHA256_NI /* SHA-NI */ if 
(boot_cpu_has(X86_FEATURE_SHA_NI)) {

From patchwork Wed Nov 16 04:13:33 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20709
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 15/24] crypto: x86/crc - load based on CPU features
Date: Tue, 15 Nov 2022 22:13:33 -0600
Message-Id: <20221116041342.3841-16-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>
X-Proofpoint-Virus-Version:
vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-15_08,2022-11-15_03,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 lowpriorityscore=0 phishscore=0 adultscore=0 malwarescore=0 priorityscore=1501 impostorscore=0 spamscore=0 clxscore=1015 mlxscore=0 suspectscore=0 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211160029 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1748447989496474168?= X-GMAIL-MSGID: =?utf-8?q?1749624827455171294?= Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU features"), these x86-optimized crypto modules already have module aliases based on CPU feature bits: crc32, crc32c, and crct10dif Rename the unique device table data structure to a generic name so the code has the same pattern in all the modules. Remove the print on a device table mismatch from crc32 that is not present in the other modules. Modules are not supposed to print unless they are active. Signed-off-by: Robert Elliott --- arch/x86/crypto/crc32-pclmul_glue.c | 10 ++++------ arch/x86/crypto/crc32c-intel_glue.c | 6 +++--- arch/x86/crypto/crct10dif-pclmul_glue.c | 6 +++--- 3 files changed, 10 insertions(+), 12 deletions(-) diff --git a/arch/x86/crypto/crc32-pclmul_glue.c b/arch/x86/crypto/crc32-pclmul_glue.c index df3dbc754818..d5e889c24bea 100644 --- a/arch/x86/crypto/crc32-pclmul_glue.c +++ b/arch/x86/crypto/crc32-pclmul_glue.c @@ -182,20 +182,18 @@ static struct shash_alg alg = { } }; -static const struct x86_cpu_id crc32pclmul_cpu_id[] = { +static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL), {} }; -MODULE_DEVICE_TABLE(x86cpu, crc32pclmul_cpu_id); +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); static int __init crc32_pclmul_mod_init(void) { - - if (!x86_match_cpu(crc32pclmul_cpu_id)) { - pr_info("PCLMULQDQ-NI instructions are not detected.\n"); + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - } + return crypto_register_shash(&alg); } diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c index f08ed68ec93d..aff132e925ea 100644 --- a/arch/x86/crypto/crc32c-intel_glue.c +++ b/arch/x86/crypto/crc32c-intel_glue.c @@ -240,15 +240,15 @@ static struct shash_alg alg = { } }; -static const struct x86_cpu_id crc32c_cpu_id[] = { +static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_XMM4_2, NULL), {} }; -MODULE_DEVICE_TABLE(x86cpu, crc32c_cpu_id); +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); static int __init crc32c_intel_mod_init(void) { - if (!x86_match_cpu(crc32c_cpu_id)) + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; #ifdef CONFIG_X86_64 if (boot_cpu_has(X86_FEATURE_PCLMULQDQ)) { diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c index 4f6b8c727d88..a26dbd27da96 100644 --- a/arch/x86/crypto/crct10dif-pclmul_glue.c +++ b/arch/x86/crypto/crct10dif-pclmul_glue.c @@ -139,15 +139,15 @@ static struct shash_alg alg = { } }; -static const struct x86_cpu_id crct10dif_cpu_id[] = { 
+static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL), {} }; -MODULE_DEVICE_TABLE(x86cpu, crct10dif_cpu_id); +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); static int __init crct10dif_intel_mod_init(void) { - if (!x86_match_cpu(crct10dif_cpu_id)) + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; return crypto_register_shash(&alg); From patchwork Wed Nov 16 04:13:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20707 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3084438wru; Tue, 15 Nov 2022 20:17:37 -0800 (PST) X-Google-Smtp-Source: AA0mqf6quTV87IpBdWGH5frrRyFLq0pXXqzL5r8y7Q0fqGaicrfQuES9YdF9Aa19R+2JqUTmh5WQ X-Received: by 2002:a63:f94d:0:b0:46f:fe3e:3ebe with SMTP id q13-20020a63f94d000000b0046ffe3e3ebemr18900663pgk.518.1668572256887; Tue, 15 Nov 2022 20:17:36 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572256; cv=none; d=google.com; s=arc-20160816; b=VWdyaRoSqO3QXG2K8O56Te5eAo1IL/+87BBQbS/o0b5V3U4mdrWzCyCsdBz7JLkjPD JdhpicIlrn7D5g8dFF2uYbLcEd2XdvCT6ECzSwk+MK+UQ3ka1j+envojEv+GOLocMv4T AZBFF5lUtFoUUWpgBlFgp9DFLIhQLYgW23IXKwme4it2cpgmKlmM5y7EqmKbbXb6oydO OHKvnomok3HknHCy5oSKYJr8PiIcTlZJf7bVv4O5vUhx0YuZjlOJCtyZuLIneknxCQnc p1wJP+Jx4t+3Z/4YbnfTGLjvwOa7qLX76G43zhS8sEm3HbPRqLXs8g8XEJRpkUX7DGQh 1veA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=D3DfJIKsHFauPuLg8l8zt1cOt0x3vTUhVteBJZCL2WE=; b=vfNo5Gz9AjP20pTLk7QXyUBciOJVJh08YxwaYzy6QeKn9vs+we0GjUHMBeBkG7hMdL ZnkYRkxSKal326fHCLa1seQWUzHhxFTkJJHu4Xfa5FOiS5FHecYpmtqFHY2mQsrFDTac EG6Ne1jSrhcCnFbrLV1d6B7fu6iAR3KFOl0NPAOmAHW24b9Os6RV+waZzh7N/IXS8sgB 6MCPYrGurn0lct6vN0zv7+Wl8+dZTF8MOdGap9Em8nrdMOHHpSn9omi0H50oIHoF59u/ 4KHpdp4Flieqr2zurou9Ov0DaXRrVp2QJWWyOg4PJsoZncUKOV2w5SJDDLFwH4CYXjV2 7JFg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=I3h47jty; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id a64-20020a639043000000b00476ea7b91ddsi452595pge.51.2022.11.15.20.17.22; Tue, 15 Nov 2022 20:17:36 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=I3h47jty; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232373AbiKPEPh (ORCPT + 99 others); Tue, 15 Nov 2022 23:15:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54208 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231912AbiKPEO0 (ORCPT ); Tue, 15 Nov 2022 23:14:26 -0500 Received: from mx0b-002e3701.pphosted.com (mx0b-002e3701.pphosted.com [148.163.143.35]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 63C8931FA7; Tue, 15 Nov 2022 20:14:22 -0800 (PST) Received: from pps.filterd (m0150244.ppops.net [127.0.0.1]) by mx0b-002e3701.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AG4CaG2022245; Wed, 16 Nov 2022 04:14:13 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=hpe.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pps0720; bh=D3DfJIKsHFauPuLg8l8zt1cOt0x3vTUhVteBJZCL2WE=; b=I3h47jtyqI1/uAdnG6Dz8OOmpP4hcRv8bH3lW9oFuC6iVijpCQUTG+5UNJyBGHxfHcSZ trzNAFvQWAcHVqoO7W/b0Hw3qNCF17MTYuMm/EhEwSND/wwqiqlEB/r3m6exupj63cpZ nebvWBO6rO7VPl8WJ1/+0LsXMTSONzTjG1DPQjiHPZxa34U7MLIiiSiyS+ICtnji2UXY w4L9FNtVa3NvLgh9XsLOgtIOhwfbtA6c2429DmF2hyi9TijCpVPRcxFZPcGFxYOmAiU2 FPwzneU8cvjqRla8oJ5fwEV0nGbgY4K5yBBG5fRSExpx+lddMlHnvthY5FSeeacqcE8n Lg== Received: from p1lg14878.it.hpe.com (p1lg14878.it.hpe.com [16.230.97.204]) by mx0b-002e3701.pphosted.com (PPS) with ESMTPS id 3kvrmng09e-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 16 Nov 2022 04:14:13 +0000 Received: from p1lg14885.dc01.its.hpecorp.net (unknown [10.119.18.236]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by p1lg14878.it.hpe.com (Postfix) with ESMTPS id 944162EECF; Wed, 16 Nov 2022 04:14:12 +0000 (UTC) Received: from adevxp033-sys.us.rdlabs.hpecorp.net (unknown [16.231.227.36]) by p1lg14885.dc01.its.hpecorp.net (Postfix) with ESMTP id 1F7FD808B9A; Wed, 16 Nov 2022 04:14:12 +0000 (UTC) From: Robert Elliott To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Robert Elliott Subject: [PATCH v4 16/24] crypto: x86/sm3 - load based on CPU features Date: Tue, 15 Nov 2022 22:13:34 -0600 Message-Id: <20221116041342.3841-17-elliott@hpe.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221116041342.3841-1-elliott@hpe.com> References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com> MIME-Version: 1.0 X-Proofpoint-GUID: 0lkMSIZrD6OBxDlNv6JZ2MeDy4pF3oRr X-Proofpoint-ORIG-GUID: 0lkMSIZrD6OBxDlNv6JZ2MeDy4pF3oRr X-HPE-SCL: -1 X-Proofpoint-Virus-Version: vendor=baseguard 
engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-15_08,2022-11-15_03,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=999 lowpriorityscore=0 priorityscore=1501 mlxscore=0 suspectscore=0 phishscore=0 spamscore=0 adultscore=0 impostorscore=0 malwarescore=0 clxscore=1015 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211160029 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1748447972251599637?= X-GMAIL-MSGID: =?utf-8?q?1749624822826268543?= Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU features"), add module aliases for x86-optimized crypto modules: sm3 based on CPU feature bits so udev gets a chance to load them later in the boot process when the filesystems are all running. Signed-off-by: Robert Elliott --- v4 removed second AVX check that is unreachable --- arch/x86/crypto/sm3_avx_glue.c | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/arch/x86/crypto/sm3_avx_glue.c b/arch/x86/crypto/sm3_avx_glue.c index 483aaed996ba..c7786874319c 100644 --- a/arch/x86/crypto/sm3_avx_glue.c +++ b/arch/x86/crypto/sm3_avx_glue.c @@ -15,6 +15,7 @@ #include #include #include +#include #include /* avoid kernel_fpu_begin/end scheduler/rcu stalls */ @@ -119,14 +120,18 @@ static struct shash_alg sm3_avx_alg = { } }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init sm3_avx_mod_init(void) { const char *feature_name; - if (!boot_cpu_has(X86_FEATURE_AVX)) { - pr_info("AVX instruction are not detected.\n"); + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - } if (!boot_cpu_has(X86_FEATURE_BMI2)) { pr_info("BMI2 instruction are not detected.\n"); From patchwork Wed Nov 16 04:13:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20704 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3084340wru; Tue, 15 Nov 2022 20:17:13 -0800 (PST) X-Google-Smtp-Source: AA0mqf6vJ9FE7fCIMk5cJf6gpo7SKu93O7bDo307cn2S5giKQLglE7Hx5HbwXJ5iVZ0gZ/4RKIdH X-Received: by 2002:a17:90a:4d47:b0:200:2069:7702 with SMTP id l7-20020a17090a4d4700b0020020697702mr1722332pjh.239.1668572223649; Tue, 15 Nov 2022 20:17:03 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572223; cv=none; d=google.com; s=arc-20160816; b=XP9tlyA+pIFJi2poyXWUD1GStvoZgx1GQWzAMHLqsO/NCjDDLvCXNqAWz44/yiZJNX PHfyTVgjTO1sM5lSui/wqoufQkWt9tQk/e41C6j+LN2/8NMJ1bTOQxdRC5vw/sMbW/G0 7dI5FRKvwvflX5yRsnUVFTa4F0Sx+5NwpeUcNYCrmbvHdJz5CfODwh0Xr+A/D3k6EhjB E/5AxJXpXl1OKblOslZFjMLAsssxqYRHVkjPs2dtwgRDHQMTPZd7cxL2KiOBRg4bCDeg Hzm6Vc2PcnsgXrMyrqO6aLth5H2q43U0osMjhnqKMzGP5RP6dC6e9HJR/vPA3Xf9hkHq E3CQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 17/24] crypto: x86/poly - load based on CPU features
Date: Tue, 15 Nov 2022 22:13:35 -0600
Message-Id: <20221116041342.3841-18-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>

Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU
features"), these x86-optimized crypto modules already have module
aliases based on CPU feature bits:
    nhpoly1305
    poly1305
    polyval

Rename the unique device table data structure to a generic name so the
code has the same pattern in all the modules.

Remove the __maybe_unused attribute from polyval since it is always used.

Signed-off-by: Robert Elliott
---
v4  Removed CPU feature checks that are unreachable because the
    x86_match_cpu call already handles them.
    Made poly1305 match on all features since it does provide an x86_64
    asm function if avx, avx2, and avx512f are not available.
    Move polyval into this patch rather than pair with ghash.
    Remove __maybe_unused from polyval.
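All of the conversions in this series reduce to the same shape. The sketch below is illustrative only (a hypothetical example module with an arbitrarily chosen feature bit, not code taken from this patch): a generically named module_cpu_ids[] table feeds MODULE_DEVICE_TABLE() so udev can load the module from the CPU-feature alias, and the same table gates the init function through x86_match_cpu().

	/* Minimal sketch of the shared pattern; names and the feature bit
	 * are placeholders, not from this patch.
	 */
	#include <linux/module.h>
	#include <asm/cpufeatures.h>
	#include <asm/cpu_device_id.h>

	static const struct x86_cpu_id module_cpu_ids[] = {
		X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL),	/* feature the code needs */
		{}
	};
	MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);

	static int __init example_mod_init(void)
	{
		/* refuse to load on CPUs that lack the feature */
		if (!x86_match_cpu(module_cpu_ids))
			return -ENODEV;

		return 0;	/* a real module would register its algorithms here */
	}
	module_init(example_mod_init);

	static void __exit example_mod_exit(void)
	{
	}
	module_exit(example_mod_exit);

	MODULE_LICENSE("GPL");

With this arrangement the device table serves two purposes: it emits the alias that lets udev load the module once the CPU feature is known, and it is the single place where the init function checks that the feature is actually present.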
--- arch/x86/crypto/nhpoly1305-avx2-glue.c | 13 +++++++++++-- arch/x86/crypto/nhpoly1305-sse2-glue.c | 9 ++++++++- arch/x86/crypto/poly1305_glue.c | 10 ++++++++++ arch/x86/crypto/polyval-clmulni_glue.c | 6 +++--- 4 files changed, 32 insertions(+), 6 deletions(-) diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c index f7dc9c563bb5..fa415fec5793 100644 --- a/arch/x86/crypto/nhpoly1305-avx2-glue.c +++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c @@ -11,6 +11,7 @@ #include #include #include +#include #include /* avoid kernel_fpu_begin/end scheduler/rcu stalls */ @@ -60,10 +61,18 @@ static struct shash_alg nhpoly1305_alg = { .descsize = sizeof(struct nhpoly1305_state), }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init nhpoly1305_mod_init(void) { - if (!boot_cpu_has(X86_FEATURE_AVX2) || - !boot_cpu_has(X86_FEATURE_OSXSAVE)) + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + + if (!boot_cpu_has(X86_FEATURE_OSXSAVE)) return -ENODEV; return crypto_register_shash(&nhpoly1305_alg); diff --git a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c index daffcc7019ad..c47765e46236 100644 --- a/arch/x86/crypto/nhpoly1305-sse2-glue.c +++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c @@ -11,6 +11,7 @@ #include #include #include +#include #include /* avoid kernel_fpu_begin/end scheduler/rcu stalls */ @@ -60,9 +61,15 @@ static struct shash_alg nhpoly1305_alg = { .descsize = sizeof(struct nhpoly1305_state), }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_XMM2, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init nhpoly1305_mod_init(void) { - if (!boot_cpu_has(X86_FEATURE_XMM2)) + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; return crypto_register_shash(&nhpoly1305_alg); diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c index 16831c036d71..f1e39e23b2a3 100644 --- a/arch/x86/crypto/poly1305_glue.c +++ b/arch/x86/crypto/poly1305_glue.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include @@ -268,8 +269,17 @@ static struct shash_alg alg = { }, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init poly1305_simd_mod_init(void) { + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (boot_cpu_has(X86_FEATURE_AVX) && cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) static_branch_enable(&poly1305_use_avx); diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c index de1c908f7412..b98e32f8e2a4 100644 --- a/arch/x86/crypto/polyval-clmulni_glue.c +++ b/arch/x86/crypto/polyval-clmulni_glue.c @@ -176,15 +176,15 @@ static struct shash_alg polyval_alg = { }, }; -__maybe_unused static const struct x86_cpu_id pcmul_cpu_id[] = { +static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL), {} }; -MODULE_DEVICE_TABLE(x86cpu, pcmul_cpu_id); +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); static int __init polyval_clmulni_mod_init(void) { - if (!x86_match_cpu(pcmul_cpu_id)) + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_AVX)) From patchwork Wed Nov 16 04:13:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20715 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3087964wru; Tue, 15 Nov 2022 20:32:57 -0800 (PST) X-Google-Smtp-Source: AA0mqf7uBIr0ai1TV7YIquySWbCL74+YXjwwaF36vTBrpaBYFvwGYGT4yb77xfL15H765RaxpmB7 X-Received: by 2002:a17:90a:b946:b0:213:d7cc:39cb with SMTP id f6-20020a17090ab94600b00213d7cc39cbmr1827599pjw.144.1668573176786; Tue, 15 Nov 2022 20:32:56 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668573176; cv=none; d=google.com; s=arc-20160816; b=h5FkYk38o/q3bALnsmAl3md60Hvg0TAALtjBr7QmvJfT8w4a2z0GFvQv2ffg8KFzbZ iKzQNvqjtHmo3Nx/zEqY1PDMQow/uTjyzqkw3rElEdz4hdlBvrEDfAm3NW/aCm/IkF/l QZnFngsE8bqFmVSezYO90MEMeJFBu29gh1d9D6qvdvb8bUVh1Yfz5Uf0WzWTTTJaJePH VJyle3+qInBZWkT7du8ExBFLHfucs6VDzEWV78VfQcxDNhHlXBMCCXLE3tUk5V2I2qTx UL72FwUMp4Z3y8Z8KuojfHk8FAuDm9FvUDjskvD9V2BFjakucgY8RjQxJ/b/IdLQIibk QeHw== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=4jOIa0LKBXHh2Ll5Fgpe6Cbe58VzxCQ8HYLa+/r8boU=; b=sjAuB8bAIjiKyPcNbVMdx7EAd2OCK5KrnMz4zo98o+YmkHIAlpp5ow0BZIYzediXqr +qDCs/8boK7mNRR4ZYuG53/41Sv8IsWsTSPcLt3Vbhw9DYwnWni6HQ62gxzRAoS+Sqyh K9UCX8Jli/kUodmVefP3j5Xig1E6l3gXH6Rrw1PDfttpZK5LQ3EfXOUkPb08Tmuoim75 yN6V3b8YyFV0oBok/b7cXa9xVTDsHRKOT9x5RaZTgxA/wcYsDwiQ/tX+1rfepyMqm8Eg IH74FEN18p6nkBBXmar9jMZRF3QGVufi5TwANChDcs8YZBE5xRZWH1c/SwfHlZ0301Gr Mk+Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=nxQ1HgbN; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id cp1-20020a170902e78100b00186658dbbcesi13161127plb.339.2022.11.15.20.32.41; Tue, 15 Nov 2022 20:32:56 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=nxQ1HgbN; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230083AbiKPEQs (ORCPT + 99 others); Tue, 15 Nov 2022 23:16:48 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54352 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232126AbiKPEO4 (ORCPT ); Tue, 15 Nov 2022 23:14:56 -0500 Received: from mx0b-002e3701.pphosted.com (mx0b-002e3701.pphosted.com [148.163.143.35]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 02E1131EF2; Tue, 15 Nov 2022 20:14:29 -0800 (PST) Received: from pps.filterd (m0150244.ppops.net [127.0.0.1]) by mx0b-002e3701.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AG4CbvL022259; Wed, 16 Nov 2022 04:14:17 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=hpe.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pps0720; bh=4jOIa0LKBXHh2Ll5Fgpe6Cbe58VzxCQ8HYLa+/r8boU=; b=nxQ1HgbNaQZ0NUphmOGWelg5sN6A15ZiJGA8Jh8h8pyHkU2ENSkM+qi2ZcnfmNhxTIte lawV4R+MdIwMBEPWi3R5sPK3FPS1kZ0p3WunbzrX/wqb52k/sHCjlPKfHbTqoerqgpF/ WQlV2HvR2nd3nD8ELmhSb63BpZT5w3AEHgYcfGmra5ABXlO0D/a3v34zdarI75AhXCH6 P4lwrZl5R1nKcGCX9V/Uo8l5sGZi3YUGyuXRcRQb3LIWos5mjgqKWPtQQXPldmSkifpo FL/bQBBe5XtJESiMWixDjawiyrlBJTepuMNqwpyqJJI0l+vzkgqQ6eA1l18brN5JXSnu EQ== Received: from p1lg14879.it.hpe.com (p1lg14879.it.hpe.com [16.230.97.200]) by mx0b-002e3701.pphosted.com (PPS) with ESMTPS id 3kvrmng09p-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 16 Nov 2022 04:14:17 +0000 Received: from p1lg14885.dc01.its.hpecorp.net (unknown [10.119.18.236]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by p1lg14879.it.hpe.com (Postfix) with ESMTPS id 4A03D4B5E0; Wed, 16 Nov 2022 04:14:15 +0000 (UTC) Received: from adevxp033-sys.us.rdlabs.hpecorp.net (unknown [16.231.227.36]) by p1lg14885.dc01.its.hpecorp.net (Postfix) with ESMTP id C1C548058DE; Wed, 16 Nov 2022 04:14:14 +0000 (UTC) From: Robert Elliott To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Robert Elliott Subject: [PATCH v4 18/24] crypto: x86/ghash - load based on CPU features Date: Tue, 15 Nov 2022 22:13:36 -0600 Message-Id: <20221116041342.3841-19-elliott@hpe.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221116041342.3841-1-elliott@hpe.com> References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com> MIME-Version: 1.0 X-Proofpoint-GUID: flpUPOjh0lI0Hb5cDcS50prqvbbal9Qx X-Proofpoint-ORIG-GUID: flpUPOjh0lI0Hb5cDcS50prqvbbal9Qx X-HPE-SCL: -1 X-Proofpoint-Virus-Version: 
vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-15_08,2022-11-15_03,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=999 lowpriorityscore=0 priorityscore=1501 mlxscore=0 suspectscore=0 phishscore=0 spamscore=0 adultscore=0 impostorscore=0 malwarescore=0 clxscore=1015 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211160029 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749625786984787813?= X-GMAIL-MSGID: =?utf-8?q?1749625786984787813?= Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU features"), these x86-optimized crypto modules already have module aliases based on CPU feature bits: ghash Rename the unique device table data structure to a generic name so the code has the same pattern in all the modules. Signed-off-by: Robert Elliott --- v4 move polyval into a separate patch --- arch/x86/crypto/ghash-clmulni-intel_glue.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c index 0f24c3b23fd2..d19a8e9b34a6 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_glue.c +++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c @@ -325,17 +325,17 @@ static struct ahash_alg ghash_async_alg = { }, }; -static const struct x86_cpu_id pcmul_cpu_id[] = { +static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL), /* Pickle-Mickle-Duck */ {} }; -MODULE_DEVICE_TABLE(x86cpu, pcmul_cpu_id); +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); static int __init ghash_pclmulqdqni_mod_init(void) { int err; - if (!x86_match_cpu(pcmul_cpu_id)) + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; err = crypto_register_shash(&ghash_alg); From patchwork Wed Nov 16 04:13:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20712 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3084548wru; Tue, 15 Nov 2022 20:18:01 -0800 (PST) X-Google-Smtp-Source: AA0mqf4zTzYndnkFxsclSalqjXKYt97TSAaKbh3n/UnVBb9eUQxXOb00sR2962uwqjUxsDO9C3MW X-Received: by 2002:a17:90b:3e8b:b0:213:2411:50e8 with SMTP id rj11-20020a17090b3e8b00b00213241150e8mr1708417pjb.181.1668572280878; Tue, 15 Nov 2022 20:18:00 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572280; cv=none; d=google.com; s=arc-20160816; b=Of3DxAvQd6rnfDxDI4OcsoRuuNEWU3sW7vY8s7O76jYq9eTmJpB7MrwSIHEQu3F07i b3gDn+m5TwqKDtwSo4LWpyA97d8M0vTiRiPxYWCdupFxVpedOpzvshAdkh6txqSC1v9z qOicANuNjkle6QL8ZelssqcSnSrZKKRsmK9KstlLvF1YDpgGT2y1rfOQNE8IJ7nc7Bep zwOqcqyA7RdJNurNjGCoP95TGHFvYA9khvEjo153K0d41akjHfmseC/WYAJwk4XREEOW nlCzyvlcbISX2OBNxkJBQ/b0sLhq+9sWkS9CWn52kWmROXuweaRHA/I4cGOQJELYKxu4 R69A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 19/24] crypto: x86/aesni - avoid type conversions
Date: Tue, 15 Nov 2022 22:13:37 -0600
Message-Id: <20221116041342.3841-20-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>

Change the type of the GCM auth_tag_len argument and derivative variables
from unsigned long to unsigned int, so they preserve the type returned by
crypto_aead_authsize(). Continue to pass it to the asm functions as an
unsigned long, but let those function calls be the place where the
conversion to the possibly larger type occurs.

This avoids possible truncation for calculations like:

	scatterwalk_map_and_copy(auth_tag_msg, req->src,
				 req->assoclen + req->cryptlen - auth_tag_len,
				 auth_tag_len, 0);

whose third argument is an unsigned int. If unsigned long were bigger than
unsigned int, that equation could wrap.

Use unsigned int rather than int for intermediate variables containing
byte counts and block counts, since all the functions using them accept
unsigned int arguments.
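The narrowing this guards against is easy to see outside the kernel. The toy program below is illustrative only (user-space C with a made-up length and a hypothetical copy_len() helper, not code from this patch): an unsigned long byte count larger than UINT_MAX is silently truncated at the boundary of a function that takes an unsigned int, which is why the patch keeps these counts in unsigned int from crypto_aead_authsize() onward.

	#include <stdio.h>

	/* Stand-in for an API that, like scatterwalk_map_and_copy()'s
	 * length arguments, takes an unsigned int byte count.
	 */
	static void copy_len(unsigned int nbytes)
	{
		printf("copying %u bytes\n", nbytes);
	}

	int main(void)
	{
		/* hypothetical oversized length; exceeds UINT_MAX on LP64 */
		unsigned long len = 0x100000004UL;

		copy_len(len);	/* silently truncated to 4 on LP64 targets */
		return 0;
	}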
Signed-off-by: Robert Elliott --- arch/x86/crypto/aesni-intel_glue.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index a5b0cb3efeba..921680373855 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -381,7 +381,7 @@ static int cts_cbc_encrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm)); - int cbc_blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; + unsigned int cbc_blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; struct scatterlist *src = req->src, *dst = req->dst; struct scatterlist sg_src[2], sg_dst[2]; struct skcipher_request subreq; @@ -437,7 +437,7 @@ static int cts_cbc_decrypt(struct skcipher_request *req) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm)); - int cbc_blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; + unsigned int cbc_blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; struct scatterlist *src = req->src, *dst = req->dst; struct scatterlist sg_src[2], sg_dst[2]; struct skcipher_request subreq; @@ -671,11 +671,11 @@ static int generic_gcmaes_set_authsize(struct crypto_aead *tfm, static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, unsigned int assoclen, u8 *hash_subkey, u8 *iv, void *aes_ctx, u8 *auth_tag, - unsigned long auth_tag_len) + unsigned int auth_tag_len) { u8 databuf[sizeof(struct gcm_context_data) + (AESNI_ALIGN - 8)] __aligned(8); struct gcm_context_data *data = PTR_ALIGN((void *)databuf, AESNI_ALIGN); - unsigned long left = req->cryptlen; + unsigned int left = req->cryptlen; struct scatter_walk assoc_sg_walk; struct skcipher_walk walk; bool do_avx, do_avx2; @@ -782,7 +782,7 @@ static int gcmaes_encrypt(struct aead_request *req, unsigned int assoclen, u8 *hash_subkey, u8 *iv, void *aes_ctx) { struct crypto_aead *tfm = crypto_aead_reqtfm(req); - unsigned long auth_tag_len = crypto_aead_authsize(tfm); + unsigned int auth_tag_len = crypto_aead_authsize(tfm); u8 auth_tag[16]; int err; @@ -801,7 +801,7 @@ static int gcmaes_decrypt(struct aead_request *req, unsigned int assoclen, u8 *hash_subkey, u8 *iv, void *aes_ctx) { struct crypto_aead *tfm = crypto_aead_reqtfm(req); - unsigned long auth_tag_len = crypto_aead_authsize(tfm); + unsigned int auth_tag_len = crypto_aead_authsize(tfm); u8 auth_tag_msg[16]; u8 auth_tag[16]; int err; @@ -907,7 +907,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt) { struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm); - int tail = req->cryptlen % AES_BLOCK_SIZE; + unsigned int tail = req->cryptlen % AES_BLOCK_SIZE; struct skcipher_request subreq; struct skcipher_walk walk; int err; @@ -920,7 +920,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt) return err; if (unlikely(tail > 0 && walk.nbytes < walk.total)) { - int blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; + unsigned int blocks = DIV_ROUND_UP(req->cryptlen, AES_BLOCK_SIZE) - 2; skcipher_walk_abort(&walk); @@ -945,7 +945,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt) aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv); while (walk.nbytes > 0) { - int nbytes = walk.nbytes; + unsigned int nbytes = walk.nbytes; if (nbytes < walk.total) nbytes &= ~(AES_BLOCK_SIZE - 1); From 
patchwork Wed Nov 16 04:13:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20706 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3084387wru; Tue, 15 Nov 2022 20:17:26 -0800 (PST) X-Google-Smtp-Source: AA0mqf4AtoavOwQ4C5LUTUxYMkrReEwpQbzMSvA8Qxsy42qmJrs9dPk/uEg5UjXhYpAn65Sjpv25 X-Received: by 2002:a17:903:3255:b0:183:8006:3338 with SMTP id ji21-20020a170903325500b0018380063338mr7158106plb.125.1668572246213; Tue, 15 Nov 2022 20:17:26 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1668572246; cv=none; d=google.com; s=arc-20160816; b=QMfBy4Ttfjr2EWDolIbRlafHwkWdacQ8EpxqBJ8ORS3yICd2Ec/lURl4LW0fRT5oLy gXuiHS0OeKFhlJhkbmb0uUmkTDydHGNp7Fl9iVaDooF0ell3fkKhsyPJBY0f+fJ2sUp4 E/dJd6WwfGYBRPgE7kuNCdCOj9loGf4/rmguWKLOHCpqdsYKkI/zVwiHGTzREZVmi4iN 82hlCHHePQRAkBvWeWNdovqi3wOyydrqVQv4Udb8YCiW9cVnmracXAkijQD6gCyMUCl0 pRbaGjS8qT/oFtkGcVxdy06LC6MmZEfScTZVaIejor/botwXVM8aeV9RpdOP5ti+sPau +6dQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=9Vrn3Lcfdc63P//BKRWW0n5WCD4BkxzoQc4vkqFvrJ4=; b=ll/nVYFYe1UFHsutc1xP5ssbWE7nSrUsCLqnzoLT6KUOJU2YZ2dru9dHuwOUtlbzsy VzCnKQpZRdOd+9ZQmh0dNsEJ+5vc4CxW8nH/7LDsS0luB3uSo2rzum6ZNLGaERkVqDsN 7EmkHFAHk2CaCZBht24jZ9CFcaghiCZA32pw49MrHTkeYfmVCbpOiyunto6LoQ4ZOtuS uISd9/+NMx63x4dmTIwQ8PK4mBO6xBKm8TFQEmy6SYkJW6WJfreLLpKWove+wToDPrTF D59S7PTKztRuk9asy0rWg80NUQxE6ubRDYwtC6E7wuIH+wxIXXSbfwbk/gUpq5eOwo96 XO4Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=g12To9Hg; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: from out1.vger.email (out1.vger.email. 
[2620:137:e000::1:20]) by mx.google.com with ESMTP id w11-20020a170902d3cb00b00188bd728bc9si8141958plb.624.2022.11.15.20.17.13; Tue, 15 Nov 2022 20:17:26 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; dkim=pass header.i=@hpe.com header.s=pps0720 header.b=g12To9Hg; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=hpe.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232116AbiKPEQg (ORCPT + 99 others); Tue, 15 Nov 2022 23:16:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54302 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231821AbiKPEOy (ORCPT ); Tue, 15 Nov 2022 23:14:54 -0500 Received: from mx0a-002e3701.pphosted.com (mx0a-002e3701.pphosted.com [148.163.147.86]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BAF7932075; Tue, 15 Nov 2022 20:14:27 -0800 (PST) Received: from pps.filterd (m0134422.ppops.net [127.0.0.1]) by mx0b-002e3701.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 2AG405Sx022067; Wed, 16 Nov 2022 04:14:18 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=hpe.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding; s=pps0720; bh=9Vrn3Lcfdc63P//BKRWW0n5WCD4BkxzoQc4vkqFvrJ4=; b=g12To9Hgp8+ki2cPD7TZU/b3mY4ZW1tdICF781eb0lhMK+fvt/53XROzdTgmP4fP5cpQ mGtSpmERKASB9Fx9t4AoZMmZQNMsETkLrZuihBLbQEkv6Igp5XMegbCvszOCaISfjyQA HZ479s0hJhJ4Sb3ouriHsqX3JYXMMxuW8UwLehGnoCc/prgPLxCT79ZlL9f1qsMyH1cO CnEd2ld8EuS6xDMLRAtfc8yORkKHmSaMIRY93WcyWMl7sZPs1pPh4K2QIN02gGz+WSWD 26vncZIURuEMWekJoncGtrAW7thheacC+QEhoPHNck+Y3xz2FocTarB2ZtupoLCipW1O tg== Received: from p1lg14881.it.hpe.com (p1lg14881.it.hpe.com [16.230.97.202]) by mx0b-002e3701.pphosted.com (PPS) with ESMTPS id 3kvrew82xf-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Wed, 16 Nov 2022 04:14:18 +0000 Received: from p1lg14885.dc01.its.hpecorp.net (unknown [10.119.18.236]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by p1lg14881.it.hpe.com (Postfix) with ESMTPS id CC405809F55; Wed, 16 Nov 2022 04:14:17 +0000 (UTC) Received: from adevxp033-sys.us.rdlabs.hpecorp.net (unknown [16.231.227.36]) by p1lg14885.dc01.its.hpecorp.net (Postfix) with ESMTP id 557EA80FE88; Wed, 16 Nov 2022 04:14:17 +0000 (UTC) From: Robert Elliott To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org Cc: Robert Elliott Subject: [PATCH v4 20/24] crypto: x86/ciphers - load based on CPU features Date: Tue, 15 Nov 2022 22:13:38 -0600 Message-Id: <20221116041342.3841-21-elliott@hpe.com> X-Mailer: git-send-email 2.38.1 In-Reply-To: <20221116041342.3841-1-elliott@hpe.com> References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: veUzpVFdiJqBLXq4d6paoFfIICqaJzCv X-Proofpoint-GUID: veUzpVFdiJqBLXq4d6paoFfIICqaJzCv X-HPE-SCL: -1 X-Proofpoint-Virus-Version: 
vendor=baseguard engine=ICAP:2.0.219,Aquarius:18.0.895,Hydra:6.0.545,FMLib:17.11.122.1 definitions=2022-11-15_08,2022-11-15_03,2022-06-22_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 bulkscore=0 mlxscore=0 impostorscore=0 adultscore=0 lowpriorityscore=0 clxscore=1015 priorityscore=1501 malwarescore=0 phishscore=0 suspectscore=0 spamscore=0 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2210170000 definitions=main-2211160029 X-Spam-Status: No, score=-2.8 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_LOW, SPF_HELO_NONE,SPF_NONE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org X-getmail-retrieved-from-mailbox: =?utf-8?q?INBOX?= X-GMAIL-THRID: =?utf-8?q?1749624811873116853?= X-GMAIL-MSGID: =?utf-8?q?1749624811873116853?= Like commit aa031b8f702e ("crypto: x86/sha512 - load based on CPU features"), add module aliases based on CPU feature bits for modules not implementing hash algorithms: aegis, aesni, aria blake2s, blowfish camellia, cast5, cast6, chacha, curve25519 des3_ede serpent, sm4 twofish Signed-off-by: Robert Elliott --- v4 Remove CPU feature checks that are unreachable because x86_match_cpu already handles them. Make curve25519 match on ADX and check BMI2. --- arch/x86/crypto/aegis128-aesni-glue.c | 10 +++++++++- arch/x86/crypto/aesni-intel_glue.c | 6 +++--- arch/x86/crypto/aria_aesni_avx_glue.c | 15 ++++++++++++--- arch/x86/crypto/blake2s-glue.c | 12 +++++++++++- arch/x86/crypto/blowfish_glue.c | 10 ++++++++++ arch/x86/crypto/camellia_aesni_avx2_glue.c | 17 +++++++++++++---- arch/x86/crypto/camellia_aesni_avx_glue.c | 15 ++++++++++++--- arch/x86/crypto/camellia_glue.c | 10 ++++++++++ arch/x86/crypto/cast5_avx_glue.c | 10 ++++++++++ arch/x86/crypto/cast6_avx_glue.c | 10 ++++++++++ arch/x86/crypto/chacha_glue.c | 11 +++++++++-- arch/x86/crypto/curve25519-x86_64.c | 19 ++++++++++++++----- arch/x86/crypto/des3_ede_glue.c | 10 ++++++++++ arch/x86/crypto/serpent_avx2_glue.c | 14 ++++++++++++-- arch/x86/crypto/serpent_avx_glue.c | 10 ++++++++++ arch/x86/crypto/serpent_sse2_glue.c | 11 ++++++++--- arch/x86/crypto/sm4_aesni_avx2_glue.c | 13 +++++++++++-- arch/x86/crypto/sm4_aesni_avx_glue.c | 15 ++++++++++++--- arch/x86/crypto/twofish_avx_glue.c | 10 ++++++++++ arch/x86/crypto/twofish_glue.c | 10 ++++++++++ arch/x86/crypto/twofish_glue_3way.c | 10 ++++++++++ 21 files changed, 216 insertions(+), 32 deletions(-) diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c index 6e96bdda2811..a3ebd018953c 100644 --- a/arch/x86/crypto/aegis128-aesni-glue.c +++ b/arch/x86/crypto/aegis128-aesni-glue.c @@ -282,12 +282,20 @@ static struct aead_alg crypto_aegis128_aesni_alg = { } }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AES, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_aead_alg *simd_alg; static int __init crypto_aegis128_aesni_module_init(void) { + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!boot_cpu_has(X86_FEATURE_XMM2) || - !boot_cpu_has(X86_FEATURE_AES) || !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL)) return -ENODEV; diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 921680373855..0505d4f9d2a2 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ 
b/arch/x86/crypto/aesni-intel_glue.c @@ -1228,17 +1228,17 @@ static struct aead_alg aesni_aeads[0]; static struct simd_aead_alg *aesni_simd_aeads[ARRAY_SIZE(aesni_aeads)]; -static const struct x86_cpu_id aesni_cpu_id[] = { +static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_AES, NULL), {} }; -MODULE_DEVICE_TABLE(x86cpu, aesni_cpu_id); +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); static int __init aesni_init(void) { int err; - if (!x86_match_cpu(aesni_cpu_id)) + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; #ifdef CONFIG_X86_64 if (boot_cpu_has(X86_FEATURE_AVX2)) { diff --git a/arch/x86/crypto/aria_aesni_avx_glue.c b/arch/x86/crypto/aria_aesni_avx_glue.c index c561ea4fefa5..6a135203a767 100644 --- a/arch/x86/crypto/aria_aesni_avx_glue.c +++ b/arch/x86/crypto/aria_aesni_avx_glue.c @@ -5,6 +5,7 @@ * Copyright (c) 2022 Taehee Yoo */ +#include #include #include #include @@ -165,14 +166,22 @@ static struct skcipher_alg aria_algs[] = { static struct simd_skcipher_alg *aria_simd_algs[ARRAY_SIZE(aria_algs)]; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init aria_avx_init(void) { const char *feature_name; - if (!boot_cpu_has(X86_FEATURE_AVX) || - !boot_cpu_has(X86_FEATURE_AES) || + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + + if (!boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AVX or AES-NI instructions are not detected.\n"); + pr_info("AES or OSXSAVE instructions are not detected.\n"); return -ENODEV; } diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c index aaba21230528..df757d18a35a 100644 --- a/arch/x86/crypto/blake2s-glue.c +++ b/arch/x86/crypto/blake2s-glue.c @@ -10,7 +10,7 @@ #include #include #include - +#include #include #include #include @@ -55,8 +55,18 @@ void blake2s_compress(struct blake2s_state *state, const u8 *block, } EXPORT_SYMBOL(blake2s_compress); +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL), + X86_MATCH_FEATURE(X86_FEATURE_AVX512VL, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init blake2s_mod_init(void) { + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (boot_cpu_has(X86_FEATURE_SSSE3)) static_branch_enable(&blake2s_use_ssse3); diff --git a/arch/x86/crypto/blowfish_glue.c b/arch/x86/crypto/blowfish_glue.c index 019c64c1340a..4c0ead71b198 100644 --- a/arch/x86/crypto/blowfish_glue.c +++ b/arch/x86/crypto/blowfish_glue.c @@ -15,6 +15,7 @@ #include #include #include +#include /* regular block cipher functions */ asmlinkage void __blowfish_enc_blk(struct bf_ctx *ctx, u8 *dst, const u8 *src, @@ -303,10 +304,19 @@ static int force; module_param(force, int, 0); MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist"); +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init blowfish_init(void) { int err; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!force && is_blacklisted_cpu()) { printk(KERN_INFO "blowfish-x86_64: performance on this CPU " diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c index e7e4d64e9577..6c48fc9f3fde 100644 --- a/arch/x86/crypto/camellia_aesni_avx2_glue.c +++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c @@ -11,6 +11,7 @@ #include #include #include 
+#include #include "camellia.h" #include "ecb_cbc_helpers.h" @@ -98,17 +99,25 @@ static struct skcipher_alg camellia_algs[] = { }, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg *camellia_simd_algs[ARRAY_SIZE(camellia_algs)]; static int __init camellia_aesni_init(void) { const char *feature_name; - if (!boot_cpu_has(X86_FEATURE_AVX) || - !boot_cpu_has(X86_FEATURE_AVX2) || - !boot_cpu_has(X86_FEATURE_AES) || + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + + if (!boot_cpu_has(X86_FEATURE_AES) || + !boot_cpu_has(X86_FEATURE_AVX) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AVX2 or AES-NI instructions are not detected.\n"); + pr_info("AES-NI, AVX, or OSXSAVE instructions are not detected.\n"); return -ENODEV; } diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c index c7ccf63e741e..6d7fc96d242e 100644 --- a/arch/x86/crypto/camellia_aesni_avx_glue.c +++ b/arch/x86/crypto/camellia_aesni_avx_glue.c @@ -11,6 +11,7 @@ #include #include #include +#include #include "camellia.h" #include "ecb_cbc_helpers.h" @@ -98,16 +99,24 @@ static struct skcipher_alg camellia_algs[] = { } }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg *camellia_simd_algs[ARRAY_SIZE(camellia_algs)]; static int __init camellia_aesni_init(void) { const char *feature_name; - if (!boot_cpu_has(X86_FEATURE_AVX) || - !boot_cpu_has(X86_FEATURE_AES) || + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + + if (!boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AVX or AES-NI instructions are not detected.\n"); + pr_info("AES-NI or OSXSAVE instructions are not detected.\n"); return -ENODEV; } diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c index d45e9c0c42ac..a3df1043ed73 100644 --- a/arch/x86/crypto/camellia_glue.c +++ b/arch/x86/crypto/camellia_glue.c @@ -8,6 +8,7 @@ * Copyright (C) 2006 NTT (Nippon Telegraph and Telephone Corporation) */ +#include #include #include #include @@ -1377,10 +1378,19 @@ static int force; module_param(force, int, 0); MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist"); +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init camellia_init(void) { int err; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!force && is_blacklisted_cpu()) { printk(KERN_INFO "camellia-x86_64: performance on this CPU " diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c index 3976a87f92ad..bdc3c763334c 100644 --- a/arch/x86/crypto/cast5_avx_glue.c +++ b/arch/x86/crypto/cast5_avx_glue.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "ecb_cbc_helpers.h" @@ -93,12 +94,21 @@ static struct skcipher_alg cast5_algs[] = { } }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg *cast5_simd_algs[ARRAY_SIZE(cast5_algs)]; static int __init cast5_init(void) { const char *feature_name; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, &feature_name)) { 
pr_info("CPU feature '%s' is not supported.\n", feature_name); diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c index 7e2aea372349..addca34b3511 100644 --- a/arch/x86/crypto/cast6_avx_glue.c +++ b/arch/x86/crypto/cast6_avx_glue.c @@ -15,6 +15,7 @@ #include #include #include +#include #include "ecb_cbc_helpers.h" @@ -93,12 +94,21 @@ static struct skcipher_alg cast6_algs[] = { }, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg *cast6_simd_algs[ARRAY_SIZE(cast6_algs)]; static int __init cast6_init(void) { const char *feature_name; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, &feature_name)) { pr_info("CPU feature '%s' is not supported.\n", feature_name); diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c index 7b3a1cf0984b..546ab0abf30c 100644 --- a/arch/x86/crypto/chacha_glue.c +++ b/arch/x86/crypto/chacha_glue.c @@ -13,6 +13,7 @@ #include #include #include +#include #include asmlinkage void chacha_block_xor_ssse3(u32 *state, u8 *dst, const u8 *src, @@ -276,10 +277,16 @@ static struct skcipher_alg algs[] = { }, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init chacha_simd_mod_init(void) { - if (!boot_cpu_has(X86_FEATURE_SSSE3)) - return 0; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; static_branch_enable(&chacha_use_simd); diff --git a/arch/x86/crypto/curve25519-x86_64.c b/arch/x86/crypto/curve25519-x86_64.c index d55fa9e9b9e6..ae7536b17bf9 100644 --- a/arch/x86/crypto/curve25519-x86_64.c +++ b/arch/x86/crypto/curve25519-x86_64.c @@ -12,7 +12,7 @@ #include #include #include - +#include #include #include @@ -1697,13 +1697,22 @@ static struct kpp_alg curve25519_alg = { .max_size = curve25519_max_size, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_ADX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); static int __init curve25519_mod_init(void) { - if (boot_cpu_has(X86_FEATURE_BMI2) && boot_cpu_has(X86_FEATURE_ADX)) - static_branch_enable(&curve25519_use_bmi2_adx); - else - return 0; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + + if (!boot_cpu_has(X86_FEATURE_BMI2)) + return -ENODEV; + + static_branch_enable(&curve25519_use_bmi2_adx); + return IS_REACHABLE(CONFIG_CRYPTO_KPP) ? 
crypto_register_kpp(&curve25519_alg) : 0; } diff --git a/arch/x86/crypto/des3_ede_glue.c b/arch/x86/crypto/des3_ede_glue.c index abb8b1fe123b..168cac5c6ca6 100644 --- a/arch/x86/crypto/des3_ede_glue.c +++ b/arch/x86/crypto/des3_ede_glue.c @@ -15,6 +15,7 @@ #include #include #include +#include struct des3_ede_x86_ctx { struct des3_ede_ctx enc; @@ -354,10 +355,19 @@ static int force; module_param(force, int, 0); MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist"); +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init des3_ede_x86_init(void) { int err; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!force && is_blacklisted_cpu()) { pr_info("des3_ede-x86_64: performance on this CPU would be suboptimal: disabling des3_ede-x86_64.\n"); return -ENODEV; diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c index 347e97f4b713..bc18149fb928 100644 --- a/arch/x86/crypto/serpent_avx2_glue.c +++ b/arch/x86/crypto/serpent_avx2_glue.c @@ -12,6 +12,7 @@ #include #include #include +#include #include "serpent-avx.h" #include "ecb_cbc_helpers.h" @@ -94,14 +95,23 @@ static struct skcipher_alg serpent_algs[] = { }, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg *serpent_simd_algs[ARRAY_SIZE(serpent_algs)]; static int __init serpent_avx2_init(void) { const char *feature_name; - if (!boot_cpu_has(X86_FEATURE_AVX2) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AVX2 instructions are not detected.\n"); + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + + if (!boot_cpu_has(X86_FEATURE_OSXSAVE)) { + pr_info("OSXSAVE instructions are not detected.\n"); return -ENODEV; } if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c index 6c248e1ea4ef..0db18d99da50 100644 --- a/arch/x86/crypto/serpent_avx_glue.c +++ b/arch/x86/crypto/serpent_avx_glue.c @@ -15,6 +15,7 @@ #include #include #include +#include #include "serpent-avx.h" #include "ecb_cbc_helpers.h" @@ -100,12 +101,21 @@ static struct skcipher_alg serpent_algs[] = { }, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg *serpent_simd_algs[ARRAY_SIZE(serpent_algs)]; static int __init serpent_init(void) { const char *feature_name; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, &feature_name)) { pr_info("CPU feature '%s' is not supported.\n", feature_name); diff --git a/arch/x86/crypto/serpent_sse2_glue.c b/arch/x86/crypto/serpent_sse2_glue.c index d78f37e9b2cf..74f0c89f55ef 100644 --- a/arch/x86/crypto/serpent_sse2_glue.c +++ b/arch/x86/crypto/serpent_sse2_glue.c @@ -20,6 +20,7 @@ #include #include #include +#include #include "serpent-sse2.h" #include "ecb_cbc_helpers.h" @@ -103,14 +104,18 @@ static struct skcipher_alg serpent_algs[] = { }, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_XMM2, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg *serpent_simd_algs[ARRAY_SIZE(serpent_algs)]; static int __init serpent_sse2_init(void) { - if 
(!boot_cpu_has(X86_FEATURE_XMM2)) { - printk(KERN_INFO "SSE2 instructions are not detected.\n"); + if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - } return simd_register_skciphers_compat(serpent_algs, ARRAY_SIZE(serpent_algs), diff --git a/arch/x86/crypto/sm4_aesni_avx2_glue.c b/arch/x86/crypto/sm4_aesni_avx2_glue.c index 84bc718f49a3..125b00db89b1 100644 --- a/arch/x86/crypto/sm4_aesni_avx2_glue.c +++ b/arch/x86/crypto/sm4_aesni_avx2_glue.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -126,6 +127,12 @@ static struct skcipher_alg sm4_aesni_avx2_skciphers[] = { } }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg * simd_sm4_aesni_avx2_skciphers[ARRAY_SIZE(sm4_aesni_avx2_skciphers)]; @@ -133,11 +140,13 @@ static int __init sm4_init(void) { const char *feature_name; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!boot_cpu_has(X86_FEATURE_AVX) || - !boot_cpu_has(X86_FEATURE_AVX2) || !boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AVX2 or AES-NI instructions are not detected.\n"); + pr_info("AVX, AES-NI, and/or OSXSAVE instructions are not detected.\n"); return -ENODEV; } diff --git a/arch/x86/crypto/sm4_aesni_avx_glue.c b/arch/x86/crypto/sm4_aesni_avx_glue.c index 7800f77d68ad..ac8182b197cf 100644 --- a/arch/x86/crypto/sm4_aesni_avx_glue.c +++ b/arch/x86/crypto/sm4_aesni_avx_glue.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -445,6 +446,12 @@ static struct skcipher_alg sm4_aesni_avx_skciphers[] = { } }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg * simd_sm4_aesni_avx_skciphers[ARRAY_SIZE(sm4_aesni_avx_skciphers)]; @@ -452,10 +459,12 @@ static int __init sm4_init(void) { const char *feature_name; - if (!boot_cpu_has(X86_FEATURE_AVX) || - !boot_cpu_has(X86_FEATURE_AES) || + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + + if (!boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AVX or AES-NI instructions are not detected.\n"); + pr_info("AES-NI or OSXSAVE instructions are not detected.\n"); return -ENODEV; } diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c index 3eb3440b477a..4657e6efc35d 100644 --- a/arch/x86/crypto/twofish_avx_glue.c +++ b/arch/x86/crypto/twofish_avx_glue.c @@ -15,6 +15,7 @@ #include #include #include +#include #include "twofish.h" #include "ecb_cbc_helpers.h" @@ -103,12 +104,21 @@ static struct skcipher_alg twofish_algs[] = { }, }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static struct simd_skcipher_alg *twofish_simd_algs[ARRAY_SIZE(twofish_algs)]; static int __init twofish_init(void) { const char *feature_name; + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, &feature_name)) { pr_info("CPU feature '%s' is not supported.\n", feature_name); return -ENODEV; diff --git a/arch/x86/crypto/twofish_glue.c b/arch/x86/crypto/twofish_glue.c index f9c4adc27404..ade98aef3402 100644 --- a/arch/x86/crypto/twofish_glue.c +++ b/arch/x86/crypto/twofish_glue.c @@ -43,6 +43,7 @@ #include #include #include +#include asmlinkage void 
twofish_enc_blk(struct twofish_ctx *ctx, u8 *dst, const u8 *src); @@ -81,8 +82,17 @@ static struct crypto_alg alg = { } }; +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init twofish_glue_init(void) { + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + return crypto_register_alg(&alg); } diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c index 90454cf18e0d..790e5a59a9a7 100644 --- a/arch/x86/crypto/twofish_glue_3way.c +++ b/arch/x86/crypto/twofish_glue_3way.c @@ -11,6 +11,7 @@ #include #include #include +#include #include "twofish.h" #include "ecb_cbc_helpers.h" @@ -140,8 +141,17 @@ static int force; module_param(force, int, 0); MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist"); +static const struct x86_cpu_id module_cpu_ids[] = { + X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), + {} +}; +MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); + static int __init twofish_3way_init(void) { + if (!x86_match_cpu(module_cpu_ids)) + return -ENODEV; + if (!force && is_blacklisted_cpu()) { printk(KERN_INFO "twofish-x86_64-3way: performance on this CPU "
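Each of the hunks in this patch applies the same gate. As a self-contained sketch of that pattern (a hypothetical module, not code taken from the patch), the pieces fit together roughly like this:

#include <linux/module.h>
#include <asm/cpu_device_id.h>

static const struct x86_cpu_id module_cpu_ids[] = {
	/* feature chosen for illustration only */
	X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
	{}
};
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);

static int __init example_init(void)
{
	/* refuse to load on CPUs that lack the required feature */
	if (!x86_match_cpu(module_cpu_ids))
		return -ENODEV;

	/* register skcipher/shash/aead drivers here */
	return 0;
}

static void __exit example_exit(void)
{
	/* unregister the drivers registered above */
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

The MODULE_DEVICE_TABLE(x86cpu, ...) entry lets udev autoload the module from CPUID bits, while the x86_match_cpu() check keeps a manual modprobe from registering algorithms the CPU cannot execute.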
From patchwork Wed Nov 16 04:13:39 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20705
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 21/24] crypto: x86 - report used CPU features via module parameters
Date: Tue, 15 Nov 2022 22:13:39 -0600
Message-Id: <20221116041342.3841-22-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>
For modules that have multiple choices, add read-only module parameters
reporting which CPU features a module is using.

The parameters show up as follows for modules that modify the behavior of
their registered drivers or register additional drivers for each choice:

/sys/module/aesni_intel/parameters/using_x86_avx:1
/sys/module/aesni_intel/parameters/using_x86_avx2:1
/sys/module/aria_aesni_avx_x86_64/parameters/using_x86_gfni:0
/sys/module/chacha_x86_64/parameters/using_x86_avx2:1
/sys/module/chacha_x86_64/parameters/using_x86_avx512:1
/sys/module/crc32c_intel/parameters/using_x86_pclmulqdq:1
/sys/module/curve25519_x86_64/parameters/using_x86_adx:1
/sys/module/libblake2s_x86_64/parameters/using_x86_avx512:1
/sys/module/libblake2s_x86_64/parameters/using_x86_ssse3:1
/sys/module/poly1305_x86_64/parameters/using_x86_avx:1
/sys/module/poly1305_x86_64/parameters/using_x86_avx2:1
/sys/module/poly1305_x86_64/parameters/using_x86_avx512:0
/sys/module/sha1_ssse3/parameters/using_x86_avx:1
/sys/module/sha1_ssse3/parameters/using_x86_avx2:1
/sys/module/sha1_ssse3/parameters/using_x86_shani:0
/sys/module/sha1_ssse3/parameters/using_x86_ssse3:1
/sys/module/sha256_ssse3/parameters/using_x86_avx:1
/sys/module/sha256_ssse3/parameters/using_x86_avx2:1
/sys/module/sha256_ssse3/parameters/using_x86_shani:0
/sys/module/sha256_ssse3/parameters/using_x86_ssse3:1
/sys/module/sha512_ssse3/parameters/using_x86_avx:1
/sys/module/sha512_ssse3/parameters/using_x86_avx2:1
/sys/module/sha512_ssse3/parameters/using_x86_ssse3:1

Delete the aesni_intel prints reporting those selections:
  pr_info("AVX2 version of gcm_enc/dec engaged.\n");

Signed-off-by: Robert Elliott
---
 arch/x86/crypto/aesni-intel_glue.c    | 19 ++++++++-----------
 arch/x86/crypto/aria_aesni_avx_glue.c |  6 ++++++
 arch/x86/crypto/blake2s-glue.c        |  5 +++++
 arch/x86/crypto/chacha_glue.c         |  5 +++++
 arch/x86/crypto/crc32c-intel_glue.c   |  6 ++++++
 arch/x86/crypto/curve25519-x86_64.c   |  3 +++
 arch/x86/crypto/poly1305_glue.c       |  7 +++++++
 arch/x86/crypto/sha1_ssse3_glue.c     | 11 +++++++++++
 arch/x86/crypto/sha256_ssse3_glue.c   | 20 +++++++++++---------
 arch/x86/crypto/sha512_ssse3_glue.c   |  7 +++++++
 10 files changed, 69 insertions(+), 20 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 0505d4f9d2a2..80dbf98c53fd 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -1228,6 +1228,11 @@ static struct aead_alg aesni_aeads[0]; static struct simd_aead_alg
*aesni_simd_aeads[ARRAY_SIZE(aesni_aeads)]; +module_param_named(using_x86_avx2, gcm_use_avx2.key.enabled.counter, int, 0444); +module_param_named(using_x86_avx, gcm_use_avx.key.enabled.counter, int, 0444); +MODULE_PARM_DESC(using_x86_avx2, "Using x86 instruction set extensions: AVX2 (for GCM mode)"); +MODULE_PARM_DESC(using_x86_avx, "Using x86 instruction set extensions: AVX (for CTR and GCM modes)"); + static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_AES, NULL), {} @@ -1241,22 +1246,14 @@ static int __init aesni_init(void) if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; #ifdef CONFIG_X86_64 - if (boot_cpu_has(X86_FEATURE_AVX2)) { - pr_info("AVX2 version of gcm_enc/dec engaged.\n"); - static_branch_enable(&gcm_use_avx); + if (boot_cpu_has(X86_FEATURE_AVX2)) static_branch_enable(&gcm_use_avx2); - } else + if (boot_cpu_has(X86_FEATURE_AVX)) { - pr_info("AVX version of gcm_enc/dec engaged.\n"); static_branch_enable(&gcm_use_avx); - } else { - pr_info("SSE version of gcm_enc/dec engaged.\n"); - } - if (boot_cpu_has(X86_FEATURE_AVX)) { - /* optimize performance of ctr mode encryption transform */ static_call_update(aesni_ctr_enc_tfm, aesni_ctr_enc_avx_tfm); - pr_info("AES CTR mode by8 optimization enabled\n"); } + #endif /* CONFIG_X86_64 */ err = crypto_register_alg(&aesni_cipher_alg); diff --git a/arch/x86/crypto/aria_aesni_avx_glue.c b/arch/x86/crypto/aria_aesni_avx_glue.c index 6a135203a767..9fd3d1fe1105 100644 --- a/arch/x86/crypto/aria_aesni_avx_glue.c +++ b/arch/x86/crypto/aria_aesni_avx_glue.c @@ -166,6 +166,10 @@ static struct skcipher_alg aria_algs[] = { static struct simd_skcipher_alg *aria_simd_algs[ARRAY_SIZE(aria_algs)]; +static int using_x86_gfni; +module_param(using_x86_gfni, int, 0444); +MODULE_PARM_DESC(using_x86_gfni, "Using x86 instruction set extensions: GF-NI"); + static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), {} @@ -192,6 +196,7 @@ static int __init aria_avx_init(void) } if (boot_cpu_has(X86_FEATURE_GFNI)) { + using_x86_gfni = 1; aria_ops.aria_encrypt_16way = aria_aesni_avx_gfni_encrypt_16way; aria_ops.aria_decrypt_16way = aria_aesni_avx_gfni_decrypt_16way; aria_ops.aria_ctr_crypt_16way = aria_aesni_avx_gfni_ctr_crypt_16way; @@ -210,6 +215,7 @@ static void __exit aria_avx_exit(void) { simd_unregister_skciphers(aria_algs, ARRAY_SIZE(aria_algs), aria_simd_algs); + using_x86_gfni = 0; } module_init(aria_avx_init); diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c index df757d18a35a..781cf9471cb6 100644 --- a/arch/x86/crypto/blake2s-glue.c +++ b/arch/x86/crypto/blake2s-glue.c @@ -55,6 +55,11 @@ void blake2s_compress(struct blake2s_state *state, const u8 *block, } EXPORT_SYMBOL(blake2s_compress); +module_param_named(using_x86_ssse3, blake2s_use_ssse3.key.enabled.counter, int, 0444); +module_param_named(using_x86_avx512vl, blake2s_use_avx512.key.enabled.counter, int, 0444); +MODULE_PARM_DESC(using_x86_ssse3, "Using x86 instruction set extensions: SSSE3"); +MODULE_PARM_DESC(using_x86_avx512vl, "Using x86 instruction set extensions: AVX-512VL"); + static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL), X86_MATCH_FEATURE(X86_FEATURE_AVX512VL, NULL), diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c index 546ab0abf30c..ec7461412c5e 100644 --- a/arch/x86/crypto/chacha_glue.c +++ b/arch/x86/crypto/chacha_glue.c @@ -277,6 +277,11 @@ static struct skcipher_alg algs[] = { }, }; 
+module_param_named(using_x86_avx512vl, chacha_use_avx512vl.key.enabled.counter, int, 0444); +module_param_named(using_x86_avx2, chacha_use_avx2.key.enabled.counter, int, 0444); +MODULE_PARM_DESC(using_x86_avx512vl, "Using x86 instruction set extensions: AVX-512VL"); +MODULE_PARM_DESC(using_x86_avx2, "Using x86 instruction set extensions: AVX2"); + static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL), {} diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c index aff132e925ea..3c2bf7032667 100644 --- a/arch/x86/crypto/crc32c-intel_glue.c +++ b/arch/x86/crypto/crc32c-intel_glue.c @@ -240,6 +240,10 @@ static struct shash_alg alg = { } }; +static int using_x86_pclmulqdq; +module_param(using_x86_pclmulqdq, int, 0444); +MODULE_PARM_DESC(using_x86_pclmulqdq, "Using x86 instruction set extensions: PCLMULQDQ"); + static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_XMM4_2, NULL), {} @@ -252,6 +256,7 @@ static int __init crc32c_intel_mod_init(void) return -ENODEV; #ifdef CONFIG_X86_64 if (boot_cpu_has(X86_FEATURE_PCLMULQDQ)) { + using_x86_pclmulqdq = 1; alg.update = crc32c_pcl_intel_update; alg.finup = crc32c_pcl_intel_finup; alg.digest = crc32c_pcl_intel_digest; @@ -263,6 +268,7 @@ static int __init crc32c_intel_mod_init(void) static void __exit crc32c_intel_mod_fini(void) { crypto_unregister_shash(&alg); + using_x86_pclmulqdq = 0; } module_init(crc32c_intel_mod_init); diff --git a/arch/x86/crypto/curve25519-x86_64.c b/arch/x86/crypto/curve25519-x86_64.c index ae7536b17bf9..6d222849e409 100644 --- a/arch/x86/crypto/curve25519-x86_64.c +++ b/arch/x86/crypto/curve25519-x86_64.c @@ -1697,6 +1697,9 @@ static struct kpp_alg curve25519_alg = { .max_size = curve25519_max_size, }; +module_param_named(using_x86_adx, curve25519_use_bmi2_adx.key.enabled.counter, int, 0444); +MODULE_PARM_DESC(using_x86_adx, "Using x86 instruction set extensions: ADX"); + static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_ADX, NULL), {} diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c index f1e39e23b2a3..d3c0d5b335ea 100644 --- a/arch/x86/crypto/poly1305_glue.c +++ b/arch/x86/crypto/poly1305_glue.c @@ -269,6 +269,13 @@ static struct shash_alg alg = { }, }; +module_param_named(using_x86_avx, poly1305_use_avx.key.enabled.counter, int, 0444); +module_param_named(using_x86_avx2, poly1305_use_avx2.key.enabled.counter, int, 0444); +module_param_named(using_x86_avx512f, poly1305_use_avx512.key.enabled.counter, int, 0444); +MODULE_PARM_DESC(using_x86_avx, "Using x86 instruction set extensions: AVX"); +MODULE_PARM_DESC(using_x86_avx2, "Using x86 instruction set extensions: AVX2"); +MODULE_PARM_DESC(using_x86_avx512f, "Using x86 instruction set extensions: AVX-512F"); + static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), {} diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c index 806463f57b6d..2445648cf234 100644 --- a/arch/x86/crypto/sha1_ssse3_glue.c +++ b/arch/x86/crypto/sha1_ssse3_glue.c @@ -90,6 +90,17 @@ static int using_x86_avx2; static int using_x86_shani; #endif +#ifdef CONFIG_AS_SHA1_NI +module_param(using_x86_shani, int, 0444); +MODULE_PARM_DESC(using_x86_shani, "Using x86 instruction set extensions: SHA-NI"); +#endif +module_param(using_x86_ssse3, int, 0444); +module_param(using_x86_avx, int, 0444); +module_param(using_x86_avx2, int, 0444); +MODULE_PARM_DESC(using_x86_ssse3, "Using x86 
instruction set extensions: SSSE3"); +MODULE_PARM_DESC(using_x86_avx, "Using x86 instruction set extensions: AVX"); +MODULE_PARM_DESC(using_x86_avx2, "Using x86 instruction set extensions: AVX2"); + static int sha1_update(struct shash_desc *desc, const u8 *data, unsigned int len, unsigned int bytes_per_fpu, sha1_block_fn *sha1_xform) diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index 30c8c50c1123..1464e6ccf912 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -104,6 +104,17 @@ static int using_x86_avx2; static int using_x86_shani; #endif +#ifdef CONFIG_AS_SHA256_NI +module_param(using_x86_shani, int, 0444); +MODULE_PARM_DESC(using_x86_shani, "Using x86 instruction set extensions: SHA-NI"); +#endif +module_param(using_x86_ssse3, int, 0444); +module_param(using_x86_avx, int, 0444); +module_param(using_x86_avx2, int, 0444); +MODULE_PARM_DESC(using_x86_ssse3, "Using x86 instruction set extensions: SSSE3"); +MODULE_PARM_DESC(using_x86_avx, "Using x86 instruction set extensions: AVX"); +MODULE_PARM_DESC(using_x86_avx2, "Using x86 instruction set extensions: AVX2"); + static int _sha256_update(struct shash_desc *desc, const u8 *data, unsigned int len, unsigned int bytes_per_fpu, sha256_block_fn *sha256_xform) @@ -212,9 +223,6 @@ static void unregister_sha256_ssse3(void) } } -asmlinkage void sha256_transform_avx(struct sha256_state *state, - const u8 *data, int blocks); - static int sha256_avx_update(struct shash_desc *desc, const u8 *data, unsigned int len) { @@ -273,9 +281,6 @@ static void unregister_sha256_avx(void) } } -asmlinkage void sha256_transform_rorx(struct sha256_state *state, - const u8 *data, int blocks); - static int sha256_avx2_update(struct shash_desc *desc, const u8 *data, unsigned int len) { @@ -335,9 +340,6 @@ static void unregister_sha256_avx2(void) } #ifdef CONFIG_AS_SHA256_NI -asmlinkage void sha256_ni_transform(struct sha256_state *digest, - const u8 *data, int rounds); - static int sha256_ni_update(struct shash_desc *desc, const u8 *data, unsigned int len) { diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index 48586ab40d55..04e2af951a3e 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -81,6 +81,13 @@ static int using_x86_ssse3; static int using_x86_avx; static int using_x86_avx2; +module_param(using_x86_ssse3, int, 0444); +module_param(using_x86_avx, int, 0444); +module_param(using_x86_avx2, int, 0444); +MODULE_PARM_DESC(using_x86_ssse3, "Using x86 instruction set extensions: SSSE3"); +MODULE_PARM_DESC(using_x86_avx, "Using x86 instruction set extensions: AVX"); +MODULE_PARM_DESC(using_x86_avx2, "Using x86 instruction set extensions: AVX2"); + static int sha512_update(struct shash_desc *desc, const u8 *data, unsigned int len, unsigned int bytes_per_fpu, sha512_block_fn *sha512_xform) From patchwork Wed Nov 16 04:13:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20714 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3086721wru; Tue, 15 Nov 2022 20:28:10 -0800 (PST) X-Google-Smtp-Source: AA0mqf5CfmDZNFckLCuuv04Umo2axEBfMn0kjCuGvwajzXEeX+YItpr5HkiZdg1usNji4xRBW5Zh X-Received: by 2002:a17:906:fcd8:b0:7ad:d18f:c2d6 with SMTP id qx24-20020a170906fcd800b007add18fc2d6mr16617261ejb.271.1668572890386; Tue, 15 Nov 2022 20:28:10 -0800 (PST) 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 22/24] crypto: x86 - report missing CPU features via module parameters
Date: Tue, 15 Nov 2022 22:13:40 -0600
Message-Id: <20221116041342.3841-23-elliott@hpe.com>
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>

Don't refuse to load modules based on missing additional x86 features
(e.g., OSXSAVE) or x86 XSAVE features (e.g., YMM). Instead, load the
module, but don't register any crypto drivers.

Report the fact that one or more features are missing in a new
missing_x86_features module parameter (0 = no problems, 1 = something is
missing; each module parameter description lists all the features that
it wants).
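As a minimal sketch of that reporting pattern (a hypothetical module with made-up feature requirements, not code taken from this patch), the init path separates "wrong CPU, do not load" from "loadable, but register nothing and report why":

#include <linux/module.h>
#include <asm/cpu_device_id.h>
#include <asm/cpufeature.h>

static int missing_x86_features;
module_param(missing_x86_features, int, 0444);
MODULE_PARM_DESC(missing_x86_features,
		 "Missing x86 instruction set extensions (OSXSAVE)");

static const struct x86_cpu_id module_cpu_ids[] = {
	X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL),
	{}
};
MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids);

static int __init example_init(void)
{
	/* primary feature missing: refuse to load at all */
	if (!x86_match_cpu(module_cpu_ids))
		return -ENODEV;

	/* secondary feature missing: stay loaded, register nothing,
	 * and report the problem through the read-only parameter
	 */
	if (!boot_cpu_has(X86_FEATURE_OSXSAVE)) {
		missing_x86_features = 1;
		return 0;
	}

	/* register crypto drivers here */
	return 0;
}

static void __exit example_exit(void)
{
	if (missing_x86_features)
		return;
	/* unregister the drivers registered above */
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

After loading, the value is visible to userspace as /sys/module/<module name>/parameters/missing_x86_features (0 or 1), alongside the using_x86_* parameters added by the previous patch.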
For the SHA functions that register up to four drivers based on CPU features, report separate module parameters for each set: missing_x86_features_avx2 missing_x86_features_avx Signed-off-by: Robert Elliott --- arch/x86/crypto/aegis128-aesni-glue.c | 15 ++++++++++--- arch/x86/crypto/aria_aesni_avx_glue.c | 24 +++++++++++--------- arch/x86/crypto/camellia_aesni_avx2_glue.c | 25 ++++++++++++--------- arch/x86/crypto/camellia_aesni_avx_glue.c | 25 ++++++++++++--------- arch/x86/crypto/cast5_avx_glue.c | 20 ++++++++++------- arch/x86/crypto/cast6_avx_glue.c | 20 ++++++++++------- arch/x86/crypto/curve25519-x86_64.c | 12 ++++++++-- arch/x86/crypto/nhpoly1305-avx2-glue.c | 14 +++++++++--- arch/x86/crypto/polyval-clmulni_glue.c | 15 ++++++++++--- arch/x86/crypto/serpent_avx2_glue.c | 24 +++++++++++--------- arch/x86/crypto/serpent_avx_glue.c | 21 ++++++++++------- arch/x86/crypto/sha1_ssse3_glue.c | 20 +++++++++++++---- arch/x86/crypto/sha256_ssse3_glue.c | 18 +++++++++++++-- arch/x86/crypto/sha512_ssse3_glue.c | 18 +++++++++++++-- arch/x86/crypto/sm3_avx_glue.c | 22 ++++++++++-------- arch/x86/crypto/sm4_aesni_avx2_glue.c | 26 +++++++++++++--------- arch/x86/crypto/sm4_aesni_avx_glue.c | 26 +++++++++++++--------- arch/x86/crypto/twofish_avx_glue.c | 19 ++++++++++------ 18 files changed, 243 insertions(+), 121 deletions(-) diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c index a3ebd018953c..e0312ecf34a8 100644 --- a/arch/x86/crypto/aegis128-aesni-glue.c +++ b/arch/x86/crypto/aegis128-aesni-glue.c @@ -288,6 +288,11 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (SSE2) and/or XSAVE features (SSE)"); + static struct simd_aead_alg *simd_alg; static int __init crypto_aegis128_aesni_module_init(void) @@ -296,8 +301,10 @@ static int __init crypto_aegis128_aesni_module_init(void) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_XMM2) || - !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL)) - return -ENODEV; + !cpu_has_xfeatures(XFEATURE_MASK_SSE, NULL)) { + missing_x86_features = 1; + return 0; + } return simd_register_aeads_compat(&crypto_aegis128_aesni_alg, 1, &simd_alg); @@ -305,7 +312,9 @@ static int __init crypto_aegis128_aesni_module_init(void) static void __exit crypto_aegis128_aesni_module_exit(void) { - simd_unregister_aeads(&crypto_aegis128_aesni_alg, 1, &simd_alg); + if (!missing_x86_features) + simd_unregister_aeads(&crypto_aegis128_aesni_alg, 1, &simd_alg); + missing_x86_features = 0; } module_init(crypto_aegis128_aesni_module_init); diff --git a/arch/x86/crypto/aria_aesni_avx_glue.c b/arch/x86/crypto/aria_aesni_avx_glue.c index 9fd3d1fe1105..ebb9760967b5 100644 --- a/arch/x86/crypto/aria_aesni_avx_glue.c +++ b/arch/x86/crypto/aria_aesni_avx_glue.c @@ -176,23 +176,25 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (AES-NI, OSXSAVE) and/or XSAVE features (SSE, YMM)"); + static int __init aria_avx_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AES or OSXSAVE instructions are not detected.\n"); - 
return -ENODEV; + missing_x86_features = 1; + return 0; } - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } if (boot_cpu_has(X86_FEATURE_GFNI)) { @@ -213,8 +215,10 @@ static int __init aria_avx_init(void) static void __exit aria_avx_exit(void) { - simd_unregister_skciphers(aria_algs, ARRAY_SIZE(aria_algs), - aria_simd_algs); + if (!missing_x86_features) + simd_unregister_skciphers(aria_algs, ARRAY_SIZE(aria_algs), + aria_simd_algs); + missing_x86_features = 0; using_x86_gfni = 0; } diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c index 6c48fc9f3fde..e8ae1e1a801d 100644 --- a/arch/x86/crypto/camellia_aesni_avx2_glue.c +++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c @@ -105,26 +105,28 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (AES-NI, AVX, OSXSAVE) and/or XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg *camellia_simd_algs[ARRAY_SIZE(camellia_algs)]; static int __init camellia_aesni_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_AVX) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AES-NI, AVX, or OSXSAVE instructions are not detected.\n"); - return -ENODEV; + missing_x86_features = 1; + return 0; } - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(camellia_algs, @@ -134,8 +136,11 @@ static int __init camellia_aesni_init(void) static void __exit camellia_aesni_fini(void) { - simd_unregister_skciphers(camellia_algs, ARRAY_SIZE(camellia_algs), - camellia_simd_algs); + if (!missing_x86_features) + simd_unregister_skciphers(camellia_algs, + ARRAY_SIZE(camellia_algs), + camellia_simd_algs); + missing_x86_features = 0; } module_init(camellia_aesni_init); diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c index 6d7fc96d242e..6784d631575c 100644 --- a/arch/x86/crypto/camellia_aesni_avx_glue.c +++ b/arch/x86/crypto/camellia_aesni_avx_glue.c @@ -105,25 +105,27 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (AES-NI, OSXSAVE) and/or XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg *camellia_simd_algs[ARRAY_SIZE(camellia_algs)]; static int __init camellia_aesni_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AES-NI or OSXSAVE instructions are not detected.\n"); - return -ENODEV; + missing_x86_features = 1; + return 0; } - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - 
pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(camellia_algs, @@ -133,8 +135,11 @@ static int __init camellia_aesni_init(void) static void __exit camellia_aesni_fini(void) { - simd_unregister_skciphers(camellia_algs, ARRAY_SIZE(camellia_algs), - camellia_simd_algs); + if (!missing_x86_features) + simd_unregister_skciphers(camellia_algs, + ARRAY_SIZE(camellia_algs), + camellia_simd_algs); + missing_x86_features = 0; } module_init(camellia_aesni_init); diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c index bdc3c763334c..34ef032bb8d0 100644 --- a/arch/x86/crypto/cast5_avx_glue.c +++ b/arch/x86/crypto/cast5_avx_glue.c @@ -100,19 +100,21 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg *cast5_simd_algs[ARRAY_SIZE(cast5_algs)]; static int __init cast5_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(cast5_algs, @@ -122,8 +124,10 @@ static int __init cast5_init(void) static void __exit cast5_exit(void) { - simd_unregister_skciphers(cast5_algs, ARRAY_SIZE(cast5_algs), - cast5_simd_algs); + if (!missing_x86_features) + simd_unregister_skciphers(cast5_algs, ARRAY_SIZE(cast5_algs), + cast5_simd_algs); + missing_x86_features = 0; } module_init(cast5_init); diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c index addca34b3511..71559fd3ea87 100644 --- a/arch/x86/crypto/cast6_avx_glue.c +++ b/arch/x86/crypto/cast6_avx_glue.c @@ -100,19 +100,21 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg *cast6_simd_algs[ARRAY_SIZE(cast6_algs)]; static int __init cast6_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(cast6_algs, @@ -122,8 +124,10 @@ static int __init cast6_init(void) static void __exit cast6_exit(void) { - simd_unregister_skciphers(cast6_algs, ARRAY_SIZE(cast6_algs), - cast6_simd_algs); + if (!missing_x86_features) + simd_unregister_skciphers(cast6_algs, ARRAY_SIZE(cast6_algs), + cast6_simd_algs); + missing_x86_features = 0; } module_init(cast6_init); diff --git a/arch/x86/crypto/curve25519-x86_64.c b/arch/x86/crypto/curve25519-x86_64.c index 6d222849e409..74672351e534 100644 --- a/arch/x86/crypto/curve25519-x86_64.c +++ 
b/arch/x86/crypto/curve25519-x86_64.c @@ -1706,13 +1706,20 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (BMI2)"); + static int __init curve25519_mod_init(void) { if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!boot_cpu_has(X86_FEATURE_BMI2)) - return -ENODEV; + if (!boot_cpu_has(X86_FEATURE_BMI2)) { + missing_x86_features = 1; + return 0; + } static_branch_enable(&curve25519_use_bmi2_adx); @@ -1725,6 +1732,7 @@ static void __exit curve25519_mod_exit(void) if (IS_REACHABLE(CONFIG_CRYPTO_KPP) && static_branch_likely(&curve25519_use_bmi2_adx)) crypto_unregister_kpp(&curve25519_alg); + missing_x86_features = 0; } module_init(curve25519_mod_init); diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c index fa415fec5793..2e63947bc9fa 100644 --- a/arch/x86/crypto/nhpoly1305-avx2-glue.c +++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c @@ -67,20 +67,28 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (OSXSAVE)"); + static int __init nhpoly1305_mod_init(void) { if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!boot_cpu_has(X86_FEATURE_OSXSAVE)) - return -ENODEV; + if (!boot_cpu_has(X86_FEATURE_OSXSAVE)) { + missing_x86_features = 1; + return 0; + } return crypto_register_shash(&nhpoly1305_alg); } static void __exit nhpoly1305_mod_exit(void) { - crypto_unregister_shash(&nhpoly1305_alg); + if (!missing_x86_features) + crypto_unregister_shash(&nhpoly1305_alg); } module_init(nhpoly1305_mod_init); diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c index b98e32f8e2a4..20d4a68ec1d7 100644 --- a/arch/x86/crypto/polyval-clmulni_glue.c +++ b/arch/x86/crypto/polyval-clmulni_glue.c @@ -182,20 +182,29 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (AVX)"); + static int __init polyval_clmulni_mod_init(void) { if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!boot_cpu_has(X86_FEATURE_AVX)) - return -ENODEV; + if (!boot_cpu_has(X86_FEATURE_AVX)) { + missing_x86_features = 1; + return 0; + } return crypto_register_shash(&polyval_alg); } static void __exit polyval_clmulni_mod_exit(void) { - crypto_unregister_shash(&polyval_alg); + if (!missing_x86_features) + crypto_unregister_shash(&polyval_alg); + missing_x86_features = 0; } module_init(polyval_clmulni_mod_init); diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c index bc18149fb928..2aa62c93a16f 100644 --- a/arch/x86/crypto/serpent_avx2_glue.c +++ b/arch/x86/crypto/serpent_avx2_glue.c @@ -101,23 +101,25 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (OSXSAVE) and/or XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg 
*serpent_simd_algs[ARRAY_SIZE(serpent_algs)]; static int __init serpent_avx2_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("OSXSAVE instructions are not detected.\n"); - return -ENODEV; + missing_x86_features = 1; + return 0; } - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(serpent_algs, @@ -127,8 +129,10 @@ static int __init serpent_avx2_init(void) static void __exit serpent_avx2_fini(void) { - simd_unregister_skciphers(serpent_algs, ARRAY_SIZE(serpent_algs), - serpent_simd_algs); + if (!missing_x86_features) + simd_unregister_skciphers(serpent_algs, ARRAY_SIZE(serpent_algs), + serpent_simd_algs); + missing_x86_features = 0; } module_init(serpent_avx2_init); diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c index 0db18d99da50..28ee9717df49 100644 --- a/arch/x86/crypto/serpent_avx_glue.c +++ b/arch/x86/crypto/serpent_avx_glue.c @@ -107,19 +107,21 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg *serpent_simd_algs[ARRAY_SIZE(serpent_algs)]; static int __init serpent_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(serpent_algs, @@ -129,8 +131,11 @@ static int __init serpent_init(void) static void __exit serpent_exit(void) { - simd_unregister_skciphers(serpent_algs, ARRAY_SIZE(serpent_algs), - serpent_simd_algs); + if (!missing_x86_features) + simd_unregister_skciphers(serpent_algs, + ARRAY_SIZE(serpent_algs), + serpent_simd_algs); + missing_x86_features = 0; } module_init(serpent_init); diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c index 2445648cf234..405af5e14b67 100644 --- a/arch/x86/crypto/sha1_ssse3_glue.c +++ b/arch/x86/crypto/sha1_ssse3_glue.c @@ -351,9 +351,17 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features_avx2; +static int missing_x86_features_avx; +module_param(missing_x86_features_avx2, int, 0444); +module_param(missing_x86_features_avx, int, 0444); +MODULE_PARM_DESC(missing_x86_features_avx2, + "Missing x86 instruction set extensions (BMI1, BMI2) to support AVX2"); +MODULE_PARM_DESC(missing_x86_features_avx, + "Missing x86 XSAVE features (SSE, YMM) to support AVX"); + static int __init sha1_ssse3_mod_init(void) { - const char *feature_name; int ret; if (!x86_match_cpu(module_cpu_ids)) @@ -374,10 +382,11 @@ static int __init sha1_ssse3_mod_init(void) if (boot_cpu_has(X86_FEATURE_BMI1) && boot_cpu_has(X86_FEATURE_BMI2)) { - ret = crypto_register_shash(&sha1_avx2_alg); if (!ret) using_x86_avx2 = 1; + } else { + missing_x86_features_avx2 = 1; } } @@ 
-385,11 +394,12 @@ static int __init sha1_ssse3_mod_init(void) if (boot_cpu_has(X86_FEATURE_AVX)) { if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - + NULL)) { ret = crypto_register_shash(&sha1_avx_alg); if (!ret) using_x86_avx = 1; + } else { + missing_x86_features_avx = 1; } } @@ -415,6 +425,8 @@ static void __exit sha1_ssse3_mod_fini(void) unregister_sha1_avx2(); unregister_sha1_avx(); unregister_sha1_ssse3(); + missing_x86_features_avx2 = 0; + missing_x86_features_avx = 0; } module_init(sha1_ssse3_mod_init); diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index 1464e6ccf912..293cf7085dd3 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -413,9 +413,17 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features_avx2; +static int missing_x86_features_avx; +module_param(missing_x86_features_avx2, int, 0444); +module_param(missing_x86_features_avx, int, 0444); +MODULE_PARM_DESC(missing_x86_features_avx2, + "Missing x86 instruction set extensions (BMI2) to support AVX2"); +MODULE_PARM_DESC(missing_x86_features_avx, + "Missing x86 XSAVE features (SSE, YMM) to support AVX"); + static int __init sha256_ssse3_mod_init(void) { - const char *feature_name; int ret; if (!x86_match_cpu(module_cpu_ids)) @@ -440,6 +448,8 @@ static int __init sha256_ssse3_mod_init(void) ARRAY_SIZE(sha256_avx2_algs)); if (!ret) using_x86_avx2 = 1; + } else { + missing_x86_features_avx2 = 1; } } @@ -447,11 +457,13 @@ static int __init sha256_ssse3_mod_init(void) if (boot_cpu_has(X86_FEATURE_AVX)) { if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { + NULL)) { ret = crypto_register_shashes(sha256_avx_algs, ARRAY_SIZE(sha256_avx_algs)); if (!ret) using_x86_avx = 1; + } else { + missing_x86_features_avx = 1; } } @@ -478,6 +490,8 @@ static void __exit sha256_ssse3_mod_fini(void) unregister_sha256_avx2(); unregister_sha256_avx(); unregister_sha256_ssse3(); + missing_x86_features_avx2 = 0; + missing_x86_features_avx = 0; } module_init(sha256_ssse3_mod_init); diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index 04e2af951a3e..9f13baf7dda9 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -319,6 +319,15 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features_avx2; +static int missing_x86_features_avx; +module_param(missing_x86_features_avx2, int, 0444); +module_param(missing_x86_features_avx, int, 0444); +MODULE_PARM_DESC(missing_x86_features_avx2, + "Missing x86 instruction set extensions (BMI2) to support AVX2"); +MODULE_PARM_DESC(missing_x86_features_avx, + "Missing x86 XSAVE features (SSE, YMM) to support AVX"); + static void unregister_sha512_avx2(void) { if (using_x86_avx2) { @@ -330,7 +339,6 @@ static void unregister_sha512_avx2(void) static int __init sha512_ssse3_mod_init(void) { - const char *feature_name; int ret; if (!x86_match_cpu(module_cpu_ids)) @@ -343,6 +351,8 @@ static int __init sha512_ssse3_mod_init(void) ARRAY_SIZE(sha512_avx2_algs)); if (!ret) using_x86_avx2 = 1; + } else { + missing_x86_features_avx2 = 1; } } @@ -350,11 +360,13 @@ static int __init sha512_ssse3_mod_init(void) if (boot_cpu_has(X86_FEATURE_AVX)) { if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { + NULL)) { ret = 
crypto_register_shashes(sha512_avx_algs, ARRAY_SIZE(sha512_avx_algs)); if (!ret) using_x86_avx = 1; + } else { + missing_x86_features_avx = 1; } } @@ -376,6 +388,8 @@ static void __exit sha512_ssse3_mod_fini(void) unregister_sha512_avx2(); unregister_sha512_avx(); unregister_sha512_ssse3(); + missing_x86_features_avx2 = 0; + missing_x86_features_avx = 0; } module_init(sha512_ssse3_mod_init); diff --git a/arch/x86/crypto/sm3_avx_glue.c b/arch/x86/crypto/sm3_avx_glue.c index c7786874319c..169ba6a2c806 100644 --- a/arch/x86/crypto/sm3_avx_glue.c +++ b/arch/x86/crypto/sm3_avx_glue.c @@ -126,22 +126,24 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (BMI2) and/or XSAVE features (SSE, YMM)"); + static int __init sm3_avx_mod_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_BMI2)) { - pr_info("BMI2 instruction are not detected.\n"); - return -ENODEV; + missing_x86_features = 1; + return 0; } - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return crypto_register_shash(&sm3_avx_alg); @@ -149,7 +151,9 @@ static int __init sm3_avx_mod_init(void) static void __exit sm3_avx_mod_exit(void) { - crypto_unregister_shash(&sm3_avx_alg); + if (!missing_x86_features) + crypto_unregister_shash(&sm3_avx_alg); + missing_x86_features = 0; } module_init(sm3_avx_mod_init); diff --git a/arch/x86/crypto/sm4_aesni_avx2_glue.c b/arch/x86/crypto/sm4_aesni_avx2_glue.c index 125b00db89b1..6bcf78231888 100644 --- a/arch/x86/crypto/sm4_aesni_avx2_glue.c +++ b/arch/x86/crypto/sm4_aesni_avx2_glue.c @@ -133,27 +133,29 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (AES-NI, AVX, OSXSAVE) and/or XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg * simd_sm4_aesni_avx2_skciphers[ARRAY_SIZE(sm4_aesni_avx2_skciphers)]; static int __init sm4_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_AVX) || !boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AVX, AES-NI, and/or OSXSAVE instructions are not detected.\n"); - return -ENODEV; + missing_x86_features = 1; + return 0; } - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(sm4_aesni_avx2_skciphers, @@ -163,9 +165,11 @@ static int __init sm4_init(void) static void __exit sm4_exit(void) { - simd_unregister_skciphers(sm4_aesni_avx2_skciphers, - ARRAY_SIZE(sm4_aesni_avx2_skciphers), - simd_sm4_aesni_avx2_skciphers); + if (!missing_x86_features) + simd_unregister_skciphers(sm4_aesni_avx2_skciphers, + ARRAY_SIZE(sm4_aesni_avx2_skciphers), + simd_sm4_aesni_avx2_skciphers); + 
missing_x86_features = 0; } module_init(sm4_init); diff --git a/arch/x86/crypto/sm4_aesni_avx_glue.c b/arch/x86/crypto/sm4_aesni_avx_glue.c index ac8182b197cf..03775b1079dc 100644 --- a/arch/x86/crypto/sm4_aesni_avx_glue.c +++ b/arch/x86/crypto/sm4_aesni_avx_glue.c @@ -452,26 +452,28 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 instruction set extensions (AES-NI, OSXSAVE) and/or XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg * simd_sm4_aesni_avx_skciphers[ARRAY_SIZE(sm4_aesni_avx_skciphers)]; static int __init sm4_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; if (!boot_cpu_has(X86_FEATURE_AES) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) { - pr_info("AES-NI or OSXSAVE instructions are not detected.\n"); - return -ENODEV; + missing_x86_features = 1; + return 0; } - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, - &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(sm4_aesni_avx_skciphers, @@ -481,9 +483,11 @@ static int __init sm4_init(void) static void __exit sm4_exit(void) { - simd_unregister_skciphers(sm4_aesni_avx_skciphers, - ARRAY_SIZE(sm4_aesni_avx_skciphers), - simd_sm4_aesni_avx_skciphers); + if (!missing_x86_features) + simd_unregister_skciphers(sm4_aesni_avx_skciphers, + ARRAY_SIZE(sm4_aesni_avx_skciphers), + simd_sm4_aesni_avx_skciphers); + missing_x86_features = 0; } module_init(sm4_init); diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c index 4657e6efc35d..ae3cc4ad6f4f 100644 --- a/arch/x86/crypto/twofish_avx_glue.c +++ b/arch/x86/crypto/twofish_avx_glue.c @@ -110,18 +110,21 @@ static const struct x86_cpu_id module_cpu_ids[] = { }; MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); +static int missing_x86_features; +module_param(missing_x86_features, int, 0444); +MODULE_PARM_DESC(missing_x86_features, + "Missing x86 XSAVE features (SSE, YMM)"); + static struct simd_skcipher_alg *twofish_simd_algs[ARRAY_SIZE(twofish_algs)]; static int __init twofish_init(void) { - const char *feature_name; - if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, &feature_name)) { - pr_info("CPU feature '%s' is not supported.\n", feature_name); - return -ENODEV; + if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { + missing_x86_features = 1; + return 0; } return simd_register_skciphers_compat(twofish_algs, @@ -131,8 +134,10 @@ static int __init twofish_init(void) static void __exit twofish_exit(void) { - simd_unregister_skciphers(twofish_algs, ARRAY_SIZE(twofish_algs), - twofish_simd_algs); + if (!missing_x86_features) + simd_unregister_skciphers(twofish_algs, ARRAY_SIZE(twofish_algs), + twofish_simd_algs); + missing_x86_features = 0; } module_init(twofish_init); From patchwork Wed Nov 16 04:13:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Elliott, Robert (Servers)" X-Patchwork-Id: 20713 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a5d:6687:0:0:0:0:0 with SMTP id l7csp3085982wru; Tue, 15 Nov 2022 20:24:26 -0800 (PST) 
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 23/24] crypto: x86 - report suboptimal CPUs via module parameters
Date: Tue, 15 Nov 2022 22:13:41 -0600
Message-Id: <20221116041342.3841-24-elliott@hpe.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>

Don't refuse to load modules on certain CPUs and print a message to the console.
Instead, load the module but don't register the crypto functions, and report this condition via a new suboptimal_x86 module parameter with this description:

    Crypto driver not registered because performance on this CPU would be suboptimal

Reword the description of the existing force module parameter to match this modified behavior:

    force: Force crypto driver registration on suboptimal CPUs

Make the new module parameters readable via sysfs:

    /sys/module/blowfish_x86_64/parameters/suboptimal_x86:0
    /sys/module/camellia_x86_64/parameters/suboptimal_x86:0
    /sys/module/des3_ede_x86_64/parameters/suboptimal_x86:1
    /sys/module/twofish_x86_64_3way/parameters/suboptimal_x86:1

If the module has been loaded and is reporting suboptimal_x86=1, remove it and try loading it again with force=1:

    modprobe -r blowfish_x86_64
    modprobe blowfish_x86_64 force=1

or specify it on the kernel command line:

    blowfish_x86_64.force=1

Signed-off-by: Robert Elliott
---
 arch/x86/crypto/blowfish_glue.c | 29 +++++++++++++++++------------
 arch/x86/crypto/camellia_glue.c | 27 ++++++++++++++++-----------
 arch/x86/crypto/des3_ede_glue.c | 26 +++++++++++++++++---------
 arch/x86/crypto/twofish_glue_3way.c | 26 +++++++++++++++-----------
 4 files changed, 65 insertions(+), 43 deletions(-)

diff --git a/arch/x86/crypto/blowfish_glue.c b/arch/x86/crypto/blowfish_glue.c index 4c0ead71b198..8e4de7859e34 100644 --- a/arch/x86/crypto/blowfish_glue.c +++ b/arch/x86/crypto/blowfish_glue.c @@ -283,7 +283,7 @@ static struct skcipher_alg bf_skcipher_algs[] = { }, }; -static bool is_blacklisted_cpu(void) +static bool is_suboptimal_cpu(void) { if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) return false; @@ -292,7 +292,7 @@ static bool is_blacklisted_cpu(void) /* * On Pentium 4, blowfish-x86_64 is slower than generic C * implementation because use of 64bit rotates (which are really - * slow on P4). Therefore blacklist P4s. + * slow on P4).
*/ return true; } @@ -302,7 +302,12 @@ static bool is_blacklisted_cpu(void) static int force; module_param(force, int, 0); -MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist"); +MODULE_PARM_DESC(force, "Force crypto driver registration on suboptimal CPUs"); + +static int suboptimal_x86; +module_param(suboptimal_x86, int, 0444); +MODULE_PARM_DESC(suboptimal_x86, + "Crypto driver not registered because performance on this CPU would be suboptimal"); static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), @@ -317,12 +322,9 @@ static int __init blowfish_init(void) if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!force && is_blacklisted_cpu()) { - printk(KERN_INFO - "blowfish-x86_64: performance on this CPU " - "would be suboptimal: disabling " - "blowfish-x86_64.\n"); - return -ENODEV; + if (!force && is_suboptimal_cpu()) { + suboptimal_x86 = 1; + return 0; } err = crypto_register_alg(&bf_cipher_alg); @@ -339,9 +341,12 @@ static int __init blowfish_init(void) static void __exit blowfish_fini(void) { - crypto_unregister_alg(&bf_cipher_alg); - crypto_unregister_skciphers(bf_skcipher_algs, - ARRAY_SIZE(bf_skcipher_algs)); + if (!suboptimal_x86) { + crypto_unregister_alg(&bf_cipher_alg); + crypto_unregister_skciphers(bf_skcipher_algs, + ARRAY_SIZE(bf_skcipher_algs)); + } + suboptimal_x86 = 0; } module_init(blowfish_init); diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c index a3df1043ed73..2cb9b24d9437 100644 --- a/arch/x86/crypto/camellia_glue.c +++ b/arch/x86/crypto/camellia_glue.c @@ -1356,7 +1356,7 @@ static struct skcipher_alg camellia_skcipher_algs[] = { } }; -static bool is_blacklisted_cpu(void) +static bool is_suboptimal_cpu(void) { if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) return false; @@ -1376,7 +1376,12 @@ static bool is_blacklisted_cpu(void) static int force; module_param(force, int, 0); -MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist"); +MODULE_PARM_DESC(force, "Force crypto driver registration on suboptimal CPUs"); + +static int suboptimal_x86; +module_param(suboptimal_x86, int, 0444); +MODULE_PARM_DESC(suboptimal_x86, + "Crypto driver not registered because performance on this CPU would be suboptimal"); static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), @@ -1391,12 +1396,9 @@ static int __init camellia_init(void) if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!force && is_blacklisted_cpu()) { - printk(KERN_INFO - "camellia-x86_64: performance on this CPU " - "would be suboptimal: disabling " - "camellia-x86_64.\n"); - return -ENODEV; + if (!force && is_suboptimal_cpu()) { + suboptimal_x86 = 1; + return 0; } err = crypto_register_alg(&camellia_cipher_alg); @@ -1413,9 +1415,12 @@ static int __init camellia_init(void) static void __exit camellia_fini(void) { - crypto_unregister_alg(&camellia_cipher_alg); - crypto_unregister_skciphers(camellia_skcipher_algs, - ARRAY_SIZE(camellia_skcipher_algs)); + if (!suboptimal_x86) { + crypto_unregister_alg(&camellia_cipher_alg); + crypto_unregister_skciphers(camellia_skcipher_algs, + ARRAY_SIZE(camellia_skcipher_algs)); + } + suboptimal_x86 = 0; } module_init(camellia_init); diff --git a/arch/x86/crypto/des3_ede_glue.c b/arch/x86/crypto/des3_ede_glue.c index 168cac5c6ca6..a4cac5129148 100644 --- a/arch/x86/crypto/des3_ede_glue.c +++ b/arch/x86/crypto/des3_ede_glue.c @@ -334,7 +334,7 @@ static struct skcipher_alg des3_ede_skciphers[] = { } }; -static bool is_blacklisted_cpu(void) 
+static bool is_suboptimal_cpu(void) { if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) return false; @@ -343,7 +343,7 @@ static bool is_blacklisted_cpu(void) /* * On Pentium 4, des3_ede-x86_64 is slower than generic C * implementation because use of 64bit rotates (which are really - * slow on P4). Therefore blacklist P4s. + * slow on P4). */ return true; } @@ -353,7 +353,12 @@ static bool is_blacklisted_cpu(void) static int force; module_param(force, int, 0); -MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist"); +MODULE_PARM_DESC(force, "Force crypto driver registration on suboptimal CPUs"); + +static int suboptimal_x86; +module_param(suboptimal_x86, int, 0444); +MODULE_PARM_DESC(suboptimal_x86, + "Crypto driver not registered because performance on this CPU would be suboptimal"); static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), @@ -368,9 +373,9 @@ static int __init des3_ede_x86_init(void) if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!force && is_blacklisted_cpu()) { - pr_info("des3_ede-x86_64: performance on this CPU would be suboptimal: disabling des3_ede-x86_64.\n"); - return -ENODEV; + if (!force && is_suboptimal_cpu()) { + suboptimal_x86 = 1; + return 0; } err = crypto_register_alg(&des3_ede_cipher); @@ -387,9 +392,12 @@ static int __init des3_ede_x86_init(void) static void __exit des3_ede_x86_fini(void) { - crypto_unregister_alg(&des3_ede_cipher); - crypto_unregister_skciphers(des3_ede_skciphers, - ARRAY_SIZE(des3_ede_skciphers)); + if (!suboptimal_x86) { + crypto_unregister_alg(&des3_ede_cipher); + crypto_unregister_skciphers(des3_ede_skciphers, + ARRAY_SIZE(des3_ede_skciphers)); + } + suboptimal_x86 = 0; } module_init(des3_ede_x86_init); diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c index 790e5a59a9a7..8db2f23b3056 100644 --- a/arch/x86/crypto/twofish_glue_3way.c +++ b/arch/x86/crypto/twofish_glue_3way.c @@ -103,7 +103,7 @@ static struct skcipher_alg tf_skciphers[] = { }, }; -static bool is_blacklisted_cpu(void) +static bool is_suboptimal_cpu(void) { if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL) return false; @@ -118,8 +118,7 @@ static bool is_blacklisted_cpu(void) * storing blocks in 64bit registers to allow three blocks to * be processed parallel. Parallel operation then allows gaining * more performance than was trade off, on out-of-order CPUs. - * However Atom does not benefit from this parallelism and - * should be blacklisted. + * However Atom does not benefit from this parallelism. 
*/ return true; } @@ -139,7 +138,12 @@ static bool is_blacklisted_cpu(void) static int force; module_param(force, int, 0); -MODULE_PARM_DESC(force, "Force module load, ignore CPU blacklist"); +MODULE_PARM_DESC(force, "Force crypto driver registration on suboptimal CPUs"); + +static int suboptimal_x86; +module_param(suboptimal_x86, int, 0444); +MODULE_PARM_DESC(suboptimal_x86, + "Crypto driver not registered because performance on this CPU would be suboptimal"); static const struct x86_cpu_id module_cpu_ids[] = { X86_MATCH_FEATURE(X86_FEATURE_ANY, NULL), @@ -152,12 +156,9 @@ static int __init twofish_3way_init(void) if (!x86_match_cpu(module_cpu_ids)) return -ENODEV; - if (!force && is_blacklisted_cpu()) { - printk(KERN_INFO - "twofish-x86_64-3way: performance on this CPU " - "would be suboptimal: disabling " - "twofish-x86_64-3way.\n"); - return -ENODEV; + if (!force && is_suboptimal_cpu()) { + suboptimal_x86 = 1; + return 0; } return crypto_register_skciphers(tf_skciphers, @@ -166,7 +167,10 @@ static void __exit twofish_3way_fini(void) { - crypto_unregister_skciphers(tf_skciphers, ARRAY_SIZE(tf_skciphers)); + if (!suboptimal_x86) + crypto_unregister_skciphers(tf_skciphers, ARRAY_SIZE(tf_skciphers)); + + suboptimal_x86 = 0; } module_init(twofish_3way_init);

From patchwork Wed Nov 16 04:13:42 2022
X-Patchwork-Submitter: "Elliott, Robert (Servers)"
X-Patchwork-Id: 20716
From: Robert Elliott
To: herbert@gondor.apana.org.au, davem@davemloft.net, tim.c.chen@linux.intel.com, ap420073@gmail.com, ardb@kernel.org, Jason@zx2c4.com, David.Laight@ACULAB.COM, ebiggers@kernel.org, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: Robert Elliott
Subject: [PATCH v4 24/24] crypto: x86 - standarize module descriptions
Date: Tue, 15 Nov 2022 22:13:42 -0600
Message-Id: <20221116041342.3841-25-elliott@hpe.com>
X-Mailer: git-send-email 2.38.1
In-Reply-To: <20221116041342.3841-1-elliott@hpe.com>
References: <20221103042740.6556-1-elliott@hpe.com> <20221116041342.3841-1-elliott@hpe.com>
Make the module descriptions for the x86 optimized crypto modules match the descriptions of the generic modules and the names in Kconfig.

End each description with "with" followed by the features used for module matching, for example:

    "-- accelerated for x86 with AVX2"

Mention any other required CPU features:

    "(also required: AES-NI)"

Mention any CPU features that are not required but enable additional acceleration:

    "(optional: GF-NI)"

Signed-off-by: Robert Elliott
---
 arch/x86/crypto/aegis128-aesni-glue.c | 2 +-
 arch/x86/crypto/aesni-intel_glue.c | 2 +-
 arch/x86/crypto/aria_aesni_avx_glue.c | 2 +-
 arch/x86/crypto/blake2s-glue.c | 1 +
 arch/x86/crypto/blowfish_glue.c | 2 +-
 arch/x86/crypto/camellia_aesni_avx2_glue.c | 2 +-
 arch/x86/crypto/camellia_aesni_avx_glue.c | 2 +-
 arch/x86/crypto/camellia_glue.c | 2 +-
 arch/x86/crypto/cast5_avx_glue.c | 2 +-
 arch/x86/crypto/cast6_avx_glue.c | 2 +-
 arch/x86/crypto/chacha_glue.c | 2 +-
 arch/x86/crypto/crc32-pclmul_glue.c | 2 +-
 arch/x86/crypto/crc32c-intel_glue.c | 2 +-
 arch/x86/crypto/crct10dif-pclmul_glue.c | 2 +-
 arch/x86/crypto/curve25519-x86_64.c | 1 +
 arch/x86/crypto/des3_ede_glue.c | 2 +-
 arch/x86/crypto/ghash-clmulni-intel_glue.c | 2 +-
 arch/x86/crypto/nhpoly1305-avx2-glue.c | 2 +-
 arch/x86/crypto/nhpoly1305-sse2-glue.c | 2 +-
 arch/x86/crypto/poly1305_glue.c | 2 +-
 arch/x86/crypto/polyval-clmulni_glue.c | 2 +-
 arch/x86/crypto/serpent_avx2_glue.c | 2 +-
 arch/x86/crypto/serpent_avx_glue.c | 2 +-
 arch/x86/crypto/serpent_sse2_glue.c | 2 +-
 arch/x86/crypto/sha1_ssse3_glue.c | 2 +-
 arch/x86/crypto/sha256_ssse3_glue.c | 2 +-
 arch/x86/crypto/sha512_ssse3_glue.c | 2 +-
 arch/x86/crypto/sm3_avx_glue.c | 2 +-
 arch/x86/crypto/sm4_aesni_avx2_glue.c | 2 +-
 arch/x86/crypto/sm4_aesni_avx_glue.c | 2 +-
 arch/x86/crypto/twofish_avx_glue.c | 2 +-
 arch/x86/crypto/twofish_glue.c | 2 +-
 arch/x86/crypto/twofish_glue_3way.c | 2 +-
 crypto/aes_ti.c | 2 +-
 crypto/blake2b_generic.c | 2 +-
 crypto/blowfish_common.c | 2 +-
 crypto/crct10dif_generic.c | 2 +-
 crypto/curve25519-generic.c | 1 +
 crypto/sha256_generic.c | 2 +-
 crypto/sha512_generic.c | 2 +-
 crypto/sm3.c | 2 +-
 crypto/sm4.c | 2 +-
 crypto/twofish_common.c | 2 +-
 crypto/twofish_generic.c | 2 +-
 44 files changed, 44 insertions(+), 41 deletions(-)

diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c index e0312ecf34a8..e72ae7ba5f12 100644 --- a/arch/x86/crypto/aegis128-aesni-glue.c +++ b/arch/x86/crypto/aegis128-aesni-glue.c @@ -322,6 +322,6 @@ module_exit(crypto_aegis128_aesni_module_exit); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Ondrej Mosnacek "); -MODULE_DESCRIPTION("AEGIS-128 AEAD algorithm -- AESNI+SSE2 implementation"); +MODULE_DESCRIPTION("AEGIS-128 AEAD algorithm -- accelerated for x86 with AES-NI (also required: SEE2)"); MODULE_ALIAS_CRYPTO("aegis128"); MODULE_ALIAS_CRYPTO("aegis128-aesni"); diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c index 80dbf98c53fd..3d8508598e76 100644 --- a/arch/x86/crypto/aesni-intel_glue.c +++ b/arch/x86/crypto/aesni-intel_glue.c @@ -1311,6 +1311,6 @@ static void __exit aesni_exit(void) late_initcall(aesni_init); module_exit(aesni_exit); -MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm, Intel AES-NI instructions optimized"); +MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm -- accelerated for x86 with AES-NI (optional: AVX, AVX2)"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("aes"); diff --git a/arch/x86/crypto/aria_aesni_avx_glue.c b/arch/x86/crypto/aria_aesni_avx_glue.c index ebb9760967b5..1d23c7ef7aef 100644 --- a/arch/x86/crypto/aria_aesni_avx_glue.c +++ b/arch/x86/crypto/aria_aesni_avx_glue.c @@ -227,6 +227,6 @@ module_exit(aria_avx_exit); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Taehee Yoo "); -MODULE_DESCRIPTION("ARIA Cipher Algorithm, AVX/AES-NI/GFNI optimized"); +MODULE_DESCRIPTION("ARIA Cipher Algorithm -- accelerated for x86 with AVX (also required: AES-NI, OSXSAVE)(optional: GF-NI)"); MODULE_ALIAS_CRYPTO("aria"); MODULE_ALIAS_CRYPTO("aria-aesni-avx"); diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c index 781cf9471cb6..0618f0d31fae 100644 --- a/arch/x86/crypto/blake2s-glue.c +++ b/arch/x86/crypto/blake2s-glue.c @@ -90,3 +90,4 @@ static int __init blake2s_mod_init(void) module_init(blake2s_mod_init); MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("BLAKE2s hash algorithm -- accelerated for x86 with SSSE3 or AVX-512VL"); diff --git a/arch/x86/crypto/blowfish_glue.c b/arch/x86/crypto/blowfish_glue.c index 8e4de7859e34..67f7562d2d02 100644 --- a/arch/x86/crypto/blowfish_glue.c +++ b/arch/x86/crypto/blowfish_glue.c @@ -353,6 +353,6 @@ module_init(blowfish_init); module_exit(blowfish_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Blowfish Cipher Algorithm, asm optimized"); +MODULE_DESCRIPTION("Blowfish Cipher Algorithm -- accelerated for x86"); MODULE_ALIAS_CRYPTO("blowfish"); MODULE_ALIAS_CRYPTO("blowfish-asm"); diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c index e8ae1e1a801d..da89fef184d2 100644 --- a/arch/x86/crypto/camellia_aesni_avx2_glue.c +++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c @@ -147,6 +147,6 @@ module_init(camellia_aesni_init); module_exit(camellia_aesni_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX2 optimized"); +MODULE_DESCRIPTION("Camellia Cipher Algorithm -- accelerated for x86 with AVX2 (also required: AES-NI, AVX, OSXSAVE)"); MODULE_ALIAS_CRYPTO("camellia"); MODULE_ALIAS_CRYPTO("camellia-asm"); diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c index 6784d631575c..0eebb56bc440 100644 --- a/arch/x86/crypto/camellia_aesni_avx_glue.c +++ b/arch/x86/crypto/camellia_aesni_avx_glue.c @@ -146,6 +146,6 @@ module_init(camellia_aesni_init); module_exit(camellia_aesni_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Camellia Cipher Algorithm, AES-NI/AVX optimized"); +MODULE_DESCRIPTION("Camellia Cipher Algorithm -- accelerated for x86 with AVX (also required: AES-NI, OSXSAVE)"); MODULE_ALIAS_CRYPTO("camellia"); 
MODULE_ALIAS_CRYPTO("camellia-asm"); diff --git a/arch/x86/crypto/camellia_glue.c b/arch/x86/crypto/camellia_glue.c index 2cb9b24d9437..b8cad1655c66 100644 --- a/arch/x86/crypto/camellia_glue.c +++ b/arch/x86/crypto/camellia_glue.c @@ -1427,6 +1427,6 @@ module_init(camellia_init); module_exit(camellia_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Camellia Cipher Algorithm, asm optimized"); +MODULE_DESCRIPTION("Camellia Cipher Algorithm -- accelerated for x86"); MODULE_ALIAS_CRYPTO("camellia"); MODULE_ALIAS_CRYPTO("camellia-asm"); diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c index 34ef032bb8d0..4a11d3ea9838 100644 --- a/arch/x86/crypto/cast5_avx_glue.c +++ b/arch/x86/crypto/cast5_avx_glue.c @@ -133,6 +133,6 @@ static void __exit cast5_exit(void) module_init(cast5_init); module_exit(cast5_exit); -MODULE_DESCRIPTION("Cast5 Cipher Algorithm, AVX optimized"); +MODULE_DESCRIPTION("Cast5 Cipher Algorithm -- accelerated for x86 with AVX"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("cast5"); diff --git a/arch/x86/crypto/cast6_avx_glue.c b/arch/x86/crypto/cast6_avx_glue.c index 71559fd3ea87..53a92999a234 100644 --- a/arch/x86/crypto/cast6_avx_glue.c +++ b/arch/x86/crypto/cast6_avx_glue.c @@ -133,6 +133,6 @@ static void __exit cast6_exit(void) module_init(cast6_init); module_exit(cast6_exit); -MODULE_DESCRIPTION("Cast6 Cipher Algorithm, AVX optimized"); +MODULE_DESCRIPTION("Cast6 Cipher Algorithm -- accelerated for x86 with AVX"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("cast6"); diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c index ec7461412c5e..563546d0bc2a 100644 --- a/arch/x86/crypto/chacha_glue.c +++ b/arch/x86/crypto/chacha_glue.c @@ -320,7 +320,7 @@ module_exit(chacha_simd_mod_fini); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Martin Willi "); -MODULE_DESCRIPTION("ChaCha and XChaCha stream ciphers (x64 SIMD accelerated)"); +MODULE_DESCRIPTION("ChaCha and XChaCha stream ciphers -- accelerated for x86 with SSSE3 (optional: AVX, AVX2, AVX-512VL and AVX-512BW)"); MODULE_ALIAS_CRYPTO("chacha20"); MODULE_ALIAS_CRYPTO("chacha20-simd"); MODULE_ALIAS_CRYPTO("xchacha20"); diff --git a/arch/x86/crypto/crc32-pclmul_glue.c b/arch/x86/crypto/crc32-pclmul_glue.c index d5e889c24bea..1c297fae5d39 100644 --- a/arch/x86/crypto/crc32-pclmul_glue.c +++ b/arch/x86/crypto/crc32-pclmul_glue.c @@ -207,6 +207,6 @@ module_exit(crc32_pclmul_mod_fini); MODULE_AUTHOR("Alexander Boyko "); MODULE_LICENSE("GPL"); - +MODULE_DESCRIPTION("CRC32 -- accelerated for x86 with PCLMULQDQ"); MODULE_ALIAS_CRYPTO("crc32"); MODULE_ALIAS_CRYPTO("crc32-pclmul"); diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c index 3c2bf7032667..ba7899d04bb1 100644 --- a/arch/x86/crypto/crc32c-intel_glue.c +++ b/arch/x86/crypto/crc32c-intel_glue.c @@ -275,7 +275,7 @@ module_init(crc32c_intel_mod_init); module_exit(crc32c_intel_mod_fini); MODULE_AUTHOR("Austin Zhang , Kent Liu "); -MODULE_DESCRIPTION("CRC32c (Castagnoli) optimization using Intel Hardware."); +MODULE_DESCRIPTION("CRC32c (Castagnoli) -- accelerated for x86 with SSE4.2 (optional: PCLMULQDQ)"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("crc32c"); diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c index a26dbd27da96..df9f81ee97a3 100644 --- a/arch/x86/crypto/crct10dif-pclmul_glue.c +++ b/arch/x86/crypto/crct10dif-pclmul_glue.c @@ -162,7 +162,7 @@ module_init(crct10dif_intel_mod_init); module_exit(crct10dif_intel_mod_fini); MODULE_AUTHOR("Tim Chen 
"); -MODULE_DESCRIPTION("T10 DIF CRC calculation accelerated with PCLMULQDQ."); +MODULE_DESCRIPTION("T10 DIF CRC -- accelerated for x86 with PCLMULQDQ"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("crct10dif"); diff --git a/arch/x86/crypto/curve25519-x86_64.c b/arch/x86/crypto/curve25519-x86_64.c index 74672351e534..078508f53ff0 100644 --- a/arch/x86/crypto/curve25519-x86_64.c +++ b/arch/x86/crypto/curve25519-x86_64.c @@ -1742,3 +1742,4 @@ MODULE_ALIAS_CRYPTO("curve25519"); MODULE_ALIAS_CRYPTO("curve25519-x86"); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Jason A. Donenfeld "); +MODULE_DESCRIPTION("Curve25519 algorithm -- accelerated for x86 with ADX (also requires BMI2)"); diff --git a/arch/x86/crypto/des3_ede_glue.c b/arch/x86/crypto/des3_ede_glue.c index a4cac5129148..fc90c0a076e3 100644 --- a/arch/x86/crypto/des3_ede_glue.c +++ b/arch/x86/crypto/des3_ede_glue.c @@ -404,7 +404,7 @@ module_init(des3_ede_x86_init); module_exit(des3_ede_x86_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Triple DES EDE Cipher Algorithm, asm optimized"); +MODULE_DESCRIPTION("Triple DES EDE Cipher Algorithm -- accelerated for x86"); MODULE_ALIAS_CRYPTO("des3_ede"); MODULE_ALIAS_CRYPTO("des3_ede-asm"); MODULE_AUTHOR("Jussi Kivilinna "); diff --git a/arch/x86/crypto/ghash-clmulni-intel_glue.c b/arch/x86/crypto/ghash-clmulni-intel_glue.c index d19a8e9b34a6..30f4966df4de 100644 --- a/arch/x86/crypto/ghash-clmulni-intel_glue.c +++ b/arch/x86/crypto/ghash-clmulni-intel_glue.c @@ -363,5 +363,5 @@ module_init(ghash_pclmulqdqni_mod_init); module_exit(ghash_pclmulqdqni_mod_exit); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("GHASH hash function, accelerated by PCLMULQDQ-NI"); +MODULE_DESCRIPTION("GHASH hash function -- accelerated for x86 with PCLMULQDQ"); MODULE_ALIAS_CRYPTO("ghash"); diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c index 2e63947bc9fa..ed6209f027e7 100644 --- a/arch/x86/crypto/nhpoly1305-avx2-glue.c +++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c @@ -94,7 +94,7 @@ static void __exit nhpoly1305_mod_exit(void) module_init(nhpoly1305_mod_init); module_exit(nhpoly1305_mod_exit); -MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function (AVX2-accelerated)"); +MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function -- accelerated for x86 with AVX2 (also required: OSXSAVE)"); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Eric Biggers "); MODULE_ALIAS_CRYPTO("nhpoly1305"); diff --git a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c index c47765e46236..d09156e702dd 100644 --- a/arch/x86/crypto/nhpoly1305-sse2-glue.c +++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c @@ -83,7 +83,7 @@ static void __exit nhpoly1305_mod_exit(void) module_init(nhpoly1305_mod_init); module_exit(nhpoly1305_mod_exit); -MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function (SSE2-accelerated)"); +MODULE_DESCRIPTION("NHPoly1305 ε-almost-∆-universal hash function -- accelerated for x86 with SSE2"); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Eric Biggers "); MODULE_ALIAS_CRYPTO("nhpoly1305"); diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c index d3c0d5b335ea..78f88be4a22a 100644 --- a/arch/x86/crypto/poly1305_glue.c +++ b/arch/x86/crypto/poly1305_glue.c @@ -313,6 +313,6 @@ module_exit(poly1305_simd_mod_exit); MODULE_LICENSE("GPL"); MODULE_AUTHOR("Jason A. 
Donenfeld "); -MODULE_DESCRIPTION("Poly1305 authenticator"); +MODULE_DESCRIPTION("Poly1305 authenticator -- accelerated for x86 (optional: AVX, AVX2, AVX-512F)"); MODULE_ALIAS_CRYPTO("poly1305"); MODULE_ALIAS_CRYPTO("poly1305-simd"); diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c index 20d4a68ec1d7..447f0f219759 100644 --- a/arch/x86/crypto/polyval-clmulni_glue.c +++ b/arch/x86/crypto/polyval-clmulni_glue.c @@ -211,6 +211,6 @@ module_init(polyval_clmulni_mod_init); module_exit(polyval_clmulni_mod_exit); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("POLYVAL hash function accelerated by PCLMULQDQ-NI"); +MODULE_DESCRIPTION("POLYVAL hash function - accelerated for x86 with PCLMULQDQ (also required: AVX)"); MODULE_ALIAS_CRYPTO("polyval"); MODULE_ALIAS_CRYPTO("polyval-clmulni"); diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c index 2aa62c93a16f..0a57779a7559 100644 --- a/arch/x86/crypto/serpent_avx2_glue.c +++ b/arch/x86/crypto/serpent_avx2_glue.c @@ -139,6 +139,6 @@ module_init(serpent_avx2_init); module_exit(serpent_avx2_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Serpent Cipher Algorithm, AVX2 optimized"); +MODULE_DESCRIPTION("Serpent Cipher Algorithm -- accelerated for x86 with AVX2 (also required: OSXSAVE)"); MODULE_ALIAS_CRYPTO("serpent"); MODULE_ALIAS_CRYPTO("serpent-asm"); diff --git a/arch/x86/crypto/serpent_avx_glue.c b/arch/x86/crypto/serpent_avx_glue.c index 28ee9717df49..9d03fb25537f 100644 --- a/arch/x86/crypto/serpent_avx_glue.c +++ b/arch/x86/crypto/serpent_avx_glue.c @@ -141,6 +141,6 @@ static void __exit serpent_exit(void) module_init(serpent_init); module_exit(serpent_exit); -MODULE_DESCRIPTION("Serpent Cipher Algorithm, AVX optimized"); +MODULE_DESCRIPTION("Serpent Cipher Algorithm -- accelerated for x86 with AVX"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("serpent"); diff --git a/arch/x86/crypto/serpent_sse2_glue.c b/arch/x86/crypto/serpent_sse2_glue.c index 74f0c89f55ef..287b19527105 100644 --- a/arch/x86/crypto/serpent_sse2_glue.c +++ b/arch/x86/crypto/serpent_sse2_glue.c @@ -131,6 +131,6 @@ static void __exit serpent_sse2_exit(void) module_init(serpent_sse2_init); module_exit(serpent_sse2_exit); -MODULE_DESCRIPTION("Serpent Cipher Algorithm, SSE2 optimized"); +MODULE_DESCRIPTION("Serpent Cipher Algorithm -- accelerated for x86 with SSE2"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("serpent"); diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c index 405af5e14b67..113756544d4e 100644 --- a/arch/x86/crypto/sha1_ssse3_glue.c +++ b/arch/x86/crypto/sha1_ssse3_glue.c @@ -433,7 +433,7 @@ module_init(sha1_ssse3_mod_init); module_exit(sha1_ssse3_mod_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm, Supplemental SSE3 accelerated"); +MODULE_DESCRIPTION("SHA1 Secure Hash Algorithm -- accelerated for x86 with SSSE3, AVX, AVX2, or SHA-NI"); MODULE_ALIAS_CRYPTO("sha1"); MODULE_ALIAS_CRYPTO("sha1-ssse3"); diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c index 293cf7085dd3..78fa25d2e4ba 100644 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ b/arch/x86/crypto/sha256_ssse3_glue.c @@ -498,7 +498,7 @@ module_init(sha256_ssse3_mod_init); module_exit(sha256_ssse3_mod_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm, Supplemental SSE3 accelerated"); +MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithms -- accelerated for x86 with SSSE3, AVX, AVX2, or SHA-NI"); 
MODULE_ALIAS_CRYPTO("sha256"); MODULE_ALIAS_CRYPTO("sha256-ssse3"); diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c index 9f13baf7dda9..2fa951069604 100644 --- a/arch/x86/crypto/sha512_ssse3_glue.c +++ b/arch/x86/crypto/sha512_ssse3_glue.c @@ -396,7 +396,7 @@ module_init(sha512_ssse3_mod_init); module_exit(sha512_ssse3_mod_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA512 Secure Hash Algorithm, Supplemental SSE3 accelerated"); +MODULE_DESCRIPTION("SHA-384 and SHA-512 Secure Hash Algorithms -- accelerated for x86 with SSSE3, AVX, or AVX2"); MODULE_ALIAS_CRYPTO("sha512"); MODULE_ALIAS_CRYPTO("sha512-ssse3"); diff --git a/arch/x86/crypto/sm3_avx_glue.c b/arch/x86/crypto/sm3_avx_glue.c index 169ba6a2c806..9e1177fbf032 100644 --- a/arch/x86/crypto/sm3_avx_glue.c +++ b/arch/x86/crypto/sm3_avx_glue.c @@ -161,6 +161,6 @@ module_exit(sm3_avx_mod_exit); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Tianjia Zhang "); -MODULE_DESCRIPTION("SM3 Secure Hash Algorithm, AVX assembler accelerated"); +MODULE_DESCRIPTION("SM3 Secure Hash Algorithm -- accelerated for x86 with AVX (also required: BMI2)"); MODULE_ALIAS_CRYPTO("sm3"); MODULE_ALIAS_CRYPTO("sm3-avx"); diff --git a/arch/x86/crypto/sm4_aesni_avx2_glue.c b/arch/x86/crypto/sm4_aesni_avx2_glue.c index 6bcf78231888..b497a6006c8d 100644 --- a/arch/x86/crypto/sm4_aesni_avx2_glue.c +++ b/arch/x86/crypto/sm4_aesni_avx2_glue.c @@ -177,6 +177,6 @@ module_exit(sm4_exit); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Tianjia Zhang "); -MODULE_DESCRIPTION("SM4 Cipher Algorithm, AES-NI/AVX2 optimized"); +MODULE_DESCRIPTION("SM4 Cipher Algorithm -- accelerated for x86 with AVX2 (also required: AES-NI, AVX, OSXSAVE)"); MODULE_ALIAS_CRYPTO("sm4"); MODULE_ALIAS_CRYPTO("sm4-aesni-avx2"); diff --git a/arch/x86/crypto/sm4_aesni_avx_glue.c b/arch/x86/crypto/sm4_aesni_avx_glue.c index 03775b1079dc..e583ee0948af 100644 --- a/arch/x86/crypto/sm4_aesni_avx_glue.c +++ b/arch/x86/crypto/sm4_aesni_avx_glue.c @@ -495,6 +495,6 @@ module_exit(sm4_exit); MODULE_LICENSE("GPL v2"); MODULE_AUTHOR("Tianjia Zhang "); -MODULE_DESCRIPTION("SM4 Cipher Algorithm, AES-NI/AVX optimized"); +MODULE_DESCRIPTION("SM4 Cipher Algorithm -- accelerated for x86 with AVX (also required: AES-NI, OSXSAVE)"); MODULE_ALIAS_CRYPTO("sm4"); MODULE_ALIAS_CRYPTO("sm4-aesni-avx"); diff --git a/arch/x86/crypto/twofish_avx_glue.c b/arch/x86/crypto/twofish_avx_glue.c index ae3cc4ad6f4f..7b405c66d5fa 100644 --- a/arch/x86/crypto/twofish_avx_glue.c +++ b/arch/x86/crypto/twofish_avx_glue.c @@ -143,6 +143,6 @@ static void __exit twofish_exit(void) module_init(twofish_init); module_exit(twofish_exit); -MODULE_DESCRIPTION("Twofish Cipher Algorithm, AVX optimized"); +MODULE_DESCRIPTION("Twofish Cipher Algorithm -- accelerated for x86 with AVX"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("twofish"); diff --git a/arch/x86/crypto/twofish_glue.c b/arch/x86/crypto/twofish_glue.c index ade98aef3402..10729675e79c 100644 --- a/arch/x86/crypto/twofish_glue.c +++ b/arch/x86/crypto/twofish_glue.c @@ -105,6 +105,6 @@ module_init(twofish_glue_init); module_exit(twofish_glue_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION ("Twofish Cipher Algorithm, asm optimized"); +MODULE_DESCRIPTION("Twofish Cipher Algorithm -- accelerated for x86"); MODULE_ALIAS_CRYPTO("twofish"); MODULE_ALIAS_CRYPTO("twofish-asm"); diff --git a/arch/x86/crypto/twofish_glue_3way.c b/arch/x86/crypto/twofish_glue_3way.c index 8db2f23b3056..43f428b59684 100644 --- a/arch/x86/crypto/twofish_glue_3way.c +++ 
b/arch/x86/crypto/twofish_glue_3way.c @@ -177,6 +177,6 @@ module_init(twofish_3way_init); module_exit(twofish_3way_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Twofish Cipher Algorithm, 3-way parallel asm optimized"); +MODULE_DESCRIPTION("Twofish Cipher Algorithm -- accelerated for x86 (3-way parallel)"); MODULE_ALIAS_CRYPTO("twofish"); MODULE_ALIAS_CRYPTO("twofish-asm"); diff --git a/crypto/aes_ti.c b/crypto/aes_ti.c index 205c2c257d49..3cff553495ad 100644 --- a/crypto/aes_ti.c +++ b/crypto/aes_ti.c @@ -78,6 +78,6 @@ static void __exit aes_fini(void) module_init(aes_init); module_exit(aes_fini); -MODULE_DESCRIPTION("Generic fixed time AES"); +MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm -- generic fixed time"); MODULE_AUTHOR("Ard Biesheuvel "); MODULE_LICENSE("GPL v2"); diff --git a/crypto/blake2b_generic.c b/crypto/blake2b_generic.c index 6704c0355889..ee53f25ff254 100644 --- a/crypto/blake2b_generic.c +++ b/crypto/blake2b_generic.c @@ -175,7 +175,7 @@ subsys_initcall(blake2b_mod_init); module_exit(blake2b_mod_fini); MODULE_AUTHOR("David Sterba "); -MODULE_DESCRIPTION("BLAKE2b generic implementation"); +MODULE_DESCRIPTION("BLAKE2b hash algorithm"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("blake2b-160"); MODULE_ALIAS_CRYPTO("blake2b-160-generic"); diff --git a/crypto/blowfish_common.c b/crypto/blowfish_common.c index 1c072012baff..8c75fdfcd09c 100644 --- a/crypto/blowfish_common.c +++ b/crypto/blowfish_common.c @@ -394,4 +394,4 @@ int blowfish_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int keylen) EXPORT_SYMBOL_GPL(blowfish_setkey); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Blowfish Cipher common functions"); +MODULE_DESCRIPTION("Blowfish Cipher Algorithm common functions"); diff --git a/crypto/crct10dif_generic.c b/crypto/crct10dif_generic.c index e843982073bb..81c131c8ccd0 100644 --- a/crypto/crct10dif_generic.c +++ b/crypto/crct10dif_generic.c @@ -116,7 +116,7 @@ subsys_initcall(crct10dif_mod_init); module_exit(crct10dif_mod_fini); MODULE_AUTHOR("Tim Chen "); -MODULE_DESCRIPTION("T10 DIF CRC calculation."); +MODULE_DESCRIPTION("T10 DIF CRC calculation"); MODULE_LICENSE("GPL"); MODULE_ALIAS_CRYPTO("crct10dif"); MODULE_ALIAS_CRYPTO("crct10dif-generic"); diff --git a/crypto/curve25519-generic.c b/crypto/curve25519-generic.c index d055b0784c77..4f96583b31dd 100644 --- a/crypto/curve25519-generic.c +++ b/crypto/curve25519-generic.c @@ -88,3 +88,4 @@ module_exit(curve25519_exit); MODULE_ALIAS_CRYPTO("curve25519"); MODULE_ALIAS_CRYPTO("curve25519-generic"); MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("Curve25519 algorithm"); diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c index bf147b01e313..141430c25e15 100644 --- a/crypto/sha256_generic.c +++ b/crypto/sha256_generic.c @@ -102,7 +102,7 @@ subsys_initcall(sha256_generic_mod_init); module_exit(sha256_generic_mod_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm"); +MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithms"); MODULE_ALIAS_CRYPTO("sha224"); MODULE_ALIAS_CRYPTO("sha224-generic"); diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c index be70e76d6d86..63c5616ec770 100644 --- a/crypto/sha512_generic.c +++ b/crypto/sha512_generic.c @@ -219,7 +219,7 @@ subsys_initcall(sha512_generic_mod_init); module_exit(sha512_generic_mod_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-512 and SHA-384 Secure Hash Algorithms"); +MODULE_DESCRIPTION("SHA-384 and SHA-512 Secure Hash Algorithms"); MODULE_ALIAS_CRYPTO("sha384"); 
MODULE_ALIAS_CRYPTO("sha384-generic"); diff --git a/crypto/sm3.c b/crypto/sm3.c index d473e358a873..2a400eb69e66 100644 --- a/crypto/sm3.c +++ b/crypto/sm3.c @@ -242,5 +242,5 @@ void sm3_final(struct sm3_state *sctx, u8 *out) } EXPORT_SYMBOL_GPL(sm3_final); -MODULE_DESCRIPTION("Generic SM3 library"); +MODULE_DESCRIPTION("SM3 Secure Hash Algorithm generic library"); MODULE_LICENSE("GPL v2"); diff --git a/crypto/sm4.c b/crypto/sm4.c index 2c44193bc27e..d46b598b41cd 100644 --- a/crypto/sm4.c +++ b/crypto/sm4.c @@ -180,5 +180,5 @@ void sm4_crypt_block(const u32 *rk, u8 *out, const u8 *in) } EXPORT_SYMBOL_GPL(sm4_crypt_block); -MODULE_DESCRIPTION("Generic SM4 library"); +MODULE_DESCRIPTION("SM4 Cipher Algorithm generic library"); MODULE_LICENSE("GPL v2"); diff --git a/crypto/twofish_common.c b/crypto/twofish_common.c index f921f30334f4..daa28045069d 100644 --- a/crypto/twofish_common.c +++ b/crypto/twofish_common.c @@ -690,4 +690,4 @@ int twofish_setkey(struct crypto_tfm *tfm, const u8 *key, unsigned int key_len) EXPORT_SYMBOL_GPL(twofish_setkey); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Twofish cipher common functions"); +MODULE_DESCRIPTION("Twofish Cipher Algorithm common functions"); diff --git a/crypto/twofish_generic.c b/crypto/twofish_generic.c index 86b2f067a416..4fe42b4ac82d 100644 --- a/crypto/twofish_generic.c +++ b/crypto/twofish_generic.c @@ -191,6 +191,6 @@ subsys_initcall(twofish_mod_init); module_exit(twofish_mod_fini); MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION ("Twofish Cipher Algorithm"); +MODULE_DESCRIPTION("Twofish Cipher Algorithm"); MODULE_ALIAS_CRYPTO("twofish"); MODULE_ALIAS_CRYPTO("twofish-generic");