From patchwork Wed Mar 29 14:06:39 2023
X-Patchwork-Submitter: Heiko Stübner
X-Patchwork-Id: 76592
From: Heiko Stuebner
To: palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net,
    conor.dooley@microchip.com, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
    christoph.muellner@vrull.eu, heiko@sntech.de, Heiko Stuebner
Subject: [PATCH v4 1/4] RISC-V: add Zbc extension detection
Date: Wed, 29 Mar 2023 16:06:39 +0200
Message-Id: <20230329140642.2186644-2-heiko.stuebner@vrull.eu>
In-Reply-To: <20230329140642.2186644-1-heiko.stuebner@vrull.eu>
References: <20230329140642.2186644-1-heiko.stuebner@vrull.eu>

From: Heiko Stuebner

Add handling for the Zbc extension. Zbc provides instructions for
carry-less multiplication.
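Carry-less multiplication is ordinary long multiplication with the
additions replaced by XOR, i.e. polynomial multiplication over GF(2).
As a purely illustrative sketch (not part of the patch), this is the
operation that the Zbc clmul/clmulh instruction pair computes on
64-bit operands:

    #include <stdint.h>

    /* low 64 bits of the 128-bit carry-less product (what clmul returns) */
    static uint64_t clmul_lo(uint64_t a, uint64_t b)
    {
            uint64_t r = 0;
            int i;

            for (i = 0; i < 64; i++)
                    if ((b >> i) & 1)
                            r ^= a << i;    /* XOR instead of add: no carries */
            return r;
    }

    /* high 64 bits of the product (what clmulh returns) */
    static uint64_t clmul_hi(uint64_t a, uint64_t b)
    {
            uint64_t r = 0;
            int i;

            for (i = 1; i < 64; i++)
                    if ((b >> i) & 1)
                            r ^= a >> (64 - i);
            return r;
    }

GHASH builds its GF(2^128) multiplication out of exactly these
primitives, which is why patch 4/4 depends on this detection.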
Signed-off-by: Heiko Stuebner
---
 arch/riscv/Kconfig             | 22 ++++++++++++++++++++++
 arch/riscv/include/asm/hwcap.h |  1 +
 arch/riscv/kernel/cpu.c        |  1 +
 arch/riscv/kernel/cpufeature.c |  1 +
 4 files changed, 25 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d7c467670be8..d5646316caf4 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -460,6 +460,28 @@ config RISCV_ISA_ZBB
 	   If you don't know what to do here, say Y.
 
+config TOOLCHAIN_HAS_ZBC
+	bool
+	default y
+	depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64ima_zbc)
+	depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32ima_zbc)
+	depends on LLD_VERSION >= 150000 || LD_VERSION >= 23900
+	depends on AS_IS_GNU
+
+config RISCV_ISA_ZBC
+	bool "Zbc extension support for bit manipulation instructions"
+	depends on TOOLCHAIN_HAS_ZBC
+	depends on !XIP_KERNEL && MMU
+	default y
+	help
+	  Adds support to dynamically detect the presence of the Zbc
+	  extension (carry-less multiplication) and enable its usage.
+
+	  The Zbc extension provides instructions clmul, clmulh and clmulr
+	  to accelerate carry-less multiplications.
+
+	  If you don't know what to do here, say Y.
+
 config RISCV_ISA_ZICBOM
 	bool "Zicbom extension support for non-coherent DMA operation"
 	depends on !XIP_KERNEL && MMU
diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index bbde5aafa957..c3cdad6b6ec8 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -44,6 +44,7 @@
 #define RISCV_ISA_EXT_ZIHINTPAUSE	32
 #define RISCV_ISA_EXT_SVNAPOT		33
 #define RISCV_ISA_EXT_ZICBOZ		34
+#define RISCV_ISA_EXT_ZBC		35
 
 #define RISCV_ISA_EXT_MAX		64
 #define RISCV_ISA_EXT_NAME_LEN_MAX	32
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index 9203e18320f9..4cab0432d7ef 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -189,6 +189,7 @@ static struct riscv_isa_ext_data isa_ext_arr[] = {
 	__RISCV_ISA_EXT_DATA(zicboz, RISCV_ISA_EXT_ZICBOZ),
 	__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
 	__RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB),
+	__RISCV_ISA_EXT_DATA(zbc, RISCV_ISA_EXT_ZBC),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
 	__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 9d4ca6de26cc..3ddc7cebd810 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -232,6 +232,7 @@ printk("!!!! isa-string: %s\n\n\n", isa);
 	SET_ISA_EXT_MAP("svnapot", RISCV_ISA_EXT_SVNAPOT);
 	SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
 	SET_ISA_EXT_MAP("zbb", RISCV_ISA_EXT_ZBB);
+	SET_ISA_EXT_MAP("zbc", RISCV_ISA_EXT_ZBC);
 	SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM);
 	SET_ISA_EXT_MAP("zicboz", RISCV_ISA_EXT_ZICBOZ);
 	SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE);

From patchwork Wed Mar 29 14:06:40 2023
X-Patchwork-Submitter: Heiko Stübner
X-Patchwork-Id: 76599
From: Heiko Stuebner
To: palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net,
    conor.dooley@microchip.com, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
    christoph.muellner@vrull.eu, heiko@sntech.de, Heiko Stuebner
Subject: [PATCH v4 2/4] RISC-V: add Zbkb extension detection
Date: Wed, 29 Mar 2023 16:06:40 +0200
Message-Id: <20230329140642.2186644-3-heiko.stuebner@vrull.eu>
In-Reply-To: <20230329140642.2186644-1-heiko.stuebner@vrull.eu>
References: <20230329140642.2186644-1-heiko.stuebner@vrull.eu>

From: Heiko Stuebner

Add detection for the Zbkb extension. Zbkb is part of the set of scalar
cryptography extensions and provides bitmanip instructions for
cryptography; it is a "subset of the Zbb extension particularly useful
for cryptography". Zbkb was ratified in January 2022.

Code using the extension is expected to pre-encode Zbkb instructions,
so don't introduce special toolchain requirements for now.
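As an illustration of how such runtime detection is consumed, here is a
minimal sketch; transform_zbkb() and transform_generic() are hypothetical
helpers, while the detection API is the one used by patch 4/4:

    #include <linux/types.h>
    #include <asm/hwcap.h>

    /* hypothetical helpers standing in for an accelerated and a
     * fallback implementation, respectively */
    void transform_zbkb(u8 *buf, size_t len);
    void transform_generic(u8 *buf, size_t len);

    static void do_transform(u8 *buf, size_t len)
    {
            /* check against the ISA capabilities parsed from the
             * isa string at boot */
            if (riscv_isa_extension_available(NULL, ZBKB))
                    transform_zbkb(buf, len);
            else
                    transform_generic(buf, len);
    }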
Signed-off-by: Heiko Stuebner
---
 arch/riscv/include/asm/hwcap.h | 1 +
 arch/riscv/kernel/cpu.c        | 1 +
 arch/riscv/kernel/cpufeature.c | 1 +
 3 files changed, 3 insertions(+)

diff --git a/arch/riscv/include/asm/hwcap.h b/arch/riscv/include/asm/hwcap.h
index c3cdad6b6ec8..90b02d01ded4 100644
--- a/arch/riscv/include/asm/hwcap.h
+++ b/arch/riscv/include/asm/hwcap.h
@@ -45,6 +45,7 @@
 #define RISCV_ISA_EXT_SVNAPOT		33
 #define RISCV_ISA_EXT_ZICBOZ		34
 #define RISCV_ISA_EXT_ZBC		35
+#define RISCV_ISA_EXT_ZBKB		36
 
 #define RISCV_ISA_EXT_MAX		64
 #define RISCV_ISA_EXT_NAME_LEN_MAX	32
diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index 4cab0432d7ef..32470119f31c 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -190,6 +190,7 @@ static struct riscv_isa_ext_data isa_ext_arr[] = {
 	__RISCV_ISA_EXT_DATA(zihintpause, RISCV_ISA_EXT_ZIHINTPAUSE),
 	__RISCV_ISA_EXT_DATA(zbb, RISCV_ISA_EXT_ZBB),
 	__RISCV_ISA_EXT_DATA(zbc, RISCV_ISA_EXT_ZBC),
+	__RISCV_ISA_EXT_DATA(zbkb, RISCV_ISA_EXT_ZBKB),
 	__RISCV_ISA_EXT_DATA(sscofpmf, RISCV_ISA_EXT_SSCOFPMF),
 	__RISCV_ISA_EXT_DATA(sstc, RISCV_ISA_EXT_SSTC),
 	__RISCV_ISA_EXT_DATA(svinval, RISCV_ISA_EXT_SVINVAL),
diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
index 3ddc7cebd810..1c4392421fe9 100644
--- a/arch/riscv/kernel/cpufeature.c
+++ b/arch/riscv/kernel/cpufeature.c
@@ -233,6 +233,7 @@ printk("!!!! isa-string: %s\n\n\n", isa);
 	SET_ISA_EXT_MAP("svpbmt", RISCV_ISA_EXT_SVPBMT);
 	SET_ISA_EXT_MAP("zbb", RISCV_ISA_EXT_ZBB);
 	SET_ISA_EXT_MAP("zbc", RISCV_ISA_EXT_ZBC);
+	SET_ISA_EXT_MAP("zbkb", RISCV_ISA_EXT_ZBKB);
 	SET_ISA_EXT_MAP("zicbom", RISCV_ISA_EXT_ZICBOM);
 	SET_ISA_EXT_MAP("zicboz", RISCV_ISA_EXT_ZICBOZ);
 	SET_ISA_EXT_MAP("zihintpause", RISCV_ISA_EXT_ZIHINTPAUSE);

From patchwork Wed Mar 29 14:06:41 2023
X-Patchwork-Submitter: Heiko Stübner
X-Patchwork-Id: 76594
From: Heiko Stuebner
To: palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net,
    conor.dooley@microchip.com, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
    christoph.muellner@vrull.eu, heiko@sntech.de, Heiko Stuebner
Subject: [PATCH v4 3/4] RISC-V: hook new crypto subdir into build-system
Date: Wed, 29 Mar 2023 16:06:41 +0200
Message-Id: <20230329140642.2186644-4-heiko.stuebner@vrull.eu>
In-Reply-To: <20230329140642.2186644-1-heiko.stuebner@vrull.eu>
References: <20230329140642.2186644-1-heiko.stuebner@vrull.eu>

From: Heiko Stuebner

Create a crypto subdirectory for the accelerated cryptography routines
added in the following patch, and hook it into the riscv Kbuild and the
main crypto Kconfig.
Signed-off-by: Heiko Stuebner
---
 arch/riscv/Kbuild          | 1 +
 arch/riscv/crypto/Kconfig  | 5 +++++
 arch/riscv/crypto/Makefile | 4 ++++
 crypto/Kconfig             | 3 +++
 4 files changed, 13 insertions(+)
 create mode 100644 arch/riscv/crypto/Kconfig
 create mode 100644 arch/riscv/crypto/Makefile

diff --git a/arch/riscv/Kbuild b/arch/riscv/Kbuild
index afa83e307a2e..250d1fd38618 100644
--- a/arch/riscv/Kbuild
+++ b/arch/riscv/Kbuild
@@ -2,6 +2,7 @@
 
 obj-y += kernel/ mm/ net/
 obj-$(CONFIG_BUILTIN_DTB) += boot/dts/
+obj-$(CONFIG_CRYPTO) += crypto/
 obj-y += errata/
 obj-$(CONFIG_KVM) += kvm/
 
diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
new file mode 100644
index 000000000000..10d60edc0110
--- /dev/null
+++ b/arch/riscv/crypto/Kconfig
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0
+
+menu "Accelerated Cryptographic Algorithms for CPU (riscv)"
+
+endmenu
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
new file mode 100644
index 000000000000..b3b6332c9f6d
--- /dev/null
+++ b/arch/riscv/crypto/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# linux/arch/riscv/crypto/Makefile
+#
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 9c86f7045157..003921cb0301 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1401,6 +1401,9 @@ endif
 if PPC
 source "arch/powerpc/crypto/Kconfig"
 endif
+if RISCV
+source "arch/riscv/crypto/Kconfig"
+endif
 if S390
 source "arch/s390/crypto/Kconfig"
 endif

From patchwork Wed Mar 29 14:06:42 2023
X-Patchwork-Submitter: Heiko Stübner
X-Patchwork-Id: 76597
From: Heiko Stuebner
To: palmer@dabbelt.com
Cc: paul.walmsley@sifive.com, aou@eecs.berkeley.edu,
    herbert@gondor.apana.org.au, davem@davemloft.net,
    conor.dooley@microchip.com, linux-riscv@lists.infradead.org,
    linux-kernel@vger.kernel.org, linux-crypto@vger.kernel.org,
    christoph.muellner@vrull.eu, heiko@sntech.de, Heiko Stuebner
Subject: [PATCH v4 4/4] RISC-V: crypto: add accelerated GCM GHASH implementation
Date: Wed, 29 Mar 2023 16:06:42 +0200
Message-Id: <20230329140642.2186644-5-heiko.stuebner@vrull.eu>
In-Reply-To: <20230329140642.2186644-1-heiko.stuebner@vrull.eu>
References: <20230329140642.2186644-1-heiko.stuebner@vrull.eu>

From: Heiko Stuebner

With different sets of available extensions, a number of different
implementation variants are possible. Quite a number of them are already
implemented in OpenSSL or are in the process of being implemented, so pick
the relevant OpenSSL code and add suitable glue code, similar to arm64 and
powerpc, to use it for kernel-specific cryptography.

The prioritization of the algorithms follows the ifdef chain for the
assembly callbacks done in OpenSSL, but here the algorithms get registered
separately, so that all of them can be part of the crypto selftests.
The crypto subsystem will select the most performant of all registered
algorithms on the running system, but will selftest all registered ones.

In a first step this adds scalar variants using the Zbc, Zbb and possibly
Zbkb (bitmanip crypto) extensions. The perl implementation stems from the
OpenSSL pull request at https://github.com/openssl/openssl/pull/20078.
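To make the selection behaviour concrete, here is a minimal sketch
(illustrative only, not part of the patch) of what a crypto API user ends
up with; the driver names and priorities are the ones registered below:

    #include <linux/err.h>
    #include <linux/printk.h>
    #include <crypto/hash.h>

    static void show_ghash_impl(void)
    {
            /* asks for "ghash" by generic name; the crypto core returns
             * the registered implementation with the highest cra_priority
             * that passed its selftest, e.g. riscv64_zbc_zbkb_ghash (252)
             * before riscv64_zbc_zbb_ghash (251) and
             * riscv64_zbc_ghash (250) */
            struct crypto_shash *tfm = crypto_alloc_shash("ghash", 0, 0);

            if (IS_ERR(tfm))
                    return;
            pr_info("ghash: %s\n",
                    crypto_shash_alg(tfm)->base.cra_driver_name);
            crypto_free_shash(tfm);
    }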
Co-developed-by: Christoph Müllner
Signed-off-by: Christoph Müllner
Signed-off-by: Heiko Stuebner
---
 arch/riscv/crypto/Kconfig              |  13 +
 arch/riscv/crypto/Makefile             |  14 +
 arch/riscv/crypto/ghash-riscv64-glue.c | 258 ++++++++++++++++
 arch/riscv/crypto/ghash-riscv64-zbc.pl | 400 +++++++++++++++++++++++++
 arch/riscv/crypto/riscv.pm             | 231 ++++++++++++++
 5 files changed, 916 insertions(+)
 create mode 100644 arch/riscv/crypto/ghash-riscv64-glue.c
 create mode 100644 arch/riscv/crypto/ghash-riscv64-zbc.pl
 create mode 100644 arch/riscv/crypto/riscv.pm

diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
index 10d60edc0110..cd2237923e68 100644
--- a/arch/riscv/crypto/Kconfig
+++ b/arch/riscv/crypto/Kconfig
@@ -2,4 +2,17 @@
 
 menu "Accelerated Cryptographic Algorithms for CPU (riscv)"
 
+config CRYPTO_GHASH_RISCV64
+	tristate "Hash functions: GHASH"
+	depends on 64BIT && RISCV_ISA_ZBC
+	select CRYPTO_HASH
+	select CRYPTO_LIB_GF128MUL
+	help
+	  GCM GHASH function (NIST SP800-38D)
+
+	  Architecture: riscv64 using one of:
+	  - Zbc extension
+	  - Zbc + Zbb extensions
+	  - Zbc + Zbkb extensions
+
 endmenu
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
index b3b6332c9f6d..0a158919e9da 100644
--- a/arch/riscv/crypto/Makefile
+++ b/arch/riscv/crypto/Makefile
@@ -2,3 +2,17 @@
 #
 # linux/arch/riscv/crypto/Makefile
 #
+
+obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o
+ghash-riscv64-y := ghash-riscv64-glue.o
+ifdef CONFIG_RISCV_ISA_ZBC
+ghash-riscv64-y += ghash-riscv64-zbc.o
+endif
+
+quiet_cmd_perlasm = PERLASM $@
+      cmd_perlasm = $(PERL) $(<) void $(@)
+
+$(obj)/ghash-riscv64-zbc.S: $(src)/ghash-riscv64-zbc.pl
+	$(call cmd,perlasm)
+
+clean-files += ghash-riscv64-zbc.S
diff --git a/arch/riscv/crypto/ghash-riscv64-glue.c b/arch/riscv/crypto/ghash-riscv64-glue.c
new file mode 100644
index 000000000000..5ab704c49539
--- /dev/null
+++ b/arch/riscv/crypto/ghash-riscv64-glue.c
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * RISC-V optimized GHASH routines
+ *
+ * Copyright (C) 2023 VRULL GmbH
+ * Author: Heiko Stuebner
+ */
+
+#include <linux/types.h>
+#include <linux/err.h>
+#include <linux/crypto.h>
+#include <linux/module.h>
+#include <asm/hwcap.h>
+#include <crypto/b128ops.h>
+#include <crypto/ghash.h>
+#include <crypto/internal/hash.h>
+
+/* Zbc (optional with zbkb improvements) */
+void gcm_ghash_rv64i_zbc(u64 Xi[2], const u128 Htable[16],
+			 const u8 *inp, size_t len);
+void gcm_ghash_rv64i_zbc__zbkb(u64 Xi[2], const u128 Htable[16],
+			       const u8 *inp, size_t len);
+
+struct riscv64_ghash_ctx {
+	void (*ghash_func)(u64 Xi[2], const u128 Htable[16],
+			   const u8 *inp, size_t len);
+
+	/* key used by vector asm */
+	u128 htable[16];
+	/* key used by software fallback */
+	be128 key;
+};
+
+struct riscv64_ghash_desc_ctx {
+	u64 shash[2];
+	u8 buffer[GHASH_DIGEST_SIZE];
+	int bytes;
+};
+
+static int riscv64_ghash_init(struct shash_desc *desc)
+{
+	struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	dctx->bytes = 0;
+	memset(dctx->shash, 0, GHASH_DIGEST_SIZE);
+	return 0;
+}
+
+#ifdef CONFIG_RISCV_ISA_ZBC
+
+#define RISCV64_ZBC_SETKEY(VARIANT, GHASH) \
+void gcm_init_rv64i_ ## VARIANT(u128 Htable[16], const u64 Xi[2]); \
+static int riscv64_zbc_ghash_setkey_ ## VARIANT(struct crypto_shash *tfm, \
+						const u8 *key, \
+						unsigned int keylen) \
+{ \
+	struct riscv64_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(tfm)); \
+	const u64 k[2] = { cpu_to_be64(((const u64 *)key)[0]), \
+			   cpu_to_be64(((const u64 *)key)[1]) }; \
+ \
+	if (keylen != GHASH_BLOCK_SIZE) \
+		return -EINVAL; \
+ \
+	memcpy(&ctx->key, key, GHASH_BLOCK_SIZE); \
+	gcm_init_rv64i_ ## VARIANT(ctx->htable, k); \
+ \
+	ctx->ghash_func = gcm_ghash_rv64i_ ## GHASH; \
+ \
+	return 0; \
+}
+
+static int riscv64_zbc_ghash_update(struct shash_desc *desc,
+				    const u8 *src, unsigned int srclen)
+{
+	unsigned int len;
+	struct riscv64_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
+	struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	if (dctx->bytes) {
+		if (dctx->bytes + srclen < GHASH_DIGEST_SIZE) {
+			memcpy(dctx->buffer + dctx->bytes, src, srclen);
+			dctx->bytes += srclen;
+			return 0;
+		}
+		memcpy(dctx->buffer + dctx->bytes, src,
+		       GHASH_DIGEST_SIZE - dctx->bytes);
+
+		ctx->ghash_func(dctx->shash, ctx->htable,
+				dctx->buffer, GHASH_DIGEST_SIZE);
+
+		src += GHASH_DIGEST_SIZE - dctx->bytes;
+		srclen -= GHASH_DIGEST_SIZE - dctx->bytes;
+		dctx->bytes = 0;
+	}
+	len = srclen & ~(GHASH_DIGEST_SIZE - 1);
+
+	if (len) {
+		/* dispatch through the per-key function pointer, so that
+		 * the zbb/zbkb variants use their matching routine here
+		 * as well */
+		ctx->ghash_func(dctx->shash, ctx->htable, src, len);
+		src += len;
+		srclen -= len;
+	}
+
+	if (srclen) {
+		memcpy(dctx->buffer, src, srclen);
+		dctx->bytes = srclen;
+	}
+	return 0;
+}
+
+static int riscv64_zbc_ghash_final(struct shash_desc *desc, u8 *out)
+{
+	int i;
+	struct riscv64_ghash_ctx *ctx = crypto_tfm_ctx(crypto_shash_tfm(desc->tfm));
+	struct riscv64_ghash_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	if (dctx->bytes) {
+		for (i = dctx->bytes; i < GHASH_DIGEST_SIZE; i++)
+			dctx->buffer[i] = 0;
+		ctx->ghash_func(dctx->shash, ctx->htable,
+				dctx->buffer, GHASH_DIGEST_SIZE);
+		dctx->bytes = 0;
+	}
+	memcpy(out, dctx->shash, GHASH_DIGEST_SIZE);
+	return 0;
+}
+
+RISCV64_ZBC_SETKEY(zbc, zbc);
+struct shash_alg riscv64_zbc_ghash_alg = {
+	.digestsize = GHASH_DIGEST_SIZE,
+	.init = riscv64_ghash_init,
+	.update = riscv64_zbc_ghash_update,
+	.final = riscv64_zbc_ghash_final,
+	.setkey = riscv64_zbc_ghash_setkey_zbc,
+	.descsize = sizeof(struct riscv64_ghash_desc_ctx)
+		    + sizeof(struct ghash_desc_ctx),
+	.base = {
+		 .cra_name = "ghash",
+		 .cra_driver_name = "riscv64_zbc_ghash",
+		 .cra_priority = 250,
+		 .cra_blocksize = GHASH_BLOCK_SIZE,
+		 .cra_ctxsize = sizeof(struct riscv64_ghash_ctx),
+		 .cra_module = THIS_MODULE,
+	},
+};
+
+RISCV64_ZBC_SETKEY(zbc__zbb, zbc);
+struct shash_alg riscv64_zbc_zbb_ghash_alg = {
+	.digestsize = GHASH_DIGEST_SIZE,
+	.init = riscv64_ghash_init,
+	.update = riscv64_zbc_ghash_update,
+	.final = riscv64_zbc_ghash_final,
+	.setkey = riscv64_zbc_ghash_setkey_zbc__zbb,
+	.descsize = sizeof(struct riscv64_ghash_desc_ctx)
+		    + sizeof(struct ghash_desc_ctx),
+	.base = {
+		 .cra_name = "ghash",
+		 .cra_driver_name = "riscv64_zbc_zbb_ghash",
+		 .cra_priority = 251,
+		 .cra_blocksize = GHASH_BLOCK_SIZE,
+		 .cra_ctxsize = sizeof(struct riscv64_ghash_ctx),
+		 .cra_module = THIS_MODULE,
+	},
+};
+
+RISCV64_ZBC_SETKEY(zbc__zbkb, zbc__zbkb);
+struct shash_alg riscv64_zbc_zbkb_ghash_alg = {
+	.digestsize = GHASH_DIGEST_SIZE,
+	.init = riscv64_ghash_init,
+	.update = riscv64_zbc_ghash_update,
+	.final = riscv64_zbc_ghash_final,
+	.setkey = riscv64_zbc_ghash_setkey_zbc__zbkb,
+	.descsize = sizeof(struct riscv64_ghash_desc_ctx)
+		    + sizeof(struct ghash_desc_ctx),
+	.base = {
+		 .cra_name = "ghash",
+		 .cra_driver_name = "riscv64_zbc_zbkb_ghash",
+		 .cra_priority = 252,
+		 .cra_blocksize = GHASH_BLOCK_SIZE,
+		 .cra_ctxsize = sizeof(struct riscv64_ghash_ctx),
+		 .cra_module = THIS_MODULE,
+	},
+};
+
+#endif /* CONFIG_RISCV_ISA_ZBC */
+
+#define RISCV64_DEFINED_GHASHES	7
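+/*
+ * Registration book-keeping: algorithms successfully registered below
+ * are recorded here, so that they can be unregistered again on a later
+ * registration error or on module unload.
+ */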
+static struct shash_alg *riscv64_ghashes[RISCV64_DEFINED_GHASHES];
+static int num_riscv64_ghashes;
+
+static int __init riscv64_ghash_register(struct shash_alg *ghash)
+{
+	int ret;
+
+	ret = crypto_register_shash(ghash);
+	if (ret < 0) {
+		int i;
+
+		for (i = num_riscv64_ghashes - 1; i >= 0 ; i--)
+			crypto_unregister_shash(riscv64_ghashes[i]);
+
+		num_riscv64_ghashes = 0;
+
+		return ret;
+	}
+
+	pr_debug("Registered RISC-V ghash %s\n", ghash->base.cra_driver_name);
+	riscv64_ghashes[num_riscv64_ghashes] = ghash;
+	num_riscv64_ghashes++;
+	return 0;
+}
+
+static int __init riscv64_ghash_mod_init(void)
+{
+	int ret = 0;
+
+#ifdef CONFIG_RISCV_ISA_ZBC
+	if (riscv_isa_extension_available(NULL, ZBC)) {
+		ret = riscv64_ghash_register(&riscv64_zbc_ghash_alg);
+		if (ret < 0)
+			return ret;
+
+		if (riscv_isa_extension_available(NULL, ZBB)) {
+			ret = riscv64_ghash_register(&riscv64_zbc_zbb_ghash_alg);
+			if (ret < 0)
+				return ret;
+		}
+
+		if (riscv_isa_extension_available(NULL, ZBKB)) {
+			ret = riscv64_ghash_register(&riscv64_zbc_zbkb_ghash_alg);
+			if (ret < 0)
+				return ret;
+		}
+	}
+#endif
+
+	return 0;
+}
+
+static void __exit riscv64_ghash_mod_fini(void)
+{
+	int i;
+
+	for (i = num_riscv64_ghashes - 1; i >= 0 ; i--)
+		crypto_unregister_shash(riscv64_ghashes[i]);
+
+	num_riscv64_ghashes = 0;
+}
+
+module_init(riscv64_ghash_mod_init);
+module_exit(riscv64_ghash_mod_fini);
+
+MODULE_DESCRIPTION("GCM GHASH (accelerated)");
+MODULE_AUTHOR("Heiko Stuebner");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CRYPTO("ghash");
diff --git a/arch/riscv/crypto/ghash-riscv64-zbc.pl b/arch/riscv/crypto/ghash-riscv64-zbc.pl
new file mode 100644
index 000000000000..691231ffa11c
--- /dev/null
+++ b/arch/riscv/crypto/ghash-riscv64-zbc.pl
@@ -0,0 +1,400 @@
+#! /usr/bin/env perl
+# Copyright 2022 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You may not use
+# this file except in compliance with the License. You can obtain a copy
+# in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+
+use strict;
+use warnings;
+
+use FindBin qw($Bin);
+use lib "$Bin";
+use lib "$Bin/../../perlasm";
+use riscv;
+
+# $output is the last argument if it looks like a file (it has an extension)
+# $flavour is the first argument if it doesn't look like a file
+my $output = $#ARGV >= 0 && $ARGV[$#ARGV] =~ m|\.\w+$| ? pop : undef;
+my $flavour = $#ARGV >= 0 && $ARGV[0] !~ m|\.| ? shift : undef;
+
+$output and open STDOUT,">$output";
+
+my $code=<<___;
+.text
+___
+
+################################################################################
+# void gcm_init_rv64i_zbc(u128 Htable[16], const u64 H[2]);
+# void gcm_init_rv64i_zbc__zbb(u128 Htable[16], const u64 H[2]);
+# void gcm_init_rv64i_zbc__zbkb(u128 Htable[16], const u64 H[2]);
+#
+# input:  H: 128-bit H - secret parameter E(K, 0^128)
+# output: Htable: Preprocessed key data for gcm_gmult_rv64i_zbc* and
+#                 gcm_ghash_rv64i_zbc*
+#
+# All callers of this function revert the byte-order unconditionally
+# on little-endian machines. So we need to revert the byte-order back.
+# Additionally we reverse the bits of each byte.
+ +{ +my ($Htable,$H,$VAL0,$VAL1,$TMP0,$TMP1,$TMP2) = ("a0","a1","a2","a3","t0","t1","t2"); + +$code .= <<___; +.p2align 3 +.globl gcm_init_rv64i_zbc +.type gcm_init_rv64i_zbc,\@function +gcm_init_rv64i_zbc: + ld $VAL0,0($H) + ld $VAL1,8($H) + @{[brev8_rv64i $VAL0, $TMP0, $TMP1, $TMP2]} + @{[brev8_rv64i $VAL1, $TMP0, $TMP1, $TMP2]} + @{[sd_rev8_rv64i $VAL0, $Htable, 0, $TMP0]} + @{[sd_rev8_rv64i $VAL1, $Htable, 8, $TMP0]} + ret +.size gcm_init_rv64i_zbc,.-gcm_init_rv64i_zbc +___ +} + +{ +my ($Htable,$H,$VAL0,$VAL1,$TMP0,$TMP1,$TMP2) = ("a0","a1","a2","a3","t0","t1","t2"); + +$code .= <<___; +.p2align 3 +.globl gcm_init_rv64i_zbc__zbb +.type gcm_init_rv64i_zbc__zbb,\@function +gcm_init_rv64i_zbc__zbb: + ld $VAL0,0($H) + ld $VAL1,8($H) + @{[brev8_rv64i $VAL0, $TMP0, $TMP1, $TMP2]} + @{[brev8_rv64i $VAL1, $TMP0, $TMP1, $TMP2]} + @{[rev8 $VAL0, $VAL0]} + @{[rev8 $VAL1, $VAL1]} + sd $VAL0,0($Htable) + sd $VAL1,8($Htable) + ret +.size gcm_init_rv64i_zbc__zbb,.-gcm_init_rv64i_zbc__zbb +___ +} + +{ +my ($Htable,$H,$TMP0,$TMP1) = ("a0","a1","t0","t1"); + +$code .= <<___; +.p2align 3 +.globl gcm_init_rv64i_zbc__zbkb +.type gcm_init_rv64i_zbc__zbkb,\@function +gcm_init_rv64i_zbc__zbkb: + ld $TMP0,0($H) + ld $TMP1,8($H) + @{[brev8 $TMP0, $TMP0]} + @{[brev8 $TMP1, $TMP1]} + @{[rev8 $TMP0, $TMP0]} + @{[rev8 $TMP1, $TMP1]} + sd $TMP0,0($Htable) + sd $TMP1,8($Htable) + ret +.size gcm_init_rv64i_zbc__zbkb,.-gcm_init_rv64i_zbc__zbkb +___ +} + +################################################################################ +# void gcm_gmult_rv64i_zbc(u64 Xi[2], const u128 Htable[16]); +# void gcm_gmult_rv64i_zbc__zbkb(u64 Xi[2], const u128 Htable[16]); +# +# input: Xi: current hash value +# Htable: copy of H +# output: Xi: next hash value Xi +# +# Compute GMULT (Xi*H mod f) using the Zbc (clmul) and Zbb (basic bit manip) +# extensions. Using the no-Karatsuba approach and clmul for the final reduction. +# This results in an implementation with minimized number of instructions. +# HW with clmul latencies higher than 2 cycles might observe a performance +# improvement with Karatsuba. HW with clmul latencies higher than 6 cycles +# might observe a performance improvement with additionally converting the +# reduction to shift&xor. 
+{
+my ($Xi,$Htable,$x0,$x1,$y0,$y1) = ("a0","a1","a4","a5","a6","a7");
+my ($z0,$z1,$z2,$z3,$t0,$t1,$polymod) = ("t0","t1","t2","t3","t4","t5","t6");
+
+$code .= <<___;
+.p2align 3
+.globl gcm_gmult_rv64i_zbc
+.type gcm_gmult_rv64i_zbc,\@function
+gcm_gmult_rv64i_zbc:
+    # Load Xi and bit-reverse it
+    ld        $x0, 0($Xi)
+    ld        $x1, 8($Xi)
+    @{[brev8_rv64i $x0, $z0, $z1, $z2]}
+    @{[brev8_rv64i $x1, $z0, $z1, $z2]}
+
+    # Load the key (already bit-reversed)
+    ld        $y0, 0($Htable)
+    ld        $y1, 8($Htable)
+
+    # Load the reduction constant
+    la        $polymod, Lpolymod
+    lbu       $polymod, 0($polymod)
+
+    # Multiplication (without Karatsuba)
+    @{[clmulh $z3, $x1, $y1]}
+    @{[clmul  $z2, $x1, $y1]}
+    @{[clmulh $t1, $x0, $y1]}
+    @{[clmul  $z1, $x0, $y1]}
+    xor       $z2, $z2, $t1
+    @{[clmulh $t1, $x1, $y0]}
+    @{[clmul  $t0, $x1, $y0]}
+    xor       $z2, $z2, $t1
+    xor       $z1, $z1, $t0
+    @{[clmulh $t1, $x0, $y0]}
+    @{[clmul  $z0, $x0, $y0]}
+    xor       $z1, $z1, $t1
+
+    # Reduction with clmul
+    @{[clmulh $t1, $z3, $polymod]}
+    @{[clmul  $t0, $z3, $polymod]}
+    xor       $z2, $z2, $t1
+    xor       $z1, $z1, $t0
+    @{[clmulh $t1, $z2, $polymod]}
+    @{[clmul  $t0, $z2, $polymod]}
+    xor       $x1, $z1, $t1
+    xor       $x0, $z0, $t0
+
+    # Bit-reverse Xi back and store it
+    @{[brev8_rv64i $x0, $z0, $z1, $z2]}
+    @{[brev8_rv64i $x1, $z0, $z1, $z2]}
+    sd        $x0, 0($Xi)
+    sd        $x1, 8($Xi)
+    ret
+.size gcm_gmult_rv64i_zbc,.-gcm_gmult_rv64i_zbc
+___
+}
+
+{
+my ($Xi,$Htable,$x0,$x1,$y0,$y1) = ("a0","a1","a4","a5","a6","a7");
+my ($z0,$z1,$z2,$z3,$t0,$t1,$polymod) = ("t0","t1","t2","t3","t4","t5","t6");
+
+$code .= <<___;
+.p2align 3
+.globl gcm_gmult_rv64i_zbc__zbkb
+.type gcm_gmult_rv64i_zbc__zbkb,\@function
+gcm_gmult_rv64i_zbc__zbkb:
+    # Load Xi and bit-reverse it
+    ld        $x0, 0($Xi)
+    ld        $x1, 8($Xi)
+    @{[brev8 $x0, $x0]}
+    @{[brev8 $x1, $x1]}
+
+    # Load the key (already bit-reversed)
+    ld        $y0, 0($Htable)
+    ld        $y1, 8($Htable)
+
+    # Load the reduction constant
+    la        $polymod, Lpolymod
+    lbu       $polymod, 0($polymod)
+
+    # Multiplication (without Karatsuba)
+    @{[clmulh $z3, $x1, $y1]}
+    @{[clmul  $z2, $x1, $y1]}
+    @{[clmulh $t1, $x0, $y1]}
+    @{[clmul  $z1, $x0, $y1]}
+    xor       $z2, $z2, $t1
+    @{[clmulh $t1, $x1, $y0]}
+    @{[clmul  $t0, $x1, $y0]}
+    xor       $z2, $z2, $t1
+    xor       $z1, $z1, $t0
+    @{[clmulh $t1, $x0, $y0]}
+    @{[clmul  $z0, $x0, $y0]}
+    xor       $z1, $z1, $t1
+
+    # Reduction with clmul
+    @{[clmulh $t1, $z3, $polymod]}
+    @{[clmul  $t0, $z3, $polymod]}
+    xor       $z2, $z2, $t1
+    xor       $z1, $z1, $t0
+    @{[clmulh $t1, $z2, $polymod]}
+    @{[clmul  $t0, $z2, $polymod]}
+    xor       $x1, $z1, $t1
+    xor       $x0, $z0, $t0
+
+    # Bit-reverse Xi back and store it
+    @{[brev8 $x0, $x0]}
+    @{[brev8 $x1, $x1]}
+    sd        $x0, 0($Xi)
+    sd        $x1, 8($Xi)
+    ret
+.size gcm_gmult_rv64i_zbc__zbkb,.-gcm_gmult_rv64i_zbc__zbkb
+___
+}
+
+################################################################################
+# void gcm_ghash_rv64i_zbc(u64 Xi[2], const u128 Htable[16],
+#                          const u8 *inp, size_t len);
+# void gcm_ghash_rv64i_zbc__zbkb(u64 Xi[2], const u128 Htable[16],
+#                                const u8 *inp, size_t len);
+#
+# input:  Xi: current hash value
+#         Htable: copy of H
+#         inp: pointer to input data
+#         len: length of input data in bytes (multiple of block size)
+# output: Xi: Xi+1 (next hash value Xi)
+{
+my ($Xi,$Htable,$inp,$len,$x0,$x1,$y0,$y1) = ("a0","a1","a2","a3","a4","a5","a6","a7");
+my ($z0,$z1,$z2,$z3,$t0,$t1,$polymod) = ("t0","t1","t2","t3","t4","t5","t6");
+
+$code .= <<___;
+.p2align 3
+.globl gcm_ghash_rv64i_zbc
+.type gcm_ghash_rv64i_zbc,\@function
+gcm_ghash_rv64i_zbc:
+    # Load Xi and bit-reverse it
+    ld        $x0, 0($Xi)
+    ld        $x1, 8($Xi)
+    @{[brev8_rv64i $x0, $z0, $z1, $z2]}
+    @{[brev8_rv64i $x1, $z0, $z1, $z2]}
+
+    # Load the key (already bit-reversed)
+    ld        $y0, 0($Htable)
+    ld        $y1, 8($Htable)
+
+    # Load the reduction constant
+    la        $polymod, Lpolymod
+    lbu       $polymod, 0($polymod)
+
+Lstep:
+    # Load the input data, bit-reverse them, and XOR them with Xi
+    ld        $t0, 0($inp)
+    ld        $t1, 8($inp)
+    add       $inp, $inp, 16
+    add       $len, $len, -16
+    @{[brev8_rv64i $t0, $z0, $z1, $z2]}
+    @{[brev8_rv64i $t1, $z0, $z1, $z2]}
+    xor       $x0, $x0, $t0
+    xor       $x1, $x1, $t1
+
+    # Multiplication (without Karatsuba)
+    @{[clmulh $z3, $x1, $y1]}
+    @{[clmul  $z2, $x1, $y1]}
+    @{[clmulh $t1, $x0, $y1]}
+    @{[clmul  $z1, $x0, $y1]}
+    xor       $z2, $z2, $t1
+    @{[clmulh $t1, $x1, $y0]}
+    @{[clmul  $t0, $x1, $y0]}
+    xor       $z2, $z2, $t1
+    xor       $z1, $z1, $t0
+    @{[clmulh $t1, $x0, $y0]}
+    @{[clmul  $z0, $x0, $y0]}
+    xor       $z1, $z1, $t1
+
+    # Reduction with clmul
+    @{[clmulh $t1, $z3, $polymod]}
+    @{[clmul  $t0, $z3, $polymod]}
+    xor       $z2, $z2, $t1
+    xor       $z1, $z1, $t0
+    @{[clmulh $t1, $z2, $polymod]}
+    @{[clmul  $t0, $z2, $polymod]}
+    xor       $x1, $z1, $t1
+    xor       $x0, $z0, $t0
+
+    # Iterate over all blocks
+    bnez      $len, Lstep
+
+    # Bit-reverse final Xi back and store it
+    @{[brev8_rv64i $x0, $z0, $z1, $z2]}
+    @{[brev8_rv64i $x1, $z0, $z1, $z2]}
+    sd        $x0, 0($Xi)
+    sd        $x1, 8($Xi)
+    ret
+.size gcm_ghash_rv64i_zbc,.-gcm_ghash_rv64i_zbc
+___
+}
+
+{
+my ($Xi,$Htable,$inp,$len,$x0,$x1,$y0,$y1) = ("a0","a1","a2","a3","a4","a5","a6","a7");
+my ($z0,$z1,$z2,$z3,$t0,$t1,$polymod) = ("t0","t1","t2","t3","t4","t5","t6");
+
+$code .= <<___;
+.p2align 3
+.globl gcm_ghash_rv64i_zbc__zbkb
+.type gcm_ghash_rv64i_zbc__zbkb,\@function
+gcm_ghash_rv64i_zbc__zbkb:
+    # Load Xi and bit-reverse it
+    ld        $x0, 0($Xi)
+    ld        $x1, 8($Xi)
+    @{[brev8 $x0, $x0]}
+    @{[brev8 $x1, $x1]}
+
+    # Load the key (already bit-reversed)
+    ld        $y0, 0($Htable)
+    ld        $y1, 8($Htable)
+
+    # Load the reduction constant
+    la        $polymod, Lpolymod
+    lbu       $polymod, 0($polymod)
+
+Lstep_zbkb:
+    # Load the input data, bit-reverse them, and XOR them with Xi
+    ld        $t0, 0($inp)
+    ld        $t1, 8($inp)
+    add       $inp, $inp, 16
+    add       $len, $len, -16
+    @{[brev8 $t0, $t0]}
+    @{[brev8 $t1, $t1]}
+    xor       $x0, $x0, $t0
+    xor       $x1, $x1, $t1
+
+    # Multiplication (without Karatsuba)
+    @{[clmulh $z3, $x1, $y1]}
+    @{[clmul  $z2, $x1, $y1]}
+    @{[clmulh $t1, $x0, $y1]}
+    @{[clmul  $z1, $x0, $y1]}
+    xor       $z2, $z2, $t1
+    @{[clmulh $t1, $x1, $y0]}
+    @{[clmul  $t0, $x1, $y0]}
+    xor       $z2, $z2, $t1
+    xor       $z1, $z1, $t0
+    @{[clmulh $t1, $x0, $y0]}
+    @{[clmul  $z0, $x0, $y0]}
+    xor       $z1, $z1, $t1
+
+    # Reduction with clmul
+    @{[clmulh $t1, $z3, $polymod]}
+    @{[clmul  $t0, $z3, $polymod]}
+    xor       $z2, $z2, $t1
+    xor       $z1, $z1, $t0
+    @{[clmulh $t1, $z2, $polymod]}
+    @{[clmul  $t0, $z2, $polymod]}
+    xor       $x1, $z1, $t1
+    xor       $x0, $z0, $t0
+
+    # Iterate over all blocks
+    bnez      $len, Lstep_zbkb
+
+    # Bit-reverse final Xi back and store it
+    @{[brev8 $x0, $x0]}
+    @{[brev8 $x1, $x1]}
+    sd        $x0, 0($Xi)
+    sd        $x1, 8($Xi)
+    ret
+.size gcm_ghash_rv64i_zbc__zbkb,.-gcm_ghash_rv64i_zbc__zbkb
+___
+}
+
+$code .= <<___;
+.p2align 3
+Lbrev8_const:
+    .dword  0xAAAAAAAAAAAAAAAA
+    .dword  0xCCCCCCCCCCCCCCCC
+    .dword  0xF0F0F0F0F0F0F0F0
+.size Lbrev8_const,.-Lbrev8_const
+
+Lpolymod:
+    .byte 0x87
+.size Lpolymod,.-Lpolymod
+___
+
+print $code;
+
+close STDOUT or die "error closing STDOUT: $!";
diff --git a/arch/riscv/crypto/riscv.pm b/arch/riscv/crypto/riscv.pm
new file mode 100644
index 000000000000..b0c786a13ca0
--- /dev/null
+++ b/arch/riscv/crypto/riscv.pm
@@ -0,0 +1,231 @@
+#! /usr/bin/env perl
+# Copyright 2023 The OpenSSL Project Authors. All Rights Reserved.
+#
+# Licensed under the Apache License 2.0 (the "License"). You may not use
+# this file except in compliance with the License. You can obtain a copy
+# in the file LICENSE in the source distribution or at
+# https://www.openssl.org/source/license.html
+
+use strict;
+use warnings;
+
+# Set $have_stacktrace to 1 if we have Devel::StackTrace
+my $have_stacktrace = 0;
+if (eval {require Devel::StackTrace;1;}) {
+    $have_stacktrace = 1;
+}
+
+my @regs = map("x$_",(0..31));
+# Mapping from the RISC-V psABI mnemonic names to the register number.
+my @regaliases = ('zero','ra','sp','gp','tp','t0','t1','t2','s0','s1',
+    map("a$_",(0..7)),
+    map("s$_",(2..11)),
+    map("t$_",(3..6))
+);
+
+my %reglookup;
+@reglookup{@regs} = @regs;
+@reglookup{@regaliases} = @regs;
+
+# Takes a register name, possibly an alias, and converts it to a register index
+# from 0 to 31
+sub read_reg {
+    my $reg = lc shift;
+    if (!exists($reglookup{$reg})) {
+        my $trace = "";
+        if ($have_stacktrace) {
+            $trace = Devel::StackTrace->new->as_string;
+        }
+        die("Unknown register ".$reg."\n".$trace);
+    }
+    my $regstr = $reglookup{$reg};
+    if (!($regstr =~ /^x([0-9]+)$/)) {
+        my $trace = "";
+        if ($have_stacktrace) {
+            $trace = Devel::StackTrace->new->as_string;
+        }
+        die("Could not process register ".$reg."\n".$trace);
+    }
+    return $1;
+}
+
+# Helper functions
+
+sub brev8_rv64i {
+    # brev8 without `brev8` instruction (only in Zbkb)
+    # Bit-reverses the first argument and needs two scratch registers
+    my $val = shift;
+    my $t0 = shift;
+    my $t1 = shift;
+    my $brev8_const = shift;
+    my $seq = <<___;
+        la      $brev8_const, Lbrev8_const
+
+        ld      $t0, 0($brev8_const)  # 0xAAAAAAAAAAAAAAAA
+        slli    $t1, $val, 1
+        and     $t1, $t1, $t0
+        and     $val, $val, $t0
+        srli    $val, $val, 1
+        or      $val, $t1, $val
+
+        ld      $t0, 8($brev8_const)  # 0xCCCCCCCCCCCCCCCC
+        slli    $t1, $val, 2
+        and     $t1, $t1, $t0
+        and     $val, $val, $t0
+        srli    $val, $val, 2
+        or      $val, $t1, $val
+
+        ld      $t0, 16($brev8_const) # 0xF0F0F0F0F0F0F0F0
+        slli    $t1, $val, 4
+        and     $t1, $t1, $t0
+        and     $val, $val, $t0
+        srli    $val, $val, 4
+        or      $val, $t1, $val
+___
+    return $seq;
+}
+
+sub sd_rev8_rv64i {
+    # rev8 without `rev8` instruction (only in Zbb or Zbkb)
+    # Stores the given value byte-reversed and needs one scratch register
+    my $val = shift;
+    my $addr = shift;
+    my $off = shift;
+    my $tmp = shift;
+    my $off0 = ($off + 0);
+    my $off1 = ($off + 1);
+    my $off2 = ($off + 2);
+    my $off3 = ($off + 3);
+    my $off4 = ($off + 4);
+    my $off5 = ($off + 5);
+    my $off6 = ($off + 6);
+    my $off7 = ($off + 7);
+    my $seq = <<___;
+        sb      $val, $off7($addr)
+        srli    $tmp, $val, 8
+        sb      $tmp, $off6($addr)
+        srli    $tmp, $val, 16
+        sb      $tmp, $off5($addr)
+        srli    $tmp, $val, 24
+        sb      $tmp, $off4($addr)
+        srli    $tmp, $val, 32
+        sb      $tmp, $off3($addr)
+        srli    $tmp, $val, 40
+        sb      $tmp, $off2($addr)
+        srli    $tmp, $val, 48
+        sb      $tmp, $off1($addr)
+        srli    $tmp, $val, 56
+        sb      $tmp, $off0($addr)
+___
+    return $seq;
+}
+
+# Scalar crypto instructions
+
+sub aes64ds {
+    # Encoding for aes64ds rd, rs1, rs2 instruction on RV64
+    #                XXXXXXX_ rs2 _ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b0011101_00000_00000_000_00000_0110011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    my $rs2 = read_reg shift;
+    return ".word ".($template | ($rs2 << 20) | ($rs1 << 15) | ($rd << 7));
+}
+
+sub aes64dsm {
+    # Encoding for aes64dsm rd, rs1, rs2 instruction on RV64
+    #                XXXXXXX_ rs2 _ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b0011111_00000_00000_000_00000_0110011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    my $rs2 = read_reg shift;
+    return ".word ".($template | ($rs2 << 20) | ($rs1 << 15) | ($rd << 7));
+}
+
+sub aes64es {
+    # Encoding for aes64es rd, rs1, rs2 instruction on RV64
+    #                XXXXXXX_ rs2 _ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b0011001_00000_00000_000_00000_0110011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    my $rs2 = read_reg shift;
+    return ".word ".($template | ($rs2 << 20) | ($rs1 << 15) | ($rd << 7));
+}
+
+sub aes64esm {
+    # Encoding for aes64esm rd, rs1, rs2 instruction on RV64
+    #                XXXXXXX_ rs2 _ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b0011011_00000_00000_000_00000_0110011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    my $rs2 = read_reg shift;
+    return ".word ".($template | ($rs2 << 20) | ($rs1 << 15) | ($rd << 7));
+}
+
+sub aes64im {
+    # Encoding for aes64im rd, rs1 instruction on RV64
+    #                XXXXXXXXXXXX_ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b001100000000_00000_001_00000_0010011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    return ".word ".($template | ($rs1 << 15) | ($rd << 7));
+}
+
+sub aes64ks1i {
+    # Encoding for aes64ks1i rd, rs1, rnum instruction on RV64
+    #                XXXXXXXX_rnum_ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b00110001_0000_00000_001_00000_0010011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    my $rnum = shift;
+    return ".word ".($template | ($rnum << 20) | ($rs1 << 15) | ($rd << 7));
+}
+
+sub aes64ks2 {
+    # Encoding for aes64ks2 rd, rs1, rs2 instruction on RV64
+    #                XXXXXXX_ rs2 _ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b0111111_00000_00000_000_00000_0110011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    my $rs2 = read_reg shift;
+    return ".word ".($template | ($rs2 << 20) | ($rs1 << 15) | ($rd << 7));
+}
+
+sub brev8 {
+    # Encoding for brev8 rd, rs instruction on RV64
+    #                XXXXXXXXXXXX_ rs  _XXX_ rd  _XXXXXXX
+    my $template = 0b011010000111_00000_101_00000_0010011;
+    my $rd = read_reg shift;
+    my $rs = read_reg shift;
+    return ".word ".($template | ($rs << 15) | ($rd << 7));
+}
+
+sub clmul {
+    # Encoding for clmul rd, rs1, rs2 instruction on RV64
+    #                XXXXXXX_ rs2 _ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b0000101_00000_00000_001_00000_0110011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    my $rs2 = read_reg shift;
+    return ".word ".($template | ($rs2 << 20) | ($rs1 << 15) | ($rd << 7));
+}
+
+sub clmulh {
+    # Encoding for clmulh rd, rs1, rs2 instruction on RV64
+    #                XXXXXXX_ rs2 _ rs1 _XXX_ rd  _XXXXXXX
+    my $template = 0b0000101_00000_00000_011_00000_0110011;
+    my $rd = read_reg shift;
+    my $rs1 = read_reg shift;
+    my $rs2 = read_reg shift;
+    return ".word ".($template | ($rs2 << 20) | ($rs1 << 15) | ($rd << 7));
+}
+
+sub rev8 {
+    # Encoding for rev8 rd, rs instruction on RV64
+    #                XXXXXXXXXXXXX_ rs  _XXX_ rd  _XXXXXXX
+    my $template = 0b011010111000_00000_101_00000_0010011;
+    my $rd = read_reg shift;
+    my $rs = read_reg shift;
+    return ".word ".($template | ($rs << 15) | ($rd << 7));
+}
+
+1;
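
For testing, the registered implementations can be exercised from userspace
through the kernel crypto API. A minimal AF_ALG sketch (illustrative only;
error handling omitted, an all-zero 16-byte key and a single all-zero block
assumed):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    int main(void)
    {
            struct sockaddr_alg sa = {
                    .salg_family = AF_ALG,
                    .salg_type   = "hash",
                    .salg_name   = "ghash",
            };
            unsigned char key[16] = { 0 }, block[16] = { 0 }, digest[16];
            int i, tfmfd, opfd;

            tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa));
            /* ghash requires the key to be set before accept() */
            setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
            opfd = accept(tfmfd, NULL, 0);

            write(opfd, block, sizeof(block));
            read(opfd, digest, sizeof(digest));

            for (i = 0; i < 16; i++)
                    printf("%02x", digest[i]);
            printf("\n");
            return 0;
    }

Which implementation actually serves the request can be checked in
/proc/crypto.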