From patchwork Thu Nov 3 07:40:35 2022
X-Patchwork-Submitter: guanjun
X-Patchwork-Id: 14709
From: 'Guanjun'
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com, guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com
Subject: [PATCH v3 RESEND 1/9] crypto/ycc: Add YCC (Yitian Cryptography Complex) accelerator driver
Date: Thu, 3 Nov 2022 15:40:35 +0800
Message-Id: <1667461243-48652-2-git-send-email-guanjun@linux.alibaba.com>
X-Mailer: git-send-email
1.8.3.1
In-Reply-To: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>
References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>

From: Zelin Deng <zelin.deng@linux.alibaba.com>

YCC (Yitian Cryptography Complex) is designed to accelerate encryption and decryption. Yitian is the name of the Alibaba SoC family, which is based on the ARMv9 architecture.

This patch adds the driver with basic PCI setup and IRQ requests.

Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
Co-developed-by: Jiankang Chen
Signed-off-by: Jiankang Chen
Signed-off-by: Guanjun <guanjun@linux.alibaba.com>
---
 drivers/crypto/Kconfig       |   2 +
 drivers/crypto/Makefile      |   1 +
 drivers/crypto/ycc/Kconfig   |   8 +
 drivers/crypto/ycc/Makefile  |   3 +
 drivers/crypto/ycc/ycc_dev.h | 148 +++++++++++++++
 drivers/crypto/ycc/ycc_drv.c | 444 +++++++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ycc/ycc_isr.c | 117 ++++++++++++
 drivers/crypto/ycc/ycc_isr.h |  12 ++
 8 files changed, 735 insertions(+)
 create mode 100644 drivers/crypto/ycc/Kconfig
 create mode 100644 drivers/crypto/ycc/Makefile
 create mode 100644 drivers/crypto/ycc/ycc_dev.h
 create mode 100644 drivers/crypto/ycc/ycc_drv.c
 create mode 100644 drivers/crypto/ycc/ycc_isr.c
 create mode 100644 drivers/crypto/ycc/ycc_isr.h

diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 55e75fb..f0c4aee 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -818,4 +818,6 @@ config CRYPTO_DEV_SA2UL
 source "drivers/crypto/keembay/Kconfig"
 source "drivers/crypto/aspeed/Kconfig"
+source
"drivers/crypto/ycc/Kconfig" + endif # CRYPTO_HW diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile index 116de17..a667775 100644 --- a/drivers/crypto/Makefile +++ b/drivers/crypto/Makefile @@ -52,4 +52,5 @@ obj-$(CONFIG_CRYPTO_DEV_ARTPEC6) += axis/ obj-y += xilinx/ obj-y += hisilicon/ obj-$(CONFIG_CRYPTO_DEV_AMLOGIC_GXL) += amlogic/ +obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc/ obj-y += keembay/ diff --git a/drivers/crypto/ycc/Kconfig b/drivers/crypto/ycc/Kconfig new file mode 100644 index 00000000..6e88ecb --- /dev/null +++ b/drivers/crypto/ycc/Kconfig @@ -0,0 +1,8 @@ +config CRYPTO_DEV_YCC + tristate "Support for Alibaba YCC cryptographic accelerator" + depends on CRYPTO && CRYPTO_HW && PCI + default n + help + Enables the driver for the on-chip cryptographic accelerator of + Alibaba Yitian SoCs which is based on ARMv9 architecture. + If unsure say N. diff --git a/drivers/crypto/ycc/Makefile b/drivers/crypto/ycc/Makefile new file mode 100644 index 00000000..ef28b7c --- /dev/null +++ b/drivers/crypto/ycc/Makefile @@ -0,0 +1,3 @@ +# SPDX-License-Identifier: GPL-2.0 +obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc.o +ycc-objs := ycc_drv.o ycc_isr.o diff --git a/drivers/crypto/ycc/ycc_dev.h b/drivers/crypto/ycc/ycc_dev.h new file mode 100644 index 00000000..427046e --- /dev/null +++ b/drivers/crypto/ycc/ycc_dev.h @@ -0,0 +1,148 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __YCC_DEV_H +#define __YCC_DEV_H +#include +#include + +#define YCC_MAX_DEBUGFS_NAME 20 + +#define PCI_VENDOR_ID_YCC 0x1DED +#define PCI_DEVICE_ID_RCEC 0x8003 +#define PCI_DEVICE_ID_RCIEP 0x8001 + +#define YCC_RINGPAIR_NUM 48 +#define YCC_IRQS (YCC_RINGPAIR_NUM + 1) + +#define RING_STOP_BIT BIT(15) +#define RING_CFG_RING_SZ GENMASK(2, 0) +#define RING_CFG_INT_TH GENMASK(15, 8) +#define RING_ERR_AXI BIT(0) +#define RING_PENDING_CNT GENMASK(9, 0) + +#define YCC_SEC_CFG_BAR 0 +#define YCC_NSEC_CFG_BAR 1 +#define YCC_SEC_Q_BAR 2 +#define YCC_NSEC_Q_BAR 3 + +/* YCC secure configuration register offset */ 
+#define REG_YCC_CTL 0x18 +#define REG_YCC_GO 0x50 +#define REG_YCC_HCLK_INT_STATUS 0x54 +#define REG_YCC_XCLK_INT_STATUS 0x58 +#define REG_YCC_XCLK_MEM_ECC_EN_0 0x5c +#define REG_YCC_XCLK_MEM_ECC_EN_1 0x60 +#define REG_YCC_XCLK_MEM_ECC_COR_0 0x74 +#define REG_YCC_XCLK_MEM_ECC_COR_1 0x78 +#define REG_YCC_XCLK_MEM_ECC_UNCOR_0 0x80 +#define REG_YCC_XCLK_MEM_ECC_UNCOR_1 0x84 +#define REG_YCC_HCLK_MEM_ECC_EN 0x88 +#define REG_YCC_HCLK_MEM_ECC_COR 0x94 +#define REG_YCC_HCLK_MEM_ECC_UNCOR 0x98 + +#define REG_YCC_DEV_INT_MASK 0xA4 +#define REG_YCC_HCLK_INT_MASK 0xE4 +#define REG_YCC_XCLK_INT_MASK 0xE8 + +/* ring register offset */ +#define REG_RING_CMD_BASE_ADDR_LO 0x00 +#define REG_RING_CMD_BASE_ADDR_HI 0x04 +#define REG_RING_CMD_WR_PTR 0x08 +#define REG_RING_CMD_RD_PTR 0x0C +#define REG_RING_RSP_BASE_ADDR_LO 0x10 +#define REG_RING_RSP_BASE_ADDR_HI 0x14 +#define REG_RING_RSP_WR_PTR 0x18 +#define REG_RING_RSP_RD_PTR 0x1C +#define REG_RING_CFG 0x20 +#define REG_RING_TO_TH 0x24 +#define REG_RING_STATUS 0x28 +#define REG_RING_PENDING_CMD 0x2C +#define REG_RING_RSP_WR_SHADOWN_PTR 0x30 +#define REG_RING_RSP_AFULL_TH 0x34 + +#define YCC_HCLK_AHB_ERR BIT(0) +#define YCC_HCLK_SHIELD_ERR BIT(1) +#define YCC_HCLK_TRNG_ERR BIT(2) +#define YCC_HCLK_EFUSE_ERR BIT(3) +#define YCC_HCLK_INIT_ERR GENMASK(30, 16) +#define YCC_HCLK_CB_TRNG_ERR BIT(31) + +#define YCC_CTRL_IRAM_EN BIT(1) +#define YCC_CTRL_SEC_EN BIT(3) + +#define YCC_GO_PWRON BIT(0) +#define YCC_GO_ENABLED BIT(1) + +#define PCI_EXR_DEVCTL_TRP BIT(21) +#define PCI_EXP_DEVCTL_FLREN BIT(15) + +#define YDEV_STATUS_BIND 0 +#define YDEV_STATUS_INIT 1 +#define YDEV_STATUS_RESET 2 +#define YDEV_STATUS_READY 3 +#define YDEV_STATUS_ERR 4 +#define YDEV_STATUS_SRIOV 5 + +struct ycc_bar { + void __iomem *vaddr; + resource_size_t paddr; + resource_size_t size; +}; + +enum ycc_dev_type { + YCC_RCIEP, + YCC_RCEC, +}; + +struct ycc_dev { + struct list_head list; + struct pci_dev *pdev; + + u32 type; + int id; + int node; + const char 
*dev_name;
+	struct ycc_bar ycc_bars[4];
+	struct ycc_dev *assoc_dev;
+
+	bool is_polling;
+	unsigned long status;
+	struct workqueue_struct *dev_err_q;
+	char err_irq_name[32];
+	struct work_struct work;
+	char *msi_name[48];
+	struct dentry *debug_dir;
+	atomic_t refcnt;
+
+	bool sec;
+	bool is_vf;
+	bool enable_vf;
+};
+
+#define YCC_CSR_WR(csr_base, csr_offset, val)	\
+	__raw_writel(val, csr_base + csr_offset)
+#define YCC_CSR_RD(csr_base, csr_offset)	\
+	__raw_readl(csr_base + csr_offset)
+
+static inline void ycc_dev_get(struct ycc_dev *ydev)
+{
+	atomic_inc(&ydev->refcnt);
+}
+
+static inline void ycc_dev_put(struct ycc_dev *ydev)
+{
+	atomic_dec(&ydev->refcnt);
+}
+
+static inline void ycc_g_err_mask(void __iomem *vaddr)
+{
+	/* This masks all error interrupts */
+	YCC_CSR_WR(vaddr, REG_YCC_DEV_INT_MASK, (u32)~0);
+}
+
+static inline void ycc_g_err_unmask(void __iomem *vaddr)
+{
+	/* This unmasks all error interrupts */
+	YCC_CSR_WR(vaddr, REG_YCC_DEV_INT_MASK, 0);
+}
+
+#endif
diff --git a/drivers/crypto/ycc/ycc_drv.c b/drivers/crypto/ycc/ycc_drv.c
new file mode 100644
index 00000000..4467dcd
--- /dev/null
+++ b/drivers/crypto/ycc/ycc_drv.c
@@ -0,0 +1,444 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * YCC: Drivers for Alibaba YCC (Yitian Cryptography Complex) cryptographic
+ * accelerator. Enables the on-chip cryptographic accelerator of Alibaba
+ * Yitian SoCs, which are based on the ARMv9 architecture.
+ *
+ * Copyright (C) 2020-2022 Alibaba Corporation. All rights reserved.
+ * Author: Zelin Deng <zelin.deng@linux.alibaba.com>
+ * Author: Guanjun <guanjun@linux.alibaba.com>
+ */
+
+#define pr_fmt(fmt) "YCC: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "ycc_isr.h"
+
+static const char ycc_name[] = "ycc";
+
+static bool is_polling = true;
+module_param(is_polling, bool, 0644);
+
+LIST_HEAD(ycc_table);
+static DEFINE_MUTEX(ycc_mutex);
+
+/*
+ * Each ycc device (RCIEP or RCEC) supports up to 48 VFs when SR-IOV is
+ * enabled, so each socket has 98 devices, including 2 PFs and 96 VFs.
+ */
+#define YCC_MAX_DEVICES		(98 * 4) /* Assume 4 sockets */
+static DEFINE_IDR(ycc_idr);
+
+static int ycc_device_flr(struct pci_dev *pdev, struct pci_dev *rcec_pdev)
+{
+	int ret;
+
+	/*
+	 * NOTE: When the RCIEP gets FLR, its associated RCEC gets reset as
+	 * well. It does not make sense that resetting an individual PCIe
+	 * device should impact others. Until this is fixed on the silicon
+	 * side, work around it to do FLR properly -- save both PCI states
+	 * and restore them later.
+	 */
+	ret = pci_save_state(pdev);
+	if (ret) {
+		pr_err("Failed to save pci state\n");
+		return ret;
+	}
+
+	ret = pci_save_state(rcec_pdev);
+	if (ret) {
+		pr_err("Failed to save RCEC pci state\n");
+		return ret;
+	}
+
+	pcie_reset_flr(pdev, 0);
+	pcie_reset_flr(rcec_pdev, 0);
+
+	pci_restore_state(pdev);
+	pci_restore_state(rcec_pdev);
+
+	return 0;
+}
+
+static int ycc_resource_setup(struct ycc_dev *ydev)
+{
+	struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev;
+	struct pci_dev *pdev = ydev->pdev;
+	struct ycc_bar *abar, *cfg_bar;
+	u32 hclk_status;
+	int ret;
+
+	ret = ycc_device_flr(pdev, rcec_pdev);
+	if (ret)
+		return ret;
+
+	ret = pci_request_regions(pdev, ydev->dev_name);
+	if (ret) {
+		pr_err("Failed to request RCIEP mem regions\n");
+		return ret;
+	}
+
+	ret = -EIO;
+	cfg_bar = &ydev->ycc_bars[YCC_SEC_CFG_BAR];
+	cfg_bar->paddr = pci_resource_start(pdev, YCC_SEC_CFG_BAR);
+	cfg_bar->size = pci_resource_len(pdev, YCC_SEC_CFG_BAR);
+	cfg_bar->vaddr = ioremap(cfg_bar->paddr, cfg_bar->size);
+	if (!cfg_bar->vaddr) {
+		pr_err("Failed to ioremap RCIEP cfg bar\n");
+		goto release_mem_regions;
+	}
+
+	ycc_g_err_mask(cfg_bar->vaddr);
+
+	YCC_CSR_WR(cfg_bar->vaddr, REG_YCC_CTL, 0 | YCC_CTRL_IRAM_EN);
+	YCC_CSR_WR(cfg_bar->vaddr, REG_YCC_GO, 0 | YCC_GO_PWRON);
+
+	/* Wait for ycc firmware to be ready; 1000ms is recommended by the HW designers */
+	mdelay(1000);
+	if (!(YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_GO) & YCC_GO_ENABLED)) {
+		pr_err("Failed
to set ycc enabled\n"); + goto iounmap_cfg_bar; + } + + /* Check HCLK status register, some error may happen at PWRON stage */ + hclk_status = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_HCLK_INT_STATUS); + if (hclk_status & YCC_HCLK_INIT_ERR) { + pr_err("Error happened when ycc was initializing\n"); + goto iounmap_cfg_bar; + } + + abar = &ydev->ycc_bars[YCC_NSEC_Q_BAR]; + abar->paddr = pci_resource_start(pdev, YCC_NSEC_Q_BAR); + abar->size = pci_resource_len(pdev, YCC_NSEC_Q_BAR); + abar->vaddr = pci_iomap(pdev, YCC_NSEC_Q_BAR, abar->size); + if (!abar->vaddr) { + pr_err("Failed to ioremap RCIEP queue bar\n"); + goto iounmap_cfg_bar; + } + + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)); + if (ret < 0) { + pr_info("Failed to set DMA bit mask 64, try 32\n"); + ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); + if (ret < 0) + goto iounmap_queue_bar; + } + + ret = ycc_enable_msix(ydev); + if (ret <= 0) { + pr_err("Failed to enable msix, ret: %d\n", ret); + ret = (ret == 0) ? 
-EINVAL : ret;
+		goto iounmap_queue_bar;
+	}
+
+	ret = ycc_init_global_err(ydev);
+	if (ret) {
+		pr_err("Failed to enable global err\n");
+		goto disable_msix;
+	}
+
+	ret = ycc_alloc_irqs(ydev);
+	if (ret) {
+		pr_err("Failed to alloc irqs\n");
+		goto deinit_g_err;
+	}
+
+	YCC_CSR_WR(cfg_bar->vaddr, REG_YCC_HCLK_INT_STATUS, ~0);
+	ycc_g_err_unmask(cfg_bar->vaddr);
+	return 0;
+
+deinit_g_err:
+	ycc_deinit_global_err(ydev);
+disable_msix:
+	ycc_disable_msix(ydev);
+iounmap_queue_bar:
+	iounmap(abar->vaddr);
+iounmap_cfg_bar:
+	iounmap(cfg_bar->vaddr);
+release_mem_regions:
+	pci_release_regions(pdev);
+	return ret;
+}
+
+static void ycc_resource_free(struct ycc_dev *ydev)
+{
+	ycc_deinit_global_err(ydev);
+	ycc_free_irqs(ydev);
+	ycc_disable_msix(ydev);
+	iounmap(ydev->ycc_bars[YCC_SEC_CFG_BAR].vaddr);
+	iounmap(ydev->ycc_bars[YCC_NSEC_Q_BAR].vaddr);
+	pci_release_regions(ydev->pdev);
+}
+
+static inline bool ycc_rcec_match(struct pci_dev *pdev0, struct pci_dev *pdev1)
+{
+	return pdev0->bus->number == pdev1->bus->number;
+}
+
+static int ycc_rcec_bind(struct ycc_dev *ydev)
+{
+	struct ycc_dev *assoc_dev, *rciep, *rcec;
+	struct list_head *itr;
+	int ret = 0;
+
+	if (list_empty(&ycc_table))
+		return ret;
+
+	list_for_each(itr, &ycc_table) {
+		assoc_dev = list_entry(itr, struct ycc_dev, list);
+		/* not on the same PCI bus */
+		if (!ycc_rcec_match(ydev->pdev, assoc_dev->pdev))
+			continue;
+
+		/* if SR-IOV is enabled, it could be the same device */
+		if (ydev == assoc_dev)
+			continue;
+
+		/* if SR-IOV is enabled, other VFs may be found */
+		if (ydev->type == assoc_dev->type)
+			continue;
+
+		/* already bound */
+		if (test_bit(YDEV_STATUS_BIND, &assoc_dev->status))
+			continue;
+
+		/* the associated device has enabled SR-IOV */
+		if (test_bit(YDEV_STATUS_SRIOV, &assoc_dev->status))
+			break;
+
+		ydev->assoc_dev = assoc_dev;
+		assoc_dev->assoc_dev = ydev;
+		rciep = (ydev->type == YCC_RCIEP) ?
ydev : ydev->assoc_dev; + rcec = rciep->assoc_dev; + + ret = sysfs_create_link(&rcec->pdev->dev.kobj, + &rciep->pdev->dev.kobj, "ycc_rciep"); + if (ret) + goto out; + + ret = sysfs_create_link(&rciep->pdev->dev.kobj, + &rcec->pdev->dev.kobj, "ycc_rcec"); + if (ret) + goto remove_rciep_link; + + ret = ycc_resource_setup(rciep); + if (ret) + goto remove_rcec_link; + + set_bit(YDEV_STATUS_READY, &rciep->status); + set_bit(YDEV_STATUS_BIND, &rciep->status); + set_bit(YDEV_STATUS_READY, &rcec->status); + set_bit(YDEV_STATUS_BIND, &rcec->status); + break; + } + + return ret; + +remove_rcec_link: + sysfs_remove_link(&rciep->pdev->dev.kobj, "ycc_rcec"); +remove_rciep_link: + sysfs_remove_link(&rcec->pdev->dev.kobj, "ycc_rciep"); +out: + return ret; +} + +static void ycc_rcec_unbind(struct ycc_dev *ydev) +{ + struct ycc_dev *rciep, *rcec; + + if (!test_bit(YDEV_STATUS_BIND, &ydev->status)) + return; + + rciep = (ydev->type == YCC_RCIEP) ? ydev : ydev->assoc_dev; + rcec = rciep->assoc_dev; + + clear_bit(YDEV_STATUS_READY, &rciep->status); + clear_bit(YDEV_STATUS_READY, &rcec->status); + clear_bit(YDEV_STATUS_BIND, &rciep->status); + clear_bit(YDEV_STATUS_BIND, &rcec->status); + sysfs_remove_link(&rcec->pdev->dev.kobj, "ycc_rciep"); + sysfs_remove_link(&rciep->pdev->dev.kobj, "ycc_rcec"); + ycc_resource_free(rciep); + rciep->assoc_dev = NULL; + rcec->assoc_dev = NULL; +} + +static int ycc_dev_add(struct ycc_dev *ydev) +{ + int ret; + + mutex_lock(&ycc_mutex); + ret = ycc_rcec_bind(ydev); + if (ret) + goto out; + list_add_tail(&ydev->list, &ycc_table); + +out: + mutex_unlock(&ycc_mutex); + return ret; +} + +static void ycc_dev_del(struct ycc_dev *ydev) +{ + mutex_lock(&ycc_mutex); + ycc_rcec_unbind(ydev); + list_del(&ydev->list); + mutex_unlock(&ycc_mutex); +} + +static inline int ycc_rciep_init(struct ycc_dev *ydev) +{ + struct pci_dev *pdev = ydev->pdev; + char name[YCC_MAX_DEBUGFS_NAME + 1]; + int idr; + + ydev->sec = false; + ydev->dev_name = ycc_name; + ydev->is_polling = 
is_polling; + + idr = idr_alloc(&ycc_idr, ydev, 0, YCC_MAX_DEVICES, GFP_KERNEL); + if (idr < 0) { + pr_err("Failed to allocate idr for RCIEP device\n"); + return idr; + } + + ydev->id = idr; + + snprintf(name, YCC_MAX_DEBUGFS_NAME, "ycc_%02x:%02d.%02d", + pdev->bus->number, PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn)); + ydev->debug_dir = debugfs_create_dir(name, NULL); + /* If failed to create debugfs, driver can still work */ + if (IS_ERR_OR_NULL(ydev->debug_dir)) { + pr_warn("Failed to create debugfs for RCIEP device\n"); + ydev->debug_dir = NULL; + } + + return 0; +} + +static int ycc_drv_probe(struct pci_dev *pdev, const struct pci_device_id *id) +{ + struct ycc_dev *ydev; + struct device *dev = &pdev->dev; + int node = dev_to_node(dev); + int ret = -ENOMEM; + + ydev = kzalloc_node(sizeof(struct ycc_dev), GFP_KERNEL, node); + if (!ydev) + return ret; + + ret = pci_enable_device(pdev); + if (ret) { + pr_err("Failed to enable pci device\n"); + goto free_ydev; + } + pci_set_master(pdev); + pci_set_drvdata(pdev, ydev); + + ydev->pdev = pdev; + ydev->is_vf = false; + ydev->enable_vf = false; + ydev->node = node; + if (id->device == PCI_DEVICE_ID_RCIEP) { + ydev->type = YCC_RCIEP; + ret = ycc_rciep_init(ydev); + if (ret) + goto disable_ydev; + } else { + ydev->type = YCC_RCEC; + } + + ret = ycc_dev_add(ydev); + if (ret) + goto remove_debugfs; + + return ret; + +remove_debugfs: + if (ydev->type == YCC_RCIEP) { + debugfs_remove_recursive(ydev->debug_dir); + idr_remove(&ycc_idr, ydev->id); + } +disable_ydev: + pci_disable_device(pdev); +free_ydev: + pr_err("Failed to probe %s\n", ydev->type == YCC_RCIEP ? 
"RCIEP" : "RCEC"); + kfree(ydev); + return ret; +} + +static void ycc_drv_remove(struct pci_dev *pdev) +{ + struct ycc_dev *ydev = pci_get_drvdata(pdev); + + ycc_dev_del(ydev); + if (ydev->type == YCC_RCIEP) { + debugfs_remove_recursive(ydev->debug_dir); + idr_remove(&ycc_idr, ydev->id); + } + + pci_disable_sriov(pdev); + pci_disable_device(pdev); + kfree(ydev); +} + +/* + * SRIOV is not supported now. + */ +static int ycc_drv_sriov_configure(struct pci_dev *pdev, int numvfs) +{ + return -EFAULT; +} + +static const struct pci_device_id ycc_id_table[] = { + { PCI_DEVICE(PCI_VENDOR_ID_YCC, PCI_DEVICE_ID_RCIEP) }, + { PCI_DEVICE(PCI_VENDOR_ID_YCC, PCI_DEVICE_ID_RCEC) }, + { 0, } +}; +MODULE_DEVICE_TABLE(pci, ycc_id_table); + +static struct pci_driver ycc_driver = { + .name = "ycc", + .id_table = ycc_id_table, + .probe = ycc_drv_probe, + .remove = ycc_drv_remove, + .sriov_configure = ycc_drv_sriov_configure, +}; + +static int __init ycc_drv_init(void) +{ + int ret; + + ret = pci_register_driver(&ycc_driver); + if (ret) + goto out; + + return 0; + +out: + return ret; +} + +static void __exit ycc_drv_exit(void) +{ + pci_unregister_driver(&ycc_driver); +} + +module_init(ycc_drv_init); +module_exit(ycc_drv_exit); +MODULE_AUTHOR("Zelin Deng "); +MODULE_AUTHOR("Guanjun "); +MODULE_LICENSE("GPL v2"); +MODULE_DESCRIPTION("Driver for Alibaba YCC cryptographic accelerator"); diff --git a/drivers/crypto/ycc/ycc_isr.c b/drivers/crypto/ycc/ycc_isr.c new file mode 100644 index 00000000..f2f751c --- /dev/null +++ b/drivers/crypto/ycc/ycc_isr.c @@ -0,0 +1,117 @@ +// SPDX-License-Identifier: GPL-2.0 + +#define pr_fmt(fmt) "YCC: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ycc_isr.h" + +/* + * TODO: will implement when ycc ring actually work. 
+ */
+static void ycc_process_global_err(struct work_struct *work)
+{
+}
+
+static irqreturn_t ycc_g_err_isr(int irq, void *data)
+{
+	struct ycc_dev *ydev = (struct ycc_dev *)data;
+	struct ycc_bar *cfg_bar;
+
+	if (test_and_set_bit(YDEV_STATUS_ERR, &ydev->status))
+		return IRQ_HANDLED;
+
+	/* Mask global errors until they have been processed */
+	cfg_bar = &ydev->ycc_bars[YCC_SEC_CFG_BAR];
+	ycc_g_err_mask(cfg_bar->vaddr);
+
+	clear_bit(YDEV_STATUS_READY, &ydev->status);
+
+	schedule_work(&ydev->work);
+	return IRQ_HANDLED;
+}
+
+int ycc_enable_msix(struct ycc_dev *ydev)
+{
+	struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev;
+
+	/* Disable INTx explicitly */
+	return pci_alloc_irq_vectors(rcec_pdev, YCC_IRQS, YCC_IRQS, PCI_IRQ_MSIX);
+}
+
+void ycc_disable_msix(struct ycc_dev *ydev)
+{
+	struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev;
+
+	pci_free_irq_vectors(rcec_pdev);
+}
+
+static int ycc_setup_global_err_workqueue(struct ycc_dev *ydev)
+{
+	char name[32] = {0};
+
+	snprintf(name, sizeof(name), "ycc_dev_%d_g_errd", ydev->id);
+	INIT_WORK(&ydev->work, ycc_process_global_err);
+
+	/* Allocated, but not used for now */
+	ydev->dev_err_q = alloc_workqueue(name, WQ_UNBOUND, 0);
+	if (!ydev->dev_err_q) {
+		pr_err("Failed to alloc workqueue for dev: %d\n", ydev->id);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void ycc_cleanup_global_err_workqueue(struct ycc_dev *ydev)
+{
+	if (ydev->dev_err_q)
+		destroy_workqueue(ydev->dev_err_q);
+}
+
+/*
+ * TODO: Only request an irq for global err. An irq for each ring
+ * will be requested when the rings actually work.
+ */
+int ycc_alloc_irqs(struct ycc_dev *ydev)
+{
+	struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev;
+	int num = ydev->is_vf ?
1 : YCC_RINGPAIR_NUM;
+	int ret;
+
+	snprintf(ydev->err_irq_name, sizeof(ydev->err_irq_name),
+		 "ycc_dev_%d_global_err", ydev->id);
+	ret = request_irq(pci_irq_vector(rcec_pdev, num),
+			  ycc_g_err_isr, 0, ydev->err_irq_name, ydev);
+	if (ret)
+		pr_err("Failed to alloc global irq interrupt for dev: %d\n", ydev->id);
+
+	return ret;
+}
+
+/*
+ * TODO: Same as the allocation path.
+ */
+void ycc_free_irqs(struct ycc_dev *ydev)
+{
+	struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev;
+	int num = ydev->is_vf ? 1 : YCC_RINGPAIR_NUM;
+
+	free_irq(pci_irq_vector(rcec_pdev, num), ydev);
+}
+
+int ycc_init_global_err(struct ycc_dev *ydev)
+{
+	return ycc_setup_global_err_workqueue(ydev);
+}
+
+void ycc_deinit_global_err(struct ycc_dev *ydev)
+{
+	ycc_cleanup_global_err_workqueue(ydev);
+}
diff --git a/drivers/crypto/ycc/ycc_isr.h b/drivers/crypto/ycc/ycc_isr.h
new file mode 100644
index 00000000..8318a6f
--- /dev/null
+++ b/drivers/crypto/ycc/ycc_isr.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __YCC_ISR_H
+#define __YCC_ISR_H
+
+#include "ycc_dev.h"
+
+int ycc_enable_msix(struct ycc_dev *ydev);
+void ycc_disable_msix(struct ycc_dev *ydev);
+int ycc_alloc_irqs(struct ycc_dev *ydev);
+void ycc_free_irqs(struct ycc_dev *ydev);
+int ycc_init_global_err(struct ycc_dev *ydev);
+void ycc_deinit_global_err(struct ycc_dev *ydev);
+#endif

From patchwork Thu Nov 3 07:40:36 2022
X-Patchwork-Submitter: guanjun
X-Patchwork-Id: 14711
From: 'Guanjun'
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com, guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com
Subject: [PATCH v3 RESEND 2/9] crypto/ycc: Add ycc ring configuration
Date: Thu, 3 Nov 2022 15:40:36 +0800
Message-Id: <1667461243-48652-3-git-send-email-guanjun@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To:
<1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>
References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>

From: Zelin Deng <zelin.deng@linux.alibaba.com>

There are two types of functional rings: kernel rings and user rings. All kernel rings must be initialized in the kernel driver, while user rings are not supported yet.

Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
---
 drivers/crypto/ycc/Makefile   |   2 +-
 drivers/crypto/ycc/ycc_dev.h  |   6 +
 drivers/crypto/ycc/ycc_drv.c  |  59 ++++-
 drivers/crypto/ycc/ycc_isr.c  |   7 +
 drivers/crypto/ycc/ycc_isr.h  |   1 +
 drivers/crypto/ycc/ycc_ring.c | 559 ++++++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ycc/ycc_ring.h | 111 +++++++++
 7 files changed, 743 insertions(+), 2 deletions(-)
 create mode 100644 drivers/crypto/ycc/ycc_ring.c
 create mode 100644 drivers/crypto/ycc/ycc_ring.h

diff --git a/drivers/crypto/ycc/Makefile b/drivers/crypto/ycc/Makefile
index ef28b7c..31aae9c 100644
--- a/drivers/crypto/ycc/Makefile
+++ b/drivers/crypto/ycc/Makefile
@@ -1,3 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc.o
-ycc-objs := ycc_drv.o ycc_isr.o
+ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o
diff --git a/drivers/crypto/ycc/ycc_dev.h b/drivers/crypto/ycc/ycc_dev.h
index 427046e..456a53922 100644
--- a/drivers/crypto/ycc/ycc_dev.h
+++ b/drivers/crypto/ycc/ycc_dev.h
@@ -104,10 +104,16 @@ struct ycc_dev {
 	struct ycc_bar ycc_bars[4];
 	struct ycc_dev *assoc_dev;
 
+	int max_desc;
+	int user_rings;
 	bool is_polling;
 	unsigned long status;
 	struct workqueue_struct *dev_err_q;
char err_irq_name[32]; + + struct ycc_ring *rings; + unsigned long ring_bitmap; + struct work_struct work; char *msi_name[48]; struct dentry *debug_dir; diff --git a/drivers/crypto/ycc/ycc_drv.c b/drivers/crypto/ycc/ycc_drv.c index 4467dcd..4eccd1f3 100644 --- a/drivers/crypto/ycc/ycc_drv.c +++ b/drivers/crypto/ycc/ycc_drv.c @@ -24,11 +24,16 @@ #include #include "ycc_isr.h" +#include "ycc_ring.h" static const char ycc_name[] = "ycc"; +static int max_desc = 256; +static int user_rings; static bool is_polling = true; +module_param(max_desc, int, 0644); module_param(is_polling, bool, 0644); +module_param(user_rings, int, 0644); LIST_HEAD(ycc_table); static DEFINE_MUTEX(ycc_mutex); @@ -41,6 +46,35 @@ #define YCC_MAX_DEVICES (98 * 4) /* Assume 4 sockets */ static DEFINE_IDR(ycc_idr); +static int ycc_dev_debugfs_statistics_show(struct seq_file *s, void *p) +{ + struct ycc_dev *ydev = (struct ycc_dev *)s->private; + struct ycc_ring *ring; + int i; + + seq_printf(s, "name, type, nr_binds, nr_cmds, nr_resps\n"); + for (i = 0; i < YCC_RINGPAIR_NUM; i++) { + ring = ydev->rings + i; + seq_printf(s, "Ring%02d, %d, %llu, %llu, %llu\n", ring->ring_id, + ring->type, ring->nr_binds, ring->nr_cmds, ring->nr_resps); + } + + return 0; +} + +static int ycc_dev_debugfs_statistics_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, ycc_dev_debugfs_statistics_show, inode->i_private); +} + +static const struct file_operations ycc_dev_statistics_fops = { + .open = ycc_dev_debugfs_statistics_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, + .owner = THIS_MODULE, +}; + static int ycc_device_flr(struct pci_dev *pdev, struct pci_dev *rcec_pdev) { int ret; @@ -136,11 +170,21 @@ static int ycc_resource_setup(struct ycc_dev *ydev) goto iounmap_queue_bar; } + /* User ring is not supported temporarily */ + ydev->user_rings = 0; + user_rings = 0; + + ret = ycc_dev_rings_init(ydev, max_desc, user_rings); + if (ret) { + pr_err("Failed to init ycc 
rings\n"); + goto iounmap_queue_bar; + } + ret = ycc_enable_msix(ydev); if (ret <= 0) { pr_err("Failed to enable msix, ret: %d\n", ret); ret = (ret == 0) ? -EINVAL : ret; - goto iounmap_queue_bar; + goto release_rings; } ret = ycc_init_global_err(ydev); @@ -163,12 +207,15 @@ static int ycc_resource_setup(struct ycc_dev *ydev) ycc_deinit_global_err(ydev); disable_msix: ycc_disable_msix(ydev); +release_rings: + ycc_dev_rings_release(ydev, user_rings); iounmap_queue_bar: iounmap(abar->vaddr); iounmap_cfg_bar: iounmap(cfg_bar->vaddr); release_mem_regions: pci_release_regions(pdev); + return ret; } @@ -177,6 +224,7 @@ static void ycc_resource_free(struct ycc_dev *ydev) ycc_deinit_global_err(ydev); ycc_free_irqs(ydev); ycc_disable_msix(ydev); + ycc_dev_rings_release(ydev, ydev->user_rings); iounmap(ydev->ycc_bars[YCC_SEC_CFG_BAR].vaddr); iounmap(ydev->ycc_bars[YCC_NSEC_Q_BAR].vaddr); pci_release_regions(ydev->pdev); @@ -301,12 +349,15 @@ static void ycc_dev_del(struct ycc_dev *ydev) static inline int ycc_rciep_init(struct ycc_dev *ydev) { struct pci_dev *pdev = ydev->pdev; + struct dentry *debugfs; char name[YCC_MAX_DEBUGFS_NAME + 1]; int idr; ydev->sec = false; ydev->dev_name = ycc_name; ydev->is_polling = is_polling; + ydev->user_rings = user_rings; + ydev->max_desc = max_desc; idr = idr_alloc(&ycc_idr, ydev, 0, YCC_MAX_DEVICES, GFP_KERNEL); if (idr < 0) { @@ -323,6 +374,11 @@ static inline int ycc_rciep_init(struct ycc_dev *ydev) if (IS_ERR_OR_NULL(ydev->debug_dir)) { pr_warn("Failed to create debugfs for RCIEP device\n"); ydev->debug_dir = NULL; + } else { + debugfs = debugfs_create_file("statistics", 0400, ydev->debug_dir, + (void *)ydev, &ycc_dev_statistics_fops); + if (IS_ERR_OR_NULL(debugfs)) + pr_warn("Failed to create statistics entry for RCIEP device\n"); } return 0; @@ -351,6 +407,7 @@ static int ycc_drv_probe(struct pci_dev *pdev, const struct pci_device_id *id) ydev->is_vf = false; ydev->enable_vf = false; ydev->node = node; + ydev->ring_bitmap = 0; if 
(id->device == PCI_DEVICE_ID_RCIEP) { ydev->type = YCC_RCIEP; ret = ycc_rciep_init(ydev); diff --git a/drivers/crypto/ycc/ycc_isr.c b/drivers/crypto/ycc/ycc_isr.c index f2f751c..cd2a2d7 100644 --- a/drivers/crypto/ycc/ycc_isr.c +++ b/drivers/crypto/ycc/ycc_isr.c @@ -38,6 +38,13 @@ static irqreturn_t ycc_g_err_isr(int irq, void *data) return IRQ_HANDLED; } +/* + * TODO: will implement when ycc ring actually work. + */ +void ycc_resp_process(uintptr_t ring_addr) +{ +} + int ycc_enable_msix(struct ycc_dev *ydev) { struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev; diff --git a/drivers/crypto/ycc/ycc_isr.h b/drivers/crypto/ycc/ycc_isr.h index 8318a6f..5a25ac7 100644 --- a/drivers/crypto/ycc/ycc_isr.h +++ b/drivers/crypto/ycc/ycc_isr.h @@ -3,6 +3,7 @@ #include "ycc_dev.h" +void ycc_resp_process(uintptr_t ring_addr); int ycc_enable_msix(struct ycc_dev *ydev); void ycc_disable_msix(struct ycc_dev *ydev); int ycc_alloc_irqs(struct ycc_dev *ydev); diff --git a/drivers/crypto/ycc/ycc_ring.c b/drivers/crypto/ycc/ycc_ring.c new file mode 100644 index 00000000..ea6877e --- /dev/null +++ b/drivers/crypto/ycc/ycc_ring.c @@ -0,0 +1,559 @@ +// SPDX-License-Identifier: GPL-2.0 + +#define pr_fmt(fmt) "YCC: Ring: " fmt + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "ycc_dev.h" +#include "ycc_ring.h" +#include "ycc_isr.h" + +#define YCC_CMD_DESC_SIZE 64 +#define YCC_RESP_DESC_SIZE 16 +#define YCC_RING_CSR_STRIDE 0x1000 + +extern struct list_head ycc_table; + +static struct rb_root ring_rbtree = RB_ROOT; +static DEFINE_SPINLOCK(ring_rbtree_lock); + +/* + * Show the status of specified ring's command queue and + * response queue. 
+ */ +static int ycc_ring_debugfs_status_show(struct seq_file *s, void *p) +{ + struct ycc_ring *ring = (struct ycc_ring *)s->private; + + seq_printf(s, "Ring ID: %d\n", ring->ring_id); + seq_printf(s, "Descriptor Entry Size: %d, CMD Descriptor Size: %d, RESP Descriptor Size: %d\n", + ring->max_desc, YCC_CMD_DESC_SIZE, YCC_RESP_DESC_SIZE); + seq_printf(s, "CMD base addr:%llx, RESP base addr:%llx\n", + ring->cmd_base_paddr, ring->resp_base_paddr); + seq_printf(s, "CMD wr ptr:%d, CMD rd ptr: %d\n", + YCC_CSR_RD(ring->csr_vaddr, REG_RING_CMD_WR_PTR), + YCC_CSR_RD(ring->csr_vaddr, REG_RING_CMD_RD_PTR)); + seq_printf(s, "RESP rd ptr:%d, RESP wr ptr: %d\n", + YCC_CSR_RD(ring->csr_vaddr, REG_RING_RSP_RD_PTR), + YCC_CSR_RD(ring->csr_vaddr, REG_RING_RSP_WR_PTR)); + + return 0; +} + +static int ycc_ring_debugfs_status_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, ycc_ring_debugfs_status_show, inode->i_private); +} + +static const struct file_operations ycc_ring_status_fops = { + .open = ycc_ring_debugfs_status_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, + .owner = THIS_MODULE, +}; + +/* + * Dump the raw content of the specified ring's command queue and + * response queue.
+ */ +static int ycc_ring_debugfs_dump_show(struct seq_file *s, void *p) +{ + struct ycc_ring *ring = (struct ycc_ring *)s->private; + + seq_printf(s, "Ring ID: %d\n", ring->ring_id); + seq_puts(s, "-------- Ring CMD Descriptors --------\n"); + seq_hex_dump(s, "", DUMP_PREFIX_ADDRESS, 32, 4, ring->cmd_base_vaddr, + YCC_CMD_DESC_SIZE * ring->max_desc, false); + seq_puts(s, "-------- Ring RESP Descriptors --------\n"); + seq_hex_dump(s, "", DUMP_PREFIX_ADDRESS, 32, 4, ring->resp_base_vaddr, + YCC_RESP_DESC_SIZE * ring->max_desc, false); + + return 0; +} + +static int ycc_ring_debugfs_dump_open(struct inode *inode, struct file *filp) +{ + return single_open(filp, ycc_ring_debugfs_dump_show, inode->i_private); +} + +static const struct file_operations ycc_ring_dump_fops = { + .open = ycc_ring_debugfs_dump_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, + .owner = THIS_MODULE, +}; + +/* + * Create debugfs entries for rings, only for KERN_RING: + * "/sys/kernel/debug/ycc_b:d.f/ring${x}" + */ +static int ycc_create_ring_debugfs(struct ycc_ring *ring) +{ + struct dentry *debugfs; + char name[8]; + + if (!ring || !ring->ydev || !ring->ydev->debug_dir) + return -EINVAL; + + snprintf(name, sizeof(name), "ring%02d", ring->ring_id); + debugfs = debugfs_create_dir(name, ring->ydev->debug_dir); + if (IS_ERR_OR_NULL(debugfs)) + goto out; + + ring->debug_dir = debugfs; + + debugfs = debugfs_create_file("status", 0400, ring->debug_dir, + (void *)ring, &ycc_ring_status_fops); + if (IS_ERR_OR_NULL(debugfs)) + goto remove_debugfs; + + debugfs = debugfs_create_file("dump", 0400, ring->debug_dir, + (void *)ring, &ycc_ring_dump_fops); + if (IS_ERR_OR_NULL(debugfs)) + goto remove_debugfs; + + return 0; + +remove_debugfs: + debugfs_remove_recursive(ring->debug_dir); +out: + ring->debug_dir = NULL; + return PTR_ERR(debugfs); +} + +static void ycc_remove_ring_debugfs(struct ycc_ring *ring) +{ + debugfs_remove_recursive(ring->debug_dir); +} + +/* + * Allocate memory for
rings and initialize basic fields. + */ +static int ycc_alloc_rings(struct ycc_dev *ydev) +{ + int num = YCC_RINGPAIR_NUM; + struct ycc_bar *abar; + u32 i; + + if (ydev->rings) + return 0; + + if (ydev->is_vf) { + num = 1; + abar = &ydev->ycc_bars[0]; + } else if (ydev->sec) { + abar = &ydev->ycc_bars[YCC_SEC_Q_BAR]; + } else { + abar = &ydev->ycc_bars[YCC_NSEC_Q_BAR]; + } + + ydev->rings = kzalloc_node(num * sizeof(struct ycc_ring), + GFP_KERNEL, ydev->node); + if (!ydev->rings) + return -ENOMEM; + + for (i = 0; i < num; i++) { + ydev->rings[i].ring_id = i; + ydev->rings[i].ydev = ydev; + ydev->rings[i].csr_vaddr = abar->vaddr + i * YCC_RING_CSR_STRIDE; + ydev->rings[i].csr_paddr = abar->paddr + i * YCC_RING_CSR_STRIDE; + } + + return 0; +} + +/* + * Free memory for rings. + */ +static void ycc_free_rings(struct ycc_dev *ydev) +{ + kfree(ydev->rings); + ydev->rings = NULL; +} + +/* + * Initialize the ring and create its command queue and response queue. + */ +static int ycc_init_ring(struct ycc_ring *ring, u32 max_desc) +{ + struct ycc_dev *ydev = ring->ydev; + u32 cmd_ring_size, resp_ring_size; + + ring->type = KERN_RING; + ring->max_desc = max_desc; + + cmd_ring_size = ring->max_desc * YCC_CMD_DESC_SIZE; + resp_ring_size = ring->max_desc * YCC_RESP_DESC_SIZE; + + ring->cmd_base_vaddr = dma_alloc_coherent(&ydev->pdev->dev, + cmd_ring_size, + &ring->cmd_base_paddr, + GFP_KERNEL); + if (!ring->cmd_base_vaddr) { + pr_err("Failed to alloc cmd dma memory\n"); + return -ENOMEM; + } + memset(ring->cmd_base_vaddr, CMD_INVALID_CONTENT_U8, cmd_ring_size); + + ring->resp_base_vaddr = dma_alloc_coherent(&ydev->pdev->dev, + resp_ring_size, + &ring->resp_base_paddr, + GFP_KERNEL); + if (!ring->resp_base_vaddr) { + pr_err("Failed to alloc resp dma memory\n"); + dma_free_coherent(&ydev->pdev->dev, cmd_ring_size, + ring->cmd_base_vaddr, ring->cmd_base_paddr); + return -ENOMEM; + } + memset(ring->resp_base_vaddr, CMD_INVALID_CONTENT_U8, resp_ring_size); + + YCC_CSR_WR(ring->csr_vaddr,
REG_RING_RSP_AFULL_TH, 0); + YCC_CSR_WR(ring->csr_vaddr, REG_RING_CMD_BASE_ADDR_LO, + (u32)ring->cmd_base_paddr & 0xffffffff); + YCC_CSR_WR(ring->csr_vaddr, REG_RING_CMD_BASE_ADDR_HI, + ((u64)ring->cmd_base_paddr >> 32) & 0xffffffff); + YCC_CSR_WR(ring->csr_vaddr, REG_RING_RSP_BASE_ADDR_LO, + (u32)ring->resp_base_paddr & 0xffffffff); + YCC_CSR_WR(ring->csr_vaddr, REG_RING_RSP_BASE_ADDR_HI, + ((u64)ring->resp_base_paddr >> 32) & 0xffffffff); + + if (ycc_create_ring_debugfs(ring)) + pr_warn("Failed to create debugfs entry for ring: %d\n", ring->ring_id); + + atomic_set(&ring->ref_cnt, 0); + spin_lock_init(&ring->lock); + return 0; +} + +/* + * Release dma memory for command queue and response queue. + */ +static void ycc_release_ring(struct ycc_ring *ring) +{ + u32 ring_size; + + /* This ring should not be in use here */ + WARN_ON(atomic_read(&ring->ref_cnt)); + + if (ring->cmd_base_vaddr) { + ring_size = ring->max_desc * YCC_CMD_DESC_SIZE; + dma_free_coherent(&ring->ydev->pdev->dev, ring_size, + ring->cmd_base_vaddr, + ring->cmd_base_paddr); + ring->cmd_base_vaddr = NULL; + } + if (ring->resp_base_vaddr) { + ring_size = ring->max_desc * YCC_RESP_DESC_SIZE; + dma_free_coherent(&ring->ydev->pdev->dev, ring_size, + ring->resp_base_vaddr, + ring->resp_base_paddr); + ring->resp_base_vaddr = NULL; + } + + ycc_remove_ring_debugfs(ring); + ring->type = FREE_RING; +} + +int ycc_dev_rings_init(struct ycc_dev *ydev, u32 max_desc, int user_rings) +{ + int kern_rings = YCC_RINGPAIR_NUM - user_rings; + struct ycc_ring *ring; + int ret, i; + + ret = ycc_alloc_rings(ydev); + if (ret) { + pr_err("Failed to allocate memory for rings\n"); + return ret; + } + + for (i = 0; i < kern_rings; i++) { + ring = &ydev->rings[i]; + ret = ycc_init_ring(ring, max_desc); + if (ret) + goto free_kern_rings; + + tasklet_init(&ring->resp_handler, ycc_resp_process, (uintptr_t)ring); + } + + return 0; + +free_kern_rings: + while (i--) { + ring = &ydev->rings[i]; + ycc_release_ring(ring); + } + + 
ycc_free_rings(ydev); + return ret; +} + +void ycc_dev_rings_release(struct ycc_dev *ydev, int user_rings) +{ + int kern_rings = YCC_RINGPAIR_NUM - user_rings; + struct ycc_ring *ring; + int i; + + for (i = 0; i < kern_rings; i++) { + ring = &ydev->rings[i]; + + tasklet_kill(&ring->resp_handler); + ycc_release_ring(ring); + } + + ycc_free_rings(ydev); +} + +/* + * Check if the command queue is full. + */ +static inline bool ycc_ring_full(struct ycc_ring *ring) +{ + return ring->cmd_rd_ptr == (ring->cmd_wr_ptr + 1) % ring->max_desc; +} + +/* + * Check if the response queue is empty. + */ +static inline bool ycc_ring_empty(struct ycc_ring *ring) +{ + return ring->resp_rd_ptr == ring->resp_wr_ptr; +} + +#define __rb_node_to_type(a) rb_entry(a, struct ycc_ring, node) + +static inline bool ycc_ring_less(struct rb_node *a, const struct rb_node *b) +{ + return (atomic_read(&__rb_node_to_type(a)->ref_cnt) + < atomic_read(&__rb_node_to_type(b)->ref_cnt)); +} + +static struct ycc_ring *ycc_select_ring(void) +{ + struct ycc_ring *found = NULL; + struct rb_node *rnode; + struct list_head *itr; + struct ycc_dev *ydev; + int idx; + + if (list_empty(&ycc_table)) + return NULL; + + /* + * No need to protect the list with a lock here. Entries are + * only inserted into or removed from the global ycc_table list + * when the driver is probed or removed.
+ */ + list_for_each(itr, &ycc_table) { + ydev = list_entry(itr, struct ycc_dev, list); + + /* RCEC has no rings */ + if (ydev->type != YCC_RCIEP) + continue; + + /* RCIEP is not ready */ + if (!test_bit(YDEV_STATUS_READY, &ydev->status)) + continue; + + do { + idx = find_first_zero_bit(&ydev->ring_bitmap, YCC_RINGPAIR_NUM); + if (idx == YCC_RINGPAIR_NUM) + break; + + found = ydev->rings + idx; + if (found->type != KERN_RING) { + /* The found ring is not for kernel use; mark it and continue */ + set_bit(idx, &ydev->ring_bitmap); + continue; + } + } while (test_and_set_bit(idx, &ydev->ring_bitmap)); + + if (idx < YCC_RINGPAIR_NUM && found) + goto out; + } + + /* + * We didn't find a free ring, which means every ring is + * occupied. Fall back to the slow path. + */ + spin_lock(&ring_rbtree_lock); + rnode = rb_first(&ring_rbtree); + rb_erase(rnode, &ring_rbtree); + spin_unlock(&ring_rbtree_lock); + + found = __rb_node_to_type(rnode); + +out: + ycc_ring_get(found); + spin_lock(&ring_rbtree_lock); + rb_add(&found->node, &ring_rbtree, ycc_ring_less); + spin_unlock(&ring_rbtree_lock); + return found; +} + +/* + * Bind a ring to the crypto layer. + */ +struct ycc_ring *ycc_crypto_get_ring(void) +{ + struct ycc_ring *ring; + + ring = ycc_select_ring(); + if (ring) { + ycc_dev_get(ring->ydev); + ring->nr_binds++; + if (ring->ydev->is_polling && atomic_read(&ring->ref_cnt) == 1) + tasklet_hi_schedule(&ring->resp_handler); + } + + return ring; +} + +void ycc_crypto_free_ring(struct ycc_ring *ring) +{ + struct rb_node *rnode = &ring->node; + + spin_lock(&ring_rbtree_lock); + rb_erase(rnode, &ring_rbtree); + if (atomic_dec_and_test(&ring->ref_cnt)) { + clear_bit(ring->ring_id, &ring->ydev->ring_bitmap); + tasklet_kill(&ring->resp_handler); + } else { + rb_add(rnode, &ring_rbtree, ycc_ring_less); + } + + spin_unlock(&ring_rbtree_lock); + + ycc_dev_put(ring->ydev); +} + +/* + * Submit a command to the ring's command queue.
+ */ +int ycc_enqueue(struct ycc_ring *ring, void *cmd) +{ + int ret = 0; + + if (!ring || !ring->ydev || !cmd) + return -EINVAL; + + spin_lock_bh(&ring->lock); + if (!test_bit(YDEV_STATUS_READY, &ring->ydev->status) || ycc_ring_stopped(ring)) { + pr_debug("Enqueue error, device status: %ld, ring stopped: %d\n", + ring->ydev->status, ycc_ring_stopped(ring)); + + /* Fall back to software */ + ret = -EAGAIN; + goto out; + } + + ring->cmd_rd_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_CMD_RD_PTR); + if (ycc_ring_full(ring)) { + pr_debug("Enqueue error, ring %d is full\n", ring->ring_id); + ret = -EAGAIN; + goto out; + } + + memcpy(ring->cmd_base_vaddr + ring->cmd_wr_ptr * YCC_CMD_DESC_SIZE, cmd, + YCC_CMD_DESC_SIZE); + + /* Ensure the descriptor is fully written before cmd_wr_ptr is updated */ + dma_wmb(); + if (++ring->cmd_wr_ptr == ring->max_desc) + ring->cmd_wr_ptr = 0; + + ring->nr_cmds++; + YCC_CSR_WR(ring->csr_vaddr, REG_RING_CMD_WR_PTR, ring->cmd_wr_ptr); + +out: + spin_unlock_bh(&ring->lock); + return ret; +} + +static inline void ycc_check_cmd_state(u16 state) +{ + switch (state) { + case CMD_SUCCESS: + break; + case CMD_ILLEGAL: + pr_debug("Illegal command\n"); + break; + case CMD_UNDERATTACK: + pr_debug("Attack is detected\n"); + break; + case CMD_INVALID: + pr_debug("Invalid command\n"); + break; + case CMD_ERROR: + pr_debug("Command error\n"); + break; + case CMD_EXCESS: + pr_debug("Excess permission\n"); + break; + case CMD_KEY_ERROR: + pr_debug("Invalid internal key\n"); + break; + case CMD_VERIFY_ERROR: + pr_debug("MAC/tag verification failed\n"); + break; + default: + pr_debug("Unknown error\n"); + break; + } +} + +static void ycc_handle_resp(struct ycc_ring *ring, struct ycc_resp_desc *desc) +{ + struct ycc_flags *aflag; + + dma_rmb(); + + aflag = (struct ycc_flags *)desc->private_ptr; + if (!aflag || (u64)aflag == CMD_INVALID_CONTENT_U64) { + pr_debug("Invalid command aflag\n"); + return; + } + + ycc_check_cmd_state(desc->state); + aflag->ycc_done_callback(aflag->ptr, desc->state);
+ + memset(desc, CMD_INVALID_CONTENT_U8, sizeof(*desc)); + kfree(aflag); +} + +/* + * dequeue, read response descriptor + */ +void ycc_dequeue(struct ycc_ring *ring) +{ + struct ycc_resp_desc *resp; + int cnt = 0; + + if (!test_bit(YDEV_STATUS_READY, &ring->ydev->status) || ycc_ring_stopped(ring)) + return; + + ring->resp_wr_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_RSP_WR_PTR); + while (!ycc_ring_empty(ring)) { + resp = (struct ycc_resp_desc *)ring->resp_base_vaddr + + ring->resp_rd_ptr; + ycc_handle_resp(ring, resp); + + cnt++; + ring->nr_resps++; + if (++ring->resp_rd_ptr == ring->max_desc) + ring->resp_rd_ptr = 0; + } + + if (cnt) + YCC_CSR_WR(ring->csr_vaddr, REG_RING_RSP_RD_PTR, ring->resp_rd_ptr); +} diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h new file mode 100644 index 00000000..eb3e6f9 --- /dev/null +++ b/drivers/crypto/ycc/ycc_ring.h @@ -0,0 +1,111 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __YCC_RING_H +#define __YCC_RING_H + +#include +#include + +#include "ycc_dev.h" + +#define CMD_ILLEGAL 0x15 +#define CMD_UNDERATTACK 0x25 +#define CMD_INVALID 0x35 +#define CMD_ERROR 0x45 +#define CMD_EXCESS 0x55 +#define CMD_KEY_ERROR 0x65 +#define CMD_VERIFY_ERROR 0x85 +#define CMD_SUCCESS 0xa5 +#define CMD_CANCELLED 0xff + +#define CMD_INVALID_CONTENT_U8 0x7f +#define CMD_INVALID_CONTENT_U64 0x7f7f7f7f7f7f7f7fULL + +enum ring_type { + FREE_RING, + USER_RING, + KERN_RING, + INVAL_RING, +}; + +struct ycc_ring { + u16 ring_id; + u32 status; + + struct rb_node node; + atomic_t ref_cnt; + + void __iomem *csr_vaddr; /* config register address */ + resource_size_t csr_paddr; + struct ycc_dev *ydev; /* belongs to which ydev */ + struct dentry *debug_dir; + + u32 max_desc; /* max desc entry numbers */ + u32 irq_th; + spinlock_t lock; /* used to send cmd, protect write ptr */ + enum ring_type type; + + void *cmd_base_vaddr; /* base addr of cmd ring */ + dma_addr_t cmd_base_paddr; + u16 cmd_wr_ptr; /* current cmd write pointer */ + u16 
cmd_rd_ptr; /* current cmd read pointer */ + void *resp_base_vaddr; /* base addr of resp ring */ + dma_addr_t resp_base_paddr; + u16 resp_wr_ptr; /* current resp write pointer */ + u16 resp_rd_ptr; /* current resp read pointer */ + + struct tasklet_struct resp_handler; + + /* for statistics */ + u64 nr_binds; + u64 nr_cmds; + u64 nr_resps; +}; + +struct ycc_flags { + void *ptr; + int (*ycc_done_callback)(void *ptr, u16 state); +}; + +struct ycc_resp_desc { + u64 private_ptr; + u16 state; + u8 reserved[6]; +}; + +union ycc_real_cmd { + /* + * TODO: the real command will be implemented when the + * corresponding algorithm is ready + */ + u8 padding[32]; +}; + +struct ycc_cmd_desc { + union ycc_real_cmd cmd; + u64 private_ptr; + u8 reserved0[16]; + u8 reserved1[8]; +} __packed; + +static inline void ycc_ring_get(struct ycc_ring *ring) +{ + atomic_inc(&ring->ref_cnt); +} + +static inline void ycc_ring_put(struct ycc_ring *ring) +{ + atomic_dec(&ring->ref_cnt); +} + +static inline bool ycc_ring_stopped(struct ycc_ring *ring) +{ + return !!(YCC_CSR_RD(ring->csr_vaddr, REG_RING_CFG) & RING_STOP_BIT); +} + +int ycc_enqueue(struct ycc_ring *ring, void *cmd); +void ycc_dequeue(struct ycc_ring *ring); +struct ycc_ring *ycc_crypto_get_ring(void); +void ycc_crypto_free_ring(struct ycc_ring *ring); +int ycc_dev_rings_init(struct ycc_dev *ydev, u32 max_desc, int user_rings); +void ycc_dev_rings_release(struct ycc_dev *ydev, int user_rings); +#endif From patchwork Thu Nov 3 07:40:37 2022 X-Patchwork-Submitter: guanjun X-Patchwork-Id: 14710
From: 'Guanjun' To: herbert@gondor.apana.org.au, elliott@hpe.com Cc: zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com, guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com Subject: [PATCH v3 RESEND 3/9] crypto/ycc: Add irq support for ycc kernel rings Date: Thu, 3 Nov 2022 15:40:37 +0800 Message-Id: <1667461243-48652-4-git-send-email-guanjun@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To:
<1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com> References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com> From: Zelin Deng Each kernel ring has its own command-done irq. User rings do not enable irqs for now. Signed-off-by: Zelin Deng --- drivers/crypto/ycc/ycc_isr.c | 92 ++++++++++++++++++++++++++++++++++++++------ 1 file changed, 80 insertions(+), 12 deletions(-) diff --git a/drivers/crypto/ycc/ycc_isr.c b/drivers/crypto/ycc/ycc_isr.c index cd2a2d7..a86c8d7 100644 --- a/drivers/crypto/ycc/ycc_isr.c +++ b/drivers/crypto/ycc/ycc_isr.c @@ -12,6 +12,17 @@ #include #include "ycc_isr.h" +#include "ycc_dev.h" +#include "ycc_ring.h" + + +static irqreturn_t ycc_resp_isr(int irq, void *data) +{ + struct ycc_ring *ring = (struct ycc_ring *)data; + + tasklet_hi_schedule(&ring->resp_handler); + return IRQ_HANDLED; +} /* * TODO: will implement when ycc ring actually work. @@ -38,11 +49,13 @@ static irqreturn_t ycc_g_err_isr(int irq, void *data) return IRQ_HANDLED; } -/* - * TODO: will implement when ycc ring actually work. - */ void ycc_resp_process(uintptr_t ring_addr) { + struct ycc_ring *ring = (struct ycc_ring *)ring_addr; + + ycc_dequeue(ring); + if (ring->ydev->is_polling) + tasklet_hi_schedule(&ring->resp_handler); } int ycc_enable_msix(struct ycc_dev *ydev) @@ -83,34 +96,89 @@ static void ycc_cleanup_global_err_workqueue(struct ycc_dev *ydev) destroy_workqueue(ydev->dev_err_q); } -/* - * TODO: Just request irq for global err.
Irq for each ring - will be requested when ring actually work. - */ int ycc_alloc_irqs(struct ycc_dev *ydev) { struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev; int num = ydev->is_vf ? 1 : YCC_RINGPAIR_NUM; - int ret; + int cpu, cpus = num_online_cpus(); + int ret, i, j; + /* Vectors 0 to (YCC_RINGPAIR_NUM - 1) are ring irqs; the last one is the device error irq */ sprintf(ydev->err_irq_name, "ycc_dev_%d_global_err", ydev->id); ret = request_irq(pci_irq_vector(rcec_pdev, num), ycc_g_err_isr, 0, ydev->err_irq_name, ydev); - if (ret) + if (ret) { pr_err("Failed to alloc global irq interrupt for dev: %d\n", ydev->id); + goto out; + } + + if (ydev->is_polling) + goto out; + + for (i = 0; i < num; i++) { + if (ydev->rings[i].type != KERN_RING) + continue; + + ydev->msi_name[i] = kzalloc(16, GFP_KERNEL); + if (!ydev->msi_name[i]) + goto free_irq; + snprintf(ydev->msi_name[i], 16, "ycc_ring_%d", i); + ret = request_irq(pci_irq_vector(rcec_pdev, i), ycc_resp_isr, + 0, ydev->msi_name[i], &ydev->rings[i]); + if (ret) { + kfree(ydev->msi_name[i]); + goto free_irq; + } + if (!ydev->is_vf) + cpu = (i % YCC_RINGPAIR_NUM) % cpus; + else + cpu = smp_processor_id() % cpus; + + ret = irq_set_affinity_hint(pci_irq_vector(rcec_pdev, i), + get_cpu_mask(cpu)); + if (ret) { + free_irq(pci_irq_vector(rcec_pdev, i), &ydev->rings[i]); + kfree(ydev->msi_name[i]); + goto free_irq; + } + } + + return 0; + +free_irq: + for (j = 0; j < i; j++) { + if (ydev->rings[j].type != KERN_RING) + continue; + + free_irq(pci_irq_vector(rcec_pdev, j), &ydev->rings[j]); + kfree(ydev->msi_name[j]); + } + free_irq(pci_irq_vector(rcec_pdev, num), ydev); +out: return ret; } -/* - * Same as the allocate action. - */ void ycc_free_irqs(struct ycc_dev *ydev) { struct pci_dev *rcec_pdev = ydev->assoc_dev->pdev; int num = ydev->is_vf ?
1 : YCC_RINGPAIR_NUM; + int i; + /* Free device err irq */ free_irq(pci_irq_vector(rcec_pdev, num), ydev); + + if (ydev->is_polling) + return; + + for (i = 0; i < num; i++) { + if (ydev->rings[i].type != KERN_RING) + continue; + + irq_set_affinity_hint(pci_irq_vector(rcec_pdev, i), NULL); + free_irq(pci_irq_vector(rcec_pdev, i), &ydev->rings[i]); + kfree(ydev->msi_name[i]); + } } int ycc_init_global_err(struct ycc_dev *ydev) From patchwork Thu Nov 3 07:40:38 2022 X-Patchwork-Submitter: guanjun X-Patchwork-Id: 14712
From: 'Guanjun' To: herbert@gondor.apana.org.au, elliott@hpe.com Cc:
zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com, guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com Subject: [PATCH v3 RESEND 4/9] crypto/ycc: Add device error handling support for ycc hw errors Date: Thu, 3 Nov 2022 15:40:38 +0800 Message-Id: <1667461243-48652-5-git-send-email-guanjun@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com> References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com> From: Zelin Deng Due to YCC hardware limitations, the YCC device cannot be reset from the REE to recover from a fatal error (the reset register is only valid in the TEE, and PCIe FLR only resets the queue pointers, not the YCC hardware itself), so all hardware errors except queue errors are regarded as fatal.
Signed-off-by: Zelin Deng
---
 drivers/crypto/ycc/ycc_isr.c  | 92 +++++++++++++++++++++++++++++++++++++++++--
 drivers/crypto/ycc/ycc_ring.c | 90 ++++++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ycc/ycc_ring.h |  5 +++
 3 files changed, 183 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/ycc/ycc_isr.c b/drivers/crypto/ycc/ycc_isr.c
index a86c8d7..abbe0c4 100644
--- a/drivers/crypto/ycc/ycc_isr.c
+++ b/drivers/crypto/ycc/ycc_isr.c
@@ -15,7 +15,6 @@
 #include "ycc_dev.h"
 #include "ycc_ring.h"
-
 
 static irqreturn_t ycc_resp_isr(int irq, void *data)
 {
 	struct ycc_ring *ring = (struct ycc_ring *)data;
@@ -24,11 +23,93 @@ static irqreturn_t ycc_resp_isr(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
-/*
- * TODO: will implement when ycc ring actually work.
- */
+static void ycc_fatal_error(struct ycc_dev *ydev)
+{
+	struct ycc_ring *ring;
+	int i;
+
+	for (i = 0; i < YCC_RINGPAIR_NUM; i++) {
+		ring = ydev->rings + i;
+
+		if (ring->type != KERN_RING)
+			continue;
+
+		spin_lock_bh(&ring->lock);
+		ycc_clear_cmd_ring(ring);
+		spin_unlock_bh(&ring->lock);
+
+		ycc_clear_resp_ring(ring);
+	}
+}
+
 static void ycc_process_global_err(struct work_struct *work)
 {
+	struct ycc_dev *ydev = container_of(work, struct ycc_dev, work);
+	struct ycc_bar *cfg_bar = &ydev->ycc_bars[YCC_SEC_CFG_BAR];
+	struct ycc_ring *ring;
+	u32 hclk_err, xclk_err;
+	u32 xclk_ecc_uncor_err_0, xclk_ecc_uncor_err_1;
+	u32 hclk_ecc_uncor_err;
+	int i;
+
+	if (pci_wait_for_pending_transaction(ydev->pdev))
+		pr_warn("Failed to wait for pending transactions\n");
+
+	hclk_err = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_HCLK_INT_STATUS);
+	xclk_err = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_XCLK_INT_STATUS);
+	xclk_ecc_uncor_err_0 = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_XCLK_MEM_ECC_UNCOR_0);
+	xclk_ecc_uncor_err_1 = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_XCLK_MEM_ECC_UNCOR_1);
+	hclk_ecc_uncor_err = YCC_CSR_RD(cfg_bar->vaddr, REG_YCC_HCLK_MEM_ECC_UNCOR);
+
+	if ((hclk_err & ~(YCC_HCLK_TRNG_ERR)) || xclk_err || hclk_ecc_uncor_err) {
+		pr_err("Got uncorrected error, device must be reset\n");
+		/*
+		 * Fatal error; as ycc cannot be reset in REE, clear ring data.
+		 */
+		return ycc_fatal_error(ydev);
+	}
+
+	if (xclk_ecc_uncor_err_0 || xclk_ecc_uncor_err_1) {
+		pr_err("Got algorithm ECC error: %x, %x\n",
+		       xclk_ecc_uncor_err_0, xclk_ecc_uncor_err_1);
+		return ycc_fatal_error(ydev);
+	}
+
+	/* This has to be a queue error. Handle the command rings. */
+	for (i = 0; i < YCC_RINGPAIR_NUM; i++) {
+		ring = ydev->rings + i;
+
+		if (ring->type != KERN_RING)
+			continue;
+
+		ring->status = YCC_CSR_RD(ring->csr_vaddr, REG_RING_STATUS);
+		if (ring->status) {
+			pr_err("YCC: Dev: %d, Ring: %d got ring err: %x\n",
+			       ydev->id, ring->ring_id, ring->status);
+			spin_lock_bh(&ring->lock);
+			ycc_clear_cmd_ring(ring);
+			spin_unlock_bh(&ring->lock);
+		}
+	}
+
+	/*
+	 * Give the HW a chance to process all pending cmds
+	 * by recovering transactions.
+	 */
+	pci_set_master(ydev->pdev);
+
+	for (i = 0; i < YCC_RINGPAIR_NUM; i++) {
+		ring = ydev->rings + i;
+
+		if (ring->type != KERN_RING || !ring->status)
+			continue;
+
+		ycc_clear_resp_ring(ring);
+	}
+
+	ycc_g_err_unmask(cfg_bar->vaddr);
+	clear_bit(YDEV_STATUS_ERR, &ydev->status);
+	set_bit(YDEV_STATUS_READY, &ydev->status);
 }
 
 static irqreturn_t ycc_g_err_isr(int irq, void *data)
@@ -45,6 +126,9 @@ static irqreturn_t ycc_g_err_isr(int irq, void *data)
 
 	clear_bit(YDEV_STATUS_READY, &ydev->status);
 
+	/* Disable YCC mastering, no new transactions */
+	pci_clear_master(ydev->pdev);
+
 	schedule_work(&ydev->work);
 	return IRQ_HANDLED;
 }
diff --git a/drivers/crypto/ycc/ycc_ring.c b/drivers/crypto/ycc/ycc_ring.c
index ea6877e..5207228 100644
--- a/drivers/crypto/ycc/ycc_ring.c
+++ b/drivers/crypto/ycc/ycc_ring.c
@@ -480,6 +480,24 @@ int ycc_enqueue(struct ycc_ring *ring, void *cmd)
 	return ret;
 }
 
+static void ycc_cancel_cmd(struct ycc_ring *ring, struct ycc_cmd_desc *desc)
+{
+	struct ycc_flags *aflag;
+
+	dma_rmb();
+
+	aflag = (struct ycc_flags *)desc->private_ptr;
+	if (!aflag || (u64)aflag == CMD_INVALID_CONTENT_U64) {
+		pr_debug("YCC: Invalid aflag\n");
+		return;
+	}
+
+	aflag->ycc_done_callback(aflag->ptr, CMD_CANCELLED);
+
+	memset(desc, CMD_INVALID_CONTENT_U8, sizeof(*desc));
+	kfree(aflag);
+}
+
 static inline void ycc_check_cmd_state(u16 state)
 {
 	switch (state) {
@@ -557,3 +575,75 @@ void ycc_dequeue(struct ycc_ring *ring)
 	if (cnt)
 		YCC_CSR_WR(ring->csr_vaddr, REG_RING_RSP_RD_PTR, ring->resp_rd_ptr);
 }
+
+/*
+ * Clear incomplete cmds in the command queue and roll back cmd_wr_ptr.
+ *
+ * Note: Must only be invoked after an error has occurred inside YCC and
+ * the YCC status is not ready.
+ */
+void ycc_clear_cmd_ring(struct ycc_ring *ring)
+{
+	struct ycc_cmd_desc *desc = NULL;
+
+	ring->cmd_rd_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_CMD_RD_PTR);
+	ring->cmd_wr_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_CMD_WR_PTR);
+
+	while (ring->cmd_rd_ptr != ring->cmd_wr_ptr) {
+		desc = (struct ycc_cmd_desc *)ring->cmd_base_vaddr +
+			ring->cmd_rd_ptr;
+		ycc_cancel_cmd(ring, desc);
+
+		if (--ring->cmd_wr_ptr == 0)
+			ring->cmd_wr_ptr = ring->max_desc;
+	}
+
+	YCC_CSR_WR(ring->csr_vaddr, REG_RING_CMD_WR_PTR, ring->cmd_wr_ptr);
+}
+
+/*
+ * Clear the response queue.
+ *
+ * Note: Must only be invoked after an error has occurred inside YCC and
+ * the YCC status is not ready.
+ */
+void ycc_clear_resp_ring(struct ycc_ring *ring)
+{
+	struct ycc_resp_desc *resp;
+	int retry;
+	u32 pending_cmd;
+
+	/*
+	 * Check if the ring has been stopped. *stopped* means no new
+	 * transactions, so there is no need to wait for pending cmds
+	 * to be processed in that case.
+	 */
+	retry = ycc_ring_stopped(ring) ? 0 : MAX_ERROR_RETRY;
+	pending_cmd = YCC_CSR_RD(ring->csr_vaddr, REG_RING_PENDING_CMD);
+
+	ring->resp_wr_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_RSP_WR_PTR);
+	while (!ycc_ring_empty(ring) || (retry && pending_cmd)) {
+		if (!ycc_ring_empty(ring)) {
+			resp = (struct ycc_resp_desc *)ring->resp_base_vaddr +
+				ring->resp_rd_ptr;
+			resp->state = CMD_CANCELLED;
+			ycc_handle_resp(ring, resp);
+
+			if (++ring->resp_rd_ptr == ring->max_desc)
+				ring->resp_rd_ptr = 0;
+
+			YCC_CSR_WR(ring->csr_vaddr, REG_RING_RSP_RD_PTR, ring->resp_rd_ptr);
+		} else {
+			udelay(MAX_SLEEP_US_PER_CHECK);
+			retry--;
+		}
+
+		pending_cmd = YCC_CSR_RD(ring->csr_vaddr, REG_RING_PENDING_CMD);
+		ring->resp_wr_ptr = YCC_CSR_RD(ring->csr_vaddr, REG_RING_RSP_WR_PTR);
+	}
+
+	if (!retry && pending_cmd)
+		ring->type = INVAL_RING;
+
+	ring->status = 0;
+}
diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h
index eb3e6f9..52b0fe8 100644
--- a/drivers/crypto/ycc/ycc_ring.h
+++ b/drivers/crypto/ycc/ycc_ring.h
@@ -20,6 +20,9 @@
 #define CMD_INVALID_CONTENT_U8  0x7f
 #define CMD_INVALID_CONTENT_U64 0x7f7f7f7f7f7f7f7fULL
 
+#define MAX_SLEEP_US_PER_CHECK 100   /* check the register every 100us */
+#define MAX_ERROR_RETRY        10000 /* 1s in total */
+
 enum ring_type {
 	FREE_RING,
 	USER_RING,
@@ -104,6 +107,8 @@ static inline bool ycc_ring_stopped(struct ycc_ring *ring)
 
 int ycc_enqueue(struct ycc_ring *ring, void *cmd);
 void ycc_dequeue(struct ycc_ring *ring);
+void ycc_clear_cmd_ring(struct ycc_ring *ring);
+void ycc_clear_resp_ring(struct ycc_ring *ring);
 struct ycc_ring *ycc_crypto_get_ring(void);
 void ycc_crypto_free_ring(struct ycc_ring *ring);
 int ycc_dev_rings_init(struct ycc_dev *ydev, u32 max_desc, int user_rings);

From patchwork Thu Nov 3 07:40:39 2022
X-Patchwork-Submitter: guanjun
X-Patchwork-Id: 14713
From: 'Guanjun'
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com, guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com
Subject: [PATCH v3 RESEND 5/9] crypto/ycc: Add skcipher algorithm support
Date: Thu, 3 Nov 2022 15:40:39 +0800
Message-Id: <1667461243-48652-6-git-send-email-guanjun@linux.alibaba.com>
In-Reply-To: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>
References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>

From: Guanjun

Support the skcipher algorithms.

Signed-off-by: Guanjun
---
 drivers/crypto/ycc/Kconfig    |   9 +
 drivers/crypto/ycc/Makefile   |   2 +-
 drivers/crypto/ycc/ycc_algs.h | 114 ++++++
 drivers/crypto/ycc/ycc_dev.h  |   3 +
 drivers/crypto/ycc/ycc_drv.c  |  52 +++
 drivers/crypto/ycc/ycc_ring.h |  17 +-
 drivers/crypto/ycc/ycc_ske.c  | 925 ++++++++++++++++++++++++++++++++++++++++++
 7 files changed, 1117 insertions(+), 5 deletions(-)
 create mode 100644 drivers/crypto/ycc/ycc_algs.h
 create mode 100644 drivers/crypto/ycc/ycc_ske.c

diff --git a/drivers/crypto/ycc/Kconfig b/drivers/crypto/ycc/Kconfig
index 6e88ecb..8dae75e 100644
--- a/drivers/crypto/ycc/Kconfig
+++ b/drivers/crypto/ycc/Kconfig
@@ -2,6 +2,15 @@ config CRYPTO_DEV_YCC
 	tristate "Support for Alibaba YCC cryptographic accelerator"
 	depends on CRYPTO && CRYPTO_HW && PCI
 	default n
+	select CRYPTO_SKCIPHER
+	select CRYPTO_LIB_DES
+	select CRYPTO_SM3_GENERIC
+	select CRYPTO_AES
+	select CRYPTO_CBC
+	select CRYPTO_ECB
+	select CRYPTO_CTR
+	select CRYPTO_XTS
+	select CRYPTO_SM4
 	help
 	  Enables the driver for the on-chip cryptographic accelerator of
 	  Alibaba Yitian SoCs which is based on ARMv9 architecture.
diff --git a/drivers/crypto/ycc/Makefile b/drivers/crypto/ycc/Makefile index 31aae9c..eedc1c8 100644 --- a/drivers/crypto/ycc/Makefile +++ b/drivers/crypto/ycc/Makefile @@ -1,3 +1,3 @@ # SPDX-License-Identifier: GPL-2.0 obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc.o -ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o +ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o ycc_ske.o diff --git a/drivers/crypto/ycc/ycc_algs.h b/drivers/crypto/ycc/ycc_algs.h new file mode 100644 index 00000000..6c7b0dc --- /dev/null +++ b/drivers/crypto/ycc/ycc_algs.h @@ -0,0 +1,114 @@ +// SPDX-License-Identifier: GPL-2.0 +#ifndef __YCC_ALG_H +#define __YCC_ALG_H + +#include + +#include "ycc_ring.h" +#include "ycc_dev.h" + +enum ycc_gcm_mode { + YCC_AES_128_GCM = 0, + YCC_AES_192_GCM, + YCC_AES_256_GCM, + YCC_SM4_GCM, +}; + +enum ycc_ccm_mode { + YCC_AES_128_CCM = 0, + YCC_AES_192_CCM, + YCC_AES_256_CCM, + YCC_SM4_CCM, +}; + +enum ycc_ske_alg_mode { + YCC_DES_ECB = 26, + YCC_DES_CBC, + YCC_DES_CFB, + YCC_DES_OFB, + YCC_DES_CTR, /* 30 */ + + YCC_TDES_128_ECB = 31, + YCC_TDES_128_CBC, + YCC_TDES_128_CFB, + YCC_TDES_128_OFB, + YCC_TDES_128_CTR, + YCC_TDES_192_ECB, + YCC_TDES_192_CBC, + YCC_TDES_192_CFB, + YCC_TDES_192_OFB, + YCC_TDES_192_CTR, /* 40 */ + + YCC_AES_128_ECB = 41, + YCC_AES_128_CBC, + YCC_AES_128_CFB, + YCC_AES_128_OFB, + YCC_AES_128_CTR, + YCC_AES_128_XTS, /* 46 */ + + YCC_AES_192_ECB = 48, + YCC_AES_192_CBC, + YCC_AES_192_CFB, + YCC_AES_192_OFB, + YCC_AES_192_CTR, /* 52 */ + + YCC_AES_256_ECB = 55, + YCC_AES_256_CBC, + YCC_AES_256_CFB, + YCC_AES_256_OFB, + YCC_AES_256_CTR, + YCC_AES_256_XTS, /* 60 */ + + YCC_SM4_ECB = 62, + YCC_SM4_CBC, + YCC_SM4_CFB, + YCC_SM4_OFB, + YCC_SM4_CTR, + YCC_SM4_XTS, /* 67 */ +}; + +enum ycc_cmd_id { + YCC_CMD_SKE_ENC = 0x23, + YCC_CMD_SKE_DEC, +}; + +struct ycc_crypto_ctx { + struct ycc_ring *ring; + void *soft_tfm; + + u32 keysize; + u32 key_dma_size; /* dma memory size for key/key+iv */ + + u8 mode; + u8 *cipher_key; + u8 reserved[4]; +}; + +struct ycc_crypto_req { + 
int mapped_src_nents; + int mapped_dst_nents; + + void *key_vaddr; + dma_addr_t key_paddr; + + struct ycc_cmd_desc desc; + struct skcipher_request *ske_req; + struct skcipher_request ske_subreq; + + void *src_vaddr; + dma_addr_t src_paddr; + void *dst_vaddr; + dma_addr_t dst_paddr; + + int in_len; + int out_len; + int aad_offset; + struct ycc_crypto_ctx *ctx; + u8 last_block[16]; /* used to store iv out when decrypt */ +}; + +#define YCC_DEV(ctx) (&(ctx)->ring->ydev->pdev->dev) + +int ycc_sym_register(void); +void ycc_sym_unregister(void); +#endif diff --git a/drivers/crypto/ycc/ycc_dev.h b/drivers/crypto/ycc/ycc_dev.h index 456a53922..a758f0d 100644 --- a/drivers/crypto/ycc/ycc_dev.h +++ b/drivers/crypto/ycc/ycc_dev.h @@ -151,4 +151,7 @@ static inline void ycc_g_err_unmask(void __iomem *vaddr) YCC_CSR_WR(vaddr, REG_YCC_DEV_INT_MASK, 0); } +int ycc_algorithm_register(void); +void ycc_algorithm_unregister(void); + #endif diff --git a/drivers/crypto/ycc/ycc_drv.c b/drivers/crypto/ycc/ycc_drv.c index 4eccd1f3..f4928a9 100644 --- a/drivers/crypto/ycc/ycc_drv.c +++ b/drivers/crypto/ycc/ycc_drv.c @@ -25,6 +25,7 @@ #include "ycc_isr.h" #include "ycc_ring.h" +#include "ycc_algs.h" static const char ycc_name[] = "ycc"; @@ -35,6 +36,8 @@ module_param(is_polling, bool, 0644); module_param(user_rings, int, 0644); +static atomic_t ycc_algs_refcnt; + LIST_HEAD(ycc_table); static DEFINE_MUTEX(ycc_mutex); @@ -75,6 +78,40 @@ static int ycc_dev_debugfs_statistics_open(struct inode *inode, struct file *fil .owner = THIS_MODULE, }; +int ycc_algorithm_register(void) +{ + int ret = 0; + + /* No kernel rings */ + if (user_rings == YCC_RINGPAIR_NUM) + return ret; + + /* Only register once */ + if (atomic_inc_return(&ycc_algs_refcnt) > 1) + return ret; + + ret = ycc_sym_register(); + if (ret) + goto err; + + return 0; + +err: + atomic_dec(&ycc_algs_refcnt); + return ret; +} + +void ycc_algorithm_unregister(void) +{ + if (user_rings == YCC_RINGPAIR_NUM) + return; + + if 
(atomic_dec_return(&ycc_algs_refcnt)) + return; + + ycc_sym_unregister(); +} + static int ycc_device_flr(struct pci_dev *pdev, struct pci_dev *rcec_pdev) { int ret; @@ -321,6 +358,7 @@ static void ycc_rcec_unbind(struct ycc_dev *ydev) ycc_resource_free(rciep); rciep->assoc_dev = NULL; rcec->assoc_dev = NULL; + ycc_algorithm_unregister(); } static int ycc_dev_add(struct ycc_dev *ydev) @@ -421,8 +459,20 @@ static int ycc_drv_probe(struct pci_dev *pdev, const struct pci_device_id *id) if (ret) goto remove_debugfs; + if (test_bit(YDEV_STATUS_READY, &ydev->status)) { + ret = ycc_algorithm_register(); + if (ret) { + pr_err("Failed to register algorithm\n"); + clear_bit(YDEV_STATUS_READY, &ydev->status); + clear_bit(YDEV_STATUS_READY, &ydev->assoc_dev->status); + goto dev_del; + } + } + return ret; +dev_del: + ycc_dev_del(ydev); remove_debugfs: if (ydev->type == YCC_RCIEP) { debugfs_remove_recursive(ydev->debug_dir); @@ -478,6 +528,8 @@ static int __init ycc_drv_init(void) { int ret; + atomic_set(&ycc_algs_refcnt, 0); + ret = pci_register_driver(&ycc_driver); if (ret) goto out; diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h index 52b0fe8..6a26fd8 100644 --- a/drivers/crypto/ycc/ycc_ring.h +++ b/drivers/crypto/ycc/ycc_ring.h @@ -75,11 +75,20 @@ struct ycc_resp_desc { u8 reserved[6]; }; +struct ycc_skcipher_cmd { + u8 cmd_id; + u8 mode; + u64 sptr:48; + u64 dptr:48; + u32 dlen; + u16 key_idx; /* key used to decrypt kek */ + u8 reserved[2]; + u64 keyptr:48; + u8 padding; +} __packed; + union ycc_real_cmd { - /* - * TODO: Real command will implement when - * corresponding algorithm is ready - */ + struct ycc_skcipher_cmd ske_cmd; u8 padding[32]; }; diff --git a/drivers/crypto/ycc/ycc_ske.c b/drivers/crypto/ycc/ycc_ske.c new file mode 100644 index 00000000..9facae7 --- /dev/null +++ b/drivers/crypto/ycc/ycc_ske.c @@ -0,0 +1,925 @@ +// SPDX-License-Identifier: GPL-2.0 + +#define pr_fmt(fmt) "YCC: Crypto: " fmt + +#include +#include +#include 
+#include +#include +#include +#include +#include +#include +#include "ycc_algs.h" + +static int ycc_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key, + unsigned int key_size, int mode, + unsigned int key_dma_size) +{ + struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm); + + if (ctx->cipher_key) { + memset(ctx->cipher_key, 0, ctx->keysize); + } else { + ctx->cipher_key = kzalloc(key_size, GFP_KERNEL); + if (!ctx->cipher_key) + return -ENOMEM; + } + memcpy(ctx->cipher_key, key, key_size); + ctx->mode = mode; + ctx->keysize = key_size; + ctx->key_dma_size = key_dma_size; + + if (ctx->soft_tfm && crypto_skcipher_setkey(ctx->soft_tfm, key, key_size)) + pr_warn("Failed to setkey for soft skcipher tfm\n"); + + return 0; +} + +#define DEFINE_YCC_SKE_AES_SETKEY(name, mode, size) \ +static int ycc_skcipher_aes_##name##_setkey(struct crypto_skcipher *tfm,\ + const u8 *key, \ + unsigned int key_size) \ +{ \ + int alg_mode; \ + switch (key_size) { \ + case AES_KEYSIZE_128: \ + alg_mode = YCC_AES_128_##mode; \ + break; \ + case AES_KEYSIZE_192: \ + alg_mode = YCC_AES_192_##mode; \ + break; \ + case AES_KEYSIZE_256: \ + alg_mode = YCC_AES_256_##mode; \ + break; \ + default: \ + return -EINVAL; \ + } \ + return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, size); \ +} + +#define DEFINE_YCC_SKE_SM4_SETKEY(name, mode, size) \ +static int ycc_skcipher_sm4_##name##_setkey(struct crypto_skcipher *tfm,\ + const u8 *key, \ + unsigned int key_size) \ +{ \ + int alg_mode = YCC_SM4_##mode; \ + if (key_size != SM4_KEY_SIZE) \ + return -EINVAL; \ + return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, size); \ +} + +#define DEFINE_YCC_SKE_DES_SETKEY(name, mode, size) \ +static int ycc_skcipher_des_##name##_setkey(struct crypto_skcipher *tfm,\ + const u8 *key, \ + unsigned int key_size) \ +{ \ + int alg_mode = YCC_DES_##mode; \ + int ret; \ + if (key_size != DES_KEY_SIZE) \ + return -EINVAL; \ + ret = verify_skcipher_des_key(tfm, key); \ + if (ret) \ + return ret; \ + 
return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, size); \ +} + +#define DEFINE_YCC_SKE_3DES_SETKEY(name, mode, size) \ +static int ycc_skcipher_3des_##name##_setkey(struct crypto_skcipher *tfm,\ + const u8 *key, \ + unsigned int key_size) \ +{ \ + int alg_mode = YCC_TDES_192_##mode; \ + int ret; \ + if (key_size != DES3_EDE_KEY_SIZE) \ + return -EINVAL; \ + ret = verify_skcipher_des3_key(tfm, key); \ + if (ret) \ + return ret; \ + return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, size); \ +} + +/* + * ECB: Only has 1 key, without IV, at least 32 bytes. + * Others except XTS: |key|iv|, at least 48 bytes. + */ +DEFINE_YCC_SKE_AES_SETKEY(ecb, ECB, 32); +DEFINE_YCC_SKE_AES_SETKEY(cbc, CBC, 48); +DEFINE_YCC_SKE_AES_SETKEY(ctr, CTR, 48); +DEFINE_YCC_SKE_AES_SETKEY(cfb, CFB, 48); +DEFINE_YCC_SKE_AES_SETKEY(ofb, OFB, 48); + +DEFINE_YCC_SKE_SM4_SETKEY(ecb, ECB, 32); +DEFINE_YCC_SKE_SM4_SETKEY(cbc, CBC, 48); +DEFINE_YCC_SKE_SM4_SETKEY(ctr, CTR, 48); +DEFINE_YCC_SKE_SM4_SETKEY(cfb, CFB, 48); +DEFINE_YCC_SKE_SM4_SETKEY(ofb, OFB, 48); + +DEFINE_YCC_SKE_DES_SETKEY(ecb, ECB, 32); +DEFINE_YCC_SKE_DES_SETKEY(cbc, CBC, 48); +DEFINE_YCC_SKE_DES_SETKEY(ctr, CTR, 48); +DEFINE_YCC_SKE_DES_SETKEY(cfb, CFB, 48); +DEFINE_YCC_SKE_DES_SETKEY(ofb, OFB, 48); + +DEFINE_YCC_SKE_3DES_SETKEY(ecb, ECB, 32); +DEFINE_YCC_SKE_3DES_SETKEY(cbc, CBC, 48); +DEFINE_YCC_SKE_3DES_SETKEY(ctr, CTR, 48); +DEFINE_YCC_SKE_3DES_SETKEY(cfb, CFB, 48); +DEFINE_YCC_SKE_3DES_SETKEY(ofb, OFB, 48); + +static int ycc_skcipher_aes_xts_setkey(struct crypto_skcipher *tfm, + const u8 *key, + unsigned int key_size) +{ + int alg_mode; + int ret; + + ret = xts_verify_key(tfm, key, key_size); + if (ret) + return ret; + + switch (key_size) { + case AES_KEYSIZE_128 * 2: + alg_mode = YCC_AES_128_XTS; + break; + case AES_KEYSIZE_256 * 2: + alg_mode = YCC_AES_256_XTS; + break; + default: + return -EINVAL; + } + + /* XTS: |key1|key2|iv|, at least 32 + 32 + 16 bytes */ + return ycc_skcipher_setkey(tfm, key, key_size, 
alg_mode, 80); +} + +static int ycc_skcipher_sm4_xts_setkey(struct crypto_skcipher *tfm, + const u8 *key, + unsigned int key_size) +{ + int alg_mode; + int ret; + + ret = xts_verify_key(tfm, key, key_size); + if (ret) + return ret; + + if (key_size != SM4_KEY_SIZE * 2) + return -EINVAL; + + alg_mode = YCC_SM4_XTS; + return ycc_skcipher_setkey(tfm, key, key_size, alg_mode, 80); +} + +static int ycc_skcipher_fill_key(struct ycc_crypto_req *req) +{ + struct ycc_crypto_ctx *ctx = req->ctx; + struct device *dev = YCC_DEV(ctx); + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req->ske_req); + u32 ivsize = crypto_skcipher_ivsize(tfm); + + if (!req->key_vaddr) { + req->key_vaddr = dma_alloc_coherent(dev, ALIGN(ctx->key_dma_size, 64), + &req->key_paddr, GFP_ATOMIC); + if (!req->key_vaddr) + return -ENOMEM; + } + + memset(req->key_vaddr, 0, ALIGN(ctx->key_dma_size, 64)); + /* XTS Mode has 2 keys & 1 iv */ + if (ctx->key_dma_size == 80) { + memcpy(req->key_vaddr + (32 - ctx->keysize / 2), + ctx->cipher_key, ctx->keysize / 2); + memcpy(req->key_vaddr + (64 - ctx->keysize / 2), + ctx->cipher_key + ctx->keysize / 2, ctx->keysize / 2); + } else { + memcpy(req->key_vaddr + (32 - ctx->keysize), ctx->cipher_key, + ctx->keysize); + } + + if (ivsize) { + if (ctx->mode == YCC_DES_ECB || + ctx->mode == YCC_TDES_128_ECB || + ctx->mode == YCC_TDES_192_ECB || + ctx->mode == YCC_AES_128_ECB || + ctx->mode == YCC_AES_192_ECB || + ctx->mode == YCC_AES_256_ECB || + ctx->mode == YCC_SM4_ECB) { + pr_err("Illegal ivsize for ECB mode, should be zero"); + goto clear_key; + } + + /* DES or 3DES */ + if (ctx->mode >= YCC_DES_ECB && ctx->mode <= YCC_TDES_192_CTR) { + if (ivsize > 8) + goto clear_key; + memcpy(req->key_vaddr + ctx->key_dma_size - 8, + req->ske_req->iv, ivsize); + } else { + memcpy(req->key_vaddr + ctx->key_dma_size - 16, + req->ske_req->iv, ivsize); + } + } + + return 0; +clear_key: + memset(req->key_vaddr, 0, ALIGN(ctx->key_dma_size, 64)); + dma_free_coherent(dev, 
ctx->key_dma_size, req->key_vaddr, req->key_paddr); + req->key_vaddr = NULL; + return -EINVAL; +} + +static int ycc_skcipher_sg_map(struct ycc_crypto_req *req) +{ + struct device *dev = YCC_DEV(req->ctx); + struct skcipher_request *ske_req = req->ske_req; + int src_nents; + + src_nents = sg_nents_for_len(ske_req->src, ske_req->cryptlen); + if (unlikely(src_nents <= 0)) { + pr_err("Failed to get src sg len\n"); + return -EINVAL; + } + + req->src_vaddr = dma_alloc_coherent(dev, ALIGN(req->in_len, 64), + &req->src_paddr, GFP_ATOMIC); + if (!req->src_vaddr) + return -ENOMEM; + + req->dst_vaddr = dma_alloc_coherent(dev, ALIGN(req->in_len, 64), + &req->dst_paddr, GFP_ATOMIC); + if (!req->dst_vaddr) { + dma_free_coherent(dev, ALIGN(req->in_len, 64), + req->src_vaddr, req->src_paddr); + return -ENOMEM; + } + + sg_copy_to_buffer(ske_req->src, src_nents, req->src_vaddr, ske_req->cryptlen); + return 0; +} + +static inline void ycc_skcipher_sg_unmap(struct ycc_crypto_req *req) +{ + struct device *dev = YCC_DEV(req->ctx); + + dma_free_coherent(dev, ALIGN(req->in_len, 64), req->src_vaddr, req->src_paddr); + dma_free_coherent(dev, ALIGN(req->in_len, 64), req->dst_vaddr, req->dst_paddr); +} + +/* + * For CBC & CTR + */ +static void ycc_skcipher_iv_out(struct ycc_crypto_req *req, void *dst) +{ + struct skcipher_request *ske_req = req->ske_req; + struct crypto_skcipher *stfm = crypto_skcipher_reqtfm(ske_req); + u8 bs = crypto_skcipher_blocksize(stfm); + u8 mode = req->ctx->mode; + u8 cmd = req->desc.cmd.ske_cmd.cmd_id; + u32 nb = (ske_req->cryptlen + bs - 1) / bs; + + switch (mode) { + case YCC_DES_CBC: + case YCC_TDES_128_CBC: + case YCC_TDES_192_CBC: + case YCC_AES_128_CBC: + case YCC_AES_192_CBC: + case YCC_AES_256_CBC: + case YCC_SM4_CBC: + if (cmd == YCC_CMD_SKE_DEC) + memcpy(ske_req->iv, req->last_block, bs); + else + memcpy(ske_req->iv, + (u8 *)dst + ALIGN(ske_req->cryptlen, bs) - bs, + bs); + break; + case YCC_DES_CTR: + case YCC_TDES_128_CTR: + case YCC_TDES_192_CTR: + case 
YCC_AES_128_CTR: + case YCC_AES_192_CTR: + case YCC_AES_256_CTR: + case YCC_SM4_CTR: + for ( ; nb-- ; ) + crypto_inc(ske_req->iv, bs); + break; + default: + return; + } +} + +static int ycc_skcipher_callback(void *ptr, u16 state) +{ + struct ycc_crypto_req *req = (struct ycc_crypto_req *)ptr; + struct skcipher_request *ske_req = req->ske_req; + struct ycc_crypto_ctx *ctx = req->ctx; + struct device *dev = YCC_DEV(ctx); + + sg_copy_from_buffer(ske_req->dst, + sg_nents_for_len(ske_req->dst, ske_req->cryptlen), + req->dst_vaddr, ske_req->cryptlen); + + if (state == CMD_SUCCESS) + ycc_skcipher_iv_out(req, req->dst_vaddr); + + ycc_skcipher_sg_unmap(req); + + if (req->key_vaddr) { + memset(req->key_vaddr, 0, ALIGN(ctx->key_dma_size, 64)); + dma_free_coherent(dev, ALIGN(ctx->key_dma_size, 64), + req->key_vaddr, req->key_paddr); + req->key_vaddr = NULL; + } + if (ske_req->base.complete) + ske_req->base.complete(&ske_req->base, + state == CMD_SUCCESS ? 0 : -EBADMSG); + + return 0; +} + +static inline bool ycc_skcipher_do_soft(struct ycc_dev *ydev) +{ + return !test_bit(YDEV_STATUS_READY, &ydev->status); +} + +static int ycc_skcipher_submit_desc(struct skcipher_request *ske_req, u8 cmd) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(ske_req); + struct ycc_crypto_req *req = skcipher_request_ctx(ske_req); + struct ycc_skcipher_cmd *ske_cmd = &req->desc.cmd.ske_cmd; + struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm); + struct ycc_flags *aflags; + u8 bs = crypto_skcipher_blocksize(tfm); + int ret; + + memset(req, 0, sizeof(*req)); + req->ctx = ctx; + req->ske_req = ske_req; + req->in_len = ALIGN(ske_req->cryptlen, bs); + + /* + * The length of request, 64n + bs, may lead the device hung. + * So append one bs here. This is a workaround for hardware issue. 
+ */ + if (req->in_len % 64 == bs) + req->in_len += bs; + + ret = ycc_skcipher_fill_key(req); + if (ret) + return ret; + + ret = ycc_skcipher_sg_map(req); + if (ret) + goto free_key; + + ret = -ENOMEM; + aflags = kzalloc(sizeof(struct ycc_flags), GFP_ATOMIC); + if (!aflags) + goto sg_unmap; + + aflags->ptr = (void *)req; + aflags->ycc_done_callback = ycc_skcipher_callback; + + req->desc.private_ptr = (u64)aflags; + ske_cmd->cmd_id = cmd; + ske_cmd->mode = ctx->mode; + ske_cmd->sptr = req->src_paddr; + ske_cmd->dptr = req->dst_paddr; + ske_cmd->dlen = req->in_len; + ske_cmd->keyptr = req->key_paddr; + ske_cmd->padding = 0; + + /* LKCF will check iv output, for decryption, the iv is its last block */ + if (cmd == YCC_CMD_SKE_DEC) + memcpy(req->last_block, + req->src_vaddr + ALIGN(ske_req->cryptlen, bs) - bs, bs); + + ret = ycc_enqueue(ctx->ring, &req->desc); + if (!ret) + return -EINPROGRESS; + + pr_debug("Failed to submit desc to ring\n"); + kfree(aflags); + +sg_unmap: + ycc_skcipher_sg_unmap(req); +free_key: + memset(req->key_vaddr, 0, ALIGN(ctx->key_dma_size, 64)); + dma_free_coherent(YCC_DEV(ctx), + ALIGN(ctx->key_dma_size, 64), + req->key_vaddr, req->key_paddr); + req->key_vaddr = NULL; + return ret; +} + +static int ycc_skcipher_encrypt(struct skcipher_request *req) +{ + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); + struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm); + struct skcipher_request *subreq = + &((struct ycc_crypto_req *)skcipher_request_ctx(req))->ske_subreq; + + if (ycc_skcipher_do_soft(ctx->ring->ydev)) { + skcipher_request_set_tfm(subreq, ctx->soft_tfm); + skcipher_request_set_callback(subreq, req->base.flags, + req->base.complete, req->base.data); + skcipher_request_set_crypt(subreq, req->src, req->dst, + req->cryptlen, req->iv); + return crypto_skcipher_encrypt(subreq); + } + + return ycc_skcipher_submit_desc(req, YCC_CMD_SKE_ENC); +} + +static int ycc_skcipher_decrypt(struct skcipher_request *req) +{ + struct crypto_skcipher 
*tfm = crypto_skcipher_reqtfm(req);
+	struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct skcipher_request *subreq =
+		&((struct ycc_crypto_req *)skcipher_request_ctx(req))->ske_subreq;
+
+	if (ycc_skcipher_do_soft(ctx->ring->ydev)) {
+		skcipher_request_set_tfm(subreq, ctx->soft_tfm);
+		skcipher_request_set_callback(subreq, req->base.flags,
+					      req->base.complete, req->base.data);
+		skcipher_request_set_crypt(subreq, req->src, req->dst,
+					   req->cryptlen, req->iv);
+		/* The software fallback must decrypt here, not encrypt */
+		return crypto_skcipher_decrypt(subreq);
+	}
+
+	return ycc_skcipher_submit_desc(req, YCC_CMD_SKE_DEC);
+}
+
+static int ycc_skcipher_init(struct crypto_skcipher *tfm)
+{
+	struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct ycc_ring *ring;
+
+	ctx->soft_tfm = crypto_alloc_skcipher(crypto_tfm_alg_name(crypto_skcipher_tfm(tfm)), 0,
+					      CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC);
+	if (IS_ERR(ctx->soft_tfm)) {
+		pr_warn("Failed to allocate soft tfm for: %s, software fallback is limited\n",
+			crypto_tfm_alg_name(crypto_skcipher_tfm(tfm)));
+		ctx->soft_tfm = NULL;
+		crypto_skcipher_set_reqsize(tfm, sizeof(struct ycc_crypto_req));
+	} else {
+		/*
+		 * For the software fallback, store the metadata of the soft request.
+	 */
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct ycc_crypto_req) +
+				    crypto_skcipher_reqsize(ctx->soft_tfm));
+	}
+
+	ring = ycc_crypto_get_ring();
+	if (!ring)
+		return -ENOMEM;
+
+	ctx->ring = ring;
+	return 0;
+}
+
+static void ycc_skcipher_exit(struct crypto_skcipher *tfm)
+{
+	struct ycc_crypto_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	if (ctx->ring)
+		ycc_crypto_free_ring(ctx->ring);
+
+	kfree(ctx->cipher_key);
+
+	if (ctx->soft_tfm)
+		crypto_free_skcipher((struct crypto_skcipher *)ctx->soft_tfm);
+}
+
+static struct skcipher_alg ycc_skciphers[] = {
+	{
+		.base = {
+			.cra_name = "cbc(aes)",
+			.cra_driver_name = "cbc-aes-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = AES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_aes_cbc_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = AES_MIN_KEY_SIZE,
+		.max_keysize = AES_MAX_KEY_SIZE,
+		.ivsize = AES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ecb(aes)",
+			.cra_driver_name = "ecb-aes-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = AES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_aes_ecb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = AES_MIN_KEY_SIZE,
+		.max_keysize = AES_MAX_KEY_SIZE,
+		.ivsize = 0,
+	},
+	{
+		.base = {
+			.cra_name = "ctr(aes)",
+			.cra_driver_name = "ctr-aes-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = AES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_aes_ctr_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = AES_MIN_KEY_SIZE,
+		.max_keysize = AES_MAX_KEY_SIZE,
+		.ivsize = AES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "cfb(aes)",
+			.cra_driver_name = "cfb-aes-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = AES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_aes_cfb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = AES_MIN_KEY_SIZE,
+		.max_keysize = AES_MAX_KEY_SIZE,
+		.ivsize = AES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ofb(aes)",
+			.cra_driver_name = "ofb-aes-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = AES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_aes_ofb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = AES_MIN_KEY_SIZE,
+		.max_keysize = AES_MAX_KEY_SIZE,
+		.ivsize = AES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "xts(aes)",
+			.cra_driver_name = "xts-aes-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = AES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_aes_xts_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = AES_MIN_KEY_SIZE * 2,
+		.max_keysize = AES_MAX_KEY_SIZE * 2,
+		.ivsize = AES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "cbc(sm4)",
+			.cra_driver_name = "cbc-sm4-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = SM4_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_sm4_cbc_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = SM4_KEY_SIZE,
+		.max_keysize = SM4_KEY_SIZE,
+		.ivsize = SM4_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ecb(sm4)",
+			.cra_driver_name = "ecb-sm4-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = SM4_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_sm4_ecb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = SM4_KEY_SIZE,
+		.max_keysize = SM4_KEY_SIZE,
+		.ivsize = 0,
+	},
+	{
+		.base = {
+			.cra_name = "ctr(sm4)",
+			.cra_driver_name = "ctr-sm4-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = SM4_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_sm4_ctr_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = SM4_KEY_SIZE,
+		.max_keysize = SM4_KEY_SIZE,
+		.ivsize = SM4_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "cfb(sm4)",
+			.cra_driver_name = "cfb-sm4-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = SM4_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_sm4_cfb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = SM4_KEY_SIZE,
+		.max_keysize = SM4_KEY_SIZE,
+		.ivsize = SM4_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ofb(sm4)",
+			.cra_driver_name = "ofb-sm4-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = SM4_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_sm4_ofb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = SM4_KEY_SIZE,
+		.max_keysize = SM4_KEY_SIZE,
+		.ivsize = SM4_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "xts(sm4)",
+			.cra_driver_name = "xts-sm4-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = SM4_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_sm4_xts_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = SM4_KEY_SIZE * 2,
+		.max_keysize = SM4_KEY_SIZE * 2,
+		.ivsize = SM4_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "cbc(des)",
+			.cra_driver_name = "cbc-des-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_des_cbc_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES_KEY_SIZE,
+		.max_keysize = DES_KEY_SIZE,
+		.ivsize = DES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ecb(des)",
+			.cra_driver_name = "ecb-des-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_des_ecb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES_KEY_SIZE,
+		.max_keysize = DES_KEY_SIZE,
+		.ivsize = 0,
+	},
+	{
+		.base = {
+			.cra_name = "ctr(des)",
+			.cra_driver_name = "ctr-des-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_des_ctr_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES_KEY_SIZE,
+		.max_keysize = DES_KEY_SIZE,
+		.ivsize = DES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "cfb(des)",
+			.cra_driver_name = "cfb-des-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_des_cfb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES_KEY_SIZE,
+		.max_keysize = DES_KEY_SIZE,
+		.ivsize = DES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ofb(des)",
+			.cra_driver_name = "ofb-des-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_des_ofb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES_KEY_SIZE,
+		.max_keysize = DES_KEY_SIZE,
+		.ivsize = DES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "cbc(des3_ede)",
+			.cra_driver_name = "cbc-des3_ede-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_3des_cbc_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES3_EDE_KEY_SIZE,
+		.max_keysize = DES3_EDE_KEY_SIZE,
+		.ivsize = DES3_EDE_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ecb(des3_ede)",
+			.cra_driver_name = "ecb-des3_ede-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_3des_ecb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES3_EDE_KEY_SIZE,
+		.max_keysize = DES3_EDE_KEY_SIZE,
+		.ivsize = 0,
+	},
+	{
+		.base = {
+			.cra_name = "ctr(des3_ede)",
+			.cra_driver_name = "ctr-des3_ede-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_3des_ctr_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES3_EDE_KEY_SIZE,
+		.max_keysize = DES3_EDE_KEY_SIZE,
+		.ivsize = DES3_EDE_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "cfb(des3_ede)",
+			.cra_driver_name = "cfb-des3_ede-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_3des_cfb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES3_EDE_KEY_SIZE,
+		.max_keysize = DES3_EDE_KEY_SIZE,
+		.ivsize = DES3_EDE_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ofb(des3_ede)",
+			.cra_driver_name = "ofb-des3_ede-ycc",
+			.cra_priority = 4001,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = DES3_EDE_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_skcipher_init,
+		.exit = ycc_skcipher_exit,
+		.setkey = ycc_skcipher_3des_ofb_setkey,
+		.encrypt = ycc_skcipher_encrypt,
+		.decrypt = ycc_skcipher_decrypt,
+		.min_keysize = DES3_EDE_KEY_SIZE,
+		.max_keysize = DES3_EDE_KEY_SIZE,
+		.ivsize = DES3_EDE_BLOCK_SIZE,
+	},
+};
+
+int ycc_sym_register(void)
+{
+	return crypto_register_skciphers(ycc_skciphers, ARRAY_SIZE(ycc_skciphers));
+}
+
+void ycc_sym_unregister(void)
+{
+	crypto_unregister_skciphers(ycc_skciphers, ARRAY_SIZE(ycc_skciphers));
+}

From patchwork Thu Nov 3 07:40:40 2022
From: 'Guanjun'
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com,
    guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org,
    linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com
Subject: [PATCH v3 RESEND 6/9] crypto/ycc: Add aead algorithm support
Date: Thu, 3 Nov 2022 15:40:40 +0800
Message-Id: <1667461243-48652-7-git-send-email-guanjun@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>
References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>

From: Guanjun <guanjun@linux.alibaba.com>

Support AEAD algorithms (GCM and CCM for AES and SM4).

Signed-off-by: Guanjun <guanjun@linux.alibaba.com>
---
 drivers/crypto/ycc/Kconfig    |   1 +
 drivers/crypto/ycc/Makefile   |   2 +-
 drivers/crypto/ycc/ycc_aead.c | 646 ++++++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ycc/ycc_algs.h |  20 +-
 drivers/crypto/ycc/ycc_drv.c  |   7 +
 drivers/crypto/ycc/ycc_ring.h |  14 +
 6 files changed, 687 insertions(+), 3 deletions(-)
 create mode 100644 drivers/crypto/ycc/ycc_aead.c

diff --git a/drivers/crypto/ycc/Kconfig b/drivers/crypto/ycc/Kconfig
index 8dae75e..d2808c3 100644
--- a/drivers/crypto/ycc/Kconfig
+++ b/drivers/crypto/ycc/Kconfig
@@ -5,6 +5,7 @@ config CRYPTO_DEV_YCC
 	select CRYPTO_SKCIPHER
 	select CRYPTO_LIB_DES
 	select CRYPTO_SM3_GENERIC
+	select CRYPTO_AEAD
 	select CRYPTO_AES
 	select CRYPTO_CBC
 	select CRYPTO_ECB
diff --git a/drivers/crypto/ycc/Makefile b/drivers/crypto/ycc/Makefile
index eedc1c8..d629dd5 100644
--- a/drivers/crypto/ycc/Makefile
+++ b/drivers/crypto/ycc/Makefile
@@ -1,3 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc.o
-ycc-objs := ycc_drv.o ycc_isr.o
ycc_ring.o ycc_ske.o
+ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o ycc_ske.o ycc_aead.o
diff --git a/drivers/crypto/ycc/ycc_aead.c b/drivers/crypto/ycc/ycc_aead.c
new file mode 100644
index 00000000..8e9489e
--- /dev/null
+++ b/drivers/crypto/ycc/ycc_aead.c
@@ -0,0 +1,646 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define pr_fmt(fmt) "YCC: Crypto: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include "ycc_algs.h"
+
+static int ycc_aead_init(struct crypto_aead *tfm)
+{
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	struct ycc_ring *ring;
+
+	ctx->soft_tfm = crypto_alloc_aead(crypto_tfm_alg_name(crypto_aead_tfm(tfm)),
+					  0,
+					  CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC);
+	if (IS_ERR(ctx->soft_tfm)) {
+		pr_warn("Failed to allocate soft tfm for %s, software fallback is limited\n",
+			crypto_tfm_alg_name(crypto_aead_tfm(tfm)));
+		ctx->soft_tfm = NULL;
+		crypto_aead_set_reqsize(tfm, sizeof(struct ycc_crypto_req));
+	} else {
+		/*
+		 * If there is a software fallback, store the metadata of the
+		 * soft request as well.
+		 */
+		crypto_aead_set_reqsize(tfm, sizeof(struct ycc_crypto_req) +
+					crypto_aead_reqsize(ctx->soft_tfm));
+	}
+
+	ring = ycc_crypto_get_ring();
+	if (!ring)
+		return -ENOMEM;
+
+	ctx->ring = ring;
+	return 0;
+}
+
+static void ycc_aead_exit(struct crypto_aead *tfm)
+{
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+
+	if (ctx->ring)
+		ycc_crypto_free_ring(ctx->ring);
+
+	kfree(ctx->cipher_key);
+
+	if (ctx->soft_tfm)
+		crypto_free_aead((struct crypto_aead *)ctx->soft_tfm);
+}
+
+static int ycc_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+			   unsigned int key_size)
+{
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	const char *alg_name = crypto_tfm_alg_name(&tfm->base);
+
+	if (!strncmp("gcm(sm4)", alg_name, strlen("gcm(sm4)"))) {
+		if (key_size != SM4_KEY_SIZE)
+			return -EINVAL;
+		ctx->mode = YCC_SM4_GCM;
+	} else if (!strncmp("ccm(sm4)", alg_name, strlen("ccm(sm4)"))) {
+		ctx->mode = YCC_SM4_CCM;
+	} else if (!strncmp("gcm(aes)", alg_name, strlen("gcm(aes)"))) {
+		switch (key_size) {
+		case AES_KEYSIZE_128:
+			ctx->mode = YCC_AES_128_GCM;
+			break;
+		case AES_KEYSIZE_192:
+			ctx->mode = YCC_AES_192_GCM;
+			break;
+		case AES_KEYSIZE_256:
+			ctx->mode = YCC_AES_256_GCM;
+			break;
+		default:
+			return -EINVAL;
+		}
+	} else if (!strncmp("ccm(aes)", alg_name, strlen("ccm(aes)"))) {
+		switch (key_size) {
+		case AES_KEYSIZE_128:
+			ctx->mode = YCC_AES_128_CCM;
+			break;
+		case AES_KEYSIZE_192:
+			ctx->mode = YCC_AES_192_CCM;
+			break;
+		case AES_KEYSIZE_256:
+			ctx->mode = YCC_AES_256_CCM;
+			break;
+		default:
+			return -EINVAL;
+		}
+	}
+
+	if (ctx->cipher_key) {
+		memset(ctx->cipher_key, 0, ctx->keysize);
+	} else {
+		ctx->cipher_key = kzalloc(key_size, GFP_KERNEL);
+		if (!ctx->cipher_key)
+			return -ENOMEM;
+	}
+
+	memcpy(ctx->cipher_key, key, key_size);
+	ctx->keysize = key_size;
+	if (ctx->soft_tfm)
+		if (crypto_aead_setkey(ctx->soft_tfm, key, key_size))
+			pr_warn("Failed to setkey for soft aead tfm\n");
+
+	return 0;
+}
+
+static int ycc_aead_fill_key(struct
ycc_crypto_req *req)
+{
+	struct ycc_crypto_ctx *ctx = req->ctx;
+	struct device *dev = YCC_DEV(ctx);
+	struct aead_request *aead_req = req->aead_req;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	const char *alg_name = crypto_tfm_alg_name(&tfm->base);
+	int iv_len = 12;
+
+	if (!strncmp("ccm", alg_name, strlen("ccm")))
+		iv_len = 16;
+
+	if (!req->key_vaddr) {
+		req->key_vaddr = dma_alloc_coherent(dev, 64, &req->key_paddr,
+						    GFP_ATOMIC);
+		if (!req->key_vaddr)
+			return -ENOMEM;
+	}
+
+	memset(req->key_vaddr, 0, 64);
+	memcpy(req->key_vaddr + (32 - ctx->keysize), ctx->cipher_key, ctx->keysize);
+	memcpy(req->key_vaddr + 32, req->aead_req->iv, iv_len);
+	ctx->key_dma_size = 64;
+	return 0;
+}
+
+static int ycc_aead_sg_map(struct ycc_crypto_req *req)
+{
+	struct device *dev = YCC_DEV(req->ctx);
+	int ret = -ENOMEM;
+
+	req->src_paddr = dma_map_single(dev, req->src_vaddr,
+					ALIGN(req->in_len, 64), DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, req->src_paddr)) {
+		pr_err("Failed to map src dma memory\n");
+		goto out;
+	}
+
+	req->dst_vaddr = dma_alloc_coherent(dev, ALIGN(req->out_len, 64),
+					    &req->dst_paddr, GFP_ATOMIC);
+	if (!req->dst_vaddr)
+		goto unmap_src;
+
+	return 0;
+unmap_src:
+	dma_unmap_single(dev, req->src_paddr, ALIGN(req->in_len, 64), DMA_TO_DEVICE);
+out:
+	return ret;
+}
+
+static void ycc_aead_sg_unmap(struct ycc_crypto_req *req)
+{
+	struct device *dev = YCC_DEV(req->ctx);
+
+	dma_unmap_single(dev, req->src_paddr, ALIGN(req->in_len, 64), DMA_TO_DEVICE);
+	dma_free_coherent(dev, ALIGN(req->in_len, 64), req->dst_vaddr, req->dst_paddr);
+}
+
+static inline void ycc_aead_unformat_data(struct ycc_crypto_req *req)
+{
+	kfree(req->src_vaddr);
+}
+
+static int ycc_aead_callback(void *ptr, u16 state)
+{
+	struct ycc_crypto_req *req = (struct ycc_crypto_req *)ptr;
+	struct aead_request *aead_req = req->aead_req;
+	struct ycc_crypto_ctx *ctx = req->ctx;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	int taglen = crypto_aead_authsize(tfm);
+	struct device *dev = YCC_DEV(ctx);
+
+	/* TODO: workaround for GCM/CCM with junk bytes between ctext and tag */
+	if ((req->desc.cmd.aead_cmd.cmd_id == YCC_CMD_GCM_ENC ||
+	     req->desc.cmd.aead_cmd.cmd_id == YCC_CMD_CCM_ENC) &&
+	    aead_req->cryptlen % 16 != 0)
+		memcpy(req->dst_vaddr + aead_req->cryptlen,
+		       req->dst_vaddr + ALIGN(aead_req->cryptlen, 16), taglen);
+	scatterwalk_map_and_copy(req->src_vaddr + req->aad_offset, aead_req->dst, 0,
+				 aead_req->assoclen, 1);
+	if (req->desc.cmd.aead_cmd.cmd_id == YCC_CMD_GCM_ENC ||
+	    req->desc.cmd.aead_cmd.cmd_id == YCC_CMD_CCM_ENC) {
+		scatterwalk_map_and_copy(req->dst_vaddr, aead_req->dst,
+					 aead_req->assoclen,
+					 aead_req->cryptlen + taglen, 1);
+	} else {
+		scatterwalk_map_and_copy(req->dst_vaddr, aead_req->dst,
+					 aead_req->assoclen,
+					 aead_req->cryptlen - taglen, 1);
+	}
+
+	ycc_aead_sg_unmap(req);
+	ycc_aead_unformat_data(req);
+	if (req->key_vaddr) {
+		memset(req->key_vaddr, 0, 64);
+		dma_free_coherent(dev, 64, req->key_vaddr, req->key_paddr);
+		req->key_vaddr = NULL;
+	}
+
+	if (aead_req->base.complete)
+		aead_req->base.complete(&aead_req->base,
+					state == CMD_SUCCESS ? 0 : -EBADMSG);
+
+	return 0;
+}
+
+#define aead_blob_len(x, y, z)	ALIGN(((x) + (y) + (z)), 16)
+
+static void *__ycc_aead_format_data(struct ycc_crypto_req *req, u8 *b0, u8 *b1,
+				    int alen, u8 cmd)
+{
+	struct aead_request *aead_req = req->aead_req;
+	int aad_len = aead_req->assoclen;
+	int cryptlen = aead_req->cryptlen;
+	int taglen = crypto_aead_authsize(crypto_aead_reqtfm(aead_req));
+	int src_len = cryptlen;
+	int b0_len = 0;
+	void *vaddr;
+	int size;
+
+	/* b0 != NULL means ccm, b0 len is 16 bytes */
+	if (b0)
+		b0_len = 16;
+
+	size = aead_blob_len(b0_len, alen, aad_len);
+	if (cmd == YCC_CMD_GCM_DEC || cmd == YCC_CMD_CCM_DEC) {
+		/*
+		 * LKCF layout is unaligned |cipher_text|tag_text|,
+		 * while a ycc request wants |16-aligned cipher_text|16-aligned tag_text|
+		 */
+		src_len = cryptlen - taglen;
+		size += ALIGN(src_len, 16) + ALIGN(taglen, 16);
+	} else {
+		size += ALIGN(cryptlen, 16);
+	}
+
+	vaddr = kzalloc(ALIGN(size, 64), GFP_ATOMIC);
+	if (!vaddr)
+		return NULL;
+
+	if (b0)
+		memcpy(vaddr, b0, b0_len);
+	if (b1)
+		memcpy(vaddr + b0_len, b1, alen);
+	scatterwalk_map_and_copy(vaddr + b0_len + alen, aead_req->src, 0,
+				 aad_len, 0);
+	scatterwalk_map_and_copy(vaddr + aead_blob_len(b0_len, alen, aad_len),
+				 aead_req->src, aad_len,
+				 src_len, 0);
+	if (cmd == YCC_CMD_GCM_DEC || cmd == YCC_CMD_CCM_DEC)
+		scatterwalk_map_and_copy(vaddr +
+					 aead_blob_len(b0_len, alen, aad_len) +
+					 ALIGN(src_len, 16),
+					 aead_req->src, aad_len + cryptlen - taglen,
+					 taglen, 0);
+
+	req->in_len = size;
+	req->aad_offset = b0_len + alen;
+	return vaddr;
+}
+
+static void *ycc_aead_format_ccm_data(struct ycc_crypto_req *req,
+				      u16 *new_aad_len, u8 cmd)
+{
+	struct aead_request *aead_req = req->aead_req;
+	unsigned int taglen = crypto_aead_authsize(crypto_aead_reqtfm(aead_req));
+	unsigned int aad_len = aead_req->assoclen;
+	unsigned int cryptlen = aead_req->cryptlen;
+	u8 b0[16] = {0};
+	u8 b1[10] = {0}; /* Store encoded aad length */
+	u8 alen = 0;
+	int l;
+	__be32 msglen;
+
+	/* 1. check iv value: aead_req->iv[0] = L - 1 */
+	if (aead_req->iv[0] < 1 || aead_req->iv[0] > 7) {
+		pr_err("L value is not valid for CCM\n");
+		return NULL;
+	}
+
+	l = aead_req->iv[0] + 1;
+
+	/* 2. format control information and nonce */
+	memcpy(b0, aead_req->iv, 16); /* iv max size is 15 - L */
+	b0[0] |= (((taglen - 2) / 2) << 3);
+	if (aad_len) {
+		b0[0] |= (1 << 6);
+		if (aad_len < 65280) {
+			/* 2 bytes encode aad length */
+			*(__be16 *)b1 = cpu_to_be16(aad_len);
+			alen = 2;
+		} else {
+			*(__be16 *)b1 = cpu_to_be16(0xfffe);
+			*(__be32 *)&b1[2] = cpu_to_be32(aad_len);
+			alen = 6;
+		}
+		*new_aad_len = ALIGN((16 + alen + aad_len), 16);
+	} else {
+		*new_aad_len = 16;
+	}
+	b0[0] |= aead_req->iv[0];
+
+	/* 3. set msg length. L - 1 bytes store the msg length */
+	if (l >= 4)
+		l = 4;
+	else if (cryptlen > (1 << (8 * l)))
+		return NULL;
+	if (cmd == YCC_CMD_CCM_DEC)
+		msglen = cpu_to_be32(cryptlen - taglen);
+	else
+		msglen = cpu_to_be32(cryptlen);
+	memcpy(&b0[16 - l], (u8 *)&msglen + 4 - l, l);
+
+	return __ycc_aead_format_data(req, b0, b1, alen, cmd);
+}
+
+static void *ycc_aead_format_data(struct ycc_crypto_req *req, u16 *new_aad_len,
+				  u32 *new_cryptlen, u8 cmd)
+{
+	struct aead_request *aead_req = req->aead_req;
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	int taglen = crypto_aead_authsize(tfm);
+
+	if (cmd == YCC_CMD_GCM_ENC || cmd == YCC_CMD_GCM_DEC) {
+		/* GCM */
+		*new_aad_len = aead_req->assoclen;
+		*new_cryptlen = aead_req->cryptlen;
+		req->out_len = *new_cryptlen + taglen;
+		return __ycc_aead_format_data(req, NULL, NULL, 0, cmd);
+	}
+
+	/* CCM */
+	*new_cryptlen = ALIGN(aead_req->cryptlen, 16);
+	req->out_len = *new_cryptlen + taglen;
+	return ycc_aead_format_ccm_data(req, new_aad_len, cmd);
+}
+
+/*
+ * This is a workaround: if the ycc output length satisfies
+ * outlen % 64 == 16, the engine might hang. taglen is 16 or 0.
+ */
+static inline bool ycc_aead_do_soft(struct aead_request *aead_req, int taglen)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	struct ycc_dev *ydev = ctx->ring->ydev;
+
+	if ((ALIGN(aead_req->cryptlen, 64) + taglen) % 64 == 16 ||
+	    !test_bit(YDEV_STATUS_READY, &ydev->status))
+		return true;
+
+	return false;
+}
+
+static int ycc_aead_submit_desc(struct aead_request *aead_req, u8 cmd)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	struct ycc_crypto_req *req = aead_request_ctx(aead_req);
+	struct ycc_flags *aflags;
+	int taglen = crypto_aead_authsize(tfm);
+	u16 new_aad_len;
+	u32 new_cryptlen;
+	struct crypto_aes_ctx aes_ctx;
+	u8 tag[16];
+	u8 ziv[16] = {0};
+	__be32 counter = cpu_to_be32(1);
+	int ret = 0;
+
+	/*
+	 * YCC hw does not support gcm zero length plaintext. According to spec,
+	 * if cryptlen is 0, just do aes_encrypt against IV
+	 */
+	if (aead_req->cryptlen == 0 && cmd == YCC_CMD_GCM_ENC) {
+		ret = aes_expandkey(&aes_ctx, ctx->cipher_key, ctx->keysize);
+		if (ret)
+			return ret;
+		memcpy(ziv, aead_req->iv, 12);
+		memcpy(ziv + 12, &counter, 4);
+		aes_encrypt(&aes_ctx, tag, ziv);
+		sg_copy_from_buffer(aead_req->dst,
+				    sg_nents_for_len(aead_req->dst, taglen),
+				    tag, taglen);
+		return 0;
+	}
+
+	if (aead_req->cryptlen == taglen && cmd == YCC_CMD_GCM_DEC) {
+		ret = aes_expandkey(&aes_ctx, ctx->cipher_key, ctx->keysize);
+		if (ret)
+			return ret;
+		/* Skip aad */
+		sg_copy_buffer(aead_req->src,
+			       sg_nents_for_len(aead_req->src, taglen),
+			       tag, taglen, aead_req->assoclen, 1);
+		aes_decrypt(&aes_ctx, ziv, tag);
+		sg_copy_from_buffer(aead_req->dst,
+				    sg_nents_for_len(aead_req->dst, taglen),
+				    ziv, taglen);
+		return 0;
+	}
+
+	memset(req, 0, sizeof(*req));
+	req->ctx = ctx;
+	req->aead_req = aead_req;
+
+	ret = ycc_aead_fill_key(req);
+	if (ret)
+		return ret;
+
+	req->src_vaddr = ycc_aead_format_data(req,
&new_aad_len, &new_cryptlen, cmd);
+	if (!req->src_vaddr)
+		goto free_key;
+
+	ret = ycc_aead_sg_map(req);
+	if (ret)
+		goto unformat;
+
+	ret = -ENOMEM;
+	aflags = kzalloc(sizeof(struct ycc_flags), GFP_ATOMIC);
+	if (!aflags)
+		goto sg_unmap;
+
+	memset(&req->desc.cmd, 0, sizeof(union ycc_real_cmd));
+	aflags->ptr = (void *)req;
+	aflags->ycc_done_callback = ycc_aead_callback;
+	req->desc.private_ptr = (u64)aflags;
+	req->desc.cmd.aead_cmd.cmd_id = cmd;
+	req->desc.cmd.aead_cmd.mode = ctx->mode;
+	req->desc.cmd.aead_cmd.sptr = req->src_paddr;
+	req->desc.cmd.aead_cmd.dptr = req->dst_paddr;
+	if (cmd == YCC_CMD_GCM_DEC || cmd == YCC_CMD_CCM_DEC)
+		new_cryptlen = aead_req->cryptlen - taglen;
+	req->desc.cmd.aead_cmd.dlen = new_cryptlen;
+	req->desc.cmd.aead_cmd.keyptr = req->key_paddr;
+	req->desc.cmd.aead_cmd.aadlen = new_aad_len;
+	req->desc.cmd.aead_cmd.taglen = taglen;
+
+	/* 4. submit desc to cmd queue */
+	ret = ycc_enqueue(ctx->ring, &req->desc);
+	if (!ret)
+		return -EINPROGRESS;
+
+	pr_err("Failed to submit desc to ring\n");
+	kfree(aflags);
+
+sg_unmap:
+	ycc_aead_sg_unmap(req);
+unformat:
+	ycc_aead_unformat_data(req);
+free_key:
+	memset(req->key_vaddr, 0, 64);
+	dma_free_coherent(YCC_DEV(ctx), 64, req->key_vaddr, req->key_paddr);
+	req->key_vaddr = NULL;
+	return ret;
+}
+
+static int ycc_aead_ccm_encrypt(struct aead_request *aead_req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_request *subreq =
+		&((struct ycc_crypto_req *)aead_request_ctx(aead_req))->aead_subreq;
+
+	if (ycc_aead_do_soft(aead_req, 16)) {
+		if (!ctx->soft_tfm)
+			return -ENOENT;
+		aead_request_set_tfm(subreq, ctx->soft_tfm);
+		aead_request_set_callback(subreq, aead_req->base.flags,
+					  aead_req->base.complete, aead_req->base.data);
+		aead_request_set_crypt(subreq, aead_req->src, aead_req->dst,
+				       aead_req->cryptlen, aead_req->iv);
+		aead_request_set_ad(subreq, aead_req->assoclen);
+		crypto_aead_setauthsize(ctx->soft_tfm, crypto_aead_authsize(tfm));
+		return crypto_aead_encrypt(subreq);
+	}
+
+	return ycc_aead_submit_desc(aead_req, YCC_CMD_CCM_ENC);
+}
+
+static int ycc_aead_gcm_encrypt(struct aead_request *aead_req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_request *subreq =
+		&((struct ycc_crypto_req *)aead_request_ctx(aead_req))->aead_subreq;
+
+	if (ycc_aead_do_soft(aead_req, 16)) {
+		if (!ctx->soft_tfm)
+			return -ENOENT;
+		aead_request_set_tfm(subreq, ctx->soft_tfm);
+		aead_request_set_callback(subreq, aead_req->base.flags,
+					  aead_req->base.complete, aead_req->base.data);
+		aead_request_set_crypt(subreq, aead_req->src, aead_req->dst,
+				       aead_req->cryptlen, aead_req->iv);
+		aead_request_set_ad(subreq, aead_req->assoclen);
+		crypto_aead_setauthsize(ctx->soft_tfm, crypto_aead_authsize(tfm));
+		return crypto_aead_encrypt(subreq);
+	}
+
+	return ycc_aead_submit_desc(aead_req, YCC_CMD_GCM_ENC);
+}
+
+static int ycc_aead_gcm_decrypt(struct aead_request *aead_req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_request *subreq =
+		&((struct ycc_crypto_req *)aead_request_ctx(aead_req))->aead_subreq;
+
+	if (ycc_aead_do_soft(aead_req, 0)) {
+		if (!ctx->soft_tfm)
+			return -ENOENT;
+		aead_request_set_tfm(subreq, ctx->soft_tfm);
+		aead_request_set_callback(subreq, aead_req->base.flags,
+					  aead_req->base.complete, aead_req->base.data);
+		aead_request_set_crypt(subreq, aead_req->src, aead_req->dst,
+				       aead_req->cryptlen, aead_req->iv);
+		aead_request_set_ad(subreq, aead_req->assoclen);
+		crypto_aead_setauthsize(ctx->soft_tfm, crypto_aead_authsize(tfm));
+		return crypto_aead_decrypt(subreq);
+	}
+
+	return ycc_aead_submit_desc(aead_req, YCC_CMD_GCM_DEC);
+}
+
+static int ycc_aead_ccm_decrypt(struct aead_request *aead_req)
+{
+	struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+	struct ycc_crypto_ctx *ctx = crypto_aead_ctx(tfm);
+	struct aead_request *subreq =
+		&((struct ycc_crypto_req *)aead_request_ctx(aead_req))->aead_subreq;
+
+	if (ycc_aead_do_soft(aead_req, 0)) {
+		if (!ctx->soft_tfm)
+			return -ENOENT;
+		aead_request_set_tfm(subreq, ctx->soft_tfm);
+		aead_request_set_callback(subreq, aead_req->base.flags,
+					  aead_req->base.complete, aead_req->base.data);
+		aead_request_set_crypt(subreq, aead_req->src, aead_req->dst,
+				       aead_req->cryptlen, aead_req->iv);
+		aead_request_set_ad(subreq, aead_req->assoclen);
+		crypto_aead_setauthsize(ctx->soft_tfm, crypto_aead_authsize(tfm));
+		return crypto_aead_decrypt(subreq);
+	}
+
+	return ycc_aead_submit_desc(aead_req, YCC_CMD_CCM_DEC);
+}
+
+static struct aead_alg ycc_aeads[] = {
+	{
+		.base = {
+			.cra_name = "gcm(aes)",
+			.cra_driver_name = "gcm-aes-ycc",
+			.cra_priority = 350,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = 1,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_aead_init,
+		.exit = ycc_aead_exit,
+		.setkey = ycc_aead_setkey,
+		.decrypt = ycc_aead_gcm_decrypt,
+		.encrypt = ycc_aead_gcm_encrypt,
+		.ivsize = AES_BLOCK_SIZE,
+		.maxauthsize = AES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "gcm(sm4)",
+			.cra_driver_name = "gcm-sm4-ycc",
+			.cra_priority = 350,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = 1,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_aead_init,
+		.exit = ycc_aead_exit,
+		.setkey = ycc_aead_setkey,
+		.decrypt = ycc_aead_gcm_decrypt,
+		.encrypt = ycc_aead_gcm_encrypt,
+		.ivsize = SM4_BLOCK_SIZE,
+		.maxauthsize = SM4_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ccm(aes)",
+			.cra_driver_name = "ccm-aes-ycc",
+			.cra_priority = 350,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = 1,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_aead_init,
+		.exit = ycc_aead_exit,
+		.setkey = ycc_aead_setkey,
+		.decrypt = ycc_aead_ccm_decrypt,
+		.encrypt = ycc_aead_ccm_encrypt,
+		.ivsize = AES_BLOCK_SIZE,
+		.maxauthsize = AES_BLOCK_SIZE,
+	},
+	{
+		.base = {
+			.cra_name = "ccm(sm4)",
+			.cra_driver_name = "ccm-sm4-ycc",
+			.cra_priority = 350,
+			.cra_flags = CRYPTO_ALG_ASYNC,
+			.cra_blocksize = 1,
+			.cra_ctxsize = sizeof(struct ycc_crypto_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.init = ycc_aead_init,
+		.exit = ycc_aead_exit,
+		.setkey = ycc_aead_setkey,
+		.decrypt = ycc_aead_ccm_decrypt,
+		.encrypt = ycc_aead_ccm_encrypt,
+		.ivsize = SM4_BLOCK_SIZE,
+		.maxauthsize = SM4_BLOCK_SIZE,
+	},
+};
+
+int ycc_aead_register(void)
+{
+	return crypto_register_aeads(ycc_aeads, ARRAY_SIZE(ycc_aeads));
+}
+
+void ycc_aead_unregister(void)
+{
+	crypto_unregister_aeads(ycc_aeads, ARRAY_SIZE(ycc_aeads));
+}
diff --git a/drivers/crypto/ycc/ycc_algs.h b/drivers/crypto/ycc/ycc_algs.h
index 6c7b0dc..e3be83ec 100644
--- a/drivers/crypto/ycc/ycc_algs.h
+++ b/drivers/crypto/ycc/ycc_algs.h
@@ -3,6 +3,7 @@
 #define __YCC_ALG_H
 
 #include
+#include
 #include "ycc_ring.h"
 #include "ycc_dev.h"
@@ -70,6 +71,11 @@ enum ycc_ske_alg_mode {
 enum ycc_cmd_id {
 	YCC_CMD_SKE_ENC = 0x23,
 	YCC_CMD_SKE_DEC,
+
+	YCC_CMD_GCM_ENC = 0x25,
+	YCC_CMD_GCM_DEC,
+	YCC_CMD_CCM_ENC,
+	YCC_CMD_CCM_DEC, /* 0x28 */
 };
 
 struct ycc_crypto_ctx {
@@ -92,8 +98,10 @@ struct ycc_crypto_req {
 	dma_addr_t key_paddr;
 
 	struct ycc_cmd_desc desc;
-	struct skcipher_request *ske_req;
-	struct skcipher_request ske_subreq;
+	union {
+		struct skcipher_request *ske_req;
+		struct aead_request *aead_req;
+	};
 
 	void *src_vaddr;
 	dma_addr_t src_paddr;
@@ -105,10 +113,18 @@ struct ycc_crypto_req {
 	int aad_offset;
 	struct ycc_crypto_ctx *ctx;
 	u8 last_block[16]; /* used to store iv out when decrypt */
+
+	/* soft request for fallback, keep at the end */
+	union {
+		struct skcipher_request ske_subreq;
+		struct aead_request aead_subreq;
+	};
 };
 
 #define YCC_DEV(ctx) (&(ctx)->ring->ydev->pdev->dev)
 
 int ycc_sym_register(void);
 void ycc_sym_unregister(void);
+int ycc_aead_register(void);
+void ycc_aead_unregister(void);
 #endif
diff --git a/drivers/crypto/ycc/ycc_drv.c b/drivers/crypto/ycc/ycc_drv.c
index f4928a9..b8af132 100644
--- a/drivers/crypto/ycc/ycc_drv.c
+++ b/drivers/crypto/ycc/ycc_drv.c
@@ -94,8 +94,14 @@ int ycc_algorithm_register(void)
 	if (ret)
 		goto err;
 
+	ret = ycc_aead_register();
+	if (ret)
+		goto unregister_sym;
+
 	return 0;
 
+unregister_sym:
+	ycc_sym_unregister();
 err:
 	atomic_dec(&ycc_algs_refcnt);
 	return ret;
@@ -109,6 +115,7 @@ void ycc_algorithm_unregister(void)
 	if (atomic_dec_return(&ycc_algs_refcnt))
 		return;
 
+	ycc_aead_unregister();
 	ycc_sym_unregister();
 }
diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h
index 6a26fd8..1bb301b 100644
--- a/drivers/crypto/ycc/ycc_ring.h
+++ b/drivers/crypto/ycc/ycc_ring.h
@@ -87,8 +87,22 @@ struct ycc_skcipher_cmd {
 	u8 padding;
 } __packed;
 
+struct ycc_aead_cmd {
+	u8 cmd_id;
+	u8 mode;
+	u64 sptr:48;	/* include aad + payload */
+	u64 dptr:48;	/* encrypted/decrypted + tag */
+	u32 dlen;	/* data size */
+	u16 key_idx;
+	u16 kek_idx;
+	u64 keyptr:48;
+	u16 aadlen;
+	u8 taglen;	/* authentication tag size */
+} __packed;
+
 union ycc_real_cmd {
 	struct ycc_skcipher_cmd ske_cmd;
+	struct ycc_aead_cmd aead_cmd;
 	u8 padding[32];
 };
From: 'Guanjun'
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com,
    guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org,
    linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com
Subject: [PATCH v3 RESEND 7/9] crypto/ycc: Add rsa algorithm support
Date: Thu, 3 Nov 2022 15:40:41 +0800
Message-Id: <1667461243-48652-8-git-send-email-guanjun@linux.alibaba.com>
In-Reply-To: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>
References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>

From: Guanjun

Support the rsa algorithm for YCC. This includes encryption/decryption
as well as signing/verification.

Signed-off-by: Guanjun
---
 drivers/crypto/ycc/Makefile   |   2 +-
 drivers/crypto/ycc/ycc_algs.h |  44 +++
 drivers/crypto/ycc/ycc_drv.c  |   7 +
 drivers/crypto/ycc/ycc_pke.c  | 696 ++++++++++++++++++++++++++++++++++++++++++
 drivers/crypto/ycc/ycc_ring.h |  23 ++
 5 files changed, 771 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/ycc/ycc_pke.c

diff --git a/drivers/crypto/ycc/Makefile b/drivers/crypto/ycc/Makefile
index d629dd5..d1f22a9 100644
--- a/drivers/crypto/ycc/Makefile
+++ b/drivers/crypto/ycc/Makefile
@@ -1,3 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc.o
-ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o ycc_ske.o ycc_aead.o
+ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o ycc_ske.o ycc_aead.o ycc_pke.o
diff --git a/drivers/crypto/ycc/ycc_algs.h b/drivers/crypto/ycc/ycc_algs.h
index e3be83ec..6a13230a 100644
--- a/drivers/crypto/ycc/ycc_algs.h
+++ b/drivers/crypto/ycc/ycc_algs.h
@@ -76,6 +76,13 @@ enum ycc_cmd_id {
 	YCC_CMD_GCM_DEC,
 	YCC_CMD_CCM_ENC,
 	YCC_CMD_CCM_DEC, /* 0x28 */
+
+	YCC_CMD_RSA_ENC = 0x83,
+	YCC_CMD_RSA_DEC,
+	YCC_CMD_RSA_CRT_DEC,
+	YCC_CMD_RSA_CRT_SIGN,
+	YCC_CMD_RSA_SIGN,
+	YCC_CMD_RSA_VERIFY, /* 0x88 */
 };
 
 struct ycc_crypto_ctx {
@@ -121,10 +128,47 @@
struct ycc_crypto_req {
 	};
 };
 
+#define YCC_RSA_KEY_SZ_512 64
+#define YCC_RSA_KEY_SZ_1536 192
+#define YCC_RSA_CRT_PARAMS 5
+#define YCC_RSA_E_SZ_MAX 8
+#define YCC_CMD_DATA_ALIGN_SZ 64
+#define YCC_PIN_SZ 16
+
+struct ycc_pke_ctx {
+	struct rsa_key *rsa_key;
+
+	void *priv_key_vaddr;
+	dma_addr_t priv_key_paddr;
+	void *pub_key_vaddr;
+	dma_addr_t pub_key_paddr;
+
+	unsigned int key_len;
+	unsigned int e_len;
+	bool crt_mode;
+	struct ycc_ring *ring;
+	struct crypto_akcipher *soft_tfm;
+};
+
+struct ycc_pke_req {
+	void *src_vaddr;
+	dma_addr_t src_paddr;
+	void *dst_vaddr;
+	dma_addr_t dst_paddr;
+
+	struct ycc_cmd_desc desc;
+	union {
+		struct ycc_pke_ctx *ctx;
+	};
+	struct akcipher_request *req;
+};
+
 #define YCC_DEV(ctx) (&(ctx)->ring->ydev->pdev->dev)
 
 int ycc_sym_register(void);
 void ycc_sym_unregister(void);
 int ycc_aead_register(void);
 void ycc_aead_unregister(void);
+int ycc_pke_register(void);
+void ycc_pke_unregister(void);
 #endif
diff --git a/drivers/crypto/ycc/ycc_drv.c b/drivers/crypto/ycc/ycc_drv.c
index b8af132..aab4419 100644
--- a/drivers/crypto/ycc/ycc_drv.c
+++ b/drivers/crypto/ycc/ycc_drv.c
@@ -98,8 +98,14 @@ int ycc_algorithm_register(void)
 	if (ret)
 		goto unregister_sym;
 
+	ret = ycc_pke_register();
+	if (ret)
+		goto unregister_aead;
+
 	return 0;
 
+unregister_aead:
+	ycc_aead_unregister();
 unregister_sym:
 	ycc_sym_unregister();
 err:
@@ -115,6 +121,7 @@ void ycc_algorithm_unregister(void)
 	if (atomic_dec_return(&ycc_algs_refcnt))
 		return;
 
+	ycc_pke_unregister();
 	ycc_aead_unregister();
 	ycc_sym_unregister();
 }
diff --git a/drivers/crypto/ycc/ycc_pke.c b/drivers/crypto/ycc/ycc_pke.c
new file mode 100644
index 00000000..3debd80
--- /dev/null
+++ b/drivers/crypto/ycc/ycc_pke.c
@@ -0,0 +1,696 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#define pr_fmt(fmt) "YCC: Crypto: " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include "ycc_algs.h"
+
+static int ycc_rsa_done_callback(void *ptr, u16 state)
+{
+	struct ycc_pke_req *rsa_req = (struct ycc_pke_req *)ptr;
+	struct ycc_pke_ctx *ctx = rsa_req->ctx;
+	struct akcipher_request *req = rsa_req->req;
+	struct device *dev = YCC_DEV(ctx);
+	unsigned int dma_length = ctx->key_len;
+
+	if (rsa_req->desc.cmd.rsa_enc_cmd.cmd_id == YCC_CMD_RSA_VERIFY)
+		dma_length = ctx->key_len << 1;
+
+	/* For signature verify, dst is NULL */
+	if (rsa_req->dst_vaddr) {
+		sg_copy_from_buffer(req->dst, sg_nents_for_len(req->dst, req->dst_len),
+				    rsa_req->dst_vaddr, req->dst_len);
+		dma_free_coherent(dev, ALIGN(ctx->key_len, 64),
+				  rsa_req->dst_vaddr, rsa_req->dst_paddr);
+	}
+	dma_free_coherent(dev, ALIGN(dma_length, 64),
+			  rsa_req->src_vaddr, rsa_req->src_paddr);
+
+	if (req->base.complete)
+		req->base.complete(&req->base, state == CMD_SUCCESS ? 0 : -EBADMSG);
+
+	return 0;
+}
+
+static int ycc_prepare_dma_buf(struct ycc_pke_req *rsa_req, int is_src)
+{
+	struct ycc_pke_ctx *ctx = rsa_req->ctx;
+	struct akcipher_request *req = rsa_req->req;
+	struct device *dev = YCC_DEV(ctx);
+	unsigned int dma_length = ctx->key_len;
+	dma_addr_t tmp;
+	void *ptr;
+	int shift;
+
+	/*
+	 * Ycc requires 2 key_len blocks: the first block stores the
+	 * message pre-padded with 0, the second block stores the signature.
+	 * For LKCF akcipher verify, the first sg contains the signature and
+	 * the second contains the message, while src_len is the signature
+	 * length and dst_len is the message length.
+	 */
+	if (rsa_req->desc.cmd.rsa_enc_cmd.cmd_id == YCC_CMD_RSA_VERIFY) {
+		dma_length = ctx->key_len << 1;
+		shift = ctx->key_len - req->dst_len;
+	} else {
+		shift = ctx->key_len - req->src_len;
+	}
+
+	if (unlikely(shift < 0))
+		return -EINVAL;
+
+	ptr = dma_alloc_coherent(dev, ALIGN(dma_length, 64), &tmp, GFP_ATOMIC);
+	if (unlikely(!ptr)) {
+		pr_err("Failed to alloc dma for %s data\n", is_src ? "src" : "dst");
+		return -ENOMEM;
+	}
+
+	memset(ptr, 0, ALIGN(dma_length, 64));
+	if (is_src) {
+		if (rsa_req->desc.cmd.rsa_enc_cmd.cmd_id == YCC_CMD_RSA_VERIFY) {
+			/* Copy msg first with prepadding 0 */
+			sg_copy_buffer(req->src, sg_nents(req->src), ptr + shift,
+				       req->dst_len, req->src_len, 1);
+			/* Copy signature */
+			sg_copy_buffer(req->src, sg_nents(req->src), ptr + ctx->key_len,
+				       req->src_len, 0, 1);
+		} else {
+			sg_copy_buffer(req->src, sg_nents(req->src), ptr + shift,
+				       req->src_len, 0, 1);
+		}
+		rsa_req->src_vaddr = ptr;
+		rsa_req->src_paddr = tmp;
+	} else {
+		rsa_req->dst_vaddr = ptr;
+		rsa_req->dst_paddr = tmp;
+	}
+
+	return 0;
+}
+
+/*
+ * Using public key to encrypt or verify
+ */
+static int ycc_rsa_submit_pub(struct akcipher_request *req, bool is_enc)
+{
+	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct ycc_pke_req *rsa_req = akcipher_request_ctx(req);
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ycc_rsa_enc_cmd *rsa_enc_cmd;
+	struct ycc_ring *ring = ctx->ring;
+	struct device *dev = YCC_DEV(ctx);
+	struct ycc_flags *aflags;
+	int ret = -ENOMEM;
+
+	if (req->dst_len > ctx->key_len || req->src_len > ctx->key_len)
+		return -EINVAL;
+
+	rsa_req->ctx = ctx;
+	rsa_req->req = req;
+
+	if (unlikely(!ctx->pub_key_vaddr))
+		return -EINVAL;
+
+	aflags = kzalloc(sizeof(struct ycc_flags), GFP_ATOMIC);
+	if (!aflags)
+		goto out;
+
+	aflags->ptr = (void *)rsa_req;
+	aflags->ycc_done_callback = ycc_rsa_done_callback;
+
+	memset(&rsa_req->desc, 0, sizeof(rsa_req->desc));
+	rsa_req->desc.private_ptr = (u64)(void *)aflags;
+
+	rsa_enc_cmd = &rsa_req->desc.cmd.rsa_enc_cmd;
+	rsa_enc_cmd->cmd_id = is_enc ? YCC_CMD_RSA_ENC : YCC_CMD_RSA_VERIFY;
+	rsa_enc_cmd->keyptr = ctx->pub_key_paddr;
+	rsa_enc_cmd->elen = ctx->e_len << 3;
+	rsa_enc_cmd->nlen = ctx->key_len << 3;
+
+	ret = ycc_prepare_dma_buf(rsa_req, 1);
+	if (unlikely(ret))
+		goto free_aflags;
+
+	rsa_enc_cmd->sptr = rsa_req->src_paddr;
+	if (is_enc) {
+		ret = ycc_prepare_dma_buf(rsa_req, 0);
+		if (unlikely(ret))
+			goto free_src;
+
+		rsa_enc_cmd->dptr = rsa_req->dst_paddr;
+	} else {
+		rsa_req->dst_vaddr = NULL;
+	}
+
+	ret = ycc_enqueue(ring, (u8 *)&rsa_req->desc);
+	if (!ret)
+		return -EINPROGRESS;
+
+	if (rsa_req->dst_vaddr)
+		dma_free_coherent(dev, ALIGN(ctx->key_len, 64),
+				  rsa_req->dst_vaddr, rsa_req->dst_paddr);
+
+free_src:
+	dma_free_coherent(dev, ALIGN(is_enc ? ctx->key_len : ctx->key_len << 1, 64),
+			  rsa_req->src_vaddr, rsa_req->src_paddr);
+free_aflags:
+	kfree(aflags);
+out:
+	return ret;
+}
+
+/*
+ * Using private key to decrypt or sign
+ */
+static int ycc_rsa_submit_priv(struct akcipher_request *req, bool is_dec)
+{
+	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct ycc_pke_req *rsa_req = akcipher_request_ctx(req);
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ycc_rsa_dec_cmd *rsa_dec_cmd;
+	struct ycc_ring *ring = ctx->ring;
+	struct device *dev = YCC_DEV(ctx);
+	struct ycc_flags *aflags;
+	int ret = -ENOMEM;
+
+	if (req->dst_len > ctx->key_len || req->src_len > ctx->key_len)
+		return -EINVAL;
+
+	rsa_req->ctx = ctx;
+	rsa_req->req = req;
+
+	if (unlikely(!ctx->priv_key_vaddr))
+		return -EINVAL;
+
+	aflags = kzalloc(sizeof(struct ycc_flags), GFP_ATOMIC);
+	if (!aflags)
+		goto out;
+
+	aflags->ptr = (void *)rsa_req;
+	aflags->ycc_done_callback = ycc_rsa_done_callback;
+
+	memset(&rsa_req->desc, 0, sizeof(rsa_req->desc));
+	rsa_req->desc.private_ptr = (u64)(void *)aflags;
+
+	rsa_dec_cmd = &rsa_req->desc.cmd.rsa_dec_cmd;
+	rsa_dec_cmd->keyptr = ctx->priv_key_paddr;
+	rsa_dec_cmd->elen = ctx->e_len << 3;
+	rsa_dec_cmd->nlen = ctx->key_len << 3;
+	if (ctx->crt_mode)
+		rsa_dec_cmd->cmd_id = is_dec ? YCC_CMD_RSA_CRT_DEC : YCC_CMD_RSA_CRT_SIGN;
+	else
+		rsa_dec_cmd->cmd_id = is_dec ? YCC_CMD_RSA_DEC : YCC_CMD_RSA_SIGN;
+
+	ret = ycc_prepare_dma_buf(rsa_req, 1);
+	if (unlikely(ret))
+		goto free_aflags;
+
+	ret = ycc_prepare_dma_buf(rsa_req, 0);
+	if (unlikely(ret))
+		goto free_src;
+
+	rsa_dec_cmd->sptr = rsa_req->src_paddr;
+	rsa_dec_cmd->dptr = rsa_req->dst_paddr;
+
+	ret = ycc_enqueue(ring, (u8 *)&rsa_req->desc);
+	if (!ret)
+		return -EINPROGRESS;
+
+	dma_free_coherent(dev, ALIGN(ctx->key_len, 64), rsa_req->dst_vaddr,
+			  rsa_req->dst_paddr);
+free_src:
+	dma_free_coherent(dev, ALIGN(ctx->key_len, 64), rsa_req->src_vaddr,
+			  rsa_req->src_paddr);
+free_aflags:
+	kfree(aflags);
+out:
+	return ret;
+}
+
+static inline bool ycc_rsa_do_soft(struct akcipher_request *req)
+{
+	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ycc_dev *ydev = ctx->ring->ydev;
+
+	if (ctx->key_len == YCC_RSA_KEY_SZ_512 ||
+	    ctx->key_len == YCC_RSA_KEY_SZ_1536 ||
+	    !test_bit(YDEV_STATUS_READY, &ydev->status))
+		return true;
+
+	return false;
+}
+
+enum rsa_ops {
+	RSA_ENC,
+	RSA_DEC,
+	RSA_SIGN,
+	RSA_VERIFY,
+};
+
+static inline int ycc_rsa_soft_fallback(struct akcipher_request *req, int ops)
+{
+	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	int ret = -EINVAL;
+
+	akcipher_request_set_tfm(req, ctx->soft_tfm);
+
+	switch (ops) {
+	case RSA_ENC:
+		ret = crypto_akcipher_encrypt(req);
+		break;
+	case RSA_DEC:
+		ret = crypto_akcipher_decrypt(req);
+		break;
+	case RSA_SIGN:
+		ret = crypto_akcipher_sign(req);
+		break;
+	case RSA_VERIFY:
+		ret = crypto_akcipher_verify(req);
+		break;
+	default:
+		break;
+	}
+
+	akcipher_request_set_tfm(req, tfm);
+	return ret;
+}
+
+static int ycc_rsa_encrypt(struct akcipher_request *req)
+{
+	if (ycc_rsa_do_soft(req))
+		return ycc_rsa_soft_fallback(req, RSA_ENC);
+
+	return ycc_rsa_submit_pub(req, true);
+}
+
+static int ycc_rsa_decrypt(struct akcipher_request *req)
+{
+	if (ycc_rsa_do_soft(req))
+		return ycc_rsa_soft_fallback(req, RSA_DEC);
+
+	return ycc_rsa_submit_priv(req, true);
+}
+
+static int ycc_rsa_verify(struct akcipher_request *req)
+{
+	if (ycc_rsa_do_soft(req))
+		return ycc_rsa_soft_fallback(req, RSA_VERIFY);
+
+	return ycc_rsa_submit_pub(req, false);
+}
+
+static int ycc_rsa_sign(struct akcipher_request *req)
+{
+	if (ycc_rsa_do_soft(req))
+		return ycc_rsa_soft_fallback(req, RSA_SIGN);
+
+	return ycc_rsa_submit_priv(req, false);
+}
+
+static int ycc_rsa_validate_n(unsigned int len)
+{
+	unsigned int bitslen = len << 3;
+
+	switch (bitslen) {
+	case 512:
+	case 1024:
+	case 1536:
+	case 2048:
+	case 3072:
+	case 4096:
+		return 0;
+	default:
+		return -EINVAL;
+	}
+}
+
+static void __ycc_rsa_drop_leading_zeros(const u8 **ptr, size_t *len)
+{
+	if (!*ptr)
+		return;
+
+	while (!**ptr && *len) {
+		(*ptr)++;
+		(*len)--;
+	}
+}
+
+static int ycc_rsa_set_n(struct ycc_pke_ctx *ctx, const char *value,
+			 size_t value_len, bool private)
+{
+	const char *ptr = value;
+
+	/* e should be set before n as we need e_len */
+	if (!ctx->e_len || !value_len)
+		return -EINVAL;
+
+	if (!ctx->key_len)
+		ctx->key_len = value_len;
+
+	if (private && !ctx->crt_mode)
+		memcpy(ctx->priv_key_vaddr + ctx->e_len + YCC_PIN_SZ +
+		       ctx->rsa_key->d_sz, ptr, value_len);
+
+	memcpy(ctx->pub_key_vaddr + ctx->e_len, ptr, value_len);
+	return 0;
+}
+
+static int ycc_rsa_set_e(struct ycc_pke_ctx *ctx, const char *value,
+			 size_t value_len, bool private)
+{
+	const char *ptr = value;
+
+	if (!ctx->key_len || !value_len || value_len > YCC_RSA_E_SZ_MAX)
+		return -EINVAL;
+
+	ctx->e_len = value_len;
+	if (private)
+		memcpy(ctx->priv_key_vaddr, ptr, value_len);
+
+	memcpy(ctx->pub_key_vaddr, ptr, value_len);
+	return 0;
+}
+
+static int ycc_rsa_set_d(struct ycc_pke_ctx *ctx, const char *value,
+			 size_t value_len)
+{
+	const char *ptr = value;
+
+	if (!ctx->key_len || !value_len || value_len > ctx->key_len)
+		return -EINVAL;
+
+	memcpy(ctx->priv_key_vaddr + ctx->e_len + YCC_PIN_SZ, ptr, value_len);
+	return 0;
+}
+
+static int ycc_rsa_set_crt_param(char *param, size_t half_key_len,
+				 const char *value, size_t value_len)
+{
+	const char *ptr = value;
+	size_t len = value_len;
+
+	if (!len || len > half_key_len)
+		return -EINVAL;
+
+	memcpy(param, ptr, len);
+	return 0;
+}
+
+static int ycc_rsa_setkey_crt(struct ycc_pke_ctx *ctx, struct rsa_key *rsa_key)
+{
+	unsigned int half_key_len = ctx->key_len >> 1;
+	u8 *tmp = (u8 *)ctx->priv_key_vaddr;
+	int ret;
+
+	tmp += ctx->rsa_key->e_sz + 16;
+	/* TODO: rsa_key is better to be kept original */
+	ret = ycc_rsa_set_crt_param(tmp, half_key_len, rsa_key->p, rsa_key->p_sz);
+	if (ret)
+		goto err;
+
+	tmp += half_key_len;
+	ret = ycc_rsa_set_crt_param(tmp, half_key_len, rsa_key->q, rsa_key->q_sz);
+	if (ret)
+		goto err;
+
+	tmp += half_key_len;
+	ret = ycc_rsa_set_crt_param(tmp, half_key_len, rsa_key->dp, rsa_key->dp_sz);
+	if (ret)
+		goto err;
+
+	tmp += half_key_len;
+	ret = ycc_rsa_set_crt_param(tmp, half_key_len, rsa_key->dq, rsa_key->dq_sz);
+	if (ret)
+		goto err;
+
+	tmp += half_key_len;
+	ret = ycc_rsa_set_crt_param(tmp, half_key_len, rsa_key->qinv, rsa_key->qinv_sz);
+	if (ret)
+		goto err;
+
+	ctx->crt_mode = true;
+	return 0;
+
+err:
+	ctx->crt_mode = false;
+	return ret;
+}
+
+static void ycc_rsa_clear_ctx(struct ycc_pke_ctx *ctx)
+{
+	struct device *dev = YCC_DEV(ctx);
+	size_t size;
+
+	if (ctx->pub_key_vaddr) {
+		size = ALIGN(ctx->rsa_key->e_sz + ctx->key_len, YCC_CMD_DATA_ALIGN_SZ);
+		dma_free_coherent(dev, size, ctx->pub_key_vaddr, ctx->pub_key_paddr);
+		ctx->pub_key_vaddr = NULL;
+	}
+
+	if (ctx->priv_key_vaddr) {
+		size = ALIGN(ctx->rsa_key->e_sz + YCC_PIN_SZ + ctx->rsa_key->d_sz +
+			     ctx->key_len, YCC_CMD_DATA_ALIGN_SZ);
+		memzero_explicit(ctx->priv_key_vaddr, size);
+		dma_free_coherent(dev, size, ctx->priv_key_vaddr, ctx->priv_key_paddr);
+		ctx->priv_key_vaddr = NULL;
+	}
+
+	if (ctx->rsa_key) {
+		memzero_explicit(ctx->rsa_key, sizeof(struct rsa_key));
+		kfree(ctx->rsa_key);
+		ctx->rsa_key = NULL;
+	}
+
+	ctx->key_len = 0;
+	ctx->e_len = 0;
+	ctx->crt_mode = false;
+}
+
+static void ycc_rsa_drop_leading_zeros(struct rsa_key *rsa_key)
+{
+	__ycc_rsa_drop_leading_zeros(&rsa_key->n, &rsa_key->n_sz);
+	__ycc_rsa_drop_leading_zeros(&rsa_key->e, &rsa_key->e_sz);
+	__ycc_rsa_drop_leading_zeros(&rsa_key->d, &rsa_key->d_sz);
+	__ycc_rsa_drop_leading_zeros(&rsa_key->p, &rsa_key->p_sz);
+	__ycc_rsa_drop_leading_zeros(&rsa_key->q, &rsa_key->q_sz);
+	__ycc_rsa_drop_leading_zeros(&rsa_key->dp, &rsa_key->dp_sz);
+	__ycc_rsa_drop_leading_zeros(&rsa_key->dq, &rsa_key->dq_sz);
+	__ycc_rsa_drop_leading_zeros(&rsa_key->qinv, &rsa_key->qinv_sz);
+}
+
+static int ycc_rsa_alloc_key(struct ycc_pke_ctx *ctx, bool priv)
+{
+	struct device *dev = YCC_DEV(ctx);
+	struct rsa_key *rsa_key = ctx->rsa_key;
+	unsigned int half_key_len;
+	size_t size;
+	int ret;
+
+	ycc_rsa_drop_leading_zeros(rsa_key);
+	ctx->key_len = rsa_key->n_sz;
+
+	ret = ycc_rsa_validate_n(ctx->key_len);
+	if (ret) {
+		pr_err("Invalid n size:%d bits\n", ctx->key_len << 3);
+		goto out;
+	}
+
+	ret = -ENOMEM;
+	if (priv) {
+		if (!(rsa_key->p_sz + rsa_key->q_sz + rsa_key->dp_sz +
+		      rsa_key->dq_sz + rsa_key->qinv_sz)) {
+			size = ALIGN(rsa_key->e_sz + YCC_PIN_SZ + rsa_key->d_sz +
+				     ctx->key_len, YCC_CMD_DATA_ALIGN_SZ);
+		} else {
+			half_key_len = ctx->key_len >> 1;
+			size = ALIGN(rsa_key->e_sz + YCC_PIN_SZ + half_key_len *
+				     YCC_RSA_CRT_PARAMS, YCC_CMD_DATA_ALIGN_SZ);
+			ctx->crt_mode = true;
+		}
+		ctx->priv_key_vaddr = dma_alloc_coherent(dev, size,
+							 &ctx->priv_key_paddr,
+							 GFP_KERNEL);
+		if (!ctx->priv_key_vaddr)
+			goto out;
+		memset(ctx->priv_key_vaddr, 0, size);
+	}
+
+	if (!ctx->pub_key_vaddr) {
+		size = ALIGN(ctx->key_len + rsa_key->e_sz, YCC_CMD_DATA_ALIGN_SZ);
+		ctx->pub_key_vaddr = dma_alloc_coherent(dev, size,
+							&ctx->pub_key_paddr,
+							GFP_KERNEL);
+		if (!ctx->pub_key_vaddr)
+			goto out;
+		memset(ctx->pub_key_vaddr, 0, size);
+	}
+
+	ret = ycc_rsa_set_e(ctx, rsa_key->e, rsa_key->e_sz, priv);
+	if (ret) {
+		pr_err("Failed to set e for rsa %s key\n", priv ? "private" : "public");
+		goto out;
+	}
+
+	ret = ycc_rsa_set_n(ctx, rsa_key->n, rsa_key->n_sz, priv);
+	if (ret) {
+		pr_err("Failed to set n for rsa private key\n");
+		goto out;
+	}
+
+	if (priv) {
+		if (ctx->crt_mode) {
+			ret = ycc_rsa_setkey_crt(ctx, rsa_key);
+			if (ret) {
+				pr_err("Failed to set private key for rsa crt key\n");
+				goto out;
+			}
+		} else {
+			ret = ycc_rsa_set_d(ctx, rsa_key->d, rsa_key->d_sz);
+			if (ret) {
+				pr_err("Failed to set d for rsa private key\n");
+				goto out;
+			}
+		}
+	}
+
+	return 0;
+
+out:
+	ycc_rsa_clear_ctx(ctx);
+	return ret;
+}
+
+static int ycc_rsa_setkey(struct crypto_akcipher *tfm, const void *key,
+			  unsigned int keylen, bool priv)
+{
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct rsa_key *rsa_key;
+	int ret;
+
+	if (priv)
+		ret = crypto_akcipher_set_priv_key(ctx->soft_tfm, key, keylen);
+	else
+		ret = crypto_akcipher_set_pub_key(ctx->soft_tfm, key, keylen);
+	if (ret)
+		return ret;
+
+	ycc_rsa_clear_ctx(ctx);
+
+	rsa_key = kzalloc(sizeof(struct rsa_key), GFP_KERNEL);
+	if (!rsa_key)
+		return -ENOMEM;
+
+	if (priv)
+		ret = rsa_parse_priv_key(rsa_key, key, keylen);
+	else if (!ctx->pub_key_vaddr)
+		ret = rsa_parse_pub_key(rsa_key, key, keylen);
+	if (ret) {
+		pr_err("Failed to parse %s key\n", priv ? "private" : "public");
+		kfree(rsa_key);
+		return ret;
+	}
+
+	ctx->rsa_key = rsa_key;
+	return ycc_rsa_alloc_key(ctx, priv);
+}
+
+static int ycc_rsa_setpubkey(struct crypto_akcipher *tfm, const void *key,
+			     unsigned int keylen)
+{
+	return ycc_rsa_setkey(tfm, key, keylen, false);
+}
+
+static int ycc_rsa_setprivkey(struct crypto_akcipher *tfm, const void *key,
+			      unsigned int keylen)
+{
+	return ycc_rsa_setkey(tfm, key, keylen, true);
+}
+
+static unsigned int ycc_rsa_max_size(struct crypto_akcipher *tfm)
+{
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+
+	/*
+	 * 512 and 1536 bits key size are not supported by YCC,
+	 * we use soft tfm instead
+	 */
+	if (ctx->key_len == YCC_RSA_KEY_SZ_512 ||
+	    ctx->key_len == YCC_RSA_KEY_SZ_1536)
+		return crypto_akcipher_maxsize(ctx->soft_tfm);
+
+	return ctx->rsa_key ? ctx->key_len : 0;
+}
+
+static int ycc_rsa_init(struct crypto_akcipher *tfm)
+{
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ycc_ring *ring;
+
+	ctx->soft_tfm = crypto_alloc_akcipher("rsa-generic", 0, 0);
+	if (IS_ERR(ctx->soft_tfm)) {
+		pr_err("Can not alloc_akcipher!\n");
+		return PTR_ERR(ctx->soft_tfm);
+	}
+
+	/* Reserve enough space if soft request requires additional space */
+	akcipher_set_reqsize(tfm, sizeof(struct ycc_pke_req) +
+			     crypto_akcipher_alg(ctx->soft_tfm)->reqsize);
+
+	ring = ycc_crypto_get_ring();
+	if (!ring) {
+		crypto_free_akcipher(ctx->soft_tfm);
+		return -EINVAL;
+	}
+
+	ctx->ring = ring;
+	ctx->key_len = 0;
+	return 0;
+}
+
+static void ycc_rsa_exit(struct crypto_akcipher *tfm)
+{
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+
+	if (ctx->ring)
+		ycc_crypto_free_ring(ctx->ring);
+
+	ycc_rsa_clear_ctx(ctx);
+	crypto_free_akcipher(ctx->soft_tfm);
+}
+
+static struct akcipher_alg ycc_rsa = {
+	.base = {
+		.cra_name = "rsa",
+		.cra_driver_name = "rsa-ycc",
+		.cra_priority = 1000,
+		.cra_module = THIS_MODULE,
+		.cra_ctxsize = sizeof(struct ycc_pke_ctx),
+	},
+	.sign = ycc_rsa_sign,
+	.verify = ycc_rsa_verify,
+	.encrypt = ycc_rsa_encrypt,
+	.decrypt = ycc_rsa_decrypt,
+	.set_pub_key = ycc_rsa_setpubkey,
+	.set_priv_key = ycc_rsa_setprivkey,
+	.max_size = ycc_rsa_max_size,
+	.init = ycc_rsa_init,
+	.exit = ycc_rsa_exit,
+};
+
+int ycc_pke_register(void)
+{
+	return crypto_register_akcipher(&ycc_rsa);
+}
+
+void ycc_pke_unregister(void)
+{
+	crypto_unregister_akcipher(&ycc_rsa);
+}
diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h
index 1bb301b..67c7f0b 100644
--- a/drivers/crypto/ycc/ycc_ring.h
+++ b/drivers/crypto/ycc/ycc_ring.h
@@ -100,9 +100,32 @@ struct ycc_aead_cmd {
 	u8 taglen; /* authenc size */
 } __packed;
 
+struct ycc_rsa_enc_cmd {
+	u8 cmd_id;
+	u64 sptr:48;
+	u16 key_id;
+	u64 keyptr:48; /* public key e+n Bytes */
+	u16 elen;      /* bits not byte */
+	u16 nlen;
+	u64 dptr:48;
+} __packed;
+
+struct ycc_rsa_dec_cmd {
+	u8 cmd_id;
+	u64 sptr:48;
+	u16 key_id;
+	u16 kek_id;
+	u64 keyptr:48; /* private key e + pin + d + n */
+	u16 elen;      /* bits not byte */
+	u16 nlen;
+	u64 dptr:48;
+} __packed;
+
 union ycc_real_cmd {
 	struct ycc_skcipher_cmd ske_cmd;
 	struct ycc_aead_cmd aead_cmd;
+	struct ycc_rsa_enc_cmd rsa_enc_cmd;
+	struct ycc_rsa_dec_cmd rsa_dec_cmd;
 	u8 padding[32];
 };

From patchwork Thu Nov 3 07:40:42 2022
X-Patchwork-Submitter: guanjun
X-Patchwork-Id: 14715
[2620:137:e000::1:20]) by mx.google.com with ESMTP id e24-20020a63d958000000b00429f2cb4a47si215670pgj.491.2022.11.03.00.47.33; Thu, 03 Nov 2022 00:47:47 -0700 (PDT) Received-SPF: pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) client-ip=2620:137:e000::1:20; Authentication-Results: mx.google.com; spf=pass (google.com: domain of linux-kernel-owner@vger.kernel.org designates 2620:137:e000::1:20 as permitted sender) smtp.mailfrom=linux-kernel-owner@vger.kernel.org; dmarc=fail (p=NONE sp=NONE dis=NONE) header.from=alibaba.com Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231337AbiKCHmW (ORCPT + 99 others); Thu, 3 Nov 2022 03:42:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:38274 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231374AbiKCHl2 (ORCPT ); Thu, 3 Nov 2022 03:41:28 -0400 Received: from out30-133.freemail.mail.aliyun.com (out30-133.freemail.mail.aliyun.com [115.124.30.133]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E40D15F45; Thu, 3 Nov 2022 00:41:00 -0700 (PDT) X-Alimail-AntiSpam: AC=PASS;BC=-1|-1;BR=01201311R831e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=ay29a033018046060;MF=guanjun@linux.alibaba.com;NM=1;PH=DS;RN=8;SR=0;TI=SMTPD_---0VTrTeZv_1667461256; Received: from localhost(mailfrom:guanjun@linux.alibaba.com fp:SMTPD_---0VTrTeZv_1667461256) by smtp.aliyun-inc.com; Thu, 03 Nov 2022 15:40:57 +0800 From: 'Guanjun' To: herbert@gondor.apana.org.au, elliott@hpe.com Cc: zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com, guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com Subject: [PATCH v3 RESEND 8/9] crypto/ycc: Add sm2 algorithm support Date: Thu, 3 Nov 2022 15:40:42 +0800 Message-Id: <1667461243-48652-9-git-send-email-guanjun@linux.alibaba.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: 
<1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>
References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>

From: Xuchun Shang <xuchun.shang@linux.alibaba.com>

Only signature verification through SM2 is supported at present.

Signed-off-by: Xuchun Shang <xuchun.shang@linux.alibaba.com>
---
 drivers/crypto/ycc/Makefile            |   4 +
 drivers/crypto/ycc/sm2signature_asn1.c |  38 +++++
 drivers/crypto/ycc/sm2signature_asn1.h |  13 ++
 drivers/crypto/ycc/ycc_algs.h          |   2 +
 drivers/crypto/ycc/ycc_pke.c           | 252 ++++++++++++++++++++++++++++++++-
 drivers/crypto/ycc/ycc_ring.h          |   8 ++
 6 files changed, 316 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/ycc/sm2signature_asn1.c
 create mode 100644 drivers/crypto/ycc/sm2signature_asn1.h

diff --git a/drivers/crypto/ycc/Makefile b/drivers/crypto/ycc/Makefile
index d1f22a9..fb42eec 100644
--- a/drivers/crypto/ycc/Makefile
+++ b/drivers/crypto/ycc/Makefile
@@ -1,3 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_CRYPTO_DEV_YCC) += ycc.o
 ycc-objs := ycc_drv.o ycc_isr.o ycc_ring.o ycc_ske.o ycc_aead.o ycc_pke.o
+
+ifndef CONFIG_CRYPTO_SM2
+ycc-objs += sm2signature_asn1.o
+endif
diff --git a/drivers/crypto/ycc/sm2signature_asn1.c b/drivers/crypto/ycc/sm2signature_asn1.c
new file mode 100644
index 00000000..1fd15c1
--- /dev/null
+++ b/drivers/crypto/ycc/sm2signature_asn1.c
@@ -0,0 +1,38 @@
+/*
+ * Automatically generated by asn1_compiler.  Do not edit
+ *
+ * ASN.1 parser for sm2signature
+ */
+#include <linux/asn1_ber_bytecode.h>
+#include "sm2signature_asn1.h"
+
+enum sm2signature_actions {
+	ACT_sm2_get_signature_r = 0,
+	ACT_sm2_get_signature_s = 1,
+	NR__sm2signature_actions = 2
+};
+
+static const asn1_action_t sm2signature_action_table[NR__sm2signature_actions] = {
+	[0] = sm2_get_signature_r,
+	[1] = sm2_get_signature_s,
+};
+
+static const unsigned char sm2signature_machine[] = {
+	// Sm2Signature
+	[0] = ASN1_OP_MATCH,
+	[1] = _tag(UNIV, CONS, SEQ),
+	[2] = ASN1_OP_MATCH_ACT,		// sig_r
+	[3] = _tag(UNIV, PRIM, INT),
+	[4] = _action(ACT_sm2_get_signature_r),
+	[5] = ASN1_OP_MATCH_ACT,		// sig_s
+	[6] = _tag(UNIV, PRIM, INT),
+	[7] = _action(ACT_sm2_get_signature_s),
+	[8] = ASN1_OP_END_SEQ,
+	[9] = ASN1_OP_COMPLETE,
+};
+
+const struct asn1_decoder sm2signature_decoder = {
+	.machine = sm2signature_machine,
+	.machlen = sizeof(sm2signature_machine),
+	.actions = sm2signature_action_table,
+};
diff --git a/drivers/crypto/ycc/sm2signature_asn1.h b/drivers/crypto/ycc/sm2signature_asn1.h
new file mode 100644
index 00000000..192c9e1
--- /dev/null
+++ b/drivers/crypto/ycc/sm2signature_asn1.h
@@ -0,0 +1,13 @@
+/*
+ * Automatically generated by asn1_compiler.  Do not edit
+ *
+ * ASN.1 parser for sm2signature
+ */
+#include <linux/asn1_decoder.h>
+
+extern const struct asn1_decoder sm2signature_decoder;
+
+extern int sm2_get_signature_r(void *context, size_t hdrlen,
+			       unsigned char tag, const void *value, size_t vlen);
+extern int sm2_get_signature_s(void *context, size_t hdrlen,
+			       unsigned char tag, const void *value, size_t vlen);
diff --git a/drivers/crypto/ycc/ycc_algs.h b/drivers/crypto/ycc/ycc_algs.h
index 6a13230a..26323a8 100644
--- a/drivers/crypto/ycc/ycc_algs.h
+++ b/drivers/crypto/ycc/ycc_algs.h
@@ -77,6 +77,8 @@ enum ycc_cmd_id {
 	YCC_CMD_CCM_ENC,
 	YCC_CMD_CCM_DEC, /* 0x28 */
 
+	YCC_CMD_SM2_VERIFY = 0x47,
+
 	YCC_CMD_RSA_ENC = 0x83,
 	YCC_CMD_RSA_DEC,
 	YCC_CMD_RSA_CRT_DEC,
diff --git a/drivers/crypto/ycc/ycc_pke.c b/drivers/crypto/ycc/ycc_pke.c
index 3debd80..ad72d12 100644
--- a/drivers/crypto/ycc/ycc_pke.c
+++ b/drivers/crypto/ycc/ycc_pke.c
@@ -8,6 +8,8 @@
 #include
 #include
 #include
+
+#include "sm2signature_asn1.h"
 #include "ycc_algs.h"
 
 static int ycc_rsa_done_callback(void *ptr, u16 state)
@@ -666,6 +668,224 @@ static void ycc_rsa_exit(struct crypto_akcipher *tfm)
 	crypto_free_akcipher(ctx->soft_tfm);
 }
 
+#define MPI_NBYTES(m)	((mpi_get_nbits(m) + 7) / 8)
+
+static int ycc_sm2_done_callback(void *ptr, u16 state)
+{
+	struct ycc_pke_req *sm2_req = (struct ycc_pke_req *)ptr;
+	struct ycc_pke_ctx *ctx = sm2_req->ctx;
+	struct akcipher_request *req = sm2_req->req;
+	struct device *dev = YCC_DEV(ctx);
+
+	dma_free_coherent(dev, 128, sm2_req->src_vaddr, sm2_req->src_paddr);
+
+	if (req->base.complete)
+		req->base.complete(&req->base, state == CMD_SUCCESS ? 0 : -EBADMSG);
+	return 0;
+}
+
+struct sm2_signature_ctx {
+	MPI sig_r;
+	MPI sig_s;
+};
+
+#ifndef CONFIG_CRYPTO_SM2
+int sm2_get_signature_r(void *context, size_t hdrlen, unsigned char tag,
+			const void *value, size_t vlen)
+{
+	struct sm2_signature_ctx *sig = context;
+
+	if (!value || !vlen)
+		return -EINVAL;
+
+	sig->sig_r = mpi_read_raw_data(value, vlen);
+	if (!sig->sig_r)
+		return -ENOMEM;
+
+	return 0;
+}
+
+int sm2_get_signature_s(void *context, size_t hdrlen, unsigned char tag,
+			const void *value, size_t vlen)
+{
+	struct sm2_signature_ctx *sig = context;
+
+	if (!value || !vlen)
+		return -EINVAL;
+
+	sig->sig_s = mpi_read_raw_data(value, vlen);
+	if (!sig->sig_s)
+		return -ENOMEM;
+
+	return 0;
+}
+#endif
+
+static int ycc_sm2_verify(struct akcipher_request *req)
+{
+	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
+	struct ycc_pke_req *sm2_req = akcipher_request_ctx(req);
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ycc_sm2_verify_cmd *sm2_verify_cmd;
+	struct ycc_dev *ydev = ctx->ring->ydev;
+	struct ycc_ring *ring = ctx->ring;
+	struct device *dev = YCC_DEV(ctx);
+	struct sm2_signature_ctx sig;
+	struct ycc_flags *aflags;
+	u8 buffer[80] = {0};
+	int ret;
+
+	/* Do software fallback */
+	if (!test_bit(YDEV_STATUS_READY, &ydev->status) || ctx->key_len) {
+		akcipher_request_set_tfm(req, ctx->soft_tfm);
+		ret = crypto_akcipher_verify(req);
+		akcipher_request_set_tfm(req, tfm);
+		return ret;
+	}
+
+	if (req->src_len > 72 || req->src_len < 70 || req->dst_len != 32)
+		return -EINVAL;
+
+	sm2_req->ctx = ctx;
+	sm2_req->req = req;
+
+	sg_copy_buffer(req->src, sg_nents(req->src), buffer, req->src_len, 0, 1);
+	sig.sig_r = NULL;
+	sig.sig_s = NULL;
+	ret = asn1_ber_decoder(&sm2signature_decoder, &sig, buffer, req->src_len);
+	if (ret)
+		return -EINVAL;
+
+	ret = mpi_print(GCRYMPI_FMT_USG, buffer, MPI_NBYTES(sig.sig_r),
+			(size_t *)NULL, sig.sig_r);
+	if (ret)
+		return -EINVAL;
+
+	ret = mpi_print(GCRYMPI_FMT_USG, buffer + MPI_NBYTES(sig.sig_r),
+			MPI_NBYTES(sig.sig_s), (size_t *)NULL, sig.sig_s);
+	if (ret)
+		return -EINVAL;
+
+	ret = -ENOMEM;
+	/* Alloc dma for src, as verify has no output */
+	sm2_req->src_vaddr = dma_alloc_coherent(dev, 128, &sm2_req->src_paddr,
+						GFP_ATOMIC);
+	if (!sm2_req->src_vaddr)
+		goto out;
+
+	sg_copy_buffer(req->src, sg_nents(req->src), sm2_req->src_vaddr,
+		       req->dst_len, req->src_len, 1);
+	memcpy(sm2_req->src_vaddr + 32, buffer, 64);
+
+	sm2_req->dst_vaddr = NULL;
+
+	aflags = kzalloc(sizeof(struct ycc_flags), GFP_ATOMIC);
+	if (!aflags)
+		goto free_src;
+
+	aflags->ptr = (void *)sm2_req;
+	aflags->ycc_done_callback = ycc_sm2_done_callback;
+
+	memset(&sm2_req->desc, 0, sizeof(sm2_req->desc));
+	sm2_req->desc.private_ptr = (u64)(void *)aflags;
+
+	sm2_verify_cmd = &sm2_req->desc.cmd.sm2_verify_cmd;
+	sm2_verify_cmd->cmd_id = YCC_CMD_SM2_VERIFY;
+	sm2_verify_cmd->sptr = sm2_req->src_paddr;
+	sm2_verify_cmd->keyptr = ctx->pub_key_paddr;
+
+	ret = ycc_enqueue(ring, (u8 *)&sm2_req->desc);
+	if (!ret)
+		return -EINPROGRESS;
+
+	kfree(aflags);
+free_src:
+	dma_free_coherent(dev, 128, sm2_req->src_vaddr, sm2_req->src_paddr);
+out:
+	return ret;
+}
+
+static unsigned int ycc_sm2_max_size(struct crypto_akcipher *tfm)
+{
+	return PAGE_SIZE;
+}
+
+static int ycc_sm2_setpubkey(struct crypto_akcipher *tfm, const void *key,
+			     unsigned int keylen)
+{
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct device *dev = YCC_DEV(ctx);
+	int ret;
+
+	ret = crypto_akcipher_set_pub_key(ctx->soft_tfm, key, keylen);
+	if (ret)
+		return ret;
+
+	/* Always alloc 64 bytes for pub key */
+	ctx->pub_key_vaddr = dma_alloc_coherent(dev, 64, &ctx->pub_key_paddr,
+						GFP_KERNEL);
+	if (!ctx->pub_key_vaddr)
+		return -ENOMEM;
+
+	/*
+	 * Uncompressed key 65 bytes with 0x04 flag
+	 * Compressed key 33 bytes with 0x02 or 0x03 flag
+	 */
+	switch (keylen) {
+	case 65:
+		if (*(u8 *)key != 0x04)
+			return -EINVAL;
+		memcpy(ctx->pub_key_vaddr, key + 1, 64);
+		break;
+	case 64:
+		memcpy(ctx->pub_key_vaddr, key, 64);
+		break;
+	case 33:
+		return 0; /* TODO: use sw temporary */
+	default:
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int ycc_sm2_init(struct crypto_akcipher *tfm)
+{
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct ycc_ring *ring;
+
+	ctx->soft_tfm = crypto_alloc_akcipher("sm2-generic", 0, 0);
+	if (IS_ERR(ctx->soft_tfm))
+		return PTR_ERR(ctx->soft_tfm);
+
+	/* Reserve enough space if soft request requires additional space */
+	akcipher_set_reqsize(tfm, sizeof(struct ycc_pke_req) +
+			     crypto_akcipher_alg(ctx->soft_tfm)->reqsize);
+
+	ring = ycc_crypto_get_ring();
+	if (!ring) {
+		crypto_free_akcipher(ctx->soft_tfm);
+		return -ENODEV;
+	}
+
+	ctx->ring = ring;
+	return 0;
+}
+
+static void ycc_sm2_exit(struct crypto_akcipher *tfm)
+{
+	struct ycc_pke_ctx *ctx = akcipher_tfm_ctx(tfm);
+	struct device *dev = YCC_DEV(ctx);
+
+	if (ctx->ring)
+		ycc_crypto_free_ring(ctx->ring);
+
+	if (ctx->pub_key_vaddr)
+		dma_free_coherent(dev, 64, ctx->pub_key_vaddr, ctx->pub_key_paddr);
+
+	crypto_free_akcipher(ctx->soft_tfm);
+}
+
 static struct akcipher_alg ycc_rsa = {
 	.base = {
 		.cra_name = "rsa",
@@ -685,12 +905,42 @@ static void ycc_rsa_exit(struct crypto_akcipher *tfm)
 	.exit = ycc_rsa_exit,
 };
 
+static struct akcipher_alg ycc_sm2 = {
+	.base = {
+		.cra_name = "sm2",
+		.cra_driver_name = "sm2-ycc",
+		.cra_priority = 1000,
+		.cra_module = THIS_MODULE,
+		.cra_ctxsize = sizeof(struct ycc_pke_ctx),
+	},
+	.verify = ycc_sm2_verify,
+	.set_pub_key = ycc_sm2_setpubkey,
+	.max_size = ycc_sm2_max_size,
+	.init = ycc_sm2_init,
+	.exit = ycc_sm2_exit,
+};
+
 int ycc_pke_register(void)
 {
-	return crypto_register_akcipher(&ycc_rsa);
+	int ret;
+
+	ret = crypto_register_akcipher(&ycc_rsa);
+	if (ret) {
+		pr_err("Failed to register rsa\n");
+		return ret;
+	}
+
+	ret = crypto_register_akcipher(&ycc_sm2);
+	if (ret) {
+		crypto_unregister_akcipher(&ycc_rsa);
+		pr_err("Failed to register sm2\n");
+	}
+
+	return ret;
 }
 
 void ycc_pke_unregister(void)
 {
 	crypto_unregister_akcipher(&ycc_rsa);
+	crypto_unregister_akcipher(&ycc_sm2);
 }
diff --git a/drivers/crypto/ycc/ycc_ring.h b/drivers/crypto/ycc/ycc_ring.h
index 67c7f0b..3ad1a4d 100644
--- a/drivers/crypto/ycc/ycc_ring.h
+++ b/drivers/crypto/ycc/ycc_ring.h
@@ -121,11 +121,19 @@ struct ycc_rsa_dec_cmd {
 	u64 dptr:48;
 } __packed;
 
+struct ycc_sm2_verify_cmd {
+	u8 cmd_id;
+	u64 sptr:48;
+	u16 key_id;
+	u64 keyptr:48;
+} __packed;
+
 union ycc_real_cmd {
 	struct ycc_skcipher_cmd ske_cmd;
 	struct ycc_aead_cmd aead_cmd;
 	struct ycc_rsa_enc_cmd rsa_enc_cmd;
 	struct ycc_rsa_dec_cmd rsa_dec_cmd;
+	struct ycc_sm2_verify_cmd sm2_verify_cmd;
 	u8 padding[32];
 };

From patchwork Thu Nov 3 07:40:43 2022
X-Patchwork-Submitter: guanjun
X-Patchwork-Id: 14717
From: 'Guanjun' <guanjun@linux.alibaba.com>
To: herbert@gondor.apana.org.au, elliott@hpe.com
Cc: zelin.deng@linux.alibaba.com, artie.ding@linux.alibaba.com,
    guanjun@linux.alibaba.com, linux-crypto@vger.kernel.org,
    linux-kernel@vger.kernel.org, xuchun.shang@linux.alibaba.com
Subject: [PATCH v3 RESEND 9/9] MAINTAINERS: Add Yitian Cryptography Complex (YCC) driver maintainer entry
Date: Thu, 3 Nov 2022 15:40:43 +0800
Message-Id: <1667461243-48652-10-git-send-email-guanjun@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>
References: <1667461243-48652-1-git-send-email-guanjun@linux.alibaba.com>

From: Zelin Deng <zelin.deng@linux.alibaba.com>

I will continue to add new features, optimize performance, and handle
issues for the Yitian Cryptography Complex (YCC) driver. Guanjun and
Xuchun Shang focus on support for the various algorithms, so add them
as co-maintainers.
Signed-off-by: Zelin Deng <zelin.deng@linux.alibaba.com>
Acked-by: Guanjun <guanjun@linux.alibaba.com>
Acked-by: Xuchun Shang <xuchun.shang@linux.alibaba.com>
---
 MAINTAINERS | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9774e7b..5975c1b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -953,6 +953,14 @@ S:	Supported
 F:	drivers/crypto/ccp/sev*
 F:	include/uapi/linux/psp-sev.h
 
+ALIBABA YITIAN CRYPTOGRAPHY COMPLEX (YCC) ACCELERATOR DRIVER
+M:	Zelin Deng <zelin.deng@linux.alibaba.com>
+M:	Guanjun <guanjun@linux.alibaba.com>
+M:	Xuchun Shang <xuchun.shang@linux.alibaba.com>
+L:	ali-accel@list.alibaba-inc.com
+S:	Supported
+F:	drivers/crypto/ycc/
+
 AMD DISPLAY CORE
 M:	Harry Wentland
 M:	Leo Li
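[Editor's note] Patch 8/9 above decodes the incoming signature with an ASN.1 decoder whose machine matches Sm2Signature ::= SEQUENCE { sig_r INTEGER, sig_s INTEGER }, and ycc_sm2_verify() bounds-checks req->src_len to 70-72 bytes. The DER layout those checks assume can be modeled with a short, self-contained Python sketch; this is an illustration only, not part of the patch, and the helper names are hypothetical:

```python
def der_len(n: int) -> bytes:
    # Short-form DER length octet; sufficient for SM2 signatures (body <= 127 bytes).
    assert n <= 127
    return bytes([n])

def der_int(value: bytes) -> bytes:
    # DER INTEGER (tag 0x02): prepend 0x00 when the top bit is set so the
    # value is read as non-negative.
    if value[0] & 0x80:
        value = b"\x00" + value
    return b"\x02" + der_len(len(value)) + value

def sm2_signature_der(r: bytes, s: bytes) -> bytes:
    # SEQUENCE (tag 0x30) wrapping the two 256-bit integers r and s.
    body = der_int(r) + der_int(s)
    return b"\x30" + der_len(len(body)) + body

# A 32-byte r with its high bit set gains one pad byte -> 71 bytes in total,
# inside the 70..72-byte window checked by ycc_sm2_verify().
sig = sm2_signature_der(b"\x80" + b"\x11" * 31, b"\x22" * 32)
```

Each 32-byte integer contributes 34 or 35 encoded bytes depending on whether it needs the 0x00 pad, which is why the driver accepts a 70-72 byte range rather than one fixed length.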