From patchwork Fri Apr 14 08:06:56 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 83284
From: Arnd Bergmann
To: Linus Walleij, Imre Kaloz, Krzysztof Halasa, Corentin Labbe,
	Herbert Xu, "David S. Miller", Tom Zanussi
Cc: Arnd Bergmann, linux-crypto@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH] ixp4xx_crypto: fix building with 64-bit dma_addr_t
Date: Fri, 14 Apr 2023 10:06:56 +0200
Message-Id: <20230414080709.284005-1-arnd@kernel.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0

From: Arnd Bergmann

The crypt_ctl structure must be exactly 64 bytes long to work correctly,
and it has to be a power-of-two size to allow turning the 64-bit division
in crypt_phys2virt() into a shift operation, avoiding the link failure:

ERROR: modpost: "__aeabi_uldivmod" [drivers/crypto/intel/ixp4xx/ixp4xx_crypto.ko] undefined!
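As a rough sketch (not the driver's exact lines), the lookup that depends
on the 64-byte size works along these lines, with crypt_virt and
crypt_phys being the CPU and DMA base addresses of the coherent
descriptor ring:

static struct crypt_ctl *crypt_phys2virt(dma_addr_t phys)
{
	/* sizeof(struct crypt_ctl) == 64 is a power of two, so this
	 * division folds into a shift.  With dma_addr_t members on a
	 * 64-bit-dma configuration the structure grows to a size that
	 * is not a power of two, the division is no longer folded, and
	 * a 32-bit ARM build needs __aeabi_uldivmod, which the kernel
	 * does not provide.
	 */
	return crypt_virt + (phys - crypt_phys) / sizeof(struct crypt_ctl);
}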
The failure now shows up because the driver is available for compile
testing after the move, and a previous fix turned the more descriptive
BUILD_BUG_ON() into a link error.

Change the variably-sized dma_addr_t into the expected 'u32' type that
is needed for the hardware, and reinstate the size check for all 32-bit
architectures to simplify debugging if it hits again.

Fixes: 1bc7fdbf2677 ("crypto: ixp4xx - Move driver to drivers/crypto/intel/ixp4xx")
Signed-off-by: Arnd Bergmann
---
 drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c b/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
index 5d640f13ad1c..ed15379a9818 100644
--- a/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
+++ b/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
@@ -118,9 +118,9 @@ struct crypt_ctl {
 	u8 mode;		/* NPE_OP_* operation mode */
 #endif
 	u8 iv[MAX_IVLEN];	/* IV for CBC mode or CTR IV for CTR mode */
-	dma_addr_t icv_rev_aes;	/* icv or rev aes */
-	dma_addr_t src_buf;
-	dma_addr_t dst_buf;
+	u32 icv_rev_aes;	/* icv or rev aes */
+	u32 src_buf;
+	u32 dst_buf;
 #ifdef __ARMEB__
 	u16 auth_offs;		/* Authentication start offset */
 	u16 auth_len;		/* Authentication data length */
@@ -263,7 +263,8 @@ static int setup_crypt_desc(void)
 {
 	struct device *dev = &pdev->dev;
 
-	BUILD_BUG_ON(!IS_ENABLED(CONFIG_COMPILE_TEST) &&
+	BUILD_BUG_ON(!(IS_ENABLED(CONFIG_COMPILE_TEST) &&
+		       IS_ENABLED(CONFIG_64BIT)) &&
 		     sizeof(struct crypt_ctl) != 64);
 	crypt_virt = dma_alloc_coherent(dev,
 					NPE_QLEN * sizeof(struct crypt_ctl),
@@ -1170,10 +1171,11 @@ static int aead_perform(struct aead_request *req, int encrypt,
 	}
 
 	if (unlikely(lastlen < authsize)) {
+		dma_addr_t dma;
 		/* The 12 hmac bytes are scattered,
 		 * we need to copy them into a safe buffer */
-		req_ctx->hmac_virt = dma_pool_alloc(buffer_pool, flags,
-				&crypt->icv_rev_aes);
+		req_ctx->hmac_virt = dma_pool_alloc(buffer_pool, flags, &dma);
+		crypt->icv_rev_aes = dma;
 		if (unlikely(!req_ctx->hmac_virt))
 			goto free_buf_dst;
 		if (!encrypt) {
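A note on the last hunk above: dma_pool_alloc() returns the DMA handle
through a dma_addr_t pointer, so once icv_rev_aes is a plain u32 the
driver can no longer pass &crypt->icv_rev_aes directly and has to go
through a local handle instead.  A minimal illustration of that pattern
(struct example_desc and alloc_into_desc() are made-up names, not part
of the driver):

#include <linux/types.h>
#include <linux/dmapool.h>

struct example_desc {
	u32 buf;			/* hardware-visible 32-bit DMA address */
};

static void *alloc_into_desc(struct dma_pool *pool, gfp_t flags,
			     struct example_desc *desc)
{
	dma_addr_t dma;			/* full-width handle that the API expects */
	void *virt = dma_pool_alloc(pool, flags, &dma);

	if (virt)
		desc->buf = dma;	/* narrow to the 32-bit descriptor field */
	return virt;
}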