From patchwork Fri May 12 09:22:44 2023
X-Patchwork-Submitter: Junxian Huang
X-Patchwork-Id: 93038
From: Junxian Huang
Subject: [PATCH for-rc 2/3] RDMA/hns: Fix base address table allocation
Date: Fri, 12 May 2023 17:22:44 +0800
Message-ID: <20230512092245.344442-3-huangjunxian6@hisilicon.com>
In-Reply-To: <20230512092245.344442-1-huangjunxian6@hisilicon.com>
References: <20230512092245.344442-1-huangjunxian6@hisilicon.com>
X-Mailer: git-send-email 2.30.0
X-Mailing-List: linux-kernel@vger.kernel.org

From: Chengchang Tang

For hns, the specification of an entry-like resource (e.g. WQE/CQE/EQE)
depends on the BT page size, the buf page size and the hopnum. In user
mode, the buf page size is determined by UMEM, so the actual
specification is controlled by the BT page size and the hopnum alone.

Currently, the BT page size and hopnum are obtained from firmware. This
makes the driver inflexible and introduces unnecessary constraints:
resource allocation fails in many scenarios.

This patch calculates whether the BT page size set by firmware is
sufficient before allocating the BT, and increases the BT page size if
it is insufficient.
Fixes: 1133401412a9 ("RDMA/hns: Optimize base address table config flow for qp buffer")
Signed-off-by: Chengchang Tang
Signed-off-by: Junxian Huang
---
 drivers/infiniband/hw/hns/hns_roce_mr.c | 43 +++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
index 37a5cf62f88b..14376490ac22 100644
--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
+++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
@@ -33,6 +33,7 @@
 #include 
 #include 
+#include 
 #include "hns_roce_device.h"
 #include "hns_roce_cmd.h"
 #include "hns_roce_hem.h"
@@ -909,6 +910,44 @@ static int mtr_init_buf_cfg(struct hns_roce_dev *hr_dev,
 	return page_cnt;
 }
 
+static u64 cal_pages_per_l1ba(unsigned int ba_per_bt, unsigned int hopnum)
+{
+	return int_pow(ba_per_bt, hopnum - 1);
+}
+
+static unsigned int cal_best_bt_pg_sz(struct hns_roce_dev *hr_dev,
+				      struct hns_roce_mtr *mtr,
+				      unsigned int pg_shift)
+{
+	unsigned long cap = hr_dev->caps.page_size_cap;
+	struct hns_roce_buf_region *re;
+	unsigned int pgs_per_l1ba;
+	unsigned int ba_per_bt;
+	unsigned int ba_num;
+	int i;
+
+	for_each_set_bit_from(pg_shift, &cap, sizeof(cap) * BITS_PER_BYTE) {
+		if (!(BIT(pg_shift) & cap))
+			continue;
+
+		ba_per_bt = BIT(pg_shift) / BA_BYTE_LEN;
+		ba_num = 0;
+		for (i = 0; i < mtr->hem_cfg.region_count; i++) {
+			re = &mtr->hem_cfg.region[i];
+			if (re->hopnum == 0)
+				continue;
+
+			pgs_per_l1ba = cal_pages_per_l1ba(ba_per_bt, re->hopnum);
+			ba_num += DIV_ROUND_UP(re->count, pgs_per_l1ba);
+		}
+
+		if (ba_num <= ba_per_bt)
+			return pg_shift;
+	}
+
+	return 0;
+}
+
 static int mtr_alloc_mtt(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 			 unsigned int ba_page_shift)
 {
@@ -917,6 +956,10 @@ static int mtr_alloc_mtt(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
 	hns_roce_hem_list_init(&mtr->hem_list);
 	if (!cfg->is_direct) {
+		ba_page_shift = cal_best_bt_pg_sz(hr_dev, mtr, ba_page_shift);
+		if (!ba_page_shift)
+			return -ERANGE;
+
 		ret = hns_roce_hem_list_request(hr_dev, &mtr->hem_list,
						cfg->region, cfg->region_count,
						ba_page_shift);