From patchwork Thu Jun 22 08:56:36 2023
X-Patchwork-Submitter: Hariprasad Kelam
X-Patchwork-Id: 111539
From: Hariprasad Kelam
Subject: [net-next Patch 1/3] octeontx2-pf: implement transmit scheduler allocation algorithm
Date: Thu, 22 Jun 2023 14:26:36 +0530
Message-ID: <20230622085638.3509-2-hkelam@marvell.com>
In-Reply-To: <20230622085638.3509-1-hkelam@marvell.com>
References: <20230622085638.3509-1-hkelam@marvell.com>

From: Naveen Mamindlapalli

Unlike strict priority, where the number of classes is limited to a
maximum of 8, there is no restriction on the number of DWRR child nodes
unless the count exceeds the maximum number of child nodes supported.
The hardware expects the transmit scheduler indexes of strict priority
child nodes to be mapped 1:1 to their priority. This patch adds a
transmit scheduler allocation algorithm that honors the above
requirement.

Signed-off-by: Naveen Mamindlapalli
Signed-off-by: Hariprasad Kelam
---
 .../net/ethernet/marvell/octeontx2/nic/qos.c  | 129 +++++++++++++++++-
 .../net/ethernet/marvell/octeontx2/nic/qos.h  |   6 +
 2 files changed, 129 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
index d3a76c5ccda8..4b50add9bf1e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
@@ -19,6 +19,7 @@
 #define OTX2_QOS_CLASS_NONE             0
 #define OTX2_QOS_DEFAULT_PRIO           0xF
 #define OTX2_QOS_INVALID_SQ             0xFFFF
+#define OTX2_QOS_INVALID_TXSCHQ_IDX     0xFFFF
 
 static void otx2_qos_update_tx_netdev_queues(struct otx2_nic *pfvf)
 {
@@ -315,9 +316,12 @@ static void otx2_qos_fill_cfg_tl(struct otx2_qos_node *parent,
 
        list_for_each_entry(node, &parent->child_list, list) {
                otx2_qos_fill_cfg_tl(node, cfg);
-               cfg->schq_contig[node->level]++;
                otx2_qos_fill_cfg_schq(node, cfg);
        }
+
+       /* Assign the required number of transmit schedular queues under the given class */
+       cfg->schq_contig[parent->level - 1] += parent->child_dwrr_cnt +
+                                              parent->max_static_prio + 1;
 }
 
 static void otx2_qos_prepare_txschq_cfg(struct otx2_nic *pfvf,
@@ -401,9 +405,13 @@ static int otx2_qos_add_child_node(struct otx2_qos_node *parent,
        struct otx2_qos_node *tmp_node;
        struct list_head *tmp;
 
+       if (node->prio > parent->max_static_prio)
+               parent->max_static_prio = node->prio;
+
        for (tmp = head->next; tmp != head; tmp = tmp->next) {
                tmp_node = list_entry(tmp, struct otx2_qos_node, list);
-               if (tmp_node->prio == node->prio)
+               if (tmp_node->prio == node->prio &&
+                   tmp_node->is_static)
                        return -EEXIST;
                if (tmp_node->prio > node->prio) {
                        list_add_tail(&node->list, tmp);
@@ -476,6 +484,7 @@ otx2_qos_sw_create_leaf_node(struct otx2_nic *pfvf,
        node->rate = otx2_convert_rate(rate);
        node->ceil = otx2_convert_rate(ceil);
        node->prio = prio;
+       node->is_static = true;
 
        __set_bit(qid, pfvf->qos.qos_sq_bmap);
 
@@ -628,6 +637,20 @@ static int otx2_qos_txschq_alloc(struct otx2_nic *pfvf,
        return rc;
 }
 
+static void otx2_qos_free_unused_txschq(struct otx2_nic *pfvf, struct otx2_qos_cfg *cfg)
+{
+       int lvl, idx, schq;
+
+       for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+               for (idx = 0; idx < cfg->schq_contig[lvl]; idx++) {
+                       if (!cfg->schq_used_index[lvl][idx]) {
+                               schq = cfg->schq_contig_list[lvl][idx];
+                               otx2_txschq_free_one(pfvf, lvl, schq);
+                       }
+               }
+       }
+}
+
 static void otx2_qos_txschq_fill_cfg_schq(struct otx2_nic *pfvf,
                                          struct otx2_qos_node *node,
                                          struct otx2_qos_cfg *cfg)
@@ -652,9 +675,10 @@ static void otx2_qos_txschq_fill_cfg_tl(struct otx2_nic *pfvf,
        list_for_each_entry(tmp, &node->child_list, list) {
                otx2_qos_txschq_fill_cfg_tl(pfvf, tmp, cfg);
                cnt = cfg->static_node_pos[tmp->level];
-               tmp->schq = cfg->schq_contig_list[tmp->level][cnt];
+               tmp->schq = cfg->schq_contig_list[tmp->level][tmp->txschq_idx];
+               cfg->schq_used_index[tmp->level][tmp->txschq_idx] = true;
                if (cnt == 0)
-                       node->prio_anchor = tmp->schq;
+                       node->prio_anchor = cfg->schq_contig_list[tmp->level][0];
                cfg->static_node_pos[tmp->level]++;
                otx2_qos_txschq_fill_cfg_schq(pfvf, tmp, cfg);
        }
@@ -667,9 +691,84 @@ static void otx2_qos_txschq_fill_cfg(struct otx2_nic *pfvf,
        mutex_lock(&pfvf->qos.qos_lock);
        otx2_qos_txschq_fill_cfg_tl(pfvf, node, cfg);
        otx2_qos_txschq_fill_cfg_schq(pfvf, node, cfg);
+       otx2_qos_free_unused_txschq(pfvf, cfg);
        mutex_unlock(&pfvf->qos.qos_lock);
 }
 
+static void __otx2_qos_assign_base_idx_tl(struct otx2_nic *pfvf,
+                                         struct otx2_qos_node *tmp,
+                                         unsigned long *child_idx_bmap,
+                                         int child_cnt)
+{
+       int idx;
+
+       if (tmp->txschq_idx != OTX2_QOS_INVALID_TXSCHQ_IDX)
+               return;
+
+       /* assign static nodes 1:1 prio mapping first, then remaining nodes */
+       for (idx = 0; idx < child_cnt; idx++) {
+               if (tmp->is_static && tmp->prio == idx &&
+                   !test_bit(idx, child_idx_bmap)) {
+                       tmp->txschq_idx = idx;
+                       set_bit(idx, child_idx_bmap);
+                       return;
+               } else if (!tmp->is_static && idx >= tmp->prio &&
+                          !test_bit(idx, child_idx_bmap)) {
+                       tmp->txschq_idx = idx;
+                       set_bit(idx, child_idx_bmap);
+                       return;
+               }
+       }
+}
+
+static int otx2_qos_assign_base_idx_tl(struct otx2_nic *pfvf,
+                                      struct otx2_qos_node *node)
+{
+       unsigned long *child_idx_bmap;
+       struct otx2_qos_node *tmp;
+       int child_cnt;
+
+       list_for_each_entry(tmp, &node->child_list, list)
+               tmp->txschq_idx = OTX2_QOS_INVALID_TXSCHQ_IDX;
+
+       /* allocate child index array */
+       child_cnt = node->child_dwrr_cnt + node->max_static_prio + 1;
+       child_idx_bmap = kcalloc(BITS_TO_LONGS(child_cnt), sizeof(unsigned long),
+                                GFP_KERNEL);
+       if (!child_idx_bmap)
+               return -ENOMEM;
+
+       list_for_each_entry(tmp, &node->child_list, list)
+               otx2_qos_assign_base_idx_tl(pfvf, tmp);
+
+       /* assign base index of static priority children first */
+       list_for_each_entry(tmp, &node->child_list, list) {
+               if (!tmp->is_static)
+                       continue;
+               __otx2_qos_assign_base_idx_tl(pfvf, tmp, child_idx_bmap, child_cnt);
+       }
+
+       /* assign base index of dwrr priority children */
+       list_for_each_entry(tmp, &node->child_list, list)
+               __otx2_qos_assign_base_idx_tl(pfvf, tmp, child_idx_bmap, child_cnt);
+
+       kfree(child_idx_bmap);
+
+       return 0;
+}
+
+static int otx2_qos_assign_base_idx(struct otx2_nic *pfvf,
+                                   struct otx2_qos_node *node)
+{
+       int ret = 0;
+
+       mutex_lock(&pfvf->qos.qos_lock);
+       ret = otx2_qos_assign_base_idx_tl(pfvf, node);
+       mutex_unlock(&pfvf->qos.qos_lock);
+
+       return ret;
+}
+
 static int otx2_qos_txschq_push_cfg_schq(struct otx2_nic *pfvf,
                                         struct otx2_qos_node *node,
                                         struct otx2_qos_cfg *cfg)
@@ -761,8 +860,10 @@ static void otx2_qos_free_cfg(struct otx2_nic *pfvf, struct otx2_qos_cfg *cfg)
 
        for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
                for (idx = 0; idx < cfg->schq_contig[lvl]; idx++) {
-                       schq = cfg->schq_contig_list[lvl][idx];
-                       otx2_txschq_free_one(pfvf, lvl, schq);
+                       if (cfg->schq_used_index[lvl][idx]) {
+                               schq = cfg->schq_contig_list[lvl][idx];
+                               otx2_txschq_free_one(pfvf, lvl, schq);
+                       }
                }
        }
 }
@@ -838,6 +939,10 @@ static int otx2_qos_push_txschq_cfg(struct otx2_nic *pfvf,
        if (ret)
                return -ENOSPC;
 
+       ret = otx2_qos_assign_base_idx(pfvf, node);
+       if (ret)
+               return -ENOMEM;
+
        if (!(pfvf->netdev->flags & IFF_UP)) {
                otx2_qos_txschq_fill_cfg(pfvf, node, cfg);
                return 0;
@@ -995,6 +1100,7 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
        if (ret)
                goto out;
 
+       parent->child_static_cnt++;
        set_bit(prio, parent->prio_bmap);
 
        /* read current txschq configuration */
@@ -1067,6 +1173,7 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
 free_old_cfg:
        kfree(old_cfg);
 reset_prio:
+       parent->child_static_cnt--;
        clear_bit(prio, parent->prio_bmap);
 out:
        return ret;
@@ -1105,6 +1212,7 @@ static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
                goto out;
        }
 
+       node->child_static_cnt++;
        set_bit(prio, node->prio_bmap);
 
        /* store the qid to assign to leaf node */
@@ -1178,6 +1286,7 @@ static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
 free_old_cfg:
        kfree(old_cfg);
 reset_prio:
+       node->child_static_cnt--;
        clear_bit(prio, node->prio_bmap);
 out:
        return ret;
@@ -1207,6 +1316,10 @@ static int otx2_qos_leaf_del(struct otx2_nic *pfvf, u16 *classid,
        otx2_qos_destroy_node(pfvf, node);
        pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
 
+       parent->child_static_cnt--;
+       if (!parent->child_static_cnt)
+               parent->max_static_prio = 0;
+
        clear_bit(prio, parent->prio_bmap);
 
        return 0;
@@ -1245,6 +1358,10 @@ static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force
        otx2_qos_destroy_node(pfvf, node);
        pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;
 
+       parent->child_static_cnt--;
+       if (!parent->child_static_cnt)
+               parent->max_static_prio = 0;
+
        clear_bit(prio, parent->prio_bmap);
 
        /* create downstream txschq entries to parent */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
index 19773284be27..0c5d2f79dc15 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
@@ -35,6 +35,7 @@ struct otx2_qos_cfg {
        int dwrr_node_pos[NIX_TXSCH_LVL_CNT];
        u16 schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
        u16 schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+       u16 schq_used_index[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
 };
 
 struct otx2_qos {
@@ -62,7 +63,12 @@ struct otx2_qos_node {
        u16 schq;               /* hw txschq */
        u16 qid;
        u16 prio_anchor;
+       u16 max_static_prio;
+       u16 child_dwrr_cnt;
+       u16 child_static_cnt;
+       u16 txschq_idx;         /* txschq allocation index */
        u8 level;
+       bool is_static;
 };
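
For readers who want to try the allocation policy outside the driver, below is a
minimal stand-alone C sketch of the index assignment described in the commit
message: static (strict priority) children are pinned 1:1 to their priority,
DWRR children take the first free index at or above their priority, and any
leftover contiguous slot is then released (as otx2_qos_free_unused_txschq does
in the patch). The struct demo_node layout, assign_idx() helper, and the sample
class set are illustrative only and are not part of the driver.

/* Stand-alone illustration (not driver code) of the txschq index policy:
 * static children get index == prio, DWRR children get the first free
 * index >= prio. Build with: gcc -Wall demo.c && ./a.out
 */
#include <stdbool.h>
#include <stdio.h>

struct demo_node {
	int prio;		/* priority requested for the class */
	bool is_static;		/* true: strict priority, false: DWRR */
	int idx;		/* assigned schq index, -1 = unassigned */
};

static void assign_idx(struct demo_node *n, bool *used, int slots)
{
	for (int i = 0; i < slots; i++) {
		if (used[i])
			continue;
		if ((n->is_static && n->prio == i) ||
		    (!n->is_static && i >= n->prio)) {
			n->idx = i;
			used[i] = true;
			return;
		}
	}
}

int main(void)
{
	/* two static classes (prio 0 and 2) and two DWRR classes at prio 1 */
	struct demo_node nodes[] = {
		{ .prio = 2, .is_static = true,  .idx = -1 },
		{ .prio = 1, .is_static = false, .idx = -1 },
		{ .prio = 0, .is_static = true,  .idx = -1 },
		{ .prio = 1, .is_static = false, .idx = -1 },
	};
	/* mirrors child_dwrr_cnt + max_static_prio + 1 = 2 + 2 + 1 */
	int slots = 5;
	bool used[5] = { false };
	int n = sizeof(nodes) / sizeof(nodes[0]);

	/* static children first, so their 1:1 slots cannot be taken */
	for (int i = 0; i < n; i++)
		if (nodes[i].is_static)
			assign_idx(&nodes[i], used, slots);
	for (int i = 0; i < n; i++)
		if (!nodes[i].is_static)
			assign_idx(&nodes[i], used, slots);

	for (int i = 0; i < n; i++)
		printf("prio %d %-6s -> idx %d\n", nodes[i].prio,
		       nodes[i].is_static ? "static" : "dwrr", nodes[i].idx);
	for (int i = 0; i < slots; i++)
		if (!used[i])
			printf("idx %d unused (would be freed)\n", i);
	return 0;
}

Running the sketch assigns idx 0 and 2 to the static classes, idx 1 and 3 to
the DWRR classes, and leaves idx 4 unused, mirroring how the patch keeps the
hardware's priority-to-index mapping intact for strict priority children.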