From patchwork Mon Jun 26 07:24:05 2023
X-Patchwork-Submitter: Hariprasad Kelam
X-Patchwork-Id: 112753
From: Hariprasad Kelam
Subject: [net-next Patchv2 1/3] octeontx2-pf: implement transmit scheduler allocation algorithm
Date: Mon, 26 Jun 2023 12:54:05 +0530
Message-ID: <20230626072407.32158-2-hkelam@marvell.com>
In-Reply-To: <20230626072407.32158-1-hkelam@marvell.com>
References: <20230626072407.32158-1-hkelam@marvell.com>

From: Naveen Mamindlapalli

Unlike strict priority, where the number of classes is limited to a maximum of 8, there is no restriction on the number of DWRR child nodes unless the count exceeds the maximum number of child nodes supported. The hardware expects strict-priority transmit scheduler indexes to be mapped to their priority. This patch defines a transmit scheduler allocation algorithm so that this requirement is honored.

Signed-off-by: Naveen Mamindlapalli
Signed-off-by: Hariprasad Kelam
---
 .../net/ethernet/marvell/octeontx2/nic/qos.c | 129 +++++++++++++++++-
 .../net/ethernet/marvell/octeontx2/nic/qos.h |   6 +
 2 files changed, 129 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
index d3a76c5ccda8..51e9be55d5f5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
@@ -19,6 +19,7 @@
 #define OTX2_QOS_CLASS_NONE		0
 #define OTX2_QOS_DEFAULT_PRIO		0xF
 #define OTX2_QOS_INVALID_SQ		0xFFFF
+#define OTX2_QOS_INVALID_TXSCHQ_IDX	0xFFFF

 static void otx2_qos_update_tx_netdev_queues(struct otx2_nic *pfvf)
 {
@@ -315,9 +316,12 @@ static void otx2_qos_fill_cfg_tl(struct otx2_qos_node *parent,

 	list_for_each_entry(node, &parent->child_list, list) {
 		otx2_qos_fill_cfg_tl(node, cfg);
-		cfg->schq_contig[node->level]++;
 		otx2_qos_fill_cfg_schq(node, cfg);
 	}
+
+	/* Assign the required number of transmit scheduler queues under the given class */
+	cfg->schq_contig[parent->level - 1] += parent->child_dwrr_cnt +
+					       parent->max_static_prio + 1;
 }

 static void otx2_qos_prepare_txschq_cfg(struct otx2_nic *pfvf,
@@ -401,9 +405,13 @@ static int otx2_qos_add_child_node(struct otx2_qos_node *parent,
 	struct otx2_qos_node *tmp_node;
 	struct list_head *tmp;

+	if (node->prio > parent->max_static_prio)
+		parent->max_static_prio = node->prio;
+
 	for (tmp = head->next; tmp != head; tmp = tmp->next) {
 		tmp_node = list_entry(tmp, struct otx2_qos_node, list);
-		if (tmp_node->prio == node->prio)
+		if (tmp_node->prio == node->prio &&
+		    tmp_node->is_static)
 			return -EEXIST;
 		if (tmp_node->prio > node->prio) {
 			list_add_tail(&node->list, tmp);
@@ -476,6 +484,7 @@ otx2_qos_sw_create_leaf_node(struct otx2_nic *pfvf,
 	node->rate = otx2_convert_rate(rate);
 	node->ceil = otx2_convert_rate(ceil);
 	node->prio = prio;
+	node->is_static = true;

 	__set_bit(qid, pfvf->qos.qos_sq_bmap);

@@ -628,6 +637,20 @@ static int otx2_qos_txschq_alloc(struct otx2_nic *pfvf,
 	return rc;
 }

+static void otx2_qos_free_unused_txschq(struct otx2_nic *pfvf, struct otx2_qos_cfg *cfg)
+{
+	int lvl, idx, schq;
+
+	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
+		for (idx = 0; idx < cfg->schq_contig[lvl]; idx++) {
+			if (!cfg->schq_index_used[lvl][idx]) {
+				schq = cfg->schq_contig_list[lvl][idx];
+				otx2_txschq_free_one(pfvf, lvl, schq);
+			}
+		}
+	}
+}
+
 static void otx2_qos_txschq_fill_cfg_schq(struct otx2_nic *pfvf,
 					  struct otx2_qos_node *node,
 					  struct otx2_qos_cfg *cfg)
@@ -652,9 +675,10 @@ static void otx2_qos_txschq_fill_cfg_tl(struct otx2_nic *pfvf,
 	list_for_each_entry(tmp, &node->child_list, list) {
 		otx2_qos_txschq_fill_cfg_tl(pfvf, tmp, cfg);
 		cnt = cfg->static_node_pos[tmp->level];
-		tmp->schq = cfg->schq_contig_list[tmp->level][cnt];
+		tmp->schq = cfg->schq_contig_list[tmp->level][tmp->txschq_idx];
+		cfg->schq_index_used[tmp->level][tmp->txschq_idx] = true;
 		if (cnt == 0)
-			node->prio_anchor = tmp->schq;
+			node->prio_anchor = cfg->schq_contig_list[tmp->level][0];
 		cfg->static_node_pos[tmp->level]++;
 		otx2_qos_txschq_fill_cfg_schq(pfvf, tmp, cfg);
 	}
@@ -667,9 +691,84 @@ static void otx2_qos_txschq_fill_cfg(struct otx2_nic *pfvf,
 	mutex_lock(&pfvf->qos.qos_lock);
 	otx2_qos_txschq_fill_cfg_tl(pfvf, node, cfg);
 	otx2_qos_txschq_fill_cfg_schq(pfvf, node, cfg);
+	otx2_qos_free_unused_txschq(pfvf, cfg);
 	mutex_unlock(&pfvf->qos.qos_lock);
 }

+static void __otx2_qos_assign_base_idx_tl(struct otx2_nic *pfvf,
+					  struct otx2_qos_node *tmp,
+					  unsigned long *child_idx_bmap,
+					  int child_cnt)
+{
+	int idx;
+
+	if (tmp->txschq_idx != OTX2_QOS_INVALID_TXSCHQ_IDX)
+		return;
+
+	/* assign static nodes 1:1 prio mapping first, then remaining nodes */
+	for (idx = 0; idx < child_cnt; idx++) {
+		if (tmp->is_static && tmp->prio == idx &&
+		    !test_bit(idx, child_idx_bmap)) {
+			tmp->txschq_idx = idx;
+			set_bit(idx, child_idx_bmap);
+			return;
+		} else if (!tmp->is_static && idx >= tmp->prio &&
+			   !test_bit(idx, child_idx_bmap)) {
+			tmp->txschq_idx = idx;
+			set_bit(idx, child_idx_bmap);
+			return;
+		}
+	}
+}
+
+static int otx2_qos_assign_base_idx_tl(struct otx2_nic *pfvf,
+				       struct otx2_qos_node *node)
+{
+	unsigned long *child_idx_bmap;
+	struct otx2_qos_node *tmp;
+	int child_cnt;
+
+	list_for_each_entry(tmp, &node->child_list, list)
+		tmp->txschq_idx = OTX2_QOS_INVALID_TXSCHQ_IDX;
+
+	/* allocate child index array */
+	child_cnt = node->child_dwrr_cnt + node->max_static_prio + 1;
+	child_idx_bmap = kcalloc(BITS_TO_LONGS(child_cnt), sizeof(unsigned long),
+				 GFP_KERNEL);
+	if (!child_idx_bmap)
+		return -ENOMEM;
+
+	list_for_each_entry(tmp, &node->child_list, list)
+		otx2_qos_assign_base_idx_tl(pfvf, tmp);
+
+	/* assign base index of static priority children first */
+	list_for_each_entry(tmp, &node->child_list, list) {
+		if (!tmp->is_static)
+			continue;
+		__otx2_qos_assign_base_idx_tl(pfvf, tmp, child_idx_bmap, child_cnt);
+	}
+
+	/* assign base index of dwrr priority children */
+	list_for_each_entry(tmp, &node->child_list, list)
+		__otx2_qos_assign_base_idx_tl(pfvf, tmp, child_idx_bmap, child_cnt);
+
+	kfree(child_idx_bmap);
+
+	return 0;
+}
+
+static int otx2_qos_assign_base_idx(struct otx2_nic *pfvf,
+				    struct otx2_qos_node *node)
+{
+	int ret = 0;
+
+	mutex_lock(&pfvf->qos.qos_lock);
+	ret = otx2_qos_assign_base_idx_tl(pfvf, node);
+	mutex_unlock(&pfvf->qos.qos_lock);
+
+	return ret;
+}
+
 static int otx2_qos_txschq_push_cfg_schq(struct otx2_nic *pfvf,
 					 struct otx2_qos_node *node,
 					 struct otx2_qos_cfg *cfg)
@@ -761,8 +860,10 @@ static void otx2_qos_free_cfg(struct otx2_nic *pfvf, struct otx2_qos_cfg *cfg)

 	for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
 		for (idx = 0; idx < cfg->schq_contig[lvl]; idx++) {
-			schq = cfg->schq_contig_list[lvl][idx];
-			otx2_txschq_free_one(pfvf, lvl, schq);
+			if (cfg->schq_index_used[lvl][idx]) {
+				schq = cfg->schq_contig_list[lvl][idx];
+				otx2_txschq_free_one(pfvf, lvl, schq);
+			}
 		}
 	}
 }
@@ -838,6 +939,10 @@ static int otx2_qos_push_txschq_cfg(struct otx2_nic *pfvf,
 	if (ret)
 		return -ENOSPC;

+	ret = otx2_qos_assign_base_idx(pfvf, node);
+	if (ret)
+		return -ENOMEM;
+
 	if (!(pfvf->netdev->flags & IFF_UP)) {
 		otx2_qos_txschq_fill_cfg(pfvf, node, cfg);
 		return 0;
@@ -995,6 +1100,7 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
 	if (ret)
 		goto out;

+	parent->child_static_cnt++;
 	set_bit(prio, parent->prio_bmap);

 	/* read current txschq configuration */
@@ -1067,6 +1173,7 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
 free_old_cfg:
 	kfree(old_cfg);
 reset_prio:
+	parent->child_static_cnt--;
 	clear_bit(prio, parent->prio_bmap);
 out:
 	return ret;
@@ -1105,6 +1212,7 @@ static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
 		goto out;
 	}

+	node->child_static_cnt++;
 	set_bit(prio, node->prio_bmap);

 	/* store the qid to assign to leaf node */
@@ -1178,6 +1286,7 @@ static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
 free_old_cfg:
 	kfree(old_cfg);
 reset_prio:
+	node->child_static_cnt--;
 	clear_bit(prio, node->prio_bmap);
 out:
 	return ret;
@@ -1207,6 +1316,10 @@ static int otx2_qos_leaf_del(struct otx2_nic *pfvf, u16 *classid,
 	otx2_qos_destroy_node(pfvf, node);
 	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;

+	parent->child_static_cnt--;
+	if (!parent->child_static_cnt)
+		parent->max_static_prio = 0;
+
 	clear_bit(prio, parent->prio_bmap);

 	return 0;
@@ -1245,6 +1358,10 @@ static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force
 	otx2_qos_destroy_node(pfvf, node);
 	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;

+	parent->child_static_cnt--;
+	if (!parent->child_static_cnt)
+		parent->max_static_prio = 0;
+
 	clear_bit(prio, parent->prio_bmap);

 	/* create downstream txschq entries to parent */
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
index 19773284be27..faa7c24675d1 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
@@ -35,6 +35,7 @@ struct otx2_qos_cfg {
 	int dwrr_node_pos[NIX_TXSCH_LVL_CNT];
 	u16 schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
 	u16 schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
+	bool schq_index_used[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
 };

 struct otx2_qos {
@@ -62,7 +63,12 @@ struct otx2_qos_node {
 	u16 schq;		/* hw txschq */
 	u16 qid;
 	u16 prio_anchor;
+	u16 max_static_prio;
+	u16 child_dwrr_cnt;
+	u16 child_static_cnt;
+	u16 txschq_idx;		/* txschq allocation index */
 	u8 level;
+	bool is_static;
 };
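
For illustration only (not part of the patch): the sketch below is a stand-alone, user-space rendering of the child-index assignment rule that __otx2_qos_assign_base_idx_tl() implements, assuming a parent with two static children (priorities 0 and 3) and two DWRR children at priority 1. The struct and helper names are invented for the example.

#include <stdbool.h>
#include <stdio.h>

struct child {
	int prio;		/* HTB class priority (0..7) */
	bool is_static;		/* true = strict priority, false = DWRR */
	int idx;		/* assigned index within the contiguous block */
};

static void assign_idx(struct child *c, bool *used, int cnt)
{
	for (int i = 0; i < cnt; i++) {
		if (used[i])
			continue;
		/* static children are pinned to index == prio; DWRR children
		 * take any free index at or above their priority
		 */
		if (c->is_static ? (i == c->prio) : (i >= c->prio)) {
			c->idx = i;
			used[i] = true;
			return;
		}
	}
}

int main(void)
{
	struct child nodes[] = {
		{ .prio = 1, .is_static = false, .idx = -1 },
		{ .prio = 3, .is_static = true,  .idx = -1 },
		{ .prio = 0, .is_static = true,  .idx = -1 },
		{ .prio = 1, .is_static = false, .idx = -1 },
	};
	int n = sizeof(nodes) / sizeof(nodes[0]);
	/* child_dwrr_cnt + max_static_prio + 1 = 2 + 3 + 1 = 6 indexes */
	bool used[6] = { false };

	for (int i = 0; i < n; i++)	/* static children first */
		if (nodes[i].is_static)
			assign_idx(&nodes[i], used, 6);
	for (int i = 0; i < n; i++)	/* then DWRR children */
		if (!nodes[i].is_static)
			assign_idx(&nodes[i], used, 6);

	for (int i = 0; i < n; i++)
		printf("prio %d (%s) -> index %d\n", nodes[i].prio,
		       nodes[i].is_static ? "static" : "dwrr", nodes[i].idx);
	return 0;
}

In this example the parent requests six contiguous schedulers; the static children land on indexes 0 and 3 (matching their priority), the DWRR children fill indexes 1 and 2, and any schedulers left unused are returned, which is what otx2_qos_free_unused_txschq() does in the patch.
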
From patchwork Mon Jun 26 07:24:06 2023
X-Patchwork-Submitter: Hariprasad Kelam
X-Patchwork-Id: 112755
From: Hariprasad Kelam
Subject: [net-next Patchv2 2/3] sch_htb: Allow HTB quantum parameter in offload mode
Date: Mon, 26 Jun 2023 12:54:06 +0530
Message-ID: <20230626072407.32158-3-hkelam@marvell.com>
In-Reply-To: <20230626072407.32158-1-hkelam@marvell.com>
References: <20230626072407.32158-1-hkelam@marvell.com>

From: Naveen Mamindlapalli

The current implementation of HTB offload returns the EINVAL error for the quantum parameter. This patch removes that error check and populates the quantum value in the tc_htb_qopt_offload structure so that drivers can use it.

Add a quantum parameter check in the mlx5 driver, as mlx5 devices are not capable of supporting the quantum parameter when HTB offload is used. Report an error if the quantum parameter is set to a non-default value.

Signed-off-by: Naveen Mamindlapalli
Signed-off-by: Hariprasad Kelam
---
 drivers/net/ethernet/mellanox/mlx5/core/en/qos.c | 4 ++--
 include/net/pkt_cls.h                            | 1 +
 net/sched/sch_htb.c                              | 7 +++----
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
index 1874c2f0587f..244bc15a42ab 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/qos.c
@@ -379,9 +379,9 @@ int mlx5e_htb_setup_tc(struct mlx5e_priv *priv, struct tc_htb_qopt_offload *htb_
 	if (!htb && htb_qopt->command != TC_HTB_CREATE)
 		return -EINVAL;

-	if (htb_qopt->prio) {
+	if (htb_qopt->prio || htb_qopt->quantum) {
 		NL_SET_ERR_MSG_MOD(htb_qopt->extack,
-				   "prio parameter is not supported by device with HTB offload enabled.");
+				   "prio and quantum parameters are not supported by device with HTB offload enabled.");
 		return -EOPNOTSUPP;
 	}

diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index a2ea45c7b53e..139cd09828af 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -866,6 +866,7 @@ struct tc_htb_qopt_offload {
 	u32 parent_classid;
 	u16 classid;
 	u16 qid;
+	u32 quantum;
 	u64 rate;
 	u64 ceil;
 	u8 prio;
diff --git a/net/sched/sch_htb.c b/net/sched/sch_htb.c
index 325c29041c7d..333800a7d4eb 100644
--- a/net/sched/sch_htb.c
+++ b/net/sched/sch_htb.c
@@ -1810,10 +1810,6 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the mpu parameter");
 			goto failure;
 		}
-		if (hopt->quantum) {
-			NL_SET_ERR_MSG(extack, "HTB offload doesn't support the quantum parameter");
-			goto failure;
-		}
 	}

 	/* Keeping backward compatible with rate_table based iproute2 tc */
@@ -1910,6 +1906,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 			.rate = max_t(u64, hopt->rate.rate, rate64),
 			.ceil = max_t(u64, hopt->ceil.rate, ceil64),
 			.prio = hopt->prio,
+			.quantum = hopt->quantum,
 			.extack = extack,
 		};
 		err = htb_offload(dev, &offload_opt);
@@ -1931,6 +1928,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 			.rate = max_t(u64, hopt->rate.rate, rate64),
 			.ceil = max_t(u64, hopt->ceil.rate, ceil64),
 			.prio = hopt->prio,
+			.quantum = hopt->quantum,
 			.extack = extack,
 		};
 		err = htb_offload(dev, &offload_opt);
@@ -2017,6 +2015,7 @@ static int htb_change_class(struct Qdisc *sch, u32 classid,
 			.rate = max_t(u64, hopt->rate.rate, rate64),
 			.ceil = max_t(u64, hopt->ceil.rate, ceil64),
 			.prio = hopt->prio,
+			.quantum = hopt->quantum,
 			.extack = extack,
 		};
 		err = htb_offload(dev, &offload_opt);
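
For illustration only (not part of the patch): a minimal sketch of how an offloading driver can consume the new field. The mydrv_ name is hypothetical; struct tc_htb_qopt_offload, its new quantum member, and the extack error reporting are the kernel interfaces touched by this patch.

#include <linux/netdevice.h>
#include <linux/netlink.h>
#include <net/pkt_cls.h>

static int mydrv_htb_leaf_alloc(struct tc_htb_qopt_offload *htb)
{
	/* A device with no per-class DWRR support should refuse a
	 * user-supplied quantum rather than silently ignoring it.
	 */
	if (htb->quantum) {
		NL_SET_ERR_MSG_MOD(htb->extack,
				   "quantum is not supported in offload mode");
		return -EOPNOTSUPP;
	}

	/* otherwise translate htb->rate, htb->ceil, htb->prio (and now
	 * htb->quantum) into the device scheduler configuration
	 */
	return 0;
}

The value arrives unchanged from the user's 'tc class add ... htb ... quantum <bytes>' request, so devices that can honour it (as octeontx2-pf does in the next patch) program a DWRR weight from it, while others reject non-default values the way the mlx5 hunk above does.
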
From patchwork Mon Jun 26 07:24:07 2023
X-Patchwork-Submitter: Hariprasad Kelam
X-Patchwork-Id: 112756
From: Hariprasad Kelam
Subject: [net-next Patchv2 3/3] octeontx2-pf: HTB offload support for Round Robin scheduling
Date: Mon, 26 Jun 2023 12:54:07 +0530
Message-ID: <20230626072407.32158-4-hkelam@marvell.com>
In-Reply-To: <20230626072407.32158-1-hkelam@marvell.com>
References: <20230626072407.32158-1-hkelam@marvell.com>

From: Naveen Mamindlapalli

When multiple traffic flows reach the transmit level with the same priority, round-robin scheduling picks the traffic flow with the highest quantum value. With this support, the user can add multiple classes with the same priority and different quantum values. This patch makes the necessary changes to support that.

Signed-off-by: Naveen Mamindlapalli
Signed-off-by: Hariprasad Kelam
---
 .../marvell/octeontx2/nic/otx2_common.c      |   1 +
 .../marvell/octeontx2/nic/otx2_common.h      |   1 +
 .../net/ethernet/marvell/octeontx2/nic/qos.c | 236 +++++++++++++++---
 .../net/ethernet/marvell/octeontx2/nic/qos.h |   5 +-
 4 files changed, 203 insertions(+), 40 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 77c8f650f7ac..8cdd92dd9762 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -774,6 +774,7 @@ int otx2_txsch_alloc(struct otx2_nic *pfvf)
 			rsp->schq_list[lvl][schq];

 	pfvf->hw.txschq_link_cfg_lvl = rsp->link_cfg_lvl;
+	pfvf->hw.txschq_aggr_lvl_rr_prio = rsp->aggr_lvl_rr_prio;

 	return 0;
 }
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
index ba8091131ec0..37d792f18809 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
@@ -224,6 +224,7 @@ struct otx2_hw {

 	/* NIX */
 	u8			txschq_link_cfg_lvl;
+	u8			txschq_aggr_lvl_rr_prio;
 	u16			txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
 	u16			matchall_ipolicer;
 	u32			dwrr_mtu;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
index 51e9be55d5f5..7912824322d5 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
@@ -66,11 +66,24 @@ static void otx2_qos_get_regaddr(struct otx2_qos_node *node,
 	}
 }

+static int otx2_qos_quantum_to_dwrr_weight(struct otx2_nic *pfvf, u32 quantum)
+{
+	u32 weight;
+
+	weight = quantum / pfvf->hw.dwrr_mtu;
+	if (quantum % pfvf->hw.dwrr_mtu)
+		weight += 1;
+
+	return weight;
+}
+
 static void otx2_config_sched_shaping(struct otx2_nic *pfvf,
 				      struct otx2_qos_node *node,
 				      struct nix_txschq_config *cfg,
 				      int *num_regs)
 {
+	u32 rr_weight;
+	u32 quantum;
 	u64 maxrate;

 	otx2_qos_get_regaddr(node, cfg, *num_regs);
@@ -87,8 +100,16 @@ static void otx2_config_sched_shaping(struct otx2_nic *pfvf,
 		return;
 	}

-	/* configure priority */
-	cfg->regval[*num_regs] = (node->schq - node->parent->prio_anchor) << 24;
+	/* configure priority/quantum */
+	if (node->is_static) {
+		cfg->regval[*num_regs] = (node->schq - node->parent->prio_anchor) << 24;
+	} else {
+		quantum = node->quantum ?
+			  node->quantum : pfvf->tx_max_pktlen;
+		rr_weight = otx2_qos_quantum_to_dwrr_weight(pfvf, quantum);
+		cfg->regval[*num_regs] = node->parent->child_dwrr_prio << 24 |
+					 rr_weight;
+	}
 	(*num_regs)++;

 	/* configure PIR */
@@ -196,9 +217,8 @@ static int otx2_qos_txschq_set_parent_topology(struct otx2_nic *pfvf,

 	cfg->reg[0] = NIX_AF_TL1X_TOPOLOGY(parent->schq);
 	cfg->regval[0] = (u64)parent->prio_anchor << 32;
-	if (parent->level == NIX_TXSCH_LVL_TL1)
-		cfg->regval[0] |= (u64)TXSCH_TL1_DFLT_RR_PRIO << 1;
-
+	cfg->regval[0] |= ((parent->child_dwrr_prio != OTX2_QOS_DEFAULT_PRIO) ?
+			   parent->child_dwrr_prio : 0) << 1;
 	cfg->num_regs++;

 	rc = otx2_sync_mbox_msg(&pfvf->mbox);
@@ -382,10 +402,12 @@ otx2_qos_alloc_root(struct otx2_nic *pfvf)
 		return ERR_PTR(-ENOMEM);

 	node->parent = NULL;
-	if (!is_otx2_vf(pfvf->pcifunc))
+	if (!is_otx2_vf(pfvf->pcifunc)) {
 		node->level = NIX_TXSCH_LVL_TL1;
-	else
+	} else {
 		node->level = NIX_TXSCH_LVL_TL2;
+		node->child_dwrr_prio = OTX2_QOS_DEFAULT_PRIO;
+	}

 	WRITE_ONCE(node->qid, OTX2_QOS_QID_INNER);
 	node->classid = OTX2_QOS_ROOT_CLASSID;
@@ -442,6 +464,10 @@ static int otx2_qos_alloc_txschq_node(struct otx2_nic *pfvf,
 		txschq_node->rate = 0;
 		txschq_node->ceil = 0;
 		txschq_node->prio = 0;
+		txschq_node->quantum = 0;
+		txschq_node->is_static = true;
+		txschq_node->child_dwrr_prio = OTX2_QOS_DEFAULT_PRIO;
+		txschq_node->txschq_idx = OTX2_QOS_INVALID_TXSCHQ_IDX;

 		mutex_lock(&pfvf->qos.qos_lock);
 		list_add_tail(&txschq_node->list, &node->child_schq_list);
@@ -467,7 +493,7 @@ static struct otx2_qos_node *
 otx2_qos_sw_create_leaf_node(struct otx2_nic *pfvf,
 			     struct otx2_qos_node *parent,
 			     u16 classid, u32 prio, u64 rate, u64 ceil,
-			     u16 qid)
+			     u32 quantum, u16 qid, bool static_cfg)
 {
 	struct otx2_qos_node *node;
 	int err;
@@ -484,7 +510,10 @@ otx2_qos_sw_create_leaf_node(struct otx2_nic *pfvf,
 	node->rate = otx2_convert_rate(rate);
 	node->ceil = otx2_convert_rate(ceil);
 	node->prio = prio;
-	node->is_static = true;
+	node->quantum = quantum;
+	node->is_static = static_cfg;
+	node->child_dwrr_prio = OTX2_QOS_DEFAULT_PRIO;
+	node->txschq_idx = OTX2_QOS_INVALID_TXSCHQ_IDX;

 	__set_bit(qid, pfvf->qos.qos_sq_bmap);

@@ -631,6 +660,7 @@ static int otx2_qos_txschq_alloc(struct otx2_nic *pfvf,
 	}

 	pfvf->qos.link_cfg_lvl = rsp->link_cfg_lvl;
+	pfvf->hw.txschq_aggr_lvl_rr_prio = rsp->aggr_lvl_rr_prio;

 out:
 	mutex_unlock(&mbox->lock);
@@ -999,6 +1029,13 @@ static int otx2_qos_root_add(struct otx2_nic *pfvf, u16 htb_maj_id, u16 htb_defc
 		goto free_root_node;
 	}

+	/* Update TL1 RR PRIO */
+	if (root->level == NIX_TXSCH_LVL_TL1) {
+		root->child_dwrr_prio = pfvf->hw.txschq_aggr_lvl_rr_prio;
+		netdev_dbg(pfvf->netdev,
+			   "TL1 DWRR Priority %d\n", root->child_dwrr_prio);
+	}
+
 	if (!(pfvf->netdev->flags & IFF_UP) ||
 	    root->level == NIX_TXSCH_LVL_TL1) {
 		root->schq = new_cfg->schq_list[root->level][0];
@@ -1045,37 +1082,81 @@ static int otx2_qos_root_destroy(struct otx2_nic *pfvf)
 	return 0;
 }

+static int otx2_qos_validate_dwrr_cfg(struct otx2_qos_node *parent,
+				      struct netlink_ext_ack *extack,
+				      u64 prio)
+{
+	if (parent->child_dwrr_prio == OTX2_QOS_DEFAULT_PRIO) {
+		parent->child_dwrr_prio = prio;
+	} else if (prio != parent->child_dwrr_prio) {
+		NL_SET_ERR_MSG_MOD(extack, "Only one DWRR group is allowed");
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
 static int otx2_qos_validate_configuration(struct otx2_qos_node *parent,
 					   struct netlink_ext_ack *extack,
 					   struct otx2_nic *pfvf,
-					   u64 prio)
+					   u64 prio, bool static_cfg)
 {
-	if (test_bit(prio, parent->prio_bmap)) {
-		NL_SET_ERR_MSG_MOD(extack,
-				   "Static priority child with same priority exists");
+	if (prio == parent->child_dwrr_prio && static_cfg) {
+		NL_SET_ERR_MSG_MOD(extack, "DWRR child group with same priority exists");
 		return -EEXIST;
 	}

-	if (prio == TXSCH_TL1_DFLT_RR_PRIO) {
+	if (static_cfg && test_bit(prio, parent->prio_bmap)) {
 		NL_SET_ERR_MSG_MOD(extack,
-				   "Priority is reserved for Round Robin");
-		return -EINVAL;
+				   "Static priority child with same priority exists");
+		return -EEXIST;
 	}

 	return 0;
 }

+static bool is_qos_node_dwrr(struct otx2_qos_node *parent,
+			     struct otx2_nic *pfvf,
+			     u64 prio)
+{
+	struct otx2_qos_node *node;
+	bool ret = false;
+
+	if (parent->child_dwrr_prio == prio)
+		return true;
+
+	mutex_lock(&pfvf->qos.qos_lock);
+	list_for_each_entry(node, &parent->child_list, list) {
+		if (prio == node->prio) {
+			if (parent->child_dwrr_prio != OTX2_QOS_DEFAULT_PRIO &&
+			    parent->child_dwrr_prio != prio)
+				continue;
+			/* mark old node as dwrr */
+			node->is_static = false;
+			parent->child_dwrr_cnt++;
+			parent->child_static_cnt--;
+			ret = true;
+			break;
+		}
+	}
+	mutex_unlock(&pfvf->qos.qos_lock);
+
+	return ret;
+}
+
 static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
 				     u32 parent_classid, u64 rate, u64 ceil,
-				     u64 prio, struct netlink_ext_ack *extack)
+				     u64 prio, u32 quantum,
+				     struct netlink_ext_ack *extack)
 {
 	struct otx2_qos_cfg *old_cfg, *new_cfg;
 	struct otx2_qos_node *node, *parent;
 	int qid, ret, err;
+	bool static_cfg;

 	netdev_dbg(pfvf->netdev,
-		   "TC_HTB_LEAF_ALLOC_QUEUE: classid=0x%x parent_classid=0x%x rate=%lld ceil=%lld prio=%lld\n",
-		   classid, parent_classid, rate, ceil, prio);
+		   "TC_HTB_LEAF_ALLOC_QUEUE: classid=0x%x parent_classid=0x%x rate=%lld ceil=%lld prio=%lld quantum=%d\n",
+		   classid, parent_classid, rate, ceil, prio, quantum);

 	if (prio > OTX2_QOS_MAX_PRIO) {
 		NL_SET_ERR_MSG_MOD(extack, "Valid priority range 0 to 7");
@@ -1083,6 +1164,12 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
 		goto out;
 	}

+	if (!quantum || quantum > INT_MAX) {
+		NL_SET_ERR_MSG_MOD(extack, "Invalid quantum, range 1 - 2147483647 bytes");
+		ret = -EOPNOTSUPP;
+		goto out;
+	}
+
 	/* get parent node */
 	parent = otx2_sw_node_find(pfvf, parent_classid);
 	if (!parent) {
@@ -1096,11 +1183,23 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
 		goto out;
 	}

-	ret = otx2_qos_validate_configuration(parent, extack, pfvf, prio);
+	static_cfg = !is_qos_node_dwrr(parent, pfvf, prio);
+	ret = otx2_qos_validate_configuration(parent, extack, pfvf, prio,
+					      static_cfg);
 	if (ret)
 		goto out;

-	parent->child_static_cnt++;
+	if (!static_cfg) {
+		ret = otx2_qos_validate_dwrr_cfg(parent, extack, prio);
+		if (ret)
+			goto out;
+	}
+
+	if (static_cfg)
+		parent->child_static_cnt++;
+	else
+		parent->child_dwrr_cnt++;
+
 	set_bit(prio, parent->prio_bmap);

 	/* read current txschq configuration */
@@ -1125,7 +1224,7 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,

 	/* allocate and initialize a new child node */
 	node = otx2_qos_sw_create_leaf_node(pfvf, parent, classid, prio, rate,
-					    ceil, qid);
+					    ceil, quantum, qid, static_cfg);
 	if (IS_ERR(node)) {
 		NL_SET_ERR_MSG_MOD(extack, "Unable to allocate leaf node");
 		ret = PTR_ERR(node);
@@ -1173,7 +1272,11 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,
 free_old_cfg:
 	kfree(old_cfg);
 reset_prio:
-	parent->child_static_cnt--;
+	if (static_cfg)
+		parent->child_static_cnt--;
+	else
+		parent->child_dwrr_cnt--;
+
 	clear_bit(prio, parent->prio_bmap);
 out:
 	return ret;
@@ -1181,10 +1284,11 @@ static int otx2_qos_leaf_alloc_queue(struct otx2_nic *pfvf, u16 classid,

 static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
 				  u16 child_classid, u64 rate, u64 ceil, u64 prio,
-				  struct netlink_ext_ack *extack)
+				  u32 quantum, struct netlink_ext_ack *extack)
 {
 	struct otx2_qos_cfg *old_cfg, *new_cfg;
 	struct otx2_qos_node *node, *child;
+	bool static_cfg;
 	int ret, err;
 	u16 qid;

@@ -1198,6 +1302,12 @@ static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
 		goto out;
 	}

+	if (!quantum || quantum > INT_MAX) {
+		NL_SET_ERR_MSG_MOD(extack, "Invalid quantum, range 1 - 2147483647 bytes");
+		ret = -EOPNOTSUPP;
+		goto out;
+	}
+
 	/* find node related to classid */
 	node = otx2_sw_node_find(pfvf, classid);
 	if (!node) {
@@ -1212,7 +1322,18 @@ static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
 		goto out;
 	}

-	node->child_static_cnt++;
+	static_cfg = !is_qos_node_dwrr(node, pfvf, prio);
+	if (!static_cfg) {
+		ret = otx2_qos_validate_dwrr_cfg(node, extack, prio);
+		if (ret)
+			goto out;
+	}
+
+	if (static_cfg)
+		node->child_static_cnt++;
+	else
+		node->child_dwrr_cnt++;
+
 	set_bit(prio, node->prio_bmap);

 	/* store the qid to assign to leaf node */
@@ -1235,7 +1356,8 @@ static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,

 	/* allocate and initialize a new child node */
 	child = otx2_qos_sw_create_leaf_node(pfvf, node, child_classid,
-					     prio, rate, ceil, qid);
+					     prio, rate, ceil, quantum,
+					     qid, static_cfg);
 	if (IS_ERR(child)) {
 		NL_SET_ERR_MSG_MOD(extack, "Unable to allocate leaf node");
 		ret = PTR_ERR(child);
@@ -1286,7 +1408,10 @@ static int otx2_qos_leaf_to_inner(struct otx2_nic *pfvf, u16 classid,
 free_old_cfg:
 	kfree(old_cfg);
 reset_prio:
-	node->child_static_cnt--;
+	if (static_cfg)
+		node->child_static_cnt--;
+	else
+		node->child_dwrr_cnt--;
 	clear_bit(prio, node->prio_bmap);
 out:
 	return ret;
@@ -1296,6 +1421,7 @@ static int otx2_qos_leaf_del(struct otx2_nic *pfvf, u16 *classid,
 			     struct netlink_ext_ack *extack)
 {
 	struct otx2_qos_node *node, *parent;
+	int dwrr_del_node = false;
 	u64 prio;
 	u16 qid;

@@ -1311,17 +1437,31 @@ static int otx2_qos_leaf_del(struct otx2_nic *pfvf, u16 *classid,
 	prio = node->prio;
 	qid = node->qid;

+	if (!node->is_static)
+		dwrr_del_node = true;
+
 	otx2_qos_disable_sq(pfvf, node->qid);

 	otx2_qos_destroy_node(pfvf, node);
 	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;

-	parent->child_static_cnt--;
+	if (dwrr_del_node) {
+		parent->child_dwrr_cnt--;
+	} else {
+		parent->child_static_cnt--;
+		clear_bit(prio, parent->prio_bmap);
+	}
+
+	/* Reset DWRR priority if all dwrr nodes are deleted */
+	if (!parent->child_dwrr_cnt &&
+	    parent->child_dwrr_prio != OTX2_QOS_DEFAULT_PRIO) {
+		parent->child_dwrr_prio = OTX2_QOS_DEFAULT_PRIO;
+		clear_bit(prio, parent->prio_bmap);
+	}
+
 	if (!parent->child_static_cnt)
 		parent->max_static_prio = 0;

-	clear_bit(prio, parent->prio_bmap);
-
 	return 0;
 }

@@ -1330,6 +1470,7 @@ static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force
 {
 	struct otx2_qos_node *node, *parent;
 	struct otx2_qos_cfg *new_cfg;
+	int dwrr_del_node = false;
 	u64 prio;
 	int err;
 	u16 qid;
@@ -1354,16 +1495,30 @@ static int otx2_qos_leaf_del_last(struct otx2_nic *pfvf, u16 classid, bool force
 		return -ENOENT;
 	}

+	if (!node->is_static)
+		dwrr_del_node = true;
+
 	/* destroy the leaf node */
 	otx2_qos_destroy_node(pfvf, node);
 	pfvf->qos.qid_to_sqmap[qid] = OTX2_QOS_INVALID_SQ;

-	parent->child_static_cnt--;
+	if (dwrr_del_node) {
+		parent->child_dwrr_cnt--;
+	} else {
+		parent->child_static_cnt--;
+		clear_bit(prio, parent->prio_bmap);
+	}
+
+	/* Reset DWRR priority if all dwrr nodes are deleted */
+	if (!parent->child_dwrr_cnt &&
+	    parent->child_dwrr_prio != OTX2_QOS_DEFAULT_PRIO) {
+		parent->child_dwrr_prio = OTX2_QOS_DEFAULT_PRIO;
+		clear_bit(prio, parent->prio_bmap);
+	}
+
 	if (!parent->child_static_cnt)
 		parent->max_static_prio = 0;

-	clear_bit(prio, parent->prio_bmap);
-
 	/* create downstream txschq entries to parent */
 	err = otx2_qos_alloc_txschq_node(pfvf, parent);
 	if (err) {
@@ -1415,10 +1570,12 @@ void otx2_qos_config_txschq(struct otx2_nic *pfvf)
 	if (!root)
 		return;

-	err = otx2_qos_txschq_config(pfvf, root);
-	if (err) {
-		netdev_err(pfvf->netdev, "Error update txschq configuration\n");
-		goto root_destroy;
+	if (root->level != NIX_TXSCH_LVL_TL1) {
+		err = otx2_qos_txschq_config(pfvf, root);
+		if (err) {
+			netdev_err(pfvf->netdev, "Error update txschq configuration\n");
+			goto root_destroy;
+		}
 	}

 	err = otx2_qos_txschq_push_cfg_tl(pfvf, root, NULL);
@@ -1451,7 +1608,8 @@ int otx2_setup_tc_htb(struct net_device *ndev, struct tc_htb_qopt_offload *htb)
 		res = otx2_qos_leaf_alloc_queue(pfvf, htb->classid,
 						htb->parent_classid,
 						htb->rate, htb->ceil,
-						htb->prio, htb->extack);
+						htb->prio, htb->quantum,
+						htb->extack);
 		if (res < 0)
 			return res;
 		htb->qid = res;
@@ -1460,7 +1618,7 @@ int otx2_setup_tc_htb(struct net_device *ndev, struct tc_htb_qopt_offload *htb)
 		return otx2_qos_leaf_to_inner(pfvf, htb->parent_classid,
 					      htb->classid, htb->rate,
 					      htb->ceil, htb->prio,
-					      htb->extack);
+					      htb->quantum, htb->extack);
 	case TC_HTB_LEAF_DEL:
 		return otx2_qos_leaf_del(pfvf, &htb->classid, htb->extack);
 	case TC_HTB_LEAF_DEL_LAST:
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
index faa7c24675d1..221bd0438f60 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.h
@@ -60,12 +60,15 @@ struct otx2_qos_node {
 	u64 ceil;
 	u32 classid;
 	u32 prio;
-	u16 schq;		/* hw txschq */
+	u32 quantum;
+	/* hw txschq */
+	u16 schq;
 	u16 qid;
 	u16 prio_anchor;
 	u16 max_static_prio;
 	u16 child_dwrr_cnt;
 	u16 child_static_cnt;
+	u16 child_dwrr_prio;
 	u16 txschq_idx;		/* txschq allocation index */
 	u8 level;
 	bool is_static;
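
For illustration only (not part of the patch): the quantum-to-weight conversion added above (otx2_qos_quantum_to_dwrr_weight()) is a round-up division by the hardware DWRR MTU. A stand-alone sketch, with an invented DWRR MTU value chosen purely for the example:

#include <stdio.h>

/* weight is the quantum expressed in units of the DWRR MTU, rounded up */
static unsigned int quantum_to_weight(unsigned int quantum, unsigned int dwrr_mtu)
{
	return (quantum + dwrr_mtu - 1) / dwrr_mtu;
}

int main(void)
{
	unsigned int dwrr_mtu = 8192;	/* hypothetical hardware DWRR MTU */

	printf("quantum 3000  -> weight %u\n", quantum_to_weight(3000, dwrr_mtu));
	printf("quantum 24000 -> weight %u\n", quantum_to_weight(24000, dwrr_mtu));
	return 0;
}

With these example numbers, two classes created with the same priority but quantum 3000 and 24000 end up with DWRR weights 1 and 3, so under round robin they share the parent's bandwidth roughly 1:3, which is the behaviour the commit message describes.
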