From patchwork Sat Dec 23 02:55:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182868
From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 01/34] idpf: reuse libie's definitions of parsed ptype structures Date: Sat, 23 Dec 2023 03:55:21 +0100 Message-ID: <20231223025554.2316836-2-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786039885214873586 X-GMAIL-MSGID: 1786039885214873586 idpf's in-kernel parsed ptype structure is almost identical to the one used in the previous Intel drivers, which means it can be converted to use libie's definitions and even helpers. The only difference is that it doesn't use a constant table, rather than one obtained from the device. Remove the driver counterpart and use libie's helpers for hashes and checksums. This slightly optimizes skb fields processing due to faster checks. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/Kconfig | 1 + drivers/net/ethernet/intel/idpf/idpf.h | 2 +- drivers/net/ethernet/intel/idpf/idpf_main.c | 1 + .../ethernet/intel/idpf/idpf_singleq_txrx.c | 87 +++++++-------- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 101 ++++++------------ drivers/net/ethernet/intel/idpf/idpf_txrx.h | 88 +-------------- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 54 ++++++---- 7 files changed, 110 insertions(+), 224 deletions(-) diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index c7da7d05d93e..0db1aa36866e 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -378,6 +378,7 @@ config IDPF tristate "Intel(R) Infrastructure Data Path Function Support" depends on PCI_MSI select DIMLIB + select LIBIE select PAGE_POOL select PAGE_POOL_STATS help diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 0acc125decb3..8342df0f4f3d 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -385,7 +385,7 @@ struct idpf_vport { u16 num_rxq_grp; struct idpf_rxq_group *rxq_grps; u32 rxq_model; - struct idpf_rx_ptype_decoded rx_ptype_lkup[IDPF_RX_MAX_PTYPE]; + struct libie_rx_ptype_parsed rx_ptype_lkup[IDPF_RX_MAX_PTYPE]; struct idpf_adapter *adapter; struct net_device *netdev; diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c index e1febc74cefd..6471158e6f6b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_main.c +++ b/drivers/net/ethernet/intel/idpf/idpf_main.c @@ -7,6 +7,7 @@ #define DRV_SUMMARY "Intel(R) Infrastructure Data Path Function Linux Driver" MODULE_DESCRIPTION(DRV_SUMMARY); +MODULE_IMPORT_NS(LIBIE); MODULE_LICENSE("GPL"); /** diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index 8122a0cc97de..e58e08c9997d 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -636,75 +636,64 @@ static bool idpf_rx_singleq_is_non_eop(struct idpf_queue *rxq, * @rxq: Rx ring being processed * @skb: skb currently being 
received and modified * @csum_bits: checksum bits from descriptor - * @ptype: the packet type decoded by hardware + * @parsed: the packet type parsed by hardware * * skb->protocol must be set before this function is called */ static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb, - struct idpf_rx_csum_decoded *csum_bits, - u16 ptype) + struct idpf_rx_csum_decoded csum_bits, + struct libie_rx_ptype_parsed parsed) { - struct idpf_rx_ptype_decoded decoded; bool ipv4, ipv6; /* check if Rx checksum is enabled */ - if (unlikely(!(rxq->vport->netdev->features & NETIF_F_RXCSUM))) + if (!libie_has_rx_checksum(rxq->vport->netdev, parsed)) return; /* check if HW has decoded the packet and checksum */ - if (unlikely(!(csum_bits->l3l4p))) + if (unlikely(!csum_bits.l3l4p)) return; - decoded = rxq->vport->rx_ptype_lkup[ptype]; - if (unlikely(!(decoded.known && decoded.outer_ip))) + if (unlikely(parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_L2)) return; - ipv4 = IDPF_RX_PTYPE_TO_IPV(&decoded, IDPF_RX_PTYPE_OUTER_IPV4); - ipv6 = IDPF_RX_PTYPE_TO_IPV(&decoded, IDPF_RX_PTYPE_OUTER_IPV6); + ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4; + ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6; /* Check if there were any checksum errors */ - if (unlikely(ipv4 && (csum_bits->ipe || csum_bits->eipe))) + if (unlikely(ipv4 && (csum_bits.ipe || csum_bits.eipe))) goto checksum_fail; /* Device could not do any checksum offload for certain extension * headers as indicated by setting IPV6EXADD bit */ - if (unlikely(ipv6 && csum_bits->ipv6exadd)) + if (unlikely(ipv6 && csum_bits.ipv6exadd)) return; /* check for L4 errors and handle packets that were not able to be * checksummed due to arrival speed */ - if (unlikely(csum_bits->l4e)) + if (unlikely(csum_bits.l4e)) goto checksum_fail; - if (unlikely(csum_bits->nat && csum_bits->eudpe)) + if (unlikely(csum_bits.nat && csum_bits.eudpe)) goto checksum_fail; /* Handle packets that were not able to be checksummed due to arrival * speed, in this case the stack can compute the csum. */ - if (unlikely(csum_bits->pprs)) + if (unlikely(csum_bits.pprs)) return; /* If there is an outer header present that might contain a checksum * we need to bump the checksum level by 1 to reflect the fact that * we are indicating we validated the inner checksum. */ - if (decoded.tunnel_type >= IDPF_RX_PTYPE_TUNNEL_IP_GRENAT) + if (parsed.tunnel_type >= LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT) skb->csum_level = 1; - /* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */ - switch (decoded.inner_prot) { - case IDPF_RX_PTYPE_INNER_PROT_ICMP: - case IDPF_RX_PTYPE_INNER_PROT_TCP: - case IDPF_RX_PTYPE_INNER_PROT_UDP: - case IDPF_RX_PTYPE_INNER_PROT_SCTP: - skb->ip_summed = CHECKSUM_UNNECESSARY; - return; - default: - return; - } + skb->ip_summed = CHECKSUM_UNNECESSARY; + return; checksum_fail: u64_stats_update_begin(&rxq->stats_sync); @@ -717,7 +706,7 @@ static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb, * @rx_q: Rx completion queue * @skb: skb currently being received and modified * @rx_desc: the receive descriptor - * @ptype: Rx packet type + * @parsed: Rx packet type parsed by hardware * * This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32byte * descriptor writeback format. 
@@ -725,7 +714,7 @@ static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb, static void idpf_rx_singleq_base_csum(struct idpf_queue *rx_q, struct sk_buff *skb, union virtchnl2_rx_desc *rx_desc, - u16 ptype) + struct libie_rx_ptype_parsed parsed) { struct idpf_rx_csum_decoded csum_bits; u32 rx_error, rx_status; @@ -749,7 +738,7 @@ static void idpf_rx_singleq_base_csum(struct idpf_queue *rx_q, csum_bits.nat = 0; csum_bits.eudpe = 0; - idpf_rx_singleq_csum(rx_q, skb, &csum_bits, ptype); + idpf_rx_singleq_csum(rx_q, skb, csum_bits, parsed); } /** @@ -757,7 +746,7 @@ static void idpf_rx_singleq_base_csum(struct idpf_queue *rx_q, * @rx_q: Rx completion queue * @skb: skb currently being received and modified * @rx_desc: the receive descriptor - * @ptype: Rx packet type + * @parsed: Rx packet type parsed by hardware * * This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible * descriptor writeback format. @@ -765,7 +754,7 @@ static void idpf_rx_singleq_base_csum(struct idpf_queue *rx_q, static void idpf_rx_singleq_flex_csum(struct idpf_queue *rx_q, struct sk_buff *skb, union virtchnl2_rx_desc *rx_desc, - u16 ptype) + struct libie_rx_ptype_parsed parsed) { struct idpf_rx_csum_decoded csum_bits; u16 rx_status0, rx_status1; @@ -789,7 +778,7 @@ static void idpf_rx_singleq_flex_csum(struct idpf_queue *rx_q, rx_status1); csum_bits.pprs = 0; - idpf_rx_singleq_csum(rx_q, skb, &csum_bits, ptype); + idpf_rx_singleq_csum(rx_q, skb, csum_bits, parsed); } /** @@ -797,7 +786,7 @@ static void idpf_rx_singleq_flex_csum(struct idpf_queue *rx_q, * @rx_q: Rx completion queue * @skb: skb currently being received and modified * @rx_desc: specific descriptor - * @decoded: Decoded Rx packet type related fields + * @parsed: parsed Rx packet type related fields * * This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32byte * descriptor writeback format. @@ -805,11 +794,11 @@ static void idpf_rx_singleq_flex_csum(struct idpf_queue *rx_q, static void idpf_rx_singleq_base_hash(struct idpf_queue *rx_q, struct sk_buff *skb, union virtchnl2_rx_desc *rx_desc, - struct idpf_rx_ptype_decoded *decoded) + struct libie_rx_ptype_parsed parsed) { u64 mask, qw1; - if (unlikely(!(rx_q->vport->netdev->features & NETIF_F_RXHASH))) + if (!libie_has_rx_hash(rx_q->vport->netdev, parsed)) return; mask = VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH_M; @@ -818,7 +807,7 @@ static void idpf_rx_singleq_base_hash(struct idpf_queue *rx_q, if (FIELD_GET(mask, qw1) == mask) { u32 hash = le32_to_cpu(rx_desc->base_wb.qword0.hi_dword.rss); - skb_set_hash(skb, hash, idpf_ptype_to_htype(decoded)); + libie_skb_set_hash(skb, hash, parsed); } } @@ -827,7 +816,7 @@ static void idpf_rx_singleq_base_hash(struct idpf_queue *rx_q, * @rx_q: Rx completion queue * @skb: skb currently being received and modified * @rx_desc: specific descriptor - * @decoded: Decoded Rx packet type related fields + * @parsed: parsed Rx packet type related fields * * This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible * descriptor writeback format. 
@@ -835,15 +824,17 @@ static void idpf_rx_singleq_base_hash(struct idpf_queue *rx_q, static void idpf_rx_singleq_flex_hash(struct idpf_queue *rx_q, struct sk_buff *skb, union virtchnl2_rx_desc *rx_desc, - struct idpf_rx_ptype_decoded *decoded) + struct libie_rx_ptype_parsed parsed) { - if (unlikely(!(rx_q->vport->netdev->features & NETIF_F_RXHASH))) + if (!libie_has_rx_hash(rx_q->vport->netdev, parsed)) return; if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_M, - le16_to_cpu(rx_desc->flex_nic_wb.status_error0))) - skb_set_hash(skb, le32_to_cpu(rx_desc->flex_nic_wb.rss_hash), - idpf_ptype_to_htype(decoded)); + le16_to_cpu(rx_desc->flex_nic_wb.status_error0))) { + u32 hash = le32_to_cpu(rx_desc->flex_nic_wb.rss_hash); + + libie_skb_set_hash(skb, hash, parsed); + } } /** @@ -863,7 +854,7 @@ static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q, union virtchnl2_rx_desc *rx_desc, u16 ptype) { - struct idpf_rx_ptype_decoded decoded = + struct libie_rx_ptype_parsed parsed = rx_q->vport->rx_ptype_lkup[ptype]; /* modifies the skb - consumes the enet header */ @@ -871,11 +862,11 @@ static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q, /* Check if we're using base mode descriptor IDs */ if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M) { - idpf_rx_singleq_base_hash(rx_q, skb, rx_desc, &decoded); - idpf_rx_singleq_base_csum(rx_q, skb, rx_desc, ptype); + idpf_rx_singleq_base_hash(rx_q, skb, rx_desc, parsed); + idpf_rx_singleq_base_csum(rx_q, skb, rx_desc, parsed); } else { - idpf_rx_singleq_flex_hash(rx_q, skb, rx_desc, &decoded); - idpf_rx_singleq_flex_csum(rx_q, skb, rx_desc, ptype); + idpf_rx_singleq_flex_hash(rx_q, skb, rx_desc, parsed); + idpf_rx_singleq_flex_csum(rx_q, skb, rx_desc, parsed); } } diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 610841dc4512..70785f9afadd 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -2705,51 +2705,27 @@ netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb, return idpf_tx_splitq_frame(skb, tx_q); } -/** - * idpf_ptype_to_htype - get a hash type - * @decoded: Decoded Rx packet type related fields - * - * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by - * skb_set_hash based on PTYPE as parsed by HW Rx pipeline and is part of - * Rx desc. 
- */ -enum pkt_hash_types idpf_ptype_to_htype(const struct idpf_rx_ptype_decoded *decoded) -{ - if (!decoded->known) - return PKT_HASH_TYPE_NONE; - if (decoded->payload_layer == IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2 && - decoded->inner_prot) - return PKT_HASH_TYPE_L4; - if (decoded->payload_layer == IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2 && - decoded->outer_ip) - return PKT_HASH_TYPE_L3; - if (decoded->outer_ip == IDPF_RX_PTYPE_OUTER_L2) - return PKT_HASH_TYPE_L2; - - return PKT_HASH_TYPE_NONE; -} - /** * idpf_rx_hash - set the hash value in the skb * @rxq: Rx descriptor ring packet is being transacted on * @skb: pointer to current skb being populated * @rx_desc: Receive descriptor - * @decoded: Decoded Rx packet type related fields + * @parsed: parsed Rx packet type related fields */ static void idpf_rx_hash(struct idpf_queue *rxq, struct sk_buff *skb, struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, - struct idpf_rx_ptype_decoded *decoded) + struct libie_rx_ptype_parsed parsed) { u32 hash; - if (unlikely(!idpf_is_feature_ena(rxq->vport, NETIF_F_RXHASH))) + if (!libie_has_rx_hash(rxq->vport->netdev, parsed)) return; hash = le16_to_cpu(rx_desc->hash1) | (rx_desc->ff2_mirrid_hash2.hash2 << 16) | (rx_desc->hash3 << 24); - skb_set_hash(skb, hash, idpf_ptype_to_htype(decoded)); + libie_skb_set_hash(skb, hash, parsed); } /** @@ -2757,60 +2733,48 @@ static void idpf_rx_hash(struct idpf_queue *rxq, struct sk_buff *skb, * @rxq: Rx descriptor ring packet is being transacted on * @skb: pointer to current skb being populated * @csum_bits: checksum fields extracted from the descriptor - * @decoded: Decoded Rx packet type related fields + * @parsed: parsed Rx packet type related fields * * skb->protocol must be set before this function is called */ static void idpf_rx_csum(struct idpf_queue *rxq, struct sk_buff *skb, - struct idpf_rx_csum_decoded *csum_bits, - struct idpf_rx_ptype_decoded *decoded) + struct idpf_rx_csum_decoded csum_bits, + struct libie_rx_ptype_parsed parsed) { bool ipv4, ipv6; /* check if Rx checksum is enabled */ - if (unlikely(!idpf_is_feature_ena(rxq->vport, NETIF_F_RXCSUM))) + if (!libie_has_rx_checksum(rxq->vport->netdev, parsed)) return; /* check if HW has decoded the packet and checksum */ - if (!(csum_bits->l3l4p)) + if (!csum_bits.l3l4p) return; - ipv4 = IDPF_RX_PTYPE_TO_IPV(decoded, IDPF_RX_PTYPE_OUTER_IPV4); - ipv6 = IDPF_RX_PTYPE_TO_IPV(decoded, IDPF_RX_PTYPE_OUTER_IPV6); + ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4; + ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6; - if (ipv4 && (csum_bits->ipe || csum_bits->eipe)) + if (ipv4 && (csum_bits.ipe || csum_bits.eipe)) goto checksum_fail; - if (ipv6 && csum_bits->ipv6exadd) + if (ipv6 && csum_bits.ipv6exadd) return; /* check for L4 errors and handle packets that were not able to be * checksummed */ - if (csum_bits->l4e) + if (csum_bits.l4e) goto checksum_fail; - /* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */ - switch (decoded->inner_prot) { - case IDPF_RX_PTYPE_INNER_PROT_ICMP: - case IDPF_RX_PTYPE_INNER_PROT_TCP: - case IDPF_RX_PTYPE_INNER_PROT_UDP: - if (!csum_bits->raw_csum_inv) { - u16 csum = csum_bits->raw_csum; - - skb->csum = csum_unfold((__force __sum16)~swab16(csum)); - skb->ip_summed = CHECKSUM_COMPLETE; - } else { - skb->ip_summed = CHECKSUM_UNNECESSARY; - } - break; - case IDPF_RX_PTYPE_INNER_PROT_SCTP: + if (csum_bits.raw_csum_inv || + parsed.inner_prot == LIBIE_RX_PTYPE_INNER_SCTP) { skb->ip_summed = CHECKSUM_UNNECESSARY; - break; - default: - break; + return; } + skb->csum = 
csum_unfold((__force __sum16)~swab16(csum_bits.raw_csum)); + skb->ip_summed = CHECKSUM_COMPLETE; + return; checksum_fail: @@ -2853,7 +2817,7 @@ static void idpf_rx_splitq_extract_csum_bits(struct virtchnl2_rx_flex_desc_adv_n * @rxq : Rx descriptor ring packet is being transacted on * @skb : pointer to current skb being populated * @rx_desc: Receive descriptor - * @decoded: Decoded Rx packet type related fields + * @parsed: parsed Rx packet type related fields * * Return 0 on success and error code on failure * @@ -2862,21 +2826,21 @@ static void idpf_rx_splitq_extract_csum_bits(struct virtchnl2_rx_flex_desc_adv_n */ static int idpf_rx_rsc(struct idpf_queue *rxq, struct sk_buff *skb, struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, - struct idpf_rx_ptype_decoded *decoded) + struct libie_rx_ptype_parsed parsed) { u16 rsc_segments, rsc_seg_len; bool ipv4, ipv6; int len; - if (unlikely(!decoded->outer_ip)) + if (unlikely(parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_L2)) return -EINVAL; rsc_seg_len = le16_to_cpu(rx_desc->misc.rscseglen); if (unlikely(!rsc_seg_len)) return -EINVAL; - ipv4 = IDPF_RX_PTYPE_TO_IPV(decoded, IDPF_RX_PTYPE_OUTER_IPV4); - ipv6 = IDPF_RX_PTYPE_TO_IPV(decoded, IDPF_RX_PTYPE_OUTER_IPV6); + ipv4 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV4; + ipv6 = parsed.outer_ip == LIBIE_RX_PTYPE_OUTER_IPV6; if (unlikely(!(ipv4 ^ ipv6))) return -EINVAL; @@ -2935,30 +2899,25 @@ static int idpf_rx_process_skb_fields(struct idpf_queue *rxq, struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc) { struct idpf_rx_csum_decoded csum_bits = { }; - struct idpf_rx_ptype_decoded decoded; + struct libie_rx_ptype_parsed parsed; u16 rx_ptype; rx_ptype = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M, le16_to_cpu(rx_desc->ptype_err_fflags0)); - decoded = rxq->vport->rx_ptype_lkup[rx_ptype]; - /* If we don't know the ptype we can't do anything else with it. Just - * pass it up the stack as-is. 
- */ - if (!decoded.known) - return 0; + parsed = rxq->vport->rx_ptype_lkup[rx_ptype]; /* process RSS/hash */ - idpf_rx_hash(rxq, skb, rx_desc, &decoded); + idpf_rx_hash(rxq, skb, rx_desc, parsed); skb->protocol = eth_type_trans(skb, rxq->vport->netdev); if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M, le16_to_cpu(rx_desc->hdrlen_flags))) - return idpf_rx_rsc(rxq, skb, rx_desc, &decoded); + return idpf_rx_rsc(rxq, skb, rx_desc, parsed); idpf_rx_splitq_extract_csum_bits(rx_desc, &csum_bits); - idpf_rx_csum(rxq, skb, &csum_bits, &decoded); + idpf_rx_csum(rxq, skb, csum_bits, parsed); return 0; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index e0660ede58ff..f082d3edeb9c 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -4,6 +4,8 @@ #ifndef _IDPF_TXRX_H_ #define _IDPF_TXRX_H_ +#include + #include #include #include @@ -346,72 +348,6 @@ struct idpf_rx_buf { #define IDPF_RX_MAX_BASE_PTYPE 256 #define IDPF_INVALID_PTYPE_ID 0xFFFF -/* Packet type non-ip values */ -enum idpf_rx_ptype_l2 { - IDPF_RX_PTYPE_L2_RESERVED = 0, - IDPF_RX_PTYPE_L2_MAC_PAY2 = 1, - IDPF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2, - IDPF_RX_PTYPE_L2_FIP_PAY2 = 3, - IDPF_RX_PTYPE_L2_OUI_PAY2 = 4, - IDPF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5, - IDPF_RX_PTYPE_L2_LLDP_PAY2 = 6, - IDPF_RX_PTYPE_L2_ECP_PAY2 = 7, - IDPF_RX_PTYPE_L2_EVB_PAY2 = 8, - IDPF_RX_PTYPE_L2_QCN_PAY2 = 9, - IDPF_RX_PTYPE_L2_EAPOL_PAY2 = 10, - IDPF_RX_PTYPE_L2_ARP = 11, -}; - -enum idpf_rx_ptype_outer_ip { - IDPF_RX_PTYPE_OUTER_L2 = 0, - IDPF_RX_PTYPE_OUTER_IP = 1, -}; - -#define IDPF_RX_PTYPE_TO_IPV(ptype, ipv) \ - (((ptype)->outer_ip == IDPF_RX_PTYPE_OUTER_IP) && \ - ((ptype)->outer_ip_ver == (ipv))) - -enum idpf_rx_ptype_outer_ip_ver { - IDPF_RX_PTYPE_OUTER_NONE = 0, - IDPF_RX_PTYPE_OUTER_IPV4 = 1, - IDPF_RX_PTYPE_OUTER_IPV6 = 2, -}; - -enum idpf_rx_ptype_outer_fragmented { - IDPF_RX_PTYPE_NOT_FRAG = 0, - IDPF_RX_PTYPE_FRAG = 1, -}; - -enum idpf_rx_ptype_tunnel_type { - IDPF_RX_PTYPE_TUNNEL_NONE = 0, - IDPF_RX_PTYPE_TUNNEL_IP_IP = 1, - IDPF_RX_PTYPE_TUNNEL_IP_GRENAT = 2, - IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3, - IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4, -}; - -enum idpf_rx_ptype_tunnel_end_prot { - IDPF_RX_PTYPE_TUNNEL_END_NONE = 0, - IDPF_RX_PTYPE_TUNNEL_END_IPV4 = 1, - IDPF_RX_PTYPE_TUNNEL_END_IPV6 = 2, -}; - -enum idpf_rx_ptype_inner_prot { - IDPF_RX_PTYPE_INNER_PROT_NONE = 0, - IDPF_RX_PTYPE_INNER_PROT_UDP = 1, - IDPF_RX_PTYPE_INNER_PROT_TCP = 2, - IDPF_RX_PTYPE_INNER_PROT_SCTP = 3, - IDPF_RX_PTYPE_INNER_PROT_ICMP = 4, - IDPF_RX_PTYPE_INNER_PROT_TIMESYNC = 5, -}; - -enum idpf_rx_ptype_payload_layer { - IDPF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0, - IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1, - IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2, - IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3, -}; - enum idpf_tunnel_state { IDPF_PTYPE_TUNNEL_IP = BIT(0), IDPF_PTYPE_TUNNEL_IP_GRENAT = BIT(1), @@ -419,22 +355,9 @@ enum idpf_tunnel_state { }; struct idpf_ptype_state { - bool outer_ip; - bool outer_frag; - u8 tunnel_state; -}; - -struct idpf_rx_ptype_decoded { - u32 ptype:10; - u32 known:1; - u32 outer_ip:1; - u32 outer_ip_ver:2; - u32 outer_frag:1; - u32 tunnel_type:3; - u32 tunnel_end_prot:2; - u32 tunnel_end_frag:1; - u32 inner_prot:4; - u32 payload_layer:3; + bool outer_ip:1; + bool outer_frag:1; + u8 tunnel_state:6; }; /** @@ -1014,7 +937,6 @@ int idpf_vport_intr_alloc(struct idpf_vport *vport); void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector); void 
idpf_vport_intr_deinit(struct idpf_vport *vport); int idpf_vport_intr_init(struct idpf_vport *vport); -enum pkt_hash_types idpf_ptype_to_htype(const struct idpf_rx_ptype_decoded *decoded); int idpf_config_rss(struct idpf_vport *vport); int idpf_init_rss(struct idpf_vport *vport); void idpf_deinit_rss(struct idpf_vport *vport); diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index d0cdd63b3d5b..98c904f4dcf5 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -2614,39 +2614,52 @@ int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get) * @frag: fragmentation allowed * */ -static void idpf_fill_ptype_lookup(struct idpf_rx_ptype_decoded *ptype, +static void idpf_fill_ptype_lookup(struct libie_rx_ptype_parsed *ptype, struct idpf_ptype_state *pstate, bool ipv4, bool frag) { if (!pstate->outer_ip || !pstate->outer_frag) { - ptype->outer_ip = IDPF_RX_PTYPE_OUTER_IP; pstate->outer_ip = true; if (ipv4) - ptype->outer_ip_ver = IDPF_RX_PTYPE_OUTER_IPV4; + ptype->outer_ip = LIBIE_RX_PTYPE_OUTER_IPV4; else - ptype->outer_ip_ver = IDPF_RX_PTYPE_OUTER_IPV6; + ptype->outer_ip = LIBIE_RX_PTYPE_OUTER_IPV6; if (frag) { - ptype->outer_frag = IDPF_RX_PTYPE_FRAG; + ptype->outer_frag = LIBIE_RX_PTYPE_FRAG; pstate->outer_frag = true; } } else { - ptype->tunnel_type = IDPF_RX_PTYPE_TUNNEL_IP_IP; + ptype->tunnel_type = LIBIE_RX_PTYPE_TUNNEL_IP_IP; pstate->tunnel_state = IDPF_PTYPE_TUNNEL_IP; if (ipv4) ptype->tunnel_end_prot = - IDPF_RX_PTYPE_TUNNEL_END_IPV4; + LIBIE_RX_PTYPE_TUNNEL_END_IPV4; else ptype->tunnel_end_prot = - IDPF_RX_PTYPE_TUNNEL_END_IPV6; + LIBIE_RX_PTYPE_TUNNEL_END_IPV6; if (frag) - ptype->tunnel_end_frag = IDPF_RX_PTYPE_FRAG; + ptype->tunnel_end_frag = LIBIE_RX_PTYPE_FRAG; } } +static void idpf_finalize_ptype_lookup(struct libie_rx_ptype_parsed *ptype) +{ + if (ptype->payload_layer == LIBIE_RX_PTYPE_PAYLOAD_L2 && + ptype->inner_prot) + ptype->payload_layer = LIBIE_RX_PTYPE_PAYLOAD_L4; + else if (ptype->payload_layer == LIBIE_RX_PTYPE_PAYLOAD_L2 && + ptype->outer_ip) + ptype->payload_layer = LIBIE_RX_PTYPE_PAYLOAD_L3; + else if (ptype->outer_ip == LIBIE_RX_PTYPE_OUTER_L2) + ptype->payload_layer = LIBIE_RX_PTYPE_PAYLOAD_L2; + else + ptype->payload_layer = LIBIE_RX_PTYPE_PAYLOAD_NONE; +} + /** * idpf_send_get_rx_ptype_msg - Send virtchnl for ptype info * @vport: virtual port data structure @@ -2655,7 +2668,7 @@ static void idpf_fill_ptype_lookup(struct idpf_rx_ptype_decoded *ptype, */ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) { - struct idpf_rx_ptype_decoded *ptype_lkup = vport->rx_ptype_lkup; + struct libie_rx_ptype_parsed *ptype_lkup = vport->rx_ptype_lkup; struct virtchnl2_get_ptype_info get_ptype_info; int max_ptype, ptypes_recvd = 0, ptype_offset; struct idpf_adapter *adapter = vport->adapter; @@ -2736,9 +2749,6 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) else k = ptype->ptype_id_8; - if (ptype->proto_id_count) - ptype_lkup[k].known = 1; - for (j = 0; j < ptype->proto_id_count; j++) { id = le16_to_cpu(ptype->proto_id[j]); switch (id) { @@ -2746,18 +2756,18 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) if (pstate.tunnel_state == IDPF_PTYPE_TUNNEL_IP) { ptype_lkup[k].tunnel_type = - IDPF_RX_PTYPE_TUNNEL_IP_GRENAT; + LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT; pstate.tunnel_state |= IDPF_PTYPE_TUNNEL_IP_GRENAT; } break; case VIRTCHNL2_PROTO_HDR_MAC: ptype_lkup[k].outer_ip = - IDPF_RX_PTYPE_OUTER_L2; + LIBIE_RX_PTYPE_OUTER_L2; 
if (pstate.tunnel_state == IDPF_TUN_IP_GRE) { ptype_lkup[k].tunnel_type = - IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC; + LIBIE_RX_PTYPE_TUNNEL_IP_GRENAT_MAC; pstate.tunnel_state |= IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC; } @@ -2784,23 +2794,23 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) break; case VIRTCHNL2_PROTO_HDR_UDP: ptype_lkup[k].inner_prot = - IDPF_RX_PTYPE_INNER_PROT_UDP; + LIBIE_RX_PTYPE_INNER_UDP; break; case VIRTCHNL2_PROTO_HDR_TCP: ptype_lkup[k].inner_prot = - IDPF_RX_PTYPE_INNER_PROT_TCP; + LIBIE_RX_PTYPE_INNER_TCP; break; case VIRTCHNL2_PROTO_HDR_SCTP: ptype_lkup[k].inner_prot = - IDPF_RX_PTYPE_INNER_PROT_SCTP; + LIBIE_RX_PTYPE_INNER_SCTP; break; case VIRTCHNL2_PROTO_HDR_ICMP: ptype_lkup[k].inner_prot = - IDPF_RX_PTYPE_INNER_PROT_ICMP; + LIBIE_RX_PTYPE_INNER_ICMP; break; case VIRTCHNL2_PROTO_HDR_PAY: ptype_lkup[k].payload_layer = - IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2; + LIBIE_RX_PTYPE_PAYLOAD_L2; break; case VIRTCHNL2_PROTO_HDR_ICMPV6: case VIRTCHNL2_PROTO_HDR_IPV6_EH: @@ -2854,6 +2864,8 @@ int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport) break; } } + + idpf_finalize_ptype_lookup(&ptype_lkup[k]); } } From patchwork Sat Dec 23 02:55:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182869 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1459144dyi; Fri, 22 Dec 2023 18:59:40 -0800 (PST) X-Google-Smtp-Source: AGHT+IF8CJlNHXU3ZJESGZMVgkH2sNRJMzKfaLZr2Uab6ht2t0Hyamp2lKFX+96cuKI9mmHs35G/ X-Received: by 2002:a17:902:ed0d:b0:1d3:ee1f:ce4d with SMTP id b13-20020a170902ed0d00b001d3ee1fce4dmr2364860pld.40.1703300379932; Fri, 22 Dec 2023 18:59:39 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300379; cv=none; d=google.com; s=arc-20160816; b=Vwq+73DezpRNIu3zeE16T+pejGKaPS9iMwVQGAPrCKoeXJnCsHJxsVo5GN2myQH7O4 bM4kcsThJLRheu4K2hsHPXibo04OWtZW9wcrHeD4CWpsNbkQgqbRXAaGAnqn7BVZEKGg 5qi6g+sZ3JRoqsTGH5bBTuHbz8rCjKpfPhkYWYe+LlVy3bQtpmtNsH32gBRDkUg/ekK7 zTxoUECoVdB/iAKe0teOjBUYAHzCy5F0byEvwXe5m/UfFz6WGsTsNNv7/r5eDgKMDEKN EGsNzMpJVde+PfAUXAQtIaZ5RqpN1JM01IIAMMvPB/9n7Gv2MaAN+5FnHBpK7A402Rg2 VNeA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=rhxi0osO1encq7Z6SIelU7Pac8duuisMRj6eC7SUz0c=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=HiY8S7vWdWoxeTlH4q71tmBJhDMtOm9d5IrSaUQb+h2yceRROKD0w8BVS7m/+Qeju9 SfTS/BRD53beHF2VDLZfkPbEjqGsxYxAdUPvH95lPOeKC6Q8PfH53E43I8yEivqilBWl ZZAoCEfMdCSvrajJfisoaJJYw6sbw11NfXbEdmTwE2Kz+gzGdLxB5HuyHBLhwbc68DnT xAmpcgpHXl2ZFlGdESODi0coLS196Q/53USNK62/WOD+mZNhDnGXAzKHAJRNdbrRGoH2 7DwEcJPd7ay5YhTK3eQSZUGQyZLacSIxlszJpEDfo+Yr8tU2P0jlFuYIK1SBoEUt7wb2 jusg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=bjZeCkXA; spf=pass (google.com: domain of linux-kernel+bounces-10218-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45e3:2400::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10218-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. 
From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 02/34] idpf: pack &idpf_queue way more efficiently Date: Sat, 23 Dec 2023 03:55:22 +0100 Message-ID: <20231223025554.2316836-3-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786039899006697342 X-GMAIL-MSGID: 1786039899006697342 Currently, sizeof(struct idpf_queue) is 32 Kb. This is due to the 12-bit hashtable declaration at the end of the queue. This HT is needed only for Tx queues when the flow scheduling mode is enabled. But &idpf_queue is unified for all of the queue types, provoking excessive memory usage. Instead, allocate those hashtables dynamically only when needed at the moment of Tx queues initialization. Next, reshuffle queue fields to reduce holes and ensure better cacheline locality. Ideally, &idpf_queue must be split into 4, as lots of fields are used only in 1 or 2 types, but for now, just unionize as much as we can. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 58 ++++++++++++----- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 72 ++++++++++++++------- 2 files changed, 91 insertions(+), 39 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 70785f9afadd..d81eff39a632 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -179,6 +179,9 @@ static int idpf_tx_buf_alloc_all(struct idpf_queue *tx_q) for (i = 0; i < tx_q->desc_count; i++) tx_q->tx_buf[i].compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG; + if (!test_bit(__IDPF_Q_FLOW_SCH_EN, tx_q->flags)) + return 0; + /* Initialize tx buf stack for out-of-order completions if * flow scheduling offload is enabled */ @@ -801,11 +804,16 @@ static int idpf_rx_desc_alloc_all(struct idpf_vport *vport) */ static void idpf_txq_group_rel(struct idpf_vport *vport) { + bool split, flow_sch_en; int i, j; if (!vport->txq_grps) return; + split = idpf_is_queue_model_split(vport->txq_model); + flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS, + VIRTCHNL2_CAP_SPLITQ_QSCHED); + for (i = 0; i < vport->num_txq_grp; i++) { struct idpf_txq_group *txq_grp = &vport->txq_grps[i]; @@ -813,8 +821,15 @@ static void idpf_txq_group_rel(struct idpf_vport *vport) kfree(txq_grp->txqs[j]); txq_grp->txqs[j] = NULL; } + + if (!split) + continue; + kfree(txq_grp->complq); txq_grp->complq = NULL; + + if (flow_sch_en) + kfree(txq_grp->hashes); } kfree(vport->txq_grps); vport->txq_grps = NULL; @@ -1157,20 +1172,22 @@ static void idpf_rxq_set_descids(struct idpf_vport *vport, struct idpf_queue *q) */ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) { - bool flow_sch_en; - int err, i; + bool split, flow_sch_en; + int i; vport->txq_grps = kcalloc(vport->num_txq_grp, sizeof(*vport->txq_grps), GFP_KERNEL); if (!vport->txq_grps) return -ENOMEM; + split = idpf_is_queue_model_split(vport->txq_model); flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS, 
VIRTCHNL2_CAP_SPLITQ_QSCHED); for (i = 0; i < vport->num_txq_grp; i++) { struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; struct idpf_adapter *adapter = vport->adapter; + struct idpf_txq_hash *hashes; int j; tx_qgrp->vport = vport; @@ -1179,10 +1196,16 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) for (j = 0; j < tx_qgrp->num_txq; j++) { tx_qgrp->txqs[j] = kzalloc(sizeof(*tx_qgrp->txqs[j]), GFP_KERNEL); - if (!tx_qgrp->txqs[j]) { - err = -ENOMEM; + if (!tx_qgrp->txqs[j]) goto err_alloc; - } + } + + if (split && flow_sch_en) { + hashes = kcalloc(num_txq, sizeof(*hashes), GFP_KERNEL); + if (!hashes) + goto err_alloc; + + tx_qgrp->hashes = hashes; } for (j = 0; j < tx_qgrp->num_txq; j++) { @@ -1194,22 +1217,26 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) q->tx_min_pkt_len = idpf_get_min_tx_pkt_len(adapter); q->vport = vport; q->txq_grp = tx_qgrp; - hash_init(q->sched_buf_hash); - if (flow_sch_en) - set_bit(__IDPF_Q_FLOW_SCH_EN, q->flags); + if (!flow_sch_en) + continue; + + if (split) { + q->sched_buf_hash = &hashes[j]; + hash_init(q->sched_buf_hash->hash); + } + + set_bit(__IDPF_Q_FLOW_SCH_EN, q->flags); } - if (!idpf_is_queue_model_split(vport->txq_model)) + if (!split) continue; tx_qgrp->complq = kcalloc(IDPF_COMPLQ_PER_GROUP, sizeof(*tx_qgrp->complq), GFP_KERNEL); - if (!tx_qgrp->complq) { - err = -ENOMEM; + if (!tx_qgrp->complq) goto err_alloc; - } tx_qgrp->complq->dev = &adapter->pdev->dev; tx_qgrp->complq->desc_count = vport->complq_desc_count; @@ -1225,7 +1252,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) err_alloc: idpf_txq_group_rel(vport); - return err; + return -ENOMEM; } /** @@ -1512,7 +1539,7 @@ static void idpf_tx_clean_stashed_bufs(struct idpf_queue *txq, u16 compl_tag, struct hlist_node *tmp_buf; /* Buffer completion */ - hash_for_each_possible_safe(txq->sched_buf_hash, stash, tmp_buf, + hash_for_each_possible_safe(txq->sched_buf_hash->hash, stash, tmp_buf, hlist, compl_tag) { if (unlikely(stash->buf.compl_tag != (int)compl_tag)) continue; @@ -1567,7 +1594,8 @@ static int idpf_stash_flow_sch_buffers(struct idpf_queue *txq, stash->buf.compl_tag = tx_buf->compl_tag; /* Add buffer to buf_hash table to be freed later */ - hash_add(txq->sched_buf_hash, &stash->hlist, stash->buf.compl_tag); + hash_add(txq->sched_buf_hash->hash, &stash->hlist, + stash->buf.compl_tag); memset(tx_buf, 0, sizeof(struct idpf_tx_buf)); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index f082d3edeb9c..4a97790cbf68 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -623,7 +623,6 @@ struct idpf_queue { struct idpf_txq_group *txq_grp; struct idpf_rxq_group *rxq_grp; }; - u16 idx; void __iomem *tail; union { struct idpf_tx_buf *tx_buf; @@ -634,7 +633,8 @@ struct idpf_queue { } rx_buf; }; struct page_pool *pp; - struct sk_buff *skb; + void *desc_ring; + u16 idx; u16 q_type; u32 q_id; u16 desc_count; @@ -644,38 +644,57 @@ struct idpf_queue { u16 next_to_alloc; DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS); + struct idpf_q_vector *q_vector; + union idpf_queue_stats q_stats; struct u64_stats_sync stats_sync; - u32 cleaned_bytes; - u16 cleaned_pkts; + union { + /* Rx */ + struct { + u64 rxdids; + u8 rx_buffer_low_watermark; + bool rx_hsplit_en:1; + u16 rx_hbuf_size; + u16 rx_buf_size; + u16 rx_max_pkt_size; + u16 rx_buf_stride; + }; + /* Tx */ + struct { + u32 cleaned_bytes; + u16 cleaned_pkts; - bool 
rx_hsplit_en; - u16 rx_hbuf_size; - u16 rx_buf_size; - u16 rx_max_pkt_size; - u16 rx_buf_stride; - u8 rx_buffer_low_watermark; - u64 rxdids; - struct idpf_q_vector *q_vector; - unsigned int size; - dma_addr_t dma; - void *desc_ring; + u16 tx_max_bufs; + u8 tx_min_pkt_len; - u16 tx_max_bufs; - u8 tx_min_pkt_len; + u32 num_completions; - u32 num_completions; + struct idpf_buf_lifo buf_stack; + }; + }; - struct idpf_buf_lifo buf_stack; + union { + /* Rx */ + struct { + struct sk_buff *skb; + }; + /* Tx */ + struct { + u16 compl_tag_bufid_m; + u16 compl_tag_gen_s; - u16 compl_tag_bufid_m; - u16 compl_tag_gen_s; + u16 compl_tag_cur_gen; + u16 compl_tag_gen_max; - u16 compl_tag_cur_gen; - u16 compl_tag_gen_max; + struct idpf_txq_hash *sched_buf_hash; + }; + }; - DECLARE_HASHTABLE(sched_buf_hash, 12); + /* Slowpath */ + + dma_addr_t dma; + unsigned int size; } ____cacheline_internodealigned_in_smp; /** @@ -768,6 +787,10 @@ struct idpf_rxq_group { }; }; +struct idpf_txq_hash { + DECLARE_HASHTABLE(hash, 12); +}; + /** * struct idpf_txq_group * @vport: Vport back pointer @@ -787,6 +810,7 @@ struct idpf_txq_group { u16 num_txq; struct idpf_queue *txqs[IDPF_LARGE_MAX_Q]; + struct idpf_txq_hash *hashes; struct idpf_queue *complq; From patchwork Sat Dec 23 02:55:23 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182870 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1459228dyi; Fri, 22 Dec 2023 18:59:53 -0800 (PST) X-Google-Smtp-Source: AGHT+IE2W2BAQ1Qm0w2tGT9kb8ciGw456DGDtdzzUk5nVzNwhxREIX5lYneSnuj+5ldKhq0YVLrv X-Received: by 2002:a05:620a:3784:b0:781:1b48:5581 with SMTP id pi4-20020a05620a378400b007811b485581mr2342908qkn.122.1703300393034; Fri, 22 Dec 2023 18:59:53 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300393; cv=none; d=google.com; s=arc-20160816; b=zLj6z47to+D/7NSNKGJGYmC+w0FysLK8fsx+UDrKLof7hMWr2wuToFHEbkqJqIwsD0 h0vjOwYrfFa/XQejb0b8AJc8aJ1ua12kalP1/9h3GtOQLdENI6KXhPcP2FX31qWFFEHi zlezGBzc3G2T+C+jBKWw3D+X7/gFYMDFuhUzmTwokBX7aKRhwuy0PgXnK9LR1t8wTFt2 aV9bM3705+i0ez0cuflNXGlApL9qKDIGT6VUEUbcJxXR6Ao4+qTEZRuBjP2wBlw9HzBL hJ6kMCASsFyJzzqBrGXKRkD7JfGqqiwZkNyl+MOuaZC7REk2oekMENQe55MQxr0YBPDK DYdA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=FlRQmpbf1Wgz/2/INfA1I5Nv0GdHyrQQjsTkWUm+8lo=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=eWy1r1Qly39MkrLFTTKzlfb0mY9CveD/X6sibAYSMlU7EBXbiSe8egWEEph1VGCzwV mMWW31sUJI+llYK44id6kJSo0z6yQv0EN0appa8T7ywSKI0SeLycdFVJDI1x/js54kpm WcoAsiZiqeq5jZjtyo3ePt+EM4ayZ71+BQVQMBTWwTgen6QNHhs0oYGWIk0kGL6F1poG pTDO69F5HDza5QYw5hhZGlXkV/6kfJXjbYB1qKO042xaKxCDS+T9cGqTZPeaKYESBGXm gxiH0QzvTiPnN5WPVqhykxZfam/81K/GkD26lcZ3D5Nve2HhUYIg93uOYWa6aQlwbPFL SL8Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=hv9LU3pd; spf=pass (google.com: domain of linux-kernel+bounces-10219-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10219-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 03/34] idpf: remove legacy Page Pool Ethtool stats Date: Sat, 23 Dec 2023 03:55:23 +0100 Message-ID: <20231223025554.2316836-4-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786039913226849756 X-GMAIL-MSGID: 1786039913226849756 Page Pool Ethtool stats are deprecated since the Netlink Page Pool interface introduction. idpf receives big changes in Rx buffer management, including &page_pool layout, so keeping these deprecated stats does only harm, not speaking of that CONFIG_IDPF selects CONFIG_PAGE_POOL_STATS unconditionally, while the latter is often turned off for better performance. Remove all the references to PP stats from the Ethtool code. The stats are still available in their full via the generic Netlink interface. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/Kconfig | 1 - .../net/ethernet/intel/idpf/idpf_ethtool.c | 29 +------------------ 2 files changed, 1 insertion(+), 29 deletions(-) diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index 0db1aa36866e..d2e9bef2e0cb 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -380,7 +380,6 @@ config IDPF select DIMLIB select LIBIE select PAGE_POOL - select PAGE_POOL_STATS help This driver supports Intel(R) Infrastructure Data Path Function devices. 
diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c index 986d429d1175..da7963f27bd8 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c @@ -571,8 +571,6 @@ static void idpf_get_stat_strings(struct net_device *netdev, u8 *data) for (i = 0; i < vport_config->max_q.max_rxq; i++) idpf_add_qstat_strings(&data, idpf_gstrings_rx_queue_stats, "rx", i); - - page_pool_ethtool_stats_get_strings(data); } /** @@ -606,7 +604,6 @@ static int idpf_get_sset_count(struct net_device *netdev, int sset) struct idpf_netdev_priv *np = netdev_priv(netdev); struct idpf_vport_config *vport_config; u16 max_txq, max_rxq; - unsigned int size; if (sset != ETH_SS_STATS) return -EINVAL; @@ -625,11 +622,8 @@ static int idpf_get_sset_count(struct net_device *netdev, int sset) max_txq = vport_config->max_q.max_txq; max_rxq = vport_config->max_q.max_rxq; - size = IDPF_PORT_STATS_LEN + (IDPF_TX_QUEUE_STATS_LEN * max_txq) + + return IDPF_PORT_STATS_LEN + (IDPF_TX_QUEUE_STATS_LEN * max_txq) + (IDPF_RX_QUEUE_STATS_LEN * max_rxq); - size += page_pool_ethtool_stats_get_count(); - - return size; } /** @@ -877,7 +871,6 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, { struct idpf_netdev_priv *np = netdev_priv(netdev); struct idpf_vport_config *vport_config; - struct page_pool_stats pp_stats = { }; struct idpf_vport *vport; unsigned int total = 0; unsigned int i, j; @@ -947,32 +940,12 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, idpf_add_empty_queue_stats(&data, qtype); else idpf_add_queue_stats(&data, rxq); - - /* In splitq mode, don't get page pool stats here since - * the pools are attached to the buffer queues - */ - if (is_splitq) - continue; - - if (rxq) - page_pool_get_stats(rxq->pp, &pp_stats); - } - } - - for (i = 0; i < vport->num_rxq_grp; i++) { - for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { - struct idpf_queue *rxbufq = - &vport->rxq_grps[i].splitq.bufq_sets[j].bufq; - - page_pool_get_stats(rxbufq->pp, &pp_stats); } } for (; total < vport_config->max_q.max_rxq; total++) idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_RX); - page_pool_ethtool_stats_get(data, &pp_stats); - rcu_read_unlock(); idpf_vport_ctrl_unlock(netdev); From patchwork Sat Dec 23 02:55:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182871 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1459334dyi; Fri, 22 Dec 2023 19:00:12 -0800 (PST) X-Google-Smtp-Source: AGHT+IG77dUoqp3UVTBcyxFogXnI12bFiMpPXCLyGvAhhgQDFzxmNpTPIevHu4mbv02CV/26Fo6N X-Received: by 2002:ac8:5b8e:0:b0:427:95fa:5a45 with SMTP id a14-20020ac85b8e000000b0042795fa5a45mr3161582qta.16.1703300412716; Fri, 22 Dec 2023 19:00:12 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300412; cv=none; d=google.com; s=arc-20160816; b=Ahw35CJkjpSbCscrH7lI/kZ2vPvt0fIxDFGS2LfCN69UPBDL+uzvKnFHrv6EpsyO1x /DzobpKCQafr9eg9cQ1KSAl1/O6uE/HbObhCmvTpUa0MlWXEoQXetkudQd5ZHn1u1B3W Sx4rWMoy4uTjJkVybTbNH2Ei076PslHKSh4QvCoDfFf+gXP9J7HZtMlK5kj/i8RLiTpn v7UdgyxxRRyG63HPA2xPoKvqY8Xpa8+4OGHNXByMPEEIhCv4jU3HD1a027jzxTFlj9e4 JDf5tIAXj+10bnQwoUh+S24z6VUjS/f27jNwayns4+3DvfcYLKm87Mxb5ktHGf8IphmQ qr1Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe 
X-IronPort-AV: E=McAfee;i="6600,9927,10932"; a="386610796" X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="386610796" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 18:58:18 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537414" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 18:58:15 -0800 From: Alexander Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 04/34] libie: support different types of buffers for Rx Date: Sat, 23 Dec 2023 03:55:24 +0100 Message-ID: <20231223025554.2316836-5-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786039933417333134 X-GMAIL-MSGID: 1786039933417333134 Unlike previous generations, idpf requires more buffer types for optimal performance. This includes: header buffers, short buffers, and no-overhead buffers (w/o headroom and tailroom, for TCP zerocopy when the header split is enabled). Introduce libie Rx buffer type and calculate page_pool params accordingly. All the HW-related details like buffer alignment are still accounted. For the header buffers, pick 256 bytes as in most places in the kernel (have you ever seen frames with bigger headers?). Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/libie/rx.c | 107 +++++++++++++++++++++++--- include/linux/net/intel/libie/rx.h | 19 +++++ 2 files changed, 115 insertions(+), 11 deletions(-) diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c index 610f16043bcf..3d3b19d2b40d 100644 --- a/drivers/net/ethernet/intel/libie/rx.c +++ b/drivers/net/ethernet/intel/libie/rx.c @@ -6,14 +6,14 @@ /* Rx buffer management */ /** - * libie_rx_hw_len - get the actual buffer size to be passed to HW + * libie_rx_hw_len_mtu - get the actual buffer size to be passed to HW * @pp: &page_pool_params of the netdev to calculate the size for * * Return: HW-writeable length per one buffer to pass it to the HW accounting: * MTU the @dev has, HW required alignment, minimum and maximum allowed values, * and system's page size. */ -static u32 libie_rx_hw_len(const struct page_pool_params *pp) +static u32 libie_rx_hw_len_mtu(const struct page_pool_params *pp) { u32 len; @@ -24,6 +24,96 @@ static u32 libie_rx_hw_len(const struct page_pool_params *pp) return len; } +/** + * libie_rx_hw_len_truesize - get the short buffer size to be passed to HW + * @pp: &page_pool_params of the netdev to calculate the size for + * @truesize: desired truesize for the buffers + * + * Return: HW-writeable length per one buffer to pass it to the HW ignoring the + * MTU and closest to the passed truesize. Can be used for "short" buffer + * queues to fragment pages more efficiently. 
+ */ +static u32 libie_rx_hw_len_truesize(const struct page_pool_params *pp, + u32 truesize) +{ + u32 min, len; + + min = SKB_HEAD_ALIGN(pp->offset + LIBIE_RX_BUF_LEN_ALIGN); + truesize = clamp(roundup_pow_of_two(truesize), roundup_pow_of_two(min), + PAGE_SIZE << LIBIE_RX_PAGE_ORDER); + + len = SKB_WITH_OVERHEAD(truesize - pp->offset); + len = ALIGN_DOWN(len, LIBIE_RX_BUF_LEN_ALIGN); + len = clamp(len, LIBIE_MIN_RX_BUF_LEN, pp->max_len); + + return len; +} + +static void libie_rx_page_pool_params(struct libie_buf_queue *bq, + struct page_pool_params *pp) +{ + pp->offset = LIBIE_SKB_HEADROOM; + /* HW-writeable / syncable length per one page */ + pp->max_len = LIBIE_RX_BUF_LEN(pp->offset); + + /* HW-writeable length per buffer */ + switch (bq->type) { + case LIBIE_RX_BUF_MTU: + bq->rx_buf_len = libie_rx_hw_len_mtu(pp); + break; + case LIBIE_RX_BUF_SHORT: + bq->rx_buf_len = libie_rx_hw_len_truesize(pp, bq->truesize); + break; + case LIBIE_RX_BUF_HDR: + bq->rx_buf_len = ALIGN(LIBIE_MAX_HEAD, LIBIE_RX_BUF_LEN_ALIGN); + break; + default: + break; + } + + /* Buffer size to allocate */ + bq->truesize = roundup_pow_of_two(SKB_HEAD_ALIGN(pp->offset + + bq->rx_buf_len)); +} + +/** + * libie_rx_page_pool_params_zc - calculate params without the stack overhead + * @bq: buffer queue to calculate the size for + * @pp: &page_pool_params of the netdev + * + * Adjusts the PP params to exclude the stack overhead and sets both the buffer + * lengh and the truesize, which are equal for the data buffers. Note that this + * requires separate header buffers to be always active and account the + * overhead. + * With the MTU == ``PAGE_SIZE``, this allows the kernel to enable the zerocopy + * mode. + */ +static bool libie_rx_page_pool_params_zc(struct libie_buf_queue *bq, + struct page_pool_params *pp) +{ + u32 mtu; + + pp->offset = 0; + pp->max_len = PAGE_SIZE << LIBIE_RX_PAGE_ORDER; + + switch (bq->type) { + case LIBIE_RX_BUF_MTU: + mtu = READ_ONCE(pp->netdev->mtu); + break; + case LIBIE_RX_BUF_SHORT: + mtu = bq->truesize; + break; + default: + return false; + } + + bq->rx_buf_len = clamp(roundup_pow_of_two(mtu), LIBIE_RX_BUF_LEN_ALIGN, + pp->max_len); + bq->truesize = bq->rx_buf_len; + + return true; +} + /** * libie_rx_page_pool_create - create a PP with the default libie settings * @bq: buffer queue struct to fill @@ -43,17 +133,12 @@ int libie_rx_page_pool_create(struct libie_buf_queue *bq, .netdev = napi->dev, .napi = napi, .dma_dir = DMA_FROM_DEVICE, - .offset = LIBIE_SKB_HEADROOM, }; - /* HW-writeable / syncable length per one page */ - pp.max_len = LIBIE_RX_BUF_LEN(pp.offset); - - /* HW-writeable length per buffer */ - bq->rx_buf_len = libie_rx_hw_len(&pp); - /* Buffer size to allocate */ - bq->truesize = roundup_pow_of_two(SKB_HEAD_ALIGN(pp.offset + - bq->rx_buf_len)); + if (!bq->hsplit) + libie_rx_page_pool_params(bq, &pp); + else if (!libie_rx_page_pool_params_zc(bq, &pp)) + return -EINVAL; bq->pp = page_pool_create(&pp); diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h index 0d6bce19ad6b..87ad8f9e89c7 100644 --- a/include/linux/net/intel/libie/rx.h +++ b/include/linux/net/intel/libie/rx.h @@ -19,6 +19,8 @@ #define LIBIE_MAX_HEADROOM LIBIE_SKB_HEADROOM /* Link layer / L2 overhead: Ethernet, 2 VLAN tags (C + S), FCS */ #define LIBIE_RX_LL_LEN (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN) +/* Maximum supported L2-L4 header length */ +#define LIBIE_MAX_HEAD 256 /* Always use order-0 pages */ #define LIBIE_RX_PAGE_ORDER 0 @@ -64,6 +66,18 @@ struct libie_rx_buffer { u32 
truesize; }; +/** + * enum libie_rx_buf_type - enum representing types of Rx buffers + * @LIBIE_RX_BUF_MTU: buffer size is determined by MTU + * @LIBIE_RX_BUF_SHORT: buffer size is smaller than MTU, for short frames + * @LIBIE_RX_BUF_HDR: buffer size is ```LIBIE_MAX_HEAD```-sized, for headers + */ +enum libie_rx_buf_type { + LIBIE_RX_BUF_MTU = 0U, + LIBIE_RX_BUF_SHORT, + LIBIE_RX_BUF_HDR, +}; + /** * struct libie_buf_queue - structure representing a buffer queue * @pp: &page_pool for buffer management @@ -71,6 +85,8 @@ struct libie_rx_buffer { * @truesize: size to allocate per buffer, w/overhead * @count: number of descriptors/buffers the queue has * @rx_buf_len: HW-writeable length per each buffer + * @type: type of the buffers this queue has + * @hsplit: flag whether header split is enabled */ struct libie_buf_queue { struct page_pool *pp; @@ -81,6 +97,9 @@ struct libie_buf_queue { /* Cold fields */ u32 rx_buf_len; + enum libie_rx_buf_type type:2; + + bool hsplit:1; }; int libie_rx_page_pool_create(struct libie_buf_queue *bq, From patchwork Sat Dec 23 02:55:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182872 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1459484dyi; Fri, 22 Dec 2023 19:00:32 -0800 (PST) X-Google-Smtp-Source: AGHT+IFjbHhDmOcRF+B3ccss1pQ/vMTSqwVS4S3vvhZf7iuKbyTelOlaoTmn5KiUyGs8WMAO2HVJ X-Received: by 2002:a05:6e02:1e08:b0:35f:ced5:555c with SMTP id g8-20020a056e021e0800b0035fced5555cmr3238543ila.77.1703300432312; Fri, 22 Dec 2023 19:00:32 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300432; cv=none; d=google.com; s=arc-20160816; b=0CT7zF0yJ7+EGsVfhCPMK1ydMXSxc+8itK7gYqmrQCsGMrrwvRLsd6CjqlhfwkAHOo Fe3KP5G9g80FVnVjlEkM09AnVchl6vtOnVr1RdUieDrtMOatmlM8ruZMSD9t1r7Z395O knFxHALtsL/G7uFGhGuSVSiCrOIMICScwCj4WB7FJT8dZfWlJUCkFZvQdtE3FA+ECUk+ q+ii0/syNC32mMS+3Wt4ITaWoD9Axa6BzIev6/ZV2K6TZeBlDL+J37Oxi9MNiabCXQwK xn4afarB4gxBQJx5mxvW9KE+0bx09sE6kTS+2aogTlD5Hw8UmLoSHZXencq/bvVU4wEv cEzg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=fawNN0u/+eJeVHkHs5uUAEeKsqBcjGvmIYoMnfT+RN4=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=vDM9ruj7RGkPHlC7bGmL+7J40JHsB/WSu/KYaN1ZvDSB3lM9LxLX6gDEbeE9Hb+1ws a8rC0hCXgDemLi6Y4uADRyjFPBRK7R6MHzTGfi/AaSDSPCokvuZGvTTStBD/8QiiodP0 InXGHPzATVbsFOy7UEq56iJkm3mUVO5xp2nGMmmYC50OAEW3J1umd8EY8anulkWSav/9 9ajM7TaOh2lnrQixsfJabg0B1m2uNDJp3JDJ/uoZGK7YAyB8CcmwH4gQ8VUF4CPYp0sz ZDwM+R64OITGNo1wQvY99UAR0JPuKFQaTR0fUOJFWM9jSTiX81lC1vpbO7dLBc9y/X0Z K3lg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=h4TRcaBI; spf=pass (google.com: domain of linux-kernel+bounces-10221-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45e3:2400::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10221-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. 
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 05/34] idpf: convert header split mode to libie + napi_build_skb() Date: Sat, 23 Dec 2023 03:55:25 +0100 Message-ID: <20231223025554.2316836-6-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786039953840273832 X-GMAIL-MSGID: 1786039953840273832 Currently, idpf uses the following model for the header buffers: * buffers are allocated via dma_alloc_coherent(); * when receiving, napi_alloc_skb() is called and then the header is copied to the newly allocated linear part. This is far from optimal as DMA coherent zone is slow on many systems and memcpy() neutralizes the idea and benefits of the header split. Instead, use libie to create page_pools for the header buffers, allocate them dynamically and then build an skb via napi_build_skb() around them with no memory copy. With one exception... When you enable header split, you except you'll always have a separate header buffer, so that you could reserve headroom and tailroom only there and then use full buffers for the data. For example, this is how TCP zerocopy works -- you have to have the payload aligned to PAGE_SIZE. The current hardware running idpf does *not* guarantee that you'll always have headers placed separately. For example, on my setup, even ICMP packets are written as one piece to the data buffers. You can't build a valid skb around a data buffer in this case. To not complicate things and not lose TCP zerocopy etc., when such thing happens, use the empty header buffer and pull either full frame (if it's short) or the Ethernet header there and build an skb around it. GRO layer will pull more from the data buffer later. This W/A will hopefully be removed one day. 
Signed-off-by: Alexander Lobakin --- .../ethernet/intel/idpf/idpf_singleq_txrx.c | 2 + drivers/net/ethernet/intel/idpf/idpf_txrx.c | 219 +++++++++++------- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 23 +- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 14 +- 4 files changed, 159 insertions(+), 99 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index e58e08c9997d..53ff572ce252 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -868,6 +868,8 @@ static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q, idpf_rx_singleq_flex_hash(rx_q, skb, rx_desc, parsed); idpf_rx_singleq_flex_csum(rx_q, skb, rx_desc, parsed); } + + skb_record_rx_queue(skb, rx_q->idx); } /** diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index d81eff39a632..f696fd9839fc 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -331,18 +331,17 @@ static int idpf_tx_desc_alloc_all(struct idpf_vport *vport) /** * idpf_rx_page_rel - Release an rx buffer page - * @rxq: the queue that owns the buffer * @rx_buf: the buffer to free */ -static void idpf_rx_page_rel(struct idpf_queue *rxq, struct idpf_rx_buf *rx_buf) +static void idpf_rx_page_rel(struct libie_rx_buffer *rx_buf) { if (unlikely(!rx_buf->page)) return; - page_pool_put_full_page(rxq->pp, rx_buf->page, false); + page_pool_put_full_page(rx_buf->page->pp, rx_buf->page, false); rx_buf->page = NULL; - rx_buf->page_offset = 0; + rx_buf->offset = 0; } /** @@ -351,13 +350,17 @@ static void idpf_rx_page_rel(struct idpf_queue *rxq, struct idpf_rx_buf *rx_buf) */ static void idpf_rx_hdr_buf_rel_all(struct idpf_queue *rxq) { - struct idpf_adapter *adapter = rxq->vport->adapter; + struct libie_buf_queue bq = { + .pp = rxq->hdr_pp, + }; + + for (u32 i = 0; i < rxq->desc_count; i++) + idpf_rx_page_rel(&rxq->rx_buf.hdr_buf[i]); + + libie_rx_page_pool_destroy(&bq); + rxq->hdr_pp = bq.pp; - dma_free_coherent(&adapter->pdev->dev, - rxq->desc_count * IDPF_HDR_BUF_SIZE, - rxq->rx_buf.hdr_buf_va, - rxq->rx_buf.hdr_buf_pa); - rxq->rx_buf.hdr_buf_va = NULL; + kfree(rxq->rx_buf.hdr_buf); } /** @@ -374,7 +377,7 @@ static void idpf_rx_buf_rel_all(struct idpf_queue *rxq) /* Free all the bufs allocated and given to hw on Rx queue */ for (i = 0; i < rxq->desc_count; i++) - idpf_rx_page_rel(rxq, &rxq->rx_buf.buf[i]); + idpf_rx_page_rel(&rxq->rx_buf.buf[i]); if (rxq->rx_hsplit_en) idpf_rx_hdr_buf_rel_all(rxq); @@ -484,17 +487,33 @@ void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val) */ static int idpf_rx_hdr_buf_alloc_all(struct idpf_queue *rxq) { - struct idpf_adapter *adapter = rxq->vport->adapter; - - rxq->rx_buf.hdr_buf_va = - dma_alloc_coherent(&adapter->pdev->dev, - IDPF_HDR_BUF_SIZE * rxq->desc_count, - &rxq->rx_buf.hdr_buf_pa, - GFP_KERNEL); - if (!rxq->rx_buf.hdr_buf_va) + struct libie_buf_queue bq = { + .count = rxq->desc_count, + .type = LIBIE_RX_BUF_HDR, + }; + struct libie_rx_buffer *hdr_buf; + int ret; + + hdr_buf = kcalloc(bq.count, sizeof(*hdr_buf), GFP_KERNEL); + if (!hdr_buf) return -ENOMEM; + rxq->rx_buf.hdr_buf = hdr_buf; + + ret = libie_rx_page_pool_create(&bq, &rxq->q_vector->napi); + if (ret) + goto free_hdr; + + rxq->hdr_pp = bq.pp; + rxq->hdr_truesize = bq.truesize; + rxq->rx_hbuf_size = bq.rx_buf_len; + return 0; + +free_hdr: + kfree(hdr_buf); + + return ret; } /** @@ -529,6 +548,9 @@ static 
void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id) static bool idpf_rx_post_buf_desc(struct idpf_queue *bufq, u16 buf_id) { struct virtchnl2_splitq_rx_buf_desc *splitq_rx_desc = NULL; + struct libie_buf_queue bq = { + .count = bufq->desc_count, + }; u16 nta = bufq->next_to_alloc; struct idpf_rx_buf *buf; dma_addr_t addr; @@ -537,9 +559,15 @@ static bool idpf_rx_post_buf_desc(struct idpf_queue *bufq, u16 buf_id) buf = &bufq->rx_buf.buf[buf_id]; if (bufq->rx_hsplit_en) { - splitq_rx_desc->hdr_addr = - cpu_to_le64(bufq->rx_buf.hdr_buf_pa + - (u32)buf_id * IDPF_HDR_BUF_SIZE); + bq.pp = bufq->hdr_pp; + bq.rx_bi = bufq->rx_buf.hdr_buf; + bq.truesize = bufq->hdr_truesize; + + addr = libie_rx_alloc(&bq, buf_id); + if (addr == DMA_MAPPING_ERROR) + return false; + + splitq_rx_desc->hdr_addr = cpu_to_le64(addr); } addr = idpf_alloc_page(bufq->pp, buf, bufq->rx_buf_size); @@ -1328,11 +1356,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) q->rx_buf_size = vport->bufq_size[j]; q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK; q->rx_buf_stride = IDPF_RX_BUF_STRIDE; - - if (hs) { - q->rx_hsplit_en = true; - q->rx_hbuf_size = IDPF_HDR_BUF_SIZE; - } + q->rx_hsplit_en = hs; bufq_set->num_refillqs = num_rxq; bufq_set->refillqs = kcalloc(num_rxq, swq_size, @@ -1373,10 +1397,7 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) rx_qgrp->splitq.rxq_sets[j]->refillq1 = &rx_qgrp->splitq.bufq_sets[1].refillqs[j]; - if (hs) { - q->rx_hsplit_en = true; - q->rx_hbuf_size = IDPF_HDR_BUF_SIZE; - } + q->rx_hsplit_en = hs; setup_rxq: q->dev = &adapter->pdev->dev; @@ -2947,6 +2968,8 @@ static int idpf_rx_process_skb_fields(struct idpf_queue *rxq, idpf_rx_splitq_extract_csum_bits(rx_desc, &csum_bits); idpf_rx_csum(rxq, skb, csum_bits, parsed); + skb_record_rx_queue(skb, rxq->idx); + return 0; } @@ -2964,7 +2987,7 @@ void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb, unsigned int size) { skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, - rx_buf->page_offset, size, rx_buf->truesize); + rx_buf->offset, size, rx_buf->truesize); rx_buf->page = NULL; } @@ -2987,7 +3010,7 @@ struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq, struct sk_buff *skb; void *va; - va = page_address(rx_buf->page) + rx_buf->page_offset; + va = page_address(rx_buf->page) + rx_buf->offset; /* prefetch first cache line of first page */ net_prefetch(va); @@ -3000,7 +3023,6 @@ struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq, return NULL; } - skb_record_rx_queue(skb, rxq->idx); skb_mark_for_recycle(skb); /* Determine available headroom for copy */ @@ -3019,7 +3041,7 @@ struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq, return skb; } - skb_add_rx_frag(skb, 0, rx_buf->page, rx_buf->page_offset + headlen, + skb_add_rx_frag(skb, 0, rx_buf->page, rx_buf->offset + headlen, size, rx_buf->truesize); /* Since we're giving the page to the stack, clear our reference to it. @@ -3031,36 +3053,31 @@ struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq, } /** - * idpf_rx_hdr_construct_skb - Allocate skb and populate it from header buffer - * @rxq: Rx descriptor queue - * @va: Rx buffer to pull data from + * idpf_rx_build_skb - Allocate skb and populate it from header buffer + * @buf: Rx buffer to pull data from * @size: the length of the packet * * This function allocates an skb. It then populates it with the page data from * the current receive descriptor, taking care to set up the skb correctly. 
- * This specifically uses a header buffer to start building the skb. */ -static struct sk_buff *idpf_rx_hdr_construct_skb(struct idpf_queue *rxq, - const void *va, - unsigned int size) +struct sk_buff *idpf_rx_build_skb(const struct libie_rx_buffer *buf, u32 size) { + u32 hr = buf->page->pp->p.offset; struct sk_buff *skb; + void *va; - /* allocate a skb to store the frags */ - skb = __napi_alloc_skb(&rxq->q_vector->napi, size, GFP_ATOMIC); + va = page_address(buf->page) + buf->offset; + net_prefetch(va + hr); + + skb = napi_build_skb(va, buf->truesize); if (unlikely(!skb)) return NULL; - skb_record_rx_queue(skb, rxq->idx); - - memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long))); - - /* More than likely, a payload fragment, which will use a page from - * page_pool will be added to the SKB so mark it for recycle - * preemptively. And if not, it's inconsequential. - */ skb_mark_for_recycle(skb); + skb_reserve(skb, hr); + __skb_put(skb, size); + return skb; } @@ -3091,6 +3108,26 @@ static bool idpf_rx_splitq_is_eop(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_de IDPF_RXD_EOF_SPLITQ)); } +static u32 idpf_rx_hsplit_wa(struct libie_rx_buffer *hdr, + struct libie_rx_buffer *buf, + u32 data_len) +{ + u32 copy = data_len <= SMP_CACHE_BYTES ? data_len : ETH_HLEN; + const void *src; + void *dst; + + if (!libie_rx_sync_for_cpu(buf, copy)) + return 0; + + dst = page_address(hdr->page) + hdr->offset + hdr->page->pp->p.offset; + src = page_address(buf->page) + buf->offset + buf->page->pp->p.offset; + memcpy(dst, src, ALIGN(copy, sizeof(long))); + + buf->offset += copy; + + return copy; +} + /** * idpf_rx_splitq_clean - Clean completed descriptors from Rx queue * @rxq: Rx descriptor queue to retrieve receive buffer queue @@ -3113,16 +3150,16 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) /* Process Rx packets bounded by budget */ while (likely(total_rx_pkts < budget)) { struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc; + struct idpf_rx_buf *hdr, *rx_buf = NULL; struct idpf_sw_queue *refillq = NULL; struct idpf_rxq_set *rxq_set = NULL; - struct idpf_rx_buf *rx_buf = NULL; union virtchnl2_rx_desc *desc; unsigned int pkt_len = 0; unsigned int hdr_len = 0; u16 gen_id, buf_id = 0; - /* Header buffer overflow only valid for header split */ - bool hbo = false; int bufq_id; + /* Header buffer overflow only valid for header split */ + bool hbo; u8 rxdid; /* get the Rx desc from Rx queue based on 'next_to_clean' */ @@ -3155,26 +3192,6 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) pkt_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M, pkt_len); - hbo = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_M, - rx_desc->status_err0_qw1); - - if (unlikely(hbo)) { - /* If a header buffer overflow, occurs, i.e. header is - * too large to fit in the header split buffer, HW will - * put the entire packet, including headers, in the - * data/payload buffer. 
- */ - u64_stats_update_begin(&rxq->stats_sync); - u64_stats_inc(&rxq->q_stats.rx.hsplit_buf_ovf); - u64_stats_update_end(&rxq->stats_sync); - goto bypass_hsplit; - } - - hdr_len = le16_to_cpu(rx_desc->hdrlen_flags); - hdr_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M, - hdr_len); - -bypass_hsplit: bufq_id = le16_to_cpu(rx_desc->pktlen_gen_bufq_id); bufq_id = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M, bufq_id); @@ -3192,16 +3209,46 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) rx_buf = &rx_bufq->rx_buf.buf[buf_id]; - if (hdr_len) { - const void *va = (u8 *)rx_bufq->rx_buf.hdr_buf_va + - (u32)buf_id * IDPF_HDR_BUF_SIZE; + if (!rx_bufq->hdr_pp) + goto payload; + + hbo = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_M, + rx_desc->status_err0_qw1); + if (likely(!hbo)) + /* If a header buffer overflow, occurs, i.e. header is + * too large to fit in the header split buffer, HW will + * put the entire packet, including headers, in the + * data/payload buffer. + */ +#define __HDR_LEN_MASK VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M + hdr_len = le16_get_bits(rx_desc->hdrlen_flags, + __HDR_LEN_MASK); +#undef __HDR_LEN_MASK + + hdr = &rx_bufq->rx_buf.hdr_buf[buf_id]; + + if (unlikely(!hdr_len && !skb)) { + hdr_len = idpf_rx_hsplit_wa(hdr, rx_buf, pkt_len); + pkt_len -= hdr_len; + + u64_stats_update_begin(&rxq->stats_sync); + u64_stats_inc(&rxq->q_stats.rx.hsplit_buf_ovf); + u64_stats_update_end(&rxq->stats_sync); + } + + if (libie_rx_sync_for_cpu(hdr, hdr_len)) { + skb = idpf_rx_build_skb(hdr, hdr_len); + if (!skb) + break; - skb = idpf_rx_hdr_construct_skb(rxq, va, hdr_len); u64_stats_update_begin(&rxq->stats_sync); u64_stats_inc(&rxq->q_stats.rx.hsplit_pkts); u64_stats_update_end(&rxq->stats_sync); } + hdr->page = NULL; + +payload: if (pkt_len) { idpf_rx_sync_for_cpu(rx_buf, pkt_len); if (skb) @@ -3271,6 +3318,9 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) static int idpf_rx_update_bufq_desc(struct idpf_queue *bufq, u16 refill_desc, struct virtchnl2_splitq_rx_buf_desc *buf_desc) { + struct libie_buf_queue bq = { + .count = bufq->desc_count, + }; struct idpf_rx_buf *buf; dma_addr_t addr; u16 buf_id; @@ -3289,8 +3339,15 @@ static int idpf_rx_update_bufq_desc(struct idpf_queue *bufq, u16 refill_desc, if (!bufq->rx_hsplit_en) return 0; - buf_desc->hdr_addr = cpu_to_le64(bufq->rx_buf.hdr_buf_pa + - (u32)buf_id * IDPF_HDR_BUF_SIZE); + bq.pp = bufq->hdr_pp; + bq.rx_bi = bufq->rx_buf.hdr_buf; + bq.truesize = bufq->hdr_truesize; + + addr = libie_rx_alloc(&bq, buf_id); + if (addr == DMA_MAPPING_ERROR) + return -ENOMEM; + + buf_desc->hdr_addr = cpu_to_le64(addr); return 0; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 4a97790cbf68..357683559b57 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -99,8 +99,6 @@ do { \ #define IDPF_RX_BUF_STRIDE 32 #define IDPF_RX_BUF_POST_STRIDE 16 #define IDPF_LOW_WATERMARK 64 -/* Size of header buffer specifically for header split */ -#define IDPF_HDR_BUF_SIZE 256 #define IDPF_PACKET_HDR_PAD \ (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN * 2) #define IDPF_TX_TSO_MIN_MSS 88 @@ -315,16 +313,10 @@ struct idpf_rx_extracted { #define IDPF_TX_MAX_DESC_DATA_ALIGNED \ ALIGN_DOWN(IDPF_TX_MAX_DESC_DATA, IDPF_TX_MAX_READ_REQ_SIZE) -#define IDPF_RX_DMA_ATTR \ - (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING) #define IDPF_RX_DESC(rxq, i) \ (&(((union virtchnl2_rx_desc *)((rxq)->desc_ring))[i])) -struct idpf_rx_buf { 
- struct page *page; - unsigned int page_offset; - u16 truesize; -}; +#define idpf_rx_buf libie_rx_buffer #define IDPF_RX_MAX_PTYPE_PROTO_IDS 32 #define IDPF_RX_MAX_PTYPE_SZ (sizeof(struct virtchnl2_ptype) + \ @@ -627,13 +619,15 @@ struct idpf_queue { union { struct idpf_tx_buf *tx_buf; struct { + struct libie_rx_buffer *hdr_buf; struct idpf_rx_buf *buf; - dma_addr_t hdr_buf_pa; - void *hdr_buf_va; } rx_buf; }; + struct page_pool *hdr_pp; struct page_pool *pp; void *desc_ring; + + u32 hdr_truesize; u16 idx; u16 q_type; u32 q_id; @@ -885,7 +879,7 @@ static inline dma_addr_t idpf_alloc_page(struct page_pool *pool, unsigned int buf_size) { if (buf_size == IDPF_RX_BUF_2048) - buf->page = page_pool_dev_alloc_frag(pool, &buf->page_offset, + buf->page = page_pool_dev_alloc_frag(pool, &buf->offset, buf_size); else buf->page = page_pool_dev_alloc_pages(pool); @@ -895,7 +889,7 @@ static inline dma_addr_t idpf_alloc_page(struct page_pool *pool, buf->truesize = buf_size; - return page_pool_get_dma_addr(buf->page) + buf->page_offset + + return page_pool_get_dma_addr(buf->page) + buf->offset + pool->p.offset; } @@ -922,7 +916,7 @@ static inline void idpf_rx_sync_for_cpu(struct idpf_rx_buf *rx_buf, u32 len) dma_sync_single_range_for_cpu(pp->p.dev, page_pool_get_dma_addr(page), - rx_buf->page_offset + pp->p.offset, len, + rx_buf->offset + pp->p.offset, len, page_pool_get_dma_dir(pp)); } @@ -970,6 +964,7 @@ void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb, struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq, struct idpf_rx_buf *rx_buf, unsigned int size); +struct sk_buff *idpf_rx_build_skb(const struct libie_rx_buffer *buf, u32 size); bool idpf_init_rx_buf_hw_alloc(struct idpf_queue *rxq, struct idpf_rx_buf *buf); void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val); void idpf_tx_buf_hw_update(struct idpf_queue *tx_q, u32 val, diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 98c904f4dcf5..d599c0199e22 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1636,32 +1636,38 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) num_rxq = rx_qgrp->singleq.num_rxq; for (j = 0; j < num_rxq; j++, k++) { + const struct idpf_bufq_set *sets; struct idpf_queue *rxq; if (!idpf_is_queue_model_split(vport->rxq_model)) { rxq = rx_qgrp->singleq.rxqs[j]; goto common_qi_fields; } + rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq; - qi[k].rx_bufq1_id = - cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[0].bufq.q_id); + sets = rxq->rxq_grp->splitq.bufq_sets; + + qi[k].rx_bufq1_id = cpu_to_le16(sets[0].bufq.q_id); if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { qi[k].bufq2_ena = IDPF_BUFQ2_ENA; qi[k].rx_bufq2_id = - cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[1].bufq.q_id); + cpu_to_le16(sets[1].bufq.q_id); } qi[k].rx_buffer_low_watermark = cpu_to_le16(rxq->rx_buffer_low_watermark); if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC); -common_qi_fields: + rxq->rx_hbuf_size = sets[0].bufq.rx_hbuf_size; + if (rxq->rx_hsplit_en) { qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT); qi[k].hdr_buffer_size = cpu_to_le16(rxq->rx_hbuf_size); } + +common_qi_fields: qi[k].queue_id = cpu_to_le32(rxq->q_id); qi[k].model = cpu_to_le16(vport->rxq_model); qi[k].type = cpu_to_le32(rxq->q_type); From patchwork Sat Dec 23 02:55:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 06/34] idpf: use libie Rx buffer management for payload buffer Date: Sat, 23 Dec 2023 03:55:26 +0100 Message-ID: <20231223025554.2316836-7-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786039975466569821 X-GMAIL-MSGID: 1786039975466569821 idpf uses Page Pool for data buffers with hardcoded buffer lengths of 4k for "classic" buffers and 2k for "short" ones. This is not flexible and does not ensure optimal memory usage. Why would you need 4k buffers when the MTU is 1500? Use libie for the data buffers and don't hardcode any buffer sizes. Let them be calculated from the MTU for "classics" and then divide the truesize by 2 for "short" ones. The memory usage is now greatly reduced and 2 buffer queues starts make sense: on frames <= 1024, you'll recycle (and resync) a page only after 4 HW writes rather than two. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/Kconfig | 1 - drivers/net/ethernet/intel/idpf/idpf.h | 1 - .../ethernet/intel/idpf/idpf_singleq_txrx.c | 24 +-- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 194 ++++++------------ drivers/net/ethernet/intel/idpf/idpf_txrx.h | 70 +------ .../net/ethernet/intel/idpf/idpf_virtchnl.c | 8 +- 6 files changed, 88 insertions(+), 210 deletions(-) diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig index d2e9bef2e0cb..c96d244a1d54 100644 --- a/drivers/net/ethernet/intel/Kconfig +++ b/drivers/net/ethernet/intel/Kconfig @@ -379,7 +379,6 @@ config IDPF depends on PCI_MSI select DIMLIB select LIBIE - select PAGE_POOL help This driver supports Intel(R) Infrastructure Data Path Function devices. 
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 8342df0f4f3d..596ece7df26a 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -381,7 +381,6 @@ struct idpf_vport { u32 rxq_desc_count; u8 num_bufqs_per_qgrp; u32 bufq_desc_count[IDPF_MAX_BUFQS_PER_RXQ_GRP]; - u32 bufq_size[IDPF_MAX_BUFQS_PER_RXQ_GRP]; u16 num_rxq_grp; struct idpf_rxq_group *rxq_grps; u32 rxq_model; diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index 53ff572ce252..63a709743037 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -883,20 +883,24 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q, u16 cleaned_count) { struct virtchnl2_singleq_rx_buf_desc *desc; + const struct libie_buf_queue bq = { + .pp = rx_q->pp, + .rx_bi = rx_q->rx_buf.buf, + .truesize = rx_q->truesize, + .count = rx_q->desc_count, + }; u16 nta = rx_q->next_to_alloc; - struct idpf_rx_buf *buf; if (!cleaned_count) return false; desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, nta); - buf = &rx_q->rx_buf.buf[nta]; do { dma_addr_t addr; - addr = idpf_alloc_page(rx_q->pp, buf, rx_q->rx_buf_size); - if (unlikely(addr == DMA_MAPPING_ERROR)) + addr = libie_rx_alloc(&bq, nta); + if (addr == DMA_MAPPING_ERROR) break; /* Refresh the desc even if buffer_addrs didn't change @@ -906,11 +910,9 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q, desc->hdr_addr = 0; desc++; - buf++; nta++; if (unlikely(nta == rx_q->desc_count)) { desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, 0); - buf = rx_q->rx_buf.buf; nta = 0; } @@ -1031,24 +1033,22 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget) idpf_rx_singleq_extract_fields(rx_q, rx_desc, &fields); rx_buf = &rx_q->rx_buf.buf[ntc]; - if (!fields.size) { - idpf_rx_put_page(rx_buf); + if (!libie_rx_sync_for_cpu(rx_buf, fields.size)) goto skip_data; - } - idpf_rx_sync_for_cpu(rx_buf, fields.size); if (skb) idpf_rx_add_frag(rx_buf, skb, fields.size); else - skb = idpf_rx_construct_skb(rx_q, rx_buf, fields.size); + skb = idpf_rx_build_skb(rx_buf, fields.size); /* exit if we failed to retrieve a buffer */ if (!skb) break; skip_data: - IDPF_SINGLEQ_BUMP_RING_IDX(rx_q, ntc); + rx_buf->page = NULL; + IDPF_SINGLEQ_BUMP_RING_IDX(rx_q, ntc); cleaned_count++; /* skip if it is non EOP desc */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index f696fd9839fc..c44737e243b0 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -369,6 +369,10 @@ static void idpf_rx_hdr_buf_rel_all(struct idpf_queue *rxq) */ static void idpf_rx_buf_rel_all(struct idpf_queue *rxq) { + struct libie_buf_queue bq = { + .pp = rxq->pp, + }; + struct device *dev; u16 i; /* queue already cleared, nothing to do */ @@ -382,8 +386,9 @@ static void idpf_rx_buf_rel_all(struct idpf_queue *rxq) if (rxq->rx_hsplit_en) idpf_rx_hdr_buf_rel_all(rxq); - page_pool_destroy(rxq->pp); - rxq->pp = NULL; + dev = bq.pp->p.dev; + libie_rx_page_pool_destroy(&bq); + rxq->dev = dev; kfree(rxq->rx_buf.buf); rxq->rx_buf.buf = NULL; @@ -552,11 +557,9 @@ static bool idpf_rx_post_buf_desc(struct idpf_queue *bufq, u16 buf_id) .count = bufq->desc_count, }; u16 nta = bufq->next_to_alloc; - struct idpf_rx_buf *buf; dma_addr_t addr; splitq_rx_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, nta); - buf = &bufq->rx_buf.buf[buf_id]; if 
(bufq->rx_hsplit_en) { bq.pp = bufq->hdr_pp; @@ -570,8 +573,12 @@ static bool idpf_rx_post_buf_desc(struct idpf_queue *bufq, u16 buf_id) splitq_rx_desc->hdr_addr = cpu_to_le64(addr); } - addr = idpf_alloc_page(bufq->pp, buf, bufq->rx_buf_size); - if (unlikely(addr == DMA_MAPPING_ERROR)) + bq.pp = bufq->pp; + bq.rx_bi = bufq->rx_buf.buf; + bq.truesize = bufq->truesize; + + addr = libie_rx_alloc(&bq, buf_id); + if (addr == DMA_MAPPING_ERROR) return false; splitq_rx_desc->pkt_addr = cpu_to_le64(addr); @@ -607,28 +614,6 @@ static bool idpf_rx_post_init_bufs(struct idpf_queue *bufq, u16 working_set) return true; } -/** - * idpf_rx_create_page_pool - Create a page pool - * @rxbufq: RX queue to create page pool for - * - * Returns &page_pool on success, casted -errno on failure - */ -static struct page_pool *idpf_rx_create_page_pool(struct idpf_queue *rxbufq) -{ - struct page_pool_params pp = { - .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV, - .order = 0, - .pool_size = rxbufq->desc_count, - .nid = NUMA_NO_NODE, - .dev = rxbufq->vport->netdev->dev.parent, - .max_len = PAGE_SIZE, - .dma_dir = DMA_FROM_DEVICE, - .offset = 0, - }; - - return page_pool_create(&pp); -} - /** * idpf_rx_buf_alloc_all - Allocate memory for all buffer resources * @rxbufq: queue for which the buffers are allocated; equivalent to @@ -676,18 +661,28 @@ static int idpf_rx_buf_alloc_all(struct idpf_queue *rxbufq) /** * idpf_rx_bufs_init - Initialize page pool, allocate rx bufs, and post to HW * @rxbufq: RX queue to create page pool for + * @type: type of Rx buffers to allocate * * Returns 0 on success, negative on failure */ -static int idpf_rx_bufs_init(struct idpf_queue *rxbufq) +static int idpf_rx_bufs_init(struct idpf_queue *rxbufq, + enum libie_rx_buf_type type) { - struct page_pool *pool; + struct libie_buf_queue bq = { + .truesize = rxbufq->truesize, + .count = rxbufq->desc_count, + .type = type, + .hsplit = rxbufq->rx_hsplit_en, + }; + int ret; - pool = idpf_rx_create_page_pool(rxbufq); - if (IS_ERR(pool)) - return PTR_ERR(pool); + ret = libie_rx_page_pool_create(&bq, &rxbufq->q_vector->napi); + if (ret) + return ret; - rxbufq->pp = pool; + rxbufq->pp = bq.pp; + rxbufq->truesize = bq.truesize; + rxbufq->rx_buf_size = bq.rx_buf_len; return idpf_rx_buf_alloc_all(rxbufq); } @@ -700,20 +695,21 @@ static int idpf_rx_bufs_init(struct idpf_queue *rxbufq) */ int idpf_rx_bufs_init_all(struct idpf_vport *vport) { - struct idpf_rxq_group *rx_qgrp; + bool split = idpf_is_queue_model_split(vport->rxq_model); struct idpf_queue *q; int i, j, err; for (i = 0; i < vport->num_rxq_grp; i++) { - rx_qgrp = &vport->rxq_grps[i]; + struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + u32 truesize = 0; /* Allocate bufs for the rxq itself in singleq */ - if (!idpf_is_queue_model_split(vport->rxq_model)) { + if (!split) { int num_rxq = rx_qgrp->singleq.num_rxq; for (j = 0; j < num_rxq; j++) { q = rx_qgrp->singleq.rxqs[j]; - err = idpf_rx_bufs_init(q); + err = idpf_rx_bufs_init(q, LIBIE_RX_BUF_MTU); if (err) return err; } @@ -723,10 +719,18 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport) /* Otherwise, allocate bufs for the buffer queues */ for (j = 0; j < vport->num_bufqs_per_qgrp; j++) { + enum libie_rx_buf_type qt; + q = &rx_qgrp->splitq.bufq_sets[j].bufq; - err = idpf_rx_bufs_init(q); + q->truesize = truesize; + + qt = truesize ? 
LIBIE_RX_BUF_SHORT : LIBIE_RX_BUF_MTU; + + err = idpf_rx_bufs_init(q, qt); if (err) return err; + + truesize = q->truesize >> 1; } } @@ -1009,17 +1013,11 @@ void idpf_vport_init_num_qs(struct idpf_vport *vport, /* Adjust number of buffer queues per Rx queue group. */ if (!idpf_is_queue_model_split(vport->rxq_model)) { vport->num_bufqs_per_qgrp = 0; - vport->bufq_size[0] = IDPF_RX_BUF_2048; return; } vport->num_bufqs_per_qgrp = IDPF_MAX_BUFQS_PER_RXQ_GRP; - /* Bufq[0] default buffer size is 4K - * Bufq[1] default buffer size is 2K - */ - vport->bufq_size[0] = IDPF_RX_BUF_4096; - vport->bufq_size[1] = IDPF_RX_BUF_2048; } /** @@ -1353,7 +1351,6 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) q->vport = vport; q->rxq_grp = rx_qgrp; q->idx = j; - q->rx_buf_size = vport->bufq_size[j]; q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK; q->rx_buf_stride = IDPF_RX_BUF_STRIDE; q->rx_hsplit_en = hs; @@ -1405,14 +1402,9 @@ static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq) q->vport = vport; q->rxq_grp = rx_qgrp; q->idx = (i * num_rxq) + j; - /* In splitq mode, RXQ buffer size should be - * set to that of the first buffer queue - * associated with this RXQ - */ - q->rx_buf_size = vport->bufq_size[0]; q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK; q->rx_max_pkt_size = vport->netdev->mtu + - IDPF_PACKET_HDR_PAD; + LIBIE_RX_LL_LEN; idpf_rxq_set_descids(vport, q); } } @@ -2986,70 +2978,10 @@ static int idpf_rx_process_skb_fields(struct idpf_queue *rxq, void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb, unsigned int size) { - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, - rx_buf->offset, size, rx_buf->truesize); - - rx_buf->page = NULL; -} - -/** - * idpf_rx_construct_skb - Allocate skb and populate it - * @rxq: Rx descriptor queue - * @rx_buf: Rx buffer to pull data from - * @size: the length of the packet - * - * This function allocates an skb. It then populates it with the page - * data from the current receive descriptor, taking care to set up the - * skb correctly. - */ -struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq, - struct idpf_rx_buf *rx_buf, - unsigned int size) -{ - unsigned int headlen; - struct sk_buff *skb; - void *va; - - va = page_address(rx_buf->page) + rx_buf->offset; - - /* prefetch first cache line of first page */ - net_prefetch(va); - /* allocate a skb to store the frags */ - skb = __napi_alloc_skb(&rxq->q_vector->napi, IDPF_RX_HDR_SIZE, - GFP_ATOMIC); - if (unlikely(!skb)) { - idpf_rx_put_page(rx_buf); - - return NULL; - } - - skb_mark_for_recycle(skb); - - /* Determine available headroom for copy */ - headlen = size; - if (headlen > IDPF_RX_HDR_SIZE) - headlen = eth_get_headlen(skb->dev, va, IDPF_RX_HDR_SIZE); - - /* align pull length to size of long to optimize memcpy performance */ - memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long))); - - /* if we exhaust the linear part then add what is left as a frag */ - size -= headlen; - if (!size) { - idpf_rx_put_page(rx_buf); - - return skb; - } + u32 hr = rx_buf->page->pp->p.offset; - skb_add_rx_frag(skb, 0, rx_buf->page, rx_buf->offset + headlen, - size, rx_buf->truesize); - - /* Since we're giving the page to the stack, clear our reference to it. - * We'll get a new one during buffer posting. 
- */ - rx_buf->page = NULL; - - return skb; + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, + rx_buf->offset + hr, size, rx_buf->truesize); } /** @@ -3249,24 +3181,24 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) hdr->page = NULL; payload: - if (pkt_len) { - idpf_rx_sync_for_cpu(rx_buf, pkt_len); - if (skb) - idpf_rx_add_frag(rx_buf, skb, pkt_len); - else - skb = idpf_rx_construct_skb(rxq, rx_buf, - pkt_len); - } else { - idpf_rx_put_page(rx_buf); - } + if (!libie_rx_sync_for_cpu(rx_buf, pkt_len)) + goto skip_data; + + if (skb) + idpf_rx_add_frag(rx_buf, skb, pkt_len); + else + skb = idpf_rx_build_skb(rx_buf, pkt_len); /* exit if we failed to retrieve a buffer */ if (!skb) break; - idpf_rx_post_buf_refill(refillq, buf_id); +skip_data: + rx_buf->page = NULL; + idpf_rx_post_buf_refill(refillq, buf_id); IDPF_RX_BUMP_NTC(rxq, ntc); + /* skip if it is non EOP desc */ if (!idpf_rx_splitq_is_eop(rx_desc)) continue; @@ -3319,18 +3251,18 @@ static int idpf_rx_update_bufq_desc(struct idpf_queue *bufq, u16 refill_desc, struct virtchnl2_splitq_rx_buf_desc *buf_desc) { struct libie_buf_queue bq = { + .pp = bufq->pp, + .rx_bi = bufq->rx_buf.buf, + .truesize = bufq->truesize, .count = bufq->desc_count, }; - struct idpf_rx_buf *buf; dma_addr_t addr; u16 buf_id; buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc); - buf = &bufq->rx_buf.buf[buf_id]; - - addr = idpf_alloc_page(bufq->pp, buf, bufq->rx_buf_size); - if (unlikely(addr == DMA_MAPPING_ERROR)) + addr = libie_rx_alloc(&bq, buf_id); + if (addr == DMA_MAPPING_ERROR) return -ENOMEM; buf_desc->pkt_addr = cpu_to_le64(addr); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 357683559b57..0bbc654a24b9 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -93,14 +93,10 @@ do { \ idx = 0; \ } while (0) -#define IDPF_RX_HDR_SIZE 256 -#define IDPF_RX_BUF_2048 2048 -#define IDPF_RX_BUF_4096 4096 #define IDPF_RX_BUF_STRIDE 32 #define IDPF_RX_BUF_POST_STRIDE 16 #define IDPF_LOW_WATERMARK 64 -#define IDPF_PACKET_HDR_PAD \ - (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN * 2) + #define IDPF_TX_TSO_MIN_MSS 88 /* Minimum number of descriptors between 2 descriptors with the RE bit set; @@ -609,7 +605,6 @@ union idpf_queue_stats { * @sched_buf_hash: Hash table to stores buffers */ struct idpf_queue { - struct device *dev; struct idpf_vport *vport; union { struct idpf_txq_group *txq_grp; @@ -624,10 +619,14 @@ struct idpf_queue { } rx_buf; }; struct page_pool *hdr_pp; - struct page_pool *pp; + union { + struct page_pool *pp; + struct device *dev; + }; void *desc_ring; u32 hdr_truesize; + u32 truesize; u16 idx; u16 q_type; u32 q_id; @@ -866,60 +865,6 @@ static inline void idpf_tx_splitq_build_desc(union idpf_tx_flex_desc *desc, idpf_tx_splitq_build_flow_desc(desc, params, td_cmd, size); } -/** - * idpf_alloc_page - Allocate a new RX buffer from the page pool - * @pool: page_pool to allocate from - * @buf: metadata struct to populate with page info - * @buf_size: 2K or 4K - * - * Returns &dma_addr_t to be passed to HW for Rx, %DMA_MAPPING_ERROR otherwise. 
- */ -static inline dma_addr_t idpf_alloc_page(struct page_pool *pool, - struct idpf_rx_buf *buf, - unsigned int buf_size) -{ - if (buf_size == IDPF_RX_BUF_2048) - buf->page = page_pool_dev_alloc_frag(pool, &buf->offset, - buf_size); - else - buf->page = page_pool_dev_alloc_pages(pool); - - if (!buf->page) - return DMA_MAPPING_ERROR; - - buf->truesize = buf_size; - - return page_pool_get_dma_addr(buf->page) + buf->offset + - pool->p.offset; -} - -/** - * idpf_rx_put_page - Return RX buffer page to pool - * @rx_buf: RX buffer metadata struct - */ -static inline void idpf_rx_put_page(struct idpf_rx_buf *rx_buf) -{ - page_pool_put_page(rx_buf->page->pp, rx_buf->page, - rx_buf->truesize, true); - rx_buf->page = NULL; -} - -/** - * idpf_rx_sync_for_cpu - Synchronize DMA buffer - * @rx_buf: RX buffer metadata struct - * @len: frame length from descriptor - */ -static inline void idpf_rx_sync_for_cpu(struct idpf_rx_buf *rx_buf, u32 len) -{ - struct page *page = rx_buf->page; - struct page_pool *pp = page->pp; - - dma_sync_single_range_for_cpu(pp->p.dev, - page_pool_get_dma_addr(page), - rx_buf->offset + pp->p.offset, len, - page_pool_get_dma_dir(pp)); -} - /** * idpf_vport_intr_set_wb_on_itr - enable descriptor writeback on disabled interrupts * @q_vector: pointer to queue vector struct @@ -961,9 +906,6 @@ void idpf_deinit_rss(struct idpf_vport *vport); int idpf_rx_bufs_init_all(struct idpf_vport *vport); void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb, unsigned int size); -struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq, - struct idpf_rx_buf *rx_buf, - unsigned int size); struct sk_buff *idpf_rx_build_skb(const struct libie_rx_buffer *buf, u32 size); bool idpf_init_rx_buf_hw_alloc(struct idpf_queue *rxq, struct idpf_rx_buf *buf); void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val); diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index d599c0199e22..5c3d7c3534af 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1647,6 +1647,12 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq; sets = rxq->rxq_grp->splitq.bufq_sets; + /* In splitq mode, RXQ buffer size should be + * set to that of the first buffer queue + * associated with this RXQ. 
+ */ + rxq->rx_buf_size = sets[0].bufq.rx_buf_size; + qi[k].rx_bufq1_id = cpu_to_le16(sets[0].bufq.q_id); if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { qi[k].bufq2_ena = IDPF_BUFQ2_ENA; @@ -3297,7 +3303,7 @@ void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q) rss_data->rss_lut_size = le16_to_cpu(vport_msg->rss_lut_size); ether_addr_copy(vport->default_mac_addr, vport_msg->default_mac_addr); - vport->max_mtu = le16_to_cpu(vport_msg->max_mtu) - IDPF_PACKET_HDR_PAD; + vport->max_mtu = le16_to_cpu(vport_msg->max_mtu) - LIBIE_RX_LL_LEN; /* Initialize Tx and Rx profiles for Dynamic Interrupt Moderation */ memcpy(vport->rx_itr_profile, rx_itr, IDPF_DIM_PROFILE_SLOTS); From patchwork Sat Dec 23 02:55:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182874 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1459789dyi; Fri, 22 Dec 2023 19:01:11 -0800 (PST) X-Google-Smtp-Source: AGHT+IGDpJ4AcR/14q9T1QhealmkbiJ7jwDv5G8VDerdxVFX8hXULOvgR9qYwkQs9wPXkxgM9gtU X-Received: by 2002:a50:9f0f:0:b0:553:3b07:620a with SMTP id b15-20020a509f0f000000b005533b07620amr1520298edf.36.1703300471703; Fri, 22 Dec 2023 19:01:11 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300471; cv=none; d=google.com; s=arc-20160816; b=lNwcsZzC3Hq1yNYncLpRs3dMwFFrpwI1/44M3sofcw9dpoa/avRhB+6vdVtzZheWhp Xm3lhTfs8v+Cf8wxpDLxhS2aTSTCpc+iEvompdBCYCWqj/ck4qa+LL+I5mPjdRtwpHrC BxUfPRGP3AQoYQp1e1LzB0VvDJwd7KzEfq07aoaaQr7F49xa5GZgQ2zQx8hIGu0nWehK ftzp9QeSwk4tZyAJ/v/vxNHbzm5NmzfMYvTk3zCPCaKSVZbB8HBaIEBi/l91ZJumo5wd ZPzPmxT86gm5W/GwQNCz8ATt70fnjf50GfNhwXBkKPoW+WtbfL07gt8xu0WXoYg8jb9C Mj+w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=oKQpC9TUFXUYQRc0CVwk5dW01piEjLcYjbhp8NtYFzU=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=BIbxSjbY03VxFZ4ALbd999TLZc+dUV6q60oQoYVsB4Gu42PrRXJSJdnUeWIRucZvbL XDC7lQ61Pg6/D61j2PPcRwawW/W8U7057A0ESGKUfrnpyApiFq2I3Jo3P9KpXeWHkGMo YCpFC9goakqbjKYCe+6qtuR0zUe6/SA7RBAWlymOeKuY7lzBcBdfNegLvIBS+oSXq2wn nWK7pwAgwUcWr9AIA6WRLOPUm4AK8xjbvVuSax1qrE4sBX/AD+FrjjPuwdAuFrhQH0d1 NomCYG3jshPHRjpGDvJImG/eowhI2kzCyD/1rmfU9pqJlwwG43VpTcMB2fi/i9sZyS6W jTtw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=Gq0Wq8hP; spf=pass (google.com: domain of linux-kernel+bounces-10223-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.80.249 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10223-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from am.mirrors.kernel.org (am.mirrors.kernel.org. 
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 07/34] libie: add Tx buffer completion helpers Date: Sat, 23 Dec 2023 03:55:27 +0100 Message-ID: <20231223025554.2316836-8-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786039995018839392 X-GMAIL-MSGID: 1786039995018839392 Software-side Tx buffers for storing DMA, frame size, skb pointers etc. are pretty much generic and every driver defines them the same way. The same can be said for software Tx completions -- same napi_consume_skb()s and all that... Add a couple simple wrappers for doing that to stop repeating the old tale at least within the Intel code. Signed-off-by: Alexander Lobakin --- include/linux/net/intel/libie/tx.h | 88 ++++++++++++++++++++++++++++++ 1 file changed, 88 insertions(+) create mode 100644 include/linux/net/intel/libie/tx.h diff --git a/include/linux/net/intel/libie/tx.h b/include/linux/net/intel/libie/tx.h new file mode 100644 index 000000000000..07a19abb72fd --- /dev/null +++ b/include/linux/net/intel/libie/tx.h @@ -0,0 +1,88 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Intel Corporation. */ + +#ifndef __LIBIE_TX_H +#define __LIBIE_TX_H + +#include +#include + +/** + * enum libie_tx_buf_type - type of &libie_tx_buf to act on Tx completion + * @LIBIE_TX_BUF_EMPTY: unused OR XSk frame, no action required + * @LIBIE_TX_BUF_SLAB: kmalloc-allocated buffer, unmap and kfree() + * @LIBIE_TX_BUF_FRAG: mapped skb OR &xdp_buff frag, only unmap DMA + * @LIBIE_TX_BUF_SKB: &sk_buff, unmap and consume_skb(), update stats + * @LIBIE_TX_BUF_XDP_TX: &skb_shared_info, page_pool_put_full_page(), stats + * @LIBIE_TX_BUF_XDP_XMIT: &xdp_frame, unmap and xdp_return_frame(), stats + * @LIBIE_TX_BUF_XSK_TX: &xdp_buff on XSk queue, xsk_buff_free(), stats + */ +enum libie_tx_buf_type { + LIBIE_TX_BUF_EMPTY = 0U, + LIBIE_TX_BUF_SLAB, + LIBIE_TX_BUF_FRAG, + LIBIE_TX_BUF_SKB, + LIBIE_TX_BUF_XDP_TX, + LIBIE_TX_BUF_XDP_XMIT, + LIBIE_TX_BUF_XSK_TX, +}; + +struct libie_tx_buffer { + void *next_to_watch; + union { + void *raw; + struct sk_buff *skb; + struct skb_shared_info *sinfo; + struct xdp_frame *xdpf; + struct xdp_buff *xdp; + }; + + DEFINE_DMA_UNMAP_ADDR(dma); + DEFINE_DMA_UNMAP_LEN(len); + + u32 bytecount; + u16 gso_segs; + enum libie_tx_buf_type type:16; + + union { + int compl_tag; + bool ctx_entry; + }; +}; + +static inline void libie_tx_complete_buf(struct libie_tx_buffer *buf, + struct device *dev, bool napi, + struct libie_sq_onstack_stats *ss) +{ + switch (buf->type) { + case LIBIE_TX_BUF_EMPTY: + return; + case LIBIE_TX_BUF_SLAB: + case LIBIE_TX_BUF_FRAG: + case LIBIE_TX_BUF_SKB: + dma_unmap_page(dev, dma_unmap_addr(buf, dma), + dma_unmap_len(buf, len), + DMA_TO_DEVICE); + break; + default: + break; + } + + switch (buf->type) { + case LIBIE_TX_BUF_SLAB: + kfree(buf->raw); + break; + case LIBIE_TX_BUF_SKB: + ss->packets += buf->gso_segs; + ss->bytes += buf->bytecount; + + napi_consume_skb(buf->skb, napi); + break; + default: + break; 
+ } + + buf->type = LIBIE_TX_BUF_EMPTY; +} + +#endif /* __LIBIE_TX_H */ From patchwork Sat Dec 23 02:55:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182875 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1459925dyi; Fri, 22 Dec 2023 19:01:30 -0800 (PST) X-Google-Smtp-Source: AGHT+IHrSoKH1ID1iOY00Cmg6y8MHMfppePMpfMrN1cxyAeSr4Mr7eojygEaC4SJmMbThsJutwBs X-Received: by 2002:a05:620a:12d0:b0:77f:8dc1:964a with SMTP id e16-20020a05620a12d000b0077f8dc1964amr2769235qkl.75.1703300489974; Fri, 22 Dec 2023 19:01:29 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300489; cv=none; d=google.com; s=arc-20160816; b=BWT/Fz//y3WFWnQmx6ioEH2NyujvZu1X0WF+YfU+9dRcLu1pqhBxDhZd9+Z+5g26qy 6J3Kpq2AsyIvcHtOqs/lDq30M4lxqvBRhwHyZdtiLo5xt8Cx374dd6gGqq97XSmUiCll T23Su6klU3HBceA9sD+Ga8aOjZJ9V+n2fVWzzmQrK+uGeuL+RVIgyVvBICNQ1DUtmWoo YrwUTIxbcTzBRL6bPEaUaaeGdUjmmw77PXGRMz1Rt7F2V4/bfoLs1WL69GEBgnReNDUs FY8uQ2MSRIPN619cDQE6GxmYqvjrs3Fw8QpH7+OA+s+VRhTymg8nQCux8WpJZtTB6YPy Q95A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=yRsOwIhMMl3a3psg04Z1NhMEwiNfY4mTZS7PCmPDDLk=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=wrPh7z8cNDbgYBRpGY5YKDr/p2J87t55XWveZcFH0t6te5XC7wc+KFFkgSsnVZf5qB r6NyX6V6x1FpafVTQHTwzUtOvzrACrJCqBGjIyl4nJ1N/uWm0a0etguvptUJsq1MxM4j swDKjw66Cxs/7/fTp2PFZgFEd5si3hUPAlKdFRjsD/rO97/prvtubFNppA2sq56v4UP4 IFyGEWW10KbKoYeBsq26a/jzO3hAdyMQqG27NEV6fn6SBXfzZov8MGDyiCJkfspI2Ir+ 8pIzNECwhHR7vA06Vukx7Zl2OZE9HUa4xfE3Ok1qTHbQiXDtwEQvb19YprSOjw/Gop5i 6asA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="ZQz/WXLy"; spf=pass (google.com: domain of linux-kernel+bounces-10224-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10224-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
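To show how the header added above is meant to be driven, here is a hedged usage sketch. The function and parameter names (example_tx_clean, bufs, count) are hypothetical; only struct libie_tx_buffer and libie_tx_complete_buf() come from the new include/linux/net/intel/libie/tx.h, and struct libie_sq_onstack_stats is assumed to be the stats type introduced elsewhere in the series (its packets/bytes fields are visible in the idpf conversion that follows).

#include <linux/net/intel/libie/tx.h>

/* Hypothetical completion loop: the driver only tags each buffer with a
 * type at map time; libie_tx_complete_buf() then picks the right unmap,
 * free and stats handling per entry (EMPTY is skipped, SKB is unmapped,
 * consumed and counted, FRAG is only unmapped).
 */
static void example_tx_clean(struct libie_tx_buffer *bufs, u32 count,
                             struct device *dev, bool napi,
                             struct libie_sq_onstack_stats *ss)
{
        u32 i;

        for (i = 0; i < count; i++)
                libie_tx_complete_buf(&bufs[i], dev, napi, ss);
}
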
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 08/34] idpf: convert to libie Tx buffer completion Date: Sat, 23 Dec 2023 03:55:28 +0100 Message-ID: <20231223025554.2316836-9-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040014321572237 X-GMAIL-MSGID: 1786040014321572237 &idpf_tx_buffer is almost identical to the previous generations, as well as the way it's handled. Moreover, relying on dma_unmap_addr() and !!buf->skb instead of explicit defining of buffer's type was never good. Use the newly added libie helpers to do it properly and reduce the copy-paste around the Tx code. Signed-off-by: Alexander Lobakin --- .../ethernet/intel/idpf/idpf_singleq_txrx.c | 38 ++----- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 107 +++--------------- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 43 +------ 3 files changed, 31 insertions(+), 157 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index 63a709743037..23dcc02e6976 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -190,6 +190,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q, struct idpf_tx_buf *first, struct idpf_tx_offload_params *offloads) { + enum libie_tx_buf_type type = LIBIE_TX_BUF_SKB; u32 offsets = offloads->hdr_offsets; struct idpf_tx_buf *tx_buf = first; struct idpf_base_tx_desc *tx_desc; @@ -219,6 +220,8 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q, if (dma_mapping_error(tx_q->dev, dma)) return idpf_tx_dma_map_error(tx_q, skb, first, i); + tx_buf->type = type; + /* record length, and DMA address */ dma_unmap_len_set(tx_buf, len, size); dma_unmap_addr_set(tx_buf, dma, dma); @@ -270,6 +273,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q, DMA_TO_DEVICE); tx_buf = &tx_q->tx_buf[i]; + type = LIBIE_TX_BUF_FRAG; } skb_tx_timestamp(first->skb); @@ -447,7 +451,7 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget, int *cleaned) { unsigned int budget = tx_q->vport->compln_clean_budget; - unsigned int total_bytes = 0, total_pkts = 0; + struct libie_sq_onstack_stats ss = { }; struct idpf_base_tx_desc *tx_desc; s16 ntc = tx_q->next_to_clean; struct idpf_netdev_priv *np; @@ -495,20 +499,7 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget, tx_buf->next_to_watch = NULL; /* update the statistics for this packet */ - total_bytes += tx_buf->bytecount; - total_pkts += tx_buf->gso_segs; - - napi_consume_skb(tx_buf->skb, napi_budget); - - /* unmap skb header data */ - dma_unmap_single(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - - /* clear tx_buf data */ - tx_buf->skb = NULL; - dma_unmap_len_set(tx_buf, len, 0); + libie_tx_complete_buf(tx_buf, tx_q->dev, !!napi_budget, &ss); /* unmap remaining buffers */ while (tx_desc != eop_desc) { @@ -522,13 +513,8 @@ static bool 
idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget, } /* unmap any remaining paged data */ - if (dma_unmap_len(tx_buf, len)) { - dma_unmap_page(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); - } + libie_tx_complete_buf(tx_buf, tx_q->dev, !!napi_budget, + &ss); } /* update budget only if we did something */ @@ -548,11 +534,11 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget, ntc += tx_q->desc_count; tx_q->next_to_clean = ntc; - *cleaned += total_pkts; + *cleaned += ss.packets; u64_stats_update_begin(&tx_q->stats_sync); - u64_stats_add(&tx_q->q_stats.tx.packets, total_pkts); - u64_stats_add(&tx_q->q_stats.tx.bytes, total_bytes); + u64_stats_add(&tx_q->q_stats.tx.packets, ss.packets); + u64_stats_add(&tx_q->q_stats.tx.bytes, ss.bytes); u64_stats_update_end(&tx_q->stats_sync); vport = tx_q->vport; @@ -561,7 +547,7 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget, dont_wake = np->state != __IDPF_VPORT_UP || !netif_carrier_ok(vport->netdev); - __netif_txq_completed_wake(nq, total_pkts, total_bytes, + __netif_txq_completed_wake(nq, ss.packets, ss.bytes, IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH, dont_wake); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index c44737e243b0..6fd9128e61d8 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -54,39 +54,13 @@ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue) } } -/** - * idpf_tx_buf_rel - Release a Tx buffer - * @tx_q: the queue that owns the buffer - * @tx_buf: the buffer to free - */ -static void idpf_tx_buf_rel(struct idpf_queue *tx_q, struct idpf_tx_buf *tx_buf) -{ - if (tx_buf->skb) { - if (dma_unmap_len(tx_buf, len)) - dma_unmap_single(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - dev_kfree_skb_any(tx_buf->skb); - } else if (dma_unmap_len(tx_buf, len)) { - dma_unmap_page(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - } - - tx_buf->next_to_watch = NULL; - tx_buf->skb = NULL; - tx_buf->compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG; - dma_unmap_len_set(tx_buf, len, 0); -} - /** * idpf_tx_buf_rel_all - Free any empty Tx buffers * @txq: queue to be cleaned */ static void idpf_tx_buf_rel_all(struct idpf_queue *txq) { + struct libie_sq_onstack_stats ss = { }; u16 i; /* Buffers already cleared, nothing to do */ @@ -95,7 +69,7 @@ static void idpf_tx_buf_rel_all(struct idpf_queue *txq) /* Free all the Tx buffer sk_buffs */ for (i = 0; i < txq->desc_count; i++) - idpf_tx_buf_rel(txq, &txq->tx_buf[i]); + libie_tx_complete_buf(&txq->tx_buf[i], txq->dev, false, &ss); kfree(txq->tx_buf); txq->tx_buf = NULL; @@ -1505,37 +1479,6 @@ static void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q) wake_up(&vport->sw_marker_wq); } -/** - * idpf_tx_splitq_clean_hdr - Clean TX buffer resources for header portion of - * packet - * @tx_q: tx queue to clean buffer from - * @tx_buf: buffer to be cleaned - * @cleaned: pointer to stats struct to track cleaned packets/bytes - * @napi_budget: Used to determine if we are in netpoll - */ -static void idpf_tx_splitq_clean_hdr(struct idpf_queue *tx_q, - struct idpf_tx_buf *tx_buf, - struct idpf_cleaned_stats *cleaned, - int napi_budget) -{ - napi_consume_skb(tx_buf->skb, napi_budget); - - if (dma_unmap_len(tx_buf, len)) { - dma_unmap_single(tx_q->dev, - 
dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - - dma_unmap_len_set(tx_buf, len, 0); - } - - /* clear tx_buf data */ - tx_buf->skb = NULL; - - cleaned->bytes += tx_buf->bytecount; - cleaned->packets += tx_buf->gso_segs; -} - /** * idpf_tx_clean_stashed_bufs - clean bufs that were stored for * out of order completions @@ -1557,16 +1500,8 @@ static void idpf_tx_clean_stashed_bufs(struct idpf_queue *txq, u16 compl_tag, if (unlikely(stash->buf.compl_tag != (int)compl_tag)) continue; - if (stash->buf.skb) { - idpf_tx_splitq_clean_hdr(txq, &stash->buf, cleaned, - budget); - } else if (dma_unmap_len(&stash->buf, len)) { - dma_unmap_page(txq->dev, - dma_unmap_addr(&stash->buf, dma), - dma_unmap_len(&stash->buf, len), - DMA_TO_DEVICE); - dma_unmap_len_set(&stash->buf, len, 0); - } + libie_tx_complete_buf(&stash->buf, txq->dev, !!budget, + cleaned); /* Push shadow buf back onto stack */ idpf_buf_lifo_push(&txq->buf_stack, stash); @@ -1599,6 +1534,7 @@ static int idpf_stash_flow_sch_buffers(struct idpf_queue *txq, } /* Store buffer params in shadow buffer */ + stash->buf.type = tx_buf->type; stash->buf.skb = tx_buf->skb; stash->buf.bytecount = tx_buf->bytecount; stash->buf.gso_segs = tx_buf->gso_segs; @@ -1693,8 +1629,8 @@ static void idpf_tx_splitq_clean(struct idpf_queue *tx_q, u16 end, } } } else { - idpf_tx_splitq_clean_hdr(tx_q, tx_buf, cleaned, - napi_budget); + libie_tx_complete_buf(tx_buf, tx_q->dev, !!napi_budget, + cleaned); /* unmap remaining buffers */ while (tx_desc != eop_desc) { @@ -1702,13 +1638,8 @@ static void idpf_tx_splitq_clean(struct idpf_queue *tx_q, u16 end, tx_desc, tx_buf); /* unmap any remaining paged data */ - if (dma_unmap_len(tx_buf, len)) { - dma_unmap_page(tx_q->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); - } + libie_tx_complete_buf(tx_buf, tx_q->dev, + !!napi_budget, cleaned); } } @@ -1756,18 +1687,7 @@ static bool idpf_tx_clean_buf_ring(struct idpf_queue *txq, u16 compl_tag, tx_buf = &txq->tx_buf[idx]; while (tx_buf->compl_tag == (int)compl_tag) { - if (tx_buf->skb) { - idpf_tx_splitq_clean_hdr(txq, tx_buf, cleaned, budget); - } else if (dma_unmap_len(tx_buf, len)) { - dma_unmap_page(txq->dev, - dma_unmap_addr(tx_buf, dma), - dma_unmap_len(tx_buf, len), - DMA_TO_DEVICE); - dma_unmap_len_set(tx_buf, len, 0); - } - - memset(tx_buf, 0, sizeof(struct idpf_tx_buf)); - tx_buf->compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG; + libie_tx_complete_buf(tx_buf, txq->dev, !!budget, cleaned); num_descs_cleaned++; idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf); @@ -2161,6 +2081,8 @@ unsigned int idpf_tx_desc_count_required(struct idpf_queue *txq, void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb, struct idpf_tx_buf *first, u16 idx) { + struct libie_sq_onstack_stats ss = { }; + u64_stats_update_begin(&txq->stats_sync); u64_stats_inc(&txq->q_stats.tx.dma_map_errs); u64_stats_update_end(&txq->stats_sync); @@ -2170,7 +2092,7 @@ void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb, struct idpf_tx_buf *tx_buf; tx_buf = &txq->tx_buf[idx]; - idpf_tx_buf_rel(txq, tx_buf); + libie_tx_complete_buf(tx_buf, txq->dev, false, &ss); if (tx_buf == first) break; if (idx == 0) @@ -2227,6 +2149,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q, struct idpf_tx_splitq_params *params, struct idpf_tx_buf *first) { + enum libie_tx_buf_type type = LIBIE_TX_BUF_SKB; union idpf_tx_flex_desc *tx_desc; unsigned int data_len, size; struct idpf_tx_buf *tx_buf; 
@@ -2259,6 +2182,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q, if (dma_mapping_error(tx_q->dev, dma)) return idpf_tx_dma_map_error(tx_q, skb, first, i); + tx_buf->type = type; tx_buf->compl_tag = params->compl_tag; /* record length, and DMA address */ @@ -2374,6 +2298,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q, DMA_TO_DEVICE); tx_buf = &tx_q->tx_buf[i]; + type = LIBIE_TX_BUF_FRAG; } /* record SW timestamp if HW timestamp is not available */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 0bbc654a24b9..5975c6d029d7 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -5,6 +5,7 @@ #define _IDPF_TXRX_H_ #include +#include #include #include @@ -165,42 +166,7 @@ union idpf_tx_flex_desc { struct idpf_flex_tx_sched_desc flow; /* flow based scheduling */ }; -/** - * struct idpf_tx_buf - * @next_to_watch: Next descriptor to clean - * @skb: Pointer to the skb - * @dma: DMA address - * @len: DMA length - * @bytecount: Number of bytes - * @gso_segs: Number of GSO segments - * @compl_tag: Splitq only, unique identifier for a buffer. Used to compare - * with completion tag returned in buffer completion event. - * Because the completion tag is expected to be the same in all - * data descriptors for a given packet, and a single packet can - * span multiple buffers, we need this field to track all - * buffers associated with this completion tag independently of - * the buf_id. The tag consists of a N bit buf_id and M upper - * order "generation bits". See compl_tag_bufid_m and - * compl_tag_gen_s in struct idpf_queue. We'll use a value of -1 - * to indicate the tag is not valid. - * @ctx_entry: Singleq only. Used to indicate the corresponding entry - * in the descriptor ring was used for a context descriptor and - * this buffer entry should be skipped. 
- */ -struct idpf_tx_buf { - void *next_to_watch; - struct sk_buff *skb; - DEFINE_DMA_UNMAP_ADDR(dma); - DEFINE_DMA_UNMAP_LEN(len); - unsigned int bytecount; - unsigned short gso_segs; - - union { - int compl_tag; - - bool ctx_entry; - }; -}; +#define idpf_tx_buf libie_tx_buffer struct idpf_tx_stash { struct hlist_node hlist; @@ -493,10 +459,7 @@ struct idpf_tx_queue_stats { u64_stats_t dma_map_errs; }; -struct idpf_cleaned_stats { - u32 packets; - u32 bytes; -}; +#define idpf_cleaned_stats libie_sq_onstack_stats union idpf_queue_stats { struct idpf_rx_queue_stats rx; From patchwork Sat Dec 23 02:55:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182876 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1460078dyi; Fri, 22 Dec 2023 19:01:49 -0800 (PST) X-Google-Smtp-Source: AGHT+IFBeCShx3d5WVUPbLW/k8V3WYx0FVRffQpXcKmkZr6CaT84Rrvkr5SQO0bUOuQf8IhbK0BR X-Received: by 2002:a05:620a:674:b0:77f:aeb4:7afb with SMTP id a20-20020a05620a067400b0077faeb47afbmr2502423qkh.5.1703300509476; Fri, 22 Dec 2023 19:01:49 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300509; cv=none; d=google.com; s=arc-20160816; b=EOfn5clAyO9Yx5JWswUYqI76fQ4JswnoVZxZCVb4bsmbPuBaaxAIx2LyDaNzsq74mB VmoXLgRJ1gq5+y/4Yfs7lFrHzn55xV/fDRjqIfvHU0hOg+NKzbVzrz+LYccZz38uSeeN 5BzMiC5akdyrfRxBDJ0ZUlg0909BpNfGiFw7E/46ikhQV/AnQV/Za1JG7Ve2ItQCZLhr gEmOKUlb7E6I/XKn7chGHr9ZK5tdkU5goROb4kQM8zbQqlKGBbi7DAFbTLCpbLVjvcfw luTjyBkV28+ZIhl8wEb8LIKDo09uPJlvnBZyMTUHPYRHa8qjRhs9dtC9UpqPkDhAV/Xq oL4A== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=YpqX1R6f1h4tW+d1avwt27vrxJsEYegGEFdDUyCVHkM=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=s52y1SoyzsYYETQe/q2wkc8zzvzoTLomWHNNaxd+esmgnOvCKspbc/s8BC73kgef9Q UP8ubFNp3jtwwRytI1hiNUzXZnXyMGfqAyGLnfsWOlDpLYMmHt5O8Zlr2Ti1BoDgX1Bx s+m5FZSX8uy41HjKZ+2I0pEx0+ucKCLk+8g6n9riIWou2dtOBv7qLmhiWEppBCmDEJiu 2qoB+167oL7tdVbMHbAw9EgwHxvDlYnP9Vqjkh9bWoM9ZiQs+NyLJrM/q0PCVAG+IldB Nm50kmWWQFUm09/lSL2Bv93s7/yjA9yZRytBH2+ezFrGFTnkLBofks+sifinRwn7S1F3 EHFw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=AYGmb5al; spf=pass (google.com: domain of linux-kernel+bounces-10225-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10225-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
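The conversion above hinges on typing each buffer at map time so the shared completion helper knows what to do later. As a hedged illustration (hypothetical helper and loop, not idpf code), the pattern visible in idpf_tx_singleq_map() and idpf_tx_splitq_map() boils down to: the head descriptor's buffer is tagged LIBIE_TX_BUF_SKB, every following fragment LIBIE_TX_BUF_FRAG, so the completion path frees the skb exactly once and only unmaps the rest.

#include <linux/net/intel/libie/tx.h>

/* Hypothetical sketch of the tagging convention used by the map paths. */
static void example_tag_tx_bufs(struct libie_tx_buffer *bufs, u32 nr_bufs)
{
        enum libie_tx_buf_type type = LIBIE_TX_BUF_SKB;
        u32 i;

        for (i = 0; i < nr_bufs; i++) {
                bufs[i].type = type;
                /* Only the head buffer owns the skb; the rest are frags. */
                type = LIBIE_TX_BUF_FRAG;
        }
}
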
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 09/34] bpf, xdp: constify some bpf_prog * function arguments Date: Sat, 23 Dec 2023 03:55:29 +0100 Message-ID: <20231223025554.2316836-10-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040035083068291 X-GMAIL-MSGID: 1786040035083068291 In lots of places, bpf_prog pointer is used only for tracing or other stuff that doesn't modify the structure itself. Same for net_device. Address at least some of them and add `const` attributes there. The object code didn't change, but that may prevent unwanted data modifications and also allow more helpers to have const arguments. Signed-off-by: Alexander Lobakin --- include/linux/bpf.h | 12 ++++++------ include/linux/filter.h | 9 +++++---- include/linux/netdevice.h | 6 +++--- kernel/bpf/devmap.c | 8 ++++---- net/core/dev.c | 8 ++++---- net/core/filter.c | 25 ++++++++++++++----------- 6 files changed, 36 insertions(+), 32 deletions(-) diff --git a/include/linux/bpf.h b/include/linux/bpf.h index 7a8d4c81a39a..53ccac0f0d64 100644 --- a/include/linux/bpf.h +++ b/include/linux/bpf.h @@ -2384,10 +2384,10 @@ int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_frame *xdpf, int dev_map_enqueue_multi(struct xdp_frame *xdpf, struct net_device *dev_rx, struct bpf_map *map, bool exclude_ingress); int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, - struct bpf_prog *xdp_prog); + const struct bpf_prog *xdp_prog); int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb, - struct bpf_prog *xdp_prog, struct bpf_map *map, - bool exclude_ingress); + const struct bpf_prog *xdp_prog, + struct bpf_map *map, bool exclude_ingress); void __cpu_map_flush(void); int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf, @@ -2632,15 +2632,15 @@ struct sk_buff; static inline int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, - struct bpf_prog *xdp_prog) + const struct bpf_prog *xdp_prog) { return 0; } static inline int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb, - struct bpf_prog *xdp_prog, struct bpf_map *map, - bool exclude_ingress) + const struct bpf_prog *xdp_prog, + struct bpf_map *map, bool exclude_ingress) { return 0; } diff --git a/include/linux/filter.h b/include/linux/filter.h index 68fb6c8142fe..34c9e2a4cc01 100644 --- a/include/linux/filter.h +++ b/include/linux/filter.h @@ -1015,17 +1015,18 @@ static inline int xdp_ok_fwd_dev(const struct net_device *fwd, * This does not appear to be a real limitation for existing software. 
*/ int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb, - struct xdp_buff *xdp, struct bpf_prog *prog); + struct xdp_buff *xdp, const struct bpf_prog *prog); int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, - struct bpf_prog *prog); + const struct bpf_prog *prog); int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp, struct xdp_frame *xdpf, - struct bpf_prog *prog); + const struct bpf_prog *prog); void xdp_do_flush(void); -void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, u32 act); +void bpf_warn_invalid_xdp_action(const struct net_device *dev, + const struct bpf_prog *prog, u32 act); #ifdef CONFIG_INET struct sock *bpf_run_sk_reuseport(struct sock_reuseport *reuse, struct sock *sk, diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h index 75c7725e5e4f..7bb8324a4ebe 100644 --- a/include/linux/netdevice.h +++ b/include/linux/netdevice.h @@ -3956,9 +3956,9 @@ static inline void dev_consume_skb_any(struct sk_buff *skb) } u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog); -void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog); -int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff *skb); + const struct bpf_prog *xdp_prog); +void generic_xdp_tx(struct sk_buff *skb, const struct bpf_prog *xdp_prog); +int do_xdp_generic(const struct bpf_prog *xdp_prog, struct sk_buff *skb); int netif_rx(struct sk_buff *skb); int __netif_rx(struct sk_buff *skb); diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c index a936c704d4e7..5ad73ef21da2 100644 --- a/kernel/bpf/devmap.c +++ b/kernel/bpf/devmap.c @@ -673,7 +673,7 @@ int dev_map_enqueue_multi(struct xdp_frame *xdpf, struct net_device *dev_rx, } int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, - struct bpf_prog *xdp_prog) + const struct bpf_prog *xdp_prog) { int err; @@ -696,7 +696,7 @@ int dev_map_generic_redirect(struct bpf_dtab_netdev *dst, struct sk_buff *skb, static int dev_map_redirect_clone(struct bpf_dtab_netdev *dst, struct sk_buff *skb, - struct bpf_prog *xdp_prog) + const struct bpf_prog *xdp_prog) { struct sk_buff *nskb; int err; @@ -715,8 +715,8 @@ static int dev_map_redirect_clone(struct bpf_dtab_netdev *dst, } int dev_map_redirect_multi(struct net_device *dev, struct sk_buff *skb, - struct bpf_prog *xdp_prog, struct bpf_map *map, - bool exclude_ingress) + const struct bpf_prog *xdp_prog, + struct bpf_map *map, bool exclude_ingress) { struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map); struct bpf_dtab_netdev *dst, *last_dst = NULL; diff --git a/net/core/dev.c b/net/core/dev.c index f9d4b550ef4b..b2a9839dd18e 100644 --- a/net/core/dev.c +++ b/net/core/dev.c @@ -4833,7 +4833,7 @@ static struct netdev_rx_queue *netif_get_rxqueue(struct sk_buff *skb) } u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog) + const struct bpf_prog *xdp_prog) { void *orig_data, *orig_data_end, *hard_start; struct netdev_rx_queue *rxqueue; @@ -4922,7 +4922,7 @@ u32 bpf_prog_run_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp, static u32 netif_receive_generic_xdp(struct sk_buff *skb, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog) + const struct bpf_prog *xdp_prog) { u32 act = XDP_DROP; @@ -4979,7 +4979,7 @@ static u32 netif_receive_generic_xdp(struct sk_buff *skb, * and DDOS attacks will be more effective. In-driver-XDP use dedicated TX * queues, so they do not have this starvation issue. 
*/ -void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog) +void generic_xdp_tx(struct sk_buff *skb, const struct bpf_prog *xdp_prog) { struct net_device *dev = skb->dev; struct netdev_queue *txq; @@ -5004,7 +5004,7 @@ void generic_xdp_tx(struct sk_buff *skb, struct bpf_prog *xdp_prog) static DEFINE_STATIC_KEY_FALSE(generic_xdp_needed_key); -int do_xdp_generic(struct bpf_prog *xdp_prog, struct sk_buff *skb) +int do_xdp_generic(const struct bpf_prog *xdp_prog, struct sk_buff *skb) { if (xdp_prog) { struct xdp_buff xdp; diff --git a/net/core/filter.c b/net/core/filter.c index 24061f29c9dd..4ace1edc4de1 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -4297,9 +4297,9 @@ u32 xdp_master_redirect(struct xdp_buff *xdp) EXPORT_SYMBOL_GPL(xdp_master_redirect); static inline int __xdp_do_redirect_xsk(struct bpf_redirect_info *ri, - struct net_device *dev, + const struct net_device *dev, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog) + const struct bpf_prog *xdp_prog) { enum bpf_map_type map_type = ri->map_type; void *fwd = ri->tgt_value; @@ -4320,10 +4320,10 @@ static inline int __xdp_do_redirect_xsk(struct bpf_redirect_info *ri, return err; } -static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri, - struct net_device *dev, - struct xdp_frame *xdpf, - struct bpf_prog *xdp_prog) +static __always_inline int +__xdp_do_redirect_frame(struct bpf_redirect_info *ri, struct net_device *dev, + struct xdp_frame *xdpf, + const struct bpf_prog *xdp_prog) { enum bpf_map_type map_type = ri->map_type; void *fwd = ri->tgt_value; @@ -4381,7 +4381,7 @@ static __always_inline int __xdp_do_redirect_frame(struct bpf_redirect_info *ri, } int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog) + const struct bpf_prog *xdp_prog) { struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); enum bpf_map_type map_type = ri->map_type; @@ -4395,7 +4395,8 @@ int xdp_do_redirect(struct net_device *dev, struct xdp_buff *xdp, EXPORT_SYMBOL_GPL(xdp_do_redirect); int xdp_do_redirect_frame(struct net_device *dev, struct xdp_buff *xdp, - struct xdp_frame *xdpf, struct bpf_prog *xdp_prog) + struct xdp_frame *xdpf, + const struct bpf_prog *xdp_prog) { struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); enum bpf_map_type map_type = ri->map_type; @@ -4410,7 +4411,7 @@ EXPORT_SYMBOL_GPL(xdp_do_redirect_frame); static int xdp_do_generic_redirect_map(struct net_device *dev, struct sk_buff *skb, struct xdp_buff *xdp, - struct bpf_prog *xdp_prog, + const struct bpf_prog *xdp_prog, void *fwd, enum bpf_map_type map_type, u32 map_id) { @@ -4457,7 +4458,8 @@ static int xdp_do_generic_redirect_map(struct net_device *dev, } int xdp_do_generic_redirect(struct net_device *dev, struct sk_buff *skb, - struct xdp_buff *xdp, struct bpf_prog *xdp_prog) + struct xdp_buff *xdp, + const struct bpf_prog *xdp_prog) { struct bpf_redirect_info *ri = this_cpu_ptr(&bpf_redirect_info); enum bpf_map_type map_type = ri->map_type; @@ -8961,7 +8963,8 @@ static bool xdp_is_valid_access(int off, int size, return __is_valid_xdp_access(off, size); } -void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, u32 act) +void bpf_warn_invalid_xdp_action(const struct net_device *dev, + const struct bpf_prog *prog, u32 act) { const u32 act_max = XDP_REDIRECT; From patchwork Sat Dec 23 02:55:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182877 
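As a small hedged illustration of what the constification above buys (the function below is hypothetical, not part of the patch): a helper that only reads the program and device for reporting can take const pointers, which documents the read-only contract and lets const-qualified callers use it, matching the kind of read-only access bpf_warn_invalid_xdp_action() performs.

#include <linux/filter.h>
#include <linux/netdevice.h>

/* Read-only consumer: nothing in dev or prog is modified. */
static void example_report_xdp_act(const struct net_device *dev,
                                   const struct bpf_prog *prog, u32 act)
{
        pr_warn_once("XDP prog %s returned unhandled action %u on dev %s\n",
                     prog->aux->name, act, dev ? dev->name : "N/A");
}
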
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 10/34] xdp: constify read-only arguments of some static inline helpers Date: Sat, 23 Dec 2023 03:55:30 +0100 Message-ID: <20231223025554.2316836-11-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040055409761094 X-GMAIL-MSGID: 1786040055409761094 Lots of read-only helpers for &xdp_buff and &xdp_frame, such as getting the frame length, skb_shared_info etc., don't have their arguments marked with `const` for no reason. Add the missing annotations to leave less place for mistakes and more for optimization. Signed-off-by: Alexander Lobakin --- include/net/xdp.h | 29 +++++++++++++++++------------ 1 file changed, 17 insertions(+), 12 deletions(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index e6770dd40c91..197808df1ee1 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -88,7 +88,7 @@ struct xdp_buff { u32 flags; /* supported values defined in xdp_buff_flags */ }; -static __always_inline bool xdp_buff_has_frags(struct xdp_buff *xdp) +static __always_inline bool xdp_buff_has_frags(const struct xdp_buff *xdp) { return !!(xdp->flags & XDP_FLAGS_HAS_FRAGS); } @@ -103,7 +103,8 @@ static __always_inline void xdp_buff_clear_frags_flag(struct xdp_buff *xdp) xdp->flags &= ~XDP_FLAGS_HAS_FRAGS; } -static __always_inline bool xdp_buff_is_frag_pfmemalloc(struct xdp_buff *xdp) +static __always_inline bool +xdp_buff_is_frag_pfmemalloc(const struct xdp_buff *xdp) { return !!(xdp->flags & XDP_FLAGS_FRAGS_PF_MEMALLOC); } @@ -144,15 +145,16 @@ xdp_prepare_buff(struct xdp_buff *xdp, unsigned char *hard_start, SKB_DATA_ALIGN(sizeof(struct skb_shared_info))) static inline struct skb_shared_info * -xdp_get_shared_info_from_buff(struct xdp_buff *xdp) +xdp_get_shared_info_from_buff(const struct xdp_buff *xdp) { return (struct skb_shared_info *)xdp_data_hard_end(xdp); } -static __always_inline unsigned int xdp_get_buff_len(struct xdp_buff *xdp) +static __always_inline unsigned int +xdp_get_buff_len(const struct xdp_buff *xdp) { unsigned int len = xdp->data_end - xdp->data; - struct skb_shared_info *sinfo; + const struct skb_shared_info *sinfo; if (likely(!xdp_buff_has_frags(xdp))) goto out; @@ -177,12 +179,13 @@ struct xdp_frame { u32 flags; /* supported values defined in xdp_buff_flags */ }; -static __always_inline bool xdp_frame_has_frags(struct xdp_frame *frame) +static __always_inline bool xdp_frame_has_frags(const struct xdp_frame *frame) { return !!(frame->flags & XDP_FLAGS_HAS_FRAGS); } -static __always_inline bool xdp_frame_is_frag_pfmemalloc(struct xdp_frame *frame) +static __always_inline bool +xdp_frame_is_frag_pfmemalloc(const struct xdp_frame *frame) { return !!(frame->flags & XDP_FLAGS_FRAGS_PF_MEMALLOC); } @@ -201,7 +204,7 @@ static __always_inline void xdp_frame_bulk_init(struct xdp_frame_bulk *bq) } static inline struct skb_shared_info * -xdp_get_shared_info_from_frame(struct xdp_frame *frame) +xdp_get_shared_info_from_frame(const struct xdp_frame *frame) { 
void *data_hard_start = frame->data - frame->headroom - sizeof(*frame); @@ -249,7 +252,8 @@ int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp); struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf); static inline -void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp) +void xdp_convert_frame_to_buff(const struct xdp_frame *frame, + struct xdp_buff *xdp) { xdp->data_hard_start = frame->data - frame->headroom - sizeof(*frame); xdp->data = frame->data; @@ -260,7 +264,7 @@ void xdp_convert_frame_to_buff(struct xdp_frame *frame, struct xdp_buff *xdp) } static inline -int xdp_update_frame_from_buff(struct xdp_buff *xdp, +int xdp_update_frame_from_buff(const struct xdp_buff *xdp, struct xdp_frame *xdp_frame) { int metasize, headroom; @@ -317,9 +321,10 @@ void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq); void xdp_return_frame_bulk(struct xdp_frame *xdpf, struct xdp_frame_bulk *bq); -static __always_inline unsigned int xdp_get_frame_len(struct xdp_frame *xdpf) +static __always_inline unsigned int +xdp_get_frame_len(const struct xdp_frame *xdpf) { - struct skb_shared_info *sinfo; + const struct skb_shared_info *sinfo; unsigned int len = xdpf->len; if (likely(!xdp_frame_has_frags(xdpf))) From patchwork Sat Dec 23 02:55:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182878 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1460495dyi; Fri, 22 Dec 2023 19:02:44 -0800 (PST) X-Google-Smtp-Source: AGHT+IFgG3xgn5rajB+b+Y7zp+4I11Inhtt+gVYfMRiUPbMGETuYdFkDjPr7eXF5b2vUzz6kxySu X-Received: by 2002:a17:90b:128d:b0:28c:e12:705f with SMTP id fw13-20020a17090b128d00b0028c0e12705fmr642812pjb.83.1703300563734; Fri, 22 Dec 2023 19:02:43 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300563; cv=none; d=google.com; s=arc-20160816; b=qyC5OhH0F7jA4S9+FAfgNS9Fx8utoKSVRUlgvnElTNFC6G2uwVCXYW9K11zz+rYH7U IQGfB1PgrTyFQwShjgKU9j4FjyaxTm6GBkVp2YAJrVFvZSZBha0G/aBGRLv2JjeuVtSs pxqE7of3TPM0XyRAq0m/RgZ79KePDez5VuLukYGXffr3adBDgGUA8ZaVeaBQ7mRfMZdF S2o8GF/0fLOMOaY5FA5TETilgXccMHkuv+Cj6B2H0P5FTG+TkTaq4vdJUUi7TAZrQDfy 2de6GdSNe8HdBjCenKNGm2/YN58zh3YY0VoHyprqyFLkjJUc1i0/dhX4fG/cDGD52/RG 83yA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=8b/ltFgMRxxFP8uT4LsPcVvhDQfSgC80747AWweuLVQ=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=oJIchkTn5J5Y2FByQvuWE2lHd9xE8FGdU0f//KHLq0/p1okpNsJxMDhIIsQN/vExlO Jz079k29BZ7xofWwzJyd2oa5DhPaPbKua86AlnnMTLs8Kn5Ih1LwW7yPd+mgq7uXsBPs Bb0frfpLTr/WshooWHeGJgkGrRqIfiPJls9vvj/61Gfq2s3pkEUcTdQRyXAUd3LsAy9N 5Ba2cpinNIoHk7nKt0863Hm30YigoJ6bLNVyTX5o3CYfUK02bjhFAD0pLmQ7rUhbTMxX 6xes84jsyIn6pCJatorTpPpNOrHe03ARwD+fbWk/N4a1CeYsaQQ9lWQpDq/Il8/IcBwc 8Wbw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=E8YzQdRO; spf=pass (google.com: domain of linux-kernel+bounces-10227-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:40f1:3f00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10227-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org. 
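A hedged example of the knock-on benefit (the function below is hypothetical, not from the patch): once the getters above accept const pointers, purely read-only consumers can be const-correct end to end.

#include <net/xdp.h>

/* Only reads the buff, so a const pointer suffices with the constified
 * xdp_buff_has_frags() and xdp_get_buff_len() above.
 */
static bool example_xdp_is_small(const struct xdp_buff *xdp, u32 thresh)
{
        return !xdp_buff_has_frags(xdp) && xdp_get_buff_len(xdp) <= thresh;
}
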
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 11/34] xdp: allow attaching already registered memory model to xdp_rxq_info Date: Sat, 23 Dec 2023 03:55:31 +0100 Message-ID: <20231223025554.2316836-12-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040091846955541 X-GMAIL-MSGID: 1786040091846955541 One may need to register memory model separately from xdp_rxq_info. One simple example may be XDP test run code, but in general, it might be useful when memory model registering is managed by one layer and then XDP RxQ info by a different one. Allow such scenarios by adding a simple helper which "attaches" an already registered memory model to the desired xdp_rxq_info. As this is mostly needed for Page Pool, add a special function to do that for a &page_pool pointer. Signed-off-by: Alexander Lobakin --- include/net/xdp.h | 14 ++++++++++++++ net/bpf/test_run.c | 4 ++-- net/core/xdp.c | 12 ++++++++++++ 3 files changed, 28 insertions(+), 2 deletions(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index 197808df1ee1..909c0bc50517 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -356,6 +356,20 @@ void xdp_rxq_info_unreg_mem_model(struct xdp_rxq_info *xdp_rxq); int xdp_reg_mem_model(struct xdp_mem_info *mem, enum xdp_mem_type type, void *allocator); void xdp_unreg_mem_model(struct xdp_mem_info *mem); +void xdp_rxq_info_attach_page_pool(struct xdp_rxq_info *xdp_rxq, + const struct page_pool *pool); + +static inline void +xdp_rxq_info_attach_mem_model(struct xdp_rxq_info *xdp_rxq, + const struct xdp_mem_info *mem) +{ + xdp_rxq->mem = *mem; +} + +static inline void xdp_rxq_info_detach_mem_model(struct xdp_rxq_info *xdp_rxq) +{ + xdp_rxq->mem = (struct xdp_mem_info){ }; +} /* Drivers not supporting XDP metadata can use this helper, which * rejects any room expansion for metadata as a result. 
diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index dfd919374017..b612b28ebeac 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -194,8 +194,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c * xdp_mem_info pointing to our page_pool */ xdp_rxq_info_reg(&xdp->rxq, orig_ctx->rxq->dev, 0, 0); - xdp->rxq.mem.type = MEM_TYPE_PAGE_POOL; - xdp->rxq.mem.id = pp->xdp_mem_id; + xdp_rxq_info_attach_page_pool(&xdp->rxq, xdp->pp); xdp->dev = orig_ctx->rxq->dev; xdp->orig_ctx = orig_ctx; @@ -212,6 +211,7 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c static void xdp_test_run_teardown(struct xdp_test_data *xdp) { + xdp_rxq_info_detach_mem_model(&xdp->rxq); xdp_unreg_mem_model(&xdp->mem); page_pool_destroy(xdp->pp); kfree(xdp->frames); diff --git a/net/core/xdp.c b/net/core/xdp.c index 4869c1c2d8f3..03ebdb21ea62 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -368,6 +368,18 @@ int xdp_rxq_info_reg_mem_model(struct xdp_rxq_info *xdp_rxq, EXPORT_SYMBOL_GPL(xdp_rxq_info_reg_mem_model); +void xdp_rxq_info_attach_page_pool(struct xdp_rxq_info *xdp_rxq, + const struct page_pool *pool) +{ + struct xdp_mem_info mem = { + .type = MEM_TYPE_PAGE_POOL, + .id = pool->xdp_mem_id, + }; + + xdp_rxq_info_attach_mem_model(xdp_rxq, &mem); +} +EXPORT_SYMBOL_GPL(xdp_rxq_info_attach_page_pool); + /* XDP RX runs under NAPI protection, and in different delivery error * scenarios (e.g. queue full), it is possible to return the xdp_frame * while still leveraging this protection. The @napi_direct boolean From patchwork Sat Dec 23 02:55:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182879 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1460514dyi; Fri, 22 Dec 2023 19:02:46 -0800 (PST) X-Google-Smtp-Source: AGHT+IGPiQll4hMTh/cEe5VrDYpi0KJghcRJBMvahr21GNG7bJz1LNoDjRFjBQNb4/cfenkoa+46 X-Received: by 2002:a17:906:701:b0:a23:5cdd:fe31 with SMTP id y1-20020a170906070100b00a235cddfe31mr1038229ejb.117.1703300566291; Fri, 22 Dec 2023 19:02:46 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300566; cv=none; d=google.com; s=arc-20160816; b=1F41Ob6z05AAJSWV0+etHQn7hBti+orsSgIFP409xzBmifkHOdMZ/ejprkAiNdwDzk 6P4Nvj1MakcqfxKYLR3QATIY5iV5RZweRfKEn2UPwpvL2oM3L01LtVDvsdDHz69igv9f EdQzlTEWUeSm/hjqdUuxa7QQul04ttul2XYpIiMvsB1A2H2gW1jdsfwmLOfQwU0kCGVK zvXR4lW8jJ9/YKBRNgJwaK4C9BBGO+zwD8sla3d+2G3DjEbDt/QNCagi6yiSnLzVJwGO Pu7IA0FATwJK2D9Pz2iTpIsNkzoURJyvw+/yCLWLrLMtjR8w5LljAEPs3pSobJfony5O vIqA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=RAnhNZqf44y/2GlZzpyDT4tuigD9BCZ5s/VKOIezywQ=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=wAc/CCBvJi18IO9Jf1Z6iKdcBF1C6nEP3Wbu6FDymfa29/rTEzp0uaLVwPMaYMlqf4 nzkPbhNC5cGQhuo0f9MLgGL6jzT1CVyi1cdWcyV2GDmyvuAXmmrLtZar+Zh6dUBTEDxm JGxgGk2jOxK+OOz4+NdbiXU9ZmCSH59aioJqj6Z+NjtttxBTUOTRrXVG81AQjfL0Cbvq jIhIc4BF4AamgruQyakiY4LhKJXVj07AAnCdFPXT6zD4TcUSL/3hHn8MRc0ln8krxnlt sddbMiw733Z6Aq9ZtcPnZeTie9n0JQ2mGsVICDB1iiqjzElYDT3OsGULkHFs+v2qquNL WqvA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=jVkDPg5N; spf=pass (google.com: domain of 
From: Alexander Lobakin
To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 12/34] xdp: add generic xdp_buff_add_frag() Date: Sat, 23 Dec 2023 03:55:32 +0100 Message-ID: <20231223025554.2316836-13-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040094434181228 X-GMAIL-MSGID: 1786040094434181228 The code piece which would attach a frag to &xdp_buff is almost identical across the drivers supporting XDP multi-buffer on Rx. Make it a generic elegant onelner. Also, I see lots of drivers calculating frags_truesize as `xdp->frame_sz * nr_frags`. I can't say this is fully correct, since frags might be backed by chunks of different sizes, especially with stuff like the header split. Even page_pool_alloc() can give you two different truesizes on two subsequent requests to allocate the same buffer size. Add a field to &skb_shared_info (unionized as there's no free slot currently on x6_64) to track the "true" truesize. It can be used later when updating an skb. Signed-off-by: Alexander Lobakin --- include/linux/skbuff.h | 14 ++++++++++---- include/net/xdp.h | 36 +++++++++++++++++++++++++++++++++++- 2 files changed, 45 insertions(+), 5 deletions(-) diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h index ea5c8ab3ed00..e350efa04070 100644 --- a/include/linux/skbuff.h +++ b/include/linux/skbuff.h @@ -598,11 +598,17 @@ struct skb_shared_info { * Warning : all fields before dataref are cleared in __alloc_skb() */ atomic_t dataref; - unsigned int xdp_frags_size; - /* Intermediate layers must ensure that destructor_arg - * remains valid until skb destructor */ - void * destructor_arg; + union { + struct { + unsigned int xdp_frags_size; + u32 xdp_frags_truesize; + }; + + /* Intermediate layers must ensure that destructor_arg + * remains valid until skb destructor */ + void * destructor_arg; + }; /* must be last field, see pskb_expand_head() */ skb_frag_t frags[MAX_SKB_FRAGS]; diff --git a/include/net/xdp.h b/include/net/xdp.h index 909c0bc50517..a3dc0f39b437 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -165,6 +165,34 @@ xdp_get_buff_len(const struct xdp_buff *xdp) return len; } +static inline bool xdp_buff_add_frag(struct xdp_buff *xdp, struct page *page, + u32 offset, u32 size, u32 truesize) +{ + struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp); + + if (!xdp_buff_has_frags(xdp)) { + sinfo->nr_frags = 0; + + sinfo->xdp_frags_size = 0; + sinfo->xdp_frags_truesize = 0; + + xdp_buff_set_frags_flag(xdp); + } + + if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) + return false; + + __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, page, offset, + size); + sinfo->xdp_frags_size += size; + sinfo->xdp_frags_truesize += truesize; + + if (unlikely(page_is_pfmemalloc(page))) + xdp_buff_set_frag_pfmemalloc(xdp); + + return true; +} + struct xdp_frame { void *data; u16 len; @@ -230,7 +258,13 @@ xdp_update_skb_shared_info(struct sk_buff *skb, u8 nr_frags, unsigned int size, unsigned int truesize, bool pfmemalloc) { - 
skb_shinfo(skb)->nr_frags = nr_frags; + struct skb_shared_info *sinfo = skb_shinfo(skb); + + sinfo->nr_frags = nr_frags; + /* ``destructor_arg`` is unionized with ``xdp_frags_{,true}size``, + * reset it after that these fields aren't used anymore. + */ + sinfo->destructor_arg = NULL; skb->len += size; skb->data_len += size;

From patchwork Sat Dec 23 02:55:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182880
From: Alexander Lobakin
To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 13/34] xdp: add generic xdp_build_skb_from_buff() Date: Sat, 23 Dec 2023 03:55:33 +0100 Message-ID: <20231223025554.2316836-14-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040127801095864 X-GMAIL-MSGID: 1786040127801095864 The code which builds an skb from an &xdp_buff keeps multiplying itself around the drivers with almost no changes. Let's try to stop that by adding a generic function. There's __xdp_build_skb_from_frame() already, so just convert it to take &xdp_buff instead, while making the original one a wrapper. The original one always took an already allocated skb, allow both variants here -- if no skb passed, which is expected when calling from a driver, pick one via napi_build_skb(). Signed-off-by: Alexander Lobakin --- include/net/xdp.h | 4 +++ net/core/xdp.c | 89 +++++++++++++++++++++++++++++++---------------- 2 files changed, 63 insertions(+), 30 deletions(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index a3dc0f39b437..4fcf0ac48345 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -276,6 +276,10 @@ xdp_update_skb_shared_info(struct sk_buff *skb, u8 nr_frags, void xdp_warn(const char *msg, const char *func, const int line); #define XDP_WARN(msg) xdp_warn(msg, __func__, __LINE__) +struct sk_buff *__xdp_build_skb_from_buff(struct sk_buff *skb, + const struct xdp_buff *xdp); +#define xdp_build_skb_from_buff(xdp) __xdp_build_skb_from_buff(NULL, xdp) + struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp); struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, struct sk_buff *skb, diff --git a/net/core/xdp.c b/net/core/xdp.c index 03ebdb21ea62..ed73b97472b4 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -609,45 +609,77 @@ int xdp_alloc_skb_bulk(void **skbs, int n_skb, gfp_t gfp) } EXPORT_SYMBOL_GPL(xdp_alloc_skb_bulk); -struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, - struct sk_buff *skb, - struct net_device *dev) +struct sk_buff *__xdp_build_skb_from_buff(struct sk_buff *skb, + const struct xdp_buff *xdp) { - struct skb_shared_info *sinfo = xdp_get_shared_info_from_frame(xdpf); - unsigned int headroom, frame_size; - void *hard_start; - u8 nr_frags; + const struct xdp_rxq_info *rxq = xdp->rxq; + const struct skb_shared_info *sinfo; + u32 nr_frags = 0; /* xdp frags frame */ - if (unlikely(xdp_frame_has_frags(xdpf))) + if (unlikely(xdp_buff_has_frags(xdp))) { + sinfo = xdp_get_shared_info_from_buff(xdp); nr_frags = sinfo->nr_frags; + } - /* Part of headroom was reserved to xdpf */ - headroom = sizeof(*xdpf) + xdpf->headroom; + net_prefetch(xdp->data_meta); - /* Memory size backing xdp_frame data already have reserved - * room for build_skb to place skb_shared_info in tailroom. 
- */ - frame_size = xdpf->frame_sz; + if (!skb) { + skb = napi_build_skb(xdp->data_hard_start, xdp->frame_sz); + if (unlikely(!skb)) + return NULL; + } else { + /* build_skb_around() can return NULL only when !skb, which + * is impossible here. + */ + build_skb_around(skb, xdp->data_hard_start, xdp->frame_sz); + } - hard_start = xdpf->data - headroom; - skb = build_skb_around(skb, hard_start, frame_size); - if (unlikely(!skb)) - return NULL; + skb_reserve(skb, xdp->data - xdp->data_hard_start); + __skb_put(skb, xdp->data_end - xdp->data); + if (xdp->data > xdp->data_meta) + skb_metadata_set(skb, xdp->data - xdp->data_meta); + + if (rxq->mem.type == MEM_TYPE_PAGE_POOL) + skb_mark_for_recycle(skb); - skb_reserve(skb, headroom); - __skb_put(skb, xdpf->len); - if (xdpf->metasize) - skb_metadata_set(skb, xdpf->metasize); + /* __xdp_rxq_info_reg() sets these two together */ + if (rxq->reg_state == REG_STATE_REGISTERED) + skb_record_rx_queue(skb, rxq->queue_index); + + if (unlikely(nr_frags)) { + u32 truesize = sinfo->xdp_frags_truesize ? : + nr_frags * xdp->frame_sz; - if (unlikely(xdp_frame_has_frags(xdpf))) xdp_update_skb_shared_info(skb, nr_frags, - sinfo->xdp_frags_size, - nr_frags * xdpf->frame_sz, - xdp_frame_is_frag_pfmemalloc(xdpf)); + sinfo->xdp_frags_size, truesize, + xdp_buff_is_frag_pfmemalloc(xdp)); + } /* Essential SKB info: protocol and skb->dev */ - skb->protocol = eth_type_trans(skb, dev); + skb->protocol = eth_type_trans(skb, rxq->dev); + + return skb; +} +EXPORT_SYMBOL_GPL(__xdp_build_skb_from_buff); + +struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, + struct sk_buff *skb, + struct net_device *dev) +{ + struct xdp_rxq_info rxq = { + .dev = dev, + .mem = xdpf->mem, + }; + struct xdp_buff xdp; + + /* Check early instead of delegating it to build_skb_around() */ + if (unlikely(!skb)) + return NULL; + + xdp.rxq = &rxq; + xdp_convert_frame_to_buff(xdpf, &xdp); + __xdp_build_skb_from_buff(skb, &xdp); /* Optional SKB info, currently missing: * - HW checksum info (skb->ip_summed) @@ -655,9 +687,6 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, * - RX ring dev queue index (skb_record_rx_queue) */ - if (xdpf->mem.type == MEM_TYPE_PAGE_POOL) - skb_mark_for_recycle(skb); - /* Allow SKB to reuse area used by xdp_frame */ xdp_scrub_frame(xdpf); From patchwork Sat Dec 23 02:55:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182881 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1460845dyi; Fri, 22 Dec 2023 19:03:36 -0800 (PST) X-Google-Smtp-Source: AGHT+IHR6rbFaPxD+GzYkNNFuN/pkjnRVNFJRLG/zbjmfIHdtSEVBRczn1h+NE/12I3SVQNcVurc X-Received: by 2002:a92:ca0c:0:b0:35f:abca:238e with SMTP id j12-20020a92ca0c000000b0035fabca238emr3968006ils.57.1703300616271; Fri, 22 Dec 2023 19:03:36 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300616; cv=none; d=google.com; s=arc-20160816; b=sM1UUMS7z3I4dr0Yp2DLBSOfacq2/K1JM+zK8MB8EQls1PVcKhgssy9h3T1U2LmLzH nZOzIheL0fJXqnqyJq/6wpGZeM7tc8Vt/mV12d5l4MNr8Arlsq9eaeWwzW7Ln1E1vTof WGKBcUnRi94z98SSHu/IT23JoMvSkxjwvSznkbjlUxG7jUPdESt/kYnIlom23qB7A9QG W/7eXoc5YeyjfSSp8KNJHBqDDmP570Q0FPrdwkYxgevuWDLGxtu1ozYF9VytsZX2QvQi WFrqbsBBxKapCY937fP7ai7NpZzMEsxbm5hOj119h9ChvNXry6hHqWWf/2UeJ91g5Wnk Kayg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe 
From: Alexander Lobakin
To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH RFC net-next 14/34] xdp: get rid of xdp_frame::mem.id
Date: Sat, 23 Dec 2023 03:55:34 +0100
Message-ID: <20231223025554.2316836-15-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com>
References: <20231223025554.2316836-1-aleksander.lobakin@intel.com>

Initially, xdp_frame::mem.id was used to search for the corresponding &page_pool to return the page correctly. However, now that struct page contains a direct pointer to its PP, keeping this field any longer makes no sense. xdp_return_frame_bulk() still uses it to do a lookup, but this is rather a leftover.

Remove xdp_frame::mem and replace it with ::mem_type, as only the memory type still matters and we need to know it to be able to free the frame correctly. But the main reason for this change was to allow mixing pages from different &page_pools within one &xdp_buff/&xdp_frame. Why not?

Adjust xdp_return_frame_bulk() and page_pool_put_page_bulk(), so that they won't be tied to a particular pool. Let the latter splice the bulk when it encounters a page whose PP is different and flush it recursively.

As a cute side effect, sizeof(struct xdp_frame) on x86_64 is reduced from 40 to 32 bytes, giving a bit more free headroom in front of the frame.
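For illustration only (this sketch is not part of the patch), below is a minimal, self-contained userspace C model of the "splice and recurse" bulk-return logic described above. The fake_pool/fake_page types and pool_release() are hypothetical stand-ins for struct page_pool, struct page and the real release path; the flush on a full XDP_BULK_QUEUE_SIZE sub-bulk is omitted for brevity.

#include <stdio.h>

#define BULK_SIZE 16

/* hypothetical stand-ins for struct page_pool / struct page */
struct fake_pool { const char *name; };
struct fake_page { struct fake_pool *pp; };

static void pool_release(struct fake_pool *pool, int count)
{
	printf("releasing %d page(s) to pool %s\n", count, pool->name);
}

/*
 * Release a possibly mixed bulk: pages owned by the first pool seen are
 * released directly, pages belonging to any other pool are spliced into
 * a sub-bulk, which is then handled by a recursive call.
 */
static void put_page_bulk(struct fake_page **pages, int count)
{
	struct fake_page *sub[BULK_SIZE];
	struct fake_pool *pool = NULL;
	int i, bulk_len = 0, sub_len = 0;

	for (i = 0; i < count; i++) {
		if (!pool) {
			pool = pages[i]->pp;
		} else if (pages[i]->pp != pool) {
			/* different pool: defer to the sub-bulk */
			sub[sub_len++] = pages[i];
			continue;
		}
		bulk_len++;
	}

	if (bulk_len)
		pool_release(pool, bulk_len);
	if (sub_len)
		put_page_bulk(sub, sub_len);	/* strictly smaller, terminates */
}

int main(void)
{
	struct fake_pool a = { "A" }, b = { "B" };
	struct fake_page pages[4] = { { &a }, { &b }, { &a }, { &b } };
	struct fake_page *bulk[4] = { &pages[0], &pages[1], &pages[2], &pages[3] };

	put_page_bulk(bulk, 4);
	return 0;
}

The actual page_pool_put_page_bulk() in the diff below additionally keeps the approved pages for ptr_ring recycling and flushes the spliced sub-bulk once it fills up to XDP_BULK_QUEUE_SIZE entries.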
Signed-off-by: Alexander Lobakin --- .../net/ethernet/freescale/dpaa/dpaa_eth.c | 2 +- drivers/net/veth.c | 4 +- include/net/page_pool/types.h | 6 +-- include/net/xdp.h | 17 +++---- kernel/bpf/cpumap.c | 2 +- net/bpf/test_run.c | 2 +- net/core/filter.c | 2 +- net/core/page_pool.c | 31 ++++++++++-- net/core/xdp.c | 49 ++++++------------- 9 files changed, 58 insertions(+), 57 deletions(-) diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c index dcbc598b11c6..4f1fb8181131 100644 --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c @@ -2263,7 +2263,7 @@ static int dpaa_a050385_wa_xdpf(struct dpaa_priv *priv, new_xdpf->len = xdpf->len; new_xdpf->headroom = priv->tx_headroom; new_xdpf->frame_sz = DPAA_BP_RAW_SIZE; - new_xdpf->mem.type = MEM_TYPE_PAGE_ORDER0; + new_xdpf->mem_type = MEM_TYPE_PAGE_ORDER0; /* Release the initial buffer */ xdp_return_frame_rx_napi(xdpf); diff --git a/drivers/net/veth.c b/drivers/net/veth.c index 578e36ea1589..8223c5c68704 100644 --- a/drivers/net/veth.c +++ b/drivers/net/veth.c @@ -633,7 +633,7 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq, break; case XDP_TX: orig_frame = *frame; - xdp->rxq->mem = frame->mem; + xdp->rxq->mem.type = frame->mem_type; if (unlikely(veth_xdp_tx(rq, xdp, bq) < 0)) { trace_xdp_exception(rq->dev, xdp_prog, act); frame = &orig_frame; @@ -645,7 +645,7 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq, goto xdp_xmit; case XDP_REDIRECT: orig_frame = *frame; - xdp->rxq->mem = frame->mem; + xdp->rxq->mem.type = frame->mem_type; if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) { frame = &orig_frame; stats->rx_drops++; diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h index 24733bef19bc..adc6630ace9c 100644 --- a/include/net/page_pool/types.h +++ b/include/net/page_pool/types.h @@ -222,8 +222,7 @@ void page_pool_unlink_napi(struct page_pool *pool); void page_pool_destroy(struct page_pool *pool); void page_pool_use_xdp_mem(struct page_pool *pool, void (*disconnect)(void *), struct xdp_mem_info *mem); -void page_pool_put_page_bulk(struct page_pool *pool, void **data, - int count); +void page_pool_put_page_bulk(void **data, int count); #else static inline void page_pool_unlink_napi(struct page_pool *pool) { @@ -239,8 +238,7 @@ static inline void page_pool_use_xdp_mem(struct page_pool *pool, { } -static inline void page_pool_put_page_bulk(struct page_pool *pool, void **data, - int count) +static inline void page_pool_put_page_bulk(void **data, int count) { } #endif diff --git a/include/net/xdp.h b/include/net/xdp.h index 4fcf0ac48345..66854b755b58 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -197,11 +197,8 @@ struct xdp_frame { void *data; u16 len; u16 headroom; - u32 metasize; /* uses lower 8-bits */ - /* Lifetime of xdp_rxq_info is limited to NAPI/enqueue time, - * while mem info is valid on remote CPU. 
- */ - struct xdp_mem_info mem; + u16 metasize; + enum xdp_mem_type mem_type:16; struct net_device *dev_rx; /* used by cpumap */ u32 frame_sz; u32 flags; /* supported values defined in xdp_buff_flags */ @@ -221,14 +218,12 @@ xdp_frame_is_frag_pfmemalloc(const struct xdp_frame *frame) #define XDP_BULK_QUEUE_SIZE 16 struct xdp_frame_bulk { int count; - void *xa; void *q[XDP_BULK_QUEUE_SIZE]; }; static __always_inline void xdp_frame_bulk_init(struct xdp_frame_bulk *bq) { - /* bq->count will be zero'ed when bq->xa gets updated */ - bq->xa = NULL; + bq->count = 0; } static inline struct skb_shared_info * @@ -344,13 +339,13 @@ struct xdp_frame *xdp_convert_buff_to_frame(struct xdp_buff *xdp) if (unlikely(xdp_update_frame_from_buff(xdp, xdp_frame) < 0)) return NULL; - /* rxq only valid until napi_schedule ends, convert to xdp_mem_info */ - xdp_frame->mem = xdp->rxq->mem; + /* rxq only valid until napi_schedule ends, convert to xdp_mem_type */ + xdp_frame->mem_type = xdp->rxq->mem.type; return xdp_frame; } -void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, +void __xdp_return(void *data, enum xdp_mem_type mem_type, bool napi_direct, struct xdp_buff *xdp); void xdp_return_frame(struct xdp_frame *xdpf); void xdp_return_frame_rx_napi(struct xdp_frame *xdpf); diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c index 8a0bb80fe48a..3d557d458284 100644 --- a/kernel/bpf/cpumap.c +++ b/kernel/bpf/cpumap.c @@ -191,7 +191,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu, int err; rxq.dev = xdpf->dev_rx; - rxq.mem = xdpf->mem; + rxq.mem.type = xdpf->mem_type; /* TODO: report queue_index to xdp_rxq_info */ xdp_convert_frame_to_buff(xdpf, &xdp); diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c index b612b28ebeac..a35940a41589 100644 --- a/net/bpf/test_run.c +++ b/net/bpf/test_run.c @@ -151,7 +151,7 @@ static void xdp_test_run_init_page(struct page *page, void *arg) new_ctx->data = new_ctx->data_meta + meta_len; xdp_update_frame_from_buff(new_ctx, frm); - frm->mem = new_ctx->rxq->mem; + frm->mem_type = new_ctx->rxq->mem.type; memcpy(&head->orig_ctx, new_ctx, sizeof(head->orig_ctx)); } diff --git a/net/core/filter.c b/net/core/filter.c index 4ace1edc4de1..e79e74edbae4 100644 --- a/net/core/filter.c +++ b/net/core/filter.c @@ -4114,7 +4114,7 @@ static int bpf_xdp_frags_shrink_tail(struct xdp_buff *xdp, int offset) if (skb_frag_size(frag) == shrink) { struct page *page = skb_frag_page(frag); - __xdp_return(page_address(page), &xdp->rxq->mem, + __xdp_return(page_address(page), xdp->rxq->mem.type, false, NULL); n_frags_free++; } else { diff --git a/net/core/page_pool.c b/net/core/page_pool.c index bdebc9028da3..742289cd86cd 100644 --- a/net/core/page_pool.c +++ b/net/core/page_pool.c @@ -698,9 +698,18 @@ void page_pool_put_unrefed_page(struct page_pool *pool, struct page *page, } EXPORT_SYMBOL(page_pool_put_unrefed_page); +static void page_pool_bulk_splice(struct xdp_frame_bulk *bulk, void *data) +{ + if (unlikely(bulk->count == ARRAY_SIZE(bulk->q))) { + page_pool_put_page_bulk(bulk->q, bulk->count); + bulk->count = 0; + } + + bulk->q[bulk->count++] = data; +} + /** * page_pool_put_page_bulk() - release references on multiple pages - * @pool: pool from which pages were allocated * @data: array holding page pointers * @count: number of pages in @data * @@ -713,12 +722,15 @@ EXPORT_SYMBOL(page_pool_put_unrefed_page); * Please note the caller must not use data area after running * page_pool_put_page_bulk(), as this function overwrites it. 
*/ -void page_pool_put_page_bulk(struct page_pool *pool, void **data, - int count) +void page_pool_put_page_bulk(void **data, int count) { + struct page_pool *pool = NULL; + struct xdp_frame_bulk sub; int i, bulk_len = 0; bool in_softirq; + xdp_frame_bulk_init(&sub); + for (i = 0; i < count; i++) { struct page *page = virt_to_head_page(data[i]); @@ -726,12 +738,25 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data, if (!page_pool_is_last_ref(page)) continue; + if (unlikely(!pool)) { + pool = page->pp; + } else if (page->pp != pool) { + /* If the page belongs to a different page_pool, + * splice the array and handle it recursively. + */ + page_pool_bulk_splice(&sub, data[i]); + continue; + } + page = __page_pool_put_page(pool, page, -1, false); /* Approved for bulk recycling in ptr_ring cache */ if (page) data[bulk_len++] = page; } + if (sub.count) + page_pool_put_page_bulk(sub.q, sub.count); + if (unlikely(!bulk_len)) return; diff --git a/net/core/xdp.c b/net/core/xdp.c index ed73b97472b4..8ef1d735a7eb 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -386,12 +386,12 @@ EXPORT_SYMBOL_GPL(xdp_rxq_info_attach_page_pool); * is used for those calls sites. Thus, allowing for faster recycling * of xdp_frames/pages in those cases. */ -void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, +void __xdp_return(void *data, enum xdp_mem_type mem_type, bool napi_direct, struct xdp_buff *xdp) { struct page *page; - switch (mem->type) { + switch (mem_type) { case MEM_TYPE_PAGE_POOL: page = virt_to_head_page(data); if (napi_direct && xdp_return_frame_no_direct()) @@ -414,7 +414,7 @@ void __xdp_return(void *data, struct xdp_mem_info *mem, bool napi_direct, break; default: /* Not possible, checked in xdp_rxq_info_reg_mem_model() */ - WARN(1, "Incorrect XDP memory type (%d) usage", mem->type); + WARN(1, "Incorrect XDP memory type (%d) usage", mem_type); break; } } @@ -431,10 +431,10 @@ void xdp_return_frame(struct xdp_frame *xdpf) for (i = 0; i < sinfo->nr_frags; i++) { struct page *page = skb_frag_page(&sinfo->frags[i]); - __xdp_return(page_address(page), &xdpf->mem, false, NULL); + __xdp_return(page_address(page), xdpf->mem_type, false, NULL); } out: - __xdp_return(xdpf->data, &xdpf->mem, false, NULL); + __xdp_return(xdpf->data, xdpf->mem_type, false, NULL); } EXPORT_SYMBOL_GPL(xdp_return_frame); @@ -450,10 +450,10 @@ void xdp_return_frame_rx_napi(struct xdp_frame *xdpf) for (i = 0; i < sinfo->nr_frags; i++) { struct page *page = skb_frag_page(&sinfo->frags[i]); - __xdp_return(page_address(page), &xdpf->mem, true, NULL); + __xdp_return(page_address(page), xdpf->mem_type, true, NULL); } out: - __xdp_return(xdpf->data, &xdpf->mem, true, NULL); + __xdp_return(xdpf->data, xdpf->mem_type, true, NULL); } EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi); @@ -469,12 +469,10 @@ EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi); */ void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq) { - struct xdp_mem_allocator *xa = bq->xa; - - if (unlikely(!xa || !bq->count)) + if (unlikely(!bq->count)) return; - page_pool_put_page_bulk(xa->page_pool, bq->q, bq->count); + page_pool_put_page_bulk(bq->q, bq->count); /* bq->xa is not cleared to save lookup, if mem.id same in next bulk */ bq->count = 0; } @@ -484,29 +482,14 @@ EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk); void xdp_return_frame_bulk(struct xdp_frame *xdpf, struct xdp_frame_bulk *bq) { - struct xdp_mem_info *mem = &xdpf->mem; - struct xdp_mem_allocator *xa; - - if (mem->type != MEM_TYPE_PAGE_POOL) { + if (xdpf->mem_type != MEM_TYPE_PAGE_POOL) { 
xdp_return_frame(xdpf); return; } - xa = bq->xa; - if (unlikely(!xa)) { - xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params); - bq->count = 0; - bq->xa = xa; - } - if (bq->count == XDP_BULK_QUEUE_SIZE) xdp_flush_frame_bulk(bq); - if (unlikely(mem->id != xa->mem.id)) { - xdp_flush_frame_bulk(bq); - bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params); - } - if (unlikely(xdp_frame_has_frags(xdpf))) { struct skb_shared_info *sinfo; int i; @@ -536,10 +519,11 @@ void xdp_return_buff(struct xdp_buff *xdp) for (i = 0; i < sinfo->nr_frags; i++) { struct page *page = skb_frag_page(&sinfo->frags[i]); - __xdp_return(page_address(page), &xdp->rxq->mem, true, xdp); + __xdp_return(page_address(page), xdp->rxq->mem.type, true, + xdp); } out: - __xdp_return(xdp->data, &xdp->rxq->mem, true, xdp); + __xdp_return(xdp->data, xdp->rxq->mem.type, true, xdp); } EXPORT_SYMBOL_GPL(xdp_return_buff); @@ -585,7 +569,7 @@ struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp) xdpf->headroom = 0; xdpf->metasize = metasize; xdpf->frame_sz = PAGE_SIZE; - xdpf->mem.type = MEM_TYPE_PAGE_ORDER0; + xdpf->mem_type = MEM_TYPE_PAGE_ORDER0; xsk_buff_free(xdp); return xdpf; @@ -669,7 +653,7 @@ struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, { struct xdp_rxq_info rxq = { .dev = dev, - .mem = xdpf->mem, + .mem.type = xdpf->mem_type, }; struct xdp_buff xdp; @@ -731,8 +715,7 @@ struct xdp_frame *xdpf_clone(struct xdp_frame *xdpf) nxdpf = addr; nxdpf->data = addr + headroom; nxdpf->frame_sz = PAGE_SIZE; - nxdpf->mem.type = MEM_TYPE_PAGE_ORDER0; - nxdpf->mem.id = 0; + nxdpf->mem_type = MEM_TYPE_PAGE_ORDER0; return nxdpf; } From patchwork Sat Dec 23 02:55:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182882 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1460971dyi; Fri, 22 Dec 2023 19:03:56 -0800 (PST) X-Google-Smtp-Source: AGHT+IE4DTK1IieT8F664mWttje464WHKW8jHeD/ttPlDGCDdrTaYXWXuLX2r1lvI+/nF8iB6NIx X-Received: by 2002:a05:6870:e409:b0:1fa:f541:5297 with SMTP id n9-20020a056870e40900b001faf5415297mr2808578oag.42.1703300635969; Fri, 22 Dec 2023 19:03:55 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300635; cv=none; d=google.com; s=arc-20160816; b=gtWDj+CxWQAneTy/wfvvK6UjChcKlFv/2N8MWLD4COluCAftOU4fTAZMm0fFpdV/I1 wEqCCc1kOY55QmlVTrAj4oZPZncNMNXY6HVXiBApNgwldQCSlnOR31tSyWitTA6leRhB 8SihwvQjW/oM22C9WcjFrkt3AelHmnrCpHG5jCTmdzcQ2buiP3SILlFdpQZAePB9iwPX dOcWMkLlUmfmgrzw39HcvJZI4XzqOko6ATY5rLrOcN+DmwwCCowj2aCew+IFWno4g5Do bBLH8+tCtxgxJM3H0g0Ctp/4a31hCWsCxSve3DqIm587o4p2prYNxx5uGoyScNdFZ7Dp TlGQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=VsDFzbr61ZrYN/+9EjwxZW6bB9G6JrBPgfTPVtL7cSg=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=Ji9YkiFCrAYkO4wNfHaDp+4t/5cNdkm5zy4bZtsbmFpVCGNF6LhXfptq4LYLoa+DWP C+Zbz2jqel1FLreAgtPGtXG5FOBaF2hW3jXJ0p6AKqcVInrwDZvnorWXREluAmp3Xz7/ sNwCRFaPkrcpEYjjsmU9tVR3w3eQZ32kQhv21YnV1Inbv+WUriDcPVpylj+KCYSnisOb WxPnN35S2q75JfZRM7LtH10FvNovlYK9AJlNDbfFOU9P12g47eGFCPNPLbZApazEFOY3 4sTMPhxlRw66Il9Nyps8ukH8cYb1WlMg2lmzy+vc6MIqlUPlDjsAxE3jLvsDbxGImQxi DWGQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=l9Ycgs0d; spf=pass 
From: Alexander Lobakin
To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 15/34] page_pool: add inline helper to sync VA for device (for XDP_TX) Date: Sat, 23 Dec 2023 03:55:35 +0100 Message-ID: <20231223025554.2316836-16-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040167830277511 X-GMAIL-MSGID: 1786040167830277511 Drivers using Page Pool for Rx buffers do the same pattern on XDP_TX: syncing-DMA-for-device and obtaining DMA address for &xdp_buff they are sending. Add a helper for that to be able to do that in one call in the drivers. I explicitly added `bool compound` argument and set it to false by default: only a few drivers, if any, uses high-order pages with Page Pool, so losing cycles on compound_head() looks suboptimal. Drivers can always call the underscored version if needed (for example, pass pool->p.order as the last argument -- will always work). Signed-off-by: Alexander Lobakin --- include/net/page_pool/helpers.h | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h index 873631c79ab1..99dc825b03a5 100644 --- a/include/net/page_pool/helpers.h +++ b/include/net/page_pool/helpers.h @@ -397,6 +397,38 @@ static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr) return false; } +static inline dma_addr_t __page_pool_dma_sync_va_for_device(const void *va, + u32 dma_sync_size, + bool compound) +{ + const struct page_pool *pool; + const struct page *page; + dma_addr_t addr; + u32 offset; + + if (unlikely(compound)) { + page = virt_to_head_page(va); + offset = va - page_address(page); + } else { + page = virt_to_page(va); + offset = offset_in_page(va); + } + + addr = page_pool_get_dma_addr(page) + offset; + pool = page->pp; + + dma_sync_single_for_device(pool->p.dev, addr, dma_sync_size, + page_pool_get_dma_dir(pool)); + + return addr; +} + +static inline dma_addr_t page_pool_dma_sync_va_for_device(const void *va, + u32 dma_sync_size) +{ + return __page_pool_dma_sync_va_for_device(va, dma_sync_size, false); +} + /** * page_pool_dma_sync_for_cpu - sync Rx page for CPU after it's written by HW * @pool: &page_pool the @page belongs to From patchwork Sat Dec 23 02:55:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182883 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1461079dyi; Fri, 22 Dec 2023 19:04:15 -0800 (PST) X-Google-Smtp-Source: AGHT+IG40yTWWkwfP6bSTGJ2yX0aqTDKuJVMCeqjSsirxx1x2/mzmGbiCWZ+auoNwLBRWiKjrUgd X-Received: by 2002:a05:6871:b0f:b0:1e9:badb:acaa with SMTP id fq15-20020a0568710b0f00b001e9badbacaamr2936098oab.23.1703300655193; Fri, 22 Dec 2023 19:04:15 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300655; cv=none; d=google.com; s=arc-20160816; b=ZBk7k9eyRSx22vodjEqKXn75NGv9Rr0I60UEtKOhEEgayhXffkkX1PKvI86YfzbnxM 
From: Alexander Lobakin
To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni
Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH RFC net-next 16/34] jump_label: export static_key_slow_{inc,dec}_cpuslocked()
Date: Sat, 23 Dec 2023 03:55:36 +0100
Message-ID: <20231223025554.2316836-17-aleksander.lobakin@intel.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com>
References: <20231223025554.2316836-1-aleksander.lobakin@intel.com>

Sometimes, there's a need to modify a lot of static keys, or to modify the same key multiple times in a loop. In that case, it is more optimal to take cpus_read_lock() once and then call the _cpuslocked() variants. The enable/disable functions are already exported; the refcounted counterparts, however, are not. Fix that to allow modules to save some cycles.
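As an illustration of the module-side pattern this export enables, here is a hedged sketch (it is not taken from the series): the key array, its size and its initialisation are made up for the example, and the bool return of static_key_slow_inc_cpuslocked() is ignored for brevity.

#include <linux/cpu.h>
#include <linux/jump_label.h>

#define NUM_HOOK_KEYS	8	/* hypothetical number of keys to toggle */

/* hypothetical per-hook keys; real users may prefer DEFINE_STATIC_KEY_* */
static struct static_key hook_keys[NUM_HOOK_KEYS] = {
	[0 ... NUM_HOOK_KEYS - 1] = STATIC_KEY_INIT_FALSE,
};

static void hooks_enable_all(void)
{
	int i;

	/* take the CPU hotplug read lock once instead of once per key */
	cpus_read_lock();
	for (i = 0; i < NUM_HOOK_KEYS; i++)
		static_key_slow_inc_cpuslocked(&hook_keys[i]);
	cpus_read_unlock();
}

static void hooks_disable_all(void)
{
	int i;

	cpus_read_lock();
	for (i = 0; i < NUM_HOOK_KEYS; i++)
		static_key_slow_dec_cpuslocked(&hook_keys[i]);
	cpus_read_unlock();
}

Without the two exports added below, a module would have to use static_key_slow_inc()/static_key_slow_dec(), each of which takes and drops the hotplug read lock internally on every call.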
Signed-off-by: Alexander Lobakin --- kernel/jump_label.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/kernel/jump_label.c b/kernel/jump_label.c index d9c822bbffb8..f0375372b484 100644 --- a/kernel/jump_label.c +++ b/kernel/jump_label.c @@ -177,6 +177,7 @@ bool static_key_slow_inc_cpuslocked(struct static_key *key) jump_label_unlock(); return true; } +EXPORT_SYMBOL_GPL(static_key_slow_inc_cpuslocked); bool static_key_slow_inc(struct static_key *key) { @@ -304,6 +305,7 @@ void static_key_slow_dec_cpuslocked(struct static_key *key) STATIC_KEY_CHECK_USE(key); __static_key_slow_dec_cpuslocked(key); } +EXPORT_SYMBOL_GPL(static_key_slow_dec_cpuslocked); void __static_key_slow_dec_deferred(struct static_key *key, struct delayed_work *work,

From patchwork Sat Dec 23 02:55:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182884
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 17/34] libie: support native XDP and register memory model Date: Sat, 23 Dec 2023 03:55:37 +0100 Message-ID: <20231223025554.2316836-18-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040195382775418 X-GMAIL-MSGID: 1786040195382775418 Expand libie's Page Pool functionality by adding native XDP support. This means picking the appropriate headroom and DMA direction. Also, register all the created &page_pools as XDP memory models. A driver then can call xdp_rxq_info_attach_page_pool() when registering its RxQ info. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/libie/rx.c | 32 ++++++++++++++++++++++----- include/linux/net/intel/libie/rx.h | 6 ++++- 2 files changed, 32 insertions(+), 6 deletions(-) diff --git a/drivers/net/ethernet/intel/libie/rx.c b/drivers/net/ethernet/intel/libie/rx.c index 3d3b19d2b40d..b4c404958f25 100644 --- a/drivers/net/ethernet/intel/libie/rx.c +++ b/drivers/net/ethernet/intel/libie/rx.c @@ -52,7 +52,7 @@ static u32 libie_rx_hw_len_truesize(const struct page_pool_params *pp, static void libie_rx_page_pool_params(struct libie_buf_queue *bq, struct page_pool_params *pp) { - pp->offset = LIBIE_SKB_HEADROOM; + pp->offset = bq->xdp ? LIBIE_XDP_HEADROOM : LIBIE_SKB_HEADROOM; /* HW-writeable / syncable length per one page */ pp->max_len = LIBIE_RX_BUF_LEN(pp->offset); @@ -132,17 +132,34 @@ int libie_rx_page_pool_create(struct libie_buf_queue *bq, .dev = napi->dev->dev.parent, .netdev = napi->dev, .napi = napi, - .dma_dir = DMA_FROM_DEVICE, }; + struct xdp_mem_info mem; + struct page_pool *pool; + int ret; + + pp.dma_dir = bq->xdp ? 
DMA_BIDIRECTIONAL : DMA_FROM_DEVICE; if (!bq->hsplit) libie_rx_page_pool_params(bq, &pp); else if (!libie_rx_page_pool_params_zc(bq, &pp)) return -EINVAL; - bq->pp = page_pool_create(&pp); + pool = page_pool_create(&pp); + if (IS_ERR(pool)) + return PTR_ERR(pool); + + ret = xdp_reg_mem_model(&mem, MEM_TYPE_PAGE_POOL, pool); + if (ret) + goto err_mem; + + bq->pp = pool; + + return 0; - return PTR_ERR_OR_ZERO(bq->pp); +err_mem: + page_pool_destroy(pool); + + return ret; } EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE); @@ -152,7 +169,12 @@ EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE); */ void libie_rx_page_pool_destroy(struct libie_buf_queue *bq) { - page_pool_destroy(bq->pp); + struct xdp_mem_info mem = { + .type = MEM_TYPE_PAGE_POOL, + .id = bq->pp->xdp_mem_id, + }; + + xdp_unreg_mem_model(&mem); bq->pp = NULL; } EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_destroy, LIBIE); diff --git a/include/linux/net/intel/libie/rx.h b/include/linux/net/intel/libie/rx.h index 87ad8f9e89c7..8eda4ac8028c 100644 --- a/include/linux/net/intel/libie/rx.h +++ b/include/linux/net/intel/libie/rx.h @@ -15,8 +15,10 @@ /* Space reserved in front of each frame */ #define LIBIE_SKB_HEADROOM (NET_SKB_PAD + NET_IP_ALIGN) +#define LIBIE_XDP_HEADROOM (ALIGN(XDP_PACKET_HEADROOM, NET_SKB_PAD) + \ + NET_IP_ALIGN) /* Maximum headroom to calculate max MTU below */ -#define LIBIE_MAX_HEADROOM LIBIE_SKB_HEADROOM +#define LIBIE_MAX_HEADROOM LIBIE_XDP_HEADROOM /* Link layer / L2 overhead: Ethernet, 2 VLAN tags (C + S), FCS */ #define LIBIE_RX_LL_LEN (ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN) /* Maximum supported L2-L4 header length */ @@ -87,6 +89,7 @@ enum libie_rx_buf_type { * @rx_buf_len: HW-writeable length per each buffer * @type: type of the buffers this queue has * @hsplit: flag whether header split is enabled + * @xdp: flag indicating whether XDP is enabled */ struct libie_buf_queue { struct page_pool *pp; @@ -100,6 +103,7 @@ struct libie_buf_queue { enum libie_rx_buf_type type:2; bool hsplit:1; + bool xdp:1; }; int libie_rx_page_pool_create(struct libie_buf_queue *bq, From patchwork Sat Dec 23 02:55:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182886 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1461555dyi; Fri, 22 Dec 2023 19:05:38 -0800 (PST) X-Google-Smtp-Source: AGHT+IFVn0OApJY+eXkf86uXo7JxiLuYSoWVwHhXe5dk/ClfCTij0MOM9tywdj3/tpJ5P8AvLEB5 X-Received: by 2002:a05:6808:3195:b0:3bb:9928:d9bb with SMTP id cd21-20020a056808319500b003bb9928d9bbmr160554oib.38.1703300737813; Fri, 22 Dec 2023 19:05:37 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300737; cv=none; d=google.com; s=arc-20160816; b=shgl7bQvq15wEGAlEwdtLh5QGP5SCzuQbOu9DaaW8kbl8U5+I2GFvPhXidEc2WVZyx kjBBGhrzm7Y3hRHVOT/Yncz63teResaIZOhLKb52PYWkNLa99exqTOgQ8A1OLrOdXb06 oeLzzGaC3bQc1w69NiP7f2BI+1uVmjHdRP0dhZzF4pLnKhq96XepoXUsR38W03WXd1D3 7kjfIajLUukRjaykGD9yBJYyKAL7UBN4oqH+hUQ5MAIEEw1+WGYyn+FWJRkPVQ9L8NQr ahw00tyuXuGL0SVuvd8LLjUfWa+5OJ+PcriS7ncIMuR/LWuRn+DDshud3TaNr200sedh cGIQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=ApmTyfhIpKPxVXThaYf6Cvl8HFDZuBi8tuTGiWnX8rs=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; 
orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 18:59:14 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537553" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 18:59:10 -0800 From: Alexander Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 18/34] libie: add a couple of XDP helpers Date: Sat, 23 Dec 2023 03:55:38 +0100 Message-ID: <20231223025554.2316836-19-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040274646177747 X-GMAIL-MSGID: 1786040274646177747 "Couple" is a bit humbly... Add the following functionality to libie: * XDP shared queues managing * XDP_TX bulk sending infra * .ndo_xdp_xmit() infra * adding buffers to &xdp_buff * running XDP prog and managing its verdict * completing XDP Tx buffers * ^ repeat everything for XSk Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/libie/Makefile | 3 + drivers/net/ethernet/intel/libie/tx.c | 16 + drivers/net/ethernet/intel/libie/xdp.c | 50 ++ drivers/net/ethernet/intel/libie/xsk.c | 49 ++ include/linux/net/intel/libie/tx.h | 6 + include/linux/net/intel/libie/xdp.h | 586 ++++++++++++++++++++++ include/linux/net/intel/libie/xsk.h | 172 +++++++ 7 files changed, 882 insertions(+) create mode 100644 drivers/net/ethernet/intel/libie/tx.c create mode 100644 drivers/net/ethernet/intel/libie/xdp.c create mode 100644 drivers/net/ethernet/intel/libie/xsk.c create mode 100644 include/linux/net/intel/libie/xdp.h create mode 100644 include/linux/net/intel/libie/xsk.h diff --git a/drivers/net/ethernet/intel/libie/Makefile b/drivers/net/ethernet/intel/libie/Makefile index 76f32253481b..7ca353e4ecdf 100644 --- a/drivers/net/ethernet/intel/libie/Makefile +++ b/drivers/net/ethernet/intel/libie/Makefile @@ -5,3 +5,6 @@ obj-$(CONFIG_LIBIE) += libie.o libie-objs += rx.o libie-objs += stats.o +libie-objs += tx.o +libie-objs += xdp.o +libie-objs += xsk.o diff --git a/drivers/net/ethernet/intel/libie/tx.c b/drivers/net/ethernet/intel/libie/tx.c new file mode 100644 index 000000000000..9e3b6e3c3c25 --- /dev/null +++ b/drivers/net/ethernet/intel/libie/tx.c @@ -0,0 +1,16 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Intel Corporation. 
*/ + +#include + +void libie_tx_complete_any(struct libie_tx_buffer *buf, struct device *dev, + struct xdp_frame_bulk *bq, u32 *xdp_tx_active, + struct libie_sq_onstack_stats *ss) +{ + if (buf->type > LIBIE_TX_BUF_SKB) + libie_xdp_complete_tx_buf(buf, dev, false, bq, xdp_tx_active, + ss); + else + libie_tx_complete_buf(buf, dev, false, ss); +} +EXPORT_SYMBOL_NS_GPL(libie_tx_complete_any, LIBIE); diff --git a/drivers/net/ethernet/intel/libie/xdp.c b/drivers/net/ethernet/intel/libie/xdp.c new file mode 100644 index 000000000000..f47a17ca6e66 --- /dev/null +++ b/drivers/net/ethernet/intel/libie/xdp.c @@ -0,0 +1,50 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Intel Corporation. */ + +#include + +/* XDP SQ sharing */ + +DEFINE_STATIC_KEY_FALSE(libie_xdp_sq_share); +EXPORT_SYMBOL_NS_GPL(libie_xdp_sq_share, LIBIE); + +void __libie_xdp_sq_get(struct libie_xdp_sq_lock *lock, + const struct net_device *dev) +{ + bool warn; + + spin_lock_init(&lock->lock); + lock->share = true; + + warn = !static_key_enabled(&libie_xdp_sq_share); + static_branch_inc_cpuslocked(&libie_xdp_sq_share); + + if (warn) + netdev_warn(dev, "XDP SQ sharing enabled, possible XDP_TX/XDP_REDIRECT slowdown\n"); + +} +EXPORT_SYMBOL_NS_GPL(__libie_xdp_sq_get, LIBIE); + +void __libie_xdp_sq_put(struct libie_xdp_sq_lock *lock, + const struct net_device *dev) +{ + static_branch_dec_cpuslocked(&libie_xdp_sq_share); + + if (!static_key_enabled(&libie_xdp_sq_share)) + netdev_notice(dev, "XDP SQ sharing disabled\n"); + + lock->share = false; +} +EXPORT_SYMBOL_NS_GPL(__libie_xdp_sq_put, LIBIE); + +/* ``XDP_TX`` bulking */ + +void libie_xdp_tx_return_bulk(const struct libie_xdp_tx_frame *bq, u32 count) +{ + for (u32 i = 0; i < count; i++) { + struct page *page = virt_to_page(bq[i].data); + + page_pool_recycle_direct(page->pp, page); + } +} +EXPORT_SYMBOL_NS_GPL(libie_xdp_tx_return_bulk, LIBIE); diff --git a/drivers/net/ethernet/intel/libie/xsk.c b/drivers/net/ethernet/intel/libie/xsk.c new file mode 100644 index 000000000000..ffbdb85586f1 --- /dev/null +++ b/drivers/net/ethernet/intel/libie/xsk.c @@ -0,0 +1,49 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright(c) 2023 Intel Corporation. 
*/ + +#include + +#define LIBIE_XSK_DMA_ATTR (DMA_ATTR_WEAK_ORDERING | \ + DMA_ATTR_SKIP_CPU_SYNC) + +int libie_xsk_enable_pool(struct net_device *dev, u32 qid, unsigned long *map) +{ + struct xsk_buff_pool *pool; + int ret; + + if (qid >= min(dev->real_num_rx_queues, dev->real_num_tx_queues)) + return -EINVAL; + + pool = xsk_get_pool_from_qid(dev, qid); + if (!pool) + return -EINVAL; + + ret = xsk_pool_dma_map(pool, dev->dev.parent, LIBIE_XSK_DMA_ATTR); + if (ret) + return ret; + + set_bit(qid, map); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(libie_xsk_enable_pool, LIBIE); + +int libie_xsk_disable_pool(struct net_device *dev, u32 qid, + unsigned long *map) +{ + struct xsk_buff_pool *pool; + + if (qid >= min(dev->real_num_rx_queues, dev->real_num_tx_queues)) + return -EINVAL; + + pool = xsk_get_pool_from_qid(dev, qid); + if (!pool) + return -EINVAL; + + xsk_pool_dma_unmap(pool, LIBIE_XSK_DMA_ATTR); + + clear_bit(qid, map); + + return 0; +} +EXPORT_SYMBOL_NS_GPL(libie_xsk_disable_pool, LIBIE); diff --git a/include/linux/net/intel/libie/tx.h b/include/linux/net/intel/libie/tx.h index 07a19abb72fd..4d4d85af6f7e 100644 --- a/include/linux/net/intel/libie/tx.h +++ b/include/linux/net/intel/libie/tx.h @@ -85,4 +85,10 @@ static inline void libie_tx_complete_buf(struct libie_tx_buffer *buf, buf->type = LIBIE_TX_BUF_EMPTY; } +struct xdp_frame_bulk; + +void libie_tx_complete_any(struct libie_tx_buffer *buf, struct device *dev, + struct xdp_frame_bulk *bq, u32 *xdp_tx_active, + struct libie_sq_onstack_stats *ss); + #endif /* __LIBIE_TX_H */ diff --git a/include/linux/net/intel/libie/xdp.h b/include/linux/net/intel/libie/xdp.h new file mode 100644 index 000000000000..087fc075078f --- /dev/null +++ b/include/linux/net/intel/libie/xdp.h @@ -0,0 +1,586 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Intel Corporation. 
*/ + +#ifndef __LIBIE_XDP_H +#define __LIBIE_XDP_H + +#include +#include +#include + +#include + +/* Defined as bits to be able to use them as a mask */ +enum { + LIBIE_XDP_PASS = 0U, + LIBIE_XDP_DROP = BIT(0), + LIBIE_XDP_ABORTED = BIT(1), + LIBIE_XDP_TX = BIT(2), + LIBIE_XDP_REDIRECT = BIT(3), +}; + +/* XDP SQ sharing */ + +struct libie_xdp_sq_lock { + spinlock_t lock; + bool share; +}; + +DECLARE_STATIC_KEY_FALSE(libie_xdp_sq_share); + +static inline u32 libie_xdp_get_sq_num(u32 rxq, u32 txq, u32 max) +{ + return min(max(nr_cpu_ids, rxq), max - txq); +} + +static inline bool libie_xdp_sq_shared(u32 qid) +{ + return qid < nr_cpu_ids; +} + +static inline u32 libie_xdp_sq_id(u32 num) +{ + u32 ret = smp_processor_id(); + + if (static_branch_unlikely(&libie_xdp_sq_share) && + libie_xdp_sq_shared(num)) + ret %= num; + + return ret; +} + +void __libie_xdp_sq_get(struct libie_xdp_sq_lock *lock, + const struct net_device *dev); +void __libie_xdp_sq_put(struct libie_xdp_sq_lock *lock, + const struct net_device *dev); + +static inline void libie_xdp_sq_get(struct libie_xdp_sq_lock *lock, + const struct net_device *dev, + bool share) +{ + if (unlikely(share)) + __libie_xdp_sq_get(lock, dev); +} + +static inline void libie_xdp_sq_put(struct libie_xdp_sq_lock *lock, + const struct net_device *dev) +{ + if (static_branch_unlikely(&libie_xdp_sq_share) && lock->share) + __libie_xdp_sq_put(lock, dev); +} + +static inline void __acquires(&lock->lock) +libie_xdp_sq_lock(struct libie_xdp_sq_lock *lock) +{ + if (static_branch_unlikely(&libie_xdp_sq_share) && lock->share) + spin_lock(&lock->lock); +} + +static inline void __releases(&lock->lock) +libie_xdp_sq_unlock(struct libie_xdp_sq_lock *lock) +{ + if (static_branch_unlikely(&libie_xdp_sq_share) && lock->share) + spin_unlock(&lock->lock); +} + +/* ``XDP_TX`` bulking */ + +#define LIBIE_XDP_TX_BULK XDP_BULK_QUEUE_SIZE +#define LIBIE_XDP_TX_BATCH 8 + +#ifdef __clang__ +#define libie_xdp_tx_for _Pragma("clang loop unroll_count(8)") for +#elif __GNUC__ >= 8 +#define libie_xdp_tx_for _Pragma("GCC unroll (8)") for +#else +#define libie_xdp_tx_for for +#endif + +struct libie_xdp_tx_frame { + union { + struct { + void *data; + u16 len; + + enum xdp_buff_flags flags:16; + u32 soff; + }; + struct { + struct xdp_frame *xdpf; + dma_addr_t dma; + }; + + struct { + struct xdp_buff *xsk; + /* u32 len */ + }; + struct xdp_desc desc; + }; +}; +static_assert(sizeof(struct libie_xdp_tx_frame) == sizeof(struct xdp_desc)); + +struct libie_xdp_tx_bulk { + const struct bpf_prog *prog; + struct net_device *dev; + void *xdpq; + + u32 act_mask; + u32 count; + struct libie_xdp_tx_frame bulk[LIBIE_XDP_TX_BULK]; +}; + + +struct libie_xdp_tx_queue { + union { + struct device *dev; + struct xsk_buff_pool *pool; + }; + struct libie_tx_buffer *tx_buf; + void *desc_ring; + + struct libie_xdp_sq_lock *xdp_lock; + u16 *next_to_use; + u32 desc_count; + + u32 *xdp_tx_active; +}; + +struct libie_xdp_tx_desc { + dma_addr_t addr; + u32 len; +}; + +static inline void __libie_xdp_tx_init_bulk(struct libie_xdp_tx_bulk *bq, + const struct bpf_prog *prog, + struct net_device *dev, void *xdpq) +{ + bq->prog = prog; + bq->dev = dev; + bq->xdpq = xdpq; + + bq->act_mask = 0; + bq->count = 0; +} + +#define _libie_xdp_tx_init_bulk(bq, prog, dev, xdpqs, num, uniq) ({ \ + const struct bpf_prog *uniq = rcu_dereference(prog); \ + \ + if (uniq) \ + __libie_xdp_tx_init_bulk(bq, uniq, dev, \ + (xdpqs)[libie_xdp_sq_id(num)]); \ +}) + +#define libie_xdp_tx_init_bulk(bq, prog, dev, xdpqs, num) \ + 
_libie_xdp_tx_init_bulk(bq, prog, dev, xdpqs, num, \ + __UNIQUE_ID(prog_)) + +static inline void libie_xdp_tx_queue_bulk(struct libie_xdp_tx_bulk *bq, + const struct xdp_buff *xdp) +{ + bq->bulk[bq->count++] = (typeof(*bq->bulk)){ + .data = xdp->data, + .len = xdp->data_end - xdp->data, + .soff = xdp_data_hard_end(xdp) - xdp->data, + .flags = xdp->flags, + }; +} + +static inline struct libie_xdp_tx_desc +libie_xdp_tx_fill_buf(const struct libie_xdp_tx_frame *frm, + const struct libie_xdp_tx_queue *sq) +{ + struct libie_xdp_tx_desc desc = { + .len = frm->len, + }; + struct libie_tx_buffer *tx_buf; + + desc.addr = page_pool_dma_sync_va_for_device(frm->data, desc.len); + + tx_buf = &sq->tx_buf[*sq->next_to_use]; + tx_buf->type = LIBIE_TX_BUF_XDP_TX; + tx_buf->gso_segs = 1; + tx_buf->bytecount = desc.len; + tx_buf->sinfo = frm->data + frm->soff; + + return desc; +} + +static __always_inline u32 +libie_xdp_tx_xmit_bulk(const struct libie_xdp_tx_bulk *bq, + u32 (*prep)(void *xdpq, struct libie_xdp_tx_queue *sq), + struct libie_xdp_tx_desc + (*fill)(const struct libie_xdp_tx_frame *frm, + const struct libie_xdp_tx_queue *sq), + void (*xmit)(struct libie_xdp_tx_desc desc, + const struct libie_xdp_tx_queue *sq)) +{ + u32 this, batched, leftover, off = 0; + struct libie_xdp_tx_queue sq; + u32 free, count, ntu, i = 0; + + free = prep(bq->xdpq, &sq); + count = min3(bq->count, free, LIBIE_XDP_TX_BULK); + ntu = *sq.next_to_use; + +again: + this = sq.desc_count - ntu; + if (likely(this > count)) + this = count; + + batched = ALIGN_DOWN(this, LIBIE_XDP_TX_BATCH); + leftover = this - batched; + + for ( ; i < off + batched; i += LIBIE_XDP_TX_BATCH) { + libie_xdp_tx_for (u32 j = 0; j < LIBIE_XDP_TX_BATCH; j++) { + struct libie_xdp_tx_desc desc; + + desc = fill(&bq->bulk[i + j], &sq); + xmit(desc, &sq); + + ntu++; + } + } + + for ( ; i < off + batched + leftover; i++) { + struct libie_xdp_tx_desc desc; + + desc = fill(&bq->bulk[i], &sq); + xmit(desc, &sq); + + ntu++; + } + + if (likely(ntu < sq.desc_count)) + goto out; + + ntu = 0; + + count -= this; + if (count) { + off = i; + goto again; + } + +out: + *sq.next_to_use = ntu; + if (sq.xdp_tx_active) + *sq.xdp_tx_active += i; + + libie_xdp_sq_unlock(sq.xdp_lock); + + return i; +} + +void libie_xdp_tx_return_bulk(const struct libie_xdp_tx_frame *bq, u32 count); + +static __always_inline bool +__libie_xdp_tx_flush_bulk(struct libie_xdp_tx_bulk *bq, + u32 (*prep)(void *xdpq, + struct libie_xdp_tx_queue *sq), + struct libie_xdp_tx_desc + (*fill)(const struct libie_xdp_tx_frame *frm, + const struct libie_xdp_tx_queue *sq), + void (*xmit)(struct libie_xdp_tx_desc desc, + const struct libie_xdp_tx_queue *sq)) +{ + u32 sent, drops; + int err = 0; + + sent = libie_xdp_tx_xmit_bulk(bq, prep, fill, xmit); + drops = bq->count - sent; + bq->count = 0; + + if (unlikely(drops)) { + trace_xdp_exception(bq->dev, bq->prog, XDP_TX); + err = -ENXIO; + + libie_xdp_tx_return_bulk(&bq->bulk[sent], drops); + } + + trace_xdp_bulk_tx(bq->dev, sent, drops, err); + + return likely(sent); +} + +#define libie_xdp_tx_flush_bulk(bq, prep, xmit) \ + __libie_xdp_tx_flush_bulk(bq, prep, libie_xdp_tx_fill_buf, xmit) + +/* .ndo_xdp_xmit() implementation */ + +static inline bool libie_xdp_xmit_queue_bulk(struct libie_xdp_tx_bulk *bq, + struct xdp_frame *xdpf) +{ + struct device *dev = bq->dev->dev.parent; + dma_addr_t dma; + + dma = dma_map_single(dev, xdpf->data, xdpf->len, DMA_TO_DEVICE); + if (dma_mapping_error(dev, dma)) + return false; + + bq->bulk[bq->count++] = (typeof(*bq->bulk)){ + .xdpf 
= xdpf, + .dma = dma, + }; + + return true; +} + +static inline struct libie_xdp_tx_desc +libie_xdp_xmit_fill_buf(const struct libie_xdp_tx_frame *frm, + const struct libie_xdp_tx_queue *sq) +{ + struct xdp_frame *xdpf = frm->xdpf; + struct libie_xdp_tx_desc desc = { + .addr = frm->dma, + .len = xdpf->len, + }; + struct libie_tx_buffer *tx_buf; + + tx_buf = &sq->tx_buf[*sq->next_to_use]; + tx_buf->type = LIBIE_TX_BUF_XDP_XMIT; + tx_buf->gso_segs = 1; + tx_buf->bytecount = desc.len; + tx_buf->xdpf = xdpf; + + dma_unmap_addr_set(tx_buf, dma, frm->dma); + dma_unmap_len_set(tx_buf, len, desc.len); + + return desc; +} + +static __always_inline int +__libie_xdp_xmit_do_bulk(struct libie_xdp_tx_bulk *bq, + struct xdp_frame **frames, u32 n, u32 flags, + u32 (*prep)(void *xdpq, + struct libie_xdp_tx_queue *sq), + void (*xmit)(struct libie_xdp_tx_desc desc, + const struct libie_xdp_tx_queue *sq), + void (*finalize)(void *xdpq, bool tail)) +{ + int err = -ENXIO; + u32 nxmit = 0; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) + return -EINVAL; + + for (u32 i = 0; i < n; i++) { + if (!libie_xdp_xmit_queue_bulk(bq, frames[i])) + break; + } + + if (unlikely(!bq->count)) + goto out; + + nxmit = libie_xdp_tx_xmit_bulk(bq, prep, libie_xdp_xmit_fill_buf, + xmit); + if (unlikely(!nxmit)) + goto out; + + finalize(bq->xdpq, flags & XDP_XMIT_FLUSH); + + if (likely(nxmit == n)) + err = 0; + +out: + trace_xdp_bulk_tx(bq->dev, nxmit, n - nxmit, err); + + return nxmit; +} + +#define libie_xdp_xmit_init_bulk(bq, dev, xdpqs, num) \ + __libie_xdp_tx_init_bulk(bq, NULL, dev, \ + (xdpqs)[libie_xdp_sq_id(num)]) + +#define _libie_xdp_xmit_do_bulk(dev, n, fr, fl, xqs, nqs, pr, xm, fin, un) ({ \ + struct libie_xdp_tx_bulk un; \ + \ + libie_xdp_xmit_init_bulk(&un, dev, xqs, nqs); \ + __libie_xdp_xmit_do_bulk(&un, fr, n, fl, pr, xm, fin); \ +}) +#define libie_xdp_xmit_do_bulk(dev, n, fr, fl, xqs, nqs, pr, xm, fin) \ + _libie_xdp_xmit_do_bulk(dev, n, fr, fl, xqs, nqs, pr, xm, fin, \ + __UNIQUE_ID(bq_)) + +/* Rx polling path */ + +static inline void libie_xdp_init_buff(struct xdp_buff *dst, + const struct xdp_buff *src, + struct xdp_rxq_info *rxq) +{ + if (!src->data) { + dst->data = NULL; + dst->rxq = rxq; + } else { + *dst = *src; + } +} + +#define libie_xdp_save_buff(dst, src) libie_xdp_init_buff(dst, src, NULL) + +/** + * libie_xdp_process_buff - process an Rx buffer + * @xdp: XDP buffer to attach the buffer to + * @buf: Rx buffer to process + * @len: received data length from the descriptor + * + * Return: false if the descriptor must be skipped, true otherwise. + */ +static inline bool libie_xdp_process_buff(struct xdp_buff *xdp, + const struct libie_rx_buffer *buf, + u32 len) +{ + if (!libie_rx_sync_for_cpu(buf, len)) + return false; + + if (!xdp->data) { + xdp->flags = 0; + xdp->frame_sz = buf->truesize; + + xdp_prepare_buff(xdp, page_address(buf->page) + buf->offset, + buf->page->pp->p.offset, len, true); + } else if (!xdp_buff_add_frag(xdp, buf->page, + buf->offset + buf->page->pp->p.offset, + len, buf->truesize)) { + xdp_return_buff(xdp); + xdp->data = NULL; + + return false; + } + + return true; +} + +/** + * __libie_xdp_run_prog - run XDP program on an XDP buffer + * @xdp: XDP buffer to run the prog on + * @bq: buffer bulk for ``XDP_TX`` queueing + * + * Return: LIBIE_XDP_{PASS,DROP,TX,REDIRECT} depending on the prog's verdict. 
+ */ +static inline u32 __libie_xdp_run_prog(struct xdp_buff *xdp, + struct libie_xdp_tx_bulk *bq) +{ + const struct bpf_prog *prog = bq->prog; + u32 act; + + act = bpf_prog_run_xdp(prog, xdp); + switch (act) { + case XDP_ABORTED: +err: + trace_xdp_exception(bq->dev, prog, act); + fallthrough; + case XDP_DROP: + xdp_return_buff(xdp); + xdp->data = NULL; + + return LIBIE_XDP_DROP; + case XDP_PASS: + return LIBIE_XDP_PASS; + case XDP_TX: + libie_xdp_tx_queue_bulk(bq, xdp); + xdp->data = NULL; + + return LIBIE_XDP_TX; + case XDP_REDIRECT: + if (unlikely(xdp_do_redirect(bq->dev, xdp, prog))) + goto err; + + xdp->data = NULL; + + return LIBIE_XDP_REDIRECT; + default: + bpf_warn_invalid_xdp_action(bq->dev, prog, act); + goto err; + } +} + +static __always_inline u32 +__libie_xdp_run_flush(struct xdp_buff *xdp, struct libie_xdp_tx_bulk *bq, + u32 (*run)(struct xdp_buff *xdp, + struct libie_xdp_tx_bulk *bq), + bool (*flush_bulk)(struct libie_xdp_tx_bulk *)) +{ + u32 act; + + act = run(xdp, bq); + if (act == LIBIE_XDP_TX && + unlikely(bq->count == LIBIE_XDP_TX_BULK && !flush_bulk(bq))) + act = LIBIE_XDP_DROP; + + bq->act_mask |= act; + + return act; +} + +#define libie_xdp_run_prog(xdp, bq, fl) \ + (__libie_xdp_run_flush(xdp, bq, __libie_xdp_run_prog, fl) == \ + XDP_PASS) + +static __always_inline void +libie_xdp_finalize_rx(struct libie_xdp_tx_bulk *bq, + bool (*flush_bulk)(struct libie_xdp_tx_bulk *), + void (*finalize)(void *xdpq, bool tail)) +{ + if (bq->act_mask & LIBIE_XDP_TX) { + if (bq->count) + flush_bulk(bq); + finalize(bq->xdpq, true); + } + if (bq->act_mask & LIBIE_XDP_REDIRECT) + xdp_do_flush(); +} + +/* Tx buffer completion */ + +static inline void libie_xdp_return_sinfo(const struct libie_tx_buffer *buf, + bool napi) +{ + const struct skb_shared_info *sinfo = buf->sinfo; + struct page *page; + + if (likely(buf->gso_segs == 1)) + goto return_head; + + for (u32 i = 0; i < sinfo->nr_frags; i++) { + page = skb_frag_page(&sinfo->frags[i]); + page_pool_put_full_page(page->pp, page, napi); + } + +return_head: + page = virt_to_page(sinfo); + page_pool_put_full_page(page->pp, page, napi); +} + +static inline void libie_xdp_complete_tx_buf(struct libie_tx_buffer *buf, + struct device *dev, bool napi, + struct xdp_frame_bulk *bq, + u32 *xdp_tx_active, + struct libie_sq_onstack_stats *ss) +{ + switch (buf->type) { + case LIBIE_TX_BUF_EMPTY: + return; + case LIBIE_TX_BUF_XDP_TX: + libie_xdp_return_sinfo(buf, napi); + break; + case LIBIE_TX_BUF_XDP_XMIT: + dma_unmap_page(dev, dma_unmap_addr(buf, dma), + dma_unmap_len(buf, len), DMA_TO_DEVICE); + xdp_return_frame_bulk(buf->xdpf, bq); + break; + case LIBIE_TX_BUF_XSK_TX: + xsk_buff_free(buf->xdp); + break; + default: + break; + } + + (*xdp_tx_active)--; + + ss->packets += buf->gso_segs; + ss->bytes += buf->bytecount; + + buf->type = LIBIE_TX_BUF_EMPTY; +} + +#endif /* __LIBIE_XDP_H */ diff --git a/include/linux/net/intel/libie/xsk.h b/include/linux/net/intel/libie/xsk.h new file mode 100644 index 000000000000..d21fdb69a5e0 --- /dev/null +++ b/include/linux/net/intel/libie/xsk.h @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright(c) 2023 Intel Corporation. 
*/ + +#ifndef __LIBIE_XSK_H +#define __LIBIE_XSK_H + +#include + +/* ``XDP_TX`` bulking */ + +#define libie_xsk_tx_init_bulk(bq, prog, dev, xdpqs, num) \ + __libie_xdp_tx_init_bulk(bq, rcu_dereference(prog), dev, \ + (xdpqs)[libie_xdp_sq_id(num)]) + +static inline void libie_xsk_tx_queue_bulk(struct libie_xdp_tx_bulk *bq, + struct xdp_buff *xdp) +{ + bq->bulk[bq->count++] = (typeof(*bq->bulk)){ + .xsk = xdp, + .len = xdp->data_end - xdp->data, + }; +} + +static inline struct libie_xdp_tx_desc +libie_xsk_tx_fill_buf(const struct libie_xdp_tx_frame *frm, + const struct libie_xdp_tx_queue *sq) +{ + struct libie_xdp_tx_desc desc = { + .len = frm->len, + }; + struct xdp_buff *xdp = frm->xsk; + struct libie_tx_buffer *tx_buf; + + desc.addr = xsk_buff_xdp_get_dma(xdp); + xsk_buff_raw_dma_sync_for_device(sq->pool, desc.addr, desc.len); + + tx_buf = &sq->tx_buf[*sq->next_to_use]; + tx_buf->type = LIBIE_TX_BUF_XSK_TX; + tx_buf->gso_segs = 1; + tx_buf->bytecount = desc.len; + tx_buf->xdp = xdp; + + return desc; +} + +#define libie_xsk_tx_flush_bulk(bq, prep, xmit) \ + __libie_xdp_tx_flush_bulk(bq, prep, libie_xsk_tx_fill_buf, xmit) + +/* XSk xmit implementation */ + +#define libie_xsk_xmit_init_bulk(bq, xdpq) \ + __libie_xdp_tx_init_bulk(bq, NULL, NULL, xdpq) + +static inline struct libie_xdp_tx_desc +libie_xsk_xmit_fill_buf(const struct libie_xdp_tx_frame *frm, + const struct libie_xdp_tx_queue *sq) +{ + struct libie_xdp_tx_desc desc = { + .len = frm->desc.len, + }; + + desc.addr = xsk_buff_raw_get_dma(sq->pool, frm->desc.addr); + xsk_buff_raw_dma_sync_for_device(sq->pool, desc.addr, desc.len); + + return desc; +} + +static __always_inline bool +libie_xsk_xmit_do_bulk(void *xdpq, struct xsk_buff_pool *pool, u32 budget, + u32 (*prep)(void *xdpq, struct libie_xdp_tx_queue *sq), + void (*xmit)(struct libie_xdp_tx_desc desc, + const struct libie_xdp_tx_queue *sq), + void (*finalize)(void *xdpq, bool tail)) +{ + struct libie_xdp_tx_bulk bq; + u32 n, batched; + + n = xsk_tx_peek_release_desc_batch(pool, budget); + if (unlikely(!n)) + return true; + + batched = ALIGN_DOWN(n, LIBIE_XDP_TX_BULK); + + libie_xsk_xmit_init_bulk(&bq, xdpq); + bq.count = LIBIE_XDP_TX_BULK; + + for (u32 i = 0; i < batched; i += LIBIE_XDP_TX_BULK) { + memcpy(bq.bulk, &pool->tx_descs[i], sizeof(bq.bulk)); + libie_xdp_tx_xmit_bulk(&bq, prep, libie_xsk_xmit_fill_buf, + xmit); + } + + bq.count = n - batched; + + memcpy(bq.bulk, &pool->tx_descs[batched], bq.count * sizeof(*bq.bulk)); + libie_xdp_tx_xmit_bulk(&bq, prep, libie_xsk_xmit_fill_buf, xmit); + + finalize(bq.xdpq, true); + + if (xsk_uses_need_wakeup(pool)) + xsk_set_tx_need_wakeup(pool); + + return n < budget; +} + +/* Rx polling path */ + +/** + * __libie_xsk_run_prog - run XDP program on an XDP buffer + * @xdp: XDP buffer to run the prog on + * @bq: buffer bulk for ``XDP_TX`` queueing + * + * Return: LIBIE_XDP_{PASS,DROP,ABORTED,TX,REDIRECT} depending on the prog's + * verdict. 
+ */ +static inline u32 __libie_xsk_run_prog(struct xdp_buff *xdp, + struct libie_xdp_tx_bulk *bq) +{ + const struct bpf_prog *prog = bq->prog; + u32 act, drop = LIBIE_XDP_DROP; + struct xdp_buff_xsk *xsk; + int ret; + + act = bpf_prog_run_xdp(prog, xdp); + if (unlikely(act != XDP_REDIRECT)) + goto rest; + + ret = xdp_do_redirect(bq->dev, xdp, prog); + if (unlikely(ret)) + goto check_err; + + return LIBIE_XDP_REDIRECT; + +rest: + switch (act) { + case XDP_ABORTED: +err: + trace_xdp_exception(bq->dev, prog, act); + fallthrough; + case XDP_DROP: + xsk_buff_free(xdp); + + return drop; + case XDP_PASS: + return LIBIE_XDP_PASS; + case XDP_TX: + libie_xsk_tx_queue_bulk(bq, xdp); + + return LIBIE_XDP_TX; + default: + bpf_warn_invalid_xdp_action(bq->dev, prog, act); + goto err; + } + +check_err: + xsk = container_of(xdp, typeof(*xsk), xdp); + if (xsk_uses_need_wakeup(xsk->pool) && ret == -ENOBUFS) + drop = LIBIE_XDP_ABORTED; + + goto err; +} + +#define libie_xsk_run_prog(xdp, bq, fl) \ + __libie_xdp_run_flush(xdp, bq, __libie_xsk_run_prog, fl) + +/* Externals */ + +int libie_xsk_enable_pool(struct net_device *dev, u32 qid, unsigned long *map); +int libie_xsk_disable_pool(struct net_device *dev, u32 qid, + unsigned long *map); + +#endif /* __LIBIE_XSK_H */ From patchwork Sat Dec 23 02:55:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182888 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1461665dyi; Fri, 22 Dec 2023 19:05:58 -0800 (PST) X-Google-Smtp-Source: AGHT+IFyCLi3D8zLFlaSDiO9E0ZI8klDvJTK3iV4xPxH0qBsitWofJeExvoG6FHKvcyx49ZYSGI0 X-Received: by 2002:a05:6a20:8f1c:b0:191:bc5f:e803 with SMTP id b28-20020a056a208f1c00b00191bc5fe803mr3036024pzk.72.1703300758264; Fri, 22 Dec 2023 19:05:58 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300758; cv=none; d=google.com; s=arc-20160816; b=OYqfANeNAMCCryOf17I45d0UqPWjTN4AtAV5o+s39iPMQlMCPlRlO9uxJeir93pM0X y98ukUpvkUt9tUbkhb7rrI4WSkQQsNeBUHr2+0irBQukpuY9KHXTmoikbbeVHE9HKIvO gu7kOchIxZ62y3FfZDXrvd6HWhwmhYRD+FK20FydLLc642E6id4Ci5h5e8ytJgFxwl0Z AN1ZVkZKsnWvoNsZpKOmx2bY0DVNr6dzDCG76GBSt5M6aJzE+S+nD934/zoaDCJ01Bdd zg8vAM0BPymo8FvhYWamF699pt0yvXNhrK19/CSndyYiSkqqod97d+reUsGLfNDPrM9C 2LhA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=NG7l7AhtnDZ3sQo4XGXWtY563RD8zTpXXYrRiJtt/P0=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=EEepy0N9Ld5zt/IXKAYzP/b9XXZpNuH6V2NFfKLMN5K2UU7zG8i6rzJ+9pMUh+hcRJ KVgl8SuhH3j3U70lCMVTO9kXBae9Umk2I6XMYCI8GrqGWf2aMUuwqkVaaD4H9Qe7y9CV fDFi0zL3G/5Ttfl21TtZ4nRM4vU6NySVQbw+QkwoTBonPBucic+ydhOzWI6HUBR6vQAC 4fup613wpAARaKnQvJzifjgaOQkRYeZLD/BlZSu6r+NeE7ZroX/1ycPx3fPbeYYPRSng WOU3ZZFsuauFump/JurUR9sZAsswipc374vuE72t/LGbQP6h1cTuahd3KHib24AIo4ny ORgw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=mdCo6+IY; spf=pass (google.com: domain of linux-kernel+bounces-10235-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10235-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org. 
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 19/34] idpf: stop using macros for accessing queue descriptors Date: Sat, 23 Dec 2023 03:55:39 +0100 Message-ID: <20231223025554.2316836-20-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040296148645106 X-GMAIL-MSGID: 1786040296148645106 In C, we have structures and unions. Casting `void *` via macros is not only error-prone, but also looks confusing and awful in general. Replace it with a union and direct array dereferences. Had idpf had separate queue structures, it would look way more elegant -- will do one day. Signed-off-by: Alexander Lobakin --- .../ethernet/intel/idpf/idpf_singleq_txrx.c | 20 +++++----- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 30 +++++++-------- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 37 ++++++++----------- 3 files changed, 40 insertions(+), 47 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index 23dcc02e6976..7072d45f007b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -206,7 +206,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q, data_len = skb->data_len; size = skb_headlen(skb); - tx_desc = IDPF_BASE_TX_DESC(tx_q, i); + tx_desc = &tx_q->base_tx[i]; dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE); @@ -242,7 +242,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q, i++; if (i == tx_q->desc_count) { - tx_desc = IDPF_BASE_TX_DESC(tx_q, 0); + tx_desc = &tx_q->base_tx[0]; i = 0; } @@ -262,7 +262,7 @@ static void idpf_tx_singleq_map(struct idpf_queue *tx_q, i++; if (i == tx_q->desc_count) { - tx_desc = IDPF_BASE_TX_DESC(tx_q, 0); + tx_desc = &tx_q->base_tx[0]; i = 0; } @@ -311,7 +311,7 @@ idpf_tx_singleq_get_ctx_desc(struct idpf_queue *txq) memset(&txq->tx_buf[ntu], 0, sizeof(struct idpf_tx_buf)); txq->tx_buf[ntu].ctx_entry = true; - ctx_desc = IDPF_BASE_TX_CTX_DESC(txq, ntu); + ctx_desc = &txq->base_ctx[ntu]; IDPF_SINGLEQ_BUMP_RING_IDX(txq, ntu); txq->next_to_use = ntu; @@ -460,7 +460,7 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget, struct netdev_queue *nq; bool dont_wake; - tx_desc = IDPF_BASE_TX_DESC(tx_q, ntc); + tx_desc = &tx_q->base_tx[ntc]; tx_buf = &tx_q->tx_buf[ntc]; ntc -= tx_q->desc_count; @@ -509,7 +509,7 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget, if (unlikely(!ntc)) { ntc -= tx_q->desc_count; tx_buf = tx_q->tx_buf; - tx_desc = IDPF_BASE_TX_DESC(tx_q, 0); + tx_desc = &tx_q->base_tx[0]; } /* unmap any remaining paged data */ @@ -527,7 +527,7 @@ static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget, if (unlikely(!ntc)) { ntc -= tx_q->desc_count; tx_buf = tx_q->tx_buf; - tx_desc = IDPF_BASE_TX_DESC(tx_q, 0); + tx_desc = &tx_q->base_tx[0]; } } while (likely(budget)); @@ -880,7 +880,7 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue 
*rx_q, if (!cleaned_count) return false; - desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, nta); + desc = &rx_q->single_buf[nta]; do { dma_addr_t addr; @@ -898,7 +898,7 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q, nta++; if (unlikely(nta == rx_q->desc_count)) { - desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, 0); + desc = &rx_q->single_buf[0]; nta = 0; } @@ -998,7 +998,7 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget) struct idpf_rx_buf *rx_buf; /* get the Rx desc from Rx queue based on 'next_to_clean' */ - rx_desc = IDPF_RX_DESC(rx_q, ntc); + rx_desc = &rx_q->rx[ntc]; /* status_error_ptype_len will always be zero for unused * descriptors because it's cleared in cleanup, and overlaps diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 6fd9128e61d8..40b8d8b17827 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -533,7 +533,7 @@ static bool idpf_rx_post_buf_desc(struct idpf_queue *bufq, u16 buf_id) u16 nta = bufq->next_to_alloc; dma_addr_t addr; - splitq_rx_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, nta); + splitq_rx_desc = &bufq->split_buf[nta]; if (bufq->rx_hsplit_en) { bq.pp = bufq->hdr_pp; @@ -1560,7 +1560,7 @@ do { \ if (unlikely(!(ntc))) { \ ntc -= (txq)->desc_count; \ buf = (txq)->tx_buf; \ - desc = IDPF_FLEX_TX_DESC(txq, 0); \ + desc = &(txq)->flex_tx[0]; \ } else { \ (buf)++; \ (desc)++; \ @@ -1593,8 +1593,8 @@ static void idpf_tx_splitq_clean(struct idpf_queue *tx_q, u16 end, s16 ntc = tx_q->next_to_clean; struct idpf_tx_buf *tx_buf; - tx_desc = IDPF_FLEX_TX_DESC(tx_q, ntc); - next_pending_desc = IDPF_FLEX_TX_DESC(tx_q, end); + tx_desc = &tx_q->flex_tx[ntc]; + next_pending_desc = &tx_q->flex_tx[end]; tx_buf = &tx_q->tx_buf[ntc]; ntc -= tx_q->desc_count; @@ -1774,7 +1774,7 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget, int i; complq_budget = vport->compln_clean_budget; - tx_desc = IDPF_SPLITQ_TX_COMPLQ_DESC(complq, ntc); + tx_desc = &complq->comp[ntc]; ntc -= complq->desc_count; do { @@ -1840,7 +1840,7 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget, ntc++; if (unlikely(!ntc)) { ntc -= complq->desc_count; - tx_desc = IDPF_SPLITQ_TX_COMPLQ_DESC(complq, 0); + tx_desc = &complq->comp[0]; change_bit(__IDPF_Q_GEN_CHK, complq->flags); } @@ -2107,7 +2107,7 @@ void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb, * used one additional descriptor for a context * descriptor. Reset that here. 
*/ - tx_desc = IDPF_FLEX_TX_DESC(txq, idx); + tx_desc = &txq->flex_tx[idx]; memset(tx_desc, 0, sizeof(struct idpf_flex_tx_ctx_desc)); if (idx == 0) idx = txq->desc_count; @@ -2167,7 +2167,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q, data_len = skb->data_len; size = skb_headlen(skb); - tx_desc = IDPF_FLEX_TX_DESC(tx_q, i); + tx_desc = &tx_q->flex_tx[i]; dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE); @@ -2241,7 +2241,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q, i++; if (i == tx_q->desc_count) { - tx_desc = IDPF_FLEX_TX_DESC(tx_q, 0); + tx_desc = &tx_q->flex_tx[0]; i = 0; tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q); @@ -2286,7 +2286,7 @@ static void idpf_tx_splitq_map(struct idpf_queue *tx_q, i++; if (i == tx_q->desc_count) { - tx_desc = IDPF_FLEX_TX_DESC(tx_q, 0); + tx_desc = &tx_q->flex_tx[0]; i = 0; tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q); } @@ -2520,7 +2520,7 @@ idpf_tx_splitq_get_ctx_desc(struct idpf_queue *txq) txq->tx_buf[i].compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG; /* grab the next descriptor */ - desc = IDPF_FLEX_TX_CTX_DESC(txq, i); + desc = &txq->flex_ctx[i]; txq->next_to_use = idpf_tx_splitq_bump_ntu(txq, i); return desc; @@ -3020,7 +3020,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) u8 rxdid; /* get the Rx desc from Rx queue based on 'next_to_clean' */ - desc = IDPF_RX_DESC(rxq, ntc); + desc = &rxq->rx[ntc]; rx_desc = (struct virtchnl2_rx_flex_desc_adv_nic_3 *)desc; /* This memory barrier is needed to keep us from reading @@ -3225,11 +3225,11 @@ static void idpf_rx_clean_refillq(struct idpf_queue *bufq, int cleaned = 0; u16 gen; - buf_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, bufq_nta); + buf_desc = &bufq->split_buf[bufq_nta]; /* make sure we stop at ring wrap in the unlikely case ring is full */ while (likely(cleaned < refillq->desc_count)) { - u16 refill_desc = IDPF_SPLITQ_RX_BI_DESC(refillq, ntc); + u16 refill_desc = refillq->ring[ntc]; bool failure; gen = FIELD_GET(IDPF_RX_BI_GEN_M, refill_desc); @@ -3247,7 +3247,7 @@ static void idpf_rx_clean_refillq(struct idpf_queue *bufq, } if (unlikely(++bufq_nta == bufq->desc_count)) { - buf_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, 0); + buf_desc = &bufq->split_buf[0]; bufq_nta = 0; } else { buf_desc++; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 5975c6d029d7..2584bd94363f 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -112,24 +112,6 @@ do { \ #define IDPF_RXD_EOF_SPLITQ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M #define IDPF_RXD_EOF_SINGLEQ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M -#define IDPF_SINGLEQ_RX_BUF_DESC(rxq, i) \ - (&(((struct virtchnl2_singleq_rx_buf_desc *)((rxq)->desc_ring))[i])) -#define IDPF_SPLITQ_RX_BUF_DESC(rxq, i) \ - (&(((struct virtchnl2_splitq_rx_buf_desc *)((rxq)->desc_ring))[i])) -#define IDPF_SPLITQ_RX_BI_DESC(rxq, i) ((((rxq)->ring))[i]) - -#define IDPF_BASE_TX_DESC(txq, i) \ - (&(((struct idpf_base_tx_desc *)((txq)->desc_ring))[i])) -#define IDPF_BASE_TX_CTX_DESC(txq, i) \ - (&(((struct idpf_base_tx_ctx_desc *)((txq)->desc_ring))[i])) -#define IDPF_SPLITQ_TX_COMPLQ_DESC(txcq, i) \ - (&(((struct idpf_splitq_tx_compl_desc *)((txcq)->desc_ring))[i])) - -#define IDPF_FLEX_TX_DESC(txq, i) \ - (&(((union idpf_tx_flex_desc *)((txq)->desc_ring))[i])) -#define IDPF_FLEX_TX_CTX_DESC(txq, i) \ - (&(((struct idpf_flex_tx_ctx_desc *)((txq)->desc_ring))[i])) - #define IDPF_DESC_UNUSED(txq) \ 
((((txq)->next_to_clean > (txq)->next_to_use) ? 0 : (txq)->desc_count) + \ (txq)->next_to_clean - (txq)->next_to_use - 1) @@ -275,9 +257,6 @@ struct idpf_rx_extracted { #define IDPF_TX_MAX_DESC_DATA_ALIGNED \ ALIGN_DOWN(IDPF_TX_MAX_DESC_DATA, IDPF_TX_MAX_READ_REQ_SIZE) -#define IDPF_RX_DESC(rxq, i) \ - (&(((union virtchnl2_rx_desc *)((rxq)->desc_ring))[i])) - #define idpf_rx_buf libie_rx_buffer #define IDPF_RX_MAX_PTYPE_PROTO_IDS 32 @@ -586,7 +565,21 @@ struct idpf_queue { struct page_pool *pp; struct device *dev; }; - void *desc_ring; + union { + union virtchnl2_rx_desc *rx; + + struct virtchnl2_singleq_rx_buf_desc *single_buf; + struct virtchnl2_splitq_rx_buf_desc *split_buf; + + struct idpf_base_tx_desc *base_tx; + struct idpf_base_tx_ctx_desc *base_ctx; + union idpf_tx_flex_desc *flex_tx; + struct idpf_flex_tx_ctx_desc *flex_ctx; + + struct idpf_splitq_tx_compl_desc *comp; + + void *desc_ring; + }; u32 hdr_truesize; u32 truesize; From patchwork Sat Dec 23 02:55:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182885 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1461499dyi; Fri, 22 Dec 2023 19:05:27 -0800 (PST) X-Google-Smtp-Source: AGHT+IEi8gE4qyQ/NQaPNmvwk/QQPRoV4JyhiQtlIEeNgf74U8qx10yqjHVqf3/HNVeihjNCsXsW X-Received: by 2002:a05:6a20:daa6:b0:195:5f5b:5464 with SMTP id iy38-20020a056a20daa600b001955f5b5464mr504942pzb.10.1703300727248; Fri, 22 Dec 2023 19:05:27 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300727; cv=none; d=google.com; s=arc-20160816; b=hdFS9t9USzXcfxza9D7RMwEDFB7vb0S8gJZlO00nT+oDHc6Jm8nCPvvPJwKovQ8u7M pfd02bF7+BxNL0dTJXc3Ullha36QlUcVxV9kVocmTU7v+eIFN6/KFwcKy0IRvFPgoJY6 c3HXtDbFYydkqBDu3SIbF7c8XgTYnvkYm244tzYTgcE6aJD5Cd8X8+3oxZaSl9k8VHRr c8nFvLwI5GdT95KjbDAVpcMjj8QL8niVHdIMN1eL79gncxT4UyaJCv3clWF4hDhdknXk fspJgvw7ZaLKnF44MtE89a63R6tgqEMVL/hPBARZNF2rkefU6kTP9HNXjDdf2bO+6f4P 4QKg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=LT0q30qf9AhYyJBzv9dqYZ2s+JIxwyZtD8WILdKaKm8=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=CKAPVlKl7/ZHcb8RDb8n0bjKfKsczkYKb+wnBDHtmlVE/tYILbMNGlMHZqAX7u3OAX GLs9fp5iO0L/ftx0dWdq3kTt2kaI4BbcTwQ5/wHqAc5btl7g0co3oa3h3viPQ4U3nq2M zcBgQVSMGUTJuKQSiJBj9NFQ/pwXjGJ9oVPThi+4vJIaxQt0ZAyh1wNaQ6MHC2YBj7ba Ooeu7KOeF691Q71GnjlrXqzJTjAN9c5wytGO4hA+Ua4WTGgoCDT8V6Nf1lB9/SGOGpjP BV+QZ0+Iz7euLOufv2OLMopUsRNU0ZmN43x46DwGccLKVKw9Dxl1JgqyKLhZSr71a+HM RP4Q== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=G0BxMYx9; spf=pass (google.com: domain of linux-kernel+bounces-10236-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10236-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. 
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 20/34] idpf: make complq cleaning dependent on scheduling mode Date: Sat, 23 Dec 2023 03:55:40 +0100 Message-ID: <20231223025554.2316836-21-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040262897288524 X-GMAIL-MSGID: 1786040262897288524 From: Michal Kubiak Extend completion queue cleaning function to support queue-based scheduling mode needed for XDP queues. Add 4-byte descriptor for queue-based scheduling mode and perform some refactoring to extract the common code for both scheduling modes. Make the completion desc parsing function a static inline, as we'll need it in a different file soon. Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- .../net/ethernet/intel/idpf/idpf_lan_txrx.h | 6 +- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 203 ++++++++++-------- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 44 ++++ 3 files changed, 166 insertions(+), 87 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h index a5752dcab888..7f8fc9b61e90 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h @@ -184,13 +184,17 @@ struct idpf_base_tx_desc { __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */ }; /* read used with buffer queues */ -struct idpf_splitq_tx_compl_desc { +struct idpf_splitq_4b_tx_compl_desc { /* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */ __le16 qid_comptype_gen; union { __le16 q_head; /* Queue head */ __le16 compl_tag; /* Completion tag */ } q_head_compl_tag; +}; /* writeback used with completion queues */ + +struct idpf_splitq_tx_compl_desc { + struct idpf_splitq_4b_tx_compl_desc common; u8 ts[3]; u8 rsvd; /* Reserved */ }; /* writeback used with completion queues */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 40b8d8b17827..1a79ec1fb838 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -1722,8 +1722,8 @@ static bool idpf_tx_clean_buf_ring(struct idpf_queue *txq, u16 compl_tag, } /** - * idpf_tx_handle_rs_completion - clean a single packet and all of its buffers - * whether on the buffer ring or in the hash table + * idpf_tx_handle_rs_cmpl_qb - clean a single packet and all of its buffers + * whether the Tx queue is working in queue-based scheduling * @txq: Tx ring to clean * @desc: pointer to completion queue descriptor to extract completion * information from @@ -1732,20 +1732,33 @@ static bool idpf_tx_clean_buf_ring(struct idpf_queue *txq, u16 compl_tag, * * Returns bytes/packets cleaned */ -static void idpf_tx_handle_rs_completion(struct idpf_queue *txq, - struct idpf_splitq_tx_compl_desc *desc, - struct idpf_cleaned_stats *cleaned, - int budget) +static void +idpf_tx_handle_rs_cmpl_qb(struct idpf_queue *txq, + const struct idpf_splitq_4b_tx_compl_desc *desc, + struct 
idpf_cleaned_stats *cleaned, int budget) { - u16 compl_tag; + u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head); - if (!test_bit(__IDPF_Q_FLOW_SCH_EN, txq->flags)) { - u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head); - - return idpf_tx_splitq_clean(txq, head, budget, cleaned, false); - } + return idpf_tx_splitq_clean(txq, head, budget, cleaned, false); +} - compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag); +/** + * idpf_tx_handle_rs_cmpl_fb - clean a single packet and all of its buffers + * whether on the buffer ring or in the hash table (flow-based scheduling only) + * @txq: Tx ring to clean + * @desc: pointer to completion queue descriptor to extract completion + * information from + * @cleaned: pointer to stats struct to track cleaned packets/bytes + * @budget: Used to determine if we are in netpoll + * + * Returns bytes/packets cleaned + */ +static void +idpf_tx_handle_rs_cmpl_fb(struct idpf_queue *txq, + const struct idpf_splitq_4b_tx_compl_desc *desc, + struct idpf_cleaned_stats *cleaned,int budget) +{ + u16 compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag); /* If we didn't clean anything on the ring, this packet must be * in the hash table. Go clean it there. @@ -1754,6 +1767,65 @@ static void idpf_tx_handle_rs_completion(struct idpf_queue *txq, idpf_tx_clean_stashed_bufs(txq, compl_tag, cleaned, budget); } +/** + * idpf_tx_finalize_complq - Finalize completion queue cleaning + * @complq: completion queue to finalize + * @ntc: next to complete index + * @gen_flag: current state of generation flag + * @cleaned: returns number of packets cleaned + */ +static void idpf_tx_finalize_complq(struct idpf_queue *complq, int ntc, + bool gen_flag, int *cleaned) +{ + struct idpf_netdev_priv *np; + bool complq_ok = true; + int i; + + /* Store the state of the complq to be used later in deciding if a + * TXQ can be started again + */ + if (unlikely(IDPF_TX_COMPLQ_PENDING(complq->txq_grp) > + IDPF_TX_COMPLQ_OVERFLOW_THRESH(complq))) + complq_ok = false; + + np = netdev_priv(complq->vport->netdev); + for (i = 0; i < complq->txq_grp->num_txq; ++i) { + struct idpf_queue *tx_q = complq->txq_grp->txqs[i]; + struct netdev_queue *nq; + bool dont_wake; + + /* We didn't clean anything on this queue, move along */ + if (!tx_q->cleaned_bytes) + continue; + + *cleaned += tx_q->cleaned_pkts; + + /* Update BQL */ + nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx); + + dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) || + np->state != __IDPF_VPORT_UP || + !netif_carrier_ok(tx_q->vport->netdev); + /* Check if the TXQ needs to and can be restarted */ + __netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes, + IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH, + dont_wake); + + /* Reset cleaned stats for the next time this queue is + * cleaned + */ + tx_q->cleaned_bytes = 0; + tx_q->cleaned_pkts = 0; + } + + ntc += complq->desc_count; + complq->next_to_clean = ntc; + if (gen_flag) + set_bit(__IDPF_Q_GEN_CHK, complq->flags); + else + clear_bit(__IDPF_Q_GEN_CHK, complq->flags); +} + /** * idpf_tx_clean_complq - Reclaim resources on completion queue * @complq: Tx ring to clean @@ -1765,61 +1837,55 @@ static void idpf_tx_handle_rs_completion(struct idpf_queue *txq, static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget, int *cleaned) { - struct idpf_splitq_tx_compl_desc *tx_desc; + struct idpf_splitq_4b_tx_compl_desc *tx_desc; struct idpf_vport *vport = complq->vport; s16 ntc = complq->next_to_clean; - struct idpf_netdev_priv *np; unsigned int 
complq_budget; - bool complq_ok = true; - int i; + bool flow, gen_flag; + u32 pos = ntc; + + flow = test_bit(__IDPF_Q_FLOW_SCH_EN, complq->flags); + gen_flag = test_bit(__IDPF_Q_GEN_CHK, complq->flags); complq_budget = vport->compln_clean_budget; - tx_desc = &complq->comp[ntc]; + tx_desc = flow ? &complq->comp[pos].common : &complq->comp_4b[pos]; ntc -= complq->desc_count; do { struct idpf_cleaned_stats cleaned_stats = { }; struct idpf_queue *tx_q; - int rel_tx_qid; u16 hw_head; - u8 ctype; /* completion type */ - u16 gen; - - /* if the descriptor isn't done, no work yet to do */ - gen = (le16_to_cpu(tx_desc->qid_comptype_gen) & - IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S; - if (test_bit(__IDPF_Q_GEN_CHK, complq->flags) != gen) - break; - - /* Find necessary info of TX queue to clean buffers */ - rel_tx_qid = (le16_to_cpu(tx_desc->qid_comptype_gen) & - IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S; - if (rel_tx_qid >= complq->txq_grp->num_txq || - !complq->txq_grp->txqs[rel_tx_qid]) { - dev_err(&complq->vport->adapter->pdev->dev, - "TxQ not found\n"); - goto fetch_next_desc; - } - tx_q = complq->txq_grp->txqs[rel_tx_qid]; + int ctype; - /* Determine completion type */ - ctype = (le16_to_cpu(tx_desc->qid_comptype_gen) & - IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> - IDPF_TXD_COMPLQ_COMPL_TYPE_S; + ctype = idpf_parse_compl_desc(tx_desc, complq, &tx_q, + gen_flag); switch (ctype) { case IDPF_TXD_COMPLT_RE: + if (unlikely(!flow)) + goto fetch_next_desc; + hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head); idpf_tx_splitq_clean(tx_q, hw_head, budget, &cleaned_stats, true); break; case IDPF_TXD_COMPLT_RS: - idpf_tx_handle_rs_completion(tx_q, tx_desc, - &cleaned_stats, budget); + if (flow) + idpf_tx_handle_rs_cmpl_fb(tx_q, tx_desc, + &cleaned_stats, + budget); + else + idpf_tx_handle_rs_cmpl_qb(tx_q, tx_desc, + &cleaned_stats, + budget); break; case IDPF_TXD_COMPLT_SW_MARKER: idpf_tx_handle_sw_marker(tx_q); break; + case -ENODATA: + goto exit_clean_complq; + case -EINVAL: + goto fetch_next_desc; default: dev_err(&tx_q->vport->adapter->pdev->dev, "Unknown TX completion type: %d\n", @@ -1836,59 +1902,24 @@ static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget, u64_stats_update_end(&tx_q->stats_sync); fetch_next_desc: - tx_desc++; + pos++; ntc++; if (unlikely(!ntc)) { ntc -= complq->desc_count; - tx_desc = &complq->comp[0]; - change_bit(__IDPF_Q_GEN_CHK, complq->flags); + pos = 0; + gen_flag = !gen_flag; } + tx_desc = flow ? 
&complq->comp[pos].common : + &complq->comp_4b[pos]; prefetch(tx_desc); /* update budget accounting */ complq_budget--; } while (likely(complq_budget)); - /* Store the state of the complq to be used later in deciding if a - * TXQ can be started again - */ - if (unlikely(IDPF_TX_COMPLQ_PENDING(complq->txq_grp) > - IDPF_TX_COMPLQ_OVERFLOW_THRESH(complq))) - complq_ok = false; - - np = netdev_priv(complq->vport->netdev); - for (i = 0; i < complq->txq_grp->num_txq; ++i) { - struct idpf_queue *tx_q = complq->txq_grp->txqs[i]; - struct netdev_queue *nq; - bool dont_wake; - - /* We didn't clean anything on this queue, move along */ - if (!tx_q->cleaned_bytes) - continue; - - *cleaned += tx_q->cleaned_pkts; - - /* Update BQL */ - nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx); - - dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) || - np->state != __IDPF_VPORT_UP || - !netif_carrier_ok(tx_q->vport->netdev); - /* Check if the TXQ needs to and can be restarted */ - __netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes, - IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH, - dont_wake); - - /* Reset cleaned stats for the next time this queue is - * cleaned - */ - tx_q->cleaned_bytes = 0; - tx_q->cleaned_pkts = 0; - } - - ntc += complq->desc_count; - complq->next_to_clean = ntc; +exit_clean_complq: + idpf_tx_finalize_complq(complq, ntc, gen_flag, cleaned); return !!complq_budget; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 2584bd94363f..3e15ed779860 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -577,6 +577,7 @@ struct idpf_queue { struct idpf_flex_tx_ctx_desc *flex_ctx; struct idpf_splitq_tx_compl_desc *comp; + struct idpf_splitq_4b_tx_compl_desc *comp_4b; void *desc_ring; }; @@ -821,6 +822,49 @@ static inline void idpf_tx_splitq_build_desc(union idpf_tx_flex_desc *desc, idpf_tx_splitq_build_flow_desc(desc, params, td_cmd, size); } +/** + * idpf_parse_compl_desc - Parse the completion descriptor + * @desc: completion descriptor to be parsed + * @complq: completion queue containing the descriptor + * @txq: returns corresponding Tx queue for a given descriptor + * @gen_flag: current generation flag in the completion queue + * + * Returns completion type from descriptor or negative value in case of error: + * -ENODATA if there is no completion descriptor to be cleaned + * -EINVAL if no Tx queue has been found for the completion queue + */ +static inline int +idpf_parse_compl_desc(struct idpf_splitq_4b_tx_compl_desc *desc, + struct idpf_queue *complq, struct idpf_queue **txq, + bool gen_flag) +{ + int rel_tx_qid; + u8 ctype; /* completion type */ + u16 gen; + + /* if the descriptor isn't done, no work yet to do */ + gen = (le16_to_cpu(desc->qid_comptype_gen) & + IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S; + if (gen_flag != gen) + return -ENODATA; + + /* Find necessary info of TX queue to clean buffers */ + rel_tx_qid = (le16_to_cpu(desc->qid_comptype_gen) & + IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S; + if (rel_tx_qid >= complq->txq_grp->num_txq || + !complq->txq_grp->txqs[rel_tx_qid]) + return -EINVAL; + + *txq = complq->txq_grp->txqs[rel_tx_qid]; + + /* Determine completion type */ + ctype = (le16_to_cpu(desc->qid_comptype_gen) & + IDPF_TXD_COMPLQ_COMPL_TYPE_M) >> + IDPF_TXD_COMPLQ_COMPL_TYPE_S; + + return ctype; +} + /** * idpf_vport_intr_set_wb_on_itr - enable descriptor writeback on disabled interrupts * @q_vector: pointer to queue vector 
struct
From patchwork Sat Dec 23 02:55:41 2023 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182887
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 21/34] idpf: prepare structures to support xdp Date: Sat, 23 Dec 2023 03:55:41 +0100 Message-ID: <20231223025554.2316836-22-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040288265943330 X-GMAIL-MSGID: 1786040288265943330 From: Michal Kubiak Extend basic structures of the driver (e.g. 'idpf_vport', 'idpf_queue', 'idpf_vport_user_config_data') by adding members necessary to support XDP. Add extra XDP Tx queues needed to support XDP_TX and XDP_REDIRECT actions without interfering a regular Tx traffic. Also add functions dedicated to support XDP initialization for Rx and Tx queues and call those functions from the existing algorithms of queues configuration. Signed-off-by: Michal Kubiak Co-developed-by: Alexander Lobakin Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/Makefile | 2 + drivers/net/ethernet/intel/idpf/idpf.h | 23 +++ .../net/ethernet/intel/idpf/idpf_ethtool.c | 6 +- drivers/net/ethernet/intel/idpf/idpf_lib.c | 25 ++- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 122 ++++++++++++++- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 24 ++- .../net/ethernet/intel/idpf/idpf_virtchnl.c | 36 +++-- drivers/net/ethernet/intel/idpf/idpf_xdp.c | 147 ++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_xdp.h | 15 ++ 9 files changed, 375 insertions(+), 25 deletions(-) create mode 100644 drivers/net/ethernet/intel/idpf/idpf_xdp.c create mode 100644 drivers/net/ethernet/intel/idpf/idpf_xdp.h diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile index 6844ead2f3ac..4024781ff02b 100644 --- a/drivers/net/ethernet/intel/idpf/Makefile +++ b/drivers/net/ethernet/intel/idpf/Makefile @@ -16,3 +16,5 @@ idpf-y := \ idpf_txrx.o \ idpf_virtchnl.o \ idpf_vf_dev.o + +idpf-objs += idpf_xdp.o diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 596ece7df26a..76df52b797d9 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -376,6 +376,14 @@ struct idpf_vport { struct idpf_queue **txqs; bool crc_enable; + bool xdpq_share; + u16 num_xdp_txq; + u16 num_xdp_rxq; + u16 num_xdp_complq; + u16 xdp_txq_offset; + u16 xdp_rxq_offset; + u16 xdp_complq_offset; + u16 num_rxq; u16 num_bufq; u32 rxq_desc_count; @@ -465,8 +473,11 @@ struct idpf_vport_user_config_data { struct idpf_rss_data rss_data; u16 num_req_tx_qs; u16 num_req_rx_qs; + u16 num_req_xdp_qs; u32 num_req_txq_desc; u32 num_req_rxq_desc; + /* Duplicated in queue structure for performance reasons */ + struct bpf_prog *xdp_prog; DECLARE_BITMAP(user_flags, __IDPF_USER_FLAGS_NBITS); struct list_head mac_filter_list; }; @@ -685,6 +696,18 @@ static inline int idpf_is_queue_model_split(u16 q_model) return q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT; } +/** + * idpf_xdp_is_prog_ena - check if there is an XDP program on adapter + * @vport: vport to check + */ +static inline bool 
idpf_xdp_is_prog_ena(const struct idpf_vport *vport) +{ + if (!vport->adapter) + return false; + + return !!vport->adapter->vport_config[vport->idx]->user_config.xdp_prog; +} + #define idpf_is_cap_ena(adapter, field, flag) \ idpf_is_capability_ena(adapter, false, field, flag) #define idpf_is_cap_ena_all(adapter, field, flag) \ diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c index da7963f27bd8..0d192417205d 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c @@ -186,9 +186,11 @@ static void idpf_get_channels(struct net_device *netdev, { struct idpf_netdev_priv *np = netdev_priv(netdev); struct idpf_vport_config *vport_config; + const struct idpf_vport *vport; u16 num_txq, num_rxq; u16 combined; + vport = idpf_netdev_to_vport(netdev); vport_config = np->adapter->vport_config[np->vport_idx]; num_txq = vport_config->user_config.num_req_tx_qs; @@ -202,8 +204,8 @@ static void idpf_get_channels(struct net_device *netdev, ch->max_rx = vport_config->max_q.max_rxq; ch->max_tx = vport_config->max_q.max_txq; - ch->max_other = IDPF_MAX_MBXQ; - ch->other_count = IDPF_MAX_MBXQ; + ch->max_other = IDPF_MAX_MBXQ + vport->num_xdp_txq; + ch->other_count = IDPF_MAX_MBXQ + vport->num_xdp_txq; ch->combined_count = combined; ch->rx_count = num_rxq - combined; diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index 5fea2fd957eb..c3fb20197725 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -2,6 +2,7 @@ /* Copyright (C) 2023 Intel Corporation */ #include "idpf.h" +#include "idpf_xdp.h" static const struct net_device_ops idpf_netdev_ops_splitq; static const struct net_device_ops idpf_netdev_ops_singleq; @@ -912,6 +913,7 @@ static void idpf_vport_stop(struct idpf_vport *vport) idpf_remove_features(vport); vport->link_up = false; + idpf_xdp_rxq_info_deinit_all(vport); idpf_vport_intr_deinit(vport); idpf_vport_intr_rel(vport); idpf_vport_queues_rel(vport); @@ -1299,13 +1301,18 @@ static void idpf_restore_features(struct idpf_vport *vport) */ static int idpf_set_real_num_queues(struct idpf_vport *vport) { - int err; + int num_txq, err; err = netif_set_real_num_rx_queues(vport->netdev, vport->num_rxq); if (err) return err; - return netif_set_real_num_tx_queues(vport->netdev, vport->num_txq); + if (idpf_xdp_is_prog_ena(vport)) + num_txq = vport->num_txq - vport->num_xdp_txq; + else + num_txq = vport->num_txq; + + return netif_set_real_num_tx_queues(vport->netdev, num_txq); } /** @@ -1418,18 +1425,26 @@ static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res) idpf_rx_init_buf_tail(vport); + err = idpf_xdp_rxq_info_init_all(vport); + if (err) { + netdev_err(vport->netdev, + "Failed to initialize XDP RxQ info for vport %u: %pe\n", + vport->vport_id, ERR_PTR(err)); + goto intr_deinit; + } + err = idpf_send_config_queues_msg(vport); if (err) { dev_err(&adapter->pdev->dev, "Failed to configure queues for vport %u, %d\n", vport->vport_id, err); - goto intr_deinit; + goto rxq_deinit; } err = idpf_send_map_unmap_queue_vector_msg(vport, true); if (err) { dev_err(&adapter->pdev->dev, "Failed to map queue vectors for vport %u: %d\n", vport->vport_id, err); - goto intr_deinit; + goto rxq_deinit; } err = idpf_send_enable_queues_msg(vport); @@ -1477,6 +1492,8 @@ static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res) idpf_send_disable_queues_msg(vport); unmap_queue_vectors: 
idpf_send_map_unmap_queue_vector_msg(vport, false); +rxq_deinit: + idpf_xdp_rxq_info_deinit_all(vport); intr_deinit: idpf_vport_intr_deinit(vport); intr_rel: diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 1a79ec1fb838..d4a9f4c36b63 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -2,6 +2,7 @@ /* Copyright (C) 2023 Intel Corporation */ #include "idpf.h" +#include "idpf_xdp.h" /** * idpf_buf_lifo_push - push a buffer pointer onto stack @@ -61,15 +62,23 @@ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue) static void idpf_tx_buf_rel_all(struct idpf_queue *txq) { struct libie_sq_onstack_stats ss = { }; + struct xdp_frame_bulk bq; u16 i; /* Buffers already cleared, nothing to do */ if (!txq->tx_buf) return; + xdp_frame_bulk_init(&bq); + rcu_read_lock(); + /* Free all the Tx buffer sk_buffs */ for (i = 0; i < txq->desc_count; i++) - libie_tx_complete_buf(&txq->tx_buf[i], txq->dev, false, &ss); + libie_tx_complete_any(&txq->tx_buf[i], txq->dev, &bq, + &txq->xdp_tx_active, &ss); + + xdp_flush_frame_bulk(&bq); + rcu_read_unlock(); kfree(txq->tx_buf); txq->tx_buf = NULL; @@ -469,6 +478,7 @@ static int idpf_rx_hdr_buf_alloc_all(struct idpf_queue *rxq) struct libie_buf_queue bq = { .count = rxq->desc_count, .type = LIBIE_RX_BUF_HDR, + .xdp = idpf_xdp_is_prog_ena(rxq->vport), }; struct libie_rx_buffer *hdr_buf; int ret; @@ -647,6 +657,7 @@ static int idpf_rx_bufs_init(struct idpf_queue *rxbufq, .count = rxbufq->desc_count, .type = type, .hsplit = rxbufq->rx_hsplit_en, + .xdp = idpf_xdp_is_prog_ena(rxbufq->vport), }; int ret; @@ -917,6 +928,7 @@ static void idpf_vport_queue_grp_rel_all(struct idpf_vport *vport) */ void idpf_vport_queues_rel(struct idpf_vport *vport) { + idpf_vport_xdpq_put(vport); idpf_tx_desc_rel_all(vport); idpf_rx_desc_rel_all(vport); idpf_vport_queue_grp_rel_all(vport); @@ -984,6 +996,27 @@ void idpf_vport_init_num_qs(struct idpf_vport *vport, if (idpf_is_queue_model_split(vport->rxq_model)) vport->num_bufq = le16_to_cpu(vport_msg->num_rx_bufq); + vport->num_xdp_rxq = 0; + vport->xdp_rxq_offset = 0; + + if (idpf_xdp_is_prog_ena(vport)) { + vport->xdp_txq_offset = config_data->num_req_tx_qs; + vport->num_xdp_txq = le16_to_cpu(vport_msg->num_tx_q) - + vport->xdp_txq_offset; + vport->xdpq_share = libie_xdp_sq_shared(vport->num_xdp_txq); + } else { + vport->xdp_txq_offset = 0; + vport->num_xdp_txq = 0; + vport->xdpq_share = 0; + } + + if (idpf_is_queue_model_split(vport->txq_model)) { + vport->num_xdp_complq = vport->num_xdp_txq; + vport->xdp_complq_offset = vport->xdp_txq_offset; + } + + config_data->num_req_xdp_qs = vport->num_xdp_txq; + /* Adjust number of buffer queues per Rx queue group. 
*/ if (!idpf_is_queue_model_split(vport->rxq_model)) { vport->num_bufqs_per_qgrp = 0; @@ -1055,9 +1088,10 @@ int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_idx, int dflt_splitq_txq_grps = 0, dflt_singleq_txqs = 0; int dflt_splitq_rxq_grps = 0, dflt_singleq_rxqs = 0; u16 num_req_tx_qs = 0, num_req_rx_qs = 0; + struct idpf_vport_user_config_data *user; struct idpf_vport_config *vport_config; u16 num_txq_grps, num_rxq_grps; - u32 num_qs; + u32 num_qs, num_xdpq; vport_config = adapter->vport_config[vport_idx]; if (vport_config) { @@ -1105,6 +1139,29 @@ int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_idx, vport_msg->num_rx_bufq = 0; } + if (!vport_config) + return 0; + + user = &vport_config->user_config; + user->num_req_rx_qs = le16_to_cpu(vport_msg->num_rx_q); + user->num_req_tx_qs = le16_to_cpu(vport_msg->num_tx_q); + + if (vport_config->user_config.xdp_prog) + /* As we now know new number of Rx and Tx queues, we can + * request additional Tx queues for XDP. + */ + num_xdpq = libie_xdp_get_sq_num(user->num_req_rx_qs, + user->num_req_tx_qs, + IDPF_LARGE_MAX_Q); + else + num_xdpq = 0; + + user->num_req_xdp_qs = num_xdpq; + + vport_msg->num_tx_q = cpu_to_le16(user->num_req_tx_qs + num_xdpq); + if (idpf_is_queue_model_split(le16_to_cpu(vport_msg->txq_model))) + vport_msg->num_tx_complq = vport_msg->num_tx_q; + return 0; } @@ -1446,6 +1503,8 @@ int idpf_vport_queues_alloc(struct idpf_vport *vport) if (err) goto err_out; + idpf_vport_xdpq_get(vport); + return 0; err_out: @@ -3791,9 +3850,15 @@ static bool idpf_tx_splitq_clean_all(struct idpf_q_vector *q_vec, return true; budget_per_q = DIV_ROUND_UP(budget, num_txq); - for (i = 0; i < num_txq; i++) - clean_complete &= idpf_tx_clean_complq(q_vec->tx[i], - budget_per_q, cleaned); + + for (i = 0; i < num_txq; i++) { + struct idpf_queue *cq = q_vec->tx[i]; + + if (!test_bit(__IDPF_Q_XDP, cq->flags)) + clean_complete &= idpf_tx_clean_complq(cq, + budget_per_q, + cleaned); + } return clean_complete; } @@ -3893,13 +3958,22 @@ static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget) */ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) { + bool is_xdp_prog_ena = idpf_xdp_is_prog_ena(vport); u16 num_txq_grp = vport->num_txq_grp; int i, j, qv_idx, bufq_vidx = 0; struct idpf_rxq_group *rx_qgrp; struct idpf_txq_group *tx_qgrp; struct idpf_queue *q, *bufq; + int num_active_rxq; u16 q_index; + /* XDP Tx queues are handled within Rx loop, correct num_txq_grp so + * that it stores number of regular Tx queue groups. This way when we + * later assign Tx to qvector, we go only through regular Tx queues. + */ + if (is_xdp_prog_ena && idpf_is_queue_model_split(vport->txq_model)) + num_txq_grp = vport->xdp_txq_offset; + for (i = 0, qv_idx = 0; i < vport->num_rxq_grp; i++) { u16 num_rxq; @@ -3909,6 +3983,8 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) else num_rxq = rx_qgrp->singleq.num_rxq; + num_active_rxq = num_rxq - vport->num_xdp_rxq; + for (j = 0; j < num_rxq; j++) { if (qv_idx >= vport->num_q_vectors) qv_idx = 0; @@ -3921,6 +3997,30 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) q_index = q->q_vector->num_rxq; q->q_vector->rx[q_index] = q; q->q_vector->num_rxq++; + + /* Do not setup XDP Tx queues for dummy Rx queues. 
*/ + if (j >= num_active_rxq) + goto skip_xdp_txq_config; + + if (is_xdp_prog_ena) { + if (idpf_is_queue_model_split(vport->txq_model)) { + tx_qgrp = &vport->txq_grps[i + vport->xdp_txq_offset]; + q = tx_qgrp->complq; + q->q_vector = &vport->q_vectors[qv_idx]; + q_index = q->q_vector->num_txq; + q->q_vector->tx[q_index] = q; + q->q_vector->num_txq++; + } else { + tx_qgrp = &vport->txq_grps[i]; + q = tx_qgrp->txqs[j + vport->xdp_txq_offset]; + q->q_vector = &vport->q_vectors[qv_idx]; + q_index = q->q_vector->num_txq; + q->q_vector->tx[q_index] = q; + q->q_vector->num_txq++; + } + } + +skip_xdp_txq_config: qv_idx++; } @@ -3954,6 +4054,9 @@ static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport) q->q_vector->num_txq++; qv_idx++; } else { + num_txq = is_xdp_prog_ena ? tx_qgrp->num_txq - vport->xdp_txq_offset + : tx_qgrp->num_txq; + for (j = 0; j < num_txq; j++) { if (qv_idx >= vport->num_q_vectors) qv_idx = 0; @@ -4175,6 +4278,15 @@ static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport) rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data; + /* When we use this code for legacy devices (e.g. in AVF driver), some + * Rx queues may not be used because we would not be able to create XDP + * Tx queues for them. In such a case do not add their queue IDs to the + * RSS LUT by setting the number of active Rx queues to XDP Tx queues + * count. + */ + if (idpf_xdp_is_prog_ena(vport)) + num_active_rxq -= vport->num_xdp_rxq; + for (i = 0; i < rss_data->rss_lut_size; i++) { rss_data->rss_lut[i] = i % num_active_rxq; rss_data->cached_lut[i] = rss_data->rss_lut[i]; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 3e15ed779860..b1c30795f376 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -4,8 +4,7 @@ #ifndef _IDPF_TXRX_H_ #define _IDPF_TXRX_H_ -#include -#include +#include #include #include @@ -319,6 +318,7 @@ enum idpf_queue_flags_t { __IDPF_Q_FLOW_SCH_EN, __IDPF_Q_SW_MARKER, __IDPF_Q_POLL_MODE, + __IDPF_Q_XDP, __IDPF_Q_FLAGS_NBITS, }; @@ -554,13 +554,20 @@ struct idpf_queue { }; void __iomem *tail; union { - struct idpf_tx_buf *tx_buf; + struct { + struct idpf_tx_buf *tx_buf; + struct libie_xdp_sq_lock xdp_lock; + }; + u32 num_xdp_txq; struct { struct libie_rx_buffer *hdr_buf; struct idpf_rx_buf *buf; } rx_buf; }; - struct page_pool *hdr_pp; + union { + struct page_pool *hdr_pp; + struct idpf_queue **xdpqs; + }; union { struct page_pool *pp; struct device *dev; @@ -582,7 +589,10 @@ struct idpf_queue { void *desc_ring; }; - u32 hdr_truesize; + union { + u32 hdr_truesize; + u32 xdp_tx_active; + }; u32 truesize; u16 idx; u16 q_type; @@ -627,8 +637,12 @@ struct idpf_queue { union { /* Rx */ struct { + struct xdp_rxq_info xdp_rxq; + + struct bpf_prog __rcu *xdp_prog; struct sk_buff *skb; }; + /* Tx */ struct { u16 compl_tag_bufid_m; diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 5c3d7c3534af..59b8bbebead7 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1947,20 +1947,27 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; for (j = 0; j < tx_qgrp->num_txq; j++, k++) { + const struct idpf_q_vector *vec; + u32 v_idx, tx_itr_idx; + vqv[k].queue_type = cpu_to_le32(tx_qgrp->txqs[j]->q_type); vqv[k].queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id); 
- if (idpf_is_queue_model_split(vport->txq_model)) { - vqv[k].vector_id = - cpu_to_le16(tx_qgrp->complq->q_vector->v_idx); - vqv[k].itr_idx = - cpu_to_le32(tx_qgrp->complq->q_vector->tx_itr_idx); + if (idpf_is_queue_model_split(vport->txq_model)) + vec = tx_qgrp->complq->q_vector; + else + vec = tx_qgrp->txqs[j]->q_vector; + + if (vec) { + v_idx = vec->v_idx; + tx_itr_idx = vec->tx_itr_idx; } else { - vqv[k].vector_id = - cpu_to_le16(tx_qgrp->txqs[j]->q_vector->v_idx); - vqv[k].itr_idx = - cpu_to_le32(tx_qgrp->txqs[j]->q_vector->tx_itr_idx); + v_idx = 0; + tx_itr_idx = VIRTCHNL2_ITR_IDX_1; } + + vqv[k].vector_id = cpu_to_le16(v_idx); + vqv[k].itr_idx = cpu_to_le32(tx_itr_idx); } } @@ -3253,6 +3260,17 @@ int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport) vec_info.default_vport = vport->default_vport; vec_info.index = vport->idx; + /* Additional XDP Tx queues share the q_vector with regular Tx and Rx + * queues to which they are assigned. Also, XDP shall request additional + * Tx queues via VIRTCHNL. Therefore, to avoid exceeding over + * "vport->q_vector_idxs array", do not request empty q_vectors + * for XDP Tx queues. + */ + if (idpf_xdp_is_prog_ena(vport)) + vec_info.num_req_vecs = max_t(u16, + vport->num_txq - vport->num_xdp_txq, + vport->num_rxq); + num_alloc_vecs = idpf_req_rel_vector_indexes(vport->adapter, vport->q_vector_idxs, &vec_info); diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.c b/drivers/net/ethernet/intel/idpf/idpf_xdp.c new file mode 100644 index 000000000000..29b2fe68c7eb --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.c @@ -0,0 +1,147 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2023 Intel Corporation */ + +#include "idpf.h" +#include "idpf_xdp.h" + +static int idpf_rxq_for_each(const struct idpf_vport *vport, + int (*fn)(struct idpf_queue *rxq, void *arg), + void *arg) +{ + bool splitq = idpf_is_queue_model_split(vport->rxq_model); + + for (u32 i = 0; i < vport->num_rxq_grp; i++) { + const struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; + u32 num_rxq; + + if (splitq) + num_rxq = rx_qgrp->splitq.num_rxq_sets; + else + num_rxq = rx_qgrp->singleq.num_rxq; + + for (u32 j = 0; j < num_rxq; j++) { + struct idpf_queue *q; + int err; + + if (splitq) + q = &rx_qgrp->splitq.rxq_sets[j]->rxq; + else + q = rx_qgrp->singleq.rxqs[j]; + + err = fn(q, arg); + if (err) + return err; + } + } + + return 0; +} + +/** + * idpf_xdp_rxq_info_init - Setup XDP RxQ info for a given Rx queue + * @rxq: Rx queue for which the resources are setup + * @splitq: flag indicating if the HW works in split queue mode + * + * Return: 0 on success, negative on failure. + */ +static int idpf_xdp_rxq_info_init(struct idpf_queue *rxq, void *arg) +{ + const struct idpf_vport *vport = rxq->vport; + const struct page_pool *pp; + int err; + + err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx, + rxq->q_vector->napi.napi_id, + rxq->rx_buf_size); + if (err) + return err; + + pp = arg ? rxq->rxq_grp->splitq.bufq_sets[0].bufq.pp : rxq->pp; + xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, pp); + + rxq->xdpqs = &vport->txqs[vport->xdp_txq_offset]; + rxq->num_xdp_txq = vport->num_xdp_txq; + + return 0; +} + +/** + * idpf_xdp_rxq_info_init_all - initialize RxQ info for all Rx queues in vport + * @vport: vport to setup the info + * + * Return: 0 on success, negative on failure. 
+ */ +int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport) +{ + void *arg; + + arg = (void *)(size_t)idpf_is_queue_model_split(vport->rxq_model); + + return idpf_rxq_for_each(vport, idpf_xdp_rxq_info_init, arg); +} + +/** + * idpf_xdp_rxq_info_deinit - Deinit XDP RxQ info for a given Rx queue + * @rxq: Rx queue for which the resources are destroyed + */ +static int idpf_xdp_rxq_info_deinit(struct idpf_queue *rxq, void *arg) +{ + rxq->xdpqs = NULL; + rxq->num_xdp_txq = 0; + + xdp_rxq_info_detach_mem_model(&rxq->xdp_rxq); + xdp_rxq_info_unreg(&rxq->xdp_rxq); + + return 0; +} + +/** + * idpf_xdp_rxq_info_deinit_all - deinit RxQ info for all Rx queues in vport + * @vport: vport to setup the info + */ +void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport) +{ + idpf_rxq_for_each(vport, idpf_xdp_rxq_info_deinit, NULL); +} + +void idpf_vport_xdpq_get(const struct idpf_vport *vport) +{ + if (!idpf_xdp_is_prog_ena(vport)) + return; + + cpus_read_lock(); + + for (u32 j = vport->xdp_txq_offset; j < vport->num_txq; j++) { + struct idpf_queue *xdpq = vport->txqs[j]; + + __clear_bit(__IDPF_Q_FLOW_SCH_EN, xdpq->flags); + __clear_bit(__IDPF_Q_FLOW_SCH_EN, + xdpq->txq_grp->complq->flags); + __set_bit(__IDPF_Q_XDP, xdpq->flags); + __set_bit(__IDPF_Q_XDP, xdpq->txq_grp->complq->flags); + + libie_xdp_sq_get(&xdpq->xdp_lock, vport->netdev, + vport->xdpq_share); + } + + cpus_read_unlock(); +} + +void idpf_vport_xdpq_put(const struct idpf_vport *vport) +{ + if (!idpf_xdp_is_prog_ena(vport)) + return; + + cpus_read_lock(); + + for (u32 j = vport->xdp_txq_offset; j < vport->num_txq; j++) { + struct idpf_queue *xdpq = vport->txqs[j]; + + if (!__test_and_clear_bit(__IDPF_Q_XDP, xdpq->flags)) + continue; + + libie_xdp_sq_put(&xdpq->xdp_lock, vport->netdev); + } + + cpus_read_unlock(); +} diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.h b/drivers/net/ethernet/intel/idpf/idpf_xdp.h new file mode 100644 index 000000000000..16b30caaac3f --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2023 Intel Corporation */ + +#ifndef _IDPF_XDP_H_ +#define _IDPF_XDP_H_ + +struct idpf_vport; + +int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport); +void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport); + +void idpf_vport_xdpq_get(const struct idpf_vport *vport); +void idpf_vport_xdpq_put(const struct idpf_vport *vport); + +#endif /* _IDPF_XDP_H_ */ From patchwork Sat Dec 23 02:55:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182889 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1461744dyi; Fri, 22 Dec 2023 19:06:11 -0800 (PST) X-Google-Smtp-Source: AGHT+IF89Y65U5xq8GXe6xWk9PNgerbqBVS1yr3+asJsSse1n6xx0Ler3AKAovSZ7zbWlcqdKrZd X-Received: by 2002:a05:6214:1d05:b0:67f:4752:2079 with SMTP id e5-20020a0562141d0500b0067f47522079mr6350986qvd.35.1703300771138; Fri, 22 Dec 2023 19:06:11 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300771; cv=none; d=google.com; s=arc-20160816; b=jjvCeuB7j2gN15hAsJCNi2ncyntqKUuQgVXbZV399AHa6w8yYIxwCXfe61Bc5qWryR /K3ocdpeD+3Oymd+aTGdnCOrys3jl0ltjrtWYCz4tdo47hMW5p0nRSHqv8LPiJEdvoS9 2rM2uj92ZwsSUrMiv2XYwoXZateGiY23y/wpVAtbR+h1qs+ZJxLlxg8Jve9j67hRkG/Y 3KSTScbTN8ysOokOf4Rv6rVqGyREoICdHbV38VsW9e/pUMKlvM7PhYuw+MhQHfwGm4Ts 0h6oPZtB8yffRpzY1GIl1fL642YxSFmfbidlUD90t9A5KpbDSsKvJl8lk7LMMqnUgKXr uYfA== 
From: Alexander Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 22/34] idpf: implement XDP_SETUP_PROG in ndo_bpf for splitq Date: Sat, 23 Dec 2023 03:55:42 +0100 Message-ID: <20231223025554.2316836-23-aleksander.lobakin@intel.com> In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> From: Michal Kubiak Implement loading the XDP program for splitq using the ndo_bpf callback with the XDP_SETUP_PROG command. Add functions for stopping, reconfiguring and restarting all queues when needed. Also implement XDP hot swapping: when an existing XDP program is replaced by another one, the new program is installed without reconfiguring anything.
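For illustration, below is a minimal, self-contained user-space sketch of the decision flow described above (hot swap vs. full queue reconfiguration). The names in it (struct xdp_state, setup_prog(), reconfigure_queues()) are invented for the example and are not part of the driver; the real logic lives in idpf_xdp_setup_prog() in the diff that follows.

/* Build with any C compiler, e.g. "cc xdp_swap_sketch.c && ./a.out". */
#include <stdbool.h>
#include <stdio.h>

struct xdp_state {
	void *prog;		/* currently attached program, NULL if none */
	bool xdp_queues;	/* extra XDP Tx queues currently allocated */
};

static int reconfigure_queues(struct xdp_state *st, bool enable)
{
	/* In the driver this stops the vport, deletes and re-adds queues
	 * over virtchnl and reopens the vport; here it only flips a flag.
	 */
	printf("reconfiguring queues (XDP %s)\n", enable ? "on" : "off");
	st->xdp_queues = enable;
	return 0;
}

static int setup_prog(struct xdp_state *st, void *new_prog)
{
	bool needs_reconfig = !!st->prog != !!new_prog;

	st->prog = new_prog;

	if (!needs_reconfig) {
		/* Hot swap: XDP stays enabled (or stays disabled), so only
		 * the program pointer changes and no queues are touched.
		 */
		printf("hot-swapped XDP program\n");
		return 0;
	}

	/* Enabling or disabling XDP changes the queue layout. */
	return reconfigure_queues(st, new_prog != NULL);
}

int main(void)
{
	struct xdp_state st = { 0 };
	int a, b;

	setup_prog(&st, &a);	/* off -> on: reconfigure */
	setup_prog(&st, &b);	/* on -> on: hot swap */
	setup_prog(&st, NULL);	/* on -> off: reconfigure */

	return 0;
}

The key check is whether attaching or detaching the program toggles XDP on or off for the vport; only that transition requires tearing the queues down and re-adding them.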
Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf.h | 2 + drivers/net/ethernet/intel/idpf/idpf_lib.c | 5 +- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 5 + drivers/net/ethernet/intel/idpf/idpf_xdp.c | 160 ++++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_xdp.h | 7 + 5 files changed, 177 insertions(+), 2 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 76df52b797d9..91f61060f500 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -975,6 +975,8 @@ int idpf_set_promiscuous(struct idpf_adapter *adapter, u32 vport_id); int idpf_send_disable_queues_msg(struct idpf_vport *vport); void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q); +int idpf_vport_open(struct idpf_vport *vport, bool alloc_res); +void idpf_vport_stop(struct idpf_vport *vport); u32 idpf_get_vport_id(struct idpf_vport *vport); int idpf_vport_queue_ids_init(struct idpf_vport *vport); int idpf_queue_reg_init(struct idpf_vport *vport); diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index c3fb20197725..01130e7c4d2e 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -889,7 +889,7 @@ static void idpf_remove_features(struct idpf_vport *vport) * idpf_vport_stop - Disable a vport * @vport: vport to disable */ -static void idpf_vport_stop(struct idpf_vport *vport) +void idpf_vport_stop(struct idpf_vport *vport) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); @@ -1369,7 +1369,7 @@ static void idpf_rx_init_buf_tail(struct idpf_vport *vport) * @vport: vport to bring up * @alloc_res: allocate queue resources */ -static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res) +int idpf_vport_open(struct idpf_vport *vport, bool alloc_res) { struct idpf_netdev_priv *np = netdev_priv(vport->netdev); struct idpf_adapter *adapter = vport->adapter; @@ -2444,6 +2444,7 @@ static const struct net_device_ops idpf_netdev_ops_splitq = { .ndo_get_stats64 = idpf_get_stats64, .ndo_set_features = idpf_set_features, .ndo_tx_timeout = idpf_tx_timeout, + .ndo_bpf = idpf_xdp, }; static const struct net_device_ops idpf_netdev_ops_singleq = { diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index d4a9f4c36b63..e7081b68bc7d 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -929,6 +929,8 @@ static void idpf_vport_queue_grp_rel_all(struct idpf_vport *vport) void idpf_vport_queues_rel(struct idpf_vport *vport) { idpf_vport_xdpq_put(vport); + idpf_copy_xdp_prog_to_qs(vport, NULL); + idpf_tx_desc_rel_all(vport); idpf_rx_desc_rel_all(vport); idpf_vport_queue_grp_rel_all(vport); @@ -1485,6 +1487,7 @@ static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport) */ int idpf_vport_queues_alloc(struct idpf_vport *vport) { + struct bpf_prog *prog; int err; err = idpf_vport_queue_grp_alloc_all(vport); @@ -1503,6 +1506,8 @@ int idpf_vport_queues_alloc(struct idpf_vport *vport) if (err) goto err_out; + prog = vport->adapter->vport_config[vport->idx]->user_config.xdp_prog; + idpf_copy_xdp_prog_to_qs(vport, prog); idpf_vport_xdpq_get(vport); return 0; diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.c b/drivers/net/ethernet/intel/idpf/idpf_xdp.c index 29b2fe68c7eb..87d147e80047 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xdp.c +++ 
b/drivers/net/ethernet/intel/idpf/idpf_xdp.c @@ -104,6 +104,33 @@ void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport) idpf_rxq_for_each(vport, idpf_xdp_rxq_info_deinit, NULL); } +static int idpf_xdp_rxq_assign_prog(struct idpf_queue *rxq, void *arg) +{ + struct mutex *lock = &rxq->vport->adapter->vport_ctrl_lock; + struct bpf_prog *prog = arg; + struct bpf_prog *old; + + if (prog) + bpf_prog_inc(prog); + + old = rcu_replace_pointer(rxq->xdp_prog, prog, lockdep_is_held(lock)); + if (old) + bpf_prog_put(old); + + return 0; +} + +/** + * idpf_copy_xdp_prog_to_qs - set pointers to xdp program for each Rx queue + * @vport: vport to setup XDP for + * @xdp_prog: XDP program that should be copied to all Rx queues + */ +void idpf_copy_xdp_prog_to_qs(const struct idpf_vport *vport, + struct bpf_prog *xdp_prog) +{ + idpf_rxq_for_each(vport, idpf_xdp_rxq_assign_prog, xdp_prog); +} + void idpf_vport_xdpq_get(const struct idpf_vport *vport) { if (!idpf_xdp_is_prog_ena(vport)) @@ -145,3 +172,136 @@ void idpf_vport_xdpq_put(const struct idpf_vport *vport) cpus_read_unlock(); } + +/** + * idpf_xdp_reconfig_queues - reconfigure queues after the XDP setup + * @vport: vport to load or unload XDP for + */ +static int idpf_xdp_reconfig_queues(struct idpf_vport *vport) +{ + int err; + + err = idpf_vport_adjust_qs(vport); + if (err) { + netdev_err(vport->netdev, + "Could not adjust queue number for XDP\n"); + return err; + } + idpf_vport_calc_num_q_desc(vport); + + err = idpf_vport_queues_alloc(vport); + if (err) { + netdev_err(vport->netdev, + "Could not allocate queues for XDP\n"); + return err; + } + + err = idpf_send_add_queues_msg(vport, vport->num_txq, + vport->num_complq, + vport->num_rxq, vport->num_bufq); + if (err) { + netdev_err(vport->netdev, + "Could not add queues for XDP, VC message sent failed\n"); + return err; + } + + idpf_vport_alloc_vec_indexes(vport); + + return 0; +} + +/** + * idpf_assign_bpf_prog - Assign a given BPF program to vport + * @current_prog: pointer to XDP program in user config data + * @prog: BPF program to be assigned to vport + */ +static void idpf_assign_bpf_prog(struct bpf_prog **current_prog, + struct bpf_prog *prog) +{ + struct bpf_prog *old_prog = *current_prog; + + *current_prog = prog; + if (old_prog) + bpf_prog_put(old_prog); +} + +/** + * idpf_xdp_setup_prog - Add or remove XDP eBPF program + * @vport: vport to setup XDP for + * @prog: XDP program + * @extack: netlink extended ack + */ +static int +idpf_xdp_setup_prog(struct idpf_vport *vport, struct bpf_prog *prog, + struct netlink_ext_ack *extack) +{ + struct idpf_netdev_priv *np = netdev_priv(vport->netdev); + bool needs_reconfig, vport_is_up; + struct bpf_prog **current_prog; + u16 idx = vport->idx; + int err; + + vport_is_up = np->state == __IDPF_VPORT_UP; + + current_prog = &vport->adapter->vport_config[idx]->user_config.xdp_prog; + needs_reconfig = !!(*current_prog) != !!prog; + + if (!needs_reconfig) { + idpf_copy_xdp_prog_to_qs(vport, prog); + idpf_assign_bpf_prog(current_prog, prog); + + return 0; + } + + if (!vport_is_up) { + idpf_send_delete_queues_msg(vport); + } else { + set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags); + idpf_vport_stop(vport); + } + + idpf_deinit_rss(vport); + + idpf_assign_bpf_prog(current_prog, prog); + + err = idpf_xdp_reconfig_queues(vport); + if (err) { + NL_SET_ERR_MSG_MOD(extack, "Could not reconfigure the queues after XDP setup\n"); + return err; + } + + if (vport_is_up) { + err = idpf_vport_open(vport, false); + if (err) { + NL_SET_ERR_MSG_MOD(extack, "Could not 
re-open the vport after XDP setup\n"); + return err; + } + } + + return 0; +} + +/** + * idpf_xdp - implements XDP handler + * @netdev: netdevice + * @xdp: XDP command + */ +int idpf_xdp(struct net_device *netdev, struct netdev_bpf *xdp) +{ + struct idpf_vport *vport; + int err; + + idpf_vport_ctrl_lock(netdev); + vport = idpf_netdev_to_vport(netdev); + + switch (xdp->command) { + case XDP_SETUP_PROG: + err = idpf_xdp_setup_prog(vport, xdp->prog, xdp->extack); + break; + default: + err = -EINVAL; + } + + idpf_vport_ctrl_unlock(netdev); + return err; +} diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.h b/drivers/net/ethernet/intel/idpf/idpf_xdp.h index 16b30caaac3f..1d102b1fd2ac 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xdp.h +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.h @@ -4,12 +4,19 @@ #ifndef _IDPF_XDP_H_ #define _IDPF_XDP_H_ +struct bpf_prog; struct idpf_vport; +struct net_device; +struct netdev_bpf; int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport); void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport); +void idpf_copy_xdp_prog_to_qs(const struct idpf_vport *vport, + struct bpf_prog *xdp_prog); void idpf_vport_xdpq_get(const struct idpf_vport *vport); void idpf_vport_xdpq_put(const struct idpf_vport *vport); +int idpf_xdp(struct net_device *netdev, struct netdev_bpf *xdp); + #endif /* _IDPF_XDP_H_ */ From patchwork Sat Dec 23 02:55:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182890 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1461886dyi; Fri, 22 Dec 2023 19:06:33 -0800 (PST) X-Google-Smtp-Source: AGHT+IHt/1bg/AK3QlLvIe7OOg8qRxE+JzmFoGAE6XF1uQyoSAjRWbtTkJDDCSN0YpIKp/+ED0UA X-Received: by 2002:a05:622a:1715:b0:427:a2f7:43b8 with SMTP id h21-20020a05622a171500b00427a2f743b8mr3079495qtk.47.1703300792923; Fri, 22 Dec 2023 19:06:32 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300792; cv=none; d=google.com; s=arc-20160816; b=qUbU8lAzZKUN9M9Eb7/kLwghR0289GpBGn9VOMwa9Nz+dBqN8bG9GxI7EymOZUdLe/ mPvoP/rNV52gzqx5e4ThGikaz1PwZRXCvCgg3+yKNGeCde5GoeAGTjAzcGcuglGRH1Sm L9peqjEVoRU0dmAw6TkC5XsF4gsUxKL/2ArCX4/MWphZkcVpMfY72GmZawyEp+s8TZzx iaPhgwxTCBCddoUiLyUWhihtlbYek5S7gakbV0eVHEuKaozVoSokJ0L4GWtkQoz48DaY TtGQNubUsQWP/eJUFXVekZPGlZfwlKTXzsXbguLExtQMwzah1aXiZQ9hENUY0d+ryvVR 5RdA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=0smMbPTyIvqcdf9ocrjI8rBbrDynFfQbf6yNUbuKqQQ=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=VK9HOKvx8wzQ/+RkRpOySVah0k92UesRcM9WrsfUhiIggnHDn84Inv/hB27cNUoLWD DzfWow/wp38l5/9FWlhktfGy0ZM27+UP1fCXLlgOZ3URaFRhFIDr7FXiu6XLTFfBF27U Uws3yptvkm3endmXYpoLCmE46RRgBLKiBulfw+aJVN9zLevMzYyoLsus0s2lN8IVNMYe XQERWM5cNM94Jr1S6YTQWxIhNEIpfXyzDGWWBu0RSmSXN7RJlrB7qRt419dxHJNtE17+ E4VO2/DJDG3w45zXV9XGIdFXtWBx/UHMfG6MEc3lSMP475BOlZ1I93/lpNU5/Xs7HDZr cHAQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=JC53GI9N; spf=pass (google.com: domain of linux-kernel+bounces-10239-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10239-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from 
ny.mirrors.kernel.org From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 23/34] idpf: use generic functions to build xdp_buff and skb Date: Sat, 23 Dec 2023 03:55:43 +0100 Message-ID: <20231223025554.2316836-24-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040332490895374 X-GMAIL-MSGID: 1786040332490895374 In preparation of XDP support, move from having skb as the main frame container during the Rx polling to &xdp_buff. This allows to use generic and libie helpers for building an XDP buffer and changes the logics: now we try to allocate an skb only when we processed all the descriptors related to the frame. For sure, &xdp_buff is "a bit" bigger in size comparing to skb pointer to store on the ring, but I already reserved a cacheline-aligned slot for it earlier. Signed-off-by: Alexander Lobakin --- .../ethernet/intel/idpf/idpf_singleq_txrx.c | 60 +++------- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 108 ++++-------------- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 5 +- 3 files changed, 41 insertions(+), 132 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c index 7072d45f007b..fa1b66595024 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c @@ -601,14 +601,9 @@ static bool idpf_rx_singleq_test_staterr(const union virtchnl2_rx_desc *rx_desc, /** * idpf_rx_singleq_is_non_eop - process handling of non-EOP buffers - * @rxq: Rx ring being processed * @rx_desc: Rx descriptor for current buffer - * @skb: Current socket buffer containing buffer in progress - * @ntc: next to clean */ -static bool idpf_rx_singleq_is_non_eop(struct idpf_queue *rxq, - union virtchnl2_rx_desc *rx_desc, - struct sk_buff *skb, u16 ntc) +static bool idpf_rx_singleq_is_non_eop(const union virtchnl2_rx_desc *rx_desc) { /* if we are the last buffer then there is nothing else to do */ if (likely(idpf_rx_singleq_test_staterr(rx_desc, IDPF_RXD_EOF_SINGLEQ))) @@ -843,9 +838,6 @@ static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q, struct libie_rx_ptype_parsed parsed = rx_q->vport->rx_ptype_lkup[ptype]; - /* modifies the skb - consumes the enet header */ - skb->protocol = eth_type_trans(skb, rx_q->vport->netdev); - /* Check if we're using base mode descriptor IDs */ if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M) { idpf_rx_singleq_base_hash(rx_q, skb, rx_desc, parsed); @@ -854,8 +846,6 @@ static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q, idpf_rx_singleq_flex_hash(rx_q, skb, rx_desc, parsed); idpf_rx_singleq_flex_csum(rx_q, skb, rx_desc, parsed); } - - skb_record_rx_queue(skb, rx_q->idx); } /** @@ -986,16 +976,19 @@ static void idpf_rx_singleq_extract_fields(struct idpf_queue *rx_q, static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget) { unsigned int total_rx_bytes = 0, total_rx_pkts = 0; - struct sk_buff *skb = rx_q->skb; u16 ntc = rx_q->next_to_clean; u16 
cleaned_count = 0; bool failure = false; + struct xdp_buff xdp; + + libie_xdp_init_buff(&xdp, &rx_q->xdp, &rx_q->xdp_rxq); /* Process Rx packets bounded by budget */ while (likely(total_rx_pkts < (unsigned int)budget)) { struct idpf_rx_extracted fields = { }; union virtchnl2_rx_desc *rx_desc; struct idpf_rx_buf *rx_buf; + struct sk_buff *skb; /* get the Rx desc from Rx queue based on 'next_to_clean' */ rx_desc = &rx_q->rx[ntc]; @@ -1019,45 +1012,35 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget) idpf_rx_singleq_extract_fields(rx_q, rx_desc, &fields); rx_buf = &rx_q->rx_buf.buf[ntc]; - if (!libie_rx_sync_for_cpu(rx_buf, fields.size)) - goto skip_data; - - if (skb) - idpf_rx_add_frag(rx_buf, skb, fields.size); - else - skb = idpf_rx_build_skb(rx_buf, fields.size); - - /* exit if we failed to retrieve a buffer */ - if (!skb) - break; - -skip_data: + libie_xdp_process_buff(&xdp, rx_buf, fields.size); rx_buf->page = NULL; IDPF_SINGLEQ_BUMP_RING_IDX(rx_q, ntc); cleaned_count++; /* skip if it is non EOP desc */ - if (idpf_rx_singleq_is_non_eop(rx_q, rx_desc, skb, ntc)) + if (!xdp.data || idpf_rx_singleq_is_non_eop(rx_desc)) continue; #define IDPF_RXD_ERR_S FIELD_PREP(VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M, \ VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_M) if (unlikely(idpf_rx_singleq_test_staterr(rx_desc, IDPF_RXD_ERR_S))) { - dev_kfree_skb_any(skb); - skb = NULL; - continue; - } +drop_cont: + xdp_return_buff(&xdp); + xdp.data = NULL; - /* pad skb if needed (to make valid ethernet frame) */ - if (eth_skb_pad(skb)) { - skb = NULL; continue; } - /* probably a little skewed due to removing CRC */ - total_rx_bytes += skb->len; + total_rx_bytes += xdp_get_buff_len(&xdp); + total_rx_pkts++; + + skb = xdp_build_skb_from_buff(&xdp); + if (unlikely(!skb)) + goto drop_cont; + + xdp.data = NULL; /* protocol */ idpf_rx_singleq_process_skb_fields(rx_q, skb, @@ -1065,15 +1048,10 @@ static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget) /* send completed skb up the stack */ napi_gro_receive(&rx_q->q_vector->napi, skb); - skb = NULL; - - /* update budget accounting */ - total_rx_pkts++; } - rx_q->skb = skb; - rx_q->next_to_clean = ntc; + libie_xdp_save_buff(&rx_q->xdp, &xdp); if (cleaned_count) failure = idpf_rx_singleq_buf_hw_alloc_all(rx_q, cleaned_count); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index e7081b68bc7d..cbbb6bf85b19 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -390,9 +390,9 @@ static void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model) if (!rxq) return; - if (rxq->skb) { - dev_kfree_skb_any(rxq->skb); - rxq->skb = NULL; + if (rxq->xdp.data) { + xdp_return_buff(&rxq->xdp); + rxq->xdp.data = NULL; } if (bufq || !idpf_is_queue_model_split(q_model)) @@ -2971,8 +2971,6 @@ static int idpf_rx_process_skb_fields(struct idpf_queue *rxq, /* process RSS/hash */ idpf_rx_hash(rxq, skb, rx_desc, parsed); - skb->protocol = eth_type_trans(skb, rxq->vport->netdev); - if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M, le16_to_cpu(rx_desc->hdrlen_flags))) return idpf_rx_rsc(rxq, skb, rx_desc, parsed); @@ -2980,59 +2978,9 @@ static int idpf_rx_process_skb_fields(struct idpf_queue *rxq, idpf_rx_splitq_extract_csum_bits(rx_desc, &csum_bits); idpf_rx_csum(rxq, skb, csum_bits, parsed); - skb_record_rx_queue(skb, rxq->idx); - return 0; } -/** - * idpf_rx_add_frag - Add contents of Rx buffer to sk_buff as a frag - * @rx_buf: buffer containing page to add - * 
@skb: sk_buff to place the data into - * @size: packet length from rx_desc - * - * This function will add the data contained in rx_buf->page to the skb. - * It will just attach the page as a frag to the skb. - * The function will then update the page offset. - */ -void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb, - unsigned int size) -{ - u32 hr = rx_buf->page->pp->p.offset; - - skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page, - rx_buf->offset + hr, size, rx_buf->truesize); -} - -/** - * idpf_rx_build_skb - Allocate skb and populate it from header buffer - * @buf: Rx buffer to pull data from - * @size: the length of the packet - * - * This function allocates an skb. It then populates it with the page data from - * the current receive descriptor, taking care to set up the skb correctly. - */ -struct sk_buff *idpf_rx_build_skb(const struct libie_rx_buffer *buf, u32 size) -{ - u32 hr = buf->page->pp->p.offset; - struct sk_buff *skb; - void *va; - - va = page_address(buf->page) + buf->offset; - net_prefetch(va + hr); - - skb = napi_build_skb(va, buf->truesize); - if (unlikely(!skb)) - return NULL; - - skb_mark_for_recycle(skb); - - skb_reserve(skb, hr); - __skb_put(skb, size); - - return skb; -} - /** * idpf_rx_splitq_test_staterr - tests bits in Rx descriptor * status and error fields @@ -3096,8 +3044,10 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) { int total_rx_bytes = 0, total_rx_pkts = 0; struct idpf_queue *rx_bufq = NULL; - struct sk_buff *skb = rxq->skb; u16 ntc = rxq->next_to_clean; + struct xdp_buff xdp; + + libie_xdp_init_buff(&xdp, &rxq->xdp, &rxq->xdp_rxq); /* Process Rx packets bounded by budget */ while (likely(total_rx_pkts < budget)) { @@ -3109,6 +3059,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) unsigned int pkt_len = 0; unsigned int hdr_len = 0; u16 gen_id, buf_id = 0; + struct sk_buff *skb; int bufq_id; /* Header buffer overflow only valid for header split */ bool hbo; @@ -3179,7 +3130,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) hdr = &rx_bufq->rx_buf.hdr_buf[buf_id]; - if (unlikely(!hdr_len && !skb)) { + if (unlikely(!hdr_len && !xdp.data)) { hdr_len = idpf_rx_hsplit_wa(hdr, rx_buf, pkt_len); pkt_len -= hdr_len; @@ -3188,11 +3139,7 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) u64_stats_update_end(&rxq->stats_sync); } - if (libie_rx_sync_for_cpu(hdr, hdr_len)) { - skb = idpf_rx_build_skb(hdr, hdr_len); - if (!skb) - break; - + if (libie_xdp_process_buff(&xdp, hdr, hdr_len)) { u64_stats_update_begin(&rxq->stats_sync); u64_stats_inc(&rxq->q_stats.rx.hsplit_pkts); u64_stats_update_end(&rxq->stats_sync); @@ -3201,55 +3148,42 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) hdr->page = NULL; payload: - if (!libie_rx_sync_for_cpu(rx_buf, pkt_len)) - goto skip_data; - - if (skb) - idpf_rx_add_frag(rx_buf, skb, pkt_len); - else - skb = idpf_rx_build_skb(rx_buf, pkt_len); - - /* exit if we failed to retrieve a buffer */ - if (!skb) - break; - -skip_data: + libie_xdp_process_buff(&xdp, rx_buf, pkt_len); rx_buf->page = NULL; idpf_rx_post_buf_refill(refillq, buf_id); IDPF_RX_BUMP_NTC(rxq, ntc); /* skip if it is non EOP desc */ - if (!idpf_rx_splitq_is_eop(rx_desc)) + if (!xdp.data || !idpf_rx_splitq_is_eop(rx_desc)) continue; - /* pad skb if needed (to make valid ethernet frame) */ - if (eth_skb_pad(skb)) { - skb = NULL; + total_rx_bytes += xdp_get_buff_len(&xdp); + total_rx_pkts++; + + skb = xdp_build_skb_from_buff(&xdp); + 
if (unlikely(!skb)) { + xdp_return_buff(&xdp); + xdp.data = NULL; + continue; } - /* probably a little skewed due to removing CRC */ - total_rx_bytes += skb->len; + xdp.data = NULL; /* protocol */ if (unlikely(idpf_rx_process_skb_fields(rxq, skb, rx_desc))) { dev_kfree_skb_any(skb); - skb = NULL; continue; } /* send completed skb up the stack */ napi_gro_receive(&rxq->q_vector->napi, skb); - skb = NULL; - - /* update budget accounting */ - total_rx_pkts++; } rxq->next_to_clean = ntc; + libie_xdp_save_buff(&rxq->xdp, &xdp); - rxq->skb = skb; u64_stats_update_begin(&rxq->stats_sync); u64_stats_add(&rxq->q_stats.rx.packets, total_rx_pkts); u64_stats_add(&rxq->q_stats.rx.bytes, total_rx_bytes); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index b1c30795f376..318241020347 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -640,7 +640,7 @@ struct idpf_queue { struct xdp_rxq_info xdp_rxq; struct bpf_prog __rcu *xdp_prog; - struct sk_buff *skb; + struct xdp_buff xdp; }; /* Tx */ @@ -918,9 +918,6 @@ int idpf_config_rss(struct idpf_vport *vport); int idpf_init_rss(struct idpf_vport *vport); void idpf_deinit_rss(struct idpf_vport *vport); int idpf_rx_bufs_init_all(struct idpf_vport *vport); -void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb, - unsigned int size); -struct sk_buff *idpf_rx_build_skb(const struct libie_rx_buffer *buf, u32 size); bool idpf_init_rx_buf_hw_alloc(struct idpf_queue *rxq, struct idpf_rx_buf *buf); void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val); void idpf_tx_buf_hw_update(struct idpf_queue *tx_q, u32 val, From patchwork Sat Dec 23 02:55:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182891 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1461993dyi; Fri, 22 Dec 2023 19:06:54 -0800 (PST) X-Google-Smtp-Source: AGHT+IEprmV7r3eY8zHctOw9ItusQjFtIrmrdrBECvsoXO/a0mt+KlAeTLOBTMegdsgr/Ipctb6i X-Received: by 2002:a05:6808:2f14:b0:3b2:db24:6384 with SMTP id gu20-20020a0568082f1400b003b2db246384mr3298905oib.38.1703300814043; Fri, 22 Dec 2023 19:06:54 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300814; cv=none; d=google.com; s=arc-20160816; b=q/MK8XxnvU42wStisY1WaF/AyPuuS/qc6F7UgwCXkaszlUzBDZKLMUbJz81iZAjVnb BuweTPl64o4PePuBY2WKQlYnVSfz5sg170J6lEIj8YdNEYWquGwhYcMaR2zDgisTzapp GSD/stxMjoHuyPVrAs6H+W6fZ7QNuJeY8SVMmPVei7rpLL6orCz+LDHPCwl6EZPNuVLb DXLPS9lrld8dATIqmkyoz5TAdfkrJaN8jX4P4fp9qQGZtRVEYJEFi5mlcX14zQYdt36p kod60glICM8OiVc5FkcFWlGqw892sDRQBC0lMWmj1zgCqyI7jvVgtujZhnijscL5T/Um r5nQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=snPZaLIvMwg8WPNSa2MWWPPcxv2g23wW1GMgyWJG/6Q=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=bCBIaf7OU0bcWgU+opdS0SPuWlMr9XLf+gXnnyyJwk3aSx2ZNZ0gH3l+GEf/XTnzUb 4YKuxhhIrcjwZEfrG9HlTb9NPg0uQ3cJEKtmAs0pSiynSqHiS3Y2hp+snN6IjmQdmD5r cw7ciXCH6TZZCCkvwRgS5PRq1rN0yDPy4T5zOLqHYzrloDMbji674UDeSv8fPiHwsOHr a2/W9tVqLab0m7hLgQBUyvmzG1mWQEQGtLP2JTnnvGdPDb9okg0ic+StwZ0u+Ri1BlUB WksmfUy2QFeWz1CHxyISxV1uGJUxRv7kckePApUZN/GEIlCDkoyTkkXQQU+tKIyXXm5s quvw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com 
header.s=Intel header.b="ZQuu/OFl"; spf=pass (google.com: domain of linux-kernel+bounces-10240-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10240-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sv.mirrors.kernel.org (sv.mirrors.kernel.org. [139.178.88.99]) by mx.google.com with ESMTPS id k12-20020a17090a9d8c00b0028c1af28c9dsi1314823pjp.167.2023.12.22.19.06.53 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Dec 2023 19:06:54 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-10240-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) client-ip=139.178.88.99; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="ZQuu/OFl"; spf=pass (google.com: domain of linux-kernel+bounces-10240-ouuuleilei=gmail.com@vger.kernel.org designates 139.178.88.99 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10240-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sv.mirrors.kernel.org (Postfix) with ESMTPS id C0A57285C59 for ; Sat, 23 Dec 2023 03:06:53 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 5C4DA381DB; Sat, 23 Dec 2023 02:59:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="ZQuu/OFl" X-Original-To: linux-kernel@vger.kernel.org Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3279E364CB; Sat, 23 Dec 2023 02:59:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703300378; x=1734836378; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=3Hl8jA0nkF3VRC2QZ5shDCjQ+QpnIc5D2V4s/YyOE+s=; b=ZQuu/OFlK7i3mgwG2XiHSoRpJ65clNjI4eU2Fz6yRSAXBx80CU2o0PbU PjcNDSHBbWCKrQmHetpeH7ovq0xxRVX2hAoOIWndOOlo3HSdbd2tQIMKD 9s0EUBcoKd5eCcuRLaucJyxJoLbqrM0ReM3NDGchNZcNd4BXSwUMloRrO fzSXMWfHz3VFSb1E0x2vHUMn8MXG5k0iEXSbxTKg0KDfE6DfAEXHdnpLm QTNmi5yI4ANDE1MNypjMvvrO91sqY0uj4miKthU63fVwhgqdIKGwlc6Ep nPvJpO8187GQMv2poWHfeMlMSBJ8K4ADMau1v2rlq+QrtimVHWV5co2Jv A==; X-IronPort-AV: E=McAfee;i="6600,9927,10932"; a="386611025" X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="386611025" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 18:59:37 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537622" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 18:59:34 -0800 From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 24/34] idpf: add support for XDP on Rx Date: Sat, 23 Dec 2023 03:55:44 +0100 Message-ID: <20231223025554.2316836-25-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040354183075911 X-GMAIL-MSGID: 1786040354183075911 Use libie XDP infra to support running XDP program on Rx polling. This includes all of the possible verdicts/actions. XDP Tx queues are cleaned only in "lazy" mode when there are less than 1/4 free descriptors left on the ring. Some functions are oneliners around libie's __always_inlines, so that the compiler could uninline them when needed. Co-developed-by: Michal Kubiak Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_lib.c | 6 + drivers/net/ethernet/intel/idpf/idpf_txrx.c | 10 +- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 55 ++++++++ drivers/net/ethernet/intel/idpf/idpf_xdp.c | 140 ++++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_xdp.h | 20 ++- 5 files changed, 227 insertions(+), 4 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index 01130e7c4d2e..a19704c4c421 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -840,6 +840,12 @@ static int idpf_cfg_netdev(struct idpf_vport *vport) netdev->features |= dflt_features; netdev->hw_features |= dflt_features | offloads; netdev->hw_enc_features |= dflt_features | offloads; + + if (idpf_is_queue_model_split(vport->rxq_model)) + xdp_set_features_flag(netdev, NETDEV_XDP_ACT_BASIC | + NETDEV_XDP_ACT_REDIRECT | + NETDEV_XDP_ACT_RX_SG); + idpf_set_ethtool_ops(netdev); SET_NETDEV_DEV(netdev, &adapter->pdev->dev); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index cbbb6bf85b19..99c9b889507b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -1522,7 +1522,7 @@ int idpf_vport_queues_alloc(struct idpf_vport *vport) * idpf_tx_handle_sw_marker - Handle queue marker packet * @tx_q: tx queue to handle software marker */ -static void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q) +void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q) { struct idpf_vport *vport = tx_q->vport; int i; @@ -3045,8 +3045,11 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) int total_rx_bytes = 0, total_rx_pkts = 0; struct idpf_queue *rx_bufq = NULL; u16 ntc = rxq->next_to_clean; + struct libie_xdp_tx_bulk bq; struct xdp_buff xdp; + libie_xdp_tx_init_bulk(&bq, rxq->xdp_prog, rxq->xdp_rxq.dev, + rxq->xdpqs, rxq->num_xdp_txq); libie_xdp_init_buff(&xdp, &rxq->xdp, &rxq->xdp_rxq); /* Process Rx packets bounded by budget */ @@ -3161,6 +3164,9 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) total_rx_bytes += xdp_get_buff_len(&xdp); total_rx_pkts++; + if (!idpf_xdp_run_prog(&xdp, &bq)) + continue; + skb = 
xdp_build_skb_from_buff(&xdp); if (unlikely(!skb)) { xdp_return_buff(&xdp); @@ -3182,7 +3188,9 @@ static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget) } rxq->next_to_clean = ntc; + libie_xdp_save_buff(&rxq->xdp, &xdp); + idpf_xdp_finalize_rx(&bq); u64_stats_update_begin(&rxq->stats_sync); u64_stats_add(&rxq->q_stats.rx.packets, total_rx_pkts); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 318241020347..20f484712ac2 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -135,6 +135,8 @@ do { \ ((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \ 0 : (txq)->compl_tag_cur_gen) +#define IDPF_QUEUE_QUARTER(Q) ((Q)->desc_count >> 2) + #define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS) #define IDPF_TX_FLAGS_TSO BIT(0) @@ -939,5 +941,58 @@ netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb, bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rxq, u16 cleaned_count); int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off); +void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q); + +/** + * idpf_xdpq_update_tail - Updates the XDP Tx queue tail register + * @xdpq: XDP Tx queue + * + * This function updates the XDP Tx queue tail register. + */ +static inline void idpf_xdpq_update_tail(const struct idpf_queue *xdpq) +{ + /* Force memory writes to complete before letting h/w + * know there are new descriptors to fetch. + */ + wmb(); + writel_relaxed(xdpq->next_to_use, xdpq->tail); +} + +/** + * idpf_set_rs_bit - set RS bit on last produced descriptor. + * @xdpq: XDP queue to produce the HW Tx descriptors on + * + * Returns the index of descriptor RS bit was set on (one behind current NTU). + */ +static inline void idpf_set_rs_bit(const struct idpf_queue *xdpq) +{ + int rs_idx = xdpq->next_to_use ? xdpq->next_to_use - 1 : + xdpq->desc_count - 1; + union idpf_tx_flex_desc *tx_desc; + + tx_desc = &xdpq->flex_tx[rs_idx]; + tx_desc->q.qw1.cmd_dtype |= le16_encode_bits(IDPF_TXD_LAST_DESC_CMD, + IDPF_FLEX_TXD_QW1_CMD_M); +} + +/** + * idpf_xdp_tx_finalize - Bump XDP Tx tail and/or flush redirect map + * @xdpq: XDP Tx queue + * + * This function bumps XDP Tx tail and should be called when a batch of packets + * has been processed in the napi loop. + */ +static inline void idpf_xdp_tx_finalize(void *_xdpq, bool tail) +{ + struct idpf_queue *xdpq = _xdpq; + + libie_xdp_sq_lock(&xdpq->xdp_lock); + + idpf_set_rs_bit(xdpq); + if (tail) + idpf_xdpq_update_tail(xdpq); + + libie_xdp_sq_unlock(&xdpq->xdp_lock); +} #endif /* !_IDPF_TXRX_H_ */ diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.c b/drivers/net/ethernet/intel/idpf/idpf_xdp.c index 87d147e80047..b9952ebda4fb 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xdp.c +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.c @@ -173,6 +173,146 @@ void idpf_vport_xdpq_put(const struct idpf_vport *vport) cpus_read_unlock(); } +/** + * idpf_clean_xdp_irq - Reclaim a batch of TX resources from completed XDP_TX + * @xdpq: XDP Tx queue + * + * Returns number of cleaned descriptors. 
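+ * The completion queue is walked until it runs dry or one full ring worth
+ * of completions has been processed; the queue head reported by the most
+ * recent RS completion tells how many Tx descriptors are done (accounting
+ * for ring wrap-around), and the corresponding Tx buffers are then
+ * bulk-freed and folded into the queue stats.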
+ */ +static u32 idpf_clean_xdp_irq(struct idpf_queue *xdpq) +{ + struct idpf_queue *complq = xdpq->txq_grp->complq, *txq; + struct idpf_splitq_4b_tx_compl_desc *last_rs_desc; + struct libie_sq_onstack_stats ss = { }; + int complq_budget = complq->desc_count; + u32 tx_ntc = xdpq->next_to_clean; + u32 ntc = complq->next_to_clean; + u32 cnt = xdpq->desc_count; + u32 done_frames = 0, i = 0; + struct xdp_frame_bulk bq; + int head = tx_ntc; + bool gen_flag; + + last_rs_desc = &complq->comp_4b[ntc]; + gen_flag = test_bit(__IDPF_Q_GEN_CHK, complq->flags); + + do { + int ctype = idpf_parse_compl_desc(last_rs_desc, complq, + &txq, gen_flag); + if (likely(ctype == IDPF_TXD_COMPLT_RS)) { + head = le16_to_cpu(last_rs_desc->q_head_compl_tag.q_head); + goto fetch_next_desc; + } + + switch (ctype) { + case IDPF_TXD_COMPLT_SW_MARKER: + idpf_tx_handle_sw_marker(xdpq); + break; + case -ENODATA: + goto exit_xdp_irq; + case -EINVAL: + break; + default: + dev_err(&xdpq->vport->adapter->pdev->dev, + "Unsupported completion type for XDP\n"); + break; + } + +fetch_next_desc: + last_rs_desc++; + ntc++; + if (unlikely(ntc == complq->desc_count)) { + ntc = 0; + last_rs_desc = &complq->comp_4b[0]; + gen_flag = !gen_flag; + change_bit(__IDPF_Q_GEN_CHK, complq->flags); + } + prefetch(last_rs_desc); + complq_budget--; + } while (likely(complq_budget)); + +exit_xdp_irq: + complq->next_to_clean = ntc; + done_frames = head >= tx_ntc ? head - tx_ntc : + head + cnt - tx_ntc; + + xdp_frame_bulk_init(&bq); + + for (i = 0; i < done_frames; i++) { + libie_xdp_complete_tx_buf(&xdpq->tx_buf[tx_ntc], xdpq->dev, + true, &bq, &xdpq->xdp_tx_active, + &ss); + + if (unlikely(++tx_ntc == cnt)) + tx_ntc = 0; + } + + xdpq->next_to_clean = tx_ntc; + + xdp_flush_frame_bulk(&bq); + libie_sq_napi_stats_add((struct libie_sq_stats *)&xdpq->q_stats.tx, + &ss); + + return i; +} + +static u32 idpf_xdp_tx_prep(void *_xdpq, struct libie_xdp_tx_queue *sq) +{ + struct idpf_queue *xdpq = _xdpq; + u32 free; + + libie_xdp_sq_lock(&xdpq->xdp_lock); + + free = IDPF_DESC_UNUSED(xdpq); + if (unlikely(free < IDPF_QUEUE_QUARTER(xdpq))) + free += idpf_clean_xdp_irq(xdpq); + + *sq = (struct libie_xdp_tx_queue){ + .dev = xdpq->dev, + .tx_buf = xdpq->tx_buf, + .desc_ring = xdpq->desc_ring, + .xdp_lock = &xdpq->xdp_lock, + .next_to_use = &xdpq->next_to_use, + .desc_count = xdpq->desc_count, + .xdp_tx_active = &xdpq->xdp_tx_active, + }; + + return free; +} + +static void idpf_xdp_tx_xmit(struct libie_xdp_tx_desc desc, + const struct libie_xdp_tx_queue *sq) +{ + union idpf_tx_flex_desc *tx_desc = sq->desc_ring; + struct idpf_tx_splitq_params tx_params = { + .dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2, + .eop_cmd = IDPF_TX_DESC_CMD_EOP, + }; + + tx_desc = &tx_desc[*sq->next_to_use]; + tx_desc->q.buf_addr = cpu_to_le64(desc.addr); + + idpf_tx_splitq_build_desc(tx_desc, &tx_params, + tx_params.eop_cmd | tx_params.offload.td_cmd, + desc.len); +} + +static bool idpf_xdp_tx_flush_bulk(struct libie_xdp_tx_bulk *bq) +{ + return libie_xdp_tx_flush_bulk(bq, idpf_xdp_tx_prep, idpf_xdp_tx_xmit); +} + +void __idpf_xdp_finalize_rx(struct libie_xdp_tx_bulk *bq) +{ + libie_xdp_finalize_rx(bq, idpf_xdp_tx_flush_bulk, + idpf_xdp_tx_finalize); +} + +bool __idpf_xdp_run_prog(struct xdp_buff *xdp, struct libie_xdp_tx_bulk *bq) +{ + return libie_xdp_run_prog(xdp, bq, idpf_xdp_tx_flush_bulk); +} + /** * idpf_xdp_reconfig_queues - reconfigure queues after the XDP setup * @vport: vport to load or unload XDP for diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.h 
b/drivers/net/ethernet/intel/idpf/idpf_xdp.h index 1d102b1fd2ac..1f299c268ca5 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xdp.h +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.h @@ -4,10 +4,9 @@ #ifndef _IDPF_XDP_H_ #define _IDPF_XDP_H_ -struct bpf_prog; +#include + struct idpf_vport; -struct net_device; -struct netdev_bpf; int idpf_xdp_rxq_info_init_all(const struct idpf_vport *vport); void idpf_xdp_rxq_info_deinit_all(const struct idpf_vport *vport); @@ -17,6 +16,21 @@ void idpf_copy_xdp_prog_to_qs(const struct idpf_vport *vport, void idpf_vport_xdpq_get(const struct idpf_vport *vport); void idpf_vport_xdpq_put(const struct idpf_vport *vport); +bool __idpf_xdp_run_prog(struct xdp_buff *xdp, struct libie_xdp_tx_bulk *bq); +void __idpf_xdp_finalize_rx(struct libie_xdp_tx_bulk *bq); + +static inline bool idpf_xdp_run_prog(struct xdp_buff *xdp, + struct libie_xdp_tx_bulk *bq) +{ + return bq->prog ? __idpf_xdp_run_prog(xdp, bq) : true; +} + +static inline void idpf_xdp_finalize_rx(struct libie_xdp_tx_bulk *bq) +{ + if (bq->act_mask >= LIBIE_XDP_TX) + __idpf_xdp_finalize_rx(bq); +} + int idpf_xdp(struct net_device *netdev, struct netdev_bpf *xdp); #endif /* _IDPF_XDP_H_ */ From patchwork Sat Dec 23 02:55:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182893 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1462421dyi; Fri, 22 Dec 2023 19:08:16 -0800 (PST) X-Google-Smtp-Source: AGHT+IGaCfGDZfI+qjByrPgpLHScqsW1MaHZFOqEBbgnQYFxhvFSBodQborswdIMyMdmvqimmx1N X-Received: by 2002:a05:6808:23cf:b0:3bb:9673:4edd with SMTP id bq15-20020a05680823cf00b003bb96734eddmr629967oib.50.1703300896369; Fri, 22 Dec 2023 19:08:16 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300896; cv=none; d=google.com; s=arc-20160816; b=NtztFgIu4s4tdIND+mB+PA7m7LDpGmFZgzPwWc0/0H1ZLBNgNz8EoDPpLG78qTuvdj gq9vXbruXWwVwsyVzhOmMv0+0FAEhFCFDnXxUThJ7pfGv0Yij83gEjm4stK6RY7dg2YP ahWcmwz6XyaZXoBnk9ZmRbe/yAOIeJgOoCc/vopip/Wt/8hrONlqpAeNP10QZCHv8unV BrRcG36iXQJjFInzB+0Ej8usXzuG5TTK+Sk6hnAXCBOOOVnmjTYpc6HjZ3Pfe1l+7X9I smOYJufoKmEzmBk2gDpeq6dgUxMbJlFPH/RpqX20i4pTzwf0sJMRYgR8XGMkk9RJX1iK Egbg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=eOc4ws114/3ovN048yP45G2yo363lXTzug+TpLuPtdM=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=IP3ijtjcmutBTBQtdvBIz9Eupi2aGZHuppA1ZFpWswdb1Kiv2GMUzq/JjrWLMf3LBj f1JAopM9xWgt2fAuYiZoJMfDNpNUalCuBoZoeYPytcImyNFzF/rS6/AcZbEahhMYMXpM rsEdGvl5iUCpmWxJQ105pAwBfC6+V5Q/xI0hHqKA/u8UP47gQWSgamqdRtppLCJZUE4Z y3A8F3kO4TsZW9gSOsBNVhWeMvSRfadECXF/spV2z2lZmeT90cLU3C2FDtEhkaHE6kV2 bMbyDCFGa+EEQCHmo+1nmvjnq3IxJW9C4+HphX4ST9r4KtvyK1k8kO3PG5vXjOs5VR9P x9/w== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=KgpMNTpM; spf=pass (google.com: domain of linux-kernel+bounces-10241-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10241-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org. 
[147.75.48.161]) From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 25/34] idpf: add support for .ndo_xdp_xmit() Date: Sat, 23 Dec 2023 03:55:45 +0100 Message-ID: <20231223025554.2316836-26-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040441104082341 X-GMAIL-MSGID: 1786040441104082341 Use libie XDP infra to implement .ndo_xdp_xmit() in idpf. The Tx callbacks are reused from XDP_TX code. XDP redirect target feature is set/cleared depending on the XDP prog presence, as for now we still don't allocate XDP Tx queues when there's no program. Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_lib.c | 1 + drivers/net/ethernet/intel/idpf/idpf_xdp.c | 34 ++++++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_xdp.h | 2 ++ 3 files changed, 37 insertions(+) diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index a19704c4c421..7c3d45f84e1b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -2451,6 +2451,7 @@ static const struct net_device_ops idpf_netdev_ops_splitq = { .ndo_set_features = idpf_set_features, .ndo_tx_timeout = idpf_tx_timeout, .ndo_bpf = idpf_xdp, + .ndo_xdp_xmit = idpf_xdp_xmit, }; static const struct net_device_ops idpf_netdev_ops_singleq = { diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.c b/drivers/net/ethernet/intel/idpf/idpf_xdp.c index b9952ebda4fb..b4f096186302 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xdp.c +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.c @@ -313,6 +313,35 @@ bool __idpf_xdp_run_prog(struct xdp_buff *xdp, struct libie_xdp_tx_bulk *bq) return libie_xdp_run_prog(xdp, bq, idpf_xdp_tx_flush_bulk); } +/** + * idpf_xdp_xmit - submit packets to xdp ring for transmission + * @dev: netdev + * @n: number of xdp frames to be transmitted + * @frames: xdp frames to be transmitted + * @flags: transmit flags + * + * Returns number of frames successfully sent. Frames that fail are + * free'ed via XDP return API. + * For error cases, a negative errno code is returned and no-frames + * are transmitted (caller must handle freeing frames). 
+ */ +int idpf_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, + u32 flags) +{ + struct idpf_netdev_priv *np = netdev_priv(dev); + struct idpf_vport *vport = np->vport; + + if (unlikely(!netif_carrier_ok(dev) || !vport->link_up)) + return -ENETDOWN; + if (unlikely(!idpf_xdp_is_prog_ena(vport))) + return -ENXIO; + + return libie_xdp_xmit_do_bulk(dev, n, frames, flags, + &vport->txqs[vport->xdp_txq_offset], + vport->num_xdp_txq, idpf_xdp_tx_prep, + idpf_xdp_tx_xmit, idpf_xdp_tx_finalize); +} + /** * idpf_xdp_reconfig_queues - reconfigure queues after the XDP setup * @vport: vport to load or unload XDP for @@ -410,6 +439,11 @@ idpf_xdp_setup_prog(struct idpf_vport *vport, struct bpf_prog *prog, return err; } + if (prog) + xdp_features_set_redirect_target(vport->netdev, false); + else + xdp_features_clear_redirect_target(vport->netdev); + if (vport_is_up) { err = idpf_vport_open(vport, false); if (err) { diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.h b/drivers/net/ethernet/intel/idpf/idpf_xdp.h index 1f299c268ca5..f1444482f69d 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xdp.h +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.h @@ -31,6 +31,8 @@ static inline void idpf_xdp_finalize_rx(struct libie_xdp_tx_bulk *bq) __idpf_xdp_finalize_rx(bq); } +int idpf_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames, + u32 flags); int idpf_xdp(struct net_device *netdev, struct netdev_bpf *xdp); #endif /* _IDPF_XDP_H_ */ From patchwork Sat Dec 23 02:55:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182894 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1462538dyi; Fri, 22 Dec 2023 19:08:39 -0800 (PST) X-Google-Smtp-Source: AGHT+IGgGDCYBvQS9KMaGHb7FUnPjgkvdezXG8h7eUlXjeje9osuxH3c57lqzXN0YwuVnjC3w5QQ X-Received: by 2002:a17:90b:268a:b0:28c:1017:fbda with SMTP id pl10-20020a17090b268a00b0028c1017fbdamr1868988pjb.0.1703300919182; Fri, 22 Dec 2023 19:08:39 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300919; cv=none; d=google.com; s=arc-20160816; b=CXMeXmdKCQDFz1v3e+alStiuV0ZKXH2frMtxnZcfix4gKoH2Hc4HMQkAdaZt+FHvqK q6hCUp+ytx7eH+pWuqjdXhZCqIqUJOZk50cAfUHEuWzIsgWTod/H1XYTK9G9XVSNkjka u8XM7DDj+k5otNeUExqEskudhplRFeakZUF0w8Y6kUcovy5/PuGj9foC9bj7jq0tTV2S ENHlYkUvuysJTF3C4QuWd0Xaczrt9e/uXVDLo8FxPZARsu22ZRFtwe+jy4qcwcUbTX0k Fi2Z1w75y61aLxB/fu6+t7FCyf5XsiIXBACUDemC3I3Rkfr2wKidwie/Z3U2v6oW9UM2 sIeQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=++L6ZZtpr4puZy1tQOJ2aj7LMQFCrbne9P5iC7Fqox0=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=N/2s+bPS9+nEuHyt1vQwr1BNW6DLQG52DjkCKJ/605ZUhLfHbTk5ny4/mReBNsr8wQ 0qPqM3NnuQWHz5G7gniKjJtAwk9HonArF38iZQxcZ9+0JrnKRqb4GFpWtpnDF+ekda+P y4DpkgBv02GG0y7/18vwlPa0l8WqTR70wxwMoSPHaxroEsCCp3b+S8btQkkI+iZdsnmj BuAHdbu1jTISyVmEYKJB38wK2gR98LYtpuLh4/20nAoN3a6smKkWXsWsDMV+N93G746Y sSKpM/wqohFwwI70L19C3djbhbUQo/gguMdiFIV9F0xaO0F15NDeFZeCQECjouqKDAkV 9uGg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=dxWSx7yi; spf=pass (google.com: domain of linux-kernel+bounces-10242-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) 
smtp.mailfrom="linux-kernel+bounces-10242-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from sy.mirrors.kernel.org (sy.mirrors.kernel.org. [147.75.48.161]) by mx.google.com with ESMTPS id b4-20020a633404000000b005cdf95ddf8dsi1719976pga.271.2023.12.22.19.08.38 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Dec 2023 19:08:39 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-10242-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) client-ip=147.75.48.161; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=dxWSx7yi; spf=pass (google.com: domain of linux-kernel+bounces-10242-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.48.161 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10242-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by sy.mirrors.kernel.org (Postfix) with ESMTPS id 4E0BAB244D3 for ; Sat, 23 Dec 2023 03:07:35 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 3F7FA3A287; Sat, 23 Dec 2023 02:59:50 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="dxWSx7yi" X-Original-To: linux-kernel@vger.kernel.org Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6535839FD9; Sat, 23 Dec 2023 02:59:46 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703300386; x=1734836386; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=yxZYHXhn633FddOUdDcraFQxZakRgJVpB7yodhsBcbU=; b=dxWSx7yi1SeULGUBhFjT876DsNvVTOtyQQ0dAZrFGCzV8CIySEA+W1+v wHMA+yazUMdCFC4jxyMK9IdGbwyCzXglg3xQvNr6U5icLLTvlJoCV56Zm ncLvEAegztKXObmFAdup6YWfp/B+CkwxPX+w624azzvf6/Ez09rXEXjmH 2n1vUumI9Y4lledZ8vRdvlCsZGMKnTemvArEt4prEjzBlY50DvbbnVBgH 2J88DeMpAXXZK59Uk2jpMM85TjWuQsY/2x7SR7tFY6bC4VoeJd4GVLQzq EzOb3Iol/4GAX93zQ8uxpcVN5Idxt+9DWwSgJ0eLHSqi228iTld6Hn9EK Q==; X-IronPort-AV: E=McAfee;i="6600,9927,10932"; a="386611043" X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="386611043" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 18:59:46 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537644" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 18:59:42 -0800 From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 26/34] xdp: add generic XSk xdp_buff -> skb conversion Date: Sat, 23 Dec 2023 03:55:46 +0100 Message-ID: <20231223025554.2316836-27-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040464415898944 X-GMAIL-MSGID: 1786040464415898944 Same as with converting &xdp_buff to skb on Rx, the code which allocates a new skb and copies the XSk frame there is identical across the drivers, so make it generic. Note that this time skb_record_rx_queue() is called unconditionally, as it's not intended to call this function with a non-registered RxQ info. Signed-off-by: Alexander Lobakin --- include/net/xdp.h | 11 ++++++++++- net/core/xdp.c | 41 +++++++++++++++++++++++++++++++++++++++++ 2 files changed, 51 insertions(+), 1 deletion(-) diff --git a/include/net/xdp.h b/include/net/xdp.h index 66854b755b58..23ada4bb0e69 100644 --- a/include/net/xdp.h +++ b/include/net/xdp.h @@ -273,7 +273,16 @@ void xdp_warn(const char *msg, const char *func, const int line); struct sk_buff *__xdp_build_skb_from_buff(struct sk_buff *skb, const struct xdp_buff *xdp); -#define xdp_build_skb_from_buff(xdp) __xdp_build_skb_from_buff(NULL, xdp) +struct sk_buff *xdp_build_skb_from_zc(struct napi_struct *napi, + struct xdp_buff *xdp); + +static inline struct sk_buff *xdp_build_skb_from_buff(struct xdp_buff *xdp) +{ + if (xdp->rxq->mem.type == MEM_TYPE_XSK_BUFF_POOL) + return xdp_build_skb_from_zc(NULL, xdp); + + return __xdp_build_skb_from_buff(NULL, xdp); +} struct xdp_frame *xdp_convert_zc_to_xdp_frame(struct xdp_buff *xdp); struct sk_buff *__xdp_build_skb_from_frame(struct xdp_frame *xdpf, diff --git a/net/core/xdp.c b/net/core/xdp.c index 8ef1d735a7eb..2bdb1fb8a9b8 100644 --- a/net/core/xdp.c +++ b/net/core/xdp.c @@ -21,6 +21,8 @@ #include #include +#include "dev.h" + #define REG_STATE_NEW 0x0 #define REG_STATE_REGISTERED 0x1 #define REG_STATE_UNREGISTERED 0x2 @@ -647,6 +649,45 @@ struct sk_buff *__xdp_build_skb_from_buff(struct sk_buff *skb, } EXPORT_SYMBOL_GPL(__xdp_build_skb_from_buff); +struct sk_buff *xdp_build_skb_from_zc(struct napi_struct *napi, + struct xdp_buff *xdp) +{ + const struct xdp_rxq_info *rxq = xdp->rxq; + u32 totalsize, metasize; + struct sk_buff *skb; + + if (!napi) { + napi = napi_by_id(rxq->napi_id); + if (unlikely(!napi)) + return NULL; + } + + totalsize = xdp->data_end - xdp->data_meta; + + skb = __napi_alloc_skb(napi, totalsize, GFP_ATOMIC | __GFP_NOWARN); + if (unlikely(!skb)) + return NULL; + + net_prefetch(xdp->data_meta); + + memcpy(__skb_put(skb, totalsize), xdp->data_meta, + ALIGN(totalsize, sizeof(long))); + + metasize = xdp->data - xdp->data_meta; + if (metasize) { + skb_metadata_set(skb, metasize); + __skb_pull(skb, metasize); + } + + skb_record_rx_queue(skb, rxq->queue_index); + skb->protocol = eth_type_trans(skb, rxq->dev); + + xsk_buff_free(xdp); + + return skb; +} +EXPORT_SYMBOL_GPL(xdp_build_skb_from_zc); + struct sk_buff *__xdp_build_skb_from_frame(struct 
xdp_frame *xdpf, struct sk_buff *skb, struct net_device *dev) From patchwork Sat Dec 23 02:55:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182892 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1462296dyi; Fri, 22 Dec 2023 19:07:54 -0800 (PST) X-Google-Smtp-Source: AGHT+IHr2GWZSdGOQLTVSrMUTp9pTz2Nio7Oj4/+ygftvGxejFm/JqUg/aCzZRpnFcZVv2/bRKWr X-Received: by 2002:a05:620a:111a:b0:781:1e80:251 with SMTP id o26-20020a05620a111a00b007811e800251mr2754451qkk.133.1703300874433; Fri, 22 Dec 2023 19:07:54 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300874; cv=none; d=google.com; s=arc-20160816; b=Ve21jcVd/74tqYgdJWOaVINVRU2aG4XqvePprRzXPCxWVrOfU4/o2iwXlraYsAgC8M sYpgxo+lrl0wBqIs2zd3O0sM9gBT4mmzvtqIpgl1qXtE1MTSVIV/ygj2/AQ171Pu7g0r 2JXmgJ546s/S2yjPiw22qbnFUQ6qmm7UQjH/bjgAB10NMc6JhafvwU4JoFpXV79Ju8g6 MSiuzKemwiGy6IF6IrhakadXLKEBRDOoOxKDZ2tZ4Qy67vegnZaED4Aa170ldLxc2QM/ dtGCVEK51BgWxJK04lak159GWmHk/xNwiax/p1QXjz2Aj8LFLsbXdRmpEDRjv7/nkeTh T/zA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=mNDO5Ep8aKIsrfRKIm45gevPadVXaFfgta1l1/6vk6E=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=JV8P2gmcItHBsBnxfWlD7Vmp65bR3cm0Zo9C7HsA13JgoftkHsJDbKBOTH+QOFiBji /MyGopvMfqclbSxQ3zomFA4mdGVSGXyQcITzESOS8+Gjo10hCPMqG/ZR2d093qAXWu8G Kfe/Z1DD8jMprAzPNE/m6DAtuZbqndPMU5eAs1gvHIFYSd7AsbnDJhLfSo5UJmcsbAlN SBvKYQY5O8ZN4yTyGXLPAHWw66FgfRXCT6Ow3DiMlK2vr5p2irv0G/FV1oaEzNks6YZi n+GG8yzLcSlEfIlkvvK5xkyTS6kJF8WfP/9CV5Dy7Qr8O6YMCZIxGIANUF9Nr2amJz3p JYYA== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=jNuBqnfK; spf=pass (google.com: domain of linux-kernel+bounces-10243-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10243-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
[147.75.199.223]) From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 27/34] idpf: add support for sw interrupt Date: Sat, 23 Dec 2023 03:55:47 +0100 Message-ID: <20231223025554.2316836-28-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040417767933376 X-GMAIL-MSGID: 1786040417767933376 From: Michal Kubiak Sometimes, it may be needed to be able to trigger a HW interrupt without a "real" interrupt from the hardware. Add the corresponding register fields to the register table. Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_dev.c | 3 +++ drivers/net/ethernet/intel/idpf/idpf_txrx.h | 4 ++++ drivers/net/ethernet/intel/idpf/idpf_vf_dev.c | 3 +++ 3 files changed, 10 insertions(+) diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c index 2c6776086130..335bf789d908 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c @@ -100,6 +100,9 @@ static int idpf_intr_reg_init(struct idpf_vport *vport) intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S; intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S; intr->dyn_ctl_wb_on_itr_m = PF_GLINT_DYN_CTL_WB_ON_ITR_M; + intr->dyn_ctl_swint_trig_m = PF_GLINT_DYN_CTL_SWINT_TRIG_M; + intr->dyn_ctl_sw_itridx_ena_m = + PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M; spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing, IDPF_PF_ITR_IDX_SPACING); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index 20f484712ac2..fa21feddd204 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -347,6 +347,8 @@ struct idpf_vec_regs { * @dyn_ctl_itridx_m: Mask for ITR index * @dyn_ctl_intrvl_s: Register bit offset for ITR interval * @dyn_ctl_wb_on_itr_m: Mask for WB on ITR feature + * @dyn_ctl_swint_trig_m: Mask for SW ITR trigger register + * @dyn_ctl_sw_itridx_ena_m: Mask for SW ITR enable index * @rx_itr: RX ITR register * @tx_itr: TX ITR register * @icr_ena: Interrupt cause register offset @@ -360,6 +362,8 @@ struct idpf_intr_reg { u32 dyn_ctl_itridx_m; u32 dyn_ctl_intrvl_s; u32 dyn_ctl_wb_on_itr_m; + u32 dyn_ctl_swint_trig_m; + u32 dyn_ctl_sw_itridx_ena_m; void __iomem *rx_itr; void __iomem *tx_itr; void __iomem *icr_ena; diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c index f5b0a0666636..a78ae0e618ca 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c +++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c @@ -99,6 +99,9 @@ static int idpf_vf_intr_reg_init(struct idpf_vport *vport) intr->dyn_ctl_intena_msk_m = VF_INT_DYN_CTLN_INTENA_MSK_M; intr->dyn_ctl_itridx_s = VF_INT_DYN_CTLN_ITR_INDX_S; intr->dyn_ctl_wb_on_itr_m = VF_INT_DYN_CTLN_WB_ON_ITR_M; + intr->dyn_ctl_itridx_m = VF_INT_DYN_CTLN_ITR_INDX_M; + intr->dyn_ctl_sw_itridx_ena_m = + VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M; spacing = 
IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing, IDPF_VF_ITR_IDX_SPACING);
From patchwork Sat Dec 23 02:55:48 2023 X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182898
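For patch 27/34 above: the diff only adds the dyn_ctl_swint_trig_m and dyn_ctl_sw_itridx_ena_m masks to the register table; the write that actually fires a software-triggered interrupt is not shown there. Below is a minimal, standalone sketch of how such a doorbell value might be composed from those fields. The mask values and the build_sw_intr_doorbell() helper are hypothetical, used only to make the example compile and run.

/* Illustration only -- not part of the patch.  Models how the new dyn_ctl
 * mask fields could be combined into a single GLINT_DYN_CTL-style doorbell
 * value; the bit positions below are placeholders, not real idpf registers.
 */
#include <stdint.h>
#include <stdio.h>

struct fake_intr_reg {
	uint32_t dyn_ctl_intena_m;        /* interrupt enable */
	uint32_t dyn_ctl_swint_trig_m;    /* SW interrupt trigger (new) */
	uint32_t dyn_ctl_sw_itridx_ena_m; /* SW ITR index enable (new) */
	uint32_t dyn_ctl_itridx_s;        /* ITR index shift */
};

/* Compose the value a driver could write to fire a SW-triggered interrupt. */
static uint32_t build_sw_intr_doorbell(const struct fake_intr_reg *r,
				       uint32_t itr_idx)
{
	return r->dyn_ctl_intena_m | r->dyn_ctl_swint_trig_m |
	       r->dyn_ctl_sw_itridx_ena_m | (itr_idx << r->dyn_ctl_itridx_s);
}

int main(void)
{
	const struct fake_intr_reg reg = {
		.dyn_ctl_intena_m        = 0x00000001,
		.dyn_ctl_swint_trig_m    = 0x00000004,
		.dyn_ctl_sw_itridx_ena_m = 0x02000000,
		.dyn_ctl_itridx_s        = 3,
	};

	printf("doorbell = 0x%08x\n", build_sw_intr_doorbell(&reg, 2));
	return 0;
}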
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 28/34] idpf: add relative queue id member to idpf_queue Date: Sat, 23 Dec 2023 03:55:48 +0100 Message-ID: <20231223025554.2316836-29-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040515240866799 X-GMAIL-MSGID: 1786040515240866799 From: Michal Kubiak Relative queue id is one of the required fields of the Tx queue description in VC 2.0 for splitq mode. In the current VC implementation all Tx queues are configured together, so the relative queue id (the index of the Tx queue in the queue group) can be computed on the fly. However, such a solution is not flexible because it is not easy to configure a single Tx queue. So, instead, introduce a new structure member in 'idpf_queue' dedicated to storing the relative queue id. Then send that value over the VC. This patch is the first step in making the existing VC API more flexible to allow configuration of single queues. Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 1 + drivers/net/ethernet/intel/idpf/idpf_txrx.h | 2 ++ drivers/net/ethernet/intel/idpf/idpf_virtchnl.c | 3 ++- 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 99c9b889507b..3dc21731df2f 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -1276,6 +1276,7 @@ static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq) q->tx_min_pkt_len = idpf_get_min_tx_pkt_len(adapter); q->vport = vport; q->txq_grp = tx_qgrp; + q->relative_q_id = j; if (!flow_sch_en) continue; diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index fa21feddd204..f32d854fe850 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -665,6 +665,8 @@ struct idpf_queue { dma_addr_t dma; unsigned int size; + + u32 relative_q_id; } ____cacheline_internodealigned_in_smp; /** diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 59b8bbebead7..49a96af52343 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -1491,7 +1491,8 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) qi[k].tx_compl_queue_id = cpu_to_le16(tx_qgrp->complq->q_id); - qi[k].relative_queue_id = cpu_to_le16(j); + qi[k].relative_queue_id = + cpu_to_le16(q->relative_q_id); if (test_bit(__IDPF_Q_FLOW_SCH_EN, q->flags)) qi[k].sched_mode = From patchwork Sat Dec 23 02:55:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182895 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id 
From: Alexander Lobakin To: "David S.
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 29/34] idpf: add vc functions to manage selected queues Date: Sat, 23 Dec 2023 03:55:49 +0100 Message-ID: <20231223025554.2316836-30-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040465204755094 X-GMAIL-MSGID: 1786040465204755094 From: Michal Kubiak Implement VC functions dedicated to enabling, disabling and configuring randomly selected queues. Also, refactor the existing implementation to make the code more modular. Introduce new generic functions for sending VC messages consisting of chunks, in order to isolate the sending algorithm and its implementation for specific VC messages. Finally, rewrite the function for mapping queues to q_vectors using the new modular approach to avoid copying the code that implements the VC message sending algorithm. Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf.h | 9 + .../net/ethernet/intel/idpf/idpf_virtchnl.c | 1085 +++++++++++------ 2 files changed, 693 insertions(+), 401 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index 91f61060f500..a12c56f9f2ef 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -982,6 +982,15 @@ int idpf_vport_queue_ids_init(struct idpf_vport *vport); int idpf_queue_reg_init(struct idpf_vport *vport); int idpf_send_config_queues_msg(struct idpf_vport *vport); int idpf_send_enable_queues_msg(struct idpf_vport *vport); +int idpf_send_enable_selected_queues_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q); +int idpf_send_disable_selected_queues_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q); +int idpf_send_config_selected_queues_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q); int idpf_send_create_vport_msg(struct idpf_adapter *adapter, struct idpf_vport_max_q *max_q); int idpf_check_supported_desc_ids(struct idpf_vport *vport); diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c index 49a96af52343..24df268ad49e 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c +++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c @@ -748,26 +748,31 @@ static int idpf_wait_for_event(struct idpf_adapter *adapter, } /** - * idpf_wait_for_marker_event - wait for software marker response + * idpf_wait_for_selected_marker_events - wait for software marker response + * for selected tx queues * @vport: virtual port data structure + * @qs: array of tx queues on which the function should wait for marker events + * @num_qs: number of queues contained in the 'qs' array * * Returns 0 success, negative on failure. 
- **/ -static int idpf_wait_for_marker_event(struct idpf_vport *vport) + */ +static int idpf_wait_for_selected_marker_events(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_qs) { int event; int i; - for (i = 0; i < vport->num_txq; i++) - set_bit(__IDPF_Q_SW_MARKER, vport->txqs[i]->flags); + for (i = 0; i < num_qs; i++) + set_bit(__IDPF_Q_SW_MARKER, qs[i]->flags); event = wait_event_timeout(vport->sw_marker_wq, test_and_clear_bit(IDPF_VPORT_SW_MARKER, vport->flags), msecs_to_jiffies(500)); - for (i = 0; i < vport->num_txq; i++) - clear_bit(__IDPF_Q_POLL_MODE, vport->txqs[i]->flags); + for (i = 0; i < num_qs; i++) + clear_bit(__IDPF_Q_POLL_MODE, qs[i]->flags); if (event) return 0; @@ -777,6 +782,19 @@ static int idpf_wait_for_marker_event(struct idpf_vport *vport) return -ETIMEDOUT; } +/** + * idpf_wait_for_marker_event - wait for software marker response + * @vport: virtual port data structure + * + * Returns 0 success, negative on failure. + **/ +static int idpf_wait_for_marker_event(struct idpf_vport *vport) +{ + return idpf_wait_for_selected_marker_events(vport, + vport->txqs, + vport->num_txq); +} + /** * idpf_send_ver_msg - send virtchnl version message * @adapter: Driver specific private structure @@ -1450,6 +1468,195 @@ int idpf_send_disable_vport_msg(struct idpf_vport *vport) return err; } +struct idpf_chunked_msg_params { + u32 op; + enum idpf_vport_vc_state state; + enum idpf_vport_vc_state err_check; + int timeout; + int config_sz; + int chunk_sz; + int (*prepare_msg)(struct idpf_vport *, u8 *, u8 *, int); + u32 num_chunks; + u8 *chunks; +}; + +/** + * idpf_send_chunked_msg - Send a VC message consisting of chunks. + * @vport: virtual port data structure + * @xn_params: parameters for this particular transaction including + * @prep_msg: pointer to the helper function for preparing the VC message + * @config_data_size: size of the message config data + * @num_chunks: number of chunks in the message + * @chunk_size: size of single message chunk + * @chunks: pointer to a buffer containig chunks of the message + * + * Helper function for preparing the message describing queues to be enabled + * or disabled. + * Returns the total size of the prepared message. 
+ */ +static int idpf_send_chunked_msg(struct idpf_vport *vport, + struct idpf_chunked_msg_params *params) +{ + u32 num_msgs, max_chunks, left_chunks, max_buf_sz; + u32 num_chunks = params->num_chunks; + int err = 0, i; + u8 *msg_buf; + + max_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(params->config_sz, + params->chunk_sz), + params->num_chunks); + max_buf_sz = params->config_sz + max_chunks * params->chunk_sz; + num_msgs = DIV_ROUND_UP(params->num_chunks, max_chunks); + + msg_buf = kzalloc(max_buf_sz, GFP_KERNEL); + if (!msg_buf) { + err = -ENOMEM; + goto error; + } + + mutex_lock(&vport->vc_buf_lock); + + left_chunks = num_chunks; + for (i = 0; i < num_msgs; i++) { + u8 *chunks = params->chunks; + int buf_sz, msg_size; + u8 *first_chunk; + + num_chunks = min(max_chunks, left_chunks); + first_chunk = chunks + (i * params->chunk_sz * max_chunks); + msg_size = params->chunk_sz * num_chunks + params->config_sz; + + buf_sz = params->prepare_msg(vport, msg_buf, first_chunk, + num_chunks); + if (buf_sz != msg_size) { + err = -EINVAL; + goto msg_error; + } + + err = idpf_send_mb_msg(vport->adapter, params->op, + buf_sz, msg_buf); + if (err) + goto msg_error; + + err = __idpf_wait_for_event(vport->adapter, vport, + params->state, params->err_check, + params->timeout); + if (err) + goto msg_error; + + left_chunks -= num_chunks; + } + + if (left_chunks != 0) + err = -EINVAL; +msg_error: + mutex_unlock(&vport->vc_buf_lock); + kfree(msg_buf); +error: + return err; +} + +/** + * idpf_fill_txcomplq_config_chunk - Fill the chunk describing the tx queue. + * @vport: virtual port data structure + * @q: tx queue to be inserted into VC chunk + * @qi: pointer to the buffer containing the VC chunk + */ +static void idpf_fill_txcomplq_config_chunk(struct idpf_vport *vport, + struct idpf_queue *q, + struct virtchnl2_txq_info *qi) +{ + qi->queue_id = cpu_to_le32(q->q_id); + qi->model = cpu_to_le16(vport->txq_model); + qi->type = cpu_to_le32(q->q_type); + qi->ring_len = cpu_to_le16(q->desc_count); + qi->dma_ring_addr = cpu_to_le64(q->dma); + + if (!idpf_is_queue_model_split(vport->txq_model)) + return; + + if (test_bit(__IDPF_Q_FLOW_SCH_EN, q->flags)) + qi->sched_mode = cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_FLOW); + else + qi->sched_mode = cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE); + + if (q->q_type != VIRTCHNL2_QUEUE_TYPE_TX) + return; + + qi->tx_compl_queue_id = cpu_to_le16(q->txq_grp->complq->q_id); + qi->relative_queue_id = cpu_to_le16(q->relative_q_id); +} + +/** + * idpf_prepare_cfg_txqs_msg - Prepare message to configure selected tx queues. + * @vport: virtual port data structure + * @msg_buf: buffer containing the message + * @first_chunk: pointer to the first chunk describing the tx queue + * @num_chunks: number of chunks in the message + * + * Helper function for preparing the message describing configuration of + * tx queues. + * Returns the total size of the prepared message. + */ +static int idpf_prepare_cfg_txqs_msg(struct idpf_vport *vport, + u8 *msg_buf, u8 *first_chunk, + int num_chunks) +{ + int chunk_size = sizeof(struct virtchnl2_txq_info); + struct virtchnl2_config_tx_queues *ctq; + + ctq = (struct virtchnl2_config_tx_queues *)msg_buf; + ctq->vport_id = cpu_to_le32(vport->vport_id); + ctq->num_qinfo = cpu_to_le16(num_chunks); + + memcpy(ctq->qinfo, first_chunk, num_chunks * chunk_size); + + return (chunk_size * num_chunks + sizeof(*ctq)); +} + +/** + * idpf_send_config_selected_tx_queues_msg - Send virtchnl config tx queues + * message for selected tx queues only. 
+ * @vport: virtual port data structure + * @qs: array of tx queues to be configured + * @num_q: number of tx queues contained in 'qs' array + * + * Send config queues virtchnl message for queues contained in 'qs' array. + * The 'qs' array can contain tx queues (or completion queues) only. + * Returns 0 on success, negative on failure. + */ +static int idpf_send_config_selected_tx_queues_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q) +{ + struct idpf_chunked_msg_params params = { }; + struct virtchnl2_txq_info *qi; + int err, i; + + qi = (struct virtchnl2_txq_info *) + kcalloc(num_q, sizeof(struct virtchnl2_txq_info), GFP_KERNEL); + if (!qi) + return -ENOMEM; + + params.op = VIRTCHNL2_OP_CONFIG_TX_QUEUES; + params.state = IDPF_VC_CONFIG_TXQ; + params.err_check = IDPF_VC_CONFIG_TXQ_ERR; + params.timeout = IDPF_WAIT_FOR_EVENT_TIMEO; + params.config_sz = sizeof(struct virtchnl2_config_tx_queues); + params.chunk_sz = sizeof(struct virtchnl2_txq_info); + params.prepare_msg = &idpf_prepare_cfg_txqs_msg; + params.num_chunks = num_q; + params.chunks = (u8 *)qi; + + for (i = 0; i < num_q; i++) + idpf_fill_txcomplq_config_chunk(vport, qs[i], &qi[i]); + + err = idpf_send_chunked_msg(vport, ¶ms); + + kfree(qi); + return err; +} + /** * idpf_send_config_tx_queues_msg - Send virtchnl config tx queues message * @vport: virtual port data structure @@ -1459,69 +1666,24 @@ int idpf_send_disable_vport_msg(struct idpf_vport *vport) */ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) { - struct virtchnl2_config_tx_queues *ctq; - u32 config_sz, chunk_sz, buf_sz; - int totqs, num_msgs, num_chunks; - struct virtchnl2_txq_info *qi; - int err = 0, i, k = 0; + int totqs, err = 0, i, k = 0; + struct idpf_queue **qs; totqs = vport->num_txq + vport->num_complq; - qi = kcalloc(totqs, sizeof(struct virtchnl2_txq_info), GFP_KERNEL); - if (!qi) + qs = (struct idpf_queue **)kzalloc(totqs * sizeof(*qs), GFP_KERNEL); + if (!qs) return -ENOMEM; /* Populate the queue info buffer with all queue context info */ for (i = 0; i < vport->num_txq_grp; i++) { struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; - int j, sched_mode; - - for (j = 0; j < tx_qgrp->num_txq; j++, k++) { - qi[k].queue_id = - cpu_to_le32(tx_qgrp->txqs[j]->q_id); - qi[k].model = - cpu_to_le16(vport->txq_model); - qi[k].type = - cpu_to_le32(tx_qgrp->txqs[j]->q_type); - qi[k].ring_len = - cpu_to_le16(tx_qgrp->txqs[j]->desc_count); - qi[k].dma_ring_addr = - cpu_to_le64(tx_qgrp->txqs[j]->dma); - if (idpf_is_queue_model_split(vport->txq_model)) { - struct idpf_queue *q = tx_qgrp->txqs[j]; - - qi[k].tx_compl_queue_id = - cpu_to_le16(tx_qgrp->complq->q_id); - qi[k].relative_queue_id = - cpu_to_le16(q->relative_q_id); - - if (test_bit(__IDPF_Q_FLOW_SCH_EN, q->flags)) - qi[k].sched_mode = - cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_FLOW); - else - qi[k].sched_mode = - cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE); - } else { - qi[k].sched_mode = - cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE); - } - } - - if (!idpf_is_queue_model_split(vport->txq_model)) - continue; - - qi[k].queue_id = cpu_to_le32(tx_qgrp->complq->q_id); - qi[k].model = cpu_to_le16(vport->txq_model); - qi[k].type = cpu_to_le32(tx_qgrp->complq->q_type); - qi[k].ring_len = cpu_to_le16(tx_qgrp->complq->desc_count); - qi[k].dma_ring_addr = cpu_to_le64(tx_qgrp->complq->dma); + int j; - if (test_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags)) - sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW; - else - sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE; - qi[k].sched_mode = 
cpu_to_le16(sched_mode); + for (j = 0; j < tx_qgrp->num_txq; j++, k++) + qs[k] = tx_qgrp->txqs[j]; - k++; + if (idpf_is_queue_model_split(vport->txq_model)) + qs[k++] = tx_qgrp->complq; } /* Make sure accounting agrees */ @@ -1530,56 +1692,142 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) goto error; } - /* Chunk up the queue contexts into multiple messages to avoid - * sending a control queue message buffer that is too large + err = idpf_send_config_selected_tx_queues_msg(vport, qs, totqs); +error: + kfree(qs); + return err; +} + +/** + * idpf_fill_bufq_config_chunk - Fill the chunk describing the rx or buf queue. + * @vport: virtual port data structure + * @rxbufq: rx or buffer queue to be inserted into VC chunk + * @qi: pointer to the buffer containing the VC chunk + */ +static void idpf_fill_rxbufq_config_chunk(struct idpf_vport *vport, + struct idpf_queue *q, + struct virtchnl2_rxq_info *qi) +{ + const struct idpf_bufq_set *sets; + + qi->queue_id = cpu_to_le32(q->q_id); + qi->model = cpu_to_le16(vport->rxq_model); + qi->type = cpu_to_le32(q->q_type); + qi->ring_len = cpu_to_le16(q->desc_count); + qi->dma_ring_addr = cpu_to_le64(q->dma); + qi->data_buffer_size = cpu_to_le32(q->rx_buf_size); + qi->rx_buffer_low_watermark = + cpu_to_le16(q->rx_buffer_low_watermark); + if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) + qi->qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC); + + if (q->q_type == VIRTCHNL2_QUEUE_TYPE_RX_BUFFER) { + qi->desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); + qi->buffer_notif_stride = q->rx_buf_stride; + } + + if (q->q_type != VIRTCHNL2_QUEUE_TYPE_RX) + return; + + qi->max_pkt_size = cpu_to_le32(q->rx_max_pkt_size); + qi->qflags |= cpu_to_le16(VIRTCHNL2_RX_DESC_SIZE_32BYTE); + qi->desc_ids = cpu_to_le64(q->rxdids); + + if (!idpf_is_queue_model_split(vport->rxq_model)) + return; + + sets = q->rxq_grp->splitq.bufq_sets; + + /* In splitq mode, RXQ buffer size should be set to that of the first + * buffer queue associated with this RXQ. */ - config_sz = sizeof(struct virtchnl2_config_tx_queues); - chunk_sz = sizeof(struct virtchnl2_txq_info); + q->rx_buf_size = sets[0].bufq.rx_buf_size; + qi->data_buffer_size = cpu_to_le32(q->rx_buf_size); + + qi->rx_bufq1_id = cpu_to_le16(sets[0].bufq.q_id); + if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { + qi->bufq2_ena = IDPF_BUFQ2_ENA; + qi->rx_bufq2_id = cpu_to_le16(sets[1].bufq.q_id); + } - num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - totqs); - num_msgs = DIV_ROUND_UP(totqs, num_chunks); + q->rx_hbuf_size = sets[0].bufq.rx_hbuf_size; - buf_sz = struct_size(ctq, qinfo, num_chunks); - ctq = kzalloc(buf_sz, GFP_KERNEL); - if (!ctq) { - err = -ENOMEM; - goto error; + if (q->rx_hsplit_en) { + qi->qflags |= cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT); + qi->hdr_buffer_size = cpu_to_le16(q->rx_hbuf_size); } +} - mutex_lock(&vport->vc_buf_lock); +/** + * idpf_prepare_cfg_rxqs_msg - Prepare message to configure selected rx queues. + * @vport: virtual port data structure + * @msg_buf: buffer containing the message + * @first_chunk: pointer to the first chunk describing the rx queue + * @num_chunks: number of chunks in the message + * + * Helper function for preparing the message describing configuration of + * rx queues. + * Returns the total size of the prepared message. 
+ */ +static int idpf_prepare_cfg_rxqs_msg(struct idpf_vport *vport, + u8 *msg_buf, u8 *first_chunk, + int num_chunks) +{ + int chunk_size = sizeof(struct virtchnl2_rxq_info); + struct virtchnl2_config_rx_queues *crq; - for (i = 0, k = 0; i < num_msgs; i++) { - memset(ctq, 0, buf_sz); - ctq->vport_id = cpu_to_le32(vport->vport_id); - ctq->num_qinfo = cpu_to_le16(num_chunks); - memcpy(ctq->qinfo, &qi[k], chunk_sz * num_chunks); - - err = idpf_send_mb_msg(vport->adapter, - VIRTCHNL2_OP_CONFIG_TX_QUEUES, - buf_sz, (u8 *)ctq); - if (err) - goto mbx_error; + crq = (struct virtchnl2_config_rx_queues *)msg_buf; - err = idpf_wait_for_event(vport->adapter, vport, - IDPF_VC_CONFIG_TXQ, - IDPF_VC_CONFIG_TXQ_ERR); - if (err) - goto mbx_error; + crq->vport_id = cpu_to_le32(vport->vport_id); + crq->num_qinfo = cpu_to_le16(num_chunks); + + memcpy(crq->qinfo, first_chunk, num_chunks * chunk_size); + + return (chunk_size * num_chunks + sizeof(*crq)); +} - k += num_chunks; - totqs -= num_chunks; - num_chunks = min(num_chunks, totqs); - /* Recalculate buffer size */ - buf_sz = struct_size(ctq, qinfo, num_chunks); +/** + * idpf_send_config_selected_rx_queues_msg - Send virtchnl config rx queues + * message for selected rx queues only. + * @vport: virtual port data structure + * @qs: array of rx queues to be configured + * @num_q: number of rx queues contained in 'qs' array + * + * Send config queues virtchnl message for queues contained in 'qs' array. + * The 'qs' array can contain rx queues (or buffer queues) only. + * Returns 0 on success, negative on failure. + */ +static int idpf_send_config_selected_rx_queues_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q) +{ + struct idpf_chunked_msg_params params = { }; + struct virtchnl2_rxq_info *qi; + int err, i; + + qi = (struct virtchnl2_rxq_info *) + kcalloc(num_q, sizeof(struct virtchnl2_rxq_info), GFP_KERNEL); + if (!qi) { + err = -ENOMEM; + goto alloc_error; } -mbx_error: - mutex_unlock(&vport->vc_buf_lock); - kfree(ctq); -error: - kfree(qi); + params.op = VIRTCHNL2_OP_CONFIG_RX_QUEUES; + params.state = IDPF_VC_CONFIG_RXQ; + params.err_check = IDPF_VC_CONFIG_RXQ_ERR; + params.timeout = IDPF_WAIT_FOR_EVENT_TIMEO; + params.config_sz = sizeof(struct virtchnl2_config_rx_queues); + params.chunk_sz = sizeof(struct virtchnl2_rxq_info); + params.prepare_msg = &idpf_prepare_cfg_rxqs_msg; + params.num_chunks = num_q; + params.chunks = (u8 *)qi; + for (i = 0; i < num_q; i++) + idpf_fill_rxbufq_config_chunk(vport, qs[i], &qi[i]); + + err = idpf_send_chunked_msg(vport, ¶ms); + kfree(qi); +alloc_error: return err; } @@ -1592,43 +1840,25 @@ static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport) */ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) { - struct virtchnl2_config_rx_queues *crq; - u32 config_sz, chunk_sz, buf_sz; - int totqs, num_msgs, num_chunks; - struct virtchnl2_rxq_info *qi; - int err = 0, i, k = 0; + int totqs, err = 0, i, k = 0; + struct idpf_queue **qs; totqs = vport->num_rxq + vport->num_bufq; - qi = kcalloc(totqs, sizeof(struct virtchnl2_rxq_info), GFP_KERNEL); - if (!qi) + qs = (struct idpf_queue **)kzalloc(totqs * sizeof(*qs), GFP_KERNEL); + if (!qs) return -ENOMEM; /* Populate the queue info buffer with all queue context info */ for (i = 0; i < vport->num_rxq_grp; i++) { struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - u16 num_rxq; + int num_rxq; int j; if (!idpf_is_queue_model_split(vport->rxq_model)) goto setup_rxqs; - for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) { - struct 
idpf_queue *bufq = - &rx_qgrp->splitq.bufq_sets[j].bufq; - - qi[k].queue_id = cpu_to_le32(bufq->q_id); - qi[k].model = cpu_to_le16(vport->rxq_model); - qi[k].type = cpu_to_le32(bufq->q_type); - qi[k].desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M); - qi[k].ring_len = cpu_to_le16(bufq->desc_count); - qi[k].dma_ring_addr = cpu_to_le64(bufq->dma); - qi[k].data_buffer_size = cpu_to_le32(bufq->rx_buf_size); - qi[k].buffer_notif_stride = bufq->rx_buf_stride; - qi[k].rx_buffer_low_watermark = - cpu_to_le16(bufq->rx_buffer_low_watermark); - if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) - qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC); - } + for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) + qs[k] = &rx_qgrp->splitq.bufq_sets[j].bufq; setup_rxqs: if (idpf_is_queue_model_split(vport->rxq_model)) @@ -1636,56 +1866,11 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) else num_rxq = rx_qgrp->singleq.num_rxq; - for (j = 0; j < num_rxq; j++, k++) { - const struct idpf_bufq_set *sets; - struct idpf_queue *rxq; - - if (!idpf_is_queue_model_split(vport->rxq_model)) { - rxq = rx_qgrp->singleq.rxqs[j]; - goto common_qi_fields; - } - - rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq; - sets = rxq->rxq_grp->splitq.bufq_sets; - - /* In splitq mode, RXQ buffer size should be - * set to that of the first buffer queue - * associated with this RXQ. - */ - rxq->rx_buf_size = sets[0].bufq.rx_buf_size; - - qi[k].rx_bufq1_id = cpu_to_le16(sets[0].bufq.q_id); - if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) { - qi[k].bufq2_ena = IDPF_BUFQ2_ENA; - qi[k].rx_bufq2_id = - cpu_to_le16(sets[1].bufq.q_id); - } - qi[k].rx_buffer_low_watermark = - cpu_to_le16(rxq->rx_buffer_low_watermark); - if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW)) - qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC); - - rxq->rx_hbuf_size = sets[0].bufq.rx_hbuf_size; - - if (rxq->rx_hsplit_en) { - qi[k].qflags |= - cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT); - qi[k].hdr_buffer_size = - cpu_to_le16(rxq->rx_hbuf_size); - } - -common_qi_fields: - qi[k].queue_id = cpu_to_le32(rxq->q_id); - qi[k].model = cpu_to_le16(vport->rxq_model); - qi[k].type = cpu_to_le32(rxq->q_type); - qi[k].ring_len = cpu_to_le16(rxq->desc_count); - qi[k].dma_ring_addr = cpu_to_le64(rxq->dma); - qi[k].max_pkt_size = cpu_to_le32(rxq->rx_max_pkt_size); - qi[k].data_buffer_size = cpu_to_le32(rxq->rx_buf_size); - qi[k].qflags |= - cpu_to_le16(VIRTCHNL2_RX_DESC_SIZE_32BYTE); - qi[k].desc_ids = cpu_to_le64(rxq->rxdids); - } + for (j = 0; j < num_rxq; j++, k++) + if (!idpf_is_queue_model_split(vport->rxq_model)) + qs[k] = rx_qgrp->singleq.rxqs[j]; + else + qs[k] = &rx_qgrp->splitq.rxq_sets[j]->rxq; } /* Make sure accounting agrees */ @@ -1694,56 +1879,94 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) goto error; } - /* Chunk up the queue contexts into multiple messages to avoid - * sending a control queue message buffer that is too large - */ - config_sz = sizeof(struct virtchnl2_config_rx_queues); - chunk_sz = sizeof(struct virtchnl2_rxq_info); + err = idpf_send_config_selected_rx_queues_msg(vport, qs, totqs); - num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - totqs); - num_msgs = DIV_ROUND_UP(totqs, num_chunks); +error: + kfree(qs); + return err; +} - buf_sz = struct_size(crq, qinfo, num_chunks); - crq = kzalloc(buf_sz, GFP_KERNEL); - if (!crq) { - err = -ENOMEM; - goto error; - } +/** + * idpf_prepare_ena_dis_qs_msg - Prepare message to enable/disable selected + * queues. 
+ * @vport: virtual port data structure + * @msg_buf: buffer containing the message + * @first_chunk: pointer to the first chunk describing the queue + * @num_chunks: number of chunks in the message + * + * Helper function for preparing the message describing queues to be enabled + * or disabled. + * Returns the total size of the prepared message. + */ +static int idpf_prepare_ena_dis_qs_msg(struct idpf_vport *vport, + u8 *msg_buf, u8 *first_chunk, + int num_chunks) +{ + int chunk_size = sizeof(struct virtchnl2_queue_chunk); + struct virtchnl2_del_ena_dis_queues *eq; - mutex_lock(&vport->vc_buf_lock); + eq = (struct virtchnl2_del_ena_dis_queues *)msg_buf; - for (i = 0, k = 0; i < num_msgs; i++) { - memset(crq, 0, buf_sz); - crq->vport_id = cpu_to_le32(vport->vport_id); - crq->num_qinfo = cpu_to_le16(num_chunks); - memcpy(crq->qinfo, &qi[k], chunk_sz * num_chunks); - - err = idpf_send_mb_msg(vport->adapter, - VIRTCHNL2_OP_CONFIG_RX_QUEUES, - buf_sz, (u8 *)crq); - if (err) - goto mbx_error; + eq->vport_id = cpu_to_le32(vport->vport_id); + eq->chunks.num_chunks = cpu_to_le16(num_chunks); - err = idpf_wait_for_event(vport->adapter, vport, - IDPF_VC_CONFIG_RXQ, - IDPF_VC_CONFIG_RXQ_ERR); - if (err) - goto mbx_error; + memcpy(eq->chunks.chunks, first_chunk, num_chunks * chunk_size); + + return (chunk_size * num_chunks + sizeof(*eq)); +} + +/** + * idpf_send_ena_dis_selected_qs_msg - Send virtchnl enable or disable + * queues message for selected queues only + * @vport: virtual port data structure + * @qs: array containing pointers to queues to be enabled/disabled + * @num_q: number of queues contained in 'qs' array. + * @ena: if true enable, false disable + * + * Send enable or disable queues virtchnl message for queues contained + * in 'qs' array. + * The 'qs' array can contain pointers to both rx and tx queues. + * Returns 0 on success, negative on failure. + */ +static int idpf_send_ena_dis_selected_qs_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q, + u32 vc_op) +{ + bool en = vc_op == VIRTCHNL2_OP_ENABLE_QUEUES; + struct idpf_chunked_msg_params params = { }; + struct virtchnl2_queue_chunk *qc; + int err, i; - k += num_chunks; - totqs -= num_chunks; - num_chunks = min(num_chunks, totqs); - /* Recalculate buffer size */ - buf_sz = struct_size(crq, qinfo, num_chunks); + qc = (struct virtchnl2_queue_chunk *)kzalloc(sizeof(*qc) * num_q, + GFP_KERNEL); + if (!qc) { + err = -ENOMEM; + goto alloc_error; } -mbx_error: - mutex_unlock(&vport->vc_buf_lock); - kfree(crq); -error: - kfree(qi); + params.op = vc_op; + params.state = en ? IDPF_VC_ENA_QUEUES : IDPF_VC_DIS_QUEUES; + params.err_check = en ? IDPF_VC_ENA_QUEUES_ERR : IDPF_VC_DIS_QUEUES_ERR; + params.timeout = en ? 
IDPF_WAIT_FOR_EVENT_TIMEO : + IDPF_WAIT_FOR_EVENT_TIMEO_MIN; + params.config_sz = sizeof(struct virtchnl2_del_ena_dis_queues); + params.chunk_sz = sizeof(struct virtchnl2_queue_chunk); + params.prepare_msg = &idpf_prepare_ena_dis_qs_msg; + params.num_chunks = num_q; + params.chunks = (u8 *)qc; + + for (i = 0; i < num_q; i++) { + struct idpf_queue *q = qs[i]; + + qc[i].start_queue_id = cpu_to_le32(q->q_id); + qc[i].type = cpu_to_le32(q->q_type); + qc[i].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); + } + err = idpf_send_chunked_msg(vport, ¶ms); + kfree(qc); +alloc_error: return err; } @@ -1758,39 +1981,23 @@ static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport) */ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, u32 vc_op) { - u32 num_msgs, num_chunks, num_txq, num_rxq, num_q; - struct idpf_adapter *adapter = vport->adapter; - struct virtchnl2_del_ena_dis_queues *eq; - struct virtchnl2_queue_chunks *qcs; - struct virtchnl2_queue_chunk *qc; - u32 config_sz, chunk_sz, buf_sz; - int i, j, k = 0, err = 0; - - /* validate virtchnl op */ - switch (vc_op) { - case VIRTCHNL2_OP_ENABLE_QUEUES: - case VIRTCHNL2_OP_DISABLE_QUEUES: - break; - default: - return -EINVAL; - } + int num_txq, num_rxq, num_q, err = 0; + struct idpf_queue **qs; + int i, j, k = 0; num_txq = vport->num_txq + vport->num_complq; num_rxq = vport->num_rxq + vport->num_bufq; num_q = num_txq + num_rxq; - buf_sz = sizeof(struct virtchnl2_queue_chunk) * num_q; - qc = kzalloc(buf_sz, GFP_KERNEL); - if (!qc) + + qs = (struct idpf_queue **)kzalloc(num_q * sizeof(*qs), GFP_KERNEL); + if (!qs) return -ENOMEM; for (i = 0; i < vport->num_txq_grp; i++) { struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; - for (j = 0; j < tx_qgrp->num_txq; j++, k++) { - qc[k].type = cpu_to_le32(tx_qgrp->txqs[j]->q_type); - qc[k].start_queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id); - qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } + for (j = 0; j < tx_qgrp->num_txq; j++, k++) + qs[k] = tx_qgrp->txqs[j]; } if (vport->num_txq != k) { err = -EINVAL; @@ -1800,13 +2007,9 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, u32 vc_op) if (!idpf_is_queue_model_split(vport->txq_model)) goto setup_rx; - for (i = 0; i < vport->num_txq_grp; i++, k++) { - struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; + for (i = 0; i < vport->num_txq_grp; i++, k++) + qs[k] = vport->txq_grps[i].complq; - qc[k].type = cpu_to_le32(tx_qgrp->complq->q_type); - qc[k].start_queue_id = cpu_to_le32(tx_qgrp->complq->q_id); - qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } if (vport->num_complq != (k - vport->num_txq)) { err = -EINVAL; goto error; @@ -1822,18 +2025,10 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, u32 vc_op) num_rxq = rx_qgrp->singleq.num_rxq; for (j = 0; j < num_rxq; j++, k++) { - if (idpf_is_queue_model_split(vport->rxq_model)) { - qc[k].start_queue_id = - cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_id); - qc[k].type = - cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_type); - } else { - qc[k].start_queue_id = - cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_id); - qc[k].type = - cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_type); - } - qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); + if (idpf_is_queue_model_split(vport->rxq_model)) + qs[k] = &rx_qgrp->splitq.rxq_sets[j]->rxq; + else + qs[k] = rx_qgrp->singleq.rxqs[j]; } } if (vport->num_rxq != k - (vport->num_txq + vport->num_complq)) { @@ -1847,14 +2042,8 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, u32 vc_op) 
for (i = 0; i < vport->num_rxq_grp; i++) { struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) { - struct idpf_queue *q; - - q = &rx_qgrp->splitq.bufq_sets[j].bufq; - qc[k].type = cpu_to_le32(q->q_type); - qc[k].start_queue_id = cpu_to_le32(q->q_id); - qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK); - } + for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) + qs[k] = &rx_qgrp->splitq.bufq_sets[j].bufq; } if (vport->num_bufq != k - (vport->num_txq + vport->num_complq + @@ -1864,58 +2053,117 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, u32 vc_op) } send_msg: - /* Chunk up the queue info into multiple messages */ - config_sz = sizeof(struct virtchnl2_del_ena_dis_queues); - chunk_sz = sizeof(struct virtchnl2_queue_chunk); + err = idpf_send_ena_dis_selected_qs_msg(vport, qs, num_q, vc_op); +error: + kfree(qs); + return err; +} - num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - num_q); - num_msgs = DIV_ROUND_UP(num_q, num_chunks); +/** + * idpf_prep_map_unmap_sel_queue_vector_msg - Prepare message to map or unmap + * selected queues to the interrupt vector. + * @vport: virtual port data structure + * @msg_buf: buffer containing the message + * @first_chunk: pointer to the first chunk describing the vector mapping + * @num_chunks: number of chunks in the message + * + * Helper function for preparing the message describing mapping queues to + * q_vectors. + * Returns the total size of the prepared message. + */ +static int +idpf_prep_map_unmap_sel_queue_vector_msg(struct idpf_vport *vport, + u8 *msg_buf, u8 *first_chunk, + int num_chunks) +{ + int chunk_size = sizeof(struct virtchnl2_queue_vector); + struct virtchnl2_queue_vector_maps *vqvm; - buf_sz = struct_size(eq, chunks.chunks, num_chunks); - eq = kzalloc(buf_sz, GFP_KERNEL); - if (!eq) { - err = -ENOMEM; - goto error; - } + vqvm = (struct virtchnl2_queue_vector_maps *)msg_buf; - mutex_lock(&vport->vc_buf_lock); + vqvm->vport_id = cpu_to_le32(vport->vport_id); + vqvm->num_qv_maps = cpu_to_le16(num_chunks); - for (i = 0, k = 0; i < num_msgs; i++) { - memset(eq, 0, buf_sz); - eq->vport_id = cpu_to_le32(vport->vport_id); - eq->chunks.num_chunks = cpu_to_le16(num_chunks); - qcs = &eq->chunks; - memcpy(qcs->chunks, &qc[k], chunk_sz * num_chunks); + memcpy(vqvm->qv_maps, first_chunk, num_chunks * chunk_size); - err = idpf_send_mb_msg(adapter, vc_op, buf_sz, (u8 *)eq); - if (err) - goto mbx_error; + return chunk_size * num_chunks + sizeof(*vqvm); +} - if (vc_op == VIRTCHNL2_OP_ENABLE_QUEUES) - err = idpf_wait_for_event(adapter, vport, - IDPF_VC_ENA_QUEUES, - IDPF_VC_ENA_QUEUES_ERR); - else - err = idpf_min_wait_for_event(adapter, vport, - IDPF_VC_DIS_QUEUES, - IDPF_VC_DIS_QUEUES_ERR); - if (err) - goto mbx_error; +/** + * idpf_send_map_unmap_sel_queue_vector_msg - Send virtchnl map or unmap + * selected queue vector message + * @vport: virtual port data structure + * @qs: array of queues to be mapped/unmapped + * @num_q: number of queues in 'qs' array + * @map: true for map and false for unmap + * + * Send map or unmap queue vector virtchnl message for selected queues only. + * Returns 0 on success, negative on failure. 
+ */ +static int idpf_send_map_unmap_sel_queue_vector_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q, + bool map) +{ + struct idpf_chunked_msg_params params = { }; + struct virtchnl2_queue_vector *vqv; + int err, i; - k += num_chunks; - num_q -= num_chunks; - num_chunks = min(num_chunks, num_q); - /* Recalculate buffer size */ - buf_sz = struct_size(eq, chunks.chunks, num_chunks); + vqv = (struct virtchnl2_queue_vector *)kzalloc(sizeof(*vqv) * num_q, + GFP_KERNEL); + if (!vqv) { + err = -ENOMEM; + goto alloc_error; } -mbx_error: - mutex_unlock(&vport->vc_buf_lock); - kfree(eq); -error: - kfree(qc); + params.op = map ? VIRTCHNL2_OP_MAP_QUEUE_VECTOR : + VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR; + params.state = map ? IDPF_VC_MAP_IRQ : IDPF_VC_UNMAP_IRQ; + params.err_check = map ? IDPF_VC_MAP_IRQ_ERR : IDPF_VC_UNMAP_IRQ_ERR; + params.timeout = map ? IDPF_WAIT_FOR_EVENT_TIMEO : + IDPF_WAIT_FOR_EVENT_TIMEO_MIN; + params.config_sz = sizeof(struct virtchnl2_queue_vector_maps); + params.chunk_sz = sizeof(struct virtchnl2_queue_vector); + params.prepare_msg = &idpf_prep_map_unmap_sel_queue_vector_msg; + params.num_chunks = num_q; + params.chunks = (u8 *)vqv; + + for (i = 0; i < num_q; i++) { + const struct idpf_q_vector *vec; + struct idpf_queue *q = qs[i]; + u32 vector_id, itr_idx; + + vqv[i].queue_type = cpu_to_le32(q->q_type); + vqv[i].queue_id = cpu_to_le32(q->q_id); + + if (q->q_type != VIRTCHNL2_QUEUE_TYPE_TX) { + vector_id = q->q_vector->v_idx; + itr_idx = q->q_vector->rx_itr_idx; + + goto fill; + } + if (idpf_is_queue_model_split(vport->txq_model)) + vec = q->txq_grp->complq->q_vector; + else + vec = q->q_vector; + + if (vec) { + vector_id = vec->v_idx; + itr_idx = vec->tx_itr_idx; + } else { + vector_id = 0; + itr_idx = VIRTCHNL2_ITR_IDX_1; + } + +fill: + vqv[i].vector_id = cpu_to_le16(vector_id); + vqv[i].itr_idx = cpu_to_le32(itr_idx); + } + + err = idpf_send_chunked_msg(vport, ¶ms); + kfree(vqv); +alloc_error: return err; } @@ -1930,46 +2178,21 @@ static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, u32 vc_op) */ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) { - struct idpf_adapter *adapter = vport->adapter; - struct virtchnl2_queue_vector_maps *vqvm; - struct virtchnl2_queue_vector *vqv; - u32 config_sz, chunk_sz, buf_sz; - u32 num_msgs, num_chunks, num_q; - int i, j, k = 0, err = 0; + struct idpf_queue **qs; + int num_q, err = 0; + int i, j, k = 0; num_q = vport->num_txq + vport->num_rxq; - buf_sz = sizeof(struct virtchnl2_queue_vector) * num_q; - vqv = kzalloc(buf_sz, GFP_KERNEL); - if (!vqv) + qs = (struct idpf_queue **)kzalloc(num_q * sizeof(*qs), GFP_KERNEL); + if (!qs) return -ENOMEM; for (i = 0; i < vport->num_txq_grp; i++) { struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i]; - for (j = 0; j < tx_qgrp->num_txq; j++, k++) { - const struct idpf_q_vector *vec; - u32 v_idx, tx_itr_idx; - - vqv[k].queue_type = cpu_to_le32(tx_qgrp->txqs[j]->q_type); - vqv[k].queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id); - - if (idpf_is_queue_model_split(vport->txq_model)) - vec = tx_qgrp->complq->q_vector; - else - vec = tx_qgrp->txqs[j]->q_vector; - - if (vec) { - v_idx = vec->v_idx; - tx_itr_idx = vec->tx_itr_idx; - } else { - v_idx = 0; - tx_itr_idx = VIRTCHNL2_ITR_IDX_1; - } - - vqv[k].vector_id = cpu_to_le16(v_idx); - vqv[k].itr_idx = cpu_to_le32(tx_itr_idx); - } + for (j = 0; j < tx_qgrp->num_txq; j++, k++) + qs[k] = tx_qgrp->txqs[j]; } if (vport->num_txq != k) { @@ -1979,7 +2202,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct 
idpf_vport *vport, bool map) for (i = 0; i < vport->num_rxq_grp; i++) { struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i]; - u16 num_rxq; + int num_rxq; if (idpf_is_queue_model_split(vport->rxq_model)) num_rxq = rx_qgrp->splitq.num_rxq_sets; @@ -1994,10 +2217,7 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) else rxq = rx_qgrp->singleq.rxqs[j]; - vqv[k].queue_type = cpu_to_le32(rxq->q_type); - vqv[k].queue_id = cpu_to_le32(rxq->q_id); - vqv[k].vector_id = cpu_to_le16(rxq->q_vector->v_idx); - vqv[k].itr_idx = cpu_to_le32(rxq->q_vector->rx_itr_idx); + qs[k] = rxq; } } @@ -2013,63 +2233,9 @@ int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map) } } - /* Chunk up the vector info into multiple messages */ - config_sz = sizeof(struct virtchnl2_queue_vector_maps); - chunk_sz = sizeof(struct virtchnl2_queue_vector); - - num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz), - num_q); - num_msgs = DIV_ROUND_UP(num_q, num_chunks); - - buf_sz = struct_size(vqvm, qv_maps, num_chunks); - vqvm = kzalloc(buf_sz, GFP_KERNEL); - if (!vqvm) { - err = -ENOMEM; - goto error; - } - - mutex_lock(&vport->vc_buf_lock); - - for (i = 0, k = 0; i < num_msgs; i++) { - memset(vqvm, 0, buf_sz); - vqvm->vport_id = cpu_to_le32(vport->vport_id); - vqvm->num_qv_maps = cpu_to_le16(num_chunks); - memcpy(vqvm->qv_maps, &vqv[k], chunk_sz * num_chunks); - - if (map) { - err = idpf_send_mb_msg(adapter, - VIRTCHNL2_OP_MAP_QUEUE_VECTOR, - buf_sz, (u8 *)vqvm); - if (!err) - err = idpf_wait_for_event(adapter, vport, - IDPF_VC_MAP_IRQ, - IDPF_VC_MAP_IRQ_ERR); - } else { - err = idpf_send_mb_msg(adapter, - VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR, - buf_sz, (u8 *)vqvm); - if (!err) - err = - idpf_min_wait_for_event(adapter, vport, - IDPF_VC_UNMAP_IRQ, - IDPF_VC_UNMAP_IRQ_ERR); - } - if (err) - goto mbx_error; - - k += num_chunks; - num_q -= num_chunks; - num_chunks = min(num_chunks, num_q); - /* Recalculate buffer size */ - buf_sz = struct_size(vqvm, qv_maps, num_chunks); - } - -mbx_error: - mutex_unlock(&vport->vc_buf_lock); - kfree(vqvm); + err = idpf_send_map_unmap_sel_queue_vector_msg(vport, qs, num_q, map); error: - kfree(vqv); - + kfree(qs); return err; } @@ -2113,6 +2279,123 @@ int idpf_send_disable_queues_msg(struct idpf_vport *vport) return idpf_wait_for_marker_event(vport); } +/** + * idpf_send_enable_selected_queues_msg - send enable queues virtchnl message + * for selected queues only + * @vport: Virtual port private data structure + * @qs: array containing queues to be enabled + * @num_q: number of queues in the 'qs' array + * + * Will send enable queues virtchnl message for queues contained in 'qs' table. + * Returns 0 on success, negative on failure. + */ +int idpf_send_enable_selected_queues_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q) +{ + return idpf_send_ena_dis_selected_qs_msg(vport, qs, num_q, + VIRTCHNL2_OP_ENABLE_QUEUES); +} + +/** + * idpf_send_disable_selected_queues_msg - send disable queues virtchnl message + * for selected queues only + * @vport: Virtual port private data structure + * @qs: array containing queues to be disabled + * @num_q: number of queues in the 'qs' array + * + * Will send disable queues virtchnl message for queues contained in 'qs' table. + * Returns 0 on success, negative on failure. 
+ */ +int idpf_send_disable_selected_queues_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q) +{ + struct idpf_queue **tx_qs; + int err, i, tx_idx = 0; + + err = idpf_send_ena_dis_selected_qs_msg(vport, qs, num_q, + VIRTCHNL2_OP_DISABLE_QUEUES); + if (err) + return err; + + tx_qs = (struct idpf_queue **)kzalloc(num_q * sizeof(*qs), GFP_KERNEL); + if (!tx_qs) + return -ENOMEM; + + for (i = 0; i < num_q; i++) { + if (qs[i]->q_type == VIRTCHNL2_QUEUE_TYPE_TX) { + set_bit(__IDPF_Q_POLL_MODE, qs[i]->flags); + tx_qs[tx_idx++] = qs[i]; + } + + if (qs[i]->q_type == VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION) + napi_schedule(&qs[i]->q_vector->napi); + } + + err = idpf_wait_for_selected_marker_events(vport, tx_qs, tx_idx); + + kfree(tx_qs); + return err; +} + +/** + * idpf_send_config_selected_queues_msg - Send virtchnl config queues message + * for selected queues only. + * @vport: virtual port data structure + * @qs: array of queues to be configured + * @num_q: number of queues contained in 'qs' array + * + * Send config queues virtchnl message for queues contained in 'qs' array. + * The 'qs' array can contain both RX or TX queues. + * Returns 0 on success, negative on failure. + */ +int idpf_send_config_selected_queues_msg(struct idpf_vport *vport, + struct idpf_queue **qs, + int num_q) +{ + int num_rxq = 0, num_txq = 0, i, err; + struct idpf_queue **txqs, **rxqs; + + txqs = (struct idpf_queue **)kzalloc(num_q * sizeof(*qs), GFP_KERNEL); + if (!txqs) + return -ENOMEM; + + rxqs = (struct idpf_queue **)kzalloc(num_q * sizeof(*qs), GFP_KERNEL); + if (!rxqs) { + err = -ENOMEM; + goto rxq_alloc_err; + } + + for (i = 0; i < num_q; i++) { + struct idpf_queue *q = qs[i]; + + if (q->q_type == VIRTCHNL2_QUEUE_TYPE_TX || + q->q_type == VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION) { + txqs[num_txq] = q; + num_txq++; + } + else if (q->q_type == VIRTCHNL2_QUEUE_TYPE_RX || + q->q_type == VIRTCHNL2_QUEUE_TYPE_RX_BUFFER) { + rxqs[num_rxq] = q; + num_rxq++; + } + } + + err = idpf_send_config_selected_tx_queues_msg(vport, txqs, num_txq); + if (err) + goto send_txq_err; + + err = idpf_send_config_selected_rx_queues_msg(vport, rxqs, num_rxq); + +send_txq_err: + kfree(rxqs); +rxq_alloc_err: + kfree(txqs); + + return err; +} + /** * idpf_convert_reg_to_queue_chunks - Copy queue chunk information to the right * structure From patchwork Sat Dec 23 02:55:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182896 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1462652dyi; Fri, 22 Dec 2023 19:09:00 -0800 (PST) X-Google-Smtp-Source: AGHT+IG9zz3LGqjQlAY1+ihnNUoq2uKT0w8Er3+9E3Wm5u2sgJyJ3wgl/R2gYoCAZE5R5Wv5r+8h X-Received: by 2002:a7b:cd8e:0:b0:40c:6ae0:559b with SMTP id y14-20020a7bcd8e000000b0040c6ae0559bmr712589wmj.343.1703300939780; Fri, 22 Dec 2023 19:08:59 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300939; cv=none; d=google.com; s=arc-20160816; b=vWWXa6CDfvmWKjLMyOIwZjAnMnwYR6roF4HNxGLj4kizxtAgL7nl/wim7Hjwf+vi/w FqKdZy5M8+oHABHgMeib+TNGgKP4F2EUoQLKS2pNz6CjLxQbryNHVBjKm2SmrpW0VLEO DT6d+7O2TUHwKeIFhlgBNUH70Jsd0uwf39w/IA9JcKnpypB2rLhZ2J0E+ntOw9/QpSMA qfSLSsNvL7aihtcKinCrfzIVZbw7K7xxFMTAYwc3DcA4SnMCq50pvDiwhN+VOwa2UHkc OtM3V9BPttiiDuNu+5VyQs850v2Dnwgw6x/za/DTqkwYDXE//u4KEvrZXP5La9GiGZ3p Tk1Q== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe 
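To make the refactor in patch 29/34 above easier to follow, here is a standalone model (not driver code) of the chunking arithmetic that idpf_send_chunked_msg() centralizes: the number of chunks that fit into one mailbox message is bounded by the control-queue buffer size, and the full chunk array is therefore sent in num_msgs passes. The 4096-byte buffer cap and the structure sizes below are assumptions made only so the example runs.

/* Illustration only -- not part of the patch.  Standalone model of the
 * chunked virtchnl send loop: how many chunks fit into one mailbox buffer
 * and how many messages are needed overall.  Sizes are assumptions.
 */
#include <stdio.h>

#define MBOX_BUF_LEN		4096	/* assumed control-queue buffer cap */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define CHUNKS_PER_MSG(cfg, chk)	((MBOX_BUF_LEN - (cfg)) / (chk))

static unsigned int min_u32(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int config_sz = 40;	/* assumed fixed message header size */
	unsigned int chunk_sz = 56;	/* assumed size of one queue chunk */
	unsigned int total_chunks = 300;
	unsigned int max_chunks, num_msgs, left, sent = 0;

	max_chunks = min_u32(CHUNKS_PER_MSG(config_sz, chunk_sz), total_chunks);
	num_msgs = DIV_ROUND_UP(total_chunks, max_chunks);

	for (left = total_chunks; left; left -= min_u32(max_chunks, left)) {
		unsigned int now = min_u32(max_chunks, left);

		/* the real code copies 'now' chunks after the fixed config
		 * header into one buffer and sends a single mailbox message */
		printf("msg %u: %u chunks, %u bytes\n",
		       ++sent, now, config_sz + now * chunk_sz);
	}

	printf("messages sent: %u (expected %u)\n", sent, num_msgs);
	return 0;
}

Each real message carries the fixed header followed by the copied chunks, which is what the prepare_msg callbacks such as idpf_prepare_cfg_txqs_msg() lay out before idpf_send_mb_msg() is called.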
IfJeUZyygqdAap4aJWUlFrSBSCXW9wvGCtxkfqdJifvA6trxujcwGGYYI g==; X-IronPort-AV: E=McAfee;i="6600,9927,10932"; a="386611078" X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="386611078" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 19:00:00 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537663" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 18:59:57 -0800 From: Alexander Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 30/34] idpf: move search rx and tx queues to header Date: Sat, 23 Dec 2023 03:55:50 +0100 Message-ID: <20231223025554.2316836-31-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040485978121381 X-GMAIL-MSGID: 1786040485978121381 From: Michal Kubiak Move Rx and Tx queue lookup functions from the ethtool implementation to the idpf header. Now, those functions can be used globally, including XDP configuration. Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf.h | 41 +++++++++++++++++++ .../net/ethernet/intel/idpf/idpf_ethtool.c | 39 ------------------ 2 files changed, 41 insertions(+), 39 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index a12c56f9f2ef..d99ebd045c4e 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -696,6 +696,47 @@ static inline int idpf_is_queue_model_split(u16 q_model) return q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT; } +/** + * idpf_find_rxq - find rxq from q index + * @vport: virtual port associated to queue + * @q_num: q index used to find queue + * + * returns pointer to rx queue + */ +static inline struct idpf_queue * +idpf_find_rxq(struct idpf_vport *vport, int q_num) +{ + int q_grp, q_idx; + + if (!idpf_is_queue_model_split(vport->rxq_model)) + return vport->rxq_grps->singleq.rxqs[q_num]; + + q_grp = q_num / IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; + q_idx = q_num % IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; + + return &vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq; +} + +/** + * idpf_find_txq - find txq from q index + * @vport: virtual port associated to queue + * @q_num: q index used to find queue + * + * returns pointer to tx queue + */ +static inline struct idpf_queue * +idpf_find_txq(struct idpf_vport *vport, int q_num) +{ + int q_grp; + + if (!idpf_is_queue_model_split(vport->txq_model)) + return vport->txqs[q_num]; + + q_grp = q_num / IDPF_DFLT_SPLITQ_TXQ_PER_GROUP; + + return vport->txq_grps[q_grp].complq; +} + /** * idpf_xdp_is_prog_ena - check if there is an XDP program on adapter * @vport: vport to check diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c index 0d192417205d..f7ec679c9b16 100644 --- 
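With idpf_find_rxq() and idpf_find_txq() now living in idpf.h, any file that includes the header can translate a queue index into the backing queue structures; this is what the XDP/XSK code added later in the series relies on (idpf_create_queue_list() uses exactly this pattern). A minimal usage sketch; the wrapper function itself is hypothetical and only for illustration:

	/* Hypothetical example: resolve both sides of queue pair 'q_idx'. */
	static int idpf_example_get_queue_pair(struct idpf_vport *vport, int q_idx)
	{
		struct idpf_queue *rxq, *txq;

		if (q_idx >= vport->num_rxq || q_idx >= vport->num_txq)
			return -EINVAL;

		rxq = idpf_find_rxq(vport, q_idx);
		txq = idpf_find_txq(vport, q_idx);

		/* Note: in the split queue model, idpf_find_txq() returns the
		 * group's completion queue, as its body above shows.
		 */
		return rxq && txq ? 0 : -ENOENT;
	}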
a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c @@ -953,45 +953,6 @@ static void idpf_get_ethtool_stats(struct net_device *netdev, idpf_vport_ctrl_unlock(netdev); } -/** - * idpf_find_rxq - find rxq from q index - * @vport: virtual port associated to queue - * @q_num: q index used to find queue - * - * returns pointer to rx queue - */ -static struct idpf_queue *idpf_find_rxq(struct idpf_vport *vport, int q_num) -{ - int q_grp, q_idx; - - if (!idpf_is_queue_model_split(vport->rxq_model)) - return vport->rxq_grps->singleq.rxqs[q_num]; - - q_grp = q_num / IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; - q_idx = q_num % IDPF_DFLT_SPLITQ_RXQ_PER_GROUP; - - return &vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq; -} - -/** - * idpf_find_txq - find txq from q index - * @vport: virtual port associated to queue - * @q_num: q index used to find queue - * - * returns pointer to tx queue - */ -static struct idpf_queue *idpf_find_txq(struct idpf_vport *vport, int q_num) -{ - int q_grp; - - if (!idpf_is_queue_model_split(vport->txq_model)) - return vport->txqs[q_num]; - - q_grp = q_num / IDPF_DFLT_SPLITQ_TXQ_PER_GROUP; - - return vport->txq_grps[q_grp].complq; -} - /** * __idpf_get_q_coalesce - get ITR values for specific queue * @ec: ethtool structure to fill with driver's coalesce settings From patchwork Sat Dec 23 02:55:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182897 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1462765dyi; Fri, 22 Dec 2023 19:09:21 -0800 (PST) X-Google-Smtp-Source: AGHT+IEt408v7kC09o3Ze/tHCDNUi+UxkB+PBA2MiYDR8G+tBAyMWnoMUygOyvIT/9aDdCMo9A3s X-Received: by 2002:a50:c282:0:b0:552:d6e4:adb0 with SMTP id o2-20020a50c282000000b00552d6e4adb0mr671288edf.111.1703300961507; Fri, 22 Dec 2023 19:09:21 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300961; cv=none; d=google.com; s=arc-20160816; b=zEHSPBTxfN79HU/wZ4RC+GmaiFZVXsCgiNc6JqZkWNkNi0kQyxnqdK1EKzrWMwLfPA 2E9jZU3cTKVnweAnBNMJ4nRDNIINbPH5jaupHVf/Gk6V/RRexvuOIpB2jExaoDlUYZmN bmVwZXcHf4PdIRD4RsOV9Z9j5jO71fmUueGlT2036tlHMQBXviyO4yAxFycHuyuemvMW r0PDx9hSe4SKwncm3a54Pm2lo+IBco/10QmcjxuslXe5yHvLWoIvIJmT4S6UiW/4IU+x PfIu1DFG6HgHZA49vLZK6Mk60wS57le6FE+YwYqzn9AR10Koggc6Hprz03hYO5YEpIda jVGg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=ODeF9xKSC814wyNLAu5x0Y8RmTT/XTbWwNEQjFWLSEk=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=JWv3qeontZYEB0XMfbQ5JiSqG2kKgUsadEqVPC4bTaNRsR+t3qFp3TdJeaUBGclsXx pSxyidmE0uKDvk5EfqtbEnW5VxlcIxiKJ8JUaNo5ZQ2Uy4nUtQU8tBBBIa2EyW2BWn6+ jAY3mJ7MHSZIzl4zkzh5jDCxBnvD8w12wjxoy0EambfNf5snJPE9xEmtCoumLTpvTkIR MI9AKPTPKjbQE/UnetpC3mI/qFyBjoNyGMbRk+Rlt/wdBRdTYmP7lzcvOrJvIOJ0GXr5 PmK2vcGeoLXA023JRBFIKhvmIvcx9dOG2DentqMWKfzzxre6zf2pmRELJCLyh895gYrn L5Kg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=Tr3C3IR6; spf=pass (google.com: domain of linux-kernel+bounces-10247-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.80.249 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10247-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from am.mirrors.kernel.org 
(am.mirrors.kernel.org. [147.75.80.249]) by mx.google.com with ESMTPS id u27-20020a50d51b000000b0054c6ee88805si2422307edi.649.2023.12.22.19.09.21 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Dec 2023 19:09:21 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-10247-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.80.249 as permitted sender) client-ip=147.75.80.249; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=Tr3C3IR6; spf=pass (google.com: domain of linux-kernel+bounces-10247-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.80.249 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10247-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by am.mirrors.kernel.org (Postfix) with ESMTPS id EFC611F24390 for ; Sat, 23 Dec 2023 03:09:20 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 41DEA3D993; Sat, 23 Dec 2023 03:00:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Tr3C3IR6" X-Original-To: linux-kernel@vger.kernel.org Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A9B0B3D96D; Sat, 23 Dec 2023 03:00:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703300406; x=1734836406; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=IETM6DFCXtbBVRnZwslRqbFcq/erXvw6nPWBmZ/KIAA=; b=Tr3C3IR6LM0l4gtcTKsn2WHY6NfIUj2B0ozj7jmDrRmHG+M8E2ICOiZ7 jlCFg2fIdcOZGDELHdA4D5IDu3NaCgoQyDHRE4mOCFCE9+YJ+B/9Mx+qg TPQUWN1WDJ3LyytudM1571Ur4oc58glIy1TCt4ay3J3zc0JA9gynrijeD FCj1sPpy75KTzwmYPdyuqSgVlwocVe9fofa5tW8eVHpeIS7ywFdObyb15 VBdrAZqtVV8hZhMI/mhp2+HAa/0ZuMYGaujAiOAnOIWpU1XPsypx0BwEI YopjXmDo2m5/yq5YIwtdVgeWiGTuKL3rCLfJlDGhhQV1OUg+oo2JnfVzy w==; X-IronPort-AV: E=McAfee;i="6600,9927,10932"; a="386611090" X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="386611090" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 19:00:06 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537678" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 19:00:01 -0800 From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 31/34] idpf: add XSk pool initialization Date: Sat, 23 Dec 2023 03:55:51 +0100 Message-ID: <20231223025554.2316836-32-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040508853920523 X-GMAIL-MSGID: 1786040508853920523 From: Michal Kubiak Add functionality to setup an XSk buffer pool, including ability to stop, reconfig and start only selected queues, not the whole device. Pool DMA mapping is managed by libie. Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/Makefile | 1 + drivers/net/ethernet/intel/idpf/idpf.h | 13 + drivers/net/ethernet/intel/idpf/idpf_txrx.c | 14 +- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 6 + drivers/net/ethernet/intel/idpf/idpf_xdp.c | 5 + drivers/net/ethernet/intel/idpf/idpf_xsk.c | 474 ++++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_xsk.h | 15 + 7 files changed, 521 insertions(+), 7 deletions(-) create mode 100644 drivers/net/ethernet/intel/idpf/idpf_xsk.c create mode 100644 drivers/net/ethernet/intel/idpf/idpf_xsk.h diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile index 4024781ff02b..f4bc1fd96092 100644 --- a/drivers/net/ethernet/intel/idpf/Makefile +++ b/drivers/net/ethernet/intel/idpf/Makefile @@ -18,3 +18,4 @@ idpf-y := \ idpf_vf_dev.o idpf-objs += idpf_xdp.o +idpf-objs += idpf_xsk.o diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h index d99ebd045c4e..f05ed84600fd 100644 --- a/drivers/net/ethernet/intel/idpf/idpf.h +++ b/drivers/net/ethernet/intel/idpf/idpf.h @@ -464,6 +464,7 @@ struct idpf_rss_data { * ethtool * @num_req_rxq_desc: Number of user requested RX queue descriptors through * ethtool + * @af_xdp_zc_qps: Mask of queue pairs where the AF_XDP socket is established * @user_flags: User toggled config flags * @mac_filter_list: List of MAC filters * @@ -480,6 +481,7 @@ struct idpf_vport_user_config_data { struct bpf_prog *xdp_prog; DECLARE_BITMAP(user_flags, __IDPF_USER_FLAGS_NBITS); struct list_head mac_filter_list; + DECLARE_BITMAP(af_xdp_zc_qps, IDPF_LARGE_MAX_Q); }; /** @@ -959,6 +961,17 @@ static inline void idpf_vport_ctrl_unlock(struct net_device *netdev) mutex_unlock(&np->adapter->vport_ctrl_lock); } +/** + * idpf_vport_ctrl_is_locked - Check if vport control lock is taken + * @netdev: Network interface device structure + */ +static inline bool idpf_vport_ctrl_is_locked(struct net_device *netdev) +{ + struct idpf_netdev_priv *np = netdev_priv(netdev); + + return mutex_is_locked(&np->adapter->vport_ctrl_lock); +} + void idpf_statistics_task(struct work_struct *work); void idpf_init_task(struct work_struct *work); void idpf_service_task(struct work_struct *work); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 3dc21731df2f..e3f59bbe7c90 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ 
b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -100,7 +100,7 @@ static void idpf_tx_buf_rel_all(struct idpf_queue *txq) * * Free all transmit software resources */ -static void idpf_tx_desc_rel(struct idpf_queue *txq, bool bufq) +void idpf_tx_desc_rel(struct idpf_queue *txq, bool bufq) { if (bufq) idpf_tx_buf_rel_all(txq); @@ -194,7 +194,7 @@ static int idpf_tx_buf_alloc_all(struct idpf_queue *tx_q) * * Returns 0 on success, negative on failure */ -static int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq) +int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq) { struct device *dev = tx_q->dev; u32 desc_sz; @@ -385,7 +385,7 @@ static void idpf_rx_buf_rel_all(struct idpf_queue *rxq) * * Free a specific rx queue resources */ -static void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model) +void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model) { if (!rxq) return; @@ -649,8 +649,7 @@ static int idpf_rx_buf_alloc_all(struct idpf_queue *rxbufq) * * Returns 0 on success, negative on failure */ -static int idpf_rx_bufs_init(struct idpf_queue *rxbufq, - enum libie_rx_buf_type type) +int idpf_rx_bufs_init(struct idpf_queue *rxbufq, enum libie_rx_buf_type type) { struct libie_buf_queue bq = { .truesize = rxbufq->truesize, @@ -730,7 +729,7 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport) * * Returns 0 on success, negative on failure */ -static int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model) +int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model) { struct device *dev = rxq->dev; @@ -1870,7 +1869,8 @@ static void idpf_tx_finalize_complq(struct idpf_queue *complq, int ntc, dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) || np->state != __IDPF_VPORT_UP || - !netif_carrier_ok(tx_q->vport->netdev); + !netif_carrier_ok(tx_q->vport->netdev) || + idpf_vport_ctrl_is_locked(tx_q->vport->netdev); /* Check if the TXQ needs to and can be restarted */ __netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes, IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH, diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index f32d854fe850..be396f1e346a 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -519,6 +519,7 @@ union idpf_queue_stats { * @size: Length of descriptor ring in bytes * @dma: Physical address of ring * @desc_ring: Descriptor ring memory + * @xsk_pool: Pointer to a description of a buffer pool for AF_XDP socket * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather * @tx_min_pkt_len: Min supported packet length * @num_completions: Only relevant for TX completion queue. 
It tracks the @@ -948,6 +949,11 @@ bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rxq, u16 cleaned_count); int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off); void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q); +int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model); +void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model); +int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq); +void idpf_tx_desc_rel(struct idpf_queue *txq, bool bufq); +int idpf_rx_bufs_init(struct idpf_queue *rxbufq, enum libie_rx_buf_type type); /** * idpf_xdpq_update_tail - Updates the XDP Tx queue tail register diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.c b/drivers/net/ethernet/intel/idpf/idpf_xdp.c index b4f096186302..c20c805583be 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xdp.c +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.c @@ -3,6 +3,7 @@ #include "idpf.h" #include "idpf_xdp.h" +#include "idpf_xsk.h" static int idpf_rxq_for_each(const struct idpf_vport *vport, int (*fn)(struct idpf_queue *rxq, void *arg), @@ -472,6 +473,10 @@ int idpf_xdp(struct net_device *netdev, struct netdev_bpf *xdp) case XDP_SETUP_PROG: err = idpf_xdp_setup_prog(vport, xdp->prog, xdp->extack); break; + case XDP_SETUP_XSK_POOL: + err = idpf_xsk_pool_setup(vport, xdp->xsk.pool, + xdp->xsk.queue_id); + break; default: err = -EINVAL; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_xsk.c b/drivers/net/ethernet/intel/idpf/idpf_xsk.c new file mode 100644 index 000000000000..3017680fedb3 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_xsk.c @@ -0,0 +1,474 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* Copyright (C) 2023 Intel Corporation */ + +#include + +#include "idpf.h" +#include "idpf_xsk.h" + +/** + * idpf_qp_cfg_qs - Configure all queues contained from a given array. + * @vport: vport structure + * @qs: an array of queues to configure + * @num_qs: number of queues in the 'qs' array + * + * Returns 0 in case of success, false otherwise. + */ +static int +idpf_qp_cfg_qs(struct idpf_vport *vport, struct idpf_queue **qs, int num_qs) +{ + bool splitq = idpf_is_queue_model_split(vport->rxq_model); + int i, err; + + for (i = 0; i < num_qs; i++) { + const struct idpf_bufq_set *sets; + struct idpf_queue *q = qs[i]; + enum libie_rx_buf_type qt; + u32 ts; + + switch (q->q_type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + err = idpf_rx_desc_alloc(q, false, vport->rxq_model); + if (err) { + netdev_err(vport->netdev, "Could not allocate buffer for RX queue.\n"); + break; + } + if (!splitq) + err = idpf_rx_bufs_init(q, LIBIE_RX_BUF_MTU); + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + err = idpf_rx_desc_alloc(q, true, vport->rxq_model); + if (err) + break; + + sets = q->rxq_grp->splitq.bufq_sets; + qt = q->idx ? LIBIE_RX_BUF_SHORT : LIBIE_RX_BUF_MTU; + ts = q->idx ? sets[q->idx - 1].bufq.truesize >> 1 : 0; + q->truesize = ts; + + err = idpf_rx_bufs_init(q, qt); + break; + case VIRTCHNL2_QUEUE_TYPE_TX: + err = idpf_tx_desc_alloc(q, true); + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + err = idpf_tx_desc_alloc(q, false); + break; + } + + if (err) + return err; + } + + return 0; +} + +/** + * idpf_qp_clean_qs - Clean all queues contained from a given array. 
+ * @vport: vport structure + * @qs: an array of queues to clean + * @num_qs: number of queues in the 'qs' array + */ +static void +idpf_qp_clean_qs(struct idpf_vport *vport, struct idpf_queue **qs, int num_qs) +{ + for (u32 i = 0; i < num_qs; i++) { + struct idpf_queue *q = qs[i]; + + switch (q->q_type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + idpf_rx_desc_rel(q, false, vport->rxq_model); + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + idpf_rx_desc_rel(q, true, vport->rxq_model); + break; + case VIRTCHNL2_QUEUE_TYPE_TX: + idpf_tx_desc_rel(q, true); + q->txq_grp->num_completions_pending = 0; + writel(q->next_to_use, q->tail); + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + idpf_tx_desc_rel(q, false); + q->num_completions = 0; + break; + } + } +} + +/** + * idpf_qvec_ena_irq - Enable IRQ for given queue vector + * @q_vector: queue vector + */ +static void +idpf_qvec_ena_irq(struct idpf_q_vector *q_vector) +{ + /* Write the default ITR values */ + if (q_vector->num_rxq) + idpf_vport_intr_write_itr(q_vector, q_vector->rx_itr_value, + false); + if (q_vector->num_txq) + idpf_vport_intr_write_itr(q_vector, q_vector->tx_itr_value, + true); + if (q_vector->num_rxq || q_vector->num_txq) + idpf_vport_intr_update_itr_ena_irq(q_vector); +} + +/** + * idpf_insert_txqs_from_grp - Insert all tx and buffer queues from txq group + * to a given array. + * @vport: vport structure + * @txq: pointer to a tx queue + * @qs: pointer to an element of array where tx queues should be inserted + * + * Returns the number of queues that has been inserted to an output 'qs' + * array. + * Note that the caller of this function must ensure that there is enough space + * in the 'qs' array to insert all the queues from the rx queue group. + */ +static int +idpf_insert_txqs_from_grp(struct idpf_vport *vport, + struct idpf_queue *txq, + struct idpf_queue **qs) +{ + int qs_idx = 0; + + if (!idpf_is_queue_model_split(vport->txq_model)) { + qs[qs_idx++] = txq; + } else { + struct idpf_txq_group *txq_grp = txq->txq_grp; + int i; + + for (i = 0; i < txq_grp->num_txq; i++) + qs[qs_idx++] = txq_grp->txqs[i]; + + for (i = 0; i < IDPF_COMPLQ_PER_GROUP; i++) + qs[qs_idx++] = &txq_grp->complq[i]; + } + + return qs_idx; +} + +/** + * idpf_insert_rxqs_from_grp - Insert all rx and buffer queues from rxq group + * to a given array. + * @vport: vport structure + * @rxq: pointer to a rx queue + * @qs: pointer to an element of array where rx queues should be inserted + * + * Returns the number of queues that has been inserted to an output 'qs' + * array. + * Note that the caller of this function must ensure that there is enough space + * in the 'qs' array to insert all the queues from the rx queue group. + */ +static int +idpf_insert_rxqs_from_grp(struct idpf_vport *vport, + struct idpf_queue *rxq, + struct idpf_queue **qs) +{ + int qs_idx = 0; + + if (!idpf_is_queue_model_split(vport->rxq_model)) { + qs[qs_idx++] = rxq; + } else { + struct idpf_rxq_group *rxq_grp = rxq->rxq_grp; + int i; + + for (i = 0; i < rxq_grp->splitq.num_rxq_sets; i++) + qs[qs_idx++] = &rxq_grp->splitq.rxq_sets[i]->rxq; + + for (i = 0; i < vport->num_bufqs_per_qgrp; i++) + qs[qs_idx++] = &rxq_grp->splitq.bufq_sets[i].bufq; + } + + return qs_idx; +} + +/** + * idpf_count_rxqs_in_grp - Returns the number of rx queues in rx queue group + * containing a given rx queue. + * @vport: vport structure + * @rxq: pointer to a rx queue + * + * Returns the number of rx queues in the rx queue group associated with + * a given rx queue. 
Or, in case of singleq mode, 1, because rx queues + * are not grouped. + */ +static int +idpf_count_rxqs_in_grp(struct idpf_vport *vport, struct idpf_queue *rxq) +{ + if (!idpf_is_queue_model_split(vport->rxq_model)) + return 1; + + return rxq->rxq_grp->splitq.num_rxq_sets + vport->num_bufqs_per_qgrp; +} + +/** + * idpf_count_txqs_in_grp - Returns the number of tx queues in tx queue group + * containing a given tx queue. + * @vport: vport structure + * @txq: pointer to a tx queue + * + * Returns the number of tx queues in the tx queue group associated with + * a given tx queue. Or, in case of singleq mode, 1, because tx queues + * are not grouped. + */ +static int +idpf_count_txqs_in_grp(struct idpf_vport *vport, struct idpf_queue *txq) +{ + if (!idpf_is_queue_model_split(vport->txq_model)) + return 1; + + return txq->txq_grp->num_txq + IDPF_COMPLQ_PER_GROUP; +} + +/** + * idpf_create_queue_list - Creates a list of queues associated with a given + * queue index. + * @vport: vport structure + * @q_idx: index of queue pair to establish XSK socket + * @num_qs: number of queues in returned array. + * + * Returns a pointer to a dynamically allocated array of pointers to all + * queues associated with a given queue index (q_idx). + * Please note that the caller is responsible to free the memory allocated + * by this function using 'kfree()'. + * NULL-pointer will be returned in case of error. + */ +static struct idpf_queue ** +idpf_create_queue_list(struct idpf_vport *vport, u16 q_idx, int *num_qs) +{ + struct idpf_queue *rxq, *txq, *xdpq = NULL; + struct idpf_queue **qs; + int qs_idx; + + *num_qs = 0; + + if (q_idx >= vport->num_rxq || q_idx >= vport->num_txq) + return NULL; + + rxq = idpf_find_rxq(vport, q_idx); + txq = idpf_find_txq(vport, q_idx); + + *num_qs += idpf_count_rxqs_in_grp(vport, rxq); + *num_qs += idpf_count_txqs_in_grp(vport, txq); + + if (idpf_xdp_is_prog_ena(vport)) { + xdpq = vport->txqs[q_idx + vport->xdp_txq_offset]; + *num_qs += idpf_count_txqs_in_grp(vport, xdpq); + } + + qs = kcalloc(*num_qs, sizeof(*qs), GFP_KERNEL); + if (!qs) + return NULL; + + qs_idx = 0; + qs_idx += idpf_insert_txqs_from_grp(vport, txq, &qs[qs_idx]); + + if (xdpq) + qs_idx += idpf_insert_txqs_from_grp(vport, xdpq, &qs[qs_idx]); + + qs_idx += idpf_insert_rxqs_from_grp(vport, rxq, &qs[qs_idx]); + + if (*num_qs != qs_idx) { + kfree(qs); + *num_qs = 0; + qs = NULL; + } + + return qs; +} + +/** + * idpf_qp_dis - Disables queues associated with a queue pair + * @vport: vport structure + * @q_vector: interrupt vector mapped to a given queue pair + * @qs: array of pointers to queues to enable + * @num_qs: number of queues in 'qs' array + * @q_idx: index of queue pair to enable + * + * Returns 0 on success, negative on failure. 
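To make the counting in idpf_create_queue_list() concrete, here is a worked example with illustrative numbers (nothing in the patch mandates them): split queue model, 4 Rx queues and 2 buffer queues per Rx group, 1 Tx queue plus 1 completion queue per Tx group, and an XDP program attached so one extra XDP Tx group is included:

	num_qs = (4 + 2)	Rx queues + buffer queues in the Rx group
	       + (1 + 1)	Tx queues + completion queue in the Tx group
	       + (1 + 1)	same again for the XDP Tx group
	       = 10		queue structures for one queue-pair index

The insert helpers then fill the array in the order the code shows: regular Tx group first, then the XDP Tx group, then the Rx group, and the final qs_idx is cross-checked against num_qs before the array is returned.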
+ */ +static int idpf_qp_dis(struct idpf_vport *vport, struct idpf_q_vector *q_vector, + struct idpf_queue **qs, int num_qs, u16 q_idx) +{ + int err = 0; + + netif_stop_subqueue(vport->netdev, q_idx); + + err = idpf_send_disable_vport_msg(vport); + if (err) { + netdev_err(vport->netdev, "Could not disable vport, error = %d\n", + err); + goto err_send_msg; + } + err = idpf_send_disable_selected_queues_msg(vport, qs, num_qs); + if (err) { + netdev_err(vport->netdev, "Could not disable queues for index %d, error = %d\n", + q_idx, err); + goto err_send_msg; + } + + napi_disable(&q_vector->napi); + writel(0, q_vector->intr_reg.dyn_ctl); + idpf_qp_clean_qs(vport, qs, num_qs); + + return 0; + +err_send_msg: + netif_start_subqueue(vport->netdev, q_idx); + + return err; +} + +/** + * idpf_qp_ena - Enables queues associated with a queue pair + * @vport: vport structure + * @q_vector: interrupt vector mapped to a given queue pair + * @qs: array of pointers to queues to enable + * @num_qs: number of queues in 'qs' array + * @q_idx: index of queue pair to enable + * + * Returns 0 on success, negative on failure. + */ +static int idpf_qp_ena(struct idpf_vport *vport, struct idpf_q_vector *q_vector, + struct idpf_queue **qs, int num_qs, u16 q_idx) +{ + int err; + + err = idpf_qp_cfg_qs(vport, qs, num_qs); + if (err) { + netdev_err(vport->netdev, "Could not initialize queues for index %d, error = %d\n", + q_idx, err); + return err; + } + + napi_enable(&q_vector->napi); + idpf_qvec_ena_irq(q_vector); + + err = idpf_send_config_selected_queues_msg(vport, qs, num_qs); + if (err) { + netdev_err(vport->netdev, "Could not configure queues for index %d, error = %d\n", + q_idx, err); + return err; + } + + err = idpf_send_enable_selected_queues_msg(vport, qs, num_qs); + if (err) { + netdev_err(vport->netdev, "Could not enable queues for index %d, error = %d\n", + q_idx, err); + return err; + } + + err = idpf_send_enable_vport_msg(vport); + if (err) { + netdev_err(vport->netdev, "Could not enable vport, error = %d\n", + err); + return err; + } + + netif_start_subqueue(vport->netdev, q_idx); + + return 0; +} + +/** + * idpf_xsk_pool_disable - disables a BUFF POOL region + * @vport: vport to allocate the buffer pool on + * @qid: queue id + * + * Returns 0 on success, negative on error + */ +static int idpf_xsk_pool_disable(struct idpf_vport *vport, u16 qid) +{ + struct idpf_vport_user_config_data *cfg_data; + + if (!vport->rxq_grps) + return -EINVAL; + + cfg_data = &vport->adapter->vport_config[vport->idx]->user_config; + + return libie_xsk_disable_pool(vport->netdev, qid, + cfg_data->af_xdp_zc_qps); +} +/** + * idpf_xsk_pool_enable - enables a BUFF POOL region + * @vport: vport to allocate the buffer pool on + * @pool: pointer to a requested BUFF POOL region + * @qid: queue id + * + * Returns 0 on success, negative on error + */ +static int idpf_xsk_pool_enable(struct idpf_vport *vport, u16 qid) +{ + struct idpf_vport_user_config_data *cfg_data; + + cfg_data = &vport->adapter->vport_config[vport->idx]->user_config; + + return libie_xsk_enable_pool(vport->netdev, qid, + cfg_data->af_xdp_zc_qps); +} + +/** + * idpf_xsk_pool_setup - enable/disable a BUFF POOL region + * @vport: current vport of interest + * @pool: pointer to a requested BUFF POOL region + * @qid: queue id + * + * Returns 0 on success, negative on failure + */ +int idpf_xsk_pool_setup(struct idpf_vport *vport, struct xsk_buff_pool *pool, + u32 qid) +{ + bool if_running, pool_present = !!pool; + int err = 0, pool_failure = 0, num_qs; + struct 
idpf_q_vector *q_vector; + struct idpf_queue *rxq, **qs; + + if_running = netif_running(vport->netdev) && + idpf_xdp_is_prog_ena(vport); + + if (if_running) { + rxq = idpf_find_rxq(vport, qid); + q_vector = rxq->q_vector; + + qs = idpf_create_queue_list(vport, qid, &num_qs); + if (!qs) { + err = -ENOMEM; + goto xsk_exit; + } + + err = idpf_qp_dis(vport, q_vector, qs, num_qs, qid); + if (err) { + netdev_err(vport->netdev, "Cannot disable queues for XSK setup, error = %d\n", + err); + goto xsk_pool_if_up; + } + } + + pool_failure = pool_present ? idpf_xsk_pool_enable(vport, qid) : + idpf_xsk_pool_disable(vport, qid); + + if (!idpf_xdp_is_prog_ena(vport)) + netdev_warn(vport->netdev, "RSS may schedule pkts to q occupied by AF XDP\n"); + +xsk_pool_if_up: + if (if_running) { + err = idpf_qp_ena(vport, q_vector, qs, num_qs, qid); + if (!err && pool_present) + napi_schedule(&rxq->q_vector->napi); + else if (err) + netdev_err(vport->netdev, + "Could not enable queues after XSK setup, error = %d\n", + err); + kfree(qs); + } + + if (pool_failure) { + netdev_err(vport->netdev, "Could not %sable BUFF POOL, error = %d\n", + pool_present ? "en" : "dis", pool_failure); + err = pool_failure; + } + +xsk_exit: + return err; +} diff --git a/drivers/net/ethernet/intel/idpf/idpf_xsk.h b/drivers/net/ethernet/intel/idpf/idpf_xsk.h new file mode 100644 index 000000000000..93705900f592 --- /dev/null +++ b/drivers/net/ethernet/intel/idpf/idpf_xsk.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2023 Intel Corporation */ + +#ifndef _IDPF_XSK_H_ +#define _IDPF_XSK_H_ + +#include + +struct idpf_vport; +struct xsk_buff_pool; + +int idpf_xsk_pool_setup(struct idpf_vport *vport, struct xsk_buff_pool *pool, + u32 qid); + +#endif /* !_IDPF_XSK_H_ */ From patchwork Sat Dec 23 02:55:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182899 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1462899dyi; Fri, 22 Dec 2023 19:09:45 -0800 (PST) X-Google-Smtp-Source: AGHT+IEXzBRPN7GCKP5MGtAsXmIGvviNm+Nn5XEtCxNWpnNL1cN7WDie5zSam7DmvhP7uwM6xENd X-Received: by 2002:ad4:5963:0:b0:67a:aa87:7ae2 with SMTP id eq3-20020ad45963000000b0067aaa877ae2mr3860660qvb.40.1703300985561; Fri, 22 Dec 2023 19:09:45 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703300985; cv=none; d=google.com; s=arc-20160816; b=IdLlgMc2W+FSVb+/31SrxOm9gHBhcWQnmJ3reYp25hK84BdlvXtof7r8toWwHCq0JA 2UFsl5iwR6pIQbTZMSEifnd1Z6AsUx8LttCWszrkXwbMM3eGQzYflMf83/QMwPuCGRz2 2fXtu0Pu4rRTjZGqCd6hotzbyw2JVpoZmpT4GjIma5BQcX/Q3qla522adxaTeO/rjzIy 7j5Cvm/asZxr1adw4ZKB+UVPXDIucDv3AdecAU3GaJZnVVXBtk+bRw8a212F6tUgxy2r eOzI8H46+UHa75BIov7RTGtM2FgkmW3zCj7uHkXE49+vQPeyiZfrMDmnc6UxZGG3V6ME oKSg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=EUKKy95nc07JqLkckpNTmJIloi5lwAiN1viu1YJoE3Q=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=rVIHcyRLjxe0cW+Sh1y94DHzzpv3/Ej/ysIyv9X8FG9hzA9YCJvXkAJqZ29ci4Fj+u 54s3QFWUgsomrKgR7fpaUHKahHxDCoQ4MFx0LHRZ9NVezacTui7nWiuOcL/SuE3ZlSKL BFNWrgEiMEV99Y8b4NU5ReqGpDecT5ICZjjMUB7XDhED7VFttUMJHyKJ4M79VdjvRL/I 6DEWq7wL0Do01izcENs5CF0p7xESZ1Y7u6P7fZLfdgDuqfYqD7vqc1E2xxT4UZ7CZW9r f3/BODjppYqVl+PW0EwNbOYRwjOz/4HZ2wET9SBYt8xlqJr9URZKeyLJ/l8f1ud6Y+ze SeMg== 
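Some context for idpf_xsk_pool_setup() above: the XDP_SETUP_XSK_POOL command it serves is issued by the core when userspace binds an AF_XDP socket to a specific queue id of the netdev. A much-simplified userspace sketch follows; it is illustrative only (UMEM registration and the fill/Rx/Tx ring setsockopt() calls are omitted, and bind() would fail without them) — see libxdp for a complete implementation:

#include <linux/if_xdp.h>
#include <net/if.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative only: bind an AF_XDP socket in zero-copy mode to queue
 * 'queue_id' of interface 'ifname'. This is the operation that ends up
 * in the driver's ndo_bpf(XDP_SETUP_XSK_POOL) handler.
 */
static int xsk_bind_zc(const char *ifname, __u32 queue_id)
{
	struct sockaddr_xdp sxdp = {
		.sxdp_family = AF_XDP,
		.sxdp_ifindex = if_nametoindex(ifname),
		.sxdp_queue_id = queue_id,
		.sxdp_flags = XDP_ZEROCOPY,
	};
	int fd = socket(AF_XDP, SOCK_RAW, 0);

	if (fd < 0)
		return -1;

	/* XDP_UMEM_REG, XDP_UMEM_FILL_RING, XDP_UMEM_COMPLETION_RING,
	 * XDP_RX_RING and/or XDP_TX_RING must be configured here first.
	 */
	if (bind(fd, (struct sockaddr *)&sxdp, sizeof(sxdp))) {
		close(fd);
		return -1;
	}

	return fd;
}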
ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=Cf4oE3C3; spf=pass (google.com: domain of linux-kernel+bounces-10248-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10248-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. [2604:1380:45d1:ec00::1]) by mx.google.com with ESMTPS id j19-20020a0cf313000000b0067f95cebfa1si3113406qvl.352.2023.12.22.19.09.45 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Dec 2023 19:09:45 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-10248-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) client-ip=2604:1380:45d1:ec00::1; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=Cf4oE3C3; spf=pass (google.com: domain of linux-kernel+bounces-10248-ouuuleilei=gmail.com@vger.kernel.org designates 2604:1380:45d1:ec00::1 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10248-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id AEFEC1C22678 for ; Sat, 23 Dec 2023 03:09:41 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 754233E482; Sat, 23 Dec 2023 03:00:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="Cf4oE3C3" X-Original-To: linux-kernel@vger.kernel.org Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 40B3C3D984; Sat, 23 Dec 2023 03:00:10 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703300410; x=1734836410; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=9m+hJARl+7l9gBAwp/JVuXNb4MiWER0ftpx5hz4sFLg=; b=Cf4oE3C3NZ15qE3PoyiWTz+LeMUB9SKZYFT8xAUYhEud2J1yw5jd2o/Y RM/CG08yvQA5wvd/g+8J7EuMGzzBLnpKRAhTdxvdHQz1sPfon8MtAylEn dvzeX9KFEHPmUL5JCssFBfpRbQPd4M8PTD58sHLlEw+Ti8M/lupoTd2J+ L9qlk7y2/VSaV3OJBNBwVWcMV0Jbc6HHAuXMNXKXlgk2jk1xrdcPusRAm XoUv5jj/OqrkZQs7RnxtqNUM/0iA16RGne3Y3UJtIKbweGMkkwB/tGP7y 1sgKGdrSgAX52dYKzYjSglrOAx2HXMtXC/mLtuikZ+IMpdADiEpYU5GqU w==; X-IronPort-AV: E=McAfee;i="6600,9927,10932"; a="386611098" X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="386611098" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 19:00:09 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537683" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 19:00:06 -0800 From: Alexander 
Lobakin To: "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 32/34] idpf: implement Tx path for AF_XDP Date: Sat, 23 Dec 2023 03:55:52 +0100 Message-ID: <20231223025554.2316836-33-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040534382956370 X-GMAIL-MSGID: 1786040534382956370 From: Michal Kubiak Implement Tx handling for AF_XDP feature in zero-copy mode using the libie XSk infra. Signed-off-by: Michal Kubiak Co-developed-by: Alexander Lobakin Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 44 ++- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 4 + drivers/net/ethernet/intel/idpf/idpf_xsk.c | 318 ++++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_xsk.h | 9 + 4 files changed, 361 insertions(+), 14 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index e3f59bbe7c90..5ba880c2bedc 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -3,6 +3,7 @@ #include "idpf.h" #include "idpf_xdp.h" +#include "idpf_xsk.h" /** * idpf_buf_lifo_push - push a buffer pointer onto stack @@ -55,30 +56,36 @@ void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue) } } -/** - * idpf_tx_buf_rel_all - Free any empty Tx buffers - * @txq: queue to be cleaned - */ -static void idpf_tx_buf_rel_all(struct idpf_queue *txq) +static void idpf_tx_buf_clean(struct idpf_queue *txq) { struct libie_sq_onstack_stats ss = { }; struct xdp_frame_bulk bq; - u16 i; - - /* Buffers already cleared, nothing to do */ - if (!txq->tx_buf) - return; xdp_frame_bulk_init(&bq); rcu_read_lock(); - /* Free all the Tx buffer sk_buffs */ - for (i = 0; i < txq->desc_count; i++) + for (u32 i = 0; i < txq->desc_count; i++) libie_tx_complete_any(&txq->tx_buf[i], txq->dev, &bq, &txq->xdp_tx_active, &ss); xdp_flush_frame_bulk(&bq); rcu_read_unlock(); +} + +/** + * idpf_tx_buf_rel_all - Free any empty Tx buffers + * @txq: queue to be cleaned + */ +static void idpf_tx_buf_rel_all(struct idpf_queue *txq) +{ + /* Buffers already cleared, nothing to do */ + if (!txq->tx_buf) + return; + + if (test_bit(__IDPF_Q_XSK, txq->flags)) + idpf_xsk_clean_xdpq(txq); + else + idpf_tx_buf_clean(txq); kfree(txq->tx_buf); txq->tx_buf = NULL; @@ -86,7 +93,7 @@ static void idpf_tx_buf_rel_all(struct idpf_queue *txq) if (!txq->buf_stack.bufs) return; - for (i = 0; i < txq->buf_stack.size; i++) + for (u32 i = 0; i < txq->buf_stack.size; i++) kfree(txq->buf_stack.bufs[i]); kfree(txq->buf_stack.bufs); @@ -105,6 +112,8 @@ void idpf_tx_desc_rel(struct idpf_queue *txq, bool bufq) if (bufq) idpf_tx_buf_rel_all(txq); + idpf_xsk_clear_queue(txq); + if (!txq->desc_ring) return; @@ -196,6 +205,7 @@ static int idpf_tx_buf_alloc_all(struct idpf_queue *tx_q) */ int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq) { + enum virtchnl2_queue_type type; struct device *dev = tx_q->dev; u32 desc_sz; int err; @@ -228,6 +238,10 @@ 
int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq) tx_q->next_to_clean = 0; set_bit(__IDPF_Q_GEN_CHK, tx_q->flags); + type = bufq ? VIRTCHNL2_QUEUE_TYPE_TX : + VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION; + idpf_xsk_setup_queue(tx_q, type); + return 0; err_alloc: @@ -3802,7 +3816,9 @@ static bool idpf_tx_splitq_clean_all(struct idpf_q_vector *q_vec, for (i = 0; i < num_txq; i++) { struct idpf_queue *cq = q_vec->tx[i]; - if (!test_bit(__IDPF_Q_XDP, cq->flags)) + if (test_bit(__IDPF_Q_XSK, cq->flags)) + clean_complete &= idpf_xmit_zc(cq); + else if (!test_bit(__IDPF_Q_XDP, cq->flags)) clean_complete &= idpf_tx_clean_complq(cq, budget_per_q, cleaned); diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index be396f1e346a..d55ff6aaae2b 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -313,6 +313,7 @@ struct idpf_ptype_state { * @__IDPF_Q_SW_MARKER: Used to indicate TX queue marker completions * @__IDPF_Q_POLL_MODE: Enable poll mode * @__IDPF_Q_FLAGS_NBITS: Must be last + * @__IDPF_Q_XSK: Queue used to handle the AF_XDP socket */ enum idpf_queue_flags_t { __IDPF_Q_GEN_CHK, @@ -321,6 +322,7 @@ enum idpf_queue_flags_t { __IDPF_Q_SW_MARKER, __IDPF_Q_POLL_MODE, __IDPF_Q_XDP, + __IDPF_Q_XSK, __IDPF_Q_FLAGS_NBITS, }; @@ -574,10 +576,12 @@ struct idpf_queue { union { struct page_pool *hdr_pp; struct idpf_queue **xdpqs; + struct xsk_buff_pool *xsk_tx; }; union { struct page_pool *pp; struct device *dev; + struct xsk_buff_pool *xsk_rx; }; union { union virtchnl2_rx_desc *rx; diff --git a/drivers/net/ethernet/intel/idpf/idpf_xsk.c b/drivers/net/ethernet/intel/idpf/idpf_xsk.c index 3017680fedb3..6f1870c05948 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xsk.c +++ b/drivers/net/ethernet/intel/idpf/idpf_xsk.c @@ -6,6 +6,89 @@ #include "idpf.h" #include "idpf_xsk.h" +/** + * idpf_xsk_setup_queue - set xsk_pool pointer from netdev to the queue structure + * @q: queue to use + * + * Assigns pointer to xsk_pool field in queue struct if it is supported in + * netdev, NULL otherwise. 
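A note on the queue id that idpf_xsk_setup_queue() derives below: for Rx queues the pool is looked up by the queue's own index, while for XDP Tx queues the vport's xdp_txq_offset is subtracted so that the id matches the queue pair the user bound to. With illustrative numbers (not mandated by the patch), a vport with 8 regular Tx queues and xdp_txq_offset = 8 maps the XDP Tx queue with idx 11 to qid = 11 - 8 = 3, i.e. the same queue id userspace passed in sxdp_queue_id, which is then fed to xsk_get_pool_from_qid().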
+ */ +void idpf_xsk_setup_queue(struct idpf_queue *q, enum virtchnl2_queue_type t) +{ + struct idpf_vport_user_config_data *cfg_data; + struct idpf_vport *vport = q->vport; + struct xsk_buff_pool *pool; + bool is_rx = false; + int qid; + + __clear_bit(__IDPF_Q_XSK, q->flags); + + if (!idpf_xdp_is_prog_ena(q->vport)) + return; + + switch (t) { + case VIRTCHNL2_QUEUE_TYPE_RX: + is_rx = true; + qid = q->idx; + break; + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + is_rx = true; + qid = q->rxq_grp->splitq.rxq_sets[0]->rxq.idx; + break; + case VIRTCHNL2_QUEUE_TYPE_TX: + qid = q->idx - q->vport->xdp_txq_offset; + break; + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + qid = q->txq_grp->txqs[0]->idx - q->vport->xdp_txq_offset; + break; + default: + return; + } + + if (!is_rx && !test_bit(__IDPF_Q_XDP, q->flags)) + return; + + cfg_data = &vport->adapter->vport_config[vport->idx]->user_config; + + if (!test_bit(qid, cfg_data->af_xdp_zc_qps)) + return; + + pool = xsk_get_pool_from_qid(q->vport->netdev, qid); + + if (pool && is_rx && !xsk_buff_can_alloc(pool, 1)) + return; + + if (is_rx) + q->xsk_rx = pool; + else + q->xsk_tx = pool; + + __set_bit(__IDPF_Q_XSK, q->flags); +} + +void idpf_xsk_clear_queue(struct idpf_queue *q) +{ + struct device *dev; + + if (!__test_and_clear_bit(__IDPF_Q_XSK, q->flags)) + return; + + switch (q->q_type) { + case VIRTCHNL2_QUEUE_TYPE_RX: + case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER: + dev = q->xsk_rx->dev; + q->xsk_rx = NULL; + q->dev = dev; + break; + case VIRTCHNL2_QUEUE_TYPE_TX: + case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION: + dev = q->xsk_tx->dev; + q->xsk_tx = NULL; + q->dev = dev; + break; + } +} + /** * idpf_qp_cfg_qs - Configure all queues contained from a given array. * @vport: vport structure @@ -95,6 +178,23 @@ idpf_qp_clean_qs(struct idpf_vport *vport, struct idpf_queue **qs, int num_qs) } } +/** + * idpf_trigger_sw_intr - trigger a software interrupt + * @hw: pointer to the HW structure + * @q_vector: interrupt vector to trigger the software interrupt for + */ +static void +idpf_trigger_sw_intr(struct idpf_hw *hw, struct idpf_q_vector *q_vector) +{ + struct idpf_intr_reg *intr = &q_vector->intr_reg; + u32 val; + + val = intr->dyn_ctl_intena_m | intr->dyn_ctl_itridx_m | /* set no itr*/ + intr->dyn_ctl_swint_trig_m |intr->dyn_ctl_sw_itridx_ena_m; + + writel(val, intr->dyn_ctl); +} + /** * idpf_qvec_ena_irq - Enable IRQ for given queue vector * @q_vector: queue vector @@ -472,3 +572,221 @@ int idpf_xsk_pool_setup(struct idpf_vport *vport, struct xsk_buff_pool *pool, xsk_exit: return err; } + +/** + * idpf_xsk_clean_xdpq - Clean the XDP Tx queue and its buffer pool queues + * @xdpq: XDP_Tx queue + */ +void idpf_xsk_clean_xdpq(struct idpf_queue *xdpq) +{ + u32 ntc = xdpq->next_to_clean, ntu = xdpq->next_to_use; + struct device *dev = xdpq->xsk_tx->dev; + struct libie_sq_onstack_stats ss = { }; + struct xdp_frame_bulk bq; + u32 xsk_frames = 0; + + xdp_frame_bulk_init(&bq); + rcu_read_lock(); + + while (ntc != ntu) { + struct libie_tx_buffer *tx_buf = &xdpq->tx_buf[ntc]; + + if (tx_buf->type) + libie_xdp_complete_tx_buf(tx_buf, dev, false, &bq, + &xdpq->xdp_tx_active, &ss); + else + xsk_frames++; + + if (unlikely(++ntc >= xdpq->desc_count)) + ntc = 0; + } + + xdp_flush_frame_bulk(&bq); + rcu_read_unlock(); + + if (xsk_frames) + xsk_tx_completed(xdpq->xsk_tx, xsk_frames); +} + +/** + * idpf_clean_xdp_irq_zc - produce AF_XDP descriptors to CQ + * @complq: completion queue associated with zero-copy Tx queue + */ +static u32 idpf_clean_xdp_irq_zc(struct idpf_queue *complq) +{ + struct 
idpf_splitq_4b_tx_compl_desc *last_rs_desc; + struct device *dev = complq->xsk_tx->dev; + struct libie_sq_onstack_stats ss = { }; + int complq_budget = complq->desc_count; + u32 ntc = complq->next_to_clean; + struct idpf_queue *xdpq = NULL; + struct xdp_frame_bulk bq; + u32 done_frames = 0; + u32 xsk_frames = 0; + u32 tx_ntc, cnt; + bool gen_flag; + int head, i; + + last_rs_desc = &complq->comp_4b[ntc]; + gen_flag = test_bit(__IDPF_Q_GEN_CHK, complq->flags); + + do { + int ctype = idpf_parse_compl_desc(last_rs_desc, complq, + &xdpq, gen_flag); + + if (likely(ctype == IDPF_TXD_COMPLT_RS)) { + head = le16_to_cpu(last_rs_desc->q_head_compl_tag.q_head); + goto fetch_next_desc; + } + + switch (ctype) { + case IDPF_TXD_COMPLT_SW_MARKER: + idpf_tx_handle_sw_marker(xdpq); + break; + case -ENODATA: + goto clean_xdpq; + case -EINVAL: + goto fetch_next_desc; + default: + dev_err(&xdpq->vport->adapter->pdev->dev, + "Unsupported completion type for XSK\n"); + goto fetch_next_desc; + } + +fetch_next_desc: + last_rs_desc++; + ntc++; + if (unlikely(ntc == complq->desc_count)) { + ntc = 0; + last_rs_desc = &complq->comp_4b[0]; + gen_flag = !gen_flag; + change_bit(__IDPF_Q_GEN_CHK, complq->flags); + } + prefetch(last_rs_desc); + complq_budget--; + } while (likely(complq_budget)); + +clean_xdpq: + complq->next_to_clean = ntc; + + if (!xdpq) + return 0; + + cnt = xdpq->desc_count; + tx_ntc = xdpq->next_to_clean; + done_frames = head >= tx_ntc ? head - tx_ntc : + head + cnt - tx_ntc; + if (!done_frames) + return 0; + + if (likely(!complq->xdp_tx_active)) + goto xsk; + + xdp_frame_bulk_init(&bq); + + for (i = 0; i < done_frames; i++) { + struct libie_tx_buffer *tx_buf = &xdpq->tx_buf[tx_ntc]; + + if (tx_buf->type) + libie_xdp_complete_tx_buf(tx_buf, dev, true, &bq, + &xdpq->xdp_tx_active, + &ss); + else + xsk_frames++; + + if (unlikely(++tx_ntc == cnt)) + tx_ntc = 0; + } + + xdp_flush_frame_bulk(&bq); + +xsk: + xdpq->next_to_clean += done_frames; + if (xdpq->next_to_clean >= cnt) + xdpq->next_to_clean -= cnt; + + if (xsk_frames) + xsk_tx_completed(xdpq->xsk_tx, xsk_frames); + + return done_frames; +} + +/** + * idpf_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor + * @xdpq: XDP queue to produce the HW Tx descriptor on + * @desc: AF_XDP descriptor to pull the DMA address and length from + * @total_bytes: bytes accumulator that will be used for stats update + */ +static void idpf_xsk_xmit_pkt(struct libie_xdp_tx_desc desc, + const struct libie_xdp_tx_queue *sq) +{ + union idpf_tx_flex_desc *tx_desc = sq->desc_ring; + struct idpf_tx_splitq_params tx_params = { + .dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2, + .eop_cmd = IDPF_TX_DESC_CMD_EOP, + }; + + tx_desc = &tx_desc[*sq->next_to_use]; + tx_desc->q.buf_addr = cpu_to_le64(desc.addr); + + idpf_tx_splitq_build_desc(tx_desc, &tx_params, + tx_params.eop_cmd | tx_params.offload.td_cmd, + desc.len); +} + +static u32 idpf_xsk_xmit_prep(void *_xdpq, struct libie_xdp_tx_queue *sq) +{ + struct idpf_queue *xdpq = _xdpq; + + libie_xdp_sq_lock(&xdpq->xdp_lock); + + *sq = (struct libie_xdp_tx_queue){ + .dev = xdpq->dev, + .tx_buf = xdpq->tx_buf, + .desc_ring = xdpq->desc_ring, + .xdp_lock = &xdpq->xdp_lock, + .next_to_use = &xdpq->next_to_use, + .desc_count = xdpq->desc_count, + .xdp_tx_active = &xdpq->xdp_tx_active, + }; + + return IDPF_DESC_UNUSED(xdpq); +} + +/** + * idpf_xmit_xdpq_zc - take entries from XSK Tx queue and place them onto HW Tx queue + * @xdpq: XDP queue to produce the HW Tx descriptors on + * + * Returns true if there is no more work 
that needs to be done, false otherwise + */ +static bool idpf_xmit_xdpq_zc(struct idpf_queue *xdpq) +{ + u32 budget; + + budget = IDPF_DESC_UNUSED(xdpq); + budget = min_t(u32, budget, IDPF_QUEUE_QUARTER(xdpq)); + + return libie_xsk_xmit_do_bulk(xdpq, xdpq->xsk_tx, budget, + idpf_xsk_xmit_prep, idpf_xsk_xmit_pkt, + idpf_xdp_tx_finalize); +} + +/** + * idpf_xmit_zc - perform xmit from all XDP queues assigned to the completion queue + * @complq: Completion queue associated with one or more XDP queues + * + * Returns true if there is no more work that needs to be done, false otherwise + */ +bool idpf_xmit_zc(struct idpf_queue *complq) +{ + struct idpf_txq_group *xdpq_grp = complq->txq_grp; + bool result = true; + int i; + + idpf_clean_xdp_irq_zc(complq); + + for (i = 0; i < xdpq_grp->num_txq; i++) + result &= idpf_xmit_xdpq_zc(xdpq_grp->txqs[i]); + + return result; +} diff --git a/drivers/net/ethernet/intel/idpf/idpf_xsk.h b/drivers/net/ethernet/intel/idpf/idpf_xsk.h index 93705900f592..777d6ab7891d 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xsk.h +++ b/drivers/net/ethernet/intel/idpf/idpf_xsk.h @@ -6,9 +6,18 @@ #include +enum virtchnl2_queue_type; + +struct idpf_queue; struct idpf_vport; struct xsk_buff_pool; +void idpf_xsk_setup_queue(struct idpf_queue *q, enum virtchnl2_queue_type t); +void idpf_xsk_clear_queue(struct idpf_queue *q); + +void idpf_xsk_clean_xdpq(struct idpf_queue *xdpq); +bool idpf_xmit_zc(struct idpf_queue *complq); + int idpf_xsk_pool_setup(struct idpf_vport *vport, struct xsk_buff_pool *pool, u32 qid); From patchwork Sat Dec 23 02:55:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182900 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1463005dyi; Fri, 22 Dec 2023 19:10:03 -0800 (PST) X-Google-Smtp-Source: AGHT+IGF7U3j20iKXT5E0CEf4LXBEgngWbS6d3D6Dp8IIDZerfeNirYR7MQPXRxecL+/BVoEWYFn X-Received: by 2002:a05:620a:14af:b0:77f:b706:ad0a with SMTP id x15-20020a05620a14af00b0077fb706ad0amr2417905qkj.99.1703301003362; Fri, 22 Dec 2023 19:10:03 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703301003; cv=none; d=google.com; s=arc-20160816; b=OpmS7eEk3/5SZpgpGc1Smwh0fqf3Pc4d6rWK+VsIUAzo3T6WeCPv8rn6PucdNH+k2t G1Ne4kB1dr3DRuOCHfIRl3+mQYzpxzcy0KboKbUvtaI8h7B24GVMdXEEjBwcWKkjlqHK fiKzYiqjHWitERqSjTJipso5C2BUZ6k2WbCBAoj/sbFPC4lmWNpxDynaZpsc8dbeiCnT DCEYbbOZw6Ej3gww9FmdhrzGlOxyaGRFOqiPd2Ib4cOp4vvmTV8+ukf2WqT5LTiQBSC1 7NXGKKPnHjfjEUhIoH2SXDHsv8XdvmbDQe5IjsOVaPoLgDp52lG0MBfx66okOhycRkgZ qXlA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=NH6ZApq9Pll7aYiIbKPiowq8ts8Gh4MNXM3RCp3CNPc=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=w8Ne0bG0YcN4MYh//FK98bJuxB5LLJS7K7sTKRkWUkvA6vsbCtM0tdLG2e7fgnJ/eT kTF0lFGEURBH5FzJSd4jiBTXL+YCZbmM+3EOziuKgUjdR4qwcEhAkET1FRm1p0FClXF1 yl3OE9Bf/BDn2oXfIfRSqk7XwtcOqjJ9jpMRM+6ROiEPDD7pCqntmnOYK/CnkgPcaEf+ kPKiY1CzVMtbdXzoqKzHiDbVs/qnjV9pQGvoEu624+V6lMl7w/eM5WhSZP6f4HU/uxjD ZajV+UzRIv9SA74NyX2TCiFujcXQSS6mX9zdXk9VylZ7gKCkYo5PWOqkVGte+xZfUrfv Avzg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="V/FCAV7k"; spf=pass (google.com: domain of linux-kernel+bounces-10249-ouuuleilei=gmail.com@vger.kernel.org designates 
147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10249-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. [147.75.199.223]) by mx.google.com with ESMTPS id v18-20020a05620a441200b0077d95f62af3si6400611qkp.544.2023.12.22.19.10.03 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Dec 2023 19:10:03 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-10249-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) client-ip=147.75.199.223; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b="V/FCAV7k"; spf=pass (google.com: domain of linux-kernel+bounces-10249-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10249-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id 1DF2C1C20FB4 for ; Sat, 23 Dec 2023 03:10:03 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 787293F8F1; Sat, 23 Dec 2023 03:00:18 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="V/FCAV7k" X-Original-To: linux-kernel@vger.kernel.org Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 02BD63E483; Sat, 23 Dec 2023 03:00:13 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703300413; x=1734836413; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=BCgdqYGgp7i35gD2revPuKhwoQXv+qZoYNqZ2KhNIkI=; b=V/FCAV7kG3to8f2LAuTUJnTYYKe78KfSHxhdwouS298TKAGyyiyf4i2T RwhGiZl7nPc5ViPKZbLCGBaCpVseo2NlNeSTg8+/F96HYerHP53u1EAln tRTt/Bz2kYfLjmk7K1pwR4+oVEs08s/kfx6pmI5hM+Bicv+6efqmNh/SF p+/tBguLVf3OeOoNOMfUn+UT8FMup60L8Ilsixt9znOiMWmrlACYiSoPj QdBAAsB9hA8YMR6SuvGMZij1PzYNFmG7+/FBdPtp06aFVnvhIsEXJnzXY kUSQ5gOQLThKn8k3pt8ot2VVY+50gbVP4tpyUof+b+dLchP8dQmkirr6A A==; X-IronPort-AV: E=McAfee;i="6600,9927,10932"; a="386611107" X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="386611107" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 19:00:13 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537687" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 19:00:10 -0800 From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 33/34] idpf: implement Rx path for AF_XDP Date: Sat, 23 Dec 2023 03:55:53 +0100 Message-ID: <20231223025554.2316836-34-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040553117836413 X-GMAIL-MSGID: 1786040553117836413 From: Michal Kubiak Implement RX packet processing specific to AF_XDP ZC using the libie XSk infra. XDP_PASS case is implemented using the generic ZC-to-skb conversion function. Signed-off-by: Michal Kubiak Co-developed-by: Alexander Lobakin Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_txrx.c | 36 +- drivers/net/ethernet/intel/idpf/idpf_txrx.h | 6 + drivers/net/ethernet/intel/idpf/idpf_xdp.c | 44 ++- drivers/net/ethernet/intel/idpf/idpf_xsk.c | 347 ++++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_xsk.h | 4 + 5 files changed, 422 insertions(+), 15 deletions(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c index 5ba880c2bedc..0c78811d65e5 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c @@ -409,9 +409,13 @@ void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model) rxq->xdp.data = NULL; } - if (bufq || !idpf_is_queue_model_split(q_model)) + if (bufq && test_bit(__IDPF_Q_XSK, rxq->flags)) + idpf_xsk_buf_rel(rxq); + else if (bufq || !idpf_is_queue_model_split(q_model)) idpf_rx_buf_rel_all(rxq); + idpf_xsk_clear_queue(rxq); + rxq->next_to_alloc = 0; rxq->next_to_clean = 0; rxq->next_to_use = 0; @@ -674,6 +678,9 @@ int idpf_rx_bufs_init(struct idpf_queue *rxbufq, enum libie_rx_buf_type type) }; int ret; + if (test_bit(__IDPF_Q_XSK, rxbufq->flags)) + return idpf_check_alloc_rx_buffers_zc(rxbufq); + ret = libie_rx_page_pool_create(&bq, &rxbufq->q_vector->napi); if (ret) return ret; @@ -745,6 +752,7 @@ int idpf_rx_bufs_init_all(struct idpf_vport *vport) */ int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model) { + enum virtchnl2_queue_type type; struct device *dev = rxq->dev; if (bufq) @@ -769,6 +777,9 @@ int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model) rxq->next_to_use = 0; set_bit(__IDPF_Q_GEN_CHK, rxq->flags); + type = bufq ? 
VIRTCHNL2_QUEUE_TYPE_RX_BUFFER : VIRTCHNL2_QUEUE_TYPE_RX; + idpf_xsk_setup_queue(rxq, type); + return 0; } @@ -2788,8 +2799,8 @@ netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb, * @rx_desc: Receive descriptor * @parsed: parsed Rx packet type related fields */ -static void idpf_rx_hash(struct idpf_queue *rxq, struct sk_buff *skb, - struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, +static void idpf_rx_hash(const struct idpf_queue *rxq, struct sk_buff *skb, + const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, struct libie_rx_ptype_parsed parsed) { u32 hash; @@ -2865,7 +2876,7 @@ static void idpf_rx_csum(struct idpf_queue *rxq, struct sk_buff *skb, * @csum: structure to extract checksum fields * **/ -static void idpf_rx_splitq_extract_csum_bits(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, +static void idpf_rx_splitq_extract_csum_bits(const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, struct idpf_rx_csum_decoded *csum) { u8 qword0, qword1; @@ -2901,7 +2912,7 @@ static void idpf_rx_splitq_extract_csum_bits(struct virtchnl2_rx_flex_desc_adv_n * length and packet type. */ static int idpf_rx_rsc(struct idpf_queue *rxq, struct sk_buff *skb, - struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, + const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, struct libie_rx_ptype_parsed parsed) { u16 rsc_segments, rsc_seg_len; @@ -2970,9 +2981,8 @@ static int idpf_rx_rsc(struct idpf_queue *rxq, struct sk_buff *skb, * order to populate the hash, checksum, protocol, and * other fields within the skb. */ -static int idpf_rx_process_skb_fields(struct idpf_queue *rxq, - struct sk_buff *skb, - struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc) +int idpf_rx_process_skb_fields(struct idpf_queue *rxq, struct sk_buff *skb, + const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc) { struct idpf_rx_csum_decoded csum_bits = { }; struct libie_rx_ptype_parsed parsed; @@ -3851,7 +3861,9 @@ static bool idpf_rx_splitq_clean_all(struct idpf_q_vector *q_vec, int budget, struct idpf_queue *rxq = q_vec->rx[i]; int pkts_cleaned_per_q; - pkts_cleaned_per_q = idpf_rx_splitq_clean(rxq, budget_per_q); + pkts_cleaned_per_q = test_bit(__IDPF_Q_XSK, rxq->flags) ? 
+ idpf_clean_rx_irq_zc(rxq, budget_per_q) : + idpf_rx_splitq_clean(rxq, budget_per_q); /* if we clean as many as budgeted, we must not be done */ if (pkts_cleaned_per_q >= budget_per_q) clean_complete = false; @@ -3859,8 +3871,10 @@ static bool idpf_rx_splitq_clean_all(struct idpf_q_vector *q_vec, int budget, } *cleaned = pkts_cleaned; - for (i = 0; i < q_vec->num_bufq; i++) - idpf_rx_clean_refillq_all(q_vec->bufq[i]); + for (i = 0; i < q_vec->num_bufq; i++) { + if (!test_bit(__IDPF_Q_XSK, q_vec->bufq[i]->flags)) + idpf_rx_clean_refillq_all(q_vec->bufq[i]); + } return clean_complete; } diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h index d55ff6aaae2b..bfb867256ad2 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h @@ -572,6 +572,7 @@ struct idpf_queue { struct libie_rx_buffer *hdr_buf; struct idpf_rx_buf *buf; } rx_buf; + struct xdp_buff **xsk; }; union { struct page_pool *hdr_pp; @@ -951,6 +952,11 @@ netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb, struct net_device *netdev); bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rxq, u16 cleaned_count); + +struct virtchnl2_rx_flex_desc_adv_nic_3; + +int idpf_rx_process_skb_fields(struct idpf_queue *rxq, struct sk_buff *skb, + const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc); int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off); void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q); int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model); diff --git a/drivers/net/ethernet/intel/idpf/idpf_xdp.c b/drivers/net/ethernet/intel/idpf/idpf_xdp.c index c20c805583be..de5187192c58 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xdp.c +++ b/drivers/net/ethernet/intel/idpf/idpf_xdp.c @@ -48,7 +48,6 @@ static int idpf_rxq_for_each(const struct idpf_vport *vport, static int idpf_xdp_rxq_info_init(struct idpf_queue *rxq, void *arg) { const struct idpf_vport *vport = rxq->vport; - const struct page_pool *pp; int err; err = __xdp_rxq_info_reg(&rxq->xdp_rxq, vport->netdev, rxq->idx, @@ -57,13 +56,28 @@ static int idpf_xdp_rxq_info_init(struct idpf_queue *rxq, void *arg) if (err) return err; - pp = arg ? rxq->rxq_grp->splitq.bufq_sets[0].bufq.pp : rxq->pp; - xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, pp); + if (test_bit(__IDPF_Q_XSK, rxq->flags)) { + err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, + MEM_TYPE_XSK_BUFF_POOL, + NULL); + } else { + const struct page_pool *pp; + + pp = arg ? 
rxq->rxq_grp->splitq.bufq_sets[0].bufq.pp : rxq->pp; + xdp_rxq_info_attach_page_pool(&rxq->xdp_rxq, pp); + } + if (err) + goto unreg; rxq->xdpqs = &vport->txqs[vport->xdp_txq_offset]; rxq->num_xdp_txq = vport->num_xdp_txq; return 0; + +unreg: + xdp_rxq_info_unreg(&rxq->xdp_rxq); + + return err; } /** @@ -90,7 +104,9 @@ static int idpf_xdp_rxq_info_deinit(struct idpf_queue *rxq, void *arg) rxq->xdpqs = NULL; rxq->num_xdp_txq = 0; - xdp_rxq_info_detach_mem_model(&rxq->xdp_rxq); + if (!test_bit(__IDPF_Q_XSK, rxq->flags)) + xdp_rxq_info_detach_mem_model(&rxq->xdp_rxq); + xdp_rxq_info_unreg(&rxq->xdp_rxq); return 0; @@ -132,6 +148,23 @@ void idpf_copy_xdp_prog_to_qs(const struct idpf_vport *vport, idpf_rxq_for_each(vport, idpf_xdp_rxq_assign_prog, xdp_prog); } +static int idpf_rx_napi_schedule(struct idpf_queue *rxq, void *arg) +{ + if (test_bit(__IDPF_Q_XSK, rxq->flags)) + napi_schedule(&rxq->q_vector->napi); + + return 0; +} + +/** + * idpf_vport_rx_napi_schedule - Schedule napi on RX queues from vport + * @vport: vport to schedule napi on + */ +static void idpf_vport_rx_napi_schedule(const struct idpf_vport *vport) +{ + idpf_rxq_for_each(vport, idpf_rx_napi_schedule, NULL); +} + void idpf_vport_xdpq_get(const struct idpf_vport *vport) { if (!idpf_xdp_is_prog_ena(vport)) @@ -451,6 +484,9 @@ idpf_xdp_setup_prog(struct idpf_vport *vport, struct bpf_prog *prog, NL_SET_ERR_MSG_MOD(extack, "Could not re-open the vport after XDP setup\n"); return err; } + + if (prog) + idpf_vport_rx_napi_schedule(vport); } return 0; diff --git a/drivers/net/ethernet/intel/idpf/idpf_xsk.c b/drivers/net/ethernet/intel/idpf/idpf_xsk.c index 6f1870c05948..01231e828f6a 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xsk.c +++ b/drivers/net/ethernet/intel/idpf/idpf_xsk.c @@ -573,6 +573,171 @@ int idpf_xsk_pool_setup(struct idpf_vport *vport, struct xsk_buff_pool *pool, return err; } +/** + * idpf_init_rx_descs_zc - pick buffers from XSK buffer pool and use it + * @pool: XSK Buffer pool to pull the buffers from + * @xdp: SW ring of xdp_buff that will hold the buffers + * @buf_desc: Pointer to buffer descriptors that will be filled + * @first_buf_id: ID of the first buffer to be filled + * @count: The number of buffers to allocate + * + * This function allocates a number of Rx buffers from the fill queue + * or the internal recycle mechanism and places them on the buffer queue. + * + * Note that queue wrap should be handled by caller of this function. + * + * Returns the amount of allocated Rx descriptors + */ +static u32 idpf_init_rx_descs_zc(struct xsk_buff_pool *pool, + struct xdp_buff **xdp, + struct virtchnl2_splitq_rx_buf_desc *buf_desc, + u32 first_buf_id, + u32 count) +{ + dma_addr_t dma; + u32 num_buffs; + u32 i; + + num_buffs = xsk_buff_alloc_batch(pool, xdp, count); + for (i = 0; i < num_buffs; i++) { + dma = xsk_buff_xdp_get_dma(*xdp); + buf_desc->pkt_addr = cpu_to_le64(dma); + buf_desc->qword0.buf_id = cpu_to_le16(i + first_buf_id); + + buf_desc++; + xdp++; + } + + return num_buffs; +} + +static struct xdp_buff **idpf_get_xdp_buff(const struct idpf_queue *q, u32 idx) +{ + return &q->xsk[idx]; +} + +/** + * __idpf_alloc_rx_buffers_zc - allocate a number of Rx buffers + * @rxbufq: buffer queue + * @count: The number of buffers to allocate + * + * Place the @count of descriptors onto buffer queue. Handle the queue wrap + * for case where space from next_to_use up to the end of ring is less + * than @count. Finally do a tail bump. + * + * Returns true if all allocations were successful, false if any fail. 
+ */ +static bool __idpf_alloc_rx_buffers_zc(struct idpf_queue *rxbufq, u32 count) +{ + struct virtchnl2_splitq_rx_buf_desc *buf_desc; + u32 nb_buffs_extra = 0, nb_buffs = 0; + u32 ntu = rxbufq->next_to_use; + u32 total_count = count; + struct xdp_buff **xdp; + + buf_desc = &rxbufq->split_buf[ntu]; + xdp = idpf_get_xdp_buff(rxbufq, ntu); + + if (ntu + count >= rxbufq->desc_count) { + nb_buffs_extra = idpf_init_rx_descs_zc(rxbufq->xsk_rx, xdp, + buf_desc, + ntu, + rxbufq->desc_count - ntu); + if (nb_buffs_extra != rxbufq->desc_count - ntu) { + ntu += nb_buffs_extra; + goto exit; + } + buf_desc = &rxbufq->split_buf[0]; + xdp = idpf_get_xdp_buff(rxbufq, 0); + ntu = 0; + count -= nb_buffs_extra; + idpf_rx_buf_hw_update(rxbufq, 0); + + if (!count) + goto exit; + } + + nb_buffs = idpf_init_rx_descs_zc(rxbufq->xsk_rx, xdp, + buf_desc, ntu, count); + + ntu += nb_buffs; + if (ntu == rxbufq->desc_count) + ntu = 0; + +exit: + if (rxbufq->next_to_use != ntu) + idpf_rx_buf_hw_update(rxbufq, ntu); + + rxbufq->next_to_alloc = ntu; + + return total_count == (nb_buffs_extra + nb_buffs); +} + +/** + * idpf_alloc_rx_buffers_zc - allocate a number of Rx buffers + * @rxbufq: buffer queue + * @count: The number of buffers to allocate + * + * Wrapper for internal allocation routine; figure out how many tail + * bumps should take place based on the given threshold + * + * Returns true if all calls to internal alloc routine succeeded + */ +static bool idpf_alloc_rx_buffers_zc(struct idpf_queue *rxbufq, u32 count) +{ + u32 rx_thresh = IDPF_QUEUE_QUARTER(rxbufq); + u32 leftover, i, tail_bumps; + + tail_bumps = count / rx_thresh; + leftover = count - (tail_bumps * rx_thresh); + + for (i = 0; i < tail_bumps; i++) + if (!__idpf_alloc_rx_buffers_zc(rxbufq, rx_thresh)) + return false; + return __idpf_alloc_rx_buffers_zc(rxbufq, leftover); +} + +/** + * idpf_check_alloc_rx_buffers_zc - allocate a number of Rx buffers with logs + * @adapter: board private structure + * @rxbufq: buffer queue + * + * Wrapper for internal allocation routine; Prints out logs, if allocation + * did not go as expected + */ +int idpf_check_alloc_rx_buffers_zc(struct idpf_queue *rxbufq) +{ + struct net_device *netdev = rxbufq->vport->netdev; + struct xsk_buff_pool *pool = rxbufq->xsk_rx; + u32 count = IDPF_DESC_UNUSED(rxbufq); + + rxbufq->xsk = kcalloc(rxbufq->desc_count, sizeof(*rxbufq->xsk), + GFP_KERNEL); + if (!rxbufq->xsk) + return -ENOMEM; + + if (!xsk_buff_can_alloc(pool, count)) { + netdev_warn(netdev, "XSK buffer pool does not provide enough addresses to fill %d buffers on Rx queue %d\n", + count, rxbufq->idx); + netdev_warn(netdev, "Change Rx queue/fill queue size to avoid performance issues\n"); + } + + if (!idpf_alloc_rx_buffers_zc(rxbufq, count)) + netdev_warn(netdev, "Failed to allocate some buffers on XSK buffer pool enabled Rx queue %d\n", + rxbufq->idx); + + rxbufq->rx_buf_size = xsk_pool_get_rx_frame_size(pool); + + return 0; +} + +void idpf_xsk_buf_rel(struct idpf_queue *rxbufq) +{ + rxbufq->rx_buf_size = 0; + + kfree(rxbufq->xsk); +} + /** * idpf_xsk_clean_xdpq - Clean the XDP Tx queue and its buffer pool queues * @xdpq: XDP_Tx queue @@ -711,6 +876,30 @@ static u32 idpf_clean_xdp_irq_zc(struct idpf_queue *complq) return done_frames; } +static u32 idpf_xsk_tx_prep(void *_xdpq, struct libie_xdp_tx_queue *sq) +{ + struct idpf_queue *xdpq = _xdpq; + u32 free; + + libie_xdp_sq_lock(&xdpq->xdp_lock); + + free = IDPF_DESC_UNUSED(xdpq); + if (unlikely(free < IDPF_QUEUE_QUARTER(xdpq))) + free += 
idpf_clean_xdp_irq_zc(xdpq->txq_grp->complq); + + *sq = (struct libie_xdp_tx_queue){ + .dev = xdpq->dev, + .tx_buf = xdpq->tx_buf, + .desc_ring = xdpq->desc_ring, + .xdp_lock = &xdpq->xdp_lock, + .next_to_use = &xdpq->next_to_use, + .desc_count = xdpq->desc_count, + .xdp_tx_active = &xdpq->xdp_tx_active, + }; + + return free; +} + /** * idpf_xmit_pkt - produce a single HW Tx descriptor out of AF_XDP descriptor * @xdpq: XDP queue to produce the HW Tx descriptor on @@ -734,6 +923,24 @@ static void idpf_xsk_xmit_pkt(struct libie_xdp_tx_desc desc, desc.len); } +static bool idpf_xsk_tx_flush_bulk(struct libie_xdp_tx_bulk *bq) +{ + return libie_xsk_tx_flush_bulk(bq, idpf_xsk_tx_prep, + idpf_xsk_xmit_pkt); +} + +static bool idpf_xsk_run_prog(struct xdp_buff *xdp, + struct libie_xdp_tx_bulk *bq) +{ + return libie_xdp_run_prog(xdp, bq, idpf_xsk_tx_flush_bulk); +} + +static void idpf_xsk_finalize_rx(struct libie_xdp_tx_bulk *bq) +{ + if (bq->act_mask >= LIBIE_XDP_TX) + libie_xdp_finalize_rx(bq, idpf_xsk_tx_flush_bulk, + idpf_xdp_tx_finalize); +} static u32 idpf_xsk_xmit_prep(void *_xdpq, struct libie_xdp_tx_queue *sq) { struct idpf_queue *xdpq = _xdpq; @@ -753,6 +960,146 @@ static u32 idpf_xsk_xmit_prep(void *_xdpq, struct libie_xdp_tx_queue *sq) return IDPF_DESC_UNUSED(xdpq); } +static bool +idpf_xsk_rx_skb(struct xdp_buff *xdp, + const struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc, + struct idpf_queue *rxq) +{ + struct napi_struct *napi = &rxq->q_vector->napi; + struct sk_buff *skb; + + skb = xdp_build_skb_from_zc(napi, xdp); + if (unlikely(!skb)) + return false; + + if (unlikely(!idpf_rx_process_skb_fields(rxq, skb, rx_desc))) { + kfree_skb(skb); + return false; + } + + napi_gro_receive(napi, skb); + + return true; +} + +/** + * idpf_clean_rx_irq_zc - consumes packets from the hardware queue + * @rxq: AF_XDP Rx queue + * @budget: NAPI budget + * + * Returns number of processed packets on success, remaining budget on failure. 
+ */ +int idpf_clean_rx_irq_zc(struct idpf_queue *rxq, int budget) +{ + int total_rx_bytes = 0, total_rx_pkts = 0; + struct idpf_queue *rx_bufq = NULL; + u32 ntc = rxq->next_to_clean; + struct libie_xdp_tx_bulk bq; + bool failure = false; + u32 to_refill; + u16 buf_id; + + libie_xsk_tx_init_bulk(&bq, rxq->xdp_prog, rxq->xdp_rxq.dev, + rxq->xdpqs, rxq->num_xdp_txq); + + while (likely(total_rx_pkts < budget)) { + struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc; + union virtchnl2_rx_desc *desc; + unsigned int pkt_len = 0; + struct xdp_buff *xdp; + u32 bufq_id, xdp_act; + u16 gen_id; + u8 rxdid; + + desc = &rxq->rx[ntc]; + rx_desc = (struct virtchnl2_rx_flex_desc_adv_nic_3 *)desc; + + dma_rmb(); + + /* if the descriptor isn't done, no work yet to do */ + gen_id = le16_to_cpu(rx_desc->pktlen_gen_bufq_id); + gen_id = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M, gen_id); + + if (test_bit(__IDPF_Q_GEN_CHK, rxq->flags) != gen_id) + break; + + rxdid = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M, + rx_desc->rxdid_ucast); + if (rxdid != VIRTCHNL2_RXDID_2_FLEX_SPLITQ) { + IDPF_RX_BUMP_NTC(rxq, ntc); + u64_stats_update_begin(&rxq->stats_sync); + u64_stats_inc(&rxq->q_stats.rx.bad_descs); + u64_stats_update_end(&rxq->stats_sync); + continue; + } + + pkt_len = le16_to_cpu(rx_desc->pktlen_gen_bufq_id); + pkt_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M, + pkt_len); + + bufq_id = le16_to_cpu(rx_desc->pktlen_gen_bufq_id); + bufq_id = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M, + bufq_id); + + rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[bufq_id].bufq; + buf_id = le16_to_cpu(rx_desc->buf_id); + + xdp = *idpf_get_xdp_buff(rx_bufq, buf_id); + + if (unlikely(!pkt_len)) { + xsk_buff_free(xdp); + goto next; + } + + xsk_buff_set_size(xdp, pkt_len); + xsk_buff_dma_sync_for_cpu(xdp, rxq->xsk_rx); + + xdp_act = idpf_xsk_run_prog(xdp, &bq); + if ((xdp_act == LIBIE_XDP_PASS && + unlikely(!idpf_xsk_rx_skb(xdp, rx_desc, rxq))) || + unlikely(xdp_act == LIBIE_XDP_ABORTED)) { + failure = true; + break; + } + + total_rx_bytes += pkt_len; + total_rx_pkts++; + +next: + IDPF_RX_BUMP_NTC(rxq, ntc); + } + + rxq->next_to_clean = ntc; + idpf_xsk_finalize_rx(&bq); + + u64_stats_update_begin(&rxq->stats_sync); + u64_stats_add(&rxq->q_stats.rx.packets, total_rx_pkts); + u64_stats_add(&rxq->q_stats.rx.bytes, total_rx_bytes); + u64_stats_update_end(&rxq->stats_sync); + + if (!rx_bufq) + goto skip_refill; + + IDPF_RX_BUMP_NTC(rx_bufq, buf_id); + rx_bufq->next_to_clean = buf_id; + + to_refill = IDPF_DESC_UNUSED(rx_bufq); + if (to_refill > IDPF_QUEUE_QUARTER(rx_bufq)) + failure |= !idpf_alloc_rx_buffers_zc(rx_bufq, to_refill); + +skip_refill: + if (xsk_uses_need_wakeup(rxq->xsk_rx)) { + if (failure || rxq->next_to_clean == rxq->next_to_use) + xsk_set_rx_need_wakeup(rxq->xsk_rx); + else + xsk_clear_rx_need_wakeup(rxq->xsk_rx); + + return total_rx_pkts; + } + + return unlikely(failure) ? 
budget : total_rx_pkts; +} + /** * idpf_xmit_xdpq_zc - take entries from XSK Tx queue and place them onto HW Tx queue * @xdpq: XDP queue to produce the HW Tx descriptors on diff --git a/drivers/net/ethernet/intel/idpf/idpf_xsk.h b/drivers/net/ethernet/intel/idpf/idpf_xsk.h index 777d6ab7891d..51ddf2e36577 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xsk.h +++ b/drivers/net/ethernet/intel/idpf/idpf_xsk.h @@ -15,7 +15,11 @@ struct xsk_buff_pool; void idpf_xsk_setup_queue(struct idpf_queue *q, enum virtchnl2_queue_type t); void idpf_xsk_clear_queue(struct idpf_queue *q); +int idpf_check_alloc_rx_buffers_zc(struct idpf_queue *rxbufq); +void idpf_xsk_buf_rel(struct idpf_queue *rxbufq); void idpf_xsk_clean_xdpq(struct idpf_queue *xdpq); + +int idpf_clean_rx_irq_zc(struct idpf_queue *rxq, int budget); bool idpf_xmit_zc(struct idpf_queue *complq); int idpf_xsk_pool_setup(struct idpf_vport *vport, struct xsk_buff_pool *pool, From patchwork Sat Dec 23 02:55:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Lobakin X-Patchwork-Id: 182901 Return-Path: Delivered-To: ouuuleilei@gmail.com Received: by 2002:a05:7300:2483:b0:fb:cd0c:d3e with SMTP id q3csp1463097dyi; Fri, 22 Dec 2023 19:10:23 -0800 (PST) X-Google-Smtp-Source: AGHT+IE1OcjJC/eolTqwnHBWuHK5qoLldMOsYqKt0ElVnJnB8pqEpPHeDSRAOc0AK6Vmjy2Wjy7n X-Received: by 2002:a05:620a:29d1:b0:77f:746d:2f61 with SMTP id s17-20020a05620a29d100b0077f746d2f61mr5546396qkp.1.1703301023743; Fri, 22 Dec 2023 19:10:23 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1703301023; cv=none; d=google.com; s=arc-20160816; b=HN68KRSXEfhMmmFRWYOU5zK6Nxv4gkZqqUs0bOtlDvjSDhhmZh+Ybdnco+psEb+ihj re7x0cLfATfIuTGoRJjRFOqI6HQejj4qMumOz+TdO8ZBF5JisebATQBXYguKmz3yfAkv lfshAcmXRyMeXUSro3G/LczTYIcvu4zk6tRZAFejBYJytF5JmrOzA4H9X5Hvd/025xXs W52U48uJTXR68vgOrlw+BQmjAL6S0vDDYbzoZYxubNazMdOxa1Rj2P0yZKQ1clQoAdlo Na1mF6bs3GvD8XlhZjmFpYtZMLrFS9t+k5pcPr9+3wwhgLbvAHHvWe6LG3mOap6M8x0M feHQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=content-transfer-encoding:mime-version:list-unsubscribe :list-subscribe:list-id:precedence:references:in-reply-to:message-id :date:subject:cc:to:from:dkim-signature; bh=nIcsvuKxYT2g3KK/5eN1W4MzaarxFmqakEhBdm5/1vA=; fh=J3H3jkZRTdH/CD0R7uXO8r43FYFiJ1FqB2ouoqeBxmQ=; b=mbw/ESpOwz5dWQJVIbclFGARHiusTrEkL3YiUI7xjyZgGx5OFRCrnmZVFkCSw3GFPf T2TvclZHsvEpPxyAwAaCSEYLY0AjO1jDmt1fsAYSLNVzauVYhWOXnpm4wg7S4pDUaEHh QbJTRVjw7sRlAVlJtpSYPw9GOQiIH/ea7iPY75WsASuN9P7ba/ZsKB76eViVFJRSjNN7 SbWEJBSsg7zn53wWegQKXAzGCSW+R4lsBi5Gz4WfFbS4JJlq8rXi1ULX472VQmdyOwuQ 0lqxz8LyQAp5ugnSk/Pz4LCGXZkuyjWrknRs8nccJ2KOp5r+yYig1ubQVtjDWwj7ndaK oWLw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=JJD26Wo8; spf=pass (google.com: domain of linux-kernel+bounces-10250-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10250-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from ny.mirrors.kernel.org (ny.mirrors.kernel.org. 
[147.75.199.223]) by mx.google.com with ESMTPS id k19-20020a05620a07f300b0078110391418si5655263qkk.776.2023.12.22.19.10.23 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 22 Dec 2023 19:10:23 -0800 (PST) Received-SPF: pass (google.com: domain of linux-kernel+bounces-10250-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) client-ip=147.75.199.223; Authentication-Results: mx.google.com; dkim=pass header.i=@intel.com header.s=Intel header.b=JJD26Wo8; spf=pass (google.com: domain of linux-kernel+bounces-10250-ouuuleilei=gmail.com@vger.kernel.org designates 147.75.199.223 as permitted sender) smtp.mailfrom="linux-kernel+bounces-10250-ouuuleilei=gmail.com@vger.kernel.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=intel.com Received: from smtp.subspace.kernel.org (wormhole.subspace.kernel.org [52.25.139.140]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ny.mirrors.kernel.org (Postfix) with ESMTPS id 868B51C212D3 for ; Sat, 23 Dec 2023 03:10:23 +0000 (UTC) Received: from localhost.localdomain (localhost.localdomain [127.0.0.1]) by smtp.subspace.kernel.org (Postfix) with ESMTP id 69F813FB3C; Sat, 23 Dec 2023 03:00:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=intel.com header.i=@intel.com header.b="JJD26Wo8" X-Original-To: linux-kernel@vger.kernel.org Received: from mgamail.intel.com (mgamail.intel.com [134.134.136.20]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 73E893F8D3; Sat, 23 Dec 2023 03:00:17 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=none dis=none) header.from=intel.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=intel.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1703300417; x=1734836417; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=B3hwJC99k6mjwP2SBzKX/gmp6IEmR5LqHfeVCYUgddI=; b=JJD26Wo8zJLaN5p3PfSjSGnO77Lulozu/LpPpGUpcA+L+sH6yd3bBgjE Ghg3cWD0qZ0YP4F2Kkd9aKC468RHQoWXuI09v7J2c1yTIrAW9OkVI9iGU bRTP3JwKP3J9iq3PH4YSXutjMm9IX+fPTi8yCDUHi8cV2T+kRGNeMc3GR +3MuNUJa/yO9IZFzWyogXOd7d44G/RmOK4eUba/9VogpFZ5TfGPpy5BFM 4EWFYMdxMLxuYB31pjkhcRuZRGoodVUNuA16pgg+zh/RgPPUSRhkOGmoI Tj5fCA69m4oqUu+iipcgXk+Dn+j5Tiq20tZqeb+dw3YT64sf+BqlFmrhe A==; X-IronPort-AV: E=McAfee;i="6600,9927,10932"; a="386611122" X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="386611122" Received: from orviesa001.jf.intel.com ([10.64.159.141]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 22 Dec 2023 19:00:17 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.04,298,1695711600"; d="scan'208";a="25537692" Received: from newjersey.igk.intel.com ([10.102.20.203]) by orviesa001.jf.intel.com with ESMTP; 22 Dec 2023 19:00:13 -0800 From: Alexander Lobakin To: "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni Cc: Alexander Lobakin , Maciej Fijalkowski , Michal Kubiak , Larysa Zaremba , Alexei Starovoitov , Daniel Borkmann , Willem de Bruijn , intel-wired-lan@lists.osuosl.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org Subject: [PATCH RFC net-next 34/34] idpf: enable XSk features and ndo_xsk_wakeup Date: Sat, 23 Dec 2023 03:55:54 +0100 Message-ID: <20231223025554.2316836-35-aleksander.lobakin@intel.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20231223025554.2316836-1-aleksander.lobakin@intel.com> References: <20231223025554.2316836-1-aleksander.lobakin@intel.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-getmail-retrieved-from-mailbox: INBOX X-GMAIL-THRID: 1786040574039570060 X-GMAIL-MSGID: 1786040574039570060 From: Michal Kubiak Now that AF_XDP functionality is fully implemented, advertise XSk XDP feature and add .ndo_xsk_wakeup() callback to be able to use it with this driver. Signed-off-by: Michal Kubiak Signed-off-by: Alexander Lobakin --- drivers/net/ethernet/intel/idpf/idpf_lib.c | 5 ++- drivers/net/ethernet/intel/idpf/idpf_xsk.c | 42 ++++++++++++++++++++++ drivers/net/ethernet/intel/idpf/idpf_xsk.h | 2 ++ 3 files changed, 48 insertions(+), 1 deletion(-) diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c index 7c3d45f84e1b..af4f708b82f3 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_lib.c +++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c @@ -3,6 +3,7 @@ #include "idpf.h" #include "idpf_xdp.h" +#include "idpf_xsk.h" static const struct net_device_ops idpf_netdev_ops_splitq; static const struct net_device_ops idpf_netdev_ops_singleq; @@ -844,7 +845,8 @@ static int idpf_cfg_netdev(struct idpf_vport *vport) if (idpf_is_queue_model_split(vport->rxq_model)) xdp_set_features_flag(netdev, NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT | - NETDEV_XDP_ACT_RX_SG); + NETDEV_XDP_ACT_RX_SG | + NETDEV_XDP_ACT_XSK_ZEROCOPY); idpf_set_ethtool_ops(netdev); SET_NETDEV_DEV(netdev, &adapter->pdev->dev); @@ -2452,6 +2454,7 @@ static const struct net_device_ops idpf_netdev_ops_splitq = { .ndo_tx_timeout = idpf_tx_timeout, .ndo_bpf = idpf_xdp, .ndo_xdp_xmit = idpf_xdp_xmit, + .ndo_xsk_wakeup = idpf_xsk_wakeup, }; static const struct net_device_ops idpf_netdev_ops_singleq = { diff --git a/drivers/net/ethernet/intel/idpf/idpf_xsk.c b/drivers/net/ethernet/intel/idpf/idpf_xsk.c index 01231e828f6a..aff37c6a5adb 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xsk.c +++ b/drivers/net/ethernet/intel/idpf/idpf_xsk.c @@ -1137,3 +1137,45 @@ bool idpf_xmit_zc(struct idpf_queue *complq) return result; } + +/** + * idpf_xsk_wakeup - Implements ndo_xsk_wakeup + * @netdev: net_device + * @queue_id: queue to wake up + * @flags: ignored in our case, since we have Rx and Tx in the same NAPI + * + * Returns negative on error, zero otherwise. 
+ */ +int idpf_xsk_wakeup(struct net_device *netdev, u32 qid, u32 flags) +{ + struct idpf_netdev_priv *np = netdev_priv(netdev); + struct idpf_vport *vport = np->vport; + struct idpf_q_vector *q_vector; + struct idpf_queue *q; + int idx; + + if (idpf_vport_ctrl_is_locked(netdev)) + return -EBUSY; + + if (unlikely(!vport->link_up)) + return -ENETDOWN; + + if (unlikely(!idpf_xdp_is_prog_ena(vport))) + return -ENXIO; + + idx = qid + vport->xdp_txq_offset; + + if (unlikely(idx >= vport->num_txq)) + return -ENXIO; + + if (unlikely(!test_bit(__IDPF_Q_XSK, vport->txqs[idx]->flags))) + return -ENXIO; + + q = vport->txqs[idx]; + q_vector = q->txq_grp->complq->q_vector; + + if (!napi_if_scheduled_mark_missed(&q_vector->napi)) + idpf_trigger_sw_intr(&vport->adapter->hw, q_vector); + + return 0; +} diff --git a/drivers/net/ethernet/intel/idpf/idpf_xsk.h b/drivers/net/ethernet/intel/idpf/idpf_xsk.h index 51ddf2e36577..446ca971f37e 100644 --- a/drivers/net/ethernet/intel/idpf/idpf_xsk.h +++ b/drivers/net/ethernet/intel/idpf/idpf_xsk.h @@ -10,6 +10,7 @@ enum virtchnl2_queue_type; struct idpf_queue; struct idpf_vport; +struct net_device; struct xsk_buff_pool; void idpf_xsk_setup_queue(struct idpf_queue *q, enum virtchnl2_queue_type t); @@ -24,5 +25,6 @@ bool idpf_xmit_zc(struct idpf_queue *complq); int idpf_xsk_pool_setup(struct idpf_vport *vport, struct xsk_buff_pool *pool, u32 qid); +int idpf_xsk_wakeup(struct net_device *netdev, u32 qid, u32 flags); #endif /* !_IDPF_XSK_H_ */
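For readers new to the split-queue refill scheme implemented by idpf_alloc_rx_buffers_zc() above, here is a standalone sketch of the batching logic in plain C: the requested refill count is split into quarter-ring "tail bumps" plus one leftover bump, and each bump handles the wrap back to slot 0 before advancing next_to_use. All names below (zc_ring, zc_fill, RING_SIZE and so on) are invented for illustration; this is not driver code and does not touch the real XSK pool API.

#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE	512
#define RING_QUARTER	(RING_SIZE / 4)

struct zc_ring {
	unsigned int next_to_use;
	unsigned long long desc[RING_SIZE];	/* stands in for the buf_desc ring */
};

/* Pretend to pull 'count' addresses from a pool; always succeeds here. */
static unsigned int zc_fill(struct zc_ring *r, unsigned int first, unsigned int count)
{
	for (unsigned int i = 0; i < count; i++)
		r->desc[first + i] = 0xdeadbeefULL + first + i;

	return count;
}

/* One __idpf_alloc_rx_buffers_zc()-style pass: fill, wrap, then bump the tail. */
static bool zc_fill_one_bump(struct zc_ring *r, unsigned int count)
{
	unsigned int ntu = r->next_to_use, filled = 0, ask = count;

	if (ntu + count >= RING_SIZE) {
		filled = zc_fill(r, ntu, RING_SIZE - ntu);
		count -= filled;
		ntu = 0;				/* wrapped: restart at slot 0 */
	}

	filled += zc_fill(r, ntu, count);
	r->next_to_use = (ntu + count) % RING_SIZE;	/* the "tail bump" */

	return filled == ask;
}

/* Quarter-threshold wrapper, shaped like idpf_alloc_rx_buffers_zc(). */
static bool zc_fill_batched(struct zc_ring *r, unsigned int count)
{
	unsigned int bumps = count / RING_QUARTER;
	unsigned int leftover = count % RING_QUARTER;

	for (unsigned int i = 0; i < bumps; i++)
		if (!zc_fill_one_bump(r, RING_QUARTER))
			return false;

	return zc_fill_one_bump(r, leftover);
}

int main(void)
{
	struct zc_ring ring = { .next_to_use = 500 };

	zc_fill_batched(&ring, 300);
	printf("next_to_use after refill: %u\n", ring.next_to_use);

	return 0;
}

Bumping the tail once per quarter of the ring rather than once per buffer is the same trade-off the driver makes with IDPF_QUEUE_QUARTER(): fewer tail (doorbell) writes at the cost of hardware seeing new buffers slightly later.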
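The Rx clean loop in idpf_clean_rx_irq_zc() gates every descriptor on the GEN bit rather than on a head pointer: software owns a completion only while the GEN value written by hardware matches the software-tracked __IDPF_Q_GEN_CHK flag, and the expected value flips each time next_to_clean wraps. Below is a minimal model of that handshake; sw_ring, poll_ring and DESC_GEN_BIT are invented for the example and are not virtchnl or driver definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DESC_GEN_BIT	0x1u

struct sw_ring {
	unsigned int next_to_clean;
	unsigned int desc_count;
	bool gen_chk;		/* expected GEN value, akin to __IDPF_Q_GEN_CHK */
	const uint16_t *desc;	/* low bit models the GEN bit written by HW */
};

/* Consume completions until the GEN bit shows the next slot is still HW-owned. */
static unsigned int poll_ring(struct sw_ring *r, unsigned int budget)
{
	unsigned int done = 0;

	while (done < budget) {
		bool gen = r->desc[r->next_to_clean] & DESC_GEN_BIT;

		if (gen != r->gen_chk)
			break;			/* descriptor not written back yet */

		done++;
		if (++r->next_to_clean == r->desc_count) {
			r->next_to_clean = 0;
			r->gen_chk = !r->gen_chk;	/* flip expectation on wrap */
		}
	}

	return done;
}

int main(void)
{
	/* "Hardware" has completed the first three of four descriptors. */
	const uint16_t descs[4] = { 1, 1, 1, 0 };
	struct sw_ring ring = {
		.desc_count = 4, .gen_chk = true, .desc = descs,
	};

	printf("cleaned %u descriptors\n", poll_ring(&ring, 8));

	return 0;
}

Consuming the full budget without hitting a stale GEN bit is the condition idpf_rx_splitq_clean_all() uses to report that polling is not yet complete.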
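On the wakeup side, the ndo_xsk_wakeup() handler added in the last patch does little more than translate the AF_XDP queue id into the driver's XDP Tx queue range (qid + xdp_txq_offset) and refuse the call unless the link is up, an XDP program is attached and the target queue is in XSK mode. A rough standalone model of those checks follows; the xsk_vport and wakeup_check() names are invented for the example, and only the checks themselves mirror the patch (the real handler then marks NAPI missed or fires a SW interrupt).

#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

struct xsk_vport {
	bool link_up;
	bool xdp_prog_ena;
	unsigned int num_txq;
	unsigned int xdp_txq_offset;
	const bool *txq_is_xsk;		/* per-queue XSK flag */
};

static int wakeup_check(const struct xsk_vport *v, unsigned int qid)
{
	unsigned int idx = qid + v->xdp_txq_offset;

	if (!v->link_up)
		return -ENETDOWN;
	if (!v->xdp_prog_ena)
		return -ENXIO;
	if (idx >= v->num_txq || !v->txq_is_xsk[idx])
		return -ENXIO;

	return 0;	/* the real handler would now kick the queue's NAPI */
}

int main(void)
{
	const bool xsk_flags[8] = { false, false, false, false, true, true, true, true };
	const struct xsk_vport vport = {
		.link_up = true, .xdp_prog_ena = true,
		.num_txq = 8, .xdp_txq_offset = 4, .txq_is_xsk = xsk_flags,
	};

	printf("qid 2 -> %d, qid 6 -> %d\n",
	       wakeup_check(&vport, 2), wakeup_check(&vport, 6));

	return 0;
}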